Using the memcache module to your advantage


#1

Ok I am trying to figure out how to use the memcache module to my
advantage. I run multiple forums that can put their “datastore” into
memcached already.

Would just adding the memcached server info help?

Confused on how to make this work.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,738,738#msg-738


#2

SSSloppy,

In short, using the memcached backend with NginX is just like publishing
to flat files, except your storage engine is memcached rather than the
local filesystem.

Going deeper, your forum software using memcached helps it cache
specific query results so it need not go to the database. NginX’s
memcached module serves content directly from memcached, completely
bypassing your application and never touching the database. This means
that to utilize it with your forum, you will need to publish, or store,
the output for each URI in memcached. In Python this can be accomplished
very easily with a simple WSGI middleware; in PHP you will need to use
the output buffering functions (ob_*); other languages have their own
equivalents.
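To make the WSGI route concrete, here is a minimal sketch of such a
middleware. It assumes a memcached-style client with a
set(key, value, expire) method; the client, class name, and key scheme
are stand-ins for illustration, not from this thread:

```python
# Sketch: WSGI middleware that publishes page output into memcached,
# keyed by the request URI, so nginx's memcached module can serve it
# directly. The cache client is injected; any object with a
# set(key, value, expire) method works (e.g. a pymemcache client in
# production, or a dict-backed stub in tests).

class MemcachedPublisher:
    def __init__(self, app, cache, expire=60):
        self.app = app          # the wrapped WSGI application
        self.cache = cache      # memcached-like client (assumption)
        self.expire = expire    # TTL in seconds

    def __call__(self, environ, start_response):
        chunks = []
        status_holder = {}

        def capturing_start_response(status, headers, exc_info=None):
            status_holder["status"] = status
            return start_response(status, headers, exc_info)

        # Pass the response through unchanged while capturing a copy.
        for chunk in self.app(environ, capturing_start_response):
            chunks.append(chunk)
            yield chunk

        body = b"".join(chunks)
        # Only publish successful GET responses; nginx keys on the URI.
        if environ.get("REQUEST_METHOD") == "GET" and \
                status_holder.get("status", "").startswith("200"):
            key = environ.get("PATH_INFO", "/")
            self.cache.set(key, body, self.expire)
```

With this in place, the first request for a page goes through the
application (and gets published); subsequent requests for the same URI
can be answered by nginx straight out of memcached.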

Now for the more in-depth how-to…

The NginX Memcached Module (documented at
http://wiki.nginx.org/NginxHttpMemcachedModule, and at
http://sysoev.ru/nginx/docs/http/ngx_http_memcached_module.html in the
original Russian) simply turns memcached into a backend for NginX to
reverse proxy to, much like another HTTP server, FastCGI, etc.

What this means is that NginX will attempt to serve resources from
memcached keys. The example from the wiki is pretty succinct and
very complete:


server {
    location / {
        set $memcached_key $uri;
        memcached_pass name:11211;
        default_type text/html;
        error_page 404 = /fallback;
    }

    location = /fallback {
        proxy_pass http://backend;
    }
}

Here we see that the memcached key is set to the URI requested. Next is
the memcached_pass directive, which is like the rest of the *_pass
directives (proxy_pass, fastcgi_pass, etc.) in that it tells NginX which
backend to go to. The rest is just setting the default type (in my
experience, the MIME type is not checked from memcached) and setting a
fallback location to serve from in case the content is not in memcached
yet.

What this setup then assumes is that the backend (in your case, the
forum software) will publish the page output into memcached, in the key
that is the URI. As I previously mentioned, the best way to go about
this, in my experience, is with some kind of middleware or output
buffering. The problem with the forum is that it might need to be a
little more complicated, depending on how you want to do it.

If you want to just cache page output for a minute or two, it should be
as simple as pushing into memcached with a one-minute expiration time.
Nothing else need be done, except not serving POST requests from
memcached and keeping your very dynamic pages from being cached under
the same URI (so, for example, /forum/post would not be cached but
/forum/main-category/this-is-a-thread would).
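That filtering rule can be sketched in a few lines of Python. The URI
prefixes and the TTL here are made-up examples, not anything from this
thread:

```python
# Sketch: decide whether a request's output should be published into
# memcached. Skip POSTs and a blacklist of very dynamic URIs;
# everything else is safe to cache for a short TTL.

UNCACHEABLE_PREFIXES = ("/forum/post", "/forum/login", "/forum/admin")
CACHE_TTL = 60  # seconds -- "a minute or two"

def should_cache(method: str, uri: str) -> bool:
    if method != "GET":
        return False
    return not any(uri.startswith(p) for p in UNCACHEABLE_PREFIXES)
```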

However, if you want to “cache forever” it gets a lot more complicated.
You’ll need to do the above, without the one-minute limit, and in
addition you’ll need to include code so that every action that changes
something on a page causes a republication. This can obviously get
pretty hairy if your application was not designed with such a thing in
mind in the first place.
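The “republish on change” idea can be sketched as follows, assuming you
can enumerate the URIs a given action affects and your cache client has
a delete(key) method; the function and parameter names are invented for
illustration:

```python
# Sketch: when content changes, drop every memcached key whose page the
# change affects. With $memcached_key set to the URI, the keys are just
# the URIs themselves. The next request for each URI misses in
# memcached, falls back to the application, and gets republished.

def invalidate(cache, affected_uris):
    for uri in affected_uris:
        cache.delete(uri)

def on_new_reply(cache, thread_uri, listing_uris):
    # A new reply changes the thread page itself and any listing pages
    # that show the thread (last-post column, reply count, etc.).
    invalidate(cache, [thread_uri, *listing_uris])
```

Deleting and lazily re-rendering is simpler than re-rendering eagerly
on every change, at the cost of one slow request per invalidated page.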

At any rate, good luck!

  • Merlin

#3

We love the memcached module with Nginx. One thing we do is rewrite all
of our initial requests (pretty URLs that don’t have args) into a
structure that does have args.

Example:

www.example.com/forum/posts/235?sort=desc rewrites to
www.example.com/index.php?framework_uri=forum/posts/235&sort=desc

It seems that when Nginx initially gets this request, the $args variable
is empty, so it never initializes the variable in the memcached module?
Hard to tell, just guessing here.

So when we do something like this:

set $memcached_key $uri$is_args$args;
memcached_pass localhost:11211;
error_page 404 /fallback;

it looks for the key “/index.php?”, with $args not on the string. This
seems to be a problem only with the memcached module, however, because
if you comment out the memcached lines and do something like
add_header "test_header" $args;, the header will contain the proper
args from the rewrite.

We started a post a week or so ago and never heard from anyone; just
wondering if anyone has experienced this or knows a fix. Igor? :)

Thank you,
Josh


#4

Hi, I know this thread is old, but I had the same problem and I found
this thread on Google. To solve it, simply use “set $memcached_key
$request_uri;” instead of “set $memcached_key $uri;”!

It works well for me ;)
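For reference, here is the key line in context; the port and fallback
location are illustrative, following the earlier example in this thread:

```
location / {
    # $request_uri includes the query string, so the key matches what
    # the application published; $uri alone drops the args after a
    # rewrite.
    set $memcached_key $request_uri;
    memcached_pass localhost:11211;
    error_page 404 = /fallback;
}
```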

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,738,41474#msg-41474