I want to use Nginx as a caching reverse proxy to a Django website that
runs behind Nginx over uWSGI.
Is it possible for Nginx to use multiple memcached servers to cache the
dynamic pages generated by Django? If so can you share some example
config? All the examples I’ve seen use only one local memcached
server.
And if it’s possible to use multiple memcached servers, who will be
doing the storing and invalidation of cached pages - Nginx or Django?
And if it’s Django, how do I make sure that, given the same key, both
Django and Nginx will hash to the same memcached server?
I’m new to this. Any help would be greatly appreciated. Thanks!
> Is it possible for Nginx to use multiple memcached servers to cache the
> dynamic pages generated by Django? If so can you share some example
> config? All the examples I’ve seen use only one local memcached
> server.
FWIW, I use moxi (a memcached proxy that makes a whole cluster look
like a single local server) in front of nginx and whatever app I’m
using.
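To sketch the idea (my own illustration, not from this thread, and the
flag syntax is from memory, so double-check the moxi docs): moxi
listens like one local memcached and fans requests out to the cluster,
here reusing the three server addresses from the config further down:

moxi -z 11211=10.32.110.5:11211,10.32.110.16:11211,10.32.110.27:11211

Then nginx and Django both just point at 127.0.0.1:11211 and moxi does
the hashing in one place, so the two sides can never disagree.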
> If so can you share some example
> config? All the examples I’ve seen use only one local memcached
> server.
See above.
> And if it’s possible to use multiple memcached servers, who will be
> doing the storing and invalidation of cached pages - Nginx or Django?
If ngx_srcache is used, nginx does the caching entirely on its own:
it both stores pages into memcached and fetches them on later requests
(invalidation is then just the expiration time, unless your app
deletes keys itself). See the full config below.
> And if it’s Django, how do I make sure that, given the same key, both
> Django and Nginx will hash to the same memcached server?
If you want your Python app to access memcached as well, the
recommended approach is to pass the nginx variable holding the hashed
memcached upstream name back to your fastcgi app.
When a cache hit happens, the request does not even reach your Python
app, so even a simple machine will easily do 10k ~ 20k req/sec on
hits.
> the recommended approach is to pass the nginx variable holding the
> hashed memcached upstream name back to your fastcgi app.
How do I pass the nginx variable holding the hashed memcached upstream?
Which variable is that?
Here’s a complete nginx.conf example for this:
http {
    # three memcached upstreams:
    upstream A {
        server 10.32.110.5:11211;
    }

    upstream B {
        server 10.32.110.16:11211;
    }

    upstream C {
        server 10.32.110.27:11211;
    }

    # group them into a named upstream list for hashing:
    upstream_list my_cluster A B C;

    server {
        location = /memc {
            internal;

            set $memc_key $query_string;
            set $memc_exptime 3600;  # cache one hour

            # hash $memc_key to an upstream backend in the
            # my_cluster upstream list, and set $backend:
            set_hashed_upstream $backend my_cluster $memc_key;

            # pass $backend to memc_pass:
            memc_pass $backend;
        }

        location /myapp {
            # you may want to construct a dynamic cache key here…
            set $key 'my page key';

            set_hashed_upstream $backend my_cluster $key;

            srcache_fetch GET /memc $key;
            srcache_store PUT /memc $key;

            fastcgi_param MEMC_BACKEND $backend;
            fastcgi_pass unix:/path/to/your/python/app.sock;
        }
    }
}
Another way is to use Lua to pick up a memcached backend from your
memcached cluster, such that you can define your own hashing
algorithm, including your custom consistent hashing ones.
Here we use fastcgi_param to pass the nginx variable $backend to your
fastcgi app, so that the app can read it from its environment as
MEMC_BACKEND. Another approach is to append the $backend value to the
query string of the forwarded HTTP request.
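To make that concrete, here is a minimal sketch of the Django side (my
own illustration, not code from this thread). It assumes the upstream
names and addresses from the nginx.conf above, that the page was
stored under the same $key, and that the MEMC_BACKEND fastcgi param
surfaces in Django as request.META['MEMC_BACKEND']; the dict and
function names are hypothetical, and it uses the python-memcached
client:

import memcache  # python-memcached; any memcached client would do

# must stay in sync with the upstream blocks in nginx.conf
MEMC_UPSTREAMS = {
    'A': '10.32.110.5:11211',
    'B': '10.32.110.16:11211',
    'C': '10.32.110.27:11211',
}

def invalidate_cached_page(request, key):
    """Delete a cached page from the very memcached server
    that nginx hashed this key to."""
    backend = request.META.get('MEMC_BACKEND')  # set via fastcgi_param
    addr = MEMC_UPSTREAMS.get(backend)
    if addr is not None:
        memcache.Client([addr]).delete(key)

Note that Django never re-hashes the key itself here; it just follows
the upstream name nginx handed over, so the two sides cannot disagree
about which server holds the page.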
> Another way is to use Lua to pick up a memcached backend from your
> memcached cluster, such that you can define your own hashing
> algorithm, including your custom consistent hashing ones.
Here’s an example for defining custom backend routing rules in Lua (by
means of our ngx_lua module) from my slides:
Well, you can also use the arrow keys or pageup/pagedown keys to
switch pages in my (ajax-based) slides.
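For a rough idea of the shape this takes, here is a sketch of the
/memc location using ngx_lua’s set_by_lua instead of
set_hashed_upstream (my own illustration, not the code from the
slides; the byte-sum hash is a naive stand-in for a real
consistent-hashing algorithm):

location = /memc {
    internal;

    set $memc_key $query_string;
    set $memc_exptime 3600;

    # choose the upstream name in Lua; replace the byte-sum
    # modulo below with your own (consistent) hashing
    set_by_lua $backend '
        local upstreams = { "A", "B", "C" }
        local key = ngx.var.memc_key
        local sum = 0
        for i = 1, #key do
            sum = sum + string.byte(key, i)
        end
        return upstreams[sum % #upstreams + 1]
    ';

    memc_pass $backend;
}

Whatever scheme the Lua code implements, remember to expose the chosen
backend to your app (e.g. via fastcgi_param, as above) if the app
needs to hit the same server.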