Switching backends based on a cookie

Is it possible to switch backend clusters of servers based on a cookie?

I would like to set a cookie named “env” and do something like this:

    if ($http_cookie ~* "env=testing(;|$)") {
        proxy_pass http://backend_testing;
    }
    if ($http_cookie ~* "env=staging(;|$)") {
        proxy_pass http://backend_staging;
    }
    if ($http_cookie ~* "env=production(;|$)") {
        proxy_pass http://backend_production;
    }

However the “proxy_pass” directive is not allowed inside an “if”. Is
there another way I can approach this?

Thanks,
Eliot

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,46979,46979#msg-46979

Hi,

saltyflorida wrote:

    if ($http_cookie ~* "env=production(;|$)") {
        proxy_pass http://backend_production;
    }

However the “proxy_pass” directive is not allowed inside an “if”. Is there another way I can approach this?

Take a look at the map module:

http://wiki.nginx.org/NginxHttpMapModule

One possibility would be:

    http {
        map $cookie_env $backend {
            default      http://backend_production;   # fall back when no cookie is set
            testing      http://backend_testing;
            staging      http://backend_staging;
            production   http://backend_production;
        }

        server {
            location / {
                proxy_pass $backend;
            }
        }
    }

Marcus.

On Thu, Jan 28, 2010 at 4:03 PM, Marcus C. [email protected]
wrote:


Marcus’ solution should be just fine, but I feel I must ask an
important question:

Doesn’t it make more sense to have production, static, and dev as
separate server blocks entirely with their own hostnames? This is, at
the least, traditional :).

– Merlin

Hi,

merlin corey wrote:

Doesn’t it make more sense to have production, static, and dev as
separate server blocks entirely with their own hostnames? This is, at
the least, traditional :).

Yes, I would agree with this (and it should perform a little better
too).

Marcus.
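For illustration, the per-hostname setup Merlin describes might look something like this (the hostnames are made up; the upstream names follow the thread):

    server {
        server_name www.example.com;
        location / {
            proxy_pass http://backend_production;
        }
    }

    server {
        server_name staging.example.com;
        location / {
            proxy_pass http://backend_staging;
        }
    }

    server {
        server_name testing.example.com;
        location / {
            proxy_pass http://backend_testing;
        }
    }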

On Thu, Jan 28, 2010 at 4:24 PM, merlin corey [email protected]
wrote:

Doesn’t it make more sense to have production, static, and dev as
separate server blocks entirely with their own hostnames? This is, at
the least, traditional :).

Of course static should read staging -.-

Marcus, thank you for your response. The map module is very useful.
I implemented your suggestion and am now able to switch backend servers
using the cookie.

Now I have another problem: the cache is storing pages generated by
the 3 different backend clusters. Is there a way I can bypass the cache
if the cookie is set to either “testing” or “staging”?

Here is my simplified config:
    http {
        upstream backend_testing {
            ip_hash;
            server …
        }
        upstream backend_staging {
            ip_hash;
            server …
        }
        upstream backend_production {
            ip_hash;
            server …
        }

        proxy_cache_path /mnt/nginx_cache levels=1:2 keys_zone=one:100m
                         inactive=7d max_size=10g;
        proxy_temp_path /var/www/nginx_temp;

        map $cookie_uslnn_env $mybackend {
            default      http://backend_production;
            testing      http://backend_testing;
            staging      http://backend_staging;
            production   http://backend_production;
        }

        server {
            location / {
                proxy_pass $mybackend;
                proxy_cache one;
                proxy_cache_key $my_cache_key;
                proxy_cache_valid 200 302 304 10m;
                proxy_cache_valid 301 1h;
                proxy_cache_valid any 1m;
                proxy_cache_use_stale updating error timeout invalid_header
                                      http_500 http_502 http_503 http_504;
            }

            location /wp-admin {
                proxy_pass $mybackend;
                proxy_read_timeout 300;
            }
        }
    }

Thanks,
Eliot

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,46979,47024#msg-47024


I realize this idea is unorthodox… We are using Wordpress MU on the
backend and it uses the domain name to generate pages. We want to serve
many domains with a single server cluster and we wanted to be able to
test using the production domain names.

Eliot

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,46979,47030#msg-47030

Sorry, I mis-read your question…
I don’t think that you can conditionally disable cache.

Best regards,
Piotr S. < [email protected] >
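(Later nginx versions added proxy_cache_bypass and proxy_no_cache, which do make conditional cache bypass possible; a minimal sketch, reusing the cookie name and the $mybackend map from this thread:)

    map $cookie_uslnn_env $skip_cache {
        default  0;
        testing  1;
        staging  1;
    }

    server {
        location / {
            proxy_pass $mybackend;
            proxy_cache one;
            # non-zero: the response is not taken from the cache ...
            proxy_cache_bypass $skip_cache;
            # ... and is not saved to it either
            proxy_no_cache     $skip_cache;
        }
    }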

Eliot,
try using: proxy_cache_key $my_cache_key$cookie_uslnn_env;

Best regards,
Piotr S. < [email protected] >
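In context, Piotr’s suggestion amounts to something like the following (keeping the $my_cache_key variable from the posted config):

    location / {
        proxy_pass $mybackend;
        proxy_cache one;
        # the cookie value becomes part of the key, so each environment
        # gets its own cache entries instead of sharing one
        proxy_cache_key $my_cache_key$cookie_uslnn_env;
    }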


Hi Piotr,
Thank you, that helps. Ideally, we’d like to bypass the cache for faster
testing, but this will get us up and running. (Also, thank you for your
help earlier -- I am still looking into that problem.)

Eliot

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,46979,47035#msg-47035

saltyflorida wrote:

I forgot to mention that I am using caching with the HTTP Proxy module and that I only want to cache responses from the production servers. When I have the cookie set to “testing” or “staging”, I’d like to bypass the cache and talk directly to the backend. Does this sound feasible?

Sure. Do a rewrite under the ‘location /’ block, using your $backend
variable, to one of three other location blocks, each with its own
proxy_pass, proxy_cache_valid…

e.g.

    map $cookie_[name] $backend {
        default   production;
        test      test;
        ...
    }

    location / {
        rewrite ^(.*)$ /$backend/$1;
    }

    location /production/ {
        proxy_pass http://backend_production;
        proxy_cache_valid …
    }

    location /test/ {
        proxy_pass http://backend_test;
        # no proxy_cache_valid here, so these responses are not cached
    }

Note, you’ll need some way to catch the case of no cookie variable, so
it’s unwise to put $cookie_[name] directly in the rewrite result (you’ll
get an infinite loop on such results).

Marcus.


Marcus,
Thank you for the quick response. I will try the map module.
I forgot to mention that I am using caching with the HTTP Proxy module
and that I only want to cache responses from the production servers.
When I have the cookie set to “testing” or “staging”, I’d like to bypass
the cache and talk directly to the backend. Does this sound feasible?

Merlin,
I realize this setup is unorthodox. We are using Wordpress MU and it
generates different pages based on the domain name. We are serving many
domains with one server cluster and wanted to be able to test using the
production domain names.

Thanks,
Eliot

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,46979,47012#msg-47012


Marcus,
Thank you for your help. I had wondered if I could use a rewrite, but I
don’t understand how this works. I tried to implement your suggestion,
but I am being redirected to /testing/ or /production/. These show up as
part of the URL in the browser. Also, trying to visit pages other than
the root returns a 404 error. Here is my configuration. Can you point
out what I’m doing wrong?

    http {
        upstream backend_testing {
            ip_hash;
            server …
        }
        upstream backend_staging {
            ip_hash;
            server …
        }
        upstream backend_production {
            ip_hash;
            server …
        }

        proxy_cache_path /mnt/nginx_cache levels=1:2 keys_zone=one:100m
                         inactive=7d max_size=10g;
        proxy_temp_path /var/www/nginx_temp;

        map $cookie_uslnn_env $backend {
            default      http://backend_production;
            testing      http://backend_testing;
            staging      http://backend_staging;
            production   http://backend_production;
        }

        server {
            location / {
                rewrite ^(.*)$ /$backend/$1;
            }
            location /testing/ {
                proxy_pass http://backend_testing;
            }
            location /staging/ {
                proxy_pass http://backend_staging;
            }
            location /production/ {
                proxy_pass http://backend_production;
                proxy_cache one;
                proxy_cache_key $my_cache_key;
                proxy_cache_valid 200 302 304 10m;
                proxy_cache_valid 301 1h;
                proxy_cache_valid any 1m;
                proxy_cache_use_stale updating error timeout invalid_header
                                      http_500 http_502 http_503 http_504;
            }
            location /wp-admin {
                proxy_pass http://backend_production;
                proxy_read_timeout 300;
            }
        }
    }

Thanks,
Eliot

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,46979,47092#msg-47092

Correction:
The configuration I tried looks like this:

    http {
        upstream backend_testing {
            ip_hash;
            server …
        }
        upstream backend_staging {
            ip_hash;
            server …
        }
        upstream backend_production {
            ip_hash;
            server …
        }

        proxy_cache_path /mnt/nginx_cache levels=1:2 keys_zone=one:100m
                         inactive=7d max_size=10g;
        proxy_temp_path /var/www/nginx_temp;

        map $cookie_uslnn_env $backend {
            default      production;
            production   production;
            testing      testing;
            staging      staging;
        }

        server {
            location / {
                rewrite ^(.*)$ /$backend/$1;
            }
            location /testing/ {
                proxy_pass http://backend_testing;
            }
            location /staging/ {
                proxy_pass http://backend_staging;
            }
            location /production/ {
                proxy_pass http://backend_production;
                proxy_cache one;
                proxy_cache_key $my_cache_key;
                proxy_cache_valid 200 302 304 10m;
                proxy_cache_valid 301 1h;
                proxy_cache_valid any 1m;
                proxy_cache_use_stale updating error timeout invalid_header
                                      http_500 http_502 http_503 http_504;
            }
            location /wp-admin {
                proxy_pass http://backend_production;
                proxy_read_timeout 300;
            }
        }
    }

Eliot

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,46979,47093#msg-47093

On Thu, Jan 28, 2010 at 4:49 PM, Marcus C. [email protected]
wrote:


And the configuration will be simpler and easier to understand six
months from now :) without any ifs or rewrites ;). Also, the servers
can be moved to separate hardware (another tradition)!

We are serving many domains with one server cluster and wanted to be able to test using the production domain names.

Use the power of NginX at your disposal! TELL Wordpress MU what the
domain name is. ;)

fastcgi_param SERVER_NAME myawesomeproductiondomain.com;

– Merlin
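Since the configs posted in this thread proxy to HTTP backends rather than FastCGI, the equivalent trick there would be overriding the Host header on the proxied request (the domain below is Merlin’s placeholder):

    location / {
        proxy_pass $mybackend;
        # Wordpress MU picks the blog by hostname, so hand it the
        # production domain regardless of how the request arrived
        proxy_set_header Host myawesomeproductiondomain.com;
    }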


Sorry, my fault. That should have read ‘proxy_pass
http://backend_production/;’. The final slash ‘deletes’ the first part
of the location that’s passed.

Note that you will want to add the slash for the /production/,
/testing/… blocks, but not for the /wp-admin block.

Marcus.
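Concretely, applying that correction to the posted config, the environment locations gain a trailing slash (so a request for /production/foo reaches the backend as /foo), while /wp-admin stays as it was:

    location /production/ {
        # trailing slash: the "/production/" prefix is stripped before proxying
        proxy_pass http://backend_production/;
    }
    location /testing/ {
        proxy_pass http://backend_testing/;
    }
    location /staging/ {
        proxy_pass http://backend_staging/;
    }
    location /wp-admin {
        # no URI part here, so the request URI is passed through unchanged
        proxy_pass http://backend_production;
    }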

Laurence Rowe wrote:

I would take a look at HAProxy which has better support for this use
case, allowing for requests to be retried against another server if
their associated backend is down.

I would agree that if you’re just wanting to do proxying, then HAProxy
is probably a better way to go, however the above is also possible in
Nginx using upstreams.

Marcus.

2010/1/29 Marcus C. [email protected]:

I would agree that if you’re just wanting to do proxying, then HAProxy is
probably a better way to go, however the above is also possible in Nginx
using upstreams.

The only option then for sticky sessions is ip_hash, not cookies.

Laurence

Hi,

The only option then for sticky sessions is ip_hash, not cookies.

No, it’s also possible to direct traffic to particular backend servers
using cookies too.

In fact there are more ways of directing traffic to backends/clusters
with Nginx than there are with HAProxy - in the sense of the number of
ways of choosing a cluster (which could just be one server) - but AFAIK
there are currently fewer ways of hashing / distributing over the
servers in a particular cluster of backends with Nginx than HAProxy
(even if you include the non-core modules).

If you did want high redundancy as well as sticky sessions, though, then
you’d probably want to store your key application data in something like
memcached and have your backend application query that.

Marcus.
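Cookie-based stickiness of the kind mentioned above can be sketched with the same map technique used earlier in the thread (the cookie name and server addresses here are illustrative):

    # route a "srv" cookie to an individual backend server
    map $cookie_srv $sticky_backend {
        default  http://backend_production;   # no cookie: use the balanced cluster
        s1       http://10.0.0.1:8080;
        s2       http://10.0.0.2:8080;
    }

    server {
        location / {
            proxy_pass $sticky_backend;
        }
    }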
