Cache as failover

So far I have been using Nginx as a load balancer only. Since we have a
huge amount of (frequently changing) pages and a low hit rate per unique
page, output caching has never been really interesting.

I have a failover app server in my upstream configuration in case one of
the active servers fails.
But if all app servers failed - e.g. because the database backend has a
problem - app-server failover would not help at all, and in some cases I
would get a 502 Bad Gateway error… not nice.

To prevent that, I'd like to build a static cache with nginx while
everything works fine, and deliver those cached pages when all app
servers fail.

Is that possible?

Posted at Nginx Forum:

Any ideas? Could proxy_store (Module ngx_http_proxy_module) help me in
that case?
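For reference, the proxy_store idea would look roughly like this untested sketch: save a copy of every proxied response to disk and fall back to those files when the upstream is down. The upstream name and paths here are assumptions, not from a real setup, and proxy_store does no expiry of its own.

```nginx
location / {
    # If nginx cannot get a good answer from the backend,
    # serve the stored static copy instead.
    error_page 502 503 504 = @static_copy;
    proxy_pass http://appBackend;                 # hypothetical upstream name
    proxy_store /srv/www/nginx/mirror$uri;        # assumed mirror path
    proxy_store_access user:rw group:rw all:r;
}

location @static_copy {
    root /srv/www/nginx/mirror;
}
```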


proxy_ignore_headers "Cache-Control" "Expires";
proxy_hide_header "Set-Cookie";
proxy_cache_key
"$request_method|$http_if_modified_since|$http_if_none_match|$host|$request_uri";
proxy_cache_valid 200 302 301 304 1m;
proxy_cache_valid any 0m;
proxy_cache_use_stale error timeout invalid_header http_500
http_502 http_503 http_504;
proxy_cache_path cache/ levels=2:2 keys_zone=cache:256m
inactive=7d max_size=65536m;

server {
    server_name …;
    location / {
        proxy_pass http://…;
        proxy_cache cache;
    }
}

proxy_cache_use_stale: cached content will be returned instead of
proxying in case of 502/503/504 errors.

Thank you, this is working excellently…
I've set proxy_cache_valid to 10m to soften the I/O overhead… still:

  • it would be cool if I could set the cache TTL to a much higher
    value and bypass the cache completely while the backend servers
    return valid codes.
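One possible way to get that behaviour, as an untested sketch: set a long TTL and force every request through to the backend with proxy_cache_bypass, which still refreshes the cached copy on the way through. That the stale entry is still served on backend errors while bypassing is an assumption worth verifying.

```nginx
# Long TTL (7d is an arbitrary example) so stale copies survive an outage.
proxy_cache_valid 200 301 302 304 7d;

location / {
    proxy_pass http://contentBackend;
    proxy_cache cache;
    # A constant true value passes every request to the backend;
    # the fresh response still replaces the cached entry.
    proxy_cache_bypass 1;
    proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
}
```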

When the backends fail and I have to deliver from cache, I'd
additionally like to send a "304 Not Modified" header (does that make
sense, since it is true…!?)

For those who might have a similar scenario…

Content backend servers fail, backup servers fail => deliver the cache
built on the Nginx proxy.
Search backend fails => no backup => deliver a "503 Service Unavailable"
maintenance site.

upstream contentBackend {
    server 192.168.50.101 weight=2 max_fails=1 fail_timeout=30s;
    server 192.168.50.102 weight=2 max_fails=1 fail_timeout=30s;
    server 192.168.50.103 weight=2 max_fails=1 fail_timeout=30s;
    server 192.168.50.104 weight=2 max_fails=1 fail_timeout=30s;
    server 192.168.50.105 weight=2 max_fails=1 fail_timeout=30s;
    server 192.168.50.106 weight=1 max_fails=1 fail_timeout=30s;
    server 192.168.50.107 weight=1 max_fails=1 fail_timeout=30s;
    server 192.168.50.108 weight=1 max_fails=1 fail_timeout=30s;
    server 192.168.50.109 backup;
    server 192.168.50.110 backup;
}

upstream searchBackend {
    server 192.168.50.106 weight=1 max_fails=1 fail_timeout=30s;
    server 192.168.50.107 weight=1 max_fails=1 fail_timeout=30s;
    server 192.168.50.108 weight=1 max_fails=1 fail_timeout=30s;
}

proxy_ignore_headers "Cache-Control" "Expires";
proxy_hide_header "Set-Cookie";
proxy_cache_key
"$request_method|$http_if_modified_since|$http_if_none_match|$host|$request_uri";
proxy_cache_valid 200 302 301 304 1s;
proxy_cache_valid any 0m;
proxy_cache_use_stale error timeout invalid_header http_500 http_502
http_503 http_504;
proxy_cache_path /tmp/nginxCache levels=2:2 keys_zone=cache:256m
inactive=7d max_size=65536m;

server {

    server_name _;

    location / {
        proxy_pass http://contentBackend;
        proxy_cache cache;
    }

    # No sense in delivering cached search masks; search won't work
    # without a backend.
    location /search/ {
        proxy_pass http://searchBackend;
        error_page 500 502 503 504 =510 /510.html;
    }

    location /510.html {
        root /srv/www/nginx/htdocs;
    }

}
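As a side note, the summary above talks about a "503 Service Unavailable" maintenance site while the config maps the failures to the non-standard code 510. A variant that actually answers with 503 (an untested sketch, reusing the upstream and docroot from the config above) could look like:

```nginx
location /search/ {
    proxy_pass http://searchBackend;
    # Map any backend failure to a real 503 instead of 510.
    error_page 500 502 503 504 =503 /503.html;
}

location = /503.html {
    root /srv/www/nginx/htdocs;
}
```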
