So far I am using Nginx as a load balancer only. Since we have a huge
number of (frequently changing) pages and a small hit rate per unique
page, output caching has never really been interesting.
I have a failover app server in my upstream configuration in case one
of the active servers fails.
Now if all app servers failed, e.g. because the database backend has a
problem, app-server failover would not help at all; in some cases I
would get a 502 Bad Gateway error… not nice.
To prevent that, I'd like to build a static cache with nginx while
everything works fine, and deliver those cached pages when all
app servers fail.
Thank you, this is working excellently…
I've set proxy_cache_valid to 10m to soften the I/O overhead… still, it
would be nice if I could set the cache TTL to a much higher value and
bypass the cache completely while the backend servers return valid
status codes.
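The setup described above can be sketched with proxy_cache_use_stale, which serves a stale cached copy only when the upstreams error out or time out, while fresh responses keep refreshing the cache. The cache path and zone name here are illustrative assumptions, not taken from my real config:

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=contentCache:50m
                 max_size=1g inactive=7d;

server {
    location / {
        proxy_pass http://contentBackend;
        proxy_cache contentCache;
        # refresh cached entries every 10 minutes while backends are healthy
        proxy_cache_valid 200 10m;
        # deliver stale content only when the backends fail
        proxy_cache_use_stale error timeout http_502 http_503 http_504;
    }
}

With inactive set to 7d, entries survive on disk long after their 10m freshness expires, so there is something stale to serve during an outage.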
When the backends fail and I have to deliver from the cache, I'd
additionally like to send a "304 Not Modified" header (does that make
sense, since it is true…!?)
For those who might have a similar scenario…
Content backend servers fail, backup servers fail => deliver the cache
built on the Nginx proxy
Search backend fails => no backup => deliver a "503 Service Unavailable"
maintenance site
upstream contentBackend {
    server 192.168.50.101 weight=2 max_fails=1 fail_timeout=30s;
    server 192.168.50.102 weight=2 max_fails=1 fail_timeout=30s;
    server 192.168.50.103 weight=2 max_fails=1 fail_timeout=30s;
    server 192.168.50.104 weight=2 max_fails=1 fail_timeout=30s;
    server 192.168.50.105 weight=2 max_fails=1 fail_timeout=30s;
    server 192.168.50.106 weight=1 max_fails=1 fail_timeout=30s;
    server 192.168.50.107 weight=1 max_fails=1 fail_timeout=30s;
    server 192.168.50.108 weight=1 max_fails=1 fail_timeout=30s;
    server 192.168.50.109 backup;
    server 192.168.50.110 backup;
}

upstream searchBackend {
    server 192.168.50.106 weight=1 max_fails=1 fail_timeout=30s;
    server 192.168.50.107 weight=1 max_fails=1 fail_timeout=30s;
    server 192.168.50.108 weight=1 max_fails=1 fail_timeout=30s;
}