Primary and fallback to backup -- try_files, proxy_pass, upstream?

What I would like is for nginx to first try the primary service and then
fall back to the backup if the primary is not available.

What I’ve tried:

  1. try_files:

    location /api {
        try_files @primary @backup;
    }

    location @primary {
        proxy_pass http://localhost:81;
    }

    location @backup {
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/boot/api.php;
        fastcgi_pass unix:/tmp/php-fpm.sock;
    }

This results in nginx only invoking @backup.
If I switch the try_files parameters (try_files @backup @primary) then
@primary is invoked, but if it is not running I get a 502 error
(error log: 9228#0: *1850 no live upstreams while connecting to upstream).

  2. upstream

    upstream backend {
        server localhost:81;
        server unix:/tmp/php-fpm.sock;
    }

    location /api {
        proxy_pass http://backend;
    }

The first part of this solution works: requests are first passed to
localhost:81 and handled there without any problems, and if that server is
down requests go to the second server in the upstream.
What happens there is another 502 error:
error log: *1840 recv() failed (104: Connection reset by peer) while
reading response header from upstream, client: …, server: …, request:
“GET /api HTTP/1.1”, upstream: “http://unix:/tmp/php-fpm.sock:/api”

How do I configure the fastcgi backend to handle this properly?

Is there any way to make this work?
If both are possible, which one is better?

Hello!

On Sun, Feb 14, 2010 at 01:43:55PM +0100, Denis Arh wrote:

> What I would like to have is nginx first to try primary service and then to
> fallback to the backup if primary is not available.
>
> What I’ve tried:
>
>   1. try_files:
>
>     location /api {
>     try_files @primary @backup;

This won’t work, as try_files tries files (surprise!) and uses the
fallback URI only if none was found. See syntax details here:

http://wiki.nginx.org/NginxHttpCoreModule#try_files
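In other words, try_files checks for the existence of files on disk and treats only its last parameter as a fallback URI; it never probes upstream servers. A minimal sketch of its intended use (the location names here are hypothetical):

```nginx
location / {
    # Look for a matching file, then a directory with that name;
    # if neither exists on disk, hand the request to the named
    # location as the final fallback.
    try_files $uri $uri/ @fallback;
}

location @fallback {
    proxy_pass http://localhost:81;
}
```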

[…]

> The first part of this solution works – requests are first passed to
> localhost:81 and handled there without any problems and if this one is down
> requests go to the second server in the upstream.

Not really. Requests are randomly balanced between the two servers
you specified. Once one of them is down, nginx uses the other one to
process requests.
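For completeness: with two http backends, the primary/backup behaviour asked about can be expressed directly in the upstream block via the backup parameter (a sketch; localhost:82 is an assumed port for a second http backend):

```nginx
upstream backend {
    server localhost:81;           # primary, receives all traffic
    server localhost:82 backup;    # only used when the primary is unavailable
}

location /api {
    proxy_pass http://backend;
}
```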

> What happens there is another 502 error:
> error-log: *1840 recv() failed (104: Connection reset by peer) while reading
> response header from upstream, client: …, server: …, request: “GET
> /api HTTP/1.1”, upstream: “http://unix:/tmp/php-fpm.sock:/api”

> How to configure fastcgi backend propery to handle this properly?

You use proxy_pass, so all backends should be http. Mixing a
fastcgi backend into one pool with http ones isn’t an option.

> Is there any way to make this work?
> If both are possible, which one is better?

location /api {
    error_page 502 504 = @fallback;
    proxy_pass http://primary-backend;
}

location @fallback {
    fastcgi_pass ...
}
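Filled in with the fastcgi settings from the original question, the whole configuration might look like this (a sketch; it assumes the primary http service on localhost:81 and the php-fpm socket from the question):

```nginx
location /api {
    # When the upstream is unreachable, nginx itself generates the
    # 502/504 response, so error_page can redirect those statuses
    # internally to the named fallback location.
    error_page 502 504 = @fallback;
    proxy_pass http://localhost:81;
}

location @fallback {
    include /etc/nginx/fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/boot/api.php;
    fastcgi_pass unix:/tmp/php-fpm.sock;
}
```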

Maxim D.

I will not even try to explain why I’ve tried the “hard” way first :)

Your solution worked perfectly.

I’ve added “error_log /dev/null crit;” to “location /api” so it does
not fill the log files with “connection refused” errors.

Thank you.
