Send 502 when all php-fpm workers are in use

I have a php-fpm pool with 6 workers. Long-running requests are being
sent, so I have the following fastcgi directives set:

fastcgi_connect_timeout 15;
fastcgi_send_timeout 1200;
fastcgi_read_timeout 1200;
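For context, a minimal sketch of where these directives typically live; the socket path and location pattern are illustrative, not from the post:

```nginx
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php-fpm.sock;   # socket path illustrative
    include fastcgi_params;

    fastcgi_connect_timeout 15;    # time allowed to establish the connection
    fastcgi_send_timeout 1200;     # timeout between successive writes to the backend
    fastcgi_read_timeout 1200;     # timeout between successive reads from the backend
}
```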

However, right now, if the php-fpm pool of workers is full, a request
waits the full 20 minutes. I'd like requests to fail with a 502 status
code instead when the pool is full, while still allowing long-running
requests (max 20 minutes). I would have thought that if the php-fpm
workers are all in use, a request would time out after 15 seconds per
fastcgi_connect_timeout, but this does not seem to be the case.

Thanks for the help.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,251476,251476#msg-251476

The only way around this would be some kind of counter keeping track of
what's available; if the max is reached, create a file that you can test
for in the nginx config. Maybe Lua could handle the counting part.
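That counting idea can be sketched with the lua-nginx-module (OpenResty). This is a hypothetical illustration, not tested config from the thread; the shared dict name, the limit of 6 (the pool size from the original post), and the socket path are all assumptions:

```nginx
# Requires ngx_http_lua_module (OpenResty). Tracks in-flight PHP
# requests in a shared dict and returns 502 once the pool would be full.
lua_shared_dict fpm_inflight 1m;

server {
    location ~ \.php$ {
        access_by_lua_block {
            -- incr with an init value of 0 creates the key on first use
            local inflight = ngx.shared.fpm_inflight:incr("count", 1, 0)
            if inflight > 6 then   -- pool size from the original post
                -- the log phase below also runs for rejected requests,
                -- so every increment is balanced by one decrement
                return ngx.exit(ngx.HTTP_BAD_GATEWAY)
            end
        }
        log_by_lua_block {
            ngx.shared.fpm_inflight:incr("count", -1)
        }

        fastcgi_pass unix:/var/run/php-fpm.sock;  # path illustrative
        include fastcgi_params;
    }
}
```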

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,251476,251478#msg-251478

Hello!

On Sat, Jul 05, 2014 at 10:47:43PM -0400, justink101 wrote:

> running requests (max 20 minutes) though. I would have thought if the
> php-fpm pool workers are all being used, a request would timeout in 15
> seconds according to fastcgi_connect_timeout, but this does not seem to be
> the case.

From the nginx side, a connection in a backend's listen queue isn't
distinguishable from an accepted connection, hence
fastcgi_connect_timeout doesn't apply as long as the backend is
reachable and its listen queue isn't full. (And fastcgi_send_timeout
doesn't apply either if a request is small enough to fit into the
socket send buffer.)
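As an aside (a minimal sketch, not from the thread), this behaviour is easy to reproduce with plain sockets: a client's connect() completes as soon as the kernel queues the connection in the listener's backlog, even though the server process never calls accept(). This is why fastcgi_connect_timeout never fires while the php-fpm workers are merely busy:

```python
import socket

# A listener with a backlog, standing in for php-fpm's listen socket.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(5)                    # kernel queues up to ~5 unaccepted connections
port = server.getsockname()[1]

# The "nginx" side: connect() is completed by the kernel from the
# backlog alone; the server never calls accept(), yet no timeout fires.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.settimeout(2)
client.connect(("127.0.0.1", port))  # succeeds without any accept()
print("connected without accept()")

client.close()
server.close()
```

Only once the backlog itself is full do new connection attempts stall or get refused, and only then does fastcgi_connect_timeout come into play.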

To reduce the number of affected requests, you may consider using a
smaller backlog in php-fpm; see here:

http://www.php.net/manual/en/install.fpm.configuration.php#listen-backlog
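For reference, listen.backlog is set per pool; the pool name, socket path, and values below are illustrative guesses, not settings from the thread:

```ini
; pool configuration, e.g. /etc/php-fpm.d/www.conf (path illustrative)
[www]
listen = /var/run/php-fpm.sock
pm = static
pm.max_children = 6

; How many unaccepted connections the kernel may queue before
; refusing new ones. Smaller values fail excess requests sooner.
listen.backlog = 8
```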


Maxim D.
http://nginx.org/

Hello!

On Sun, Jul 06, 2014 at 10:14:42PM -0400, justink101 wrote:

> Maxim,
>
> If I set the php-fpm pool listen.backlog to 0, will this accomplish what I
> want. I.e. fill up workers, once all the workers are used, fail requests.

Note that such a low value has the downside of not tolerating
connection spikes, so you may want to use something slightly bigger.


Maxim D.
http://nginx.org/

Maxim,

If I set the php-fpm pool listen.backlog to 0, will this accomplish what
I want, i.e. fill up the workers and, once all the workers are in use,
fail further requests?

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,251476,251488#msg-251488

Starting php-fpm: [07-Jul-2014 17:52:33] WARNING: [pool app-execute]
listen.backlog(0) was too low for the ondemand process manager. I
updated it for you to 128

Well, that is unfortunate; I'm not sure why the ondemand process manager
requires a backlog of 128. This php-fpm pool runs jobs and the workers
automatically exit afterwards, so essentially they spawn, run, and die.

pm = ondemand
pm.max_children = 100
pm.process_idle_timeout = 3s

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,251476,251519#msg-251519
