However, right now, if the php-fpm pool of workers is full, a request
waits the full 20 minutes. I'd like requests to fail with a 502 status
code if the php-fpm pool of workers is full instead. This change should
still allow long-running requests (max 20 minutes), though. I would have
thought that if the php-fpm pool workers are all in use, a request would
time out in 15 seconds according to fastcgi_connect_timeout, but this
does not seem to be the case.
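For reference, a minimal sketch of the nginx settings being described
(the location block, socket path, and values are assumptions, not the
poster's actual config):

```nginx
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php-fpm/app.sock;
    # Time allowed to establish a connection to php-fpm. Note this does
    # NOT cover time spent waiting in the backend's listen queue.
    fastcgi_connect_timeout 15s;
    # Allow long-running requests up to 20 minutes.
    fastcgi_read_timeout 20m;
    include fastcgi_params;
}
```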
The only way around this would be some kind of counter keeping track of
what's available, and if the max is reached, create a file that you test
on in an nginx config. Maybe Lua can do this counting part.
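A rough sketch of that counting idea, using the ngx_http_lua_module's
shared dict instead of a file (the dict name, socket path, and the
50-worker limit are all made up; the limit would need to match the
pool's pm.max_children):

```nginx
http {
    lua_shared_dict fpm_inflight 1m;

    server {
        location ~ \.php$ {
            access_by_lua_block {
                local dict = ngx.shared.fpm_inflight
                -- incr fails if the key doesn't exist yet, so seed it.
                local n = dict:incr("count", 1)
                if not n then
                    dict:set("count", 1)
                    n = 1
                end
                if n > 50 then  -- hypothetical pool size
                    dict:incr("count", -1)
                    return ngx.exit(502)
                end
            }
            log_by_lua_block {
                -- log phase runs even for failed requests,
                -- so the counter always comes back down.
                ngx.shared.fpm_inflight:incr("count", -1)
            }
            fastcgi_pass unix:/var/run/php-fpm/app.sock;
            include fastcgi_params;
        }
    }
}
```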
On Sat, Jul 05, 2014 at 10:47:43PM -0400, justink101 wrote:
> running requests (max 20 minutes) though. I would have thought if the
> php-fpm pool workers are all being used, a request would timeout in 15
> seconds according to fastcgi_connect_timeout, but this does not seem
> to be the case.
From the nginx side, a connection in a backend's listen queue isn't
distinguishable from an accepted connection, hence
fastcgi_connect_timeout doesn't apply as long as the backend is
reachable and its listen queue isn't full. (And fastcgi_send_timeout
doesn't apply either if a request is small enough to fit into the
socket send buffer.)
To reduce the number of affected requests, you may consider using a
smaller backlog in php-fpm, see here:
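That suggestion corresponds to something like the following in the pool
config (the pool name matches the warning quoted below; the socket path
and backlog value are illustrative):

```ini
; hypothetical pool config, e.g. /etc/php-fpm.d/app-execute.conf
[app-execute]
listen = /var/run/php-fpm/app.sock
; A small backlog means excess connections are refused quickly
; (letting nginx return an error) instead of queueing behind
; busy workers for the full request timeout.
listen.backlog = 1
```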
Starting php-fpm: [07-Jul-2014 17:52:33] WARNING: [pool app-execute]
listen.backlog(0) was too low for the ondemand process manager. I
updated it for you to 128
Well, that is unfortunate; I'm not sure why using ondemand requires a
backlog of 128. Essentially this php-fpm pool runs jobs and then the
workers automatically exit. In short, they spawn, run, and die.
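For context, the spawn-run-die behaviour described above matches a pool
using the ondemand process manager, roughly like this (all values are
illustrative):

```ini
[app-execute]
listen = /var/run/php-fpm/app.sock
pm = ondemand
; hypothetical ceiling on concurrent workers
pm.max_children = 50
; workers exit shortly after finishing a job
pm.process_idle_timeout = 1s
```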