Forum: NGINX Send 502 when all php-fpm workers are in use

justink101 (Guest)
on 2014-07-06 04:48
(Received via mailing list)
I have a php-fpm pool with 6 workers. Long-running requests are being
sent to it, so I have the following fastcgi directives set:

fastcgi_connect_timeout 15;
fastcgi_send_timeout 1200;
fastcgi_read_timeout 1200;

However, right now, if the php-fpm pool of workers is full, a request
waits the full 20 minutes. I'd like requests to fail with a 502 status
code when the pool is full instead, while still allowing long-running
requests (up to 20 minutes). I would have thought that if all the
php-fpm workers are in use, a request would time out after 15 seconds
per fastcgi_connect_timeout, but this does not seem to be the case.
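
For context, these directives sit in a location block along these lines
(the socket path below is just a placeholder, not my real one):

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php-fpm.sock;

        # long-running jobs: allow up to 20 minutes once a worker has the request
        fastcgi_connect_timeout 15;
        fastcgi_send_timeout 1200;
        fastcgi_read_timeout 1200;
    }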

Thanks for the help.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,251476,251476#msg-251476
Maxim Dounin (Guest)
on 2014-07-06 13:20
(Received via mailing list)
Hello!

On Sat, Jul 05, 2014 at 10:47:43PM -0400, justink101 wrote:

> running requests (max 20 minutes) though. I would have thought if the
> php-fpm pool workers are all being used, a request would timeout in 15
> seconds according to fastcgi_connect_timeout, but this does not seem to be
> the case.

From the nginx side, a connection sitting in a backend's listen queue
isn't distinguishable from an accepted connection, hence
fastcgi_connect_timeout doesn't apply as long as the backend is
reachable and its listen queue isn't full.  (And
fastcgi_send_timeout doesn't apply either if the request is small
enough to fit into the socket send buffer.)

To reduce the number of affected requests you may consider using a
smaller backlog in php-fpm, see here:

http://www.php.net/manual/en/install.fpm.configura...
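
For example, something along these lines in the pool configuration
(pool name, socket path, and the exact backlog value are only
placeholders; a smaller backlog means fewer requests can queue up
waiting for a free worker):

    [www]
    listen = /var/run/php-fpm.sock
    listen.backlog = 8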

--
Maxim Dounin
http://nginx.org/
itpp2012 (Guest)
on 2014-07-06 14:06
(Received via mailing list)
The only way around this would be some kind of counter keeping track of
what's available; if the maximum is reached, create a file that you test
for in the nginx config. Maybe Lua can do the counting part.
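
Roughly something like this with the lua-nginx-module shared dict
(untested sketch; the dict name and the limit of 6 are just examples
matching your worker count):

    # in the http {} block
    lua_shared_dict fpm_inflight 1m;

    location ~ \.php$ {
        access_by_lua_block {
            local dict = ngx.shared.fpm_inflight
            dict:add("inflight", 0)             -- create the counter on first use
            local n = dict:incr("inflight", 1)  -- count this request in
            if n and n > 6 then                 -- all workers busy: reject right away
                return ngx.exit(ngx.HTTP_BAD_GATEWAY)
            end
        }
        log_by_lua_block {
            -- the log phase runs for rejected requests too, so the counter stays balanced
            ngx.shared.fpm_inflight:incr("inflight", -1)
        }
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }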

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,251476,251478#msg-251478
justink101 (Guest)
on 2014-07-07 04:15
(Received via mailing list)
Maxim,

If I set the php-fpm pool listen.backlog to 0, will this accomplish what
I want? I.e., fill up the workers, and once all the workers are in use,
fail requests.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,251476,251488#msg-251488
Maxim Dounin (Guest)
on 2014-07-07 14:06
(Received via mailing list)
Hello!

On Sun, Jul 06, 2014 at 10:14:42PM -0400, justink101 wrote:

> Maxim,
>
> If I set the php-fpm pool listen.backlog to 0, will this accomplish what I
> want. I.e. fill up workers, once all the workers are used, fail requests.

Note that such a low value has the downside of not tolerating
connection spikes, so you may want to actually use something
slightly bigger.

--
Maxim Dounin
http://nginx.org/
justink101 (Guest)
on 2014-07-08 02:56
(Received via mailing list)
Starting php-fpm: [07-Jul-2014 17:52:33] WARNING: [pool app-execute]
listen.backlog(0) was too low for the ondemand process manager. I
updated it for you to 128

Well, that is unfortunate; I'm not sure why using ondemand requires a
backlog of 128. This php-fpm pool runs jobs and the workers then exit
automatically, so essentially they spawn, run, and die.

pm = ondemand
pm.max_children = 100
pm.process_idle_timeout = 3s;

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,251476,251519#msg-251519