PHP-FPM and concurrency

Hello,

I have a strange problem that is maybe not related to nginx, but it could
very well be. I make a long (time-consuming) request to a PHP page. While
that request is in progress, let’s say 1 minute, I cannot open another
page from that same “server {}” block with the SAME browser. If within
that minute I use ANOTHER browser to open a page from that “server {}”
block, it opens just fine, which rules out any database locks, for
example. I have tested this with at least one other person, so it is not
a browser configuration problem. Also, I can open static files with the
same browser, which means that it is something related to the PHP
requests only.

What can I try and where to look for the problem?

Thank you!
Kiril

It is probably more related to the maximum number of connections a single
browser instance keeps open to a single hostname.

In Firefox, for example, the default is usually only 2 (search Google
for network.http.max-connections-per-server).
IE allows at least 6 (but it seems you are not using that).

Increase those limits and see if it helps.

rr

----- Original Message -----
From: “Kiril A.” [email protected]
To: [email protected]
Sent: Thursday, January 28, 2010 10:01 AM
Subject: PHP-FPM and concurrency

Hello,

thanks, but as for browser configuration: I checked whether I can open
other resources from the same domain in the same browser, and it works
for static files. Also, browser limits would apply per tab, or allow at
least 6 requests per second, not really 6 concurrent connections.

Any other suggestions?

Static files are most likely served instantly rather than keeping a
connection open for a minute (to check something other than PHP, you
can try a Perl script with just sleep(60); in it).
You can also check whether nginx receives the second request (if not,
then it is still a browser problem and not the web server) by looking at
the access and error logs (in case there is some FastCGI backend timeout).
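
As a variation on that sleep test, a plain PHP script that never calls
session_start() can also help separate session locking from FPM child
saturation; a minimal sketch (the file name sleep.php is just an example):

<?php
// sleep.php - occupies one PHP-FPM child for 60 seconds, but takes
// no session lock because session_start() is never called
sleep(60);
echo 'done';

If two concurrent requests to such a script both finish after about a
minute, FPM has spare children and the blocking happens at a higher
level, e.g. session locking.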

Of course, it might be a problem with the PHP/FPM config. How many PHP
children do you spawn? Could it be that all children are
busy at that moment processing your ~1-minute scripts?
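
For reference, depending on the php-fpm version this limit is either
max_children in the older XML config or pm.max_children in the newer
ini-style pool config; a sketch of the ini form, with example values only:

; how many scripts this pool can run at the same time
pm = static
pm.max_children = 8

With only one or two children, a single 1-minute script makes every
other PHP request wait in the queue.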

rr

----- Original Message -----
From: “Kiril A.” [email protected]
To: [email protected]
Sent: Thursday, January 28, 2010 3:00 PM
Subject: Re: PHP-FPM and concurrency

If this is PHP and you are using sessions, I would guess that your
sessions are blocking. With sessions enabled, each PHP request holds a
write lock on the session file, and concurrent requests are blocked
waiting for the session to become available for an exclusive lock. As
soon as you are done making changes to a session, close it for writing,
and other requests will be handled. See this page for details:

http://php.net/session_write_close
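
A minimal sketch of the pattern (the 60-second sleep stands in for
whatever your long request actually does):

<?php
// long_task.php - do all session work first, then release the lock
session_start();                 // takes an exclusive lock on the session
$_SESSION['started'] = time();   // make any session changes up front
session_write_close();           // persist the session and release the lock

// other requests from the same browser can now acquire the session
// while the slow work continues
sleep(60);
echo 'long task finished';

Note that after session_write_close() any further changes to $_SESSION
are no longer saved unless you call session_start() again.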

…Patrick

Hello,

a short question about the “backup” and “down” flags in my upstream
section. When I mark all servers with the “down” flag, the backup server
does not respond, but when a server is really down, the backup server
works. Is it not possible to mark all upstream servers as down? It would
be a nice feature while updating the backends, so I could put a message
on my backup server… The following setup does not work:

upstream web1 {
    server 10.0.0.10:80 down;
    server 10.0.0.10:81 down;
    server unix:/tmp/nginx1.sock backup;
}

Thanks for your help.

Hello!

On Thu, Jan 28, 2010 at 05:12:55PM +0100, Alexander K. wrote:

upstream web1 {
    server 10.0.0.10:80 down;
    server 10.0.0.10:81 down;
    server unix:/tmp/nginx1.sock backup;
}

See this thread:

http://nginx.org/pipermail/nginx/2010-January/018327.html

Maxim D.

Hey, that was going to be my suggestion :)

Sent from my iPhone

Right on! I switched to sessions in the database and no more problems.
Thank you very much for your time!
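
For anyone who runs into this later, a minimal sketch of the idea using
session_set_save_handler(); the PDO connection details and the sessions
table (columns id, data, updated) are hypothetical, and the SQL is
MySQL-flavoured:

<?php
// db_sessions.php - keep sessions in a database row instead of a file,
// so concurrent requests are not serialized by a per-user file lock
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

function sess_open($path, $name) { return true; }
function sess_close() { return true; }

function sess_read($id) {
    global $pdo;
    $stmt = $pdo->prepare('SELECT data FROM sessions WHERE id = ?');
    $stmt->execute(array($id));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    return $row ? $row['data'] : '';
}

function sess_write($id, $data) {
    global $pdo;
    $stmt = $pdo->prepare(
        'REPLACE INTO sessions (id, data, updated) VALUES (?, ?, NOW())');
    return $stmt->execute(array($id, $data));
}

function sess_destroy($id) {
    global $pdo;
    $stmt = $pdo->prepare('DELETE FROM sessions WHERE id = ?');
    return $stmt->execute(array($id));
}

function sess_gc($maxlifetime) {
    global $pdo;
    $stmt = $pdo->prepare(
        'DELETE FROM sessions WHERE updated < NOW() - INTERVAL ? SECOND');
    return $stmt->execute(array($maxlifetime));
}

session_set_save_handler('sess_open', 'sess_close', 'sess_read',
                         'sess_write', 'sess_destroy', 'sess_gc');
session_start();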

Regards,
Kiril
