Nginx blocking during long running PHP request?


I use nginx to front FastCGI and PHP. I have a PHP page that accesses a
MySQL db and executes a query which takes about 30 seconds to complete.
During that time, if I hit another PHP page on my site, the browser just
spins and does not load that page until the first page finishes
executing its query.

I am not sure if it is nginx blocking somehow, or perhaps PHP can only
execute one page at a time (which would be a major bottleneck when we go
to production - right now we are just testing).

Here is my location block that handles the PHP request:

location ~ \.php$ {
    include fastcgi;
    fastcgi_read_timeout 180s;
    fastcgi_index index.php;
    fastcgi_intercept_errors on;
    error_page 404 /404.php;
    root html;
}

Am I missing a setting somewhere that is preventing nginx and/or PHP
from executing on multiple requests simultaneously? What other
troubleshooting steps can I take?
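One way to narrow this down is to time a fast page while the slow one is still running. A minimal sketch of that test, with sleep calls standing in for the real requests (slow.php, fast.php, and localhost:8888 are hypothetical names; substitute your own pages and curl calls):

```shell
#!/bin/sh
# Stand-ins for the real requests; replace each sleep with a curl call,
# e.g. curl -s -o /dev/null http://localhost:8888/slow.php (hypothetical URL).
slow_page() { sleep 2; }
fast_page() { sleep 0; }

slow_page &            # start the long-running request in the background
start=$(date +%s)
fast_page              # meanwhile, request a fast page
end=$(date +%s)
wait                   # let the background request finish

# With working concurrency the fast page returns right away; if it takes
# about as long as the slow one, requests are being serialized somewhere.
echo "fast page answered after $((end - start))s"
```

If the fast page only blocks in the same browser but not from a second machine or a curl in another terminal, the serialization is client-side (browsers limit connections per host) rather than in nginx or PHP.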

Thank you!


You may need to look at your fastcgi setup and see how many children and
requests you have available.

more /etc/default/php-fastcgi

    # Should php-fastcgi run automatically on startup? (default: no)
    # Which user runs PHP? (default: www-data)
    # Host and TCP port for FASTCGI-Listener (default: localhost:9000)
    # Environment variables, which are processed by PHP


I would suggest it's due to needing more FastCGI processes available
(PHP max children, etc.). I would also recommend refactoring the code:
batch the job, denormalize the data, or do something else to get that
load time down. :)
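For the classic php-cgi/spawn-fcgi setup, concurrency is governed by the PHP_FCGI_CHILDREN environment variable (with PHP_FCGI_MAX_REQUESTS controlling respawns). A hypothetical version of the file quoted above, with illustrative values only (the variable names follow common Debian-style defaults, not your actual file):

    # Hypothetical /etc/default/php-fastcgi -- illustrative values only.
    START=yes                 # run php-fastcgi on startup
    EXEC_AS_USER=www-data     # which user runs PHP
    FCGI_HOST=localhost       # FastCGI listener host
    FCGI_PORT=9000            # FastCGI listener port
    # Environment variables processed by PHP. With PHP_FCGI_CHILDREN=1,
    # every request serializes behind the one currently executing.
    PHP_FCGI_CHILDREN=5
    PHP_FCGI_MAX_REQUESTS=1000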


Are you using something that runs php -b, or spawn-fcgi?

Do you use php-fpm (I hope)?

I noticed you did not mention how you manage your FastCGI engines.


I am using php-fpm and compiled PHP with the --enable-fastcgi and
--enable-fpm switches, among others. At boot time a script in /etc/init.d
launches it.

phpinfo shows CGI/FastCGI as the server API.

In looking at /usr/php/etc/php-fpm.conf I see:

    How much requests each process should execute before respawn.
    Useful to work around memory leaks in 3rd party libraries.
    For endless request processing please specify 0
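For reference, in current ini-style php-fpm configs the same knob is the pm.max_requests directive (the directive name is per modern php-fpm; the value here is only illustrative):

    ; pm.max_requests is the "respawn after N requests" knob described
    ; by the comment above; 0 means never respawn.
    pm.max_requests = 500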

Although - I just noticed this - my php-fpm startup script has:

    php_opts="--fpm-config $php_fpm_CONF"

Notice how php_opts comes out blank. So perhaps my config file is being
ignored completely?

Then again, it probably is being picked up: the listening port of 8888
is set in php-fpm.conf, and php-fpm is clearly listening there. Where
else could it get that value? It is not set in the startup script.

Is there a way I can verify what php-fpm is using as the fcgi-children
and max request settings? What else can I check into?
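Two quick checks you could try. The php-fpm binary path and the -tt flag (which parses and dumps the effective config in reasonably recent builds) are assumptions; adjust for your install:

```shell
#!/bin/sh
# 1) Ask php-fpm to parse and dump its effective config. Path and -tt
#    flag are assumptions for your build; uncomment and adjust:
# /usr/php/sbin/php-fpm --fpm-config /usr/php/etc/php-fpm.conf -tt

# 2) Count the PHP worker processes actually running. If only one worker
#    exists (besides the master), requests will serialize behind it.
workers=$(ps ax -o comm= | grep -c php || true)
echo "php processes running: $workers"
```

Comparing that process count against your intended children setting tells you whether the config file is really being honored.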

Thank you!!


I wouldn't say it's "solved", but I don't seem to have any problems
as far as I can tell.

They upgraded spawn-fcgi this week; maybe it has better features or
fixes some odd bugs.

However, I'd switch to php-fpm, and if I still saw this behavior I would
bump up the number of children. (This is where the process spawning of
php-fpm will rock!)
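A sketch of the relevant pool settings in modern ini-style php-fpm (the directive names are current php-fpm ones; the values are only illustrative, not a recommendation):

    ; Illustrative ini-style php-fpm pool settings.
    listen = 127.0.0.1:9000
    pm = dynamic               ; spawn and reap workers on demand
    pm.max_children = 10       ; hard cap on concurrent PHP requests
    pm.start_servers = 3
    pm.min_spare_servers = 2
    pm.max_spare_servers = 5

With pm = dynamic, a burst of slow requests makes php-fpm spawn extra workers up to pm.max_children instead of queueing everything behind a fixed pool.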


I'm experiencing blocking with the php-fcgi spawner script (3 children).

    /usr/bin/spawn-fcgi -a -p 9000 -u www-data -g www-data -C 3 -f

    www-data  5731 0.0 7.1 162968 37456 ? S  Feb24 0:28
    www-data  9566 0.0 4.9 160484 26224 ? S  08:06 0:02
    www-data  9843 0.0 3.9 155216 20640 ? S  11:54 0:01
    www-data 27917 0.0 1.4 149648  7608 ? Ss Feb19 0:00

Is blocking solved when using php-fpm?