Nginx on Windows keeps hanging

Hello,

I run my websites on Linux, but in my current situation I test my
sites on Windows 7 64-bit. I have set up nginx with PHP and MySQL and
everything works fine. But depending on how many requests come in within a
short period of time, nginx keeps hanging and I have to restart it.
Hopefully someone knows what's wrong with it.

My machine is an Intel quad-core with 4 GB RAM, running Windows 7 Ultimate 64-bit.

This is my nginx conf:

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile           off;
    keepalive_timeout  0;

    server {
        listen       3334;
        server_name  localhost;
        root         E:/emergency/HTML/;

        location / {
            autoindex  on;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        location ~ \.php$ {
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  PHP_FCGI_MAX_REQUESTS  0;
            fastcgi_param  PHP_FCGI_CHILDREN      100;
            fastcgi_param  SCRIPT_FILENAME        $document_root$fastcgi_script_name;
            include        fastcgi_params;
        }
    }
}

My error log is filled with this:
2012/05/18 09:29:16 [error] 3028#3052: *504 upstream timed out (10060: A
connection attempt failed because the connected party did not properly
respond after a period of time, or established connection failed because
connected host has failed to respond) while connecting to upstream,
client: 127.0.0.1, server: localhost, request: "GET
/index.php?action=home HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000",
host: "localhost:3334", referrer: "http://localhost:3334/index.php"

Thanks for your time.
(keepalive_timeout was 200; then I tried 0, but I still get the same result.)

Posted at Nginx Forum:

You need to make a pool for FastCGI; see the load-balancing example.

This solution is the same for any OS suffering from upstream timeouts.
With a pool of 4 you can easily handle 1000 requests/sec, and yes, even
with nginx win32.
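
A minimal sketch of what I mean: an "upstream" block of several php-cgi
listeners that the PHP location passes to directly (the port numbers are
illustrative; each one is a separate php-cgi process):

```nginx
upstream php_pool {
    server 127.0.0.1:9001;
    server 127.0.0.1:9002;
    server 127.0.0.1:9003;
    server 127.0.0.1:9004;
}

server {
    listen 3334;
    ...

    location ~ \.php$ {
        fastcgi_pass  php_pool;
        include       fastcgi_params;
    }
}
```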

Posted at Nginx Forum:

Thanks for your post.

I made the following changes, but I'm still getting this error. Note that
this is a local environment and I'm barely seeing 10 requests per second.
I probably missed your point and am going in circles. I created 4
more server blocks listening on ports 8000 to 8003. I first tried it without
them, but then I couldn't open any page, so I ended up with this:

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile           off;
    keepalive_timeout  0;

    upstream myproject {
        server 127.0.0.1:8000 weight=3;
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
        server 127.0.0.1:8003;
    }

    server {
        listen       3334;
        server_name  localhost;
        root         E:/emergency/HTML/;

        location / {
            autoindex  on;
            proxy_pass http://myproject;
        }
    }

    server {
        listen       8000;
        server_name  localhost;
        root         E:/emergency/HTML/;

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        location ~ \.php$ {
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  PHP_FCGI_MAX_REQUESTS  0;
            fastcgi_param  PHP_FCGI_CHILDREN      100;
            fastcgi_param  SCRIPT_FILENAME        $document_root$fastcgi_script_name;
            include        fastcgi_params;
        }
    }

    server {
        listen       8001;
        server_name  localhost;
        root         E:/emergency/HTML/;

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        location ~ \.php$ {
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  PHP_FCGI_MAX_REQUESTS  0;
            fastcgi_param  PHP_FCGI_CHILDREN      100;
            fastcgi_param  SCRIPT_FILENAME        $document_root$fastcgi_script_name;
            include        fastcgi_params;
        }
    }

    server {
        listen       8002;
        server_name  localhost;
        root         E:/emergency/HTML/;

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        location ~ \.php$ {
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  PHP_FCGI_MAX_REQUESTS  0;
            fastcgi_param  PHP_FCGI_CHILDREN      100;
            fastcgi_param  SCRIPT_FILENAME        $document_root$fastcgi_script_name;
            include        fastcgi_params;
        }
    }

    server {
        listen       8003;
        server_name  localhost;
        root         E:/emergency/HTML/;

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        location ~ \.php$ {
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  PHP_FCGI_MAX_REQUESTS  0;
            fastcgi_param  PHP_FCGI_CHILDREN      100;
            fastcgi_param  SCRIPT_FILENAME        $document_root$fastcgi_script_name;
            include        fastcgi_params;
        }
    }
}

Posted at Nginx Forum:

I'm new to this, please bear with me…

I'm running my FastCGI process using this .bat file:

@ECHO OFF
D:\environments\php\php-cgi -b 127.0.0.1:9000 -c D:\environments\php\php.ini
ping 127.0.0.1 -n 1>NUL
echo .
echo .
echo .
EXIT

So for my upstream, I modified it like this (I duplicated that line 4 times;
will that work?):

@ECHO OFF
D:\environments\php\php-cgi -b 127.0.0.1:8004 -c D:\environments\php\php.ini
D:\environments\php\php-cgi -b 127.0.0.1:8005 -c D:\environments\php\php.ini
D:\environments\php\php-cgi -b 127.0.0.1:8006 -c D:\environments\php\php.ini
D:\environments\php\php-cgi -b 127.0.0.1:8007 -c D:\environments\php\php.ini
ping 127.0.0.1 -n 1>NUL
echo .
echo .
echo .
EXIT

and this is my latest nginx.conf:

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile           off;
    keepalive_timeout  0;

    upstream myproject {
        server 127.0.0.1:8004;
        server 127.0.0.1:8005;
        server 127.0.0.1:8006;
        server 127.0.0.1:8007;
    }

    server {
        listen       3334;
        server_name  localhost;
        root         E:/emergency/HTML/;

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        location ~ \.php$ {
            fastcgi_pass   myproject;
            fastcgi_index  index.php;
            fastcgi_param  PHP_FCGI_MAX_REQUESTS  0;
            fastcgi_param  PHP_FCGI_CHILDREN      100;
            fastcgi_param  SCRIPT_FILENAME        $document_root$fastcgi_script_name;
            include        fastcgi_params;
        }
    }
}

I still get those errors, although I think they are fewer now. Still irritating,
though. I tried an AJAX shoutbox script, which fires requests every few
seconds, and opened 4-5 browsers to watch the reaction. I still get upstream
timeout errors. This time they haven't paralyzed nginx, but the errors
appear and the AJAX requests hang without any response. Lots of the same error
on port 8006.

2012/05/19 23:22:31 [error] 1092#3704: *126 upstream timed out (10060: A
connection attempt failed because the connected party did not properly
respond after a period of time, or established connection failed because
connected host has failed to respond) while connecting to upstream,
client: 127.0.0.1, server: localhost, request: "GET
/index.php?action=ajaxrequest HTTP/1.1", upstream:
"fastcgi://127.0.0.1:8006", host: "localhost:3334", referrer:
"http://localhost:3334/index.php"

Posted at Nginx Forum:

Remove the PHP_FCGI_x lines and add "fastcgi_ignore_client_abort on;".
For the pool, add "weight=1 fail_timeout=5" to all of the servers.
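
Applied to the upstream from your config, that would look something like
this (a sketch; ports as in your setup):

```nginx
upstream myproject {
    server 127.0.0.1:8004 weight=1 fail_timeout=5;
    server 127.0.0.1:8005 weight=1 fail_timeout=5;
    server 127.0.0.1:8006 weight=1 fail_timeout=5;
    server 127.0.0.1:8007 weight=1 fail_timeout=5;
}
```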

Posted at Nginx Forum:

You have created an upstream but aren't using it…

Each "fastcgi_pass 127.0.0.1:9000;" line must be changed to
"fastcgi_pass myproject;"; then make sure the FPM processes are running
for your pool.

Posted at Nginx Forum:

Without the PHP_FCGI_x lines the system got much slower, so I had to put them
back. I noticed in the task manager that there were always only 1 or 2
php-cgi.exe processes when there should be 4. I split the cgi.bat file into
four individual files, each handling one port. Now I have 4 php-cgi.exe
processes and the system works even faster than before, without any
failures. Thanks! Without you I couldn't have figured it out.

Since I keep Task Manager open, I can see that the cgi processes sometimes
get closed, and that causes upstream errors again. Why are they getting
closed? Is there a timeout that closes them on inactivity?

Also, while we're at it, how do I calculate the right settings: the number of
cgi processes, max_requests, children? (I'm planning to go with php-fpm
on Ubuntu.)

Say at my busiest hour I have 150 users online, each sending one request
per second. Should I count every page request together with its images and
scripts, adding one request for each?

Posted at Nginx Forum:

An FPM pool of 4 per worker is more than enough to get the same
performance on win32 as on Linux. Of course Linux supports more
workers, but you can get that on win32 too by load-balancing multiple
nginx installations, using nginx itself in FPM frontend mode in front of
multiple nginx backend nodes as an affinity-managed FPM pool.

Why PHP aborts depends on what's happening; have a look at PHP's timeout
settings and "ignore_user_abort = On". Add a loop label in the batch file so
it restarts the cgi process. I haven't yet had an FPM process terminate
while the pool balances nicely against the amount of requests.
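
For instance, a restart loop in the batch file could look like this (a
sketch using the paths and ports from your earlier script; one such file
per port):

```
@ECHO OFF
:loop
REM php-cgi blocks here; when the process dies, goto starts it again
D:\environments\php\php-cgi -b 127.0.0.1:8004 -c D:\environments\php\php.ini
echo php-cgi exited, restarting...
goto loop
```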

Use ab.exe (the Apache benchmarking tool) to benchmark and tune things.
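
As for sizing: only the .php requests hit the pool (nginx serves images and
scripts itself), so a rough back-of-the-envelope is Little's law: needed
workers ≈ request rate × average PHP time, plus some headroom. A small
sketch (the 150 req/s rate and 50 ms service time are illustrative
assumptions, not measurements):

```python
import math

def workers_needed(requests_per_sec, avg_service_time_sec, headroom=1.5):
    """Rough lower bound on concurrent PHP workers via Little's law:
    concurrency = arrival rate * service time, padded with headroom."""
    return math.ceil(requests_per_sec * avg_service_time_sec * headroom)

# 150 users, one PHP request per second each, ~50 ms average PHP time:
print(workers_needed(150, 0.05))  # -> 12
```

Measure the real per-request time with ab.exe first; the answer is very
sensitive to it.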

Posted at Nginx Forum: