A question of fastcgi_pass method

Hi,

I'm from China; sorry for my poor English.

I'm new to nginx. I have installed nginx 0.7.67, together with flup
and web.py for Python support.
I wrote a simple test.py file and just ran: python test.py. Visiting
http://ip:8080 works well,
which means web.py is installed correctly.
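For reference, here is a minimal stand-in for the kind of test.py described here, using only the Python standard library (so it runs even without web.py installed). The app body and the "hello" text are placeholders, not from the original post:

```python
import sys
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Render some plain text, roughly what the test.py in this thread does.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from test.py\n"]

def serve(port=8080):
    # Serve forever on the given port; web.py's built-in server also
    # defaults to port 8080, matching http://ip:8080 above.
    make_server("127.0.0.1", port, app).serve_forever()
```

Calling serve() (or serve(8081), serve(8082)) reproduces the one-process-per-port setup used later in this thread. The real test.py would instead end with web.application(urls, globals()).run(); web.py's built-in server takes its listen port from the first command-line argument, which is why invocations like "python test1.py 8081" work.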

So I tried to use nginx with this Python file.
I edited nginx.conf like this:

         fastcgi_pass 127.0.0.1:9000;
         fastcgi_param SERVER_NAME $server_name;
         fastcgi_param SERVER_PORT $server_port;
         fastcgi_param SERVER_ADDR $server_addr;
         fastcgi_param REMOTE_ADDR $remote_addr;
         fastcgi_param REMOTE_PORT $remote_port;
         fastcgi_param SERVER_PROTOCOL $server_protocol;
         fastcgi_param PATH_INFO $fastcgi_script_name;
         fastcgi_param REQUEST_METHOD $request_method;
         fastcgi_param QUERY_STRING $query_string;
         fastcgi_param CONTENT_TYPE $content_type;
         fastcgi_param CONTENT_LENGTH $content_length;
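For context, directives like these normally live inside a location block; a minimal sketch follows (the listen port and location are placeholders, not from the original post):

```nginx
server {
    listen 80;

    location / {
        fastcgi_pass 127.0.0.1:9000;
        # ...the fastcgi_param lines above go here; nginx also ships a
        # stock "fastcgi_params" file that can be pulled in with
        # "include fastcgi_params;" instead of listing each one.
    }
}
```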

and used spawn-fcgi to start up the Python file:

        spawn-fcgi -f FILE -a 127.0.0.1 -p 9000

Then visiting the address http://ip works and shows the result I
wanted.

Then I simply changed the fastcgi_pass method from TCP to a Unix socket:

nginx.conf: fastcgi_pass unix:/tmp/nginx.socket

spawn-fcgi command: spawn-fcgi -f FILE -s /tmp/nginx.socket (and
changed the mode of /tmp/nginx.socket to 777).

This time it works well too.

Then I ran a simple benchmark of these two different ways of connecting
nginx and spawn-fcgi.

I used webbench for the test: webbench -c 100 http://IP/

This is the result:

Using the TCP method:

Speed=5730 pages/min, 14038 bytes/sec.
Requests: 2865 susceed, 0 failed.

Using the Unix socket method:

Speed=5548 pages/min, 13592 bytes/sec.
Requests: 2774 susceed, 0 failed.

Why does the test show that the TCP method is faster than the Unix socket?

Another question: during the TCP method test I typed: netstat -an
| grep 9000, and it showed many ports in
TIME_WAIT, almost over 3,000. When webbench finished, the number of
TIME_WAIT ports decreased.
Why is this, and is it safe?

Thanks

Hello!

On Sun, Oct 17, 2010 at 06:42:51PM +0800, SanCao Jie wrote:

[…]

Speed=5548 pages/min, 13592 bytes/sec.
Requests: 2774 susceed, 0 failed.

Why does the test show that the TCP method is faster than the Unix socket?

There is no real reason why TCP over loopback should be slower
than Unix sockets. It's up to OS implementation details.

In your particular case it looks like the speed is limited by your
FastCGI app, not by nginx or the OS. The difference (if any, as your
test results don't include statistical errors) is most likely due
to different connection queueing with the different socket types used by the OS.

Another question: during the TCP method test I typed: netstat -an
| grep 9000, and it showed many ports in
TIME_WAIT, almost over 3,000. When webbench finished, the number of
TIME_WAIT ports decreased.
Why is this, and is it safe?

This is how TCP works; see RFC 793 for details.

See your OS guides to find out what will happen if all possible
(src, srcport, dst, dstport) tuples between frontend and backend happen
to be in TIME-WAIT. It's usually a good idea to tune the local port range and
look into the tw reuse/recycle options if you are going to work with
high backend connection rates.
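On Linux, for example, the knobs mentioned here look roughly like this. The values are illustrative only, and tcp_tw_recycle in particular is known to break clients behind NAT (it was removed entirely in kernel 4.12):

```
# /etc/sysctl.conf -- illustrative values, not recommendations
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
```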

Maxim D.

Thanks for the reply.

Now I have two new questions.

(1) spawn-fcgi -f FILE -a IP -p PORT -F 6
    uses the -F flag to spawn 6 test.py processes.

    But here webbench shows that more test.py processes do not have
    better performance than a single test.py process.

    Why?

(2) Same as question 1, this is a multi-process test,
but this time using nginx's reverse proxy and upstream.

  Like this:

 nginx.conf:

 upstream test {
          server 127.0.0.1:8080;
          server 127.0.0.1:8081;
          server 127.0.0.1:8082;
 }

location / {
proxy_pass http://test;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

I copied test.py twice, naming the copies test1.py and test2.py,
and ran them in turn:
python test.py
python test1.py 8081
python test2.py 8082

This situation is totally different from question 1:
here each process uses web.py's own web server, with ONE port per ONE single
test.py process.

But this time webbench still shows that performance does not
increase.

Why?

Why don't multiple processes show better performance than a single
process?

Is my Python web app too simple? It just renders some plain
text.
Is this app so easy that just one single process is enough?

Today I used the ab tool to test the server. It shows that 'Requests Per
Second' is just about 110. Is this number too low?