php5-cgi + Nginx: which is faster, 127.0.0.1:9000 or Unix sockets?

We intended to test server performance with php-cgi bound either to:
/usr/bin/php-cgi -q -b 127.0.0.1:9000 # i.e., the classic IP:port
or to:
/usr/bin/php-cgi -q -b /tmp/php-fastcgi.socket # i.e., a Unix domain socket
(having taken care to change the relevant line in the nginx config files
from "fastcgi_pass 127.0.0.1:9000;" to "fastcgi_pass unix:/tmp/php-fastcgi.socket;"),

in order to verify the impact of the differences described at:
http://lists.freebsd.org/pipermail/freebsd-performance/2005-February/001143.html
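
For reference, a minimal sketch of the nginx location block involved (the
SCRIPT_FILENAME value and the use of fastcgi_params are assumptions based on a
typical Ubuntu/nginx setup, not copied verbatim from my config):

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /var/www/nginx-default$fastcgi_script_name;
        # TCP variant:
        fastcgi_pass 127.0.0.1:9000;
        # Unix-socket variant (comment out the line above and use this instead):
        # fastcgi_pass unix:/tmp/php-fastcgi.socket;
    }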

Material

Methods

  • Create php test script with:
    echo -e '<?php\n;\n?>' | sudo -u www-data tee /var/www/nginx-default/benchtest.php
  • Start php-cgi using the Ubuntu UPSTART script posted as "20081106 -
    How to enable Nginx to serve PHP code/pages in UBUNTU" to this Nginx
    mailing list (a rough manual equivalent is sketched after this list).
  • Run ApacheBench with:
    sudo ab -n 10000 -c 100 http://127.0.0.1/benchtest.php | tee bench-ip.log
    # after configuring php-cgi to listen at 127.0.0.1:9000
    sudo ab -n 10000 -c 100 http://127.0.0.1/benchtest.php | tee bench-socket.log
    # after configuring php-cgi to listen on a Unix socket at /tmp/php-fastcgi.socket
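
If you don't have that UPSTART script handy, a rough sketch of starting php-cgi
by hand for the Unix-socket case might look like this (the PHP_FCGI_* values are
my own assumptions and are not taken from the script; the TCP case only differs
in the -b argument):

    # Spawn php-cgi as www-data, bound to the Unix socket, and background it
    sudo -u www-data env PHP_FCGI_CHILDREN=4 PHP_FCGI_MAX_REQUESTS=1000 \
        /usr/bin/php-cgi -q -b /tmp/php-fastcgi.socket &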

Results and Discussion
We made a delta file (bench-diff.txt) with the Linux diff utility to
show the differences between the two runs:

  • diff -Naur bench-ip.log bench-socket.log > bench-diff.txt

The testing procedure described above shows that using Unix domain
sockets, besides the security advantage described elsewhere (see the
external link above), significantly improves server CGI performance:
about 2008 requests/second versus about 1752, roughly a 15% gain in
throughput.

Regards,

M.

File bench-diff.txt follows:
--- cut here ---
--- bench-ip.log 2008-11-06 13:22:30.000000000 +0000
+++ bench-socket.log 2008-11-06 13:18:15.000000000 +0000
@@ -13,31 +13,31 @@
Document Length: 0 bytes

Concurrency Level: 100
-Time taken for tests: 5.708770 seconds
+Time taken for tests: 4.980691 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 1510000 bytes
HTML transferred: 0 bytes
-Requests per second: 1751.69 [#/sec] (mean)
-Time per request: 57.088 [ms] (mean)
-Time per request: 0.571 [ms] (mean, across all concurrent requests)
-Transfer rate: 258.20 [Kbytes/sec] received
+Requests per second: 2007.75 [#/sec] (mean)
+Time per request: 49.807 [ms] (mean)
+Time per request: 0.498 [ms] (mean, across all concurrent requests)
+Transfer rate: 295.94 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
-Connect: 0 0 1.4 0 27
-Processing: 9 55 9.5 55 156
-Waiting: 9 55 9.3 55 154
-Total: 15 56 9.0 55 156
+Connect: 0 1 4.1 0 35
+Processing: 4 47 12.8 48 145
+Waiting: 4 46 12.6 48 129
+Total: 12 48 11.1 49 145

Percentage of the requests served within a certain time (ms)

-  50% 55
-  66% 57
-  75% 59
-  80% 61
-  90% 67
-  95% 71
-  98% 81
-  99% 91
-  100% 156 (longest request)
+  50% 49
+  66% 51
+  75% 53
+  80% 55
+  90% 60
+  95% 69
+  98% 78
+  99% 83
+  100% 145 (longest request)
--- cut here ---

Mark A. wrote:

echo -e '<?php\n;\n?>' | sudo -u www-data tee /var/www/nginx-default/benchtest.php

You are testing only one aspect here - the connection rate. Try something
like phpinfo() or just some huge HTML file with a .php extension - that way
PHP would do nothing apart from feeding the data to nginx through the socket.
It would be good to see tests with a 50k document and with something around 500k.
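
For instance, hypothetical test pages of roughly those sizes could be generated
like this (the file names are made up for illustration; base64 inflates the raw
data by about a third, so ~40k and ~400k of input land near the target sizes):

    head -c 40000 /dev/urandom | base64 | sudo -u www-data tee /var/www/nginx-default/benchtest-50k.php > /dev/null
    head -c 400000 /dev/urandom | base64 | sudo -u www-data tee /var/www/nginx-default/benchtest-500k.php > /dev/null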

funny, but

ab -k would be much better
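
i.e., presumably something along these lines, so ab reuses connections via
HTTP keep-alive instead of opening a new one per request:

    ab -k -n 10000 -c 100 http://127.0.0.1/benchtest.php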

Thanks for sharing - this is very interesting.

How did you create the socket for fastcgi?

Hi.

2008/11/8 Joe A. [email protected]:

Thanks for sharing - this is very interesting.

How did you create the socket for fastcgi?

It's created automatically by PHP when you bind it with -b; nothing needs
to be done by the user, as far as that's concerned.
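
(A quick way to convince yourself, assuming the socket path used above:

    ls -l /tmp/php-fastcgi.socket

after starting php-cgi - the leading "s" in the permission bits marks it as a
socket file.)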

Michael

Joe A. wrote:

How did you create the socket for fastcgi?

Check my posts (with subjects starting with "20081106 -") to see the
HowTos.

M.