Static file benchmarks

I’ve seen places online casually mention that nginx can serve static
files at 13k requests/sec:
http://www.vbulletin.com/forum/showpost.php?s=bc85853b9f0f48462767c44081cf7057&p=1474904&postcount=1
http://brainspl.at/articles/2006/08/23/nginx-my-new-favorite-front-end-for-mongrel-cluster

One blog entry (superjared.com) achieved 8K requests/sec and posted its
test file and nginx conf.

When running nginx on Amazon’s EC2 and using this same test file and
nginx conf, I’m only getting 4K requests/sec and am trying to
understand why.

In my tests both the nginx machine and the machine running Apache
Bench are on EC2 with a max speed of 20 MB/s between them. I’m using
the internal EC2 IP addresses.

The instances have 1.7 GB of RAM and the equivalent of a single 1-1.2
GHz Xeon. The machines are running Fedora Core 4.

During the test the CPU load is a bit over 10% and used memory is
constant at 0.3 GB.

Any thoughts as to what is the bottleneck? Hardware? OS? Network?
nginx misconfiguration?

Cheers!

ab -c 1000 -n 100000 http://xxxxxx/
This is ApacheBench, Version 2.0.41-dev <$Revision: 1.141 $> apache-2.0
Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright (c) 1998-2002 The Apache Software Foundation, http://www.apache.org/

Benchmarking xxxxx (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Finished 100000 requests

Server Software: nginx/0.5.35
Server Hostname: xxxxx
Server Port: 80

Document Path: /
Document Length: 356 bytes

Concurrency Level: 1000
Time taken for tests: 24.326633 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Total transferred: 56720399 bytes
HTML transferred: 35611748 bytes
Requests per second: 4110.72 [#/sec] (mean)
Time per request: 243.266 [ms] (mean)
Time per request: 0.243 [ms] (mean, across all concurrent requests)
Transfer rate: 2276.97 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0  140  842.2      2  21073
Processing:     0   68  504.6      4  13673
Waiting:        0   66  503.5      2  13671
Total:          0  209  992.7      7  21078

Percentage of the requests served within a certain time (ms)
  50%      7
  66%      8
  75%      9
  80%     33
  90%     75
  95%    765
  98%   3047
  99%   3857
 100%  21078 (longest request)
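The transfer figures above already rule out the 20 MB/s network link as
the bottleneck; a quick sanity check with shell arithmetic over the
reported numbers:

# Bytes moved per second during the run:
echo "56720399 / 24.326633" | bc -l   # ~2331617 bytes/s, i.e. ~2.3 MB/s
# This matches ab's reported 2276.97 KB/s transfer rate and is barely
# a tenth of the 20 MB/s available between the two instances.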

nginx conf:
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       conf/mime.types;
    default_type  application/octet-stream;

    access_log  logs/access.log;

    sendfile           on;
    keepalive_timeout  65;

    gzip  on;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   /www/pages;
            index  index.html;
        }
    }
}
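One detail worth flagging in this conf: ab is run with -c 1000 against a
single worker whose worker_connections is 1024, so nginx sits right at
its connection ceiling during the test. A hedged tweak to rule that out
(the 4096 figure is just an illustration):

events {
    # ab -c 1000 leaves almost no headroom under the default 1024;
    # raising the limit rules out connection exhaustion as a factor.
    worker_connections  4096;
}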

It appears that you are using the Amazon EC2 Small Instance (the
default). Note that the description of the small instance says
“I/O Performance: Moderate”.

Maybe you are hitting a disk I/O bottleneck…

-Liang

Amazon EC2 works in a virtualized environment with a network filesystem.
It’s hard to expect top performance in such a setting.
It’s also hard to tell which factors limit the throughput because you
don’t get to see the full picture.

Hi, you are using a concurrency of 1000, which is huge! Did you try
lowering the concurrency to see the difference?
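For example, a quick sweep (a sketch; -q just suppresses ab’s progress
output, and the URL placeholder is the one from the original post):

# Raise the concurrency step by step and watch where throughput
# stops scaling:
for c in 10 50 100 250 500 1000; do
    ab -q -c $c -n 20000 http://xxxxxx/ | grep 'Requests per second'
done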


On Friday 18 April 2008 15:47:11 Cocoa G. wrote:

> Is it possible at that speed for the disk to be the limiting factor?
> I’ve read that files read from disk are cached in the kernel cache,
> but I have no clue…

The problem is probably the latency rather than raw disk read
throughput. For example, in NFS, every time a process opens a file, a
request is sent to the NFS server to check whether the file has changed,
even though the file itself can be locally cached. This is done to
implement so-called close-to-open consistency.

If this is the case (I’m not even sure what filesystem they use) then
using NGINX’s file descriptor cache might help. Sadly, it is not
described in the wiki yet. Try adding the following snippet:

open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
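Roughly, these directives cache up to 1000 open file descriptors, close
any descriptor not hit within 20 seconds, revalidate cached entries
after 30 seconds, only keep a file cached once it has been used at least
twice, and cache file lookup errors as well.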

On Fri, Apr 18, 2008 at 11:23 AM, kingler [email protected] wrote:

> It appears that you are using the Amazon EC2 Small Instance (the
> default). Note that the description of the small instance says
> “I/O Performance: Moderate”.
>
> Maybe you are hitting a disk I/O bottleneck…

Yes, the numbers I posted are for the small instance. Unfortunately I
tried a Large instance and got the same numbers (though the OS remained
the same 32-bit OS).

On Fri, Apr 18, 2008 at 11:38 AM, Denis S. Filimonov
[email protected] wrote:

> Amazon EC2 works in a virtualized environment with a network filesystem.
> It’s hard to expect top performance in such a setting.
> It’s also hard to tell which factors limit the throughput because you
> don’t get to see the full picture.

Right, this is a virtualized environment, but the disk performance on
an EC2 instance is at least 40 MB/sec. Is it possible at that speed
for the disk to be the limiting factor? I’ve read that files read from
disk are cached in the kernel cache, but I have no clue…
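One quick way to check (a sketch, assuming shell access on the nginx
instance and the /www/pages document root from the conf above):

# Warm the page cache, then watch block-in during a benchmark run;
# if the 'bi' column stays near zero, the disk isn't being touched.
cat /www/pages/index.html > /dev/null
vmstat 1 30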

On Fri, Apr 18, 2008 at 12:20 PM, Thomas [email protected] wrote:

> Hi, you are using a concurrency of 1000, which is huge! Did you try
> lowering the concurrency to see the difference?

I’m using the same concurrency used in the superjared.com blog entry,
which I’m using as my baseline.

It might also help troubleshooting to turn off the gzip compression
option. This would reduce CPU overhead (a virtualized environment may
add CPU overhead that is not there in a dedicated server environment).

-Liang

On Fri, Apr 18, 2008 at 1:10 PM, Denis S. Filimonov
[email protected] wrote:

> open_file_cache max=1000 inactive=20s;
> open_file_cache_valid 30s;
> open_file_cache_min_uses 2;
> open_file_cache_errors on;

I upgraded to nginx 0.6.29 (it doesn’t look like open_file_cache is
supported in 0.5.x) and added these settings, but it made no
difference in the benchmark results.

On Fri, Apr 18, 2008 at 2:17 PM, kingler [email protected] wrote:

> It might also help troubleshooting to turn off the gzip compression
> option. This would reduce CPU overhead (a virtualized environment may
> add CPU overhead that is not there in a dedicated server environment).

I turned off the gzip option, but it made no difference. In any case, I
would have to tell Apache Bench to send the proper request headers for
gzipping to even apply.
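For what it’s worth, ab can send such a header with its -H flag; a
sketch using the placeholder URL from the original post:

# nginx only gzips a response when the client advertises support,
# so without this header the gzip setting never fires under ab:
ab -c 1000 -n 100000 -H 'Accept-Encoding: gzip' http://xxxxxx/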

I appreciate everyone’s willingness to help so far, but with so much
talk about nginx’s performance, I would have thought there would be
more concrete information about what actually limits it.

Igor, any information you can share as to the limiting factors for
static file performance? Is nginx CPU bound? Disk bound? RAM bound?
Network bound? Even if you can point to older discussions about the
same topic. I haven’t been able to find any information.

Cheers.

On Fri, 2008-04-18 at 16:10 -0400, Denis S. Filimonov wrote:

> If this is the case (I’m not even sure what filesystem they use) then
> using NGINX’s file descriptor cache might help. Sadly, it is not
> described in the wiki yet. Try adding the following snippet:
>
> open_file_cache max=1000 inactive=20s;
> open_file_cache_valid 30s;
> open_file_cache_min_uses 2;
> open_file_cache_errors on;

If you know of a feature not documented in the wiki, please add it at
your earliest possible convenience =)

Regards,
Cliff

On Mon, Apr 21, 2008 at 12:08:49PM -0700, Cocoa G. wrote:

> I appreciate everyone’s willingness to help so far, but with so much
> talk about nginx’s performance, I would have thought there would be
> more concrete information about what actually limits it.
>
> Igor, any information you can share as to the limiting factors for
> static file performance? Is nginx CPU bound? Disk bound? RAM bound?
> Network bound? Even if you can point to older discussions about the
> same topic. I haven’t been able to find any information.

BTW, have you tried a keepalive benchmark?

ab -c 1000 -n 100000 -k …
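(-k makes ab reuse TCP connections via HTTP keep-alive instead of
opening a fresh connection for every request. Spelled out with the
placeholder URL from the original post:)

# If throughput jumps with connection reuse, per-request TCP setup
# is a large part of the cost in the non-keepalive run:
ab -k -c 1000 -n 100000 http://xxxxxx/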

On Mon, Apr 21, 2008 at 12:08:49PM -0700, Cocoa G. wrote:

> I appreciate everyone’s willingness to help so far, but with so much
> talk about nginx’s performance, I would have thought there would be
> more concrete information about what actually limits it.
>
> Igor, any information you can share as to the limiting factors for
> static file performance? Is nginx CPU bound? Disk bound? RAM bound?
> Network bound? Even if you can point to older discussions about the
> same topic. I haven’t been able to find any information.

nginx can be bound by CPU, disk, RAM, network, OS, virtualized
environment, or the benchmark tool itself.

In your case, if ab fetches a single file that is not too big, nginx is
certainly not disk bound, because the OS should cache the file in VM
(the page cache).

I suspect that it could be some virtualized environment limitation.
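One way to take the disk out of the picture entirely, as a sketch: serve
the test file from RAM-backed tmpfs. This assumes root access on the
instance and uses the /www/pages document root from the conf above (the
mount shadows the existing directory until unmounted).

# If the numbers don't move with the file guaranteed in RAM,
# disk I/O was never the bottleneck in the first place.
mount -t tmpfs -o size=16m tmpfs /www/pages
printf '<html>hello</html>' > /www/pages/index.html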

On Mon, 2008-04-21 at 12:08 -0700, Cocoa G. wrote:

> I appreciate everyone’s willingness to help so far, but with so much
> talk about nginx’s performance, I would have thought there would be
> more concrete information about what actually limits it.

You continue to assume you are measuring Nginx’s performance, but
performance is only relative (10k requests/sec on machine A doesn’t mean
10k requests/sec is achievable on machine B). You can only say that
Nginx is x times as fast as Apache (or another server) on the same
hardware.

As others have mentioned, you are probably measuring EC2’s performance
(or more to the point, hitting some EC2 bottleneck), not Nginx’s. Try
Apache: if you can show it is faster than Nginx in the same environment,
then we would know whether your questions should be directed here or to
EC2 support. It’s quite probable that EC2 intentionally throttles I/O at
some level. It’s also quite possible EC2 has anti-DoS measures which you
may be triggering with your tests.
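A minimal way to run that comparison, as a sketch: the same ab
invocation against both servers on the same instance. The URL
placeholder is from the original post, and Apache listening on port
8080 is an assumption.

# Only the ratio between the two results is meaningful here.
ab -c 1000 -n 100000 http://xxxxxx:80/      # nginx
ab -c 1000 -n 100000 http://xxxxxx:8080/    # Apache (assumed port)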

Regards,
Cliff