Tuning nginx on EC2

Hi,

I’ve been doing some benchmarks with nginx on EC2 and have run into a
bit of a wall. I’m trying to establish a baseline against which to
compare my application. For this I’m using the simplest possible
nginx config:

server {
    listen 80;
    access_log off;
    return 204;
}

With a large instance (1 Gbps NIC), I’ve only been able to get up to
about 12k reqs/sec despite the fact that all resources
(CPU/mem/disk/network) appear to be under-utilized. My base platform
is Ubuntu 10.04 with nginx 0.8.53. I’ve tried generating load from
multiple nodes using a variety of tools, but still can’t seem to push
past 12k. Perhaps there’s just some limit within EC2 that I’m
unaware of.
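A quick back-of-envelope check supports the claim that raw bandwidth isn’t the limit here. The response size below is an assumption (a `return 204` reply is roughly 100 bytes of headers on the wire, not a measured figure):

```python
# Sanity check: at 12k reqs/sec, how much of a 1 Gbps NIC is actually used?
# RESPONSE_BYTES is an estimate for a bare "204 No Content" reply.
RESPONSE_BYTES = 100                # assumed wire size of a 204 response
REQS_PER_SEC = 12_000               # observed plateau
NIC_BITS_PER_SEC = 1_000_000_000    # 1 Gbps NIC on a large instance

throughput_bits = REQS_PER_SEC * RESPONSE_BYTES * 8
utilization = throughput_bits / NIC_BITS_PER_SEC
print(f"{throughput_bits / 1e6:.1f} Mbit/s, {utilization:.1%} of the NIC")
```

Even if the real response were several times larger, the NIC would still be almost idle, which matches the observation that all resources appear under-utilized.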

I’ve tried tweaking lots of sysctl settings to no avail, but it could
be that I just haven’t found the correct one yet. Has anyone had any
luck getting higher throughput on EC2?

regards,

chetan

On 12/11/2010 06:31 AM, Chetan Sarva wrote:

     return 204;

     I’ve tried tweaking lots of sysctl settings to no end, but it could be
     I just haven’t found the correct one yet. Has anyone had any luck
     getting higher throughputs on ec2?

I don’t have any direct experience with services like EC2, but remember
that before the traffic hits your machine it first has to pass through
various routers and switches in Amazon’s infrastructure, which are shared
among probably millions of EC2 instances. You might either be hitting a
bottleneck in these shared components, or maybe Amazon has simply put
some limitations in place to prevent abuse.

Regards,
Dennis

On Sat, Dec 11, 2010 at 9:12 AM, Dennis J.
[email protected] wrote:

     I don’t have any direct experience with services like EC2 but remember that
     before the traffic hits your machine it first has to pass through various
     routers and switches in Amazon’s infrastructure that is shared between
     probably millions of EC2 instances. You might either hit a bottleneck on
     these shared components or maybe Amazon just has put some limitations in
     place to prevent abuse.

With shared resources you would expect to see a range of throughput
numbers, depending on whether resources are constrained at the time. On
EC2, disk and network I/O are both shared resources, and both can be
measured. I’ve done raw network throughput tests and they confirm this
variability. With my nginx tests (and, to be fair, with other web servers
as well), I consistently get around 12k reqs/sec.

Hi!

May I ask which type of instance you were using in EC2?

Thanks,

Guzman

On Sat, Dec 11, 2010 at 2:20 PM, Guzman B. [email protected]
wrote:

     Hi!

     May I ask which type of instance you were using in EC2?

A large instance (m1.large) in us-east-1c. I also tried one of the
cluster compute instances with a 10 Gbps NIC and hit a limit around 22k
reqs/sec with a similar nginx config (though the platform was CentOS
this time).

At Sat, 11 Dec 2010 11:53:17 -0500,
Chetan Sarva wrote:

There are some things you have to consider, when doing network
benchmarks:

  1. bandwidth (as you were told) - you can test with iperf
  2. rtt
  3. stateful firewall (stateful NAT goes in here too) - big penalty,
    especially when every new request is a new connection and you are not
    using keepalive
  4. OS max and per nginx process limits for file descriptors
  5. network buffers
  6. polling
  7. backlog
  8. tcp fin timeout
  9. nginx number of processes
  10. nginx backlog per socket
  11. nginx regexp in locations, ifs and so on…
  12. nginx keepalive? this is important when you are benchmarking req/s,
    because a) the 3-way tcp handshake takes some time, and closing connections
    does too, b) requests cannot be parallelized per connection
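
Several of the points above map onto concrete settings. As a hedged sketch, with illustrative starting values rather than recommendations (items 4, 5, 7, 8):

```
# /etc/sysctl.conf fragment -- illustrative values only
fs.file-max = 200000                # OS-wide fd limit (item 4)
net.core.somaxconn = 4096           # ceiling for listen backlogs (item 7)
net.core.netdev_max_backlog = 4096  # NIC receive queue (item 5)
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_fin_timeout = 15       # shorten FIN-WAIT-2 (item 8)
```

And the corresponding nginx side (items 4, 6, 9, 10), reusing the benchmark config from this thread:

```
# nginx.conf fragments -- illustrative values only
worker_processes  2;                # roughly match CPU count (item 9)
worker_rlimit_nofile  65536;        # per-process fd limit (item 4)
events {
    use epoll;                      # polling method on Linux (item 6)
    worker_connections  16384;
}
server {
    listen 80 backlog=4096;         # per-socket backlog (item 10)
    access_log off;
    return 204;
}
```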

good luck with benchmarking :)

For the record, it turns out that the limiting factor is the
throughput of the virtual NIC itself. Specifically, it can push a max
of about 100k packets/sec. Without some improvements in that area, it
seems the maximum request rate possible on a standard EC2 instance is
about 12k/sec (a 10 Gbps instance can actually do about double that
number).
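The 100k packets/sec ceiling lines up with the observed plateau once you count packets per request. The per-phase packet counts below are estimates for a no-keepalive connection, not captured traffic:

```python
# Rough packet budget for one HTTP request on its own TCP connection
# (no keepalive). Counts are estimates, not from a packet capture.
handshake = 3   # SYN, SYN-ACK, ACK
request   = 1   # a small GET fits in one packet
response  = 1   # a 204 reply fits in one packet
teardown  = 3   # FIN, FIN-ACK, ACK (can be 4 without piggybacking)
packets_per_request = handshake + request + response + teardown

pps_limit = 100_000  # observed virtual-NIC ceiling
print(pps_limit / packets_per_request)  # ~12.5k reqs/sec
```

At roughly 8 packets per connection, 100k pps works out to about 12.5k reqs/sec, which is strikingly close to the measured 12k, and it also explains why keepalive (item 12 above) matters so much for req/s benchmarks.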