Tuning Nginx as a Load Balancer

I’m running the 0.7 branch as the front door to six machines running
nginx + Passenger. The two nginx workers (it’s a two-core machine) are
using a modest amount of CPU, but I am still seeing spikes in my queue
time metric. I’ve patched my instance to add an X-Request-Start header,
similar to this gist: “Add a ‘start_time’ variable to nginx 0.8.33 to
support an X-REQUEST-START header” (GitHub). This header is used by New
Relic RPM to record queue time.
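
As an aside, on nginx builds new enough to expose the $msec variable,
a similar header can be set without a custom patch. This is just a
sketch (the upstream name is a placeholder, and the "t=" format is
what New Relic documents expecting), not what my patched 0.7 build
does:

# Sketch: set the queue-time header from nginx's own clock.
# "app_backend" is a placeholder upstream, not from my config.
location / {
    proxy_set_header X-Request-Start "t=${msec}";
    proxy_pass http://app_backend;
}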

According to my metrics (screenshot posted on Skitch), somewhere
around 200-250ms of each request is spent in ‘queue time’. My site runs
100% under SSL, so a large portion of that is SSL negotiation, but I
think there is room to improve. These machines are all in EC2’s cloud
with latencies under 1ms and plenty of spare capacity, so I’m pretty
convinced that my load balancer just needs some tweaking. My nginx.conf
for the load balancer is in this gist: nginx.conf (GitHub).
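
For context, the load-balancing part of that config has roughly this
shape (hostnames, IPs, and paths below are placeholders, not my actual
values):

# Six app servers, each running nginx + Passenger behind the balancer.
upstream app_backend {
    server 10.0.0.1:80;
    server 10.0.0.2:80;
    # ... four more app servers
}

server {
    listen 443;
    ssl on;                      # 0.7.x syntax; newer nginx uses "listen 443 ssl"
    ssl_certificate     /etc/nginx/ssl/site.crt;   # placeholder path
    ssl_certificate_key /etc/nginx/ssl/site.key;   # placeholder path

    location / {
        proxy_pass http://app_backend;
    }
}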

The load-balancing instance has 1.7GB of RAM, over 1.5GB of which is
free. Can I change some settings to let nginx keep more requests off
the disk without opening myself up to a possible denial of service?
iostat/vmstat do show pretty consistent writes to the disk, though
nothing close to maxing it out. X-Request-Start is only recorded on
GET requests, so slow client uploads should not be throwing off the
numbers.
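
The knobs I’m eyeing are the proxy buffers; something along these
lines (the values are guesses on my part, not tested) should keep more
proxied responses in memory instead of spilling them to temp files:

# Larger in-memory buffers per proxied connection; when these fill up,
# nginx writes the rest of the response to proxy_temp_path on disk.
proxy_buffer_size 16k;           # buffer for the response headers
proxy_buffers 32 16k;            # 32 buffers of 16k per connection
proxy_busy_buffers_size 32k;
proxy_max_temp_file_size 0;      # 0 = never buffer responses to disk
# Request bodies larger than this are also written to disk:
client_body_buffer_size 128k;

The trade-off is exactly the DoS concern above: memory reserved per
connection goes up, so many slow clients hold more RAM.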

Turning on the SSL session cache has been a good win so far:

ssl_session_cache shared:SSL:10m;
ssl_session_timeout 5m; # default, included for completeness
