I have a reasonably beefy VPS (16 GB RAM, 4 vCores) running Ubuntu
LTS on a 1 GigE line that is basically uncontested at the moment. Speed
tests on the box show reasonably high bandwidth available up and down
(VirtIO isn't on at the moment, but that doesn't seem to be affecting
it). When doing a load test on a static object via HTTPS (apachebench
on a 100 KB image) with a concurrency of 1000 I'm seeing pretty poor
performance - low requests per second, about 4.5 Mbps of traffic, and an
average of about 2.2 s per request. Monitoring the server in htop I'm
not seeing the memory even go above 570 MB (out of 16 GB), and overall
processor usage is maybe 25% per core, if that much.
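To double-check that it's not the pipe itself, my back-of-envelope math (a sketch, assuming the 100 KB object and the 1 GigE line, ignoring TLS and header overhead):

```shell
# Theoretical ceiling: line rate divided by response size.
# 100 KB body = 100 * 1024 * 8 bits; 1 GigE = 10^9 bits/s.
obj_bits=$((100 * 1024 * 8))
line_bps=$((1000 * 1000 * 1000))
echo "bandwidth-limited max: $((line_bps / obj_bits)) req/s"   # -> 1220 req/s
```

So the line supports well over 1k RPS, and the 4.5 Mbps I'm seeing is under half a percent of it - which makes me think the bottleneck is on the server side, not the network.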
My config is fairly standard - this is a static file, after all, so
it's not even touching php-fpm. I have the hard and soft ulimits raised
for the www-data user. I have worker_processes set to 4,
worker_rlimit_nofile set to 100k, and worker_connections set to 2048.
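For reference, here are those worker settings as they'd appear in nginx.conf (a sketch from memory, assuming 100k means 100000, not the exact file):

```nginx
# Worker-level settings as described above.
worker_processes 4;
worker_rlimit_nofile 100000;

events {
    worker_connections 2048;
}
```

With 4 workers at 2048 connections each, that's 8192 concurrent connections, so a concurrency of 1000 should fit comfortably.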
multi_accept is on and epoll is on. I have a keepalive timeout of 2. For
purposes of this test I have a self-signed cert on the server, the
ssl_protocols are set to SSLv2 SSLv3 TLSv1; and the ssl_ciphers are set
to RC4:HIGH:!aNULL:!MD5:!kEDH;. Suggestions? How do I debug the poor
performance so I at least know what to fix? Is there a way to step
through exactly what is happening in a request under load to see where
it's delayed? I'd like to get it up to at least 1k RPS if not more, and
I think the server and the bandwidth are up to the task.
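For example, I can get a per-phase breakdown of a single request with curl's -w write-out variables, but I'm not sure how to correlate that with what happens under load. A sketch (it spins up a throwaway local Python server as a target purely so the command is self-contained; against my VPS I'd point it at the HTTPS URL of the test image):

```shell
# Disposable local target just so the curl command below has something to hit;
# against the real box, substitute the HTTPS URL of the 100 KB test image.
python3 -m http.server 8099 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1

# Break one request into phases: TCP connect, time to first byte, total.
# Over HTTPS, %{time_appconnect} additionally shows TLS handshake time.
curl -s -o /dev/null \
  -w 'connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' \
  http://127.0.0.1:8099/

kill $srv
```

My thinking is to run something like this repeatedly while apachebench is hammering the box and see which phase balloons - but is there a better way?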
Posted at Nginx Forum: