How to tune nginx for 800-1200 concurrent connections?

Dear all,

I run nginx as stand-alone webserver (not as Apache proxy) on an image
gallery website. The server has to handle up to 1,200 concurrent
connections on Port 80, the average number throughout the day is around
500-600. During peak times, the server suffers a bit under its load and
I wonder whether there is anything that I can do to decrease the load by
tuning the nginx config. I should add that Apache went nuts handling the
site and therefore nginx is already a great relief to have. The site has
just around 3,000 unique visitors but up to 250,000 pageviews per day.

Here are the relevant server specs:

4 cores at 2.1 GHz
1 GB RAM (average free RAM is around 800 MB even during peak times)

Here is my current nginx.conf:

user nginx nginx;
worker_processes 8; # default: 2

error_log logs/error.log;

pid logs/nginx.pid;

events {
    worker_connections 2048; # default: 1024
}

http {
    include mime.types;
    default_type application/octet-stream;
    client_max_body_size 64M;
    sendfile on;
    tcp_nopush on;

    keepalive_timeout 20; # default: 3

    gzip on;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_types text/plain text/css application/x-javascript
               text/xml application/xml application/xml+rss text/javascript;

    server_tokens off;

    include /etc/nginx/conf.d/*;
}

Here are the fastcgi settings (the gallery is heavily PHP-driven):

include fastcgi_params;
fastcgi_intercept_errors on;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
try_files $uri =404;
fastcgi_pass 127.0.0.1:9000;
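
In context, these directives sit inside a PHP location block roughly like
the following; the surrounding block is a sketch, not copied verbatim from
the server, so the location pattern may differ:

location ~ \.php$ {
    try_files $uri =404;          # avoid passing nonexistent scripts to PHP
    include fastcgi_params;
    fastcgi_intercept_errors on;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;  # PHP FastCGI backend on the local host
}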

I would appreciate your comments and suggestions a lot. Thank you very
much in advance!

Kind regards
-A

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,215383,215383#msg-215383

Are the images served through PHP? If so, make them static and it will
increase throughput by a ton.

Additionally, you could possibly reduce worker_processes to 4-6 to reduce
context switching.
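
A minimal sketch of what serving the images statically could look like;
the /images/ path and the document root are assumptions, not taken from
the original post:

location /images/ {
    root /var/www/gallery;   # assumed document root; adjust to the real path
    access_log off;          # optional: skip access logging for static hits
}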

  • might consider turning keepalive up to 65 (a combined sketch of these
    suggestions follows this list)
  • depending on the nature of your content, you might want to raise the
    expiry time – example: expires 10m;
  • looks like you have 2 workers per core – typically one worker process
    per core is recommended (or maybe you have Intel cores with
    hyper-threading, in which case maybe 8 is the more relevant number)
  • what's your CPU load like? is your server CPU-bound or is IO the
    problem? if you can find some way to do fewer disk reads and more RAM
    reads, that will help
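
Here is a combined sketch of the tweaks above, applied to the posted
config; the /images/ path is an assumption, and everything not shown stays
as in the original nginx.conf:

worker_processes 4;               # one per core; try 8 if hyper-threaded

http {
    keepalive_timeout 65;         # up from 20

    server {
        listen 80;
        location /images/ {       # assumed static path
            expires 10m;          # let clients cache for 10 minutes
        }
    }
}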

On 19 September 2011 16:29, dullnicker [email protected] wrote:

Dear all,

I run nginx as stand-alone webserver (not as Apache proxy) on an image
gallery website. The server has to handle up to 1,200 concurrent
connections on Port 80, the average number throughout the day is around
500-600.

The site has
just around 3,000 unique visitors but up to 250,000 pageviews per day.

So 80+ hits per person per day and 3 per second, but 550 concurrent
connections? I haven’t seen a pattern like this before and I’d be
interested to know how it works. There may be ways to optimize the
application for it. I wouldn’t expect nginx to be the problem though.

If any significant portion of that concurrency is being held by PHP,
that’ll be tying up a lot of resources.

Thomas

I think the bottleneck may be the PHP; it is a piece of cake for nginx to
handle 1k concurrent connections without any special tuning.

Lots of directions to go with this question; here are two.

  1. The IO-bound question:
    Your client_max_body_size has me wondering how big these images
    actually are. What's the average size of these images?

    A few lines of output from "dstat" or "vmstat 1" during peak load
    would be useful (or even "sar" if you run it).

    If you are IO-bound like I suspect, a reverse proxy via nginx's
    proxy_cache might be beneficial (see the sketch after this list). You
    could just toss some 'hot' files into a ramdrive (/dev/shm) and
    symlink them (ln -s) to test.

  2. Your PHP app is inefficient:
    I run a PHP gallery (gallery2). It's pretty damn slow/CPU-hungry. In
    order to speed things up, make sure your PHP has something like
    http://eaccelerator.net running. It increased my performance more than
    moving from Apache 1 (yes, 1) to nginx did.
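
For item 1, a minimal proxy_cache sketch that keeps the cache on the
ramdrive; the zone name, sizes, cache path, and backend port are all
assumptions:

http {
    proxy_cache_path /dev/shm/nginx-cache levels=1:2
                     keys_zone=hotfiles:16m max_size=256m inactive=10m;

    server {
        listen 80;
        location /images/ {                    # assumed hot static path
            proxy_cache hotfiles;
            proxy_cache_valid 200 10m;         # cache successful responses
            proxy_pass http://127.0.0.1:8080;  # assumed internal backend
        }
    }
}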

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,215383,215807#msg-215807

On 09/25/2011 06:45 PM, tsaavik wrote:

proxy_cache might be beneficial. You could just toss some ‘hot’ files
into a ramdrive (/dev/shm) and symlink them (ln -s) to test.

The page cache will do something like this transparently. If there's not
enough RAM left for the page cache to keep the files in memory, then you
also don't have enough RAM to add a RAM drive.

Regards,
Dennis

Dear all,

thank you very much for your answers. What I did a few days ago was
install APC (Alternative PHP Cache) on the server. I could not believe
the results, but still, days later, they are as good as they were from
the start. The server load dropped like crazy: the momentary average load
is < 0.60, while it was > 2.00 before. Therefore the need for any further
optimization has vanished. Again, thank you both for taking the time to
answer my question.

Kind regards
-Amitz

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,215383,215831#msg-215831
