Max_clients and keep alive

Hello,

Trying to better understand the worker_processes and worker_connections
settings.

We have always run Nginx with worker_processes 4 and worker_connections
1024 on a dual Xeon 2.4GHz with Hyper-Threading. The documentation
suggests it is better to run worker_processes 2 and worker_cpu_affinity
0101 1010 on a dual HTT machine, but according to the formula
(max_clients = worker_processes * worker_connections) that would
effectively halve the connections we can support to clients and
backend servers.
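
For reference, a rough sketch of the two setups being compared (the
directive values are the ones mentioned above; the surrounding layout
is just an assumed minimal nginx configuration):

    # current setup: max_clients = 4 * 1024 = 4096
    worker_processes  4;
    events {
        worker_connections  1024;
    }

    # suggested setup for a dual HTT machine:
    # max_clients = 2 * 1024 = 2048 with the same worker_connections
    worker_processes     2;
    worker_cpu_affinity  0101 1010;
    events {
        worker_connections  1024;
    }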

We regularly see more than 2000 keep-alive connections at any point in
time, so do keep-alive connections count towards the maximum client
connections?

Thanks,

Tristan

Hello!

On Mon, May 18, 2009 at 09:54:34AM +1000, Tristan Griffiths wrote:

would effectively halve the connections we can support to clients and
backend servers.

We regularly see more than 2000 keep-alive connections at any point in
time, so do keep-alive connections count towards the maximum client
connections?

Yes.

Feel free to increase worker_connections. Note also that worker
connections are cheap - they consume only about 200 bytes of
memory per connection, so setting something like 16384 isn’t a
big deal.

The only thing to keep in mind is that your OS settings should be
tuned as well, or this may lead to bad results. E.g. make sure that
worker_connections is less than the number of file descriptors
available to the nginx process (kern.maxfilesperproc under FreeBSD).
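
As a rough illustration of that advice (the 16384 figure and the
FreeBSD sysctl name are the ones mentioned above; the layout is just
an assumed minimal nginx.conf):

    # nginx.conf
    worker_processes  4;
    events {
        # cheap: roughly 200 bytes of memory each
        worker_connections  16384;
    }
    # ...and make sure the per-process descriptor limit is at least as
    # large, e.g. under FreeBSD check:
    #     sysctl kern.maxfilesperproc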

Maxim D.


CentOS 5.3 kernel-PAE

[root@lb1 ~]# sysctl fs.file-max
fs.file-max = 400136

And that’s the default setting!

Thank you for your help, Maxim.

On May 18, Maxim D. wrote:

Yes.

Feel free to increase worker_connections. Note also that worker
connections are cheap - they consume only about 200 bytes of
memory per connection, so setting something like 16384 isn’t a
big deal.

The only thing to keep in mind is that your OS settings should be
tuned as well, or this may lead to bad results. E.g. make sure that
worker_connections is less than the number of file descriptors
available to the nginx process (kern.maxfilesperproc under FreeBSD).

If the master process starts as root and worker_rlimit_nofile is set,
do any of the system limits on file descriptors still matter?

Hello!

On Wed, Jun 10, 2009 at 11:01:30PM +0530, Arvind Jayaprakash wrote:

[…]

If the master process starts as root and worker_rlimit_nofile is set,
do any of the system limits on file descriptors still matter?

Yes, they do matter. Via setrlimit() you cannot set limits greater
than the kernel ones (like kern.maxfilesperproc and kern.maxfiles
under FreeBSD).

On the other hand, worker_rlimit_nofile is usually required only when
you have expanded the kernel limits and want a running nginx to expand
its own limits without restarting the master (as rlimits usually
default to the kernel limits at the time of process creation).
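
A minimal sketch of that second case, assuming the kernel limits have
already been raised (the 32768 and 16384 values here are purely
illustrative):

    # nginx.conf
    # workers raise their own RLIMIT_NOFILE to this value, so a
    # configuration reload picks up the larger limit without
    # restarting the master process
    worker_rlimit_nofile  32768;
    events {
        # keep this below worker_rlimit_nofile (and the kernel limits)
        worker_connections  16384;
    }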

Maxim D.