Hi,
I need some help to fully understand what a worker is.
For example, if I have only one worker, and run a long request, like a
file download/upload or a long running proxy script like php or rails.
Will this block the server, so it can't serve other clients until the
request finishes?
And what is worker_connections for?
I'm trying to understand the request flow in nginx, so I can optimize
it the right way.
A worker is a process of nginx… Each process is multithreaded and can
handle thousands of connections each.
You can have one worker and 50,000 connections to it, but it’s good to
have at least as many workers as you have CPUs and I usually multiply
this times 4 (so 4 workers per CPU)…
Worker connections is how many connections each process (worker) can
have open at one time (max open files, sockets, etc.).
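In nginx.conf those two directives look roughly like this (a sketch only; the numbers below are illustrative, not tuning advice):

```nginx
# Hypothetical nginx.conf fragment showing the directives discussed above.
worker_processes  4;            # number of worker processes, often tied to CPU count

events {
    worker_connections  1024;   # max simultaneous connections per worker
}
```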
Paul
On Monday 30 June 2008, Paul wrote:
A worker is a process of nginx… Each process is multithreaded and can
Nope, no threads.
Asynchronous I/O for network access and blocking I/O for disk access.
Well, that's what I consider multithreaded, I guess… It doesn't use any
more processes or scheduling time to handle multiple connections, whereas
Apache is multi-process… I guess I consider multithreading the ability
to handle multiple instances within the same process without creating
new processes (having 50,000 open connections to the same process,
whether it's network, disk, etc.). Maybe that is the wrong word to use…
I haven't tested it, I just like having a few extra processes, but we
get a ton of connections per second and a ton of stale connections, so I
like to have as much 'socket space' as possible. It seems to spread the
connections somewhat evenly over the worker processes…
It probably won't matter much as long as you have one per CPU, but there
is some sort of per-process connection limit imposed by the OS kernel,
and if you run into that you will need more processes.
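The per-process limit being referred to here is typically the open file descriptor limit. A sketch of how it is commonly raised from within nginx (values are illustrative; defaults vary by OS):

```nginx
# Raise the per-worker file descriptor limit so worker_connections
# is not capped by the kernel's default (often 1024 descriptors).
worker_rlimit_nofile  20000;

events {
    worker_connections  10000;  # should stay below worker_rlimit_nofile,
                                # since each connection consumes at least one fd
}
```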
Paul wrote:
A worker is a process of nginx… Each process is multithreaded and
can handle thousands of connections each.
You can have one worker and 50,000 connections to it, but it’s good to
have at least as many workers as you have CPUs and I usually multiply
this times 4 (so 4 workers per CPU)…
Worker connections is how many connections each process (worker) can
have open at one time (max open files, sockets, etc.).
I would like to know if this really makes more sense than just mapping
workers 1-to-1 to CPUs.
Have you done any benchmarks to determine that the 4-to-1 ratio gives
better performance?
What is the “best practice?”
Thanks.
On Mon, Jun 30, 2008 at 17:31:42, Paul said…
Well, that's what I consider multithreaded, I guess… It doesn't use any more
processes or scheduling time to handle multiple connections, whereas Apache
is multi-process… I guess I consider multithreading the ability to handle
multiple instances within the same process without creating new processes
(having 50,000 open connections to the same process, whether it's network,
disk, etc.). Maybe that is the wrong word to use…
Yes, it's definitely the wrong word. Threads are very specific things, and
"multithreaded" refers to them, not to a generic methodology.
You can use a few methods in a single process to handle multiple
connections; threads and async I/O (what nginx does) are completely
different ways to do this.
Whatever we call it, the main point is that Nginx won’t have any
problem handling concurrent requests even if they take a lot of time
such as uploading a big file?
On Tuesday 01 July 2008, Thomas wrote:
Whatever we call it, the main point is that Nginx won’t have any
problem handling concurrent requests even if they take a lot of time
such as uploading a big file?
nginx uses blocking I/O for disk access;
in some rare conditions this could be a problem.
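For large file transfers, newer nginx builds can reduce how long a worker blocks on disk by combining sendfile with asynchronous file I/O. A hedged sketch (the aio directive is platform-dependent: FreeBSD, or Linux only together with directio; the 4m threshold is illustrative):

```nginx
# Serve large files without tying up the worker on every disk read.
location /downloads/ {
    sendfile   on;   # kernel copies file data straight to the socket
    aio        on;   # async reads where the OS and nginx build support it
    directio   4m;   # on Linux, files over 4m use O_DIRECT + aio;
                     # smaller files fall back to sendfile
}
```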