What does error 24 mean? Also how does nginx handle the Slowloris tool?

I’m looking to test out nginx with the Slowloris tool
(http://ha.ckers.org/slowloris/), and I noticed that the following error
appears a LOT in the error logs at one point:
accept() failed (24: Too many open files) while accepting new connection

This is what happens:

  1. started nginx on a Debian machine, and the Slowloris tool on another
    using default parameters (1000 connections, 100 sec timeouts)
  2. connections are established, and data is sent via these connections
  3. shortly after, error 24 (too many open files) starts appearing in the
    error log and all the connections are closed
  4. the tool continues to run, although every thread has to re-initiate
    its connection (in effect, the “attack” has already stopped at step 3)

This tool basically works by sending, per thread, a GET request for a
random URI:
GET /$rand HTTP/1.1
and then sending a short header line every $timeout seconds in order to
force the HTTP request to stay unfinished.
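The mechanism described above can be sketched as follows — a minimal illustration of the partial-request trick, where the host, User-Agent, and the "X-a" keep-alive header name are my assumptions, not taken from the actual tool:

```python
# Sketch of the Slowloris technique: each thread opens a connection, sends
# an incomplete request, then trickles one header line per timeout interval
# so the request never completes. Illustrative only.
import random


def initial_partial_request(host: str) -> bytes:
    """Opening request per thread: note there is NO terminating blank
    line (\\r\\n\\r\\n), so the server keeps waiting for more headers."""
    rand = random.randint(0, 10**9)
    return (
        f"GET /{rand} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "User-Agent: slowloris-sketch\r\n"
    ).encode()


def keepalive_header() -> bytes:
    """One short bogus header line, sent every $timeout seconds to keep
    the connection's request unfinished."""
    return f"X-a: {random.randint(1, 5000)}\r\n".encode()
```

Each open-but-unfinished request pins one connection (and one file descriptor) on the server for as long as the client keeps trickling headers.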

So…

a) what does error 24 mean in this case? I’m not sure whether the file
descriptors limit has really been hit, with the tool set to 1000 threads
only…?

b) how do async web servers like nginx handle unfinished requests like
these (as opposed to process based servers like Apache)?

Posted at Nginx Forum:

On 10/12/09 9:35 AM, “gunblad3” [email protected] wrote:

accept() failed (24: Too many open files) while accepting new connection

Did you adjust your ulimit?

Sorry for the late reply, was flooded with other things ;)

I tried by adding
worker_rlimit_nofile 10240;
instead and it worked with 10000 connections.
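For reference, a sketch of where that directive sits in nginx.conf (the worker_connections value is my assumption, chosen to match the 10000-connection test):

```nginx
# main (top-level) context - illustrative values only
worker_processes     1;
worker_rlimit_nofile 10240;    # ask the OS to raise the per-worker fd limit

events {
    worker_connections 10000;  # should stay below worker_rlimit_nofile
}
```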

Have a few questions:

  1. Is there an implicit default max number of file descriptors set for
    worker_rlimit_nofile?

  2. How do worker_rlimit_nofile and ulimit affect the actual max number
    of file descriptors?

  3. How does nginx handle unfinished requests that are taking a long time
    to complete sending? (slow upstream)

Thanks in advance for your help!


Thanks Maxim, that’s the most informative answer on this I’ve seen so far :)

What I meant by my third question was: where/how does nginx cache
incomplete requests that are “still being sent” from the client (if it
does so in the first place)? In memory?


Hello!

On Tue, Oct 13, 2009 at 10:44:54PM -0400, gunblad3 wrote:

Sorry for the late reply, was flooded with other things ;)

I tried by adding
worker_rlimit_nofile 10240;
instead and it worked with 10000 connections.

Have a few questions:

  1. Is there an implicit default max number of file descriptors set for worker_rlimit_nofile?

It’s your OS that sets the limits, not nginx. The
worker_rlimit_nofile directive is only needed when you want nginx to
ask the OS to change the limits for an already running nginx process.
By default (i.e. without worker_rlimit_nofile set) nginx doesn’t ask
to change the limits.

  2. How do worker_rlimit_nofile and ulimit affect the actual max number of file descriptors?

Basically, the OS sets the limits, ulimit shows them, and
worker_rlimit_nofile asks the OS to change them. The result is
whatever the OS decides to grant, usually the minimum of
worker_rlimit_nofile and the global OS maxfiles limit.
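The interplay described here can be demonstrated directly: worker_rlimit_nofile is essentially a request to the OS via setrlimit(), which the OS clamps to what it allows. A minimal sketch using Python's resource module — the 10240 value mirrors the config above, and this is an illustration of the soft/hard limit mechanism, not nginx's actual code:

```python
# A process may raise its soft RLIMIT_NOFILE only up to the hard limit
# imposed by the OS; anything above that is refused.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

requested = 10240  # analogous to: worker_rlimit_nofile 10240;
# The OS grants at most the hard limit (RLIM_INFINITY means "no cap").
new_soft = requested if hard == resource.RLIM_INFINITY else min(requested, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
```

The effective limit is thus whatever the OS decided, exactly as described: usually min(worker_rlimit_nofile, OS hard limit).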

  3. How does nginx handle unfinished requests that are taking a long time to complete sending? (slow upstream)

Nothing special - they are just handled. When both the OS and
nginx are properly tuned, nginx can handle lots of such (and
other) connections.

Maxim D.

Hello!

On Wed, Oct 14, 2009 at 07:17:36AM -0400, gunblad3 wrote:

Thanks Maxim, that’s the most informative answer on this I’ve seen so far :)

What I meant by my third question was: where/how does nginx cache incomplete requests that are “still being sent” from the client (if it does so in the first place)? In memory?

“Incomplete” requests are pretty normal - you can’t expect a
single read() from a socket to return the full request. To store
data that has already been read, nginx uses either in-memory
buffers or disk buffers (for large request bodies).

The following configuration directives are available to fine-tune
buffers used:

http://wiki.nginx.org/NginxHttpCoreModule#client_header_buffer_size
http://wiki.nginx.org/NginxHttpCoreModule#large_client_header_buffers
http://wiki.nginx.org/NginxHttpCoreModule#client_body_buffer_size
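As a hedged illustration, these directives might be combined like this (the sizes are examples only, not recommendations):

```nginx
# http context - illustrative buffer sizes only
client_header_buffer_size   1k;   # a typical request line + headers fit here
large_client_header_buffers 4 8k; # fallback for oversized header lines
client_body_buffer_size     16k;  # bodies up to this size stay in memory;
                                  # larger ones are spilled to temp files on disk
```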

See there for more info.

Maxim D.

Thanks a lot Maxim!
