client_body_timeout 10;
The last 2 configuration lines are for limiting connections per client
IP. The first lines are just some sane connection timeouts.
If you process some large uploads or the page generation takes over 10
seconds you could raise the timeouts. Actually the fix is the last
lines: limiting the connection number per client IP.
Out of the box nginx is also vulnerable (I have tested it on the latest
0.7 installation).
Best regards and keep up the great work!
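A minimal sketch of the kind of configuration being described here, assuming
the 0.7-era limit_zone/limit_conn directives (the zone name and the exact
values are illustrative, not the poster's):

    http {
        client_header_timeout 10;   # drop clients that send request headers too slowly
        client_body_timeout   10;   # drop clients that send the request body too slowly
        send_timeout          10;   # drop clients that read responses too slowly

        # the "last 2 lines": cap concurrent connections per client IP
        limit_zone perip $binary_remote_addr 5m;

        server {
            listen 80;
            limit_conn perip 20;    # illustrative value; raise it for NAT'd user bases
        }
    }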
What were the results of your tests? I can see Apache being vulnerable
to this, given the amount of resources it requires per connection, but
Nginx should be much less susceptible. The only resource I’d expect to
see exhausted might be sockets, which can be tuned at the OS level.
On Fri, 2009-06-19 at 22:09 +0300, luben karavelov wrote:
The last 2 configuration lines are for limiting connections per client
IP. The first lines are just some sane connection timeouts.
If you process some large uploads or the page generation takes over 10
seconds you could raise the timeouts. Actually the fix is the last
lines: limiting the connection number per client IP.
Best regards and keep up the great work!
This will probably also cause issues where a large number of clients are
behind a single NAT firewall, such as a corporate portal.
I don’t think such an attack can be prevented at any single level.
Although such measures might help in some cases, I think we should be
wary of presenting them as a universal solution.
I’ve already seen that. What I’d like to see is what data the OP
extracted from his tests to determine that Nginx is also vulnerable.
Apache and IIS are clearly vulnerable due to their threaded architecture
(they consume a relatively large amount of memory with each connection
which makes this sort of attack easy). With Nginx this isn’t true, so
I suspect the correct place to address resource consumption lies in the
underlying OS’ TCP stack settings rather than in nginx.conf (but of
course, I’m willing to stand corrected if the OP’s tests showed
otherwise).
In short, the attack effectively simulates what would happen if
thousands of 1200 baud dialup users simultaneously accessed a website.
Nginx should be as close to ideal as you can get for this situation,
provided your OS is properly tuned and has enough resources to handle
that many concurrent connections.
On Fri, Jun 19, 2009 at 12:22:35PM -0700, Cliff W. wrote:
Nginx should be much less susceptible. The only resource I’d expect to
see exhausted might be sockets, which can be tuned at the OS level.
Yes, as to nginx, this DoS is related more to OS resources than to nginx
itself. On FreeBSD I usually use settings like these: http://wiki.nginx.org/FreeBSDOptimizations
Note, they are applicable to FreeBSD/amd64 only, not to FreeBSD/i386.
On Sat, Jun 20, 2009 at 04:41:48PM +0400, Igor S. wrote:
worker_processes 4;
[...] exhausts available sockets and the server stops replying to new requests.
5000 and 2048 are too small values for the modern Internet; I usually use
about 200,000.
You need to increase:
- the OS sockets limit,
- the OS network memory limits (buffers, etc.)
By buffers I did not mean increasing the send/receive buffer limits:
actually, you should decrease them. I meant the total amount of memory
the kernel dedicates to the buffers.
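A rough sketch of what that kind of OS-level tuning could look like in
/etc/sysctl.conf on FreeBSD (the sysctl names are real, but the values are
purely illustrative and not Igor's actual settings; on older FreeBSD releases
some of these are boot-time tunables that belong in /boot/loader.conf instead):

    kern.maxfiles=200000            # raise the global descriptor/socket ceiling
    kern.maxfilesperproc=200000     # per-process limit for the nginx workers
    kern.ipc.maxsockets=200000      # total sockets the kernel will allocate
    kern.ipc.nmbclusters=262144     # more mbuf clusters = more total network memory
    net.inet.tcp.sendspace=16384    # smaller per-socket buffers, so idle
    net.inet.tcp.recvspace=16384    #   slowloris connections waste less memory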
As by default the (undocumented?) ignore_invalid_headers directive is
enabled in nginx, isn’t this attack a non-issue, unless one disables the
directive?
Sending such headers to an nginx server with the directive enabled
results in a “400 Bad Request”.
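As context for the follow-up below (this is my reading of the script, not
something stated in this message): slowloris does not rely on malformed
headers. It opens a connection, sends a partial but syntactically valid
request roughly like the sketch below (hostname illustrative), never sends
the terminating blank line, and trickles out further "X-a: b" lines so the
header timeout keeps being reset:

    GET / HTTP/1.1
    Host: victim.example
    X-a: b
    X-a: b
    (no final blank line; another "X-a: b" arrives every few seconds)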
When using telnet to send such a header, I received the 400 response.
But when I tested the slowloris.pl script against nginx 0.7.59, the
ignore_invalid_headers directive was useless: Nginx treated the header
line ‘X-a: b\r\n’ as a valid header.
The debug log is like this:
I wasn’t able to raise the load above 0.1 with nginx-0.6.32 on FreeBSD.
What did I do wrong, if nginx is affected “much stronger”?
Under this attack, Nginx just holds the attacked sockets open for
client_header_timeout seconds, so the load stays very low.
In my tests, apache2 stops working when the attack number is above 500.
I think maybe apache2 can’t fork more processes or threads.
But Nginx can survive as long as the attack number stays below
worker_processes * worker_connections. It’s more difficult to attack Nginx
than Apache, but if you have enough attack computers, you can still make
an Nginx server deny service.
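As a rough illustration of that worker_processes * worker_connections bound,
using example numbers that appear elsewhere in this thread rather than
measured values:

    worker_processes  4;
    events {
        worker_connections  2048;
    }
    # Upper bound on concurrently held client connections: 4 * 2048 = 8192.
    # A slowloris attacker must hold more than ~8192 sockets open (each for
    # up to client_header_timeout seconds) before new requests get refused.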
I am not able to reproduce this. The server is answering and serving
./slowloris.pl -dns doma.in -port 80 -timeout 2 -num 10000
The load is zero, there is not even a delay in the response time. Would
you mind sharing your slowloris.pl command and/or the relevant nginx
config, OS type and version, and sysctl.conf (or equivalent)?
It would also be nice to know what nginx is doing during that time; do
you have dtrace on that node? Enabling debug-level logging in nginx is a
really bad idea if you have 5000 requests…
“But if you have enough attack computers, you can still make an Nginx
server deny service.”
If you have enough computers you can take down even google.com; that is
not relevant to this conversation. Moreover, slowloris is a tool dedicated
to low-bandwidth attacks from a low number of computers.
Yeah I agree, basically it is not easy to take down nginx with such an
attack. The question is still there: what kind of limitations do we have
to put in place to stop such an abuser?
My considerations:
- firewall -> max connections per IP (one possible implementation is sketched below)
- firewall -> SYN proxy (to mitigate SYN floods)
- firewall -> connection rate limiting
- OS -> max open sockets per process
- OS -> TCP/IP stack tuning, allocated memory
- OS -> max CPU time
- OS -> max used memory (the terminology differs slightly across unixes)
- webserver -> max fds, number of running workers, etc.
Basically you have to have multi-layer limits in place to avoid resource
abuse, and then you can sleep well.
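As one purely illustrative way to implement the firewall-level per-IP
connection cap from the list above, on Linux the iptables connlimit match
can express it (the threshold is arbitrary and, as noted earlier in the
thread, will also hit many users behind a single NAT):

    # Refuse new connections to port 80 from any source IP that already
    # has more than 20 connections open (example threshold, not a recommendation).
    iptables -A INPUT -p tcp --dport 80 --syn -m connlimit --connlimit-above 20 -j DROP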