DoS attack in the wild

A DoS attack against a number of HTTP servers is available and has hit
Slashdot today:

Out of the box, nginx is also vulnerable (I have tested it on the latest
0.7 installation). A quick fix for the vulnerability follows:

Put this in the “http” section:

client_body_timeout 10;
client_header_timeout 10;
keepalive_timeout 10;
send_timeout 10;
limit_zone limit_per_ip $binary_remote_addr 1m;

and put this in the “server” section:

limit_conn limit_per_ip 16;

The last two configuration lines limit the number of connections per
client IP. The first lines are some sane connection timeouts.
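To check the edited config and apply it without dropping connections
(the pid file path is just my guess; adjust it to your install):

nginx -t && kill -HUP "$(cat /var/run/nginx.pid)"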

Best regards, and keep up the great work!

luben karavelov wrote:

The last two configuration lines limit the number of connections per
client IP. The first lines are some sane connection timeouts.

If you process large uploads or page generation takes more than 10
seconds, you can raise the timeouts. The actual fix is the last lines:
limiting the number of connections per client IP.

Luben

On Fri, 2009-06-19 at 21:45 +0300, luben karavelov wrote:

A DoS attack against a number of HTTP servers is available and has hit
Slashdot today:
Attack On a Significant Flaw In Apache Released - Slashdot

Out of the box, nginx is also vulnerable (I have tested it on the latest
0.7 installation).

What were the results of your tests? I can see Apache being vulnerable
to this, given the amount of resources it requires per connection, but
Nginx should be much less susceptible. The only resource I’d expect to
see exhausted might be sockets, which can be tuned at the OS level.
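For illustration, this is the kind of OS-level tuning I mean (Linux knob
names; the values are made up and would need adjusting per box):

# raise the system-wide and per-process open file limits
sysctl -w fs.file-max=200000
ulimit -n 200000
# allow a deeper queue of pending connections
sysctl -w net.core.somaxconn=4096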

Cliff

On Fri, 2009-06-19 at 21:45 +0300, luben karavelov wrote:

A DoS attack against a number of HTTP servers is available and has hit
Slashdot today:
Attack On a Significant Flaw In Apache Released - Slashdot

Out of the box, nginx is also vulnerable (I have tested it on the latest
0.7 installation). A quick fix for the vulnerability follows:

I notice that one of the prerequisites is:

“2) Negotiate a high TCP window size for each of the connections (1 GB
should be doable)”

This seems to point to TCP stack tuning as the way to prevent it.
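On Linux, for instance, one could cap how much buffer space each
connection may negotiate; a rough sketch with illustrative values:

# min/default/max per-socket receive and send buffers, in bytes
sysctl -w net.ipv4.tcp_rmem="4096 16384 262144"
sysctl -w net.ipv4.tcp_wmem="4096 16384 262144"
# hard ceilings for any socket buffer
sysctl -w net.core.rmem_max=262144
sysctl -w net.core.wmem_max=262144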

Cliff

On Fri, 2009-06-19 at 22:09 +0300, luben karavelov wrote:

The last two configuration lines limit the number of connections per
client IP. The first lines are some sane connection timeouts.

Best regards, and keep up the great work!

If you process large uploads or page generation takes more than 10
seconds, you can raise the timeouts. The actual fix is the last lines:
limiting the number of connections per client IP.

This will probably also cause issues where a large number of clients are
behind a single NAT firewall, such as a corporate portal.

I don’t think such an attack can be prevented at any single level.
Although such measures might help in some cases, I think we should be
wary of presenting them as a universal solution.

Regards,
Cliff

Hello,
Can anybody tell me how to test a DoS attack on a webserver, please?

Regards
NeeleshG

On Sat, Jun 20, 2009 at 12:52 AM, Cliff W. [email protected] wrote:

I can see Apache being vulnerable to this, given the amount of resources
it requires per connection, but Nginx should be much less susceptible.
The only resource I’d expect to see exhausted might be sockets, which can
be tuned at the OS level.

Cliff


Regards
NeeleshG

LINUX is basically a simple operating system, but you have to be a genius
to understand the simplicity

“Welcome to Slowloris - the low bandwidth, yet greedy and poisonous HTTP
client!”
http://ha.ckers.org/slowloris/

This attack works great on Apache, but so far I have been unable to make
it work on nginx (0.8.3).

2009/6/19 Neelesh G. [email protected]:

On Fri, 2009-06-19 at 16:24 -0400, E. Johnson wrote:

“Welcome to Slowloris - the low bandwidth, yet greedy and poisonous
HTTP client!”

http://ha.ckers.org/slowloris/

I’ve already seen that. What I’d like to see is what data the OP
extracted from his tests to determine that Nginx is also vulnerable.

Apache and IIS are clearly vulnerable due to their threaded architecture
(they consume a relatively large amount of memory with each connection
which makes this sort of attack easy). With Nginx this isn’t true, so
I suspect the correct place to address resource consumption lies in the
underlying OS’ TCP stack settings rather than in nginx.conf (but of
course, I’m willing to stand corrected if the OP’s tests showed
otherwise).

In short, the attack effectively simulates what would happen if
thousands of 1200 baud dialup users simultaneously accessed a website.
Nginx should be as close to ideal as you can get for this situation,
provided your OS is properly tuned and has enough resources to handle
that many concurrent connections.

Cliff

On Fri, Jun 19, 2009 at 12:22:35PM -0700, Cliff W. wrote:

Nginx should be much less susceptible. The only resource I’d expect to
see exhausted might be sockets, which can be tuned at the OS level.

Yes, as far as nginx is concerned, this DoS is more related to OS
resources than to nginx itself. On FreeBSD I usually use settings like
these:
http://wiki.nginx.org/FreeBSDOptimizations
Note that they are applicable to FreeBSD/amd64 only, not to FreeBSD/i386.

On Sat, Jun 20, 2009 at 03:33:40PM +0300, luben karavelov wrote:

events {
    worker_connections 2048;
    use epoll;
}

and without the fixes I could DoS the server with:
./slowloris.pl -dns photomoment.bg -timeout 30 -num 10000 -tcpto 5

This exhausts the available sockets, and the server stops replying to new
requests.

5000 and 2048 are too small values for the modern Internet; I usually use
about 200,000.

You need to increase the following (a rough sketch follows the list):

  1. the OS sockets limit,
  2. the OS network memory limits (buffers, etc.),
  3. the OS files limit,
  4. the OS per-process files limit (worker_rlimit_nofile),
  5. and finally, nginx’s worker_connections.
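A rough FreeBSD sketch of those items (the values are only illustrative,
and depending on the FreeBSD version some of these are boot-time tunables
for /boot/loader.conf rather than runtime sysctls):

# 1 and 3: sockets and files
sysctl kern.ipc.maxsockets=204800
sysctl kern.maxfiles=204800
# 4: per-process files, paired with worker_rlimit_nofile in nginx.conf
sysctl kern.maxfilesperproc=200000
# 2: total kernel network memory (mbuf clusters)
sysctl kern.ipc.nmbclusters=262144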

On Sat, Jun 20, 2009 at 04:41:48PM +0400, Igor S. wrote:


5000 and 2048 are too small values for the modern Internet; I usually use
about 200,000.

You need to increase the following:

  1. the OS sockets limit,
  2. the OS network memory limits (buffers, etc.),

By buffers I did not mean increasing the send/receive buffer limits:
actually, you should decrease them. I meant the total amount of memory
the kernel dedicates to the buffers.
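On FreeBSD that distinction looks roughly like this (illustrative
values):

# per-connection defaults: keep them small
sysctl net.inet.tcp.sendspace=8192
sysctl net.inet.tcp.recvspace=8192
# total mbuf cluster pool: make it large
sysctl kern.ipc.nmbclusters=262144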

luben karavelov wrote:

limit_conn limit_per_ip 16;

The last two configuration lines limit the number of connections per
client IP. The first lines are some sane connection timeouts.

Best regards, and keep up the great work!

A look at the script reveals it keeps connections open with invalid
headers (note the appended “\r\n”):

"GET /$rand HTTP/1.1\r\n"
. "Host: $sendhost\r\n"
. "User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)\r\n"
. "Content-Length: 42\r\n";

Since the (undocumented?) ignore_invalid_headers directive is enabled by
default in nginx, isn’t this attack a non-issue unless one disables the
directive?

Sending such headers to an nginx server with the directive enabled
results in a “400 Bad Request”.
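You can replay the script’s partial request by hand and watch what comes
back (example.com stands in for the test server; some nc variants need -q
or -w to keep the connection open after stdin closes):

printf 'GET / HTTP/1.1\r\nHost: example.com\r\nContent-Length: 42\r\n' | nc example.com 80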


On Fri, Jun 19, 2009 at 08:09:28PM -0400, w3wsrmn wrote:


A look at the script reveals it keeps connections open with invalid headers (note the appended “\r\n”):

No, “\r\n” is a valid sequence in an HTTP request. Actually, a bare “\n”
is technically invalid, but most web servers treat it as “\r\n”.

Jérôme Loyet wrote:

This attack works great on Apache, but so far I have been unable to make
it work on nginx (0.8.3).

On nginx it exhausts the available sockets. My setup is nginx-0.7.58 with
this config:

worker_processes 4;
worker_rlimit_nofile 5000;
events {
    worker_connections 2048;
    use epoll;
}

and without the fixes I could DoS the server with:
./slowloris.pl -dns photomoment.bg -timeout 30 -num 10000 -tcpto 5

This exhausts the available sockets, and the server stops replying to new
requests.

Sorry for the late reply.

Luben

w3wsrmn at 2009-6-20 8:09 wrote:

When using telnet to send the above headers, I received the 400 response.
But then I tested the slowloris.pl script against nginx-0.7.59: the
ignore_invalid_headers directive is useless here, because nginx treats
the header line ‘X-a: b\r\n’ as a valid header. The debug log looks like
this:

2009/06/22 16:58:58 [debug] 25864#0: *1 accept: 172.19.1.209 fd:9
2009/06/22 16:58:58 [debug] 25864#0: *1 event timer add: 9: 60000:120682241
2009/06/22 16:58:58 [debug] 25864#0: *1 epoll add event: fd:9 op:1 ev:80000001
...
2009/06/22 16:58:58 [debug] 25864#0: *1 http process request line
2009/06/22 16:58:58 [debug] 25864#0: *1 recv: fd:9 236 of 1024
2009/06/22 16:58:58 [debug] 25864#0: *1 http request line: "GET / HTTP/1.1"
2009/06/22 16:58:58 [debug] 25864#0: *1 http uri: "/"
2009/06/22 16:58:58 [debug] 25864#0: *1 http args: ""
2009/06/22 16:58:58 [debug] 25864#0: *1 http exten: ""
2009/06/22 16:58:58 [debug] 25864#0: *1 http process request header line
2009/06/22 16:58:58 [debug] 25864#0: *1 http header: "Host: edu-9.space.163.org"
2009/06/22 16:58:58 [debug] 25864#0: *1 http header: "User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)"
2009/06/22 16:58:58 [debug] 25864#0: *1 http header: "Content-Length: 42"
2009/06/22 16:58:58 [debug] 25864#0: *1 recv: fd:9 -1 of 788
2009/06/22 16:58:58 [debug] 25864#0: *1 recv() not ready (11: Resource temporarily unavailable)
...
2009/06/22 16:58:58 [debug] 25864#0: *1 http process request header line
2009/06/22 16:58:58 [debug] 25864#0: *1 recv: fd:9 8 of 788
2009/06/22 16:58:58 [debug] 25864#0: *1 http header: "X-a: b"
2009/06/22 16:58:58 [debug] 25864#0: *1 recv: fd:9 -1 of 780
2009/06/22 16:58:58 [debug] 25864#0: *1 recv() not ready (11: Resource temporarily unavailable)
...
2009/06/22 16:59:48 [debug] 25864#0: *1 http process request header line
2009/06/22 16:59:48 [debug] 25864#0: *1 recv: fd:9 8 of 780
2009/06/22 16:59:48 [debug] 25864#0: *1 http header: "X-a: b"
2009/06/22 16:59:48 [debug] 25864#0: *1 recv: fd:9 -1 of 772
2009/06/22 16:59:48 [debug] 25864#0: *1 recv() not ready (11: Resource temporarily unavailable)
...
2009/06/22 16:59:58 [debug] 25864#0: *1 event timer del: 9: 120682241
2009/06/22 16:59:58 [debug] 25864#0: *1 http process request header line
2009/06/22 16:59:58 [info] 25864#0: *1 client timed out (110: Connection timed out) while reading client request headers, client: 172.19.1.209, server: _, request: "GET / HTTP/1.1", host: "edu-9.space.163.org"
2009/06/22 16:59:58 [debug] 25864#0: *1 http close request
2009/06/22 16:59:58 [debug] 25864#0: *1 http log handler
2009/06/22 16:59:58 [debug] 25864#0: *1 close http connection: 9

The default timeout value is 60 seconds, but you can change it with
client_header_timeout (from ngx_http_core_module). This directive is very
useful.
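To see the timeout in action, you can time how long the server holds an
unterminated request open (example.com is a placeholder, and again the
behaviour after stdin EOF varies between nc variants):

time (printf 'GET / HTTP/1.1\r\nHost: example.com\r\n' | nc example.com 80)

With the defaults this should hang for about 60 seconds before nginx
gives up on the connection.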

I think nginx is also affected by such a DoS attack, but it is much
stronger than Apache.

I wasn’t able to raise the load above 0.1 with nginx-0.6.32 on FreeBSD.
What did I do wrong, if nginx is affected “much stronger”?

Regards,
Istvan

István at 2009-6-22 20:40 wrote:

I wasn’t able to raise the load above 0.1 with nginx-0.6.32 on FreeBSD.

What did I do wrong, if nginx is affected “much stronger”?

Under this attack, nginx just blocks all the sockets for
client_header_timeout seconds; the load stays very low.

In my tests, apache2 stops working when the number of attacking
connections goes above 500; I think apache2 can’t fork more processes or
threads. But nginx can survive as long as the attack stays below
worker_processes * worker_connections (for example, with the config
quoted earlier, 4 * 2048 = 8192 connection slots, fewer than the 10000
the script opened). It’s more difficult to attack nginx than Apache, but
if you have enough attacking computers, you can still make an nginx
server deny service.

I am not able to reproduce this. The server keeps answering while
serving:
./slowloris.pl -dns doma.in -port 80 -timeout 2 -num 10000

The load is zero; there is not even a delay in the response time. Would
you mind sharing your slowloris.pl command and/or the relevant nginx
config, OS type and version, and sysctl.conf (or equivalent)?

It would also be nice to know what nginx is doing during that time; do
you have dtrace on that node? Enabling debug-level logging in nginx is a
really bad idea if you have 5000 requests…

“But if you have enough attacking computers, you can still make an nginx
server deny service.”

If you have enough computers you can take down even google.com; that is
not relevant to this conversation. Moreover, slowloris is a tool
dedicated to low-bandwidth attacks from a small number of machines.

Regards,
Istvan

Yeah, I agree; basically it is not easy to take down nginx with such an
attack. The question is still there: what kind of limitations do we have
to put in place to stop such an abuser? My considerations:
My consideration:

- firewall -> max connections per IP
- firewall -> SYN proxy (to avoid SYN floods)
- firewall -> connection rate limiting
- OS -> max open sockets per process
- OS -> TCP/IP stack tuning, allocated memory
- OS -> max CPU time
- OS -> max used memory (slightly different terminology across Unixes)
- webserver -> max fds, running workers, etc.

Basically you have to have multi-layer limits in place to avoid resource
abuse, and then you can sleep well :)
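For the firewall layer, per-IP connection limits are one concrete knob.
On Linux, for example (pf on the BSDs has max-src-conn and
max-src-conn-rate for the same job):

# drop new connections from any single IP that already has 20 open to port 80
iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j DROP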

Regards,
Istvan