Nginx and “Apache Killer”

Following “Apache Killer” discussions and the advisory from 2011-08-24
(Advisory: Range header DoS vulnerability Apache HTTPD 2.x
CVE-2011-3192)
we’d like to clarify a couple of things regarding nginx behavior
in either standalone or “combo” (nginx+apache) mode.

First of all, nginx doesn’t favor HEAD requests with compression,
so the exact attack mentioned doesn’t work against a standalone
nginx installation.

If you’re using nginx in combination with proxying to an apache backend,
please check your configuration to see if nginx actually passes range
requests to the backend:

  1. If you’re using proxying WITH caching, then range requests are not
    sent to the backend and your apache should be safe.

  2. If you’re NOT using caching, then you might be vulnerable to the
    attack.

In order to mitigate this attack when your installation includes
apache behind nginx, we recommend the following:

  1. Refer to the above-mentioned security advisory CVE-2011-3192 for
    apache and implement the described measures accordingly.

  2. Consider using the nginx configuration below (in the server{} section
    of the configuration). This particular example filters 5 or more ranges
    in the request:

if ($http_range ~ "(?:\d*\s*-\s*\d*\s*,\s*){5,}") {
    return 416;
}
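
As a quick sanity check of this filter, something like the following
Python 3 sketch can be used (the host, port and URI are assumptions,
adjust them to your setup):

# python 3; send a request with 6 ranges: with the filter above the
# server should answer 416, without it 200 or 206
import http.client

ranges = ','.join('%d-%d' % (i * 2, i * 2 + 1) for i in range(6))
conn = http.client.HTTPConnection('127.0.0.1', 80)   # assumed host/port
conn.request('GET', '/index.html', headers={'Range': 'bytes=' + ranges})
print(conn.getresponse().status)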

We’d also like to notify you that for standalone nginx installations
we’ve produced the attached patch. This patch prevents handling
malicious range requests at all, instead outputting just the entire file
if the total size of all ranges is greater than the expected response.
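
The idea of the check, as a rough Python sketch (an illustration of the
logic only, not the actual patch code):

# ranges: list of (start, end) byte positions, end inclusive;
# if the ranges together ask for more bytes than the whole response,
# ignore them and send the entire file instead
def ranges_look_malicious(ranges, content_length):
    total = sum(end - start + 1 for start, end in ranges)
    return total > content_length

# three overlapping ranges over a 100-byte file ask for 150 bytes
print(ranges_look_malicious([(0, 49)] * 3, 100))   # True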

Hi,

I use nginx 1.0 on my server (with fastcgi + php5 support); it runs
several websites using wordpress. Today my hard disk is full (this runs
on a VPS service). The error.log file occupies 6.8 GB and the mysql
server is frozen. How can I prevent nginx from filling the disk if
someone runs the Apache Killer script against it?

Thank you!


Juan A. Moreno
http://apostols.net
Fingerprint GPG: 0FEE E0BF 2904 FE77 1682 2171 C842 DBF1 34BC CD04

Hello!

On Sat, Aug 27, 2011 at 09:34:11PM -0430, Juan Angulo M. wrote:

Hi,

I use nginx 1.0 on my server (with fastcgi + php5 support); it runs
several websites using wordpress. Today my hard disk is full (this runs
on a VPS service). The error.log file occupies 6.8 GB and the mysql
server is frozen. How can I prevent nginx from filling the disk if
someone runs the Apache Killer script against it?

The usual approach is to rotate logs periodically and/or to control the
logging level via the error_log directive. And this isn’t specific to
any particular script; it’s just administration basics.

Maxim D.

First of all, nginx doesn’t favor HEAD requests with compression,
so the exact attack mentioned doesn’t work against a standalone
nginx installation.

Well, with apache the problem is not really due to the compression
module (you can disable compression and still get DoS’ed); it is with
how it handles byte ranges (ignoring overlapping ranges, etc…).

Currently with apache requests like

Range: bytes=0-1,0-2,0-3…

OR

Range: bytes=0-0, 1-1, 2-2…

will not result in merging of the ranges, and apache delivers data for
each range. With a huge number of such ranges, a lot of memory is
consumed.
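
For illustration, a header of the second form can be generated with a
couple of lines of Python (a sketch; the count of 1000 is arbitrary):

# build "Range: bytes=0-0, 1-1, 2-2, ..." with 1000 single-byte ranges
header = 'Range: bytes=' + ', '.join('%d-%d' % (i, i) for i in range(1000))
print(len(header))   # each extra range costs only a few header bytes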

On 27.08.2011 11:11, Igor S. wrote:

Following “Apache Killer” discussions and the advisory from 2011-08-24
(Advisory: Range header DoS vulnerability Apache HTTPD 2.x CVE-2011-3192)
we’d like to clarify a couple of things regarding nginx behavior
in either standalone or “combo” (nginx+apache) mode.

CVE-2011-3192 was updated on 26 Aug 2011; the UPDATE 2 version is available from
http://mail-archives.apache.org/mod_mbox/httpd-announce/201108.mbox/browser

In order to mitigate this attack when your installation includes
apache behind nginx, we recommend the following:

  1. Refer to the above-mentioned security advisory CVE-2011-3192 for apache
    and implement the described measures accordingly.

these workarounds are needed only if a “naked” apache is open to the
internet. if apache listens only on 127.0.0.1 and is located behind an
nginx frontend, it is enough to implement protection only at the nginx
level.

  2. Consider using the nginx configuration below (in the server{} section
    of the configuration). This particular example filters 5 or more ranges
    in the request:

    if ($http_range ~ "(?:\d*\s*-\s*\d*\s*,\s*){5,}") {
        return 416;
    }

this example allows 5 ranges in the request, and blocks with a 416
return code only requests with 6 or more ranges.
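
this counting can be checked with a short python script (a sketch using
the same regex):

import re

# the same pattern as in the nginx config above
pat = re.compile(r'(?:\d*\s*-\s*\d*\s*,\s*){5,}')

print(bool(pat.search('bytes=0-1,2-3,4-5,6-7,8-9')))          # 5 ranges: allowed
print(bool(pat.search('bytes=0-1,2-3,4-5,6-7,8-9,10-11')))    # 6 ranges: blocked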

to protect apache it is necessary to filter malicious range requests
not only in the “Range:” header, but also in the “Request-Range:” header.

to emulate a directive “max_ranges 5;”, allowing at most 5 ranges:

if ($http_range ~ "(?:\d*\s*-\s*\d*\s*,\s*){5,}") {return 416;}
if ($http_request_range ~ "(?:\d*\s*-\s*\d*\s*,\s*){5,}") {return 416;}

to emulate a directive “max_ranges 1;”, allowing only one range:

if ($http_range ~ ",") {return 416;}
if ($http_request_range ~ ",") {return 416;}

to completely remove these headers while proxying requests to apache:

proxy_set_header Range "";
proxy_set_header Request-Range "";

We’d also like to notify you that for standalone nginx installations
we’ve produced the attached patch. This patch prevents handling
malicious range requests at all, instead outputting just the entire file
if the total size of all ranges is greater than the expected response.

this does not protect nginx from the “frequent nginx disk seek (D)DoS
attack”, and additional max_ranges checks/protections for nginx are
required!

but the workarounds

“if ($http_range ~ …”
“if ($http_request_range ~ …”

can be implemented to protect apache and nginx itself from (D)DoS
attacks only if nginx was configured and compiled with
ngx_http_rewrite_module.

if nginx was configured --without-http_rewrite_module, these workarounds
cannot be implemented in the nginx config to protect nginx itself, and
nginx will remain vulnerable to another type of DDoS:

  • the “frequent nginx disk seek (D)DoS attack” on huge static files.


Best regards,
Gena

Hello!

On Sun, Aug 28, 2011 at 09:42:23AM +0000, Venky Shankar wrote:

With a huge number of such ranges, a lot of memory is consumed.

Not really. The problem in Apache is not “not merging”, but O(N^2)
memory consumption while handling Range requests, where N is the number
of ranges requested.

See here for more information:

http://permalink.gmane.org/gmane.comp.apache.devel/45196
http://permalink.gmane.org/gmane.comp.apache.devel/45290
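
(For illustration only - this is not Apache’s actual code, just the
shape of the problem: if handling each of N ranges copies the list of
buckets accumulated so far, total memory grows as 1 + 2 + … + N:)

# generic O(N^2) illustration, not Apache code
n = 1000
buckets = []
copied = 0
for i in range(n):
    buckets.append(i)
    copied += len(buckets)        # each range re-copies all buckets so far
print(copied, n * (n + 1) // 2)   # both 500500, i.e. quadratic in n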

With nginx you are safe: there is no O(N^2) memory consumption.
Additionally, it won’t do any actual data processing with the HEAD
requests used by the attack script in question.

Maxim D.

Hello!

On Sun, Aug 28, 2011 at 05:19:25PM +0300, Gena M. wrote:

In order to mitigate this attack when your installation includes
apache behind nginx, we recommend the following:

  1. Refer to the above-mentioned security advisory CVE-2011-3192 for apache
    and implement the described measures accordingly.

these workarounds are needed only if a “naked” apache is open to the
internet. if apache listens only on 127.0.0.1 and is located behind an
nginx frontend, it is enough to implement protection only at the nginx
level.

I don’t recommend relying only on nginx-level protection even if
your backend server is only reachable from localhost. It’s always
a good idea to follow vendor recommendations and apply the needed
security fixes to affected software. This applies to other cases
as well, not only to this particular Apache problem.

to protect apache it is necessary to filter malicious range requests
not only in the “Range:” header, but also in the “Request-Range:” header.

Yes, this is a valid point. Please note that Apache’s updated
advisory incorrectly calls the header “Range-Request” in some
places, while it’s actually “Request-Range”. The actual
countermeasures in the advisory are correct, though.

to completely remove these headers while proxying requests to apache:

proxy_set_header Range "";
proxy_set_header Request-Range "";

In the case of Request-Range you don’t need any checks; just unset
it with

proxy_set_header Request-Range "";

in nginx or equivalent

RequestHeader unset Request-Range

in Apache.

It’s a long-obsolete header used by ancient browsers, never defined in
any standard. It’s not even supported by nginx.

We’d also like to notify you that for standalone nginx installations
we’ve produced the attached patch. This patch prevents handling
malicious range requests at all, instead outputting just the entire file
if the total size of all ranges is greater than the expected response.

this does not protect nginx from the “frequent nginx disk seek (D)DoS
attack”, and additional max_ranges checks/protections for nginx are
required!

I don’t think the “attack” you are talking about is something
practical. It requires prior knowledge of the urls of many really
large (and “cold”, i.e. not cached) files on the attacked site,
and it also relies on disk seeks being costly, which is not
always true (and almost never true for a single file, as even
“really large” usually means “much less than disk size”,
i.e. one can’t force full-disk seeks). Additionally, the maximum
number of ranges requested in such an “attack” is effectively limited
by the maximum header length to about 500 by default.
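
(The rough arithmetic behind that estimate, assuming offsets around 1M
so that each range costs about 16 bytes of header:)

# "1048576-1048577," is 16 characters; the default header buffer is 8k
print((8 * 1024) // len('1048576-1048577,'))   # 512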

(On the other hand, I do think that limiting the number of ranges to a
low number like 5, as suggested here and there, is harmful. A quick
look over a couple of days of logs on my server reveals perfectly
valid requests from Adobe Reader with up to 17 ranges. A minimum
sane value would be about 50.)

Maxim D.

Not really. The problem in Apache is not “not merging”, but O(N^2)
memory consumption while handling Range requests, where N is the number
of ranges requested.

Sure, but it hits even harder when it does not check for
overlapping/identical ranges. I guess nginx would send back 416 when it
encounters overlapping ranges (?), and the patch from Igor takes care
of the exceeding-content-length case.

See here for more information:

http://permalink.gmane.org/gmane.comp.apache.devel/45196
http://permalink.gmane.org/gmane.comp.apache.devel/45290

With nginx you are safe: there is no O(N^2) memory consumption.
Additionally, it won’t do any actual data processing with the HEAD
requests used by the attack script in question.

But GET involves data processing. Still, as you said, since there is no
O(N^2) [or the like] memory consumption with nginx, even GET requests
are safe.

Maxim D.



Thanks,
-Venky

Hello!

On Sun, Aug 28, 2011 at 04:48:59PM +0000, Venky Shankar wrote:

Not really. The problem in Apache is not “not merging”, but O(N^2)
memory consumption while handling Range requests, where N is the number
of ranges requested.

Sure, but it hits even harder when it does not check for overlapping/identical
ranges.

O(N^2) in Apache is only possible with overlapping ranges. That
doesn’t mean, though, that handling overlapping ranges isn’t
possible without O(N^2) memory consumption; it’s just how such
handling is implemented in Apache. (And the patch I linked
actually fixes memory consumption to be O(N).)

I guess nginx would send back 416 when it
encounters overlapping ranges (?), and the patch from Igor takes care of
the exceeding-content-length case.

No, overlapped ranges are perfectly ok in nginx; you are free to
request them and your request will likely be satisfied. While
they don’t really make sense from a theoretical point of view, I
would expect some sloppy software to actually use them.

But GET involves data processing. Still, as you said, since there is no O(N^2)
[or the like] memory consumption with nginx, even GET requests are safe.

Yes.

Maxim D.

Hello!

On Sun, Aug 28, 2011 at 11:39:14PM +0300, Gena M. wrote:


“A full fix is expected in the next 24 hours”.

[facepalm.jpg should be here]

Just follow vendor recommendations. “There are several immediate
options to mitigate this issue…”

Maybe it would be better to unset the “Request-Range” request header
as a built-in feature of nginx, to protect all vulnerable backends?
like the built-in feature “merge_slashes”, with default “merge_slashes on;”

I don’t think so. Unsetting arbitrary headers just because some
vulnerable software exists looks wrong to me. I believe the
Apache folks will be able to release a fix in a week or so, and
everybody will be happy enough after that. And I don’t expect
this vulnerability to appear in other software.

large (and “cold”, i.e. not cached) files on the attacked site,
and it also relies on disk seeks being costly, which is not
always true (and almost never true for a single file, as even
“really large” usually means “much less than disk size”,
i.e. one can’t force full-disk seeks).

one can’t. but multiple such requests can create a very high seek rate
on the disk subsystem, and the performance of the disk subsystem will
become very low.

For multiple requests you need multiple large and cold files.
That is what I was talking about.

Additionally, the maximum
number of ranges requested in such an “attack” is effectively limited
by the maximum header length to about 500 by default.

no, it is limited by the large_client_header_buffers directive;
by default it is 8k on 64-bit systems.

It’s not clear why you said “no” here. The maximum header length, as
you rightly outlined, is 8k by default, and this gives us about
500 as the maximum number of ranges possible with at least 1M
distance between them.

[…]

(On the other hand, I do think that limiting the number of ranges to a
low number like 5, as suggested here and there, is harmful. A quick
look over a couple of days of logs on my server reveals perfectly
valid requests from Adobe Reader with up to 17 ranges. A minimum
sane value would be about 50.)

FWIW: I grepped a bit more of the logs, and it looks like Adobe Reader
uses up to 200 ranges in a single request. At least I see multiple
requests with 200 ranges (or fewer), but not a single request with
more than 200.

probably this is a (special) feature of only some pdf reader software,
and for all other file types “max_ranges 1;” will be safe and harmless?

I haven’t myself seen any applications using multiple ranges except
Adobe Reader, though there were at least attempts to implement
JPEG2000 streaming using multiple-range requests, and probably
there are other applications as well.

Maxim D.

On 28.08.2011 19:36, Maxim D. wrote:

Following “Apache Killer” discussions and the advisory from 2011-08-24
(Advisory: Range header DoS vulnerability Apache HTTPD 2.x CVE-2011-3192)
we’d like to clarify a couple of things regarding nginx behavior
in either standalone or “combo” (nginx+apache) mode.

CVE-2011-3192 was updated on 26 Aug 2011; the UPDATE 2 version is available from
http://mail-archives.apache.org/mod_mbox/httpd-announce/201108.mbox/browser

In order to mitigate this attack when your installation includes
apache behind nginx, we recommend the following:

  1. Refer to the above-mentioned security advisory CVE-2011-3192 for apache
    and implement the described measures accordingly.

these workarounds are needed only if a “naked” apache is open to the
internet. if apache listens only on 127.0.0.1 and is located behind an
nginx frontend, it is enough to implement protection only at the nginx
level.

I don’t recommend relying only on nginx-level protection even if
your backend server is only reachable from localhost. It’s always
a good idea to follow vendor recommendations and apply the needed
security fixes to affected software. This applies to other cases
as well, not only to this particular Apache problem.

quote from the CVE-2011-3192 UPDATE 2 version from 26 Aug 2011:

“There is currently no patch/new version of Apache HTTPD which fixes
this vulnerability. This advisory will be updated when a long term fix
is available.”

“A full fix is expected in the next 24 hours”.

to completely remove these headers while proxying requests to apache:

 RequestHeader unset Request-Range

in Apache.

It’s a long-obsolete header used by ancient browsers, never defined in
any standard. It’s not even supported by nginx.

Ok.

Maybe it would be better to unset the “Request-Range” request header
as a built-in feature of nginx, to protect all vulnerable backends?
like the built-in feature “merge_slashes”, with default “merge_slashes on;”

We’d also like to notify you that for standalone nginx installations
we’ve produced the attached patch. This patch prevents handling
malicious range requests at all, instead outputting just the entire file
if the total size of all ranges is greater than the expected response.

this does not protect nginx from the “frequent nginx disk seek (D)DoS
attack”, and additional max_ranges checks/protections for nginx are
required!

I don’t think the “attack” you are talking about is something
practical. It requires prior knowledge of the urls of many really
large (and “cold”, i.e. not cached) files on the attacked site,
and it also relies on disk seeks being costly, which is not
always true (and almost never true for a single file, as even
“really large” usually means “much less than disk size”,
i.e. one can’t force full-disk seeks).

one can’t. but multiple such requests can create a very high seek rate
on the disk subsystem, and the performance of the disk subsystem will
become very low.

Additionally, the maximum
number of ranges requested in such an “attack” is effectively limited
by the maximum header length to about 500 by default.

no, it is limited by the large_client_header_buffers directive;
by default it is 8k on 64-bit systems.

and one such malicious “Range:” request
can cause a few hundred “seek” operations.

and a few hundred such malicious requests can easily
generate a very high seek rate via a vulnerable nginx.

and none of the nginx built-in protection directives
limit_req / limit_conn / keepalive_requests / …
can help protect against this type of DDoS attack.

a workaround/fix for this “vulnerability” can be implemented
only via the optional rewrite module, if nginx is compiled with it.

and yes, this attack vector applies only to web servers
with many large files (video/iso) on non-SSD storage devices.

(On the other hand, I do think that limiting the number of ranges to a
low number like 5, as suggested here and there, is harmful. A quick
look over a couple of days of logs on my server reveals perfectly
valid requests from Adobe Reader with up to 17 ranges. A minimum
sane value would be about 50.)

probably this is a (special) feature of only some pdf reader software,
and for all other file types “max_ranges 1;” will be safe and harmless?

P.S.

how to force 411 seek operations by one malicious request:

Range: bytes=0-1,1072694271-1072694272,2097154-2097155,…

========================================================================

#!/usr/bin/python

# build the longest "Range:" header that fits into the default 8k
# large_client_header_buffers, with ~1M between range starts

size = 1 * 1024 * 1024 * 1024    # 1G file
step = 1 * 1024 * 1024 + 1       # ~1M distance between range starts

line = 'Range: bytes='
limit = 8 * 1024                 # default header buffer size

count = 0

def point(x):
    return str(x) + '-' + str(x + 1) + ','

seq = [point(x) for x in range(0, size - 1, step)]

# reverse every second range so that consecutive ranges lie far
# apart in the file, maximizing the distance of each seek
seq[1::2] = reversed(seq[1::2])

for range_ in seq:
    if len(line + range_) < limit:
        line += range_
        count += 1
    else:
        line = line[:-1]         # drop the trailing comma
        break

print('how to force %d seek operations by one malicious request:' % count)
print('')
print(line)

========================================================================


Best regards,
Gena

On 29.08.2011 3:15, Maxim D. wrote:

We’d also like to notify you that for standalone nginx installations
we’ve produced the attached patch. This patch prevents handling
malicious range requests at all, instead outputting just the entire file
if the total size of all ranges is greater than the expected response.

this does not protect nginx from the “frequent nginx disk seek (D)DoS
attack”, and additional max_ranges checks/protections for nginx are
required!

I don’t think the “attack” you are talking about is something
practical. It requires prior knowledge of the urls of many really
large (and “cold”, i.e. not cached) files on the attacked site,
and it also relies on disk seeks being costly, which is not
always true (and almost never true for a single file, as even
“really large” usually means “much less than disk size”,
i.e. one can’t force full-disk seeks).

one can’t. but multiple such requests can create a very high seek rate
on the disk subsystem, and the performance of the disk subsystem will
become very low.

For multiple requests you need multiple large and cold files.
That is what I was talking about.

for example - public mirror servers with multiple large *.iso files.

in this and similar cases the web server’s disk subsystem will be the
bottleneck, if one malicious request can force nginx to perform 411 seek
operations while sending the attacker only ~411 bytes as the reply.

nginx can now easily handle thousands of requests per second,
but the server’s disk subsystem can’t efficiently handle so many seeks.
all other methods of DDoS attack on such servers are more laborious.
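
(rough arithmetic with assumed numbers - a single 7200rpm spindle
handles on the order of 150 random seeks per second:)

seeks_per_request = 411    # from the PoC script above
disk_iops = 150            # assumed typical value for one spindle
print(seeks_per_request / disk_iops)   # ~2.7 seconds of seeking per request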

and if nginx is built without the optional rewrite module - there is no
way to protect it.

(On the other hand, I do think that limiting the number of ranges to a
low number like 5, as suggested here and there, is harmful. A quick
look over a couple of days of logs on my server reveals perfectly
valid requests from Adobe Reader with up to 17 ranges. A minimum
sane value would be about 50.)

FWIW: I grepped a bit more of the logs, and it looks like Adobe Reader
uses up to 200 ranges in a single request. At least I see multiple
requests with 200 ranges (or fewer), but not a single request with
more than 200.

for relatively small and “hot” pdf files this is not a problem at all;
such a popular pdf file can sit completely in the operating system file
cache.

also, only for such special cases, more ranges can easily be allowed:

http {
    max_ranges 1;

    server {
        location ~* \.pdf$ { max_ranges 200; }
    }
}

Adobe Reader will be happy, and the web server will not be
vulnerable to the “very frequent disk seek” nginx DDoS attack.

or - make a built-in nginx max_ranges autotuning feature:

for bigger files, a smaller number of ranges is allowed.

for example (a python sketch of this table follows below):

for example:

sz == file_size of requested file_name

sz <= 1M: range requests limited only by large_client_header_buffers

1M < sz <= 64M: allow only 200 range requests, else return 416 error

64M < sz <= 128M: allow only 64 range requests, else return 416 error

128M < sz <= 256M: allow only 32 range requests, else return 416 error

256M < sz <= 512M: allow only 16 range requests, else return 416 error

512M < sz <= 1G: allow only 8 range requests, else return 416 error

1G < sz <= 2G: allow only 4 range requests, else return 416 error

2G < sz <= 4G: allow only 2 range requests, else return 416 error

4G < sz <= inf: allow only 1 range requests, else return 416 error

also - use these rules only if “max_ranges auto;”,
and make “auto” the default value of the “max_ranges” directive.

if “max_ranges off;” - totally disable this type of protection,
so that range requests are limited only by large_client_header_buffers.

if “max_ranges N;” - limit the current and nested locations to N ranges.
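
a python sketch of the proposed “auto” table (hypothetical, just
restating the thresholds above):

M = 1024 * 1024
G = 1024 * M

# (upper size bound, allowed ranges); None = limited only by
# large_client_header_buffers
TABLE = [(1 * M, None), (64 * M, 200), (128 * M, 64), (256 * M, 32),
         (512 * M, 16), (1 * G, 8), (2 * G, 4), (4 * G, 2)]

def max_ranges_auto(sz):
    for bound, limit in TABLE:
        if sz <= bound:
            return limit
    return 1   # larger than 4G: a single range only

print(max_ranges_auto(3 * G))   # a 3G file -> 2 ranges allowed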

probably this is a (special) feature of only some pdf reader software,
and for all other file types “max_ranges 1;” will be safe and harmless?

I haven’t myself seen any applications using multiple ranges except
Adobe Reader, though there were at least attempts to implement
JPEG2000 streaming using multiple-range requests, and probably
there are other applications as well.

Ok.

Maxim, what do you think about a built-in nginx protection feature
such as a “max_ranges” directive - would it be useless or useful?

I think that with a “max_ranges” directive defaulting to “auto”,
nginx will be “secure by default”, like the vsftpd or postfix servers.


Best regards,
Gena

On Mon, Aug 29, 2011 at 09:30:54PM +0300, Gena M. wrote:

Maxim, what do you think about a built-in nginx protection feature
such as a “max_ranges” directive - would it be useless or useful?

I think that with a “max_ranges” directive defaulting to “auto”,
nginx will be “secure by default”, like the vsftpd or postfix servers.

max_ranges is the obvious solution. With default value “any”.


Igor S.

On 8/27/11 4:11 AM, Igor S. wrote:

  1. Refer to the above-mentioned security advisory CVE-2011-3192 for apache
    and implement the described measures accordingly.

Apache 2.2.20 has been released to address this issue. Please see
http://www.apache.org/dist/httpd/Announcement2.2.html.


Jim O.