Forum: NGINX Patch against server DoS

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately do not have the time to support and maintain the forum any more. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
double (Guest)
on 2014-08-15 20:25
(Received via mailing list)
Hello,

My NGINX got hit by a denial of service. The machine proxied large files
using "proxy_store".
Someone was crafting artificial requests for rarely used files, causing
NGINX to download a big file from upstream, then immediately closing the
connection. NGINX continued to download the file.
Then he did the same again with another rarely used file.
Within a couple of minutes I had thousands of connections, downloading
huge files from the backend.

My solution was to add a small feature:
proxy_ignore_client_abort    10%;
If the server has downloaded less than 10% of the file from the backend
machine, it closes the connection to the backend as soon as the client
closes its connection to the server, even if "proxy_store" is used.
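A minimal sketch of how the patched directive might be used in a typical "proxy_store" mirror setup. The server name and paths are placeholders, and the percentage argument to proxy_ignore_client_abort is specific to this patch; stock nginx only accepts on/off for that directive:

```nginx
# Sketch assuming the patched nginx; hostnames and paths are examples.
location /files/ {
    root        /var/cache/mirror;
    error_page  404 = @fetch;
}

location @fetch {
    internal;
    proxy_pass          http://backend.example.com;
    proxy_store         on;
    proxy_store_access  user:rw group:rw all:r;
    # Patched behaviour: if the client aborts before 10% of the
    # upstream response has been received, abort the upstream
    # transfer instead of completing the proxy_store download.
    proxy_ignore_client_abort  10%;
}
```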

The patch:
http://doppelbauer.name/abort-upstream-161.patch

Thanks a lot
Markus

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,252594,252594#msg-252594
B.R. (Guest)
on 2014-08-16 10:22
(Received via mailing list)
Hello,

I may have missed something, but it was my understanding that nginx
continuously sends data to clients, filling up buffers while the client
empties them at the same time (FIFO).
Thus, to me, the backend download stops when the allocated buffer(s)
are full, waiting for space to become available in them.

That is how/why, to my understanding (again), nginx is supposed to be
able to handle slow clients.

The intuitive solution, if it happened to me, would have been to reduce
the buffer size and count to ensure they fill up more quickly (and thus
throttle downloading from upstream with the same velocity).
In the end, the computation of the 'lost' resource is:
- in space: number of 'attackers' * number of buffers * buffer size
- in time: the space calculated above / upstream downloading speed (an
average would be enough)
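A hedged sketch of the buffer-reduction approach described above. The directive names are standard nginx proxy-module directives, but the specific values and the upstream name are illustrative only:

```nginx
location / {
    proxy_pass       http://backend.example.com;
    # Keep buffering enabled but shrink the buffers so that a
    # client that stops reading stalls the upstream transfer sooner.
    proxy_buffering          on;
    proxy_buffer_size        4k;    # buffer for the response header
    proxy_buffers            4 4k;  # count and size of body buffers
    proxy_busy_buffers_size  8k;
    # Caveat: with proxy_store the response is also spooled to a
    # temporary file, so small memory buffers alone may not stop
    # nginx from fetching the whole file from upstream.
}
```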

Is your patch not redundant with existing capabilities?
You just added another calculation, competing with the one above,
multiplying the above values by 10%. You could just as well have reduced
the settings above to achieve the same result, could you not? Not to
mention the risk of introducing vulnerabilities/instabilities with a
custom patch.

What if the attacker modifies his client to ensure it downloads 50% of
the file (thanks to his /dev/null)? Your patch becomes useless and the
resource consumption grows back to what it used to be... On the other
hand, the standard approach of modifying how you handle upstream data
would keep resisting, whatever amount of data any client grabs.

What have I missed here?
---
*B. R.*