Forum: NGINX Patch against server DoS

double (Guest)
on 2014-08-15 20:25
(Received via mailing list)

My NGINX got hit by a denial of service. The machine proxied large files
using proxy_store. Someone was crafting artificial requests for rarely
used files, causing NGINX to download a big file from upstream, then he
immediately closed the connection. NGINX continued to download this file.
Then he did the same again with some other rarely used file.
Within a couple of minutes I had thousands of connections, all
downloading big files from the backend.

My solution was to add a small feature:
proxy_ignore_client_abort    10%;
If the server has not yet downloaded at least 10% of the file from the
backend machine, it closes the connection to the backend as soon as the
client closes the connection to the server, even if "proxy_store" is
used.
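In config terms, a minimal sketch of how the patched directive would be
used (the percentage argument is what the patch adds; stock nginx only
accepts "on"/"off" here; the backend name and location are made up):

    location /files/ {
        proxy_pass    http://backend;
        proxy_store   on;
        # patched behaviour: if the client aborts before 10% of the
        # upstream response has arrived, drop the upstream connection;
        # past 10%, keep downloading to completion, as with "on"
        proxy_ignore_client_abort    10%;
    }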

The patch:

Thanks a lot

Posted at Nginx Forum:,252594,252594#msg-252594
B.R. (Guest)
on 2014-08-16 10:22
(Received via mailing list)

I may have missed something, but it was my understanding that nginx
continuously sends data to clients, thus filling up buffers while the
client empties them at the same time (FIFO).
Thus, to me, the upload from the backend was supposed to stop when the
allocated buffer(s) was (were) full, waiting for space to become
available in it (them).

That is how/why, to my understanding (again), nginx was supposed to be
able to handle slow clients.

The intuitive solution, if it were to happen to me, would have been to
reduce the buffer(s) size + number to ensure they fill up quicker (and
thus stop downloading from upstream with the same velocity), as sketched
below.
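In stock nginx that tuning would look something like this (made-up
values; proxy_max_temp_file_size 0 additionally keeps plain-proxied
responses out of temporary files, so the upstream read is throttled by
the client, although per the docs that limit does not apply to responses
being cached or stored on disk):

    proxy_buffer_size         4k;
    proxy_buffers             4 4k;   # few, small buffers: they fill up fast
    proxy_max_temp_file_size  0;      # no temp files: upstream read waits on the client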
In the end, the computation of the 'lost' resource is done:
- in space, with number of 'attackers' * num buffers * size buffer
- in time, with the space calculated above / upstream downloading speed
(an average would be enough)
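For example, with made-up numbers: 1000 aborted connections * 8 buffers
* 8 KiB per buffer = ~62 MiB of buffer space in total, and at an average
upstream speed of 100 Mbit/s (~12 MiB/s) that space fills up in roughly
5 seconds, after which the upstream downloads stall.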

Is your patch not redundant with existing capabilities?
You just added another calculation, competing with the one above,
multiplying the above values by 10%. You could just as well have reduced
the settings above to meet the same result, could you not? Not to
mention the risk of introducing vulnerabilities/instabilities with a
custom patch.

What if the attacker modifies his client to ensure it downloads 50% of
the file (thanks to his /dev/null)? Your patch becomes useless and the
resources grow back to what they used to be... On the other hand, the
standard way of modifying how you handle upstream data would have kept
resisting, whatever amount of data any client grabs.

What have I missed here?
*B. R.*