Protection against massive requests from a single server / IP

Hello Nginx community,

what is the best way to protect my nginx webserver against massive
requests from a single server/IP? I did some tests with openload and saw
that one server running openload can fill the whole 100Mbit connection
to my server.

What are your setups for protecting against a lot of requests from single servers?

Thanks for your help.

Kind regards.


On 1/31/10 2:36 AM, [email protected] wrote:

Hello Nginx community,

what is the best way to protect my nginx webserver against massive requests from a single server/IP? I did some tests with openload and saw that one server running openload can fill the whole 100Mbit connection to my server.

http://wiki.nginx.org/NginxHttpLimitReqModule

http://wiki.nginx.org/NginxHttpLimitZoneModule

These should do the trick for you.
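
A minimal sketch of how the two modules might be combined; the zone
names, rate, burst and connection limit below are only example values,
and the directives go in the http block (limit_req_zone / limit_req from
the first module, limit_zone / limit_conn from the second):

    http {
        # 10 MB zone keyed by client address, roughly 1 request/sec per IP
        limit_req_zone  $binary_remote_addr  zone=req_per_ip:10m  rate=1r/s;

        # second zone used to cap simultaneous connections per IP
        limit_zone  conn_per_ip  $binary_remote_addr  10m;

        server {
            listen 80;

            location / {
                # allow short bursts of 5 requests, then start limiting
                limit_req   zone=req_per_ip  burst=5;

                # at most 10 parallel connections per client address
                limit_conn  conn_per_ip  10;
            }
        }
    }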


Jim O.

2010/1/31 [email protected]:

What are your setups for protecting against a lot of requests from single servers?

For larger installations, firewalls or properly configured routers in
front of any servers.

For tiny, home, and experimental setups, iptables [1] with rules such as:
# accept traffic from your own trusted network first
# (300.300.300.0/24 is an intentionally invalid placeholder; use a real prefix)
-A INPUT -s 300.300.300.0/24 -j ACCEPT
# drop packets from sources recently flagged as attackers
-A INPUT -m recent --rcheck --seconds 120 --name ATTACKER --rsource -j DROP
# send new TCP connection attempts (SYN only) through the syn-flood chain
# (create the chain first, e.g. iptables -N syn-flood)
-A INPUT -p tcp -m tcp --tcp-flags SYN,RST,ACK SYN -j syn-flood
# up to 14 SYN/s (burst 30) pass; anything above is logged, flagged and dropped
-A syn-flood -m limit --limit 14/sec --limit-burst 30 -j RETURN
-A syn-flood -j LOG --log-prefix "Firewall: SYN-flood "
-A syn-flood -m recent --set --name ATTACKER --rsource
-A syn-flood -j DROP
… where lots of requests are treated as a SYN flood.
But beware: someone could exploit these rules by forging source
IPs (see source address validation [2]), and your server still does
work discarding these request packets, so it could become
unresponsive if the request volume is very high (at least take a look
at SYN cookies [3]).
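
For reference, on a Linux host SYN cookies can usually be enabled with a
single sysctl (kernel support assumed; add the setting to
/etc/sysctl.conf to make it persistent):

    # answer SYNs with cookies instead of keeping per-connection state
    sysctl -w net.ipv4.tcp_syncookies=1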


W-Mark K.

[1] http://www.netfilter.org/
[2] Source Address Validation Improvements (SAVI), IETF working group
[3] SYN cookies, https://en.wikipedia.org/wiki/SYN_cookies

Hello Jim,

thanks for your help. Am I right that HttpLimitReq and HttpLimitZone are
per-client limits? Does every client (IP) get its own limit, rather than
all clients sharing one?

Is $binary_remote_addr the right value if my nginx is behind haproxy? Or
must I use something like X-Real-IP? I am not sure how I can debug what
$binary_remote_addr contains behind haproxy; perhaps it is the haproxy
IP and not the client's IP address.
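
A rough sketch of what I have in mind, assuming nginx is built with the
realip module (--with-http_realip_module) and haproxy is configured to
insert an X-Real-IP header; the address and log path are placeholders:

    # trust the client-address header only when the request comes from haproxy
    set_real_ip_from  10.0.0.1;          # placeholder: haproxy's address
    real_ip_header    X-Real-IP;

    # log what nginx ends up seeing as the client address, to verify it
    log_format  realip  '$remote_addr "$request"';
    access_log  /var/log/nginx/realip.log  realip;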

Kind regards

-------- Original Message --------

Date: Sun, 31 Jan 2010 02:46:32 -0500
From: Jim O. [email protected]
To: [email protected]
Subject: Re: Protection against massive requests from a single server / IP

http://wiki.nginx.org/NginxHttpLimitZoneModule
