Limiting number of connections to upstream servers

Hello,

Currently we use nginx 0.7 in combination with multiple FastCGI backends
that process requests dynamically, without caching. We’d like to prevent
a DoS attack by limiting the number of connections each backend handles
simultaneously.

I’m wondering what the best way is to do this. I’d love to be able to
specify the maximum number of open connections for each upstream server
individually; that seems to be the most straightforward solution, but I
couldn’t find anything in the docs that allows it. There are
worker_processes and worker_connections, but they’re global to the
entire nginx server, and since the server also handles static requests
(many more than dynamic FastCGI requests), there’s little I can do with
those.
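
For what it’s worth, later nginx releases did add exactly this knob: a
max_conns parameter on upstream server entries. Nothing like it exists
in 0.7, so the following is only a sketch of that shape, with made-up
backend addresses:

upstream fcgi_backends {
    # max_conns caps the number of simultaneous connections nginx
    # will open to each server (not available in nginx 0.7).
    server 10.0.0.1:9000 max_conns=100;
    server 10.0.0.2:9000 max_conns=100;
}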

The other solution I can think of is to have the FastCGI backend
processes monitor the number of connections they’re handling themselves.
That has the drawback that each type of backend process must be able to
do this. Also, I imagine one backend could end up refusing a connection
while another backend still had open slots; nginx itself could handle
that better.

How should this best be resolved?


Nothing? Is there anything else I should be looking at to prevent
overloading a backend?

E.g. splitting nginx into two instances, one for dynamic requests and
one for static requests? The dynamic-request instance would then limit
connections via worker_connections, on the assumption that it
distributes requests fairly over the multiple (identical) backends?
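
A rough sketch of that two-instance layout (ports, paths and backend
addresses are made up). Note that worker_connections counts both the
client side and the upstream side of a proxied request, so each
in-flight request uses two connection slots:

# --- public instance: static files, forwards dynamic requests ---
worker_processes 4;
events { worker_connections 4096; }
http {
    server {
        listen 80;
        location / {
            root /var/www/static;
        }
        location ~ \.php$ {
            proxy_pass http://127.0.0.1:8081;
        }
    }
}

# --- capped instance: dynamic requests only ---
worker_processes 1;
events { worker_connections 64; }   # hard cap: ~32 concurrent requests
http {
    upstream fcgi_pool {
        # round-robin over the identical backends
        server 10.0.0.1:9000;
        server 10.0.0.2:9000;
    }
    server {
        listen 127.0.0.1:8081;
        location / {
            fastcgi_pass fcgi_pool;
            include fastcgi_params;
        }
    }
}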



On Sunday, May 2, 2010, brama [email protected] wrote:

Nothing? Is there anything else I should be looking at to prevent overloading a backend?

E.g. splitting nginx into two instances, one for dynamic requests and one for static requests? The dynamic-request instance would then limit connections via worker_connections, on the assumption that it distributes requests fairly over the multiple (identical) backends?

It’s a bit of a hack, but you could have nginx proxy to itself and then
to the backend, and apply connection limits at the middle layer. You
could probably even use error_page to have clients wait a random
interval and retry when the backend is too busy.
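
Roughly like this, as a sketch (names, ports and numbers are made up;
this uses the current limit_conn_zone spelling, where 0.7 has
limit_zone / limit_conn with the same idea):

http {
    # One shared counter keyed on the server name caps total
    # concurrency at the middle layer, not per-client concurrency.
    limit_conn_zone $server_name zone=fcgi_conns:1m;

    # Public-facing server: static files directly, dynamic requests
    # hop through the limited middle layer below.
    server {
        listen 80;
        location / {
            root /var/www/static;
        }
        location ~ \.php$ {
            proxy_pass http://127.0.0.1:8081;
        }
    }

    # Middle layer: enforce the cap before handing off to FastCGI.
    server {
        listen 127.0.0.1:8081;
        limit_conn fcgi_conns 50;      # at most 50 in-flight requests
        error_page 503 /busy.html;     # over the cap: ask clients to retry

        location / {
            fastcgi_pass 127.0.0.1:9000;
            include fastcgi_params;
        }
        location = /busy.html {
            root /var/www/errors;      # static "try again later" page
        }
    }
}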


RPM

You could direct your traffic to something like HAProxy, which can
detect upstream server failures. I am not sure what facilities are
available for nginx.
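
That said, stock nginx does at least do passive failure detection via
max_fails / fail_timeout on upstream entries, e.g. (made-up addresses):

upstream backends {
    # After 3 failed attempts within 30s, nginx stops sending this
    # server traffic for 30s, then tries it again with live requests.
    server 10.0.0.1:9000 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:9000 max_fails=3 fail_timeout=30s;
}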


I’m using nginx as a reverse proxy, with the upstream module
successfully doing round-robin load balancing between two backend
servers. Is there some way I can determine whether one of the backend
servers has gone down? Nginx will seamlessly take it out of the pool if
it does go down, which is great, but it would be handy to know about it
so an alert could be sent to the admin.
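
The only lead I have so far, and this is an assumption rather than a
built-in alerting feature: the failures that make nginx pull a server
from the pool are written to the error log, so an external watcher
could tail that file and mail the admin. With

error_log /var/log/nginx/error.log error;

failed backends show up as lines like "connect() failed (111:
Connection refused) while connecting to upstream", and "no live
upstreams" appears once every backend is down.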

John M. at 2010-5-4 21:07 wrote:

I’m using nginx as a reverse proxy, with the upstream module
successfully doing round-robin load balancing between two backend
servers. Is there some way I can determine whether one of the backend
servers has gone down? Nginx will seamlessly take it out of the pool
if it does go down, which is great, but it would be handy to know
about it so an alert could be sent to the admin.

See this:




Weibin Y.

# Reject new connections (SYN) to port 111 with a TCP reset once a
# single source address (/32) already has more than 1024 connections open:
iptables -A INPUT -p tcp -m tcp --dport 111 \
    --tcp-flags FIN,SYN,RST,ACK SYN \
    -m connlimit --connlimit-above 1024 --connlimit-mask 32 \
    -j REJECT --reject-with tcp-reset
