Forum: NGINX limit_req_zone limit by location/proxy

Justin Deltener (Guest)
on 2013-11-13 04:25
(Received via mailing list)
For the life of me I can't seem to get my configuration correct to limit
requests. I'm running nginx 1.5.1 and have it serving up static content
and pushing all non-existent requests to the apache2 proxy backend for
serving up. I don't want to limit any requests to static content, but do
want to limit requests to the proxy. It seems no matter what I put in my
configuration, I continue to see entries in the error log for IP
addresses which are not breaking the rate limit.

2013/11/12 20:55:28 [warn] 10568#0: *1640292 delaying request, excess:
0.412, by zone "proxyzone" client ABCD

I've tried using a map at the top level, like so:

 limit_req_zone  $limit_proxy_hits  zone=proxyzone:10m   rate=4r/s;

 map $request_filename $limit_proxy_hits
 {
        default "";
        # only limit requests whose filename ends in a slash, since we
        # may have something.php which should not be limited
        ~/$ $binary_remote_addr;
 }

yet when I look at the logs, IP ABCD has been delayed for a url ending
in a slash, BUT when I look at all proxy requests for that IP, it is
clearly not going over the limit. It really seems that no matter what,
the limit_req_zone still counts static content against the limit, or
something else equally confusing.

I've also attempted

limit_req_zone  $limit_proxy_hits  zone=proxyzone:10m   rate=4r/s;

and then setting $limit_proxy_hits inside the server/location:

server
{
    set $limit_proxy_hits "";

    location /
    {
        set $limit_proxy_hits $binary_remote_addr;
    }
}

and while the syntax doesn't bomb, it seems to exhibit the exact same
behavior as above.

ASSERT:

a) When I clearly drop 40 requests from an IP, it clearly lays the smack
down on a ton of requests, as it should
b) I do a kill -HUP on the primary nginx process after each test
c) I keep getting warnings on requests from IPs which are clearly not
going over the proxy limit
d) I have read about the leaky-bucket algorithm, and unless I'm totally
missing something, a rate of 4r/s should always allow traffic until we
start to go OVER 4r/s, which isn't the case.

The documentation doesn't have any real deep insight into how this works
and I could really use a helping hand. Thanks!
Maxim Dounin (Guest)
on 2013-11-13 12:27
(Received via mailing list)
Hello!

On Tue, Nov 12, 2013 at 09:24:57PM -0600, Justin Deltener wrote:

> [...]
>
> d) I have read the leaky-bucket algorithm and unless i'm totally missing
> something a max of 4r/s should always allow traffic until we start to go
> OVER 4r/s which isn't the case.
>
> The documentation doesn't have any real deep insight into how this works
> and I could really use a helping hand. Thanks!

Just some arbitrary facts:

1. The config you've provided doesn't configure any limits, as it
doesn't contain a limit_req directive.  See
http://nginx.org/r/limit_req for documentation.

2. The "delaying request" message means exactly this - nginx is
delaying requests since the average request rate exceeds the
configured rate.  It basically means that the "bucket" isn't empty
and a request has to wait some time until it is allowed to
continue.  This message shouldn't be confused with the "limiting
requests" message, which is logged when requests are rejected
because the burst limit has been reached.

As long as the rate is set to 4r/s, it's enough to do two requests
with less than 250ms between them to trigger the "delaying request"
message, which can easily happen as a pageview usually results in
multiple requests (one request to load the page itself, and
several other requests to load various included resources like css,
images and so on).

It might be a good idea to use "limit_req ... nodelay" to
instruct nginx not to do anything unless the configured burst
limit is reached.

3. Doing a "kill -HUP" doesn't clear limit_req stats and is mostly
useless between tests.

4. To differentiate between various resources, there is a
directive called "location", see http://nginx.org/r/location.
If you want to limit requests to some resources, but not others,
it's a good idea to do so by using two distinct locations, e.g.:

    location / {
        limit_req zone=proxyzone burst=10 nodelay;
        proxy_pass http://...
    }

    location /static/ {
        # static files, no limit_req here
    }

5. The documentation is here:

http://nginx.org/en/docs/http/ngx_http_limit_req_module.html

--
Maxim Dounin
http://nginx.org/en/donation.html
Justin Deltener (Guest)
on 2013-11-13 14:18
(Received via mailing list)
I thought I did a good job detailing my issue and setup, but clearly
didn't do that well. I apologize.

1) I am using limits, which is why I mentioned it is delaying requests.
Specifically, I'm using: limit_req zone=proxyzone burst=6;

2) I understand the difference between a delayed request and one
responded to with a 503. What I think I'm getting hung up on is what to
expect under a given scenario. If I set up the zone with a rate of
4r/s, I would expect that no matter what, pivoting on the IP, a person
should be able to perform 4 requests every second without any delays or
503's. (assuming we're able to count ONLY proxy hits and not take
static content into account for the current requests, which is what I'm
attempting to do) Using a burst of 6, I would expect that 8 requests in
one second would have 4 at full speed, 2 delayed and 2 dropped, but it
seems that's where I'm horribly wrong. You said "As long as rate is set
to 4r/s, it's enough to do two requests with less than 250ms between
them to trigger "delaying request" message". I'm confused; why would
4r/s not allow 4 requests per second at full speed?? Isn't that the
entire point?

I do realize a given page will have numerous static hits that would
normally count against a person's request rate, but I'm literally
attempting to take all static requests out of the equation so the rate
per second as well as the burst/503's are only applied to a given url
pattern. I really shouldn't/will not throttle static requests, but I do
know that for any proxy hits, an actual person browsing the site should
never exceed 4 requests per second, and for a bit of a fudge factor,
allow them a burst of up to 6. After 6 I expect the site to start
spitting out 503's.

3) I was pointing out that I am refreshing the config. I'm not worried
about hit counters, as in reality, if my counting of ONLY proxy hits
was working properly, this wouldn't be any real issue.

4) Yup, I do have numerous location directives, and I'm only placing the
limit_req directive under a single proxy location.

5) Thanks for the link, but I have read that document a hundred times
and there is still a ton it doesn't cover.


I appreciate your response, Maxim!
On Wed, Nov 13, 2013 at 5:27 AM, Maxim Dounin <mdounin@mdounin.ru> wrote:

> [...]



--

Justin Deltener

Nerd Curator | Alpha Omega Battle Squadron

Toll Free: 1-877-216-5446 x3921

Local: 701-253-5906 x3921

RealTruck.com <http://www.realtruck.com/>

Guiding Principle #3 <http://www.realtruck.com/about-realtruck/#realtruc...> Improve
Maxim Dounin (Guest)
on 2013-11-13 14:40
(Received via mailing list)
Hello!

On Wed, Nov 13, 2013 at 07:17:36AM -0600, Justin Deltener wrote:

[...]

> current requests..which is what i'm attempting to do) Using a burst of 6, i
> would expect a request of 8 in one second would have 4 at full speed, 2
> delayed and 2 dropped but it seems that's where i'm horribly wrong. You
> said "As long as rate is set to 4r/s, it's enough to do two requests with
> less than 250ms between them to trigger "delaying request message". I'm
> confused, why would 4r/s not allow 4 requests per second at full speed??
> Isn't that the entire point.

Two requests with 100ms between them means that requests are coming
at a 10 requests per second rate.  That is, the second request has
to be delayed.

Note that specifying rate of 4 r/s doesn't imply 1-second
measurement granularity.  Much like 60 km/h speed limit doesn't
imply that you have to drive for an hour before you'll reach a
limit.
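
This accounting can be sketched with a simplified model of nginx's
leaky-bucket bookkeeping. This is a rough approximation only: real
nginx uses millisecond fixed-point arithmetic inside a shared-memory
zone, and details like node expiry are ignored here.

```python
def limit_req(arrivals, rate, burst=0.0, nodelay=False):
    """Rough model of nginx limit_req leaky-bucket accounting.

    arrivals: sorted request times in seconds; rate in requests/second.
    Returns one of "pass", "delay <seconds>", or "reject" per request.
    """
    excess = 0.0   # how far the client is currently "ahead" of the rate
    last_t = None
    results = []
    for t in arrivals:
        if last_t is None:
            new_excess = 0.0                 # first request: empty bucket
        else:
            # the bucket drains at `rate` req/s; each arrival adds one
            new_excess = max(excess - rate * (t - last_t) + 1.0, 0.0)
        if new_excess > burst:
            results.append("reject")         # 503; bucket state unchanged
            continue
        excess, last_t = new_excess, t
        if excess > 0.0 and not nodelay:
            results.append("delay %.3f" % (excess / rate))
        else:
            results.append("pass")
    return results

# Two requests 100 ms apart against rate=4r/s burst=6: the second is
# already 0.6 requests ahead of the allowed rate, so it is delayed by
# 0.6 / 4 = 0.15 s -- matching the "excess: 0.412"-style log entries.
print(limit_req([0.0, 0.1], rate=4, burst=6))
```

This also reproduces the behavior discussed below: with nodelay set,
requests pass untouched until the excess exceeds the burst, after which
they are rejected outright.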

--
Maxim Dounin
http://nginx.org/en/donation.html
Justin Deltener (Guest)
on 2013-11-13 16:10
(Received via mailing list)
Aha, that is the lightbulb moment.

So if we're talking actual rate, which makes sense, how would you set up
a scenario with the following requirements?

You can have whatever rate you want as long as you don't exceed 5 proxy
requests in the same second. I don't care if 5 come within 5ms of each
other. Hitting 6 total proxy requests in 1 second would kill the
request. It seems we can't really specify that without increasing the
rate, which in turn could allow a sustained session with high rates to
still have a ton of requests come in to kill the server.

We're attempting to account for 301 redirects, which spawn requests much
faster than normal human requests. I realize we could add a GET param to
the url to excuse it from the limit, but that seems a bit out there..

I also don't quite understand how long a burst rate can be sustained. It
seems one could set the default rate to 1r/m and set the burst to
whatever you like..

Does that make sense?



On Wed, Nov 13, 2013 at 7:40 AM, Maxim Dounin <mdounin@mdounin.ru> wrote:

> [...]
Maxim Dounin (Guest)
on 2013-11-13 17:02
(Received via mailing list)
Hello!

On Wed, Nov 13, 2013 at 09:09:55AM -0600, Justin Deltener wrote:

> requests come in to kill the server.

What you are asking about is close to something like this:

    limit_req_zone ... rate=5r/s;
    limit_req ... burst=5 nodelay;

That is, up to 5 requests (note "burst=5") are allowed at any rate
without any delays.  If there are more requests and the rate
remains above 5r/s, they are rejected.

> We're attempting to account for 301 redirects which spawn requests much
> faster than normal human requests. I realize we could add a get param to
> the url to excuse it from the limit, but that seems a bit out there..
>
> I also don't quite understand how long a burst rate can be sustained. It
> seems one could set the default rate to 1/m and set the burst to whatever
> you like..
>
> Does that make sense?

The burst parameter configures the maximum burst size, in requests
(in terms of the "leaky bucket", it's the bucket size).  In most
cases, it's a reasonable approach to set a relatively low rate,
switch off delay, and configure a reasonable burst size to account
for various things like redirects, opening multiple pages to read
them later, and so on.
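
Put together, the advice in this thread amounts to something like the
following sketch. The zone name, backend address, and location layout
here are assumptions for illustration, not taken from the actual
config:

```nginx
http {
    # key on the client address; 10m shared zone, 5 requests/second
    limit_req_zone $binary_remote_addr zone=proxyzone:10m rate=5r/s;

    server {
        location / {
            # proxied requests only: allow a burst of up to 5 extra
            # requests at any spacing, reject (503) beyond that
            limit_req zone=proxyzone burst=5 nodelay;
            proxy_pass http://127.0.0.1:8080;
        }

        location /static/ {
            # static files: no limit_req here, so never counted
        }
    }
}
```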

--
Maxim Dounin
http://nginx.org/en/donation.html
Justin Deltener (Guest)
on 2013-11-13 17:20
(Received via mailing list)
I'll give that a try. I really appreciate your help Maxim!


On Wed, Nov 13, 2013 at 10:01 AM, Maxim Dounin <mdounin@mdounin.ru> wrote:

> [...]
Justin Deltener (Guest)
on 2013-11-14 03:06
(Received via mailing list)
Rolled into production, and after tens of thousands of page requests
only 3 were smacked down, and all were bogus security scanners or "bad
dudes". MISSION ACCOMPLISHED! Thanks a ton Maxim!


On Wed, Nov 13, 2013 at 10:20 AM, Justin Deltener
<jdeltener@realtruck.com> wrote:

> [...]