Possible to have a limit_req "nodelay burst" option?

Hello,
I’m using the limit_req directive to control the rate at which my
backends are hit with requests. Typically a backend will generate a page
and the client will not request anything for a short while, so a rate of
1 per second works well. Sometimes, however, a backend will return an
HTTP redirect, and then the client must wait for a one-second delay on
the request to the redirected page. I’d like to avoid this if possible,
to avoid the slow feeling when users click on redirected links.

The nodelay option looked like it would work at first glance, but it
bypasses the delay completely for all requests up to the burst, so it’s
still possible for the backend to be hit with many requests at once.
Ideally I would like a “nodelay burst” option to control how many of the
burst requests are processed without delay, which I could set to 2 in my
situation, while still delaying any further requests beyond that.

Another idea I had was to have the backend send a special header,
similar to how X-Accel-Redirect works, e.g. X-Limit-Req: 0, to avoid
counting a single request towards the rate limit for redirects and
similar situations.

Any other thoughts how something like this could work?

Hello!

On Mon, Apr 15, 2013 at 06:18:04PM -0400, Richard S. wrote:

> bypasses the delay completely for all requests up to the burst, so it’s
> still possible for the backend to be hit with many requests at once.
> Ideally I would like to have a “nodelay burst” option to control how many
> of the burst requests are processed without delay which I could set to 2 in
> my situation, while still delaying any further requests beyond that.

… and next time you’ll notice that the site feels slow on a page
which uses 2 redirects in a row, or includes an image/css from a
backend, or when a user just clicks links fast enough.

I would recommend just using “limit_req … nodelay” unless you
are really sure you need a delay in a particular case.
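Maxim’s suggestion corresponds to a configuration along these lines (the
zone name, rate, and upstream name here are placeholders, not taken from
the thread):

```nginx
# Shared zone: track clients by IP, 1 request/second steady rate.
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

server {
    location / {
        # Let short bursts of up to 10 requests through immediately;
        # anything beyond that is rejected rather than delayed.
        limit_req zone=one burst=10 nodelay;
        proxy_pass http://backend;
    }
}
```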


Maxim D.
http://nginx.org/en/donation.html

On Mon, Apr 15, 2013 at 6:38 PM, Maxim D. [email protected] wrote:

>> and then the client must wait for a one second delay on the request to the
> … and next time you’ll notice that site feels slow on a page
> which uses 2 redirects in a row, or includes an image/css from a
> backend, or user just clicks links fast enough.
>
> I would recommend just using “limit_req … nodelay” unless you
> are really sure you need a delay in a particular case.

Thanks for the reply. I’ll try to explain my situation a little better:
the delay is mainly there to prevent crazy scripts / spambots / etc.
from making too many fast requests to the backend and tying up the
“expensive” processes. Images and CSS are served from a separate backend
that isn’t subject to rate limiting, and if the main backend determines
a redirect is needed, it will guarantee a redirect to a final URL with
no further intermediate redirects.

Currently I’m using a burst of 10, since showing a 503 to users is a bad
experience; in the event they click links really fast, I’d prefer them
to just think the server is a little busy. I was hoping to remove this
delay on redirects, or at least on the first couple of “fast clicks”,
while still subjecting spambots / etc. to the 1 req/s delay so the
backend is not suddenly hit with up to 10 requests at once. For now I’m
going to try using rewrite rules to catch the most commonly redirected
paths and pass them to a non-limited backend, but I’d really like to see
this feature if possible.
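The rewrite-rule workaround described above might look something like
this (the path pattern and upstream names are made up for illustration;
the thread does not give the actual configuration):

```nginx
# Commonly-redirected paths bypass the rate limit entirely.
location ~ ^/(go|out)/ {
    proxy_pass http://backend;   # no limit_req here
}

location / {
    limit_req zone=one burst=10;
    proxy_pass http://backend;
}
```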

Regards,
Richard

+1 to the idea.

Maybe something like:
limit_req zone=one burst=10 nodelay=5;  # first 5 ‘bursts’ don’t have a delay, the next 5 do

I haven’t tried it, but I suspect this doesn’t do the desired thing:
limit_req zone=one burst=10;
limit_req zone=one burst=5 nodelay;

(I’m guessing that the first directive above essentially overrides the
second for the first 5, then the second directive overrides the first
after that.)
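The behaviour being requested in this thread can be modelled as a small
decision function. This is a sketch only, not nginx source code, and the
parameter name `nodelay_burst` is the hypothetical option under
discussion:

```python
def decide(excess, rate, burst, nodelay_burst):
    """Decide what to do with a new request, given `excess`, the number
    of recent requests already queued above the steady `rate` (req/s).

    Returns "reject" (would exceed burst), "pass" (within the no-delay
    allowance), or the delay in seconds before the request is served.
    """
    if excess >= burst:
        return "reject"   # equivalent to nginx answering 503
    if excess < nodelay_burst:
        return "pass"     # first few excess requests go through at once
    # Remaining excess drains at `rate` requests per second.
    return (excess - nodelay_burst + 1) / rate

# With burst=10, nodelay_burst=2 and rate=1, the 3rd rapid click
# waits 1s, the 4th waits 2s, and the 11th is rejected.
```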

Posted at Nginx Forum:

ppy wrote in post #1114430:

> I have to agree with this completely. In fact, I thought this was the
> intended behaviour of the “burst” argument, and it wasn’t until further
> testing that I realised its true meaning.
>
> I am looking for the exact same behaviour here – to allow actual burst
> requests before the delay starts to kick in. The eventual 503 is not
> necessary.

I came across the same issue today. Actually, a lot of the explanations
of limit_req that you can find on the web seem to assume it works that
way, probably because that would be the most intuitive and useful
behaviour.

I support this proposal. We need this functionality, too.


I have to agree with this completely. In fact, I thought this was the
intended behaviour of the “burst” argument, and it wasn’t until further
testing that I realised its true meaning.

I am looking for the exact same behaviour here – to allow actual burst
requests before the delay starts to kick in. The eventual 503 is not
necessary. I believe this is a very common scenario and it would likely
benefit a lot of others looking for the same kind of thing.
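For reference: the behaviour requested throughout this thread did
eventually land in nginx itself. Since version 1.15.7, the limit_req
directive accepts a delay= parameter that serves the first few excess
requests without delay and throttles the rest (zone name below is a
placeholder):

```nginx
# First 5 excess requests are served immediately, the next 5 are
# delayed to the configured rate, and anything beyond burst=10 is
# rejected with a 503.
limit_req zone=one burst=10 delay=5;
```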
