Forum: NGINX: $upstream_cache_status not available when used in rate limiting

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately do not have the time to support and maintain the forum any more. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
Linna.Ding (Guest)
on 2016-07-18 21:27
(Received via mailing list)
Hi,

I use Nginx as a reverse proxy, and I would like to rate limit the requests to the origin server, but only the requests with cache status EXPIRED. I just tested with a map "cache_key", and the rate limiting doesn't work; $cache_key was logged as an empty string. But changing $upstream_cache_status to a non-upstream variable like $remote_addr and adding an IP match value makes the rate limiting work. The zone I defined like so:
       limit_req_zone $cache_key zone=cache_host:1m rate=1r/m;
       map $upstream_cache_status $cache_key {
           EXPIRED $host;
           default "";
       }
I enabled the cache settings in nginx.conf, and one of my server blocks uses the rate limit zone like below:
    limit_req zone=cache_host burst=1;

Is this because the $upstream_cache_status value is set after the request is sent to the origin server and the response is received, while $cache_key is used in the rate limit zone, which is checked before the request is sent to the origin server? If so, is there a recommended way to implement rate limiting only for requests with a specific cache status?

Thanks!
Linna
linnading (Guest)
on 2016-07-20 20:04
(Received via mailing list)
I assume the $upstream_cache_status variable is set after requests are sent and responses are received. But is there a way to do rate limiting ignoring the cache? Really appreciate any help on this.

Thanks.

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,268345,268383#msg-268383
Francis Daly (Guest)
on 2016-07-20 20:17
(Received via mailing list)
On Wed, Jul 20, 2016 at 02:03:44PM -0400, linnading wrote:

Hi there,

> I assume the $upstream_cache_status variable is set after requests are sent
> and responses are received. But is there a way to do rate limiting ignoring
> the cache? Really appreciate any help on this.

I'm afraid that, having read the mails, I'm not at all sure what kind
of limiting you want to do.

If 10 requests come in at the same time to-or-from the same something,
you want the last few requests to be delayed or rejected.

What is the "something" that you care about?

  f
--
Francis Daly        francis@daoine.org
linnading (Guest)
on 2016-07-20 20:52
(Received via mailing list)
Hi Francis,

It is "to the same upstream server" that I care about. I would like to limit the request rate to the same upstream server.

The scenario is like this: 10 requests arrive at the same time for the same upstream server; the upstream server should only receive requests at a rate of 1r/m. The last few requests will be delayed or rejected. But some of these last few requests can be served from cache, and they should not be delayed/rejected.

Thanks,
Linna

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,268345,268386#msg-268386
Francis Daly (Guest)
on 2016-07-20 23:29
(Received via mailing list)
On Wed, Jul 20, 2016 at 02:52:10PM -0400, linnading wrote:

Hi there,

> It is "to the same upstream server"  that I care about.  I would like to
> limit the request rate to the same upstream server.

That makes sense, thanks.

I am not aware of a way to achieve this directly in stock nginx.

I see that there is a third-party module at
https://github.com/cfsego/nginx-limit-upstream which looks like
it aims to do what you want; and I see that nginx-plus has a
"max_conns" value per server in an upstream block, documented at
http://nginx.org/en/docs/http/ngx_http_upstream_mo...

If non-stock is ok for you, possibly one of those can work?
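
A minimal sketch of the nginx-plus option Francis mentions (the upstream name and server address are made up; note that max_conns caps concurrent connections per upstream server rather than imposing a request rate):

```nginx
# Hypothetical upstream block: "backend" and the server address are
# placeholders. max_conns limits simultaneous connections to this server.
# At the time of this thread it was an nginx-plus feature; it was later
# added to open-source nginx (1.11.5).
upstream backend {
    server origin.example.com:8080 max_conns=10;

    # nginx-plus only: excess requests wait here instead of failing.
    queue 100 timeout=30s;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```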

> The scenario is like this: 10 requests arrive at the same time for the same
> upstream server; the upstream server should only receive requests at a rate
> of 1r/m. The last few requests will be delayed or rejected. But some of
> these last few requests can be served from cache, and they should not be
> delayed/rejected.

I think the limit_* directives are implemented such that the limiting decision is made before the upstream is chosen, and there are no explicit limits on the connections to an upstream. That is likely why the third-party module was created.

Cheers,

  f
--
Francis Daly        francis@daoine.org
Valentin V. Bartenev (Guest)
on 2016-07-20 23:55
(Received via mailing list)
On Wednesday 20 July 2016 14:52:10 linnading wrote:
>
[..]

While "proxy_cache_lock" isn't what you're asking about, it can significantly reduce the number of requests that reach your upstream server.

http://nginx.org/r/proxy_cache_lock
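
A sketch of how proxy_cache_lock might be wired up (paths, zone name, and upstream are invented): when enabled, only one request at a time is allowed to populate a given cache element, and other requests for the same key wait until the element is in the cache or the lock times out.

```nginx
# Illustrative only; cache path, zone name, and upstream are placeholders.
proxy_cache_path /var/cache/nginx keys_zone=my_cache:10m;

server {
    listen 80;
    location / {
        proxy_cache my_cache;
        proxy_cache_lock on;            # collapse concurrent misses for a key
        proxy_cache_lock_timeout 5s;    # waiters proceed after this
        proxy_pass http://origin.example.com;
    }
}
```

For the EXPIRED case specifically, proxy_cache_use_stale updating (not raised in the thread) works along similar lines: one request refreshes an expired entry while the others are served the stale copy.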

  wbr, Valentin V. Bartenev
linnading (Guest)
on 2016-07-21 16:52
(Received via mailing list)
Thanks Francis and Valentin!
These options can help a lot to limit requests to the upstream server, though not related to rate limiting.

Thanks!
~L

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,268345,268404#msg-268404