Nginx $upstream_cache_status not available when used in rate limiting

Hi,

I use Nginx as a reverse proxy, and I would like to rate limit the requests
to the origin server, but only limit the requests with cache status EXPIRED.
I just tested with a map "cache_key", and the rate limiting doesn't work;
the $cache_key was logged as an empty string. But changing
$upstream_cache_status to a non-upstream variable like $remote_addr and
adding an IP match value makes the rate limiting work. I defined the zone
like so:
limit_req_zone $cache_key zone=cache_host:1m rate=1r/m;
map $upstream_cache_status $cache_key {
    EXPIRED $host;
    default "";
}
I enabled the cache settings in nginx.conf, and one of my server blocks
uses the rate limit zone like below:
limit_req zone=cache_host burst=1;

Is this because the $upstream_cache_status value is only set after the
request has been sent to the origin server and the response received, while
$cache_key is used in the rate limit zone, which is checked before the
request is sent to the origin server? If so, is there a recommended way to
implement rate limiting only for requests with a specific cache status?

Thanks!
Linna

I assume the $upstream_cache_status variable is set after requests are sent
and responses are received. But is there a way to do rate limiting ignoring
the cache? I'd really appreciate any help on this.

Thanks.

Posted at Nginx Forum:

On Wed, Jul 20, 2016 at 02:03:44PM -0400, linnading wrote:

Hi there,

I assume the $upstream_cache_status variable is set after requests are sent
and responses are received. But is there a way to do rate limiting ignoring
the cache? I'd really appreciate any help on this.

I’m afraid that, having read the mails, I’m not at all sure what kind
of limiting you want to do.

If 10 requests come in at the same time to-or-from the same something,
you want the last few requests to be delayed or rejected.

What is the “something” that you care about?

f

Francis D. [email protected]

Hi Francis,

It is “to the same upstream server” that I care about. I would like to
limit the request rate to the same upstream server.

The scenario is like this:
10 requests arrive at the same time for the same upstream server; the
upstream server should only receive requests at a rate of 1r/m. The last
few requests will be delayed or rejected. But some of those last few
requests can be served from the cache, and those should not be
delayed/rejected.

Thanks,
Linna


On Wed, Jul 20, 2016 at 02:52:10PM -0400, linnading wrote:

Hi there,

It is “to the same upstream server” that I care about. I would like to
limit the request rate to the same upstream server.

That makes sense, thanks.

I am not aware of a way to achieve this directly in stock nginx.

I see that there is a third-party module at
https://github.com/cfsego/nginx-limit-upstream ("limit the number of
connections to upstream for NGINX") which looks like it aims to do what you
want; and I see that nginx-plus has a "max_conns" value per server in an
upstream block, documented at
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_conns

If non-stock is ok for you, possibly one of those can work?
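For the max_conns route, a minimal sketch of what the configuration might look like (the hostnames and numbers here are placeholders, not taken from your setup; note that max_conns caps concurrent connections to each upstream server rather than a request rate):

```nginx
# Sketch only: cap concurrent connections per upstream server.
# Hostnames and limits are placeholders.
upstream origin {
    server origin1.example.com max_conns=10;
    server origin2.example.com max_conns=10;

    # nginx-plus only: queue excess requests instead of failing them
    queue 100 timeout=30s;
}

server {
    listen 80;

    location / {
        proxy_pass http://origin;
    }
}
```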

The scenario is like this:
10 requests arrive at the same time for the same upstream server; the
upstream server should only receive requests at a rate of 1r/m. The last
few requests will be delayed or rejected. But some of those last few
requests can be served from the cache, and those should not be
delayed/rejected.

I think that the limit_* directives' implementation is such that the
choice is made before the upstream is chosen; and there are no explicit
limits on the connections to upstream. That is likely why the third-party
module was created.

Cheers,

f

Francis D. [email protected]

Thanks Francis and Valentin!
These options can help a lot to limit requests to the upstream server,
though not directly related to rate limiting.

Thanks!
~L


On Wednesday 20 July 2016 14:52:10 linnading wrote:

[…]

While "proxy_cache_lock" isn't what you're asking about, it can
significantly reduce the number of requests that reach your upstream
server.

http://nginx.org/r/proxy_cache_lock
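A minimal sketch of enabling it (the cache path, zone name, and hostname below are placeholder values, not from the original poster's configuration): with proxy_cache_lock on, only one request per cache key is passed to the upstream to populate a missing cache entry, while the rest wait for that response.

```nginx
# Sketch only: paths, zone name, and hostname are placeholders.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=origin_cache:10m;

server {
    listen 80;

    location / {
        proxy_cache origin_cache;
        proxy_cache_valid 200 1m;

        # Only the first request for a given key goes upstream;
        # concurrent requests for the same key wait for the lock.
        proxy_cache_lock on;
        proxy_cache_lock_timeout 5s;

        proxy_pass http://origin.example.com;
    }
}
```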

wbr, Valentin V. Bartenev