Question about proxy_cache_valid

Hi,

what is the difference between the proxy_cache_valid directive and the
“inactive” parameter of the proxy_cache_path directive?
When I set “proxy_cache_valid 10m;” and “inactive=1h”, how long will the
response be stored?
And what about the converse: “proxy_cache_valid 200 1h;” and
“inactive=10m”?

For some reason my cache still contains a file that was deleted on
my backend 10 hours ago. Both proxy_cache_valid and inactive= are set
to 2h. Maybe the two settings have meanings I do not yet understand.

Hello!

On Thu, Sep 10, 2009 at 05:06:26PM +0200, [email protected] wrote:

Hi,

what is the difference between the proxy_cache_valid directive and the
“inactive” parameter of the proxy_cache_path directive?
When I set “proxy_cache_valid 10m;” and “inactive=1h”, how long will the
response be stored?
And what about the converse: “proxy_cache_valid 200 1h;” and
“inactive=10m”?

The proxy_cache_valid directive specifies how long a response will be
considered valid (and will be returned without any request to the
backend). After this time the response is considered “stale” and
either will or will not be returned, depending on the
proxy_cache_use_stale setting.

The “inactive” argument of proxy_cache_path specifies how long a
response will be stored in the cache after its last use. Note that
even stale responses are considered recently used if there are
requests for them.
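
Putting the two directives side by side may help; a minimal sketch (the
zone name, paths, sizes, and backend address are illustrative, not from
this thread):

```nginx
# "inactive" belongs to proxy_cache_path: entries not requested for
# 1h are removed by the cache manager, regardless of freshness.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                 max_size=1g inactive=1h;

server {
    location / {
        proxy_pass http://backend;       # illustrative upstream
        proxy_cache my_cache;
        # A cached 200 is served as "fresh" for 10 minutes; after
        # that it is stale, but the file may still sit on disk.
        proxy_cache_valid 200 10m;
    }
}
```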

For some reason my cache still contains a file that was deleted on
my backend 10 hours ago. Both proxy_cache_valid and inactive= are set
to 2h. Maybe the two settings have meanings I do not yet understand.

Which version? From the description it looks like a bug fixed in
0.8.14:

*) Bugfix: an expired cached response might stick in the "UPDATING"
   state.

If you see this in 0.8.14 - please provide more details.

Maxim D.

Hello,

Maxim D. wrote:

The proxy_cache_valid directive specifies how long a response will be
considered valid (and will be returned without any request to the
backend). After this time the response is considered “stale” and
either will or will not be returned, depending on the
proxy_cache_use_stale setting.

So when I use “proxy_cache_use_stale off;” (the default), nginx will
request the file from the backend again once the time in
“proxy_cache_valid” is over?

The “inactive” argument of proxy_cache_path specifies how long a
response will be stored in the cache after its last use. Note that
even stale responses are considered recently used if there are
requests for them.

I am not quite sure whether I understand this correctly. I simply
want nginx to cache responses with 200 status for 2 hours. After this
time the response should be deleted from the cache, so that when new
clients want the file, nginx requests it from the backend again.
In other words: the responses should be stored in the cache for 2 hours
after they were last fetched from the backend, not for 2 hours after the
last request from a client.
Could you give me a hint on how to achieve this behavior?

If you see this in 0.8.14 - please provide more details.

I am using the stable version (0.7.61). Do I have to update to 0.8.14 in
order to get this working? I’m not sure whether it’s my config or a bug.
Hm… and I can’t figure out the meaning of the “updating” state. All the
other states are clear to me.

Thanks for your help!

Hello!

On Thu, Sep 10, 2009 at 06:39:31PM +0200, [email protected] wrote:

request the file from the backend again once the time in
“proxy_cache_valid” is over?

Yes.

from a client.
Could you give me a hint on how to achieve this behavior?

No way. Deleting files from the cache is a completely separate
process, based on last-use time (an LRU queue) and total cache size.
This lets nginx manage the cache efficiently.
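
So in a sketch like the following (values illustrative), a response
merely expires after the proxy_cache_valid period; the file itself is
only removed by the cache manager once it has gone unused for the
“inactive” period, or earlier under max_size pressure:

```nginx
proxy_cache_path /var/cache/nginx keys_zone=my_cache:10m
                 inactive=2h      # removed 2h after the LAST client hit
                 max_size=500m;   # or earlier, via LRU eviction
```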

If you see this in 0.8.14 - please provide more details.

I am using the stable version (0.7.61). Do I have to update to 0.8.14 in
order to get this working? I’m not sure whether it’s my config or a bug.

Stable doesn’t have this fix yet, so you either have to upgrade or
wait for the fix to be merged. No idea when that merge will happen,
though.

Hm… and I can’t figure out the meaning of the “updating” state. All the
other states are clear to me.

“UPDATING” means that the response was considered stale and a request
to the backend was then issued. With “proxy_cache_use_stale updating;”
such a response may be returned to clients without further action,
since a request to the backend that will update the cached response is
already in flight.
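
As a sketch (the location block and upstream name are illustrative),
this is the setting that serves the stale copy while a single refresh
request runs:

```nginx
location / {
    proxy_pass http://backend;       # illustrative upstream
    proxy_cache my_cache;
    proxy_cache_valid 200 10m;
    # While one request refreshes the stale cached response in the
    # background, other clients get the stale copy instead of all
    # hammering the backend at once.
    proxy_cache_use_stale updating;
}
```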

The bug was that nginx did not reset this state upon receiving a 404
from the upstream, and hence the cache manager was unable to remove
the file even if it had been inactive for a long time.

Maxim D.

Maxim D. wrote:

So when I use “proxy_cache_use_stale off;” (the default), nginx will
request the file from the backend again once the time in
“proxy_cache_valid” is over?

Yes.

This did not work for me at all.

For some reason my cache still contains a file that has been deleted

I am using the stable version (0.7.61). Do I have to update to 0.8.14 in
order to get this working? I’m not sure whether it’s my config or a bug.

Stable doesn’t have this fix yet, so you either have to upgrade or
wait for the fix to be merged. No idea when that merge will happen,
though.

So I updated to 0.8.14, but the files still sat in the nginx cache for
hours even though I had deleted them on the backend.
Then I stumbled on this topic: http://forum.nginx.org/read.php?2,2182
After adding “proxy_ignore_headers Expires Cache-Control;” to my config,
the cache started working as expected.
This part in particular is interesting:

This just means that the upstream’s caching instructions take
precedence over proxy_cache_valid, right?

Yes, the order of precedence is:
X-Accel-Expires
Expires/Cache-Control
proxy_cache_valid
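
Sketched as configuration (the directive names are real; the location
block and upstream name are illustrative), ignoring the upstream
headers is what lets proxy_cache_valid take effect:

```nginx
location / {
    proxy_pass http://backend;       # illustrative upstream
    proxy_cache my_cache;
    # Without this line, upstream Expires/Cache-Control headers
    # override proxy_cache_valid (and X-Accel-Expires ranks higher
    # still).
    proxy_ignore_headers Expires Cache-Control;
    proxy_cache_valid 200 2h;
}
```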

I think this should be added to the wiki under “proxy_cache_valid”.
There already is a small hint about this behavior in the wiki under
“proxy_cache”, but I think that is not enough (at least it wasn’t for
me).

Well, now it works perfectly and I’m happy :)
