Question about proxy cache when it expires


#1

Hello Igor,

I have a question about the cache behaviour in proxy mode.

I have nginx in front, which proxies to an Apache backend. Nginx
caches everything for M minutes.

If I have a large number of requests for the same page and the page
is cached: nginx returns the cached page … no problem.
After M minutes, the cached page expires.
The first request coming after the expiration makes nginx ask the
backend for a refresh.
When nginx receives the fresh response from the backend, it's saved to
the cache and nginx then serves the fresh cached page.

But what happens between the start of the request to the backend and
the end of the response from the backend? (Let's assume that the
backend serves the page in 5s … and in 5s I can get a lot of
requests for this page.)

  • Are the requests queued waiting for the backend response?
  • Does every request try to refresh the cache from the backend? (In
    that case, I have multiple requests for the same page going to the
    backend … I can get a burst of requests and my Apache can be
    overwhelmed – that's why I'm using nginx with a cache.)
  • Do the requests get served the cached page, even though it has
    expired, until the backend response has been received?
  • Maybe something else :)
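A minimal sketch of a setup like the one described, with hypothetical paths, addresses, and zone names, and M = 10 minutes as an example:

```nginx
# Hypothetical front-end cache config; paths and names are examples only.
http {
    # Cache on disk, indexed in a 10 MB shared memory zone named "pagecache".
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=pagecache:10m;

    server {
        listen 80;

        location / {
            proxy_pass http://127.0.0.1:8080;   # the Apache backend
            proxy_cache pagecache;
            proxy_cache_valid 200 10m;          # "M minutes" of caching
        }
    }
}
```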

Thanks for your answer.

++ jerome


#2

On Tue, May 12, 2009 at 10:38:04AM +0200, Jérôme Loyet wrote:

The first request coming after the expiration makes nginx ask the
backend for a refresh. […] In that case, I have multiple requests for
the same page going to the backend … I can get a burst of requests and
my Apache can be overwhelmed – that's why I'm using nginx with a cache.

  • Do the requests get served the cached page, even though it has
    expired, until the backend response has been received?
  • Maybe something else :)

Currently all requests which find that a cached response has expired
are proxied to the backend. I plan to implement busy locks to pass a
single request through and leave the others waiting for the response
up to a specified time.
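The busy-lock behaviour Igor describes here later landed in nginx as the `proxy_cache_lock` directives; a minimal sketch with example values (the `pagecache` zone and backend address are assumptions):

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_cache pagecache;            # zone defined via proxy_cache_path
    proxy_cache_valid 200 10m;

    # Only one request per cache element is passed to the backend;
    # the others wait up to proxy_cache_lock_timeout for it to finish.
    proxy_cache_lock on;
    proxy_cache_lock_timeout 5s;
}
```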


#3

OK, thanks for the answer.

I'm ready to test this new feature, which could be very beneficial
to us :)

++ Jerome

2009/5/12 Igor S. removed_email_address@domain.invalid:


#4

When the cache expires, all the requests are forwarded to the backend; whether they can be handled is up to Apache.
If the number of requests is huge and generating the cache takes a long time, say 5 seconds, I suggest you rethink your caching approach.
A few ideas:
1) Have PHP generate the cache ahead of time (this is the key point)
2) Use memcached to cache pages in memory
3) Use Perl to control expiration and caching, to reduce the load on the backend
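Idea 2 can be sketched with nginx's stock memcached module; the key scheme, addresses, and the assumption that the backend populates memcached itself are all hypothetical:

```nginx
location / {
    set $memcached_key "$uri";          # assumed key scheme
    memcached_pass 127.0.0.1:11211;
    default_type text/html;
    # If the page is not in memcached, fall back to the backend,
    # which is expected to store the generated page in memcached
    # itself (e.g. from PHP).
    error_page 404 502 504 = @backend;
}

location @backend {
    proxy_pass http://127.0.0.1:8080;
}
```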

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,1952,1970#msg-1970


#5

Igor, thanks.

2009/5/12 Igor S. removed_email_address@domain.invalid


#6

J Wrote:

this is an ENGLISH mailing list. Please use English so that everybody
here can understand what you want to say !!!

++ Jerome

2009/5/12 "

我不懂英语,见谅 [I don't understand English, forgive me]

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,1952,1983#msg-1983


#7

this is an ENGLISH mailing list. Please use English so that everybody
here can understand what you want to say !!!

++ Jerome

2009/5/12 “坏人” removed_email_address@domain.invalid:


#8

I plan to implement busy locks to pass a single request and leave the
others to wait for the response up to a specified time.

Hi Igor,

About this feature: do you know when you plan to implement it? I
really need it. If you don't have enough time, I can look into it
myself if you briefly explain how you want to do it.

Thx
++ jerome


#9

坏人 Wrote:

我不懂英语,见谅

Translation:

I do not understand English, forgive me

My translation:

This is a troll who understands enough to answer appropriately.

I’m banning him from the forum as I have had enough. Sorry everyone for
the inconvenience.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,1952,1985#msg-1985


#10

OK

I'll try to look into the code to see what I can do. Do you have any
leads to guide me on this quest? :)

2009/5/20 Igor S. removed_email_address@domain.invalid:


#11

On Tue, May 19, 2009 at 05:14:11PM +0200, Jérôme Loyet wrote:

I plan to implement busy locks to pass a single request and leave the
others to wait for the response up to a specified time.

Hi Igor,

About this feature: do you know when you plan to implement it? I
really need it. If you don't have enough time, I can look into it
myself if you briefly explain how you want to do it.

This is a complex thing that I plan to implement in 0.8.


#12

On Wed, May 20, 2009 at 02:32:39PM +0200, Jérôme Loyet wrote:

OK

I'll try to look into the code to see what I can do. Do you have any
leads to guide me on this quest? :)

This is a complex thing. It requires sending notifications from one
worker to another when a busy lock is being freed.


#13

Hello!

On Wed, May 20, 2009 at 04:36:12PM +0400, Igor S. wrote:

On Wed, May 20, 2009 at 02:32:39PM +0200, Jérôme Loyet wrote:

OK

I'll try to look into the code to see what I can do. Do you have any
leads to guide me on this quest? :)

This is a complex thing. It requires sending notifications from one
worker to another when a busy lock is being freed.

BTW, what about something like “in-process” busy locks? This would
effectively limit the number of requests simultaneously sent to
backends to the number of worker processes. At least it looks much
better than nothing, and it should be simpler.

Maxim D.


#14

On Wed, May 20, 2009 at 05:42:12PM +0400, Maxim D. wrote:

This is a complex thing. It requires sending notifications from one
worker to another when a busy lock is being freed.

BTW, what about something like “in-process” busy locks? This would
effectively limit the number of requests simultaneously sent to
backends to the number of worker processes. At least it looks much
better than nothing, and it should be simpler.

Yes, they are much simpler, but I want to do it all at once.


#15

On May 12, Igor S. wrote:

I plan to implement busy locks to pass a single request and leave the
others to wait for the response up to a specified time.

How about the notion of soft timeouts? Say the TTL is set to 300
seconds. We pick a value, say 10%, so any request received in the next
30 seconds still gets the stale content without any waiting, and
somewhere in that window we initiate a request to the backend and
refresh the cache. Beyond this window, we can take the busy-wait
approach.

Squid 2.7.x has something like this.
http://www.squid-cache.org/Versions/v2/2.7/cfgman/refresh_stale_hit.html
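This soft-timeout idea also exists in nginx itself: `proxy_cache_use_stale updating` serves the stale copy while a single request refreshes it, and nginx 1.11.10+ can do the refresh in the background. A sketch with example values (zone name and backend address assumed):

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_cache pagecache;
    proxy_cache_valid 200 5m;

    # Serve the expired copy while one request refreshes the cache …
    proxy_cache_use_stale updating error timeout;
    # … and (nginx 1.11.10+) perform that refresh in the background.
    proxy_cache_background_update on;
}
```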