Serve from cache but fire request to upstream server to increment page view counter

Hi all,

I have a strange case, not sure if this is addressed by nginx yet

My site is cached as follows.

location = / {
    proxy_pass http://localhost:82;

    proxy_set_header Host             $host;
    proxy_set_header X-Real-IP        $remote_addr;
    proxy_set_header X-Forwarded-For  $proxy_add_x_forwarded_for;
    proxy_set_header Accept-Encoding  "";

    # merged into one directive; nginx rejects repeated
    # proxy_ignore_headers lines in the same block as duplicates
    proxy_ignore_headers Set-Cookie Cache-Control Expires X-Accel-Expires;

    add_header X-Cache-Status $upstream_cache_status;

    proxy_cache           cache;
    proxy_cache_key       $scheme$host$request_uri$cookie_site_sessionid;
    proxy_cache_valid     200 302 30s;
    proxy_cache_use_stale updating;
}

One issue I encounter with this is that I can’t increment the page view
counter that I maintain in Redis.

Is there a way nginx can fire a request to the backend web server, to
some URI, so that I can take that request and increment the associated
counter?

I know I can achieve this with SSI, but I wanted to check if there is a
better pattern.
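
For context, a minimal sketch of the SSI approach mentioned above,
assuming the backend emits an include directive in the page body and
exposes a hypothetical /hit URI that does the Redis INCR:

    # the backend's page body contains: <!--#include virtual="/hit" -->
    location = / {
        ssi on;                          # SSI is re-evaluated on every request,
        proxy_pass  http://localhost:82; # even when the page comes from cache
        proxy_cache cache;
    }

    location = /hit {
        proxy_pass http://localhost:82;  # backend increments the counter
        # no proxy_cache here, so this subrequest always hits the backend
    }

The counter subrequest fires even on cache hits, but the client waits
for it to finish before the page is fully delivered.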

-Quintin

Hi all,

Can someone please help me with this? The pattern is similar to

proxy_cache_use_stale updating

except that the backend request is fire-and-forget.

If I do this with SSI, I end up spending request time servicing the SSI
code path, even though there is no urgent need for that.

The next cache bypass will update it and that’s good enough for me.

-Quintin

There’s the post_action directive, but AFAIK it’s not that reliable.

I would use Lua with the “new” cosocket API and make an HTTP request.

Here’s an example of a library built around it that talks with a Redis
backend: https://github.com/agentzh/lua-resty-redis
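
A sketch of that approach, assuming a recent OpenResty (ngx_lua) with
lua-resty-redis on the Lua path; the "pageviews" key name is made up.
Cosockets aren’t usable directly in the log phase, so the usual pattern
is to spawn a zero-delay timer:

    location = / {
        proxy_pass  http://localhost:82;
        proxy_cache cache;

        log_by_lua_block {
            -- ngx.var is unavailable inside the timer callback,
            -- so capture what we need here in the log phase
            local key = "pageviews:" .. ngx.var.uri

            ngx.timer.at(0, function(premature)
                if premature then return end
                local redis = require "resty.redis"
                local red = redis:new()
                red:set_timeout(1000)  -- 1s connect/read timeout
                local ok, err = red:connect("127.0.0.1", 6379)
                if not ok then
                    ngx.log(ngx.ERR, "redis connect failed: ", err)
                    return
                end
                red:incr(key)
                red:set_keepalive(10000, 100)  -- return connection to pool
            end)
        }
    }

The log phase runs after the response has been sent, and the timer runs
outside the request entirely, so the client is never held up.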

As stated post_action is another option:

http://wiki.nginx.org/HttpCoreModule#post_action
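
A minimal post_action sketch for this use case (the backend’s /hit URI
is hypothetical):

    location = / {
        proxy_pass  http://localhost:82;
        proxy_cache cache;
        post_action /hit;                # fired after the main request
    }

    location = /hit {
        internal;                        # not reachable from outside
        proxy_pass http://localhost:82;  # backend does the Redis INCR
    }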

–appa

Thanks Appa.

Why would you say this is not reliable? Is there data or experience to
suggest? The semantic is awesome. Precisely what I’ve been looking for.

-Cherian

Well, from remarks that nginx core devs have posted here before,
referring to it as a “hack” (sic). In fact, it’s not even documented in
the official docs.

I’m not sure whether that’s an oversight or says something deeper about
the quality of the directive’s concept/code.

–appa


On 4 April 2012 13:17, Quintin P. [email protected] wrote:

Thanks Appa.

Why would you say this is not reliable? Is there data or experience to
suggest? The semantic is awesome. Precisely what I’ve been looking for.

I’d also be interested in discovering any unreliability of this
mechanism, as I’m planning to use it for some real-time logging
shortly.

The weaknesses that I’ve discovered thus far, which I think should all
be fixed – or at least modified as indicated below – are

  • The client connection is held open until the post_action call
    completes
  • The response code returned to the client is the post_action’s response
    code
  • The access.log entry logged reflects the post_action’s URI, response
    code and (possibly; I forget) content length

All of these make it difficult to use post_action as its name
implies: as something completed after the client request has been
fully dealt with.

I hope that at some point in the future, we’ll be able to use
post_action in a way which is invisible to the client, and to the
logs. Perhaps with a post_action directive flag, indicating that any
calls to this specific post_action URI should stay out of the logs and
the client’s view.

J


nginx mailing list
[email protected]
http://mailman.nginx.org/mailman/listinfo/nginx


Jonathan M.
London, Oxford, UK
http://www.jpluscplusm.com/contact.html

This seems to be very risky.

One more question:

The whole reason I’d use post_action is to avoid the cost of SSI, i.e.
to serve a page from cache at light speed and then let post_action take
its own sweet time to complete.

But from Jonathan’s response I infer that the post_action’s response
code is passed to the client, which means the client is made to wait
until the post_action is complete. Right? If so, that defeats the
purpose.

On Wed, Apr 4, 2012 at 5:57 PM, Jonathan M.