Caching + error_page or try_files

location @backend {
        #internal;

        #root           html;
        #index          index.html;

        log_subrequest  on;
        access_log      logs/cfc.access.log backend_log;

        #open_file_cache  off;

        #proxy_pass     http://127.0.0.1:8070;
        #proxy_pass     http://bt.noblehost.com:80;
        #proxy_pass     http://www.santacruzdoula.com:80;
        #proxy_pass     http://www.penny-arcade.com:80;
        #proxy_pass     http://lbj:80;
        #proxy_redirect / /;
}

Sorry about the partial repost. I misclicked the POST button early on
and ran into the edit restrictions. Disregard the top.

Consider:

fastcgi_cache_path  storage/cache levels=2:2 keys_zone=cacheresp:50m
                    inactive=25m max_size=2000M;
fastcgi_temp_path   storage/temp/;
fastcgi_cache_valid any 10s;

location / {
        root            html;
        index           index.html;

        fastcgi_pass    unix:/tmp/php-fcgi.socket;
        include         fastcgi_params;

        fastcgi_cache          cacheresp;

        error_page 404 = @backend;
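        # a 404 from fastcgi only reaches error_page when
        # fastcgi_intercept_errors is on (assumed set elsewhere)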
}

location @backend {
proxy_pass http://127.0.0.1:8070;
}

The idea is that if fastcgi returns a 404, then the backend handles
the request. The backend's response should never be cached. This part
works great. What doesn't work is that the 404 itself is never cached
(the fastcgi script is hardcoded to always return "Cache-Control:
max-age=5"). Fastcgi is consulted again and again when a 404 is hit.
If any other code is hit, it caches just fine for the full 5 seconds.
It would be great if the 404 result were cached. Any thoughts on
accomplishing this? Am I missing something?

By the way, the same goes if you use try_files in place of
error_page. And I believe the same happens when an X-Accel-Redirect
is issued.
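
For reference, a try_files version of the same idea might look like
this sketch (same zone and location names as my config above):

location / {
    root            html;
    index           index.html;

    # fall back to @backend when the file isn't found, instead of
    # relying on error_page 404
    try_files       $uri @backend;

    fastcgi_pass    unix:/tmp/php-fcgi.socket;
    include         fastcgi_params;
    fastcgi_cache   cacheresp;
}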

Hello!

On Tue, Aug 25, 2009 at 01:05:26AM -0400, icqheretic wrote:

> The idea is that if fastcgi returns a 404, then the backend handles
> the request. The backend's response should never be cached. [...]
> Fastcgi is consulted again and again when a 404 is hit. [...]
>
> By the way, the same goes if you use try_files in place of
> error_page. And I believe the same happens when an X-Accel-Redirect
> is issued.

With try_files, fastcgi won't be reached at all if try_files doesn't
find the relevant file.

Redirected responses (either via error_page +
fastcgi_intercept_errors, or via X-Accel-Redirect) can't currently be
cached, since they leave upstream processing before the point where
saving to the cache happens.

One possible workaround is to add an extra proxy layer just to
separate out the caching, e.g.

location / {
    # redirection to backend happens here

    proxy_pass http://127.0.0.1:80/cache/;  # same server
    proxy_intercept_errors on;
    error_page 404 = @backend;
}

location /cache/ {
    # caching happens here

    fastcgi_pass ...
    fastcgi_cache ...
    fastcgi_intercept_errors off; # default, actually

    # For X-Accel-Redirect caching you also need to pass this
    # header up to the outer proxy instead of processing it here.
    # In nginx 0.8.7+ this can be done via
    # fastcgi_ignore_headers/fastcgi_pass_header. In
    # older versions (including 0.7.61) use another header
    # name and the add_header directive instead.

    fastcgi_ignore_headers X-Accel-Redirect;
    fastcgi_pass_header X-Accel-Redirect;
}

location @backend {
    ...
}

Maxim D.

Hi, Max! I can see that working. I'll see if the overhead incurred by
the extra layer is less than the cost of calling fastcgi over and
over. My guess is yes. Some questions come to mind for the extra
layer:

  1. Is it better / faster to use a socket for the proxy_pass to
     itself? Can nginx be set up to listen on a unix socket?

  2. Would I gain much benefit from an upstream keepalive module for
     managing the connections to the same server? Would your
     ngx_http_upstream_keepalive add-on do the trick?

Perhaps in the long term, a variant of error_page that caches along
the way would make sense. Maybe an add-on that I'll take up when I'm
less busy.

Thanks for the reply. Nginx is a great piece of work!

Hello!

On Tue, Aug 25, 2009 at 12:19:03PM -0400, icqheretic wrote:

> Hi, Max! I can see that working. [...] Some questions come to mind
> for the extra layer:
>
> 1. Is it better / faster to use a socket for the proxy_pass to
>    itself? Can nginx be set up to listen on a unix socket?

No, nginx doesn't support listening on unix sockets.

You may optimize things a bit by using a separate ip/port with an
explicit bind, e.g.

server {
    listen 127.0.0.1:8081 default bind;

    location / {
        fastcgi_pass ...
        fastcgi_cache ...
    }
}

to avoid the getsockname() call and virtual host resolution, see

http://wiki.nginx.org/NginxHttpCoreModule#listen

for details.

> 2. Would I gain much benefit from an upstream keepalive module for
>    managing the connections to the same server? Would your
>    ngx_http_upstream_keepalive add-on do the trick?

Upstream keepalive currently works only with memcached out of the
box. There are experimental patches floating around to support
keepalive connections for fastcgi, but not (yet) for http.
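
For illustration, a minimal sketch of the out-of-the-box memcached
case might look like this (the upstream and location names here are
just examples):

upstream memcached_backend {
    server 127.0.0.1:11211;

    # keep one idle connection to memcached open between requests
    # (ngx_http_upstream_keepalive module)
    keepalive 1;
}

server {
    location /from_memcached/ {
        set $memcached_key $uri;
        memcached_pass memcached_backend;
    }
}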

Maxim D.