Nginx-1.4 proxy requests being continuous

A request for /img/file_doesnt_exist.jpg results in the backend server
(192.168.129.90) getting continuous requests for the same file (which
doesn’t exist there either so 404 each time), while the original
requester waits and nginx keeps asking the backend the same.

I’m using the nginx-1.4.1 from the debian squeeze repository.

Is there a better way to do this config? The aim is for all web servers
to have the same config, so a resource that isn’t synced yet still gets
served if it exists somewhere, without the requests ending up in a
circular loop.

My current, hopefully not too cut down, config is:

upstream imgweb_other {
    server 192.168.129.90;
    server 173.230.136.6 backup;
}

server {

    proxy_read_timeout 15;
    proxy_connect_timeout 3;
    proxy_next_upstream error timeout invalid_header http_500 http_502
                        http_503 http_504 http_404;

    location ~ ^/img/(.*)
    {
        expires 2592000;
        add_header Cache-Control public;
        alias /var/www/live_site_resources/$1;
        error_page 404 = @imgweb_other;
    }

    location @imgweb_other {
        # we only want to fall back once, so use User-Agent as a flag
        if ( $http_user_agent = IMGWEB ) {
            return 404;
        }
        proxy_pass http://imgweb_other;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header User-Agent IMGWEB;
    }

}
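
For what it’s worth, the same one-hop fallback could be expressed without
overwriting the client’s User-Agent, by flagging forwarded requests with a
custom header instead. This is only a sketch; the header name
X-Imgweb-Hop is an illustration, not anything nginx defines:

```nginx
# Sketch: one-hop fallback using a hypothetical custom header as the
# loop flag, so the real User-Agent survives into the backend's logs.
location ~ ^/img/(.*)
{
    expires 2592000;
    add_header Cache-Control public;
    alias /var/www/live_site_resources/$1;
    error_page 404 = @imgweb_other;
}

location @imgweb_other {
    # A peer already forwarded this request once; stop rather than loop.
    if ( $http_x_imgweb_hop = "1" ) {
        return 404;
    }
    proxy_pass http://imgweb_other;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Imgweb-Hop "1";   # hypothetical flag header
}
```

The trade-off is the same as with the User-Agent trick: any request
carrying the flag is answered locally with 404, so the fallback can only
ever go one hop deep.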

Just to prove I’m not making it up (even though I’m having a hard time
replicating it).

log_format extended '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $request_time $body_bytes_sent '
                    '$upstream_cache_status $upstream_addr '
                    '$upstream_status $upstream_response_time '
                    '"$http_referer" "$http_user_agent"';

length of log line 3412217 characters (is that a record?)
58.169.18.35 - - [08/May/2013:19:58:13 -0400] "GET
//img/covers/medium/587/9781844454581.jpg HTTP/1.1" 499 100.820 0 -
192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80,
192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80,
192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80,
192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80,
192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80,
192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80,
192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80,
192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80 (many many
pages)… 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404,
404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404,
404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404,
404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404,
404…, - 0.014, 0.001, 0.000, 0.001, 0.001, 0.000, 0.001,
0.001, 0.000, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001… , - "-"
"Wget/1.13.4 (linux-gnu)"

192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB"
192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB"
192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB"
192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB"
192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB"
192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB"
192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB"
192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB"
192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB"
192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB"
192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB"
192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB"

nginx mailing list
[email protected]
nginx Info Page

Daniel Black, Engineer @ Open Query (http://openquery.com)
Remote expertise & maintenance for MySQL/MariaDB server environments.

Hello!

On Sat, May 11, 2013 at 04:13:38PM +1000, Daniel Black wrote:

[…]

> A request for /img/file_doesnt_exist.jpg results in the backend server
> (192.168.129.90) getting continuous requests for the same file (which
> doesn’t exist there either so 404 each time), while the original
> requester waits and nginx keeps asking the backend the same.
>
> I’m using the nginx-1.4.1 from the debian squeeze repository.

[…]

> server 173.230.136.6 backup;

[…]

> proxy_next_upstream error timeout invalid_header http_500 http_502
>                     http_503 http_504 http_404;

What you describe looks very familiar - there was such a bug which
manifested itself with backup servers and proxy_next_upstream
http_404. It was fixed in 1.3.0/1.2.1 though:

*) Bugfix: nginx might loop infinitely over backends if the
   "proxy_next_upstream" directive with the "http_404" parameter was
   used and there were backup servers specified in an upstream block.

Are you sure you are using 1.4.1 on your frontend (note: it’s
usually not enough to check version of nginx binary on disk, as
running nginx binary may be different)? Could you please provide
frontend’s debug log?
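
One way to compare the two (a sketch; it assumes a Debian-style layout
with the pid file at /var/run/nginx.pid, and that the version string is
readable out of the master process image):

```shell
# Version of the nginx binary on disk (printed to stderr):
nginx -v

# Version baked into the *running* master process, which can differ if
# the binary was upgraded on disk but nginx was never restarted:
pid=$(cat /var/run/nginx.pid)
strings "/proc/$pid/exe" | grep -m1 'nginx/'
```

If the two version strings disagree, the old binary is still serving
traffic and a restart (or binary upgrade via USR2) is needed.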


Maxim D.
http://nginx.org/en/donation.html

Hi!

> used and there were backup servers specified in an upstream block.
>
> Are you sure you are using 1.4.1 on your frontend (note: it’s
> usually not enough to check version of nginx binary on disk, as
> running nginx binary may be different)? Could you please provide
> frontend’s debug log?

Quite right. I did update to 1.4.1 just afterwards.
2013-05-08 20:16:29 upgrade nginx 0.7.67-3+squeeze3 1.4.1-1~squeeze

I definitely restarted the nginx-1.4.1 with no remnants of 0.7.67 around
and haven’t had the troubles when I re-tested.

Thanks for the fix Maxim and digging up this changelog entry.

Looking forward to putting it into production in the next few hours. Any
troubles and I will grab a debug log for you.


Daniel Black