Nginx serving wrong website under proxy_cache

hey guys,

So I had 3 sites configured to use caching on my nginx box (tried with the .50, .52 and .53 builds); however, two of the three websites kept redirecting me to the first site that I was doing caching for.

Example Sites:
0.0.0.1
0.0.0.2
0.0.0.3

The HTTP config is very similar to this, only each uses a different
"listen" IP address and each site has a different backend "origin" IP:

HTTP LB

upstream LB_HTTP_0.0.0.1 {
server x.x.x.x:80;
}

server {

    listen       0.0.0.1:80;
    server_name  website_0.0.0.1;

    access_log  /var/log/nginx/any.website_0.0.0.1.access_log  main;
    error_log   /var/log/nginx/any.website_0.0.0.1.error_log   info;

    error_page  404  /404.html;
    location = /404.html {
        root   html;
    }
    error_page  404 500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }

    location / {
        proxy_intercept_errors  on;
        proxy_pass              http://LB_HTTP_0.0.0.1;
        proxy_cache             one;
        proxy_cache_key         backend$request_uri;
        proxy_cache_valid       200  1h;
        proxy_cache_valid       404  5m;
        proxy_cache_use_stale   error timeout invalid_header;
    }

}

When browsing to 0.0.0.1, everything was fine. Going to 0.0.0.2 or 0.0.0.3, or any other IP in the same class C "/24" for that matter, would redirect you to the 0.0.0.1 config, which would then proxy-pass the data back to the origin belonging to the 0.0.0.1 config.

Has anyone seen this issue at all?
Once I commented out all the proxy_cache directives and restarted nginx, the problem went away.

Note that I even tried deleting /etc/nginx-cache/* and recreating
it… that did not help.

Thanks,

I have also noticed that requesting the domain set up for
0.0.0.1 forwards you to one of the domains for 0.0.0.2 or .3.

So I've had to turn off caching altogether, as it's 100% related to a
caching issue… I'm just not sure how to correct it.

Thanks
Payam

On Tue, Apr 28, 2009 at 07:27:00PM -0700, Payam C. wrote:

    listen       0.0.0.1:80;
    location = /50x.html {
        root   html;
    }

     location / {
     proxy_intercept_errors on;
     proxy_pass              http://LB_HTTP_0.0.0.1;
     proxy_cache             one;
     proxy_cache_key         backend$request_uri;

Do you have the same

       proxy_cache_key         backend$request_uri;

for all three sites? You need something like this:

       proxy_cache_key         0.0.0.1$request_uri;

Or you may create several caches:

proxy_cache_path /path/to/cache1 keys_zone=cache1:10m;
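To make the unique-key approach concrete, here is a minimal sketch for one of the other sites (the listen address and upstream name follow the naming in the configs above; they are placeholders, not a tested config):

```nginx
# Same shared zone "one", but each site prefixes the key with its own
# address, so entries from different sites can never shadow one another.
server {
    listen       0.0.0.2:80;
    location / {
        proxy_pass       http://LB_HTTP_0.0.0.2;
        proxy_cache      one;
        proxy_cache_key  0.0.0.2$request_uri;
    }
}
```

With many sites, `proxy_cache_key $host$request_uri;` (or `$scheme$host$request_uri`) gives the same per-site uniqueness without hand-editing each vhost.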

On Tue, Apr 28, 2009 at 11:40:03PM -0700, Payam C. wrote:

Hey Igor,

That was the problem… I was using "backend" for each of the configs,
so it was confused. Which would you recommend for stability and
scalability: attaching a dynamic variable to the proxy_cache_key, or
creating a separate cache for each site that requires caching?
Keep in mind that I'd like to support up to 500 sites per box if
possible, each doing anywhere from 1 Mbps to 10+ Mbps.

A unique proxy_cache_key is enough. The different proxy_cache_path's are
required just for simple administration: say, max_size, inactivity,
etc.
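For illustration, per-site zones might look like this (the paths, sizes and zone names are hypothetical):

```nginx
# One proxy_cache_path per site: each zone gets its own max_size and
# inactive timeout, and can be wiped independently of the others.
proxy_cache_path /var/cache/nginx/site1 keys_zone=site1:10m max_size=1g inactive=1h;
proxy_cache_path /var/cache/nginx/site2 keys_zone=site2:10m max_size=5g inactive=30m;

server {
    listen 0.0.0.2:80;
    location / {
        proxy_pass       http://LB_HTTP_0.0.0.2;
        proxy_cache      site2;
        proxy_cache_key  0.0.0.2$request_uri;
    }
}
```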


Payam Tarverdyan Chychi
Network Security Specialist / Network Engineer





I assume the same configuration rules also apply to fastcgi_proxy.

Do I need to use two fastcgi_proxy_key settings if a site serves both
http and https?



Correction:

The question should read:

Do I need to use two fastcgi_cache_key settings if a site serves both
http and https?


Igor S. wrote:

If you use the same backend - no:

server {
    listen 443;
    location / {
        fastcgi_pass       backend:9000;
        fastcgi_cache_key  backend:9000$request_uri;
    }
}

I am using the same backend and configured like this:

server {
    listen 80;
    location / {
        fastcgi_pass       backend;
        fastcgi_cache      one;
        fastcgi_cache_key  backend$request_uri;
    }
}

server {
    listen 443;
    location / {
        fastcgi_pass       backend;
        fastcgi_cache      one;
        fastcgi_cache_key  backend$request_uri;
    }
}

For what it may be worth, I have seen some md5 collisions in the error
log:

2009/05/03 00:39:18 [crit] 21997#0: *61 cache file
“/usr/local/nginx/cache/0/da/90e8de013d4126fbab247d12350fdda0” has md5
collision, client: my.ip.addr.ess, server: mydomain.com, request: “GET
/rtwhtrsyrn/010110A/687474702s7777772r777732746s7073697465732r636s6q2s627574746s6r2r7068703s753q776s726p6477617274776s7n6s6r655s636s6q
HTTP/1.1”, host: “mydomain.com”, referrer:
https://mydomain.com/rtwhtrsyrn/010110A/687474702s776s726p6477617274776s7n6s6r652r636s6q2s666s72756q732s616r6r6s756r63656q656r74732s31333535392q6r6s2q796s752q6172656r742q6372617n792r68746q6p
2009/05/03 00:39:24 [crit] 21997#0: *44 cache file
“/usr/local/nginx/cache/0/da/90e8de013d4126fbab247d12350fdda0” has md5
collision, client: my.ip.addr.ess, server: mydomain.com, request: “GET
/rtwhtrsyrn/010110A/687474702s7777772r777732746s7073697465732r636s6q2s627574746s6r2r7068703s753q776s726p6477617274776s7n6s6r655s636s6q
HTTP/1.1”, host: “mydomain.com”, referrer:
https://mydomain.com/rtwhtrsyrn/010110A/687474702s776s726p6477617274776s7n6s6r652r636s6q2s666s72756q732s2r2r2s

On Sat, May 02, 2009 at 11:16:47PM -0400, Jim O. wrote:

Correction:

The question should read:

Do I need to use two fastcgi_cache_key settings if a site serves both http and https?

If you use the same backend - no:

  server {
      listen 80;
      location / {
          fastcgi_pass       backend:9000;
          fastcgi_cache_key  backend:9000$request_uri;
      }
  }

  server {
      listen 443;
      location / {
          fastcgi_pass       backend:9000;
          fastcgi_cache_key  backend:9000$request_uri;
      }
  }

On Sun, May 03, 2009 at 01:18:30AM -0400, Jim O. wrote:

Do I need to use two fastcgi_cache_key settings if a site serves both
http and https?

server {
    listen 443;
    location / {
        fastcgi_pass       backend;
        fastcgi_cache      one;
        fastcgi_cache_key  backend$request_uri;
    }
}

Yes, this is OK.

2009/05/03 00:39:18 [crit] 21997#0: *61 cache file
“/usr/local/nginx/cache/0/da/90e8de013d4126fbab247d12350fdda0” has md5
collision […]

nginx uses an md5 of the cache key and uses that hash as the path to the
cache file: 90e8de013d4126fbab247d12350fdda0 in your case. Besides, the
file stores a crc32 of the original key to test for possible md5
collisions.
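As an aside, the key-to-path mapping can be sketched in a few lines of Python (this mimics the layout for `levels=1:2`; the key string and cache root are made up for illustration, and this is not nginx source):

```python
# Sketch of how nginx maps a cache key to a file path with levels=1:2:
# the file name is md5(key) in hex; the last hex char names the first
# directory level and the two chars before it name the second level.
import hashlib

def cache_file_path(cache_root: str, key: str) -> str:
    h = hashlib.md5(key.encode()).hexdigest()   # 32 hex characters
    return f"{cache_root}/{h[-1]}/{h[-3:-1]}/{h}"

# Hypothetical key; two keys sharing an md5 would collide here too,
# which is why nginx also stores a crc32 of the key inside the file.
print(cache_file_path("/usr/local/nginx/cache", "backend/index.html"))
```

This matches the path in the log above: the hash ends in "…da0", so the file lands in .../0/da/.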

Could you run

head -1 /usr/local/nginx/cache/0/da/90e8de013d4126fbab247d12350fdda0 |
hexdump
head -2 /usr/local/nginx/cache/0/da/90e8de013d4126fbab247d12350fdda0 |
tail -1

?

On Sun, May 03, 2009 at 02:05:57AM -0400, Jim O. wrote:

[root@saturn logs]# head -1
/usr/local/nginx/cache/0/da/90e8de013d4126fbab247d12350fdda0 | hexdump
0000000 8ab0 1afc 0000 0000 1a20 b835 0032 0000
0000010 0000 0000 0000 0000 0000 0000 0000 0000
0000020 028b 0000 0000 0000 000a
0000029
[root@saturn logs]# head -2
/usr/local/nginx/cache/0/da/90e8de013d4126fbab247d12350fdda0 | tail -1
KEY:
unix:/tmp/cgi.sock.1:/rtwhtrsyrn/010110A/687474702s7777772r777732746s7073697465732r636s6q2s627574746s6r2r7068703s753q776s726p6477617274776s7n6s6r655s636s6q

It seems like an nginx bug. Could you create a debug log?

I'm sure I could… if only I knew how. :)

Can you please give me the steps you want me to take?

It will have to wait a few hours. It’s ~ 0315 here and I’m going to bed.

Jim

On Sun, May 03, 2009 at 03:15:33AM -0400, Jim O. wrote:


./configure --with-debug …

nginx.conf:

error_log  /path/to/log  debug;

Or, if you can easily reproduce the bug with a single request:

events {
    debug_connection  your.ip.address;
}

On Sun, May 03, 2009 at 03:55:58PM -0400, Jim O. wrote:

I reproduced the error:

2009/05/03 15:34:05 [crit] 4845#0: *178 cache file
“/usr/local/nginx/cache/0/da/90e8de013d4126fbab247d12350fdda0” has md5
collision, client: 96.238.94.155, server: mydomain.com, request: “GET
/rtwhtrsyrn/010110A/687474702s7777772r777732746s7073697465732r636s6q2s627574746s6r2r7068703s753q776s726p6477617274776s7n6s6r655s636s6q
HTTP/1.1”, host: “mydomain.com

The attached patch should fix the bug.


I reproduced the error: the same md5 collision as in the log quoted above.

I did it shortly after a restart as the error log using debug got very
large very fast.

The relevant portion is attached. If you want to see the full 5.6M file
I can put it in a location where it can be downloaded. Just let me know.

Jim

Payam C. Wrote:

HTTP LB

upstream LB_HTTP_0.0.0.1 {
server x.x.x.x:80;
}

Can anyone please tell me who is going to serve the requests on
x.x.x.x:80 above? In other words, which process is actually going to
act as the backend?

Is that another nginx server section?

Thanks.

al
