Proxy to all backends

I have a Rails app sitting behind nginx, and every request is going to
both of the mongrel instances. Pages are getting through. However, there
are a few db-intensive tasks that I don’t want to run twice. I’ve been
playing with the proxy_*_timeout directives and have turned
proxy_next_upstream off, and it still manages to duplicate each request.
No errors are reported. A single access is recorded for a page, as well
as a single logging of a destination. Pages are returning with a status
of 304, and the load balancer is dutifully alternating between the two.
(When I shut one down, I get 502 Bad Gateway errors on alternate
requests.) I’ve included my conf file. I’m running 0.5.33. Any ideas
would be appreciated.

Thanks in advance,
John


user www-data;
worker_processes 2;

pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    sendfile on;
    tcp_nopush on;

    keepalive_timeout 65;
    tcp_nodelay off;

    gzip on;

    upstream myservers {
        server 127.0.0.1:4010;
        server 127.0.0.1:4011;
    }

    server {
        listen 80;
        root /var/www/qpd/public;

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        rewrite_log on;

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect false;

            client_max_body_size 10m;
            client_body_buffer_size 128k;

            proxy_connect_timeout 5;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffers 32 4k;
            proxy_next_upstream off;

            log_format timing
                '$remote_addr - $remote_user [$time_local] $request '
                'upstream_response_time $upstream_response_time '
                'msec $msec request_time $request_time sent to '
                '$upstream_addr $upstream_status';
            access_log /var/log/nginx/proxy.log timing;

            if (-f $request_filename/index.html) {
                rewrite (.*) $1/index.html break;
            }
            if (-f $request_filename.html) {
                rewrite (.*) $1.html break;
            }
            if (!-f $request_filename) {
                proxy_pass http://myservers;
                break;
            }
        }
    }
}

Posted at Nginx Forum:

On Thu, Oct 01, 2009 at 09:02:18PM -0400, jkolen wrote:

> I have a rails app sitting behind nginx and every request is going to both of the mongrel instances. Pages are getting through. However, there are a few db intensive tasks that I don’t want run two instances of. I’ve been playing with proxy_*_timeout and have turned proxy_next_upstream off and it still manages to duplicate each request. No errors are reported. A single access is recorded for a page as well as a single logging of a destination. Pages are returning with status of 304 and load balancer is dutifully alternating between the two. (When I shut one down I get 502-Bad Gateway errors on alternate requests.) I’ve included by conf file. I’m running 0.5.33. Any ideas would be appreciated.

Do these db intensive tasks have specific URLs ?

>     if (!-f $request_filename) {
>         proxy_pass http://myservers;
>         break;
>     }

You should replace these “if/rewrites” with simple “try_files”:

location / {
    try_files  $uri/index.html  $uri.html  $uri  @mongrel;
}

location @mongrel {
    proxy_pass  ...
    ...
}
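Filling in the elided parts is left to the reader; one hedged sketch of what the complete replacement might look like, reusing the upstream name and proxy settings from the config posted above:

```nginx
location / {
    try_files  $uri/index.html  $uri.html  $uri  @mongrel;
}

location @mongrel {
    # same proxy settings as in the original location / block
    proxy_set_header  X-Real-IP        $remote_addr;
    proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
    proxy_set_header  Host             $http_host;
    proxy_next_upstream  off;
    # "myservers" is the upstream defined earlier in the config
    proxy_pass  http://myservers;
}
```

try_files checks each argument in order against the filesystem and falls through to the named location only when no file matches, which replaces the three if/rewrite blocks in one directive.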

Yes, the db-intensive tasks have specific URLs. I’m more concerned with
the ‘send to everybody’ behavior. (With three upstreams, all three get
the request.) The db-intensive tasks were how I tracked down the
problem. Everything works fine with one upstream; my db-intensive jobs
finish before the timeout. I’d like to figure out why.

Also, try_files is not available in 0.5.33. But thanks for the pointer.
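Since the db-intensive tasks do have specific URLs, one stopgap (orthogonal to the duplication bug) is to pin just those paths to a single backend so they can never run on both mongrels at once. A sketch; the /reports prefix is hypothetical, and the port is one of the two mongrels from the config above:

```nginx
# Hypothetical: route the db-intensive URLs to a single mongrel,
# bypassing the round-robin upstream entirely.
location /reports {
    proxy_set_header  Host  $http_host;
    proxy_pass  http://127.0.0.1:4010;
}
```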


More info:

I tried the recent stable version (0.7.62) and it shows the same
behavior: every backend gets the request.

Replaced mongrel servers with webrick, and the problem disappears. Now,
one request gets sent to one backend, as it should.
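One way I checked which backend served each request: tally the $upstream_addr values written by the "timing" access log defined in the config above (the log path and 40xx ports are the ones from that config):

```shell
# Count how many requests each upstream address served, according
# to the timing log (proxy.log) from the config above.
count_upstreams() {
    grep -o '127\.0\.0\.1:40[0-9]\{2\}' "$1" | sort | uniq -c | sort -rn
}

# usage: count_upstreams /var/log/nginx/proxy.log
```

With round-robin working correctly, the counts for :4010 and :4011 should stay roughly equal; a backend receiving double the requests shows up immediately.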

FWIW, the mongrel version is 1.1.3.
