Increasing timeouts causes browser to endlessly wait for a response

Hi,

I’m running nginx 0.7.24 along with Mongrel. On an admin, non-public-facing
site, I have tasks that involve a lot of computation and can take 15-30
minutes to run.

I’ve configured nginx to wait a long while for a response from Mongrel,
but even though Mongrel completes the task, nothing is ever returned to
the browser. It just sits there as if the operation were still running.

My full configuration is below. These are the directives I’m using to
increase the timeouts:
proxy_connect_timeout 3600;
proxy_send_timeout 3600;
proxy_read_timeout 3600;
keepalive_timeout 3600;
proxy_next_upstream off;

I’ve been trying to get this to work for over a day, so any hints are
greatly appreciated.
Cheers!


http {
include mime.types;
default_type application/octet-stream;

log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                  '"$status" $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';

access_log  logs/access.log  main;

sendfile        on;

gzip  on;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_proxied any;
gzip_types      text/plain text/html text/css application/x-javascript
                text/xml application/xml application/xml+rss text/javascript;

upstream mongrel {
  server 127.0.0.1:8000;
  server 127.0.0.1:8001;
  server 127.0.0.1:8002;
}

server {
    listen       3000;
    server_name  blah.com;
    root   /blah/cache;

    if (-f $document_root/system/maintenance.html) {
      rewrite  ^(.*)$  /system/maintenance.html last;
      break;
    }

    location / {
        proxy_set_header  CLIENT_IP  $remote_addr;
        proxy_set_header  Host $http_host;

        proxy_connect_timeout      3600;
        proxy_send_timeout         3600;
        proxy_read_timeout         3600;
        keepalive_timeout          3600;
        proxy_next_upstream off;

        if (-f $request_filename) {
          break;
        }

        if (-f $request_filename/index.html) {
          rewrite (.*) $1/index.html break;
        }

        if (-f $request_filename.html) {
          rewrite (.*) $1.html break;
        }

        if (!-f $request_filename) {
          proxy_pass        http://mongrel;
          break;
        }
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
}

}

Not nginx related, but have you thought about redesigning the
architecture so that the initial request just queues the job, and the
browser is then told to refresh every so often to check its status until
it completes?

That seems more scalable than an app that relies on the web connection
staying open for 30-60+ minutes.
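
To make that concrete, here is a rough sketch of the queue-and-poll
pattern as a tiny self-contained Rack app (Ruby, since that is what
Mongrel is serving). Everything in it is made up for illustration: the
/start and /status paths, the in-memory JOBS hash, and the 10-second
sleep standing in for the real computation. It also assumes an older
Rack (pre-3.0) where Rack::Handler::WEBrick is still available; in a
real Rails setup the work would go to a proper background worker rather
than an in-process thread.

# queue_and_poll.rb - illustrative sketch only, not the poster's app
require 'rack'
require 'securerandom'
require 'thread'

JOBS = {}                  # job id => :running or :done (in-memory, one process only)
JOBS_LOCK = Mutex.new

app = lambda do |env|
  req = Rack::Request.new(env)

  case req.path_info
  when '/start'
    # Kick the long computation off in the background and answer right away,
    # so the HTTP connection is only held open for a moment.
    id = SecureRandom.hex(8)
    JOBS_LOCK.synchronize { JOBS[id] = :running }
    Thread.new do
      sleep 10             # stand-in for the 15-30 minute computation
      JOBS_LOCK.synchronize { JOBS[id] = :done }
    end
    [202, {'Content-Type' => 'text/plain'},
     ["queued as #{id}, poll /status?id=#{id}\n"]]

  when '/status'
    state = JOBS_LOCK.synchronize { JOBS[req.params['id']] }
    if state == :done
      [200, {'Content-Type' => 'text/plain'}, ["done\n"]]
    else
      # The Refresh header makes most browsers re-request this page every
      # 5 seconds instead of holding one connection open for the whole job.
      [200, {'Content-Type' => 'text/plain', 'Refresh' => '5'},
       ["still running\n"]]
    end

  else
    [404, {'Content-Type' => 'text/plain'}, ["not found\n"]]
  end
end

Rack::Handler::WEBrick.run(app, :Port => 9292)

With something like this behind nginx, the proxy timeouts can stay at
their defaults, because no single request ever runs longer than a few
seconds.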