Hold requests long enough for me to restart upstream?


#1

I use nginx 0.6.31 with proxy_pass to front requests to a servlet
running in Jetty (the upstream).

Sometimes I need to update a jar on the upstream, which requires
restarting Jetty for the change to take effect.

I am looking for a way to tell nginx that if it gets a connection
failure at the upstream (which is what happens while Jetty is
restarting, since nothing is listening on the port during the restart),
it should wait, say, 20 seconds before returning an error to the
browser.

Certainly this will back up the processing a bit, but the delay should
be very short, as it only takes Jetty about 10 seconds to restart and
start listening again on its port. During slow periods we only get 2-5
requests per second, so nginx should have plenty of resources to queue
up these requests while it waits for Jetty.

Can someone please tell me what settings to use so that nginx will wait
up to 20 seconds for the upstream to restart, rather than returning an
error to the browser?

In the past I have tried setting all these to 20 seconds:
proxy_connect_timeout 20s;
proxy_send_timeout 20s;
proxy_read_timeout 20s;

but when I restarted Jetty, right away the nginx error logs started
showing errors like:
[error] 6445#0: *141102686 connect() failed (111: Connection refused)
while connecting to upstream

Are the above configuration parameters correct for what I am trying to
do and maybe I just didn’t set them right? Or is there some other way?

Basically, what I’m trying to do is set those timeouts high, tell nginx
to reload its config, bounce Jetty, and have nginx hold the requests
long enough for Jetty to come back up, so they all go through without
any being lost. After Jetty is restarted I would put the timeouts back
to a normal level like 3s, until the next time I have to do an update.

Either that, or if I could send nginx a signal telling it to accept
incoming connections but put them on “pause”, I could restart Jetty and
then unpause nginx once Jetty is back up, without dropping any
connections.

Hopefully I have explained clearly what I am after. Please let me know
your thoughts on the best way to do this.

Thank you!!


#2

On Thu, 2009-03-19 at 15:01 -0700, Rt Ibmer wrote:

> the request back to the browser.

> In the past I have tried setting all these to 20 seconds:
> proxy_connect_timeout 20s;
> proxy_send_timeout 20s;
> proxy_read_timeout 20s;

> but when I restarted Jetty, right away the nginx error logs started
> showing errors like:
> [error] 6445#0: *141102686 connect() failed (111: Connection refused)
> while connecting to upstream

That’s because there is nothing to connect to, which is different from a
timeout: the backend socket is closed, rather than open but
unresponsive.
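To illustrate the difference (a sketch in Python, not from the thread):
connecting to a closed port fails immediately with "connection refused",
so a generous connect timeout never even comes into play. A timeout only
happens when something accepts packets but never answers.

```python
import socket

# Find a local port with nothing listening on it: bind an ephemeral
# port, note its number, and close the socket again.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
closed_port = probe.getsockname()[1]
probe.close()

s = socket.socket()
s.settimeout(20)  # a generous timeout, like proxy_connect_timeout 20s
try:
    s.connect(("127.0.0.1", closed_port))
except ConnectionRefusedError:
    # The OS answers at once with a RST: no 20-second wait ever
    # happens, which is why the timeout directives do not help here.
    print("refused immediately")
finally:
    s.close()
```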

> Are the above configuration parameters correct for what I am trying to
> do and maybe I just didn’t set them right? Or is there some other way?

> Basically what I’m trying to do is set those settings high, tell nginx
> to reload its config, then bounce jetty, then have nginx hold the
> requests long enough to get through once jetty is back up and then
> have the requests go through to jetty without losing any requests.
> Then after jetty is restarted I would put the timeouts back to normal
> levels like 3s, until the next time I have to do an update.

Have you considered running two instances of Jetty rather than just one?
Then you could use the upstream directive to manage this.
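For example (untested sketch; the ports are my assumptions), with two
Jetty instances you could restart them one at a time and let nginx fail
over to whichever one is still listening:

```nginx
upstream jetty_backends {
    server 127.0.0.1:8081;    # Jetty instance A
    server 127.0.0.1:8082;    # Jetty instance B
}

server {
    location / {
        proxy_pass http://jetty_backends;
        # the default proxy_next_upstream (error timeout) already makes
        # nginx retry the other server on a refused connection
    }
}
```

Restart instance A, wait for it to come back, then restart instance B,
and no request ever sees a refused connection.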

Regards,
Cliff


#3

Rt Ibmer wrote:

> I use nginx 0.6.31 with proxy_pass to front end requests to a servlet running in Jetty (the upstream).

> Sometimes I need to update a jar on the upstream, which requires restarting jetty to take effect.

> I am looking for a way to tell nginx that if it gets a connection failure at the upstream (which is what happens when jetty is in the process of restarting since nothing is listening on that port during the restart) that it should give it say 20 seconds before erroring out the request back to the browser.

> Certainly this will back up the processing a bit, but it should be very short as it only takes Jetty about 10 seconds to restart and start listening again on its port. During slow periods we are only getting 2-5 requests per second so there should be plenty of resources for nginx to queue up these requests while it waits for Jetty.

> Can someone please tell me what settings I should use so that nginx will wait up to 20 seconds for the upstream to restart so that it doesn’t return an error to the browser?

This should be quite easy if you add another custom “backup” server.
In Nginx config (not tested):

upstream backend {
    server backend1.example.com;
    server 127.0.0.1:8080 backup;
}
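For the backup to come into play, the location would reference this
upstream (again a sketch, not tested):

```nginx
location / {
    proxy_pass http://backend;
    # the default proxy_next_upstream (error timeout) already makes
    # nginx try the backup server when the primary refuses connections
}
```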

In the custom backup server, all you need to do is pause each request
for 20 seconds; after that, just redirect to the same request URI.
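A minimal sketch of such a backup server (my choice of Python; the port
and the hold time are assumptions): it holds each request, then
redirects the client back to the same URI, by which time the primary
should be listening again and nginx will route the retried request
there.

```python
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

HOLD_SECONDS = 20  # roughly double Jetty's ~10 s restart time

class HoldAndRedirect(BaseHTTPRequestHandler):
    """Pause each request, then redirect the client to the same URI."""

    def do_GET(self):
        time.sleep(HOLD_SECONDS)        # hold the client while Jetty restarts
        self.send_response(302)         # then send it back to the same URI
        self.send_header("Location", self.path)
        self.end_headers()

    def log_message(self, fmt, *args):  # keep the console quiet
        pass

# To run it on the port from the upstream block above:
# HTTPServer(("127.0.0.1", 8080), HoldAndRedirect).serve_forever()
```

Note that this only handles GET; redirecting a POST would lose the
request body, so this trick is safest for idempotent requests.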

[…]

P.S.: your mail user agent does not wrap long lines

Regards,
Manlio