Re: Hold requests long enough for me to restart upstream?

Does anyone have some ideas about this? I have an update to do this
weekend that will require me to bounce the upstream, and it would be
great not to drop any connections in the process. Hoping to hear
something shortly. Thank you!!

On Fri, Mar 20, 2009 at 1:09 PM, Rt Ibmer [email protected] wrote:

Does anyone have some ideas about this? I have an update to do this weekend that will require me to bounce the upstream, and it would be great not to drop any connections in the process. Hoping to hear something shortly. Thank you!!

I thought nginx automatically tried another upstream (if you have
multiple…) - I could be wrong but it seemed like it to me and I
thought that was a neat feature…

If not, or if it doesn’t do it properly, I would say that is definitely a
good feature request (or bug to be fixed). You may also want to look into
HAProxy; I think that might be one of its features too.

On Fri, 2009-03-20 at 13:26 -0700, mike wrote:

On Fri, Mar 20, 2009 at 1:09 PM, Rt Ibmer [email protected] wrote:

Does anyone have some ideas about this? I have an update to do this
weekend that will require me to bounce the upstream, and it would be
great not to drop any connections in the process. Hoping to hear
something shortly. Thank you!!

I thought nginx automatically tried another upstream (if you have
multiple…) - I could be wrong but it seemed like it to me and I
thought that was a neat feature…

It does, but the OP only has a single upstream.

Cliff

On Fri, 2009-03-20 at 13:48 -0700, mike wrote:

Essentially it would be a loop but it would allow to retry the same
upstream (assuming nginx does not have an internal table that rejects
or malfunctions if you define the same upstream more than once)

This won’t work. The problem is that his backend isn’t unresponsive,
it’s down. That is, the backend socket isn’t even open. Timeouts
are for sockets that are “open but unresponsive”, not sockets that are
“closed”.

In this case Nginx would try each backend and instantly go to the next
one until it ran out of backends and then it would throw a 50x error.

Cliff

On Fri, Mar 20, 2009 at 3:32 PM, Cliff W. [email protected] wrote:

This won’t work. The problem is that his backend isn’t unresponsive,
it’s down. That is, the backend socket isn’t even open. Timeouts
are for sockets that are “open but unresponsive”, not sockets that are
“closed”.

In this case Nginx would try each backend and instantly go to the next
one until it ran out of backends and then it would throw a 50x error.

Idea: perhaps a directive to tell it how long to wait before skipping
to the next one for actual connection refused errors? (I would imagine
that would be what fail_timeout is… I figured -any- failure is
considered)

Hrm.

On Fri, 2009-03-20 at 17:24 -0700, mike wrote:

Idea: perhaps a directive to tell it how long to wait before skipping
to the next one for actual connection refused errors? (I would imagine
that would be what fail_timeout is… I figured -any- failure is
considered)

It’s actually “how long until Nginx decides the backend failed”. In the
case of a closed socket, there’s no decision to be made: it’s known
instantly to be failed.

The fundamental problem here is that the OP is trying to “fake” HA. If
he wants HA, he needs to have more than one backend to fail over to.

Regards,
Cliff

On Fri, 2009-03-20 at 13:09 -0700, Rt Ibmer wrote:

Does anyone have some ideas about this? I have an update to do this
weekend that will require me to bounce the upstream, and it would be
great not to drop any connections in the process. Hoping to hear
something shortly. Thank you!!

Is there a particular reason you can’t run a second instance of the
Jetty server on a different port? This would allow you to restart one,
let Nginx fail over to the second until the first comes back up, and then
restart the second. As a bonus, you’d probably see a moderate increase
in performance since Nginx would load-balance them in normal operations.

Cliff
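Cliff’s two-instance suggestion would look roughly like this (a sketch only; the upstream name, addresses, and ports here are made up, and the two Jetty instances would need to be configured to listen on those ports):

```nginx
upstream jetty_pool {
    # two Jetty instances on different ports
    server 127.0.0.1:8080 max_fails=1 fail_timeout=10s;
    server 127.0.0.1:8081 max_fails=1 fail_timeout=10s;
}

server {
    listen 80;
    location / {
        proxy_pass http://jetty_pool;
        # on a connection error or timeout, retry the other backend
        proxy_next_upstream error timeout;
    }
}
```

While one instance is down for its restart, Nginx marks it failed and sends everything to the other; in normal operation requests are round-robined between the two.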

On Fri, Mar 20, 2009 at 1:36 PM, Cliff W. [email protected] wrote:

It does, but the OP only has a single upstream.

Ah.

I wonder, but what if the OP put the same upstream down multiple times
in an upstream {} block with appropriate retry/timeout settings, like
so…

max_fails=1 fail_timeout=20s;

Essentially it would be a loop but it would allow to retry the same
upstream (assuming nginx does not have an internal table that rejects
or malfunctions if you define the same upstream more than once)
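For reference, the repeated-upstream idea would look something like this (purely hypothetical; the address and port are made up, and per the discussion above it doesn’t help when the socket is actually closed, since each entry fails instantly and Nginx falls through to a 50x):

```nginx
upstream jetty_loop {
    # the same backend listed three times as a crude retry loop
    server 127.0.0.1:8080 max_fails=1 fail_timeout=20s;
    server 127.0.0.1:8080 max_fails=1 fail_timeout=20s;
    server 127.0.0.1:8080 max_fails=1 fail_timeout=20s;
}
```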