Forum: NGINX Re: Hold requests long enough for me to restart upstream?

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately do not have the time to support and maintain the forum any more. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
Rt Ibmer (Guest)
on 2009-03-20 22:19
(Received via mailing list)
Does anyone have some ideas about this? I have an update to do this
weekend that will require me to bounce the upstream, and it would be
great not to drop any connections in the process.  Hoping to hear
something shortly.  Thank you!!
mike (Guest)
on 2009-03-20 22:32
(Received via mailing list)
On Fri, Mar 20, 2009 at 1:09 PM, Rt Ibmer <removed_email_address@domain.invalid> wrote:
>
> Does anyone have some ideas about this? I have an update to do this weekend that will
> require me to bounce the upstream, and it would be great not to drop any connections in
> the process.  Hoping to hear something shortly.  Thank you!!

I thought nginx automatically tried another upstream (if you have
multiple...) - I could be wrong but it seemed like it to me and I
thought that was a neat feature...

If not, or if it does not do so properly, I would say that is definitely
a good feature request (or bug to be fixed). You may also look into
HAProxy; I think that might be one of its features too.
Cliff W. (Guest)
on 2009-03-20 22:45
(Received via mailing list)
On Fri, 2009-03-20 at 13:26 -0700, mike wrote:
> On Fri, Mar 20, 2009 at 1:09 PM, Rt Ibmer <removed_email_address@domain.invalid> wrote:
> >
> > Does anyone have some ideas about this? I have an update to do this
> > weekend that will require me to bounce the upstream, and it would be
> > great not to drop any connections in the process.  Hoping to hear
> > something shortly.  Thank you!!
>
> I thought nginx automatically tried another upstream (if you have
> multiple...) - I could be wrong but it seemed like it to me and I
> thought that was a neat feature...

It does, but the OP only has a single upstream.


Cliff
mike (Guest)
on 2009-03-20 22:55
(Received via mailing list)
On Fri, Mar 20, 2009 at 1:36 PM, Cliff W. <removed_email_address@domain.invalid> wrote:
> It does, but the OP only has a single upstream.

Ah.

I wonder, though: what if the OP put the same upstream down multiple
times in an upstream {} block with appropriate retry/timeout settings,
like so...

max_fails=1  fail_timeout=20s;

Essentially it would be a loop, but it would allow nginx to retry the
same upstream (assuming nginx does not have an internal table that
rejects the duplicates, or malfunctions, if you define the same
upstream more than once)
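The suggestion above might look something like this (an untested sketch; the backend address and port are assumptions, and whether nginx tolerates duplicate server entries is exactly the open question):

```nginx
# Hypothetical sketch of the "list the same backend several times"
# workaround.  Each entry is marked failed after one failure and
# retried after 20 seconds.
upstream backend {
    server 127.0.0.1:8080 max_fails=1 fail_timeout=20s;
    server 127.0.0.1:8080 max_fails=1 fail_timeout=20s;
    server 127.0.0.1:8080 max_fails=1 fail_timeout=20s;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```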
Cliff W. (Guest)
on 2009-03-21 00:43
(Received via mailing list)
On Fri, 2009-03-20 at 13:48 -0700, mike wrote:
>
> Essentially it would be a loop but it would allow to retry the same
> upstream (assuming nginx does not have an internal table that rejects
> or malfunctions if you define the same upstream more than once)

This won't work.  The problem is that his backend isn't unresponsive,
it's *down*.   That is, the backend socket isn't even open.   Timeouts
are for sockets that are "open but unresponsive", not sockets that are
"closed".

In this case Nginx would try each backend and *instantly* go to the next
one until it ran out of backends and then it would throw a 50x error.

Cliff
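To illustrate Cliff's point with the relevant directives (values here are examples, not the OP's config): the proxy timeouts only come into play while a socket is open, so they buy nothing when the backend's socket is closed.

```nginx
# Sketch: why timeouts don't help against a *down* backend.
location / {
    proxy_pass http://backend;
    proxy_connect_timeout 10s;  # never reached on "connection refused" -
                                # a closed socket fails instantly
    proxy_read_timeout    60s;  # applies to an open but silent socket
    proxy_next_upstream   error timeout;  # a refused connection counts as
                                # an error, so nginx skips ahead immediately
}
```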
mike (Guest)
on 2009-03-21 02:31
(Received via mailing list)
On Fri, Mar 20, 2009 at 3:32 PM, Cliff W. <removed_email_address@domain.invalid> wrote:

> This won't work.  The problem is that his backend isn't unresponsive,
> it's *down*.   That is, the backend socket isn't even open.   Timeouts
> are for sockets that are "open but unresponsive", not sockets that are
> "closed".
>
> In this case Nginx would try each backend and *instantly* go to the next
> one until it ran out of backends and then it would throw a 50x error.

Idea: perhaps a directive to tell it how long to wait before skipping
to the next one for actual connection refused errors? (I would imagine
that would be what fail_timeout is... I figured -any- failure is
considered)

Hrm.
Cliff W. (Guest)
on 2009-03-21 02:40
(Received via mailing list)
On Fri, 2009-03-20 at 17:24 -0700, mike wrote:
> Idea: perhaps a directive to tell it how long to wait before skipping
> to the next one for actual connection refused errors? (I would imagine
> that would be what fail_timeout is... I figured -any- failure is
> considered)

It's actually "how long until Nginx decides the backend failed".  In the
case of a closed socket, there's no decision to be made: it's known
instantly to be failed.

The fundamental problem here is that the OP is trying to "fake" HA.   If
he wants HA, he needs more than one backend to fail over to.

Regards,
Cliff
Cliff W. (Guest)
on 2009-03-21 02:45
(Received via mailing list)
On Fri, 2009-03-20 at 13:09 -0700, Rt Ibmer wrote:
> Does anyone have some ideas about this? I have an update to do this
> weekend that will require me to bounce the upstream, and it would be
> great not to drop any connections in the process.  Hoping to hear
> something shortly.  Thank you!!

Is there a particular reason you can't run a second instance of the
Jetty server on a different port?   This would allow you to restart one,
let Nginx fail over to the second until the first comes back up, and then
restart the second.   As a bonus, you'd probably see a moderate increase
in performance, since Nginx would load-balance them in normal operation.

Cliff
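Cliff's two-instance suggestion might be sketched like this (the ports are assumptions for illustration):

```nginx
# Run two Jetty instances and restart them one at a time.  While one
# is down, nginx fails over to the other; in normal operation it
# load-balances between them.
upstream jetty {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
}

server {
    listen 80;
    location / {
        proxy_pass http://jetty;
    }
}
```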
This topic is locked and cannot be replied to.