proxy_next_upstream, only "connect" timeout? (try 2)

regarding my original email:
http://article.gmane.org/gmane.comp.web.nginx.english/34175

i assume the silence means there is no such way.

would it be hard to implement it? i can try it, but i’d need to know
if it’s at least possible or not,
if there are the necessary ‘hooks’ in place and such…

thanks,
gabor

Hello!

On Fri, Jun 15, 2012 at 09:36:39AM +0200, Gábor Farkas wrote:

> regarding my original email:
> http://article.gmane.org/gmane.comp.web.nginx.english/34175
>
> i assume the silence means there is no such way.
>
> would it be hard to implement it? i can try it, but i’d need to know
> if it’s at least possible or not,
> if there are the necessary ‘hooks’ in place and such…

We probably need something more generic, i.e. some distinction
between idempotent and non-idempotent cases in
proxy_next_upstream. This should allow retrying GET/HEAD at any
point, while keeping POSTs safe.

Maxim D.

On Fri, Jun 15, 2012 at 10:02 AM, Maxim D. [email protected] wrote:

>> if it’s at least possible or not,
>> if there are the necessary ‘hooks’ in place and such…
>
> We probably need something more generic, i.e. some distinction
> between idempotent and non-idempotent cases in
> proxy_next_upstream. This should allow retrying GET/HEAD at any
> point, while keeping POSTs safe.

i agree, but i would still prefer to be able to specify the
on-connect-timeout-only behaviour too. there are some cases where i do
not want to repeat even a GET request, and generally it is safer for me
not to repeat anything that has already ‘reached’ the upstream.

you see, my specific problem is that i have multiple upstreams, and i
want nginx to go to the next upstream when an upstream’s socket backlog
is full, and currently i am unable to do this…
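for reference, a minimal sketch of this kind of setup (upstream name and
addresses are just placeholders):

    upstream backends {
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backends;
            # "timeout" here also covers read timeouts, so a request that
            # already reached one upstream may be replayed on the next
            proxy_next_upstream error timeout;
        }
    }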

gabor

I’m interested in the subject.
In my current setup, I’m using the timeout option to make sure the
request is passed to the next upstream if the first server is down.
If a request is hanging, it’s possible the incoming request is a bad
request, which could slow down the whole cluster.
I would like to go to the next upstream only on connection timeout.
I wonder if you could provide two additional options, “read_timeout” and
“connect_timeout”, leaving “timeout” unchanged.
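For illustration, that kind of timeout-based setup might look roughly like
this (directive values are only examples, "backends" is a placeholder); with
it, a hanging request is replayed on the next server as well, which is what
the proposed split would avoid:

    location / {
        proxy_pass http://backends;
        # "timeout" covers connect timeouts and read timeouts alike
        proxy_next_upstream error timeout;
        proxy_connect_timeout 2s;
        proxy_read_timeout 60s;
    }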


Hello!

On Fri, Jun 15, 2012 at 10:07:35AM +0200, Gábor Farkas wrote:

> would it be hard to implement it? i can try it, but i’d need to know
> there are some cases where i do not want to repeat even a GET request,
> and generally it is safer for me not to repeat anything that has
> already ‘reached’ the upstream.

Sure, this needs some way to configure which things should be
considered idempotent, as HTTP methods are often misused.

> you see, my specific problem is that i have multiple upstreams, and i
> want nginx to go to the next upstream when an upstream’s socket backlog
> is full, and currently i am unable to do this…

A practical solution for the specific problem of a full listen queue
is to instruct your OS to return RST in this case (this is usually
the default behaviour, but Linux seems to be an exception) and to use
“proxy_next_upstream error”. You may also try a small
proxy_connect_timeout combined with long proxy_read_timeout and
proxy_send_timeout values.
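A sketch of that advice in configuration terms (the sysctl applies to
Linux; the upstream name is a placeholder):

    # Linux only: send RST instead of silently dropping the connection
    # when the listen queue is full, so nginx sees an immediate error:
    #   sysctl -w net.ipv4.tcp_abort_on_overflow=1

    location / {
        proxy_pass http://backends;
        # with the RST behaviour above, a full backlog shows up as a
        # plain connection error, which this condition covers:
        proxy_next_upstream error;
        # the alternative approach: keep "timeout" in the list, but make
        # connect timeouts fire much sooner than read/send timeouts
        # proxy_next_upstream error timeout;
        # proxy_connect_timeout 1s;
        # proxy_read_timeout 300s;
        # proxy_send_timeout 300s;
    }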

This still has a theoretical problem though: an error might occur
for some reason while sending the request to an upstream, and that
will trigger use of the next upstream server.

Maxim D.