100% CPU with upstream / backup server

Hi,

I’ve noticed a strange issue. I have a configuration like:


upstream _backend {
    server 10.0.0.1:80 down;
    server 10.0.0.2:80 down;
    server 127.0.0.1:80 backup;
}

The purpose is to disable all backends and switch to some kind of
maintenance mode (a sketch of what I have in mind for the backup is below).
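Roughly, the backup on 127.0.0.1:80 would be something like this (only a
sketch; the root path and page name are just examples):

server {
    listen 127.0.0.1:80;

    # answer everything with 503 and serve a static maintenance page
    error_page 503 /maintenance.html;

    location / {
        return 503;
    }

    location = /maintenance.html {
        root /var/www/maintenance;   # example path
    }
}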

However, with this config all workers go to very high CPU load after the
first request, and nginx no longer answers requests properly. When I kill
the workers, they reappear with the same high load.

Is such a setup not allowed?

kind regards,

matthias

Is such a setup not allowed?

This is a bug. I sent a patch for this (attached) a few weeks ago, but it
didn’t make it into nginx-0.8.32.

Best regards,
Piotr S.

Hello!

On Tue, Jan 26, 2010 at 05:39:02PM +0100, Matthias R. wrote:


Is such a setup not allowed?
Yes, with all servers marked as ‘down’, nginx is caught in an infinite
loop in ngx_http_upstream_get_peer(). I’ll take a look at how to fix
this properly.

Maxim D.

Hello!

On Tue, Jan 26, 2010 at 05:58:46PM +0100, Piotr S. wrote:

Is such a setup not allowed?

This is a bug. I sent a patch for this (attached) a few weeks ago, but
it didn’t make it into nginx-0.8.32.

[…]

    }

    if (i == rrp->peers->number) {
        return NGX_BUSY;
    }

    if (pc->tries == rrp->peers->number) {

        /* it's a first try - get a current peer */

This patch has at least two problems I see right now:

  1. It doesn’t handle backup peers.

  2. It does extra unneeded work on each request.

Maxim D.

Hello!

On Tue, Jan 26, 2010 at 06:31:43PM +0100, Piotr S. wrote:

This is done on purpose. This patch is part of a patch distributed
with ngx_supervisord, which enables you to start/stop backend servers
on demand.

But I agree that it could be optimized for the mainstream release with
something like peers->none;

Doing unneeded work on purpose scares me.

I believe the correct fix would be to integrate the relevant checks into
the peer selection loop. This way the extra work is avoided on the
normal path, and it will still be possible to eventually have
upstreams dynamically configured.

Something like this will do the trick:

--- a/src/http/ngx_http_upstream_round_robin.c
+++ b/src/http/ngx_http_upstream_round_robin.c
@@ -578,7 +578,7 @@ failed:
 static ngx_uint_t
 ngx_http_upstream_get_peer(ngx_http_upstream_rr_peers_t *peers)
 {
-    ngx_uint_t                    i, n;
+    ngx_uint_t                    i, n, reset = 0;
     ngx_http_upstream_rr_peer_t  *peer;

     peer = &peers->peer[0];
@@ -617,6 +617,10 @@ ngx_http_upstream_get_peer(ngx_http_upst
             return n;
         }

+        if (reset++) {
+            return 0;
+        }
+
         for (i = 0; i < peers->number; i++) {
             peer[i].current_weight = peer[i].weight;
         }

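To make the failure mode concrete, here is a standalone toy model of the
weight-reset loop (not the real nginx source, just a sketch): with every
peer down nothing is ever selectable, so without the guard the outer loop
resets the weights and retries forever, which is the 100% CPU spin from
the original report.

/*
 * Toy model only, NOT the nginx code: simplified peer selection with a
 * weight-reset loop and the reset guard from the patch above.
 */
#include <stdio.h>

typedef struct {
    int  weight;
    int  current_weight;
    int  down;              /* marked "down" in the upstream block */
} peer_t;

/* Returns a peer index, or -1 when nothing is usable.  (The actual patch
 * returns index 0 instead and lets the caller's own down/backup handling
 * take over.) */
static int
get_peer(peer_t *peer, int n)
{
    int  i, reset = 0;

    for ( ;; ) {
        for (i = 0; i < n; i++) {
            if (peer[i].down || peer[i].current_weight <= 0) {
                continue;
            }
            peer[i].current_weight--;
            return i;
        }

        /* No peer was selectable in this pass.  Without this guard the
         * loop would reset the weights and try again forever. */
        if (reset++) {
            return -1;
        }

        for (i = 0; i < n; i++) {
            peer[i].current_weight = peer[i].weight;
        }
    }
}

int
main(void)
{
    peer_t  peers[2] = {
        { 1, 1, 1 },        /* 10.0.0.1:80 down */
        { 1, 1, 1 },        /* 10.0.0.2:80 down */
    };

    printf("selected: %d\n", get_peer(peers, 2));   /* prints -1 */
    return 0;
}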
Maxim D.

Hi,

thanks for the answers, guys!

matthias

Hi Maxim,

This patch has at least two problems I see right now:

  1. It doesn’t handle backup peers.

You are right, fixed version attached.

  2. It does extra unneeded work on each request.

This is done on purpose. This patch is part of a patch distributed with
ngx_supervisord, which enables you to start/stop backend servers on
demand.

But I agree that it could be optimized for the mainstream release with
something like peers->none (rough sketch of the idea below).
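To spell out what I mean by peers->none (just a rough illustration, the
names are not real nginx fields): keep a counter of disabled peers next to
the peer list, so the "everything is down" case becomes a single O(1)
check per request instead of scanning the peers on every request.

/* Rough illustration only - not real nginx structures or names. */
#include <stdio.h>

typedef struct {
    unsigned  number;   /* total peers in the upstream block */
    unsigned  none;     /* peers currently marked down / stopped */
} peers_t;

/* would be updated whenever ngx_supervisord starts or stops a backend */
static void
set_peer_down(peers_t *peers, int down)
{
    peers->none += down ? 1 : -1;
}

static int
all_down(peers_t *peers)
{
    return peers->none == peers->number;
}

int
main(void)
{
    peers_t  peers = { 2, 0 };

    set_peer_down(&peers, 1);
    set_peer_down(&peers, 1);

    printf("all down: %d\n", all_down(&peers));   /* prints 1 */
    return 0;
}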

Best regards,
Piotr S.