Re: Fair Proxy Balancer

Thank you. These are excellent points. In my case, all upstream servers
share the same responsibility for the types of requests they serve.
I guess I am looking at ‘fair’ more as a way to auto-tune the weighting
based on the relative performance of each upstream.

I am hosting within the Amazon EC2 network. Because of fluctuations in
their virtualized environment and underlying systems, it is very
possible to have some backends performing poorly compared to others.

For instance, imagine a scenario where I have 3 virtualized servers
running on EC2 as my upstream boxes. These three servers may actually be
(and most likely are) on different physical hosts. Now assume one of
those physical hosts has a problem that affects the performance of all
the virtual machines it is hosting (perhaps it is networking related, or
perhaps it slows down the whole machine).

Now my upstream server on that troubled box will be running at a much
lower level of performance than my other upstreams, and this will show
up on the bottom line as a much higher average total response time in ms
(the time it takes to connect to the upstream and receive its full
response) compared to the others.

So in my case, I would like to use ‘fair’ almost as a way to maximize
site performance based on the health of the systems. Under heavy load I
think ‘fair’ would likely do this, as requests to the slower box would
get backed up and that would be reflected in the weighting. But under a
light load it probably would not: ‘fair’ would still route requests to a
server that may take 500ms longer to reply, simply because that server
has no backlog.

Anyway, I realize that you did not write ‘fair’ to solve this; I just
wanted to provide this feedback in case it spurs some ideas for how to
expand it to cover this usage scenario. Thank you for the opportunity to
provide the feedback, and for your great contributions to the nginx
project!


Isn’t the point of using Amazon EC2 exactly that you get a guarantee
that your virtual machine will not be affected by the normal problems
of hosting it on a single machine? I mean, they say that they keep a
copy of your virtual machine on more than one server, in maybe more
than one location. I may be wrong here, but from what I have read you
should not have to worry about your instance shutting down or slowing
down drastically, because what happens if I only have one instance and
I rely on that?

Kiril

Kiril,

You would think that would be the case, but it’s not.
EC2 is actually only useful in the sense that you can bring online “n”
number of servers at any time, and take them offline at any time. You
can use 1 server one day, 100 servers the next day, 1000 servers 3
hours later, and be back down to 1 server an hour after that.

The actual performance of said servers can definitely fluctuate.

BJ Clark

Wouldn’t setting your proxy timeouts low solve this scenario? If there
is an application server with poor performance, it will be marked as
failed and nginx will divert requests to your other 10/100/1000 servers.
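
Something along these lines, as a sketch; the timeout values are just
illustrative, so tune them to your own latency budget:

    # Sketch only; values are illustrative. Low timeouts make nginx
    # give up on a slow backend quickly and try the next one.
    upstream ec2_backends {
        # after 3 failures within 30s, skip this server for 30s
        server backend1.example.com max_fails=3 fail_timeout=30s;
        server backend2.example.com max_fails=3 fail_timeout=30s;
    }

    server {
        location / {
            proxy_pass            http://ec2_backends;
            proxy_connect_timeout 2s;  # fail fast on connect
            proxy_read_timeout    5s;  # fail fast on a stalled response
            proxy_next_upstream   error timeout;  # retry on the next server
        }
    }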

j.

This is an interesting use case. Further to this, I have also been
looking at triggers that would factor into a decision to create,
terminate, or terminate and create (replace) instances. Latency is
certainly one of these. Latency tracking in the fair balancer is quite
interesting, and it would be great with a hook that would allow it to be
captured by your monitoring server.
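
In the meantime, one stock way to get at per-request upstream latency
(without a dedicated hook in the balancer) is nginx’s own access log;
a sketch, with the log path and format name being arbitrary:

    # Sketch: log per-request upstream latency so an external monitor
    # can tail and aggregate it; format name and path are arbitrary.
    log_format upstream_timing '$remote_addr [$time_local] "$request" '
                               'upstream=$upstream_addr '
                               'upstream_time=$upstream_response_time '
                               'total_time=$request_time';

    access_log /var/log/nginx/upstream_timing.log upstream_timing;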

Regards,
David