Thank you. These are excellent points. In my case all upstream servers
share the same responsibility for the types of requests that are served.
I guess I am looking at ‘fair’ more as a way to auto-tune the weighting
based on the relative performance of each upstream.
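For reference, the kind of setup being discussed looks roughly like the block below. This is a hedged sketch, not my actual config: the ‘fair’ directive comes from the third-party upstream_fair module, and the server addresses are placeholders standing in for the three EC2 boxes.

```
upstream backends {
    fair;                  # third-party upstream_fair module replaces round-robin
    server 10.0.0.1:8080;  # placeholder addresses for the three EC2 upstreams
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

server {
    location / {
        proxy_pass http://backends;
    }
}
```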
I am hosting within the Amazon EC2 network. Because of fluctuations in
their virtualized environment and underlying systems, it is very
possible for some backends to perform poorly compared to others.
For instance, imagine a scenario where I have three virtualized
servers on EC2 acting as my upstream boxes. These three
servers may actually be (and are most likely) on different physical
servers. Now assume one of those physical hosts has a problem that
affects the performance of every virtualized server it is hosting
(perhaps it is networking related, or perhaps it slows the whole machine).
Now my upstream server on that troubled box will be running at a much
lower level of performance than my other upstreams, and this will show
up on the bottom line as a much higher average total response time in ms
(the time it takes to connect to the upstream and receive its full
response) compared to the others.
So in my case, I would like to use ‘fair’ almost as a way to maximize
site performance based on the health of the systems. Under heavy load I
think ‘fair’ would likely do this, as requests to the slower box would
get backed up and that would be reflected in the weighting. But under a
light load it probably would not, so ‘fair’ would still route requests
to a server that may take 500ms longer to reply, simply because no
queue builds up to reveal the difference.
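To make the idea concrete, here is a minimal sketch of the behavior I am describing: weight each upstream by the inverse of a smoothed (exponentially weighted) average of its total response time, so a consistently slower box receives fewer requests even when no queue builds up. The class and parameter names here are hypothetical illustration, not anything in the module:

```python
import random

class LatencyAwareBalancer:
    """Hypothetical sketch: route proportionally less traffic to
    upstreams whose smoothed response time (EWMA, in ms) is higher,
    independent of whether requests are currently queued."""

    def __init__(self, servers, alpha=0.2):
        self.alpha = alpha
        # Start every server with the same assumed latency (ms).
        self.ewma = {s: 100.0 for s in servers}

    def record(self, server, response_ms):
        # Exponentially weighted moving average of total response time
        # (connect + full response), updated after each request.
        old = self.ewma[server]
        self.ewma[server] = (1 - self.alpha) * old + self.alpha * response_ms

    def pick(self):
        # Weight each server by the inverse of its smoothed latency,
        # then select one by weighted random choice.
        weights = {s: 1.0 / ms for s, ms in self.ewma.items()}
        r = random.uniform(0, sum(weights.values()))
        for s, w in weights.items():
            r -= w
            if r <= 0:
                return s
        return s
```

Under these assumptions, a box answering in 600ms instead of 100ms would settle at roughly one sixth of the per-request weight of its healthy peers, regardless of load.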
Anyway, I realize that you did not write ‘fair’ to solve this; I just
wanted to provide this feedback in case it spurs some ideas for
expanding it to cover this usage scenario. Thank you for the
opportunity to provide the feedback and for your great contributions to
the nginx project!