Reverse proxy performance woes

Hey all, I'm running into performance issues with nginx as a load balancer. I have three web servers and one load balancer server. All four of them have nginx installed and set up.

Using a stress-testing tool (JMeter) I am able to get 220 requests per second from any of the three web servers. When I go through the load balancer I cannot break 180/s.

All four of the servers are on the same /24 network, the stress test server is on another /24 network.

Any ideas on what it could be? 

--
Clint P.

On Wed, Sep 03, 2008 at 05:27:14PM -0500, Clint P. wrote:


What do you use as backends? If the backends are Java-based, it may be Java's accept-connection overhead: nginx currently does not use persistent connections to backends. Also, nginx as an intermediate link will always add some delay; however, I do not think that delay can be this large at such a low request rate (180 r/s vs. 220 r/s).
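For later readers: persistent upstream connections did eventually land in nginx, via the upstream `keepalive` directive (nginx 1.1.4+, well after this thread). A sketch of that setup, with hypothetical hostnames:

```nginx
# Sketch only: "keepalive" in an upstream block requires nginx 1.1.4+,
# newer than the version discussed here. Hostnames are hypothetical.
upstream backends {
    server web1.example.com;
    server web2.example.com;
    server web3.example.com;
    keepalive 32;   # keep up to 32 idle connections per worker
}

server {
    listen 80;
    location / {
        proxy_pass http://backends;
        proxy_http_version 1.1;           # keepalive needs HTTP/1.1 upstream
        proxy_set_header Connection "";   # clear "Connection: close"
    }
}
```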

Clint P. wrote:


Any ideas on what it could be?

I have a similar setup to yours, Clint. If the backend servers are indeed pushing that number of requests, then you should get at least that at the front end.

What I would look at is:

  • nginx settings (enough workers?)
  • any errors in the nginx error log (file handles?)

Turn off the keep-alive setting in your JMeter test (which I believe is the default anyway) so you can test on a level playing field.
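The checks above would show up in nginx.conf roughly like this (values are hypothetical; tune to your hardware):

```nginx
# Hypothetical tuning values, not a recommendation for this exact box.
worker_processes      4;      # roughly one per CPU core
worker_rlimit_nofile  8192;   # raise the file-handle limit if the error
                              # log complains about "too many open files"

events {
    worker_connections  2048; # per-worker connection ceiling
}
```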

I just checked the error log; no errors are being produced. I've got the load balancer set up with:

worker_processes    5;

error_log logs/error.log;

events {
    worker_connections  2048;
}

I have nginx on all three backend servers as well, so everything from the load balancer to the web servers is nginx, but I still can't break 180/s.

If I have three backend web servers that can each do 220/s, should I expect somewhere around 400-600/s out of the nginx load balancer?
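For context, a minimal round-robin setup of the kind described here (hostnames hypothetical) would look like:

```nginx
# Minimal round-robin load balancer; hostnames are hypothetical.
upstream web_pool {
    server web1.example.com;
    server web2.example.com;
    server web3.example.com;
}

server {
    listen 80;
    location / {
        proxy_pass http://web_pool;
    }
}
```

With plain round-robin spreading requests across three backends, the aggregate should in principle approach the sum of the backends minus proxy overhead, so stalling at 180/s suggests a bottleneck on or around the balancer rather than at the backends.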

Thanks,

-Clint


Alan W. wrote:

--
Clint P.

Just thought I'd post the final resolution to this. I'm running all of this in a virtualized environment with VMware and didn't have the vmware-tools installed. Now that they are installed, the load balancer is saturating the bandwidth serving simple PHP pages from the three backend servers.

Getting roughly 660/s through the load balancer.

--
Clint P.