Nginx load balancer help

Hi all,

I need your help to set up nginx as a load balancer.
This is the hardware I plan to have: 3 web servers and 1 db server.

Right now I have 1 web server (nginx + php-fpm, serving PHP and static
files) and 1 db server (mysql). The web server load is high because of
too many online users. The db server is fine. My goal is to add 2 extra
web servers to lower the load. What setup do you recommend? Do you have
a tutorial for a load balancer setup with nginx?

Thanks a lot for your help.

Here’s what we have (simplified; some pieces such as MySQL slaves and
squid proxies have been left out…):

  1. 1 x Storage and DB server. Storage is on 10K RPM RAID 5. DB is on
     15K RPM RAID 10.

  2. 2 x PHP-FPM servers which are connected to the DB (mysql) and
     Storage (NFS). They have lots of memory and run xcache, so pretty
     much everything can be served out of memory.

  3. 2 x Front-end Nginx servers which are connected to the PHP-FPM
     servers and the Storage server (NFS). In reality we’re running
     squids (to manage ACLs etc…) + Nginx (URL rewriting etc) in front
     of these servers, which are in a private network.

  4. 1 x Backup server which runs a dedicated mysql slave and does rsync
    backups regularly.

DNS load balancing (and failover) takes care of directing each request
to one of the two front-end Nginx servers.
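
In practice that just means publishing one A record per front end; a
minimal zone-file sketch is below (the hostname and IPs are
placeholders, not our real ones):

    ; two A records for the same name; resolvers rotate between them
    www.example.com.    300    IN    A    192.0.2.11    ; front-end nginx #1
    www.example.com.    300    IN    A    192.0.2.12    ; front-end nginx #2

A short TTL keeps failover reasonably quick when one front end is
pulled out of the record set.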

Nginx load balancing takes care of where the PHP requests go. Static
files are sent out directly.
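
Roughly, the front-end config looks like the sketch below; the
hostnames, paths and IPs are made up for illustration, not copied from
our boxes:

    upstream php_backends {
        server 10.0.0.11:9000;   # php-fpm box 1
        server 10.0.0.12:9000;   # php-fpm box 2
    }

    server {
        listen 80;
        server_name example.com;

        # static files are served straight off the NFS mount by nginx
        root /srv/www/example;

        # only PHP requests are handed to the FPM pool
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME /srv/www/example$fastcgi_script_name;
            fastcgi_pass php_backends;
        }
    }

Since the front ends and the FPM servers mount the same NFS share,
SCRIPT_FILENAME resolves to the same path on whichever backend actually
executes the script.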

For further scaling, consider running MySQL on the FPM servers and
treating those instances as slaves for read queries. This will add to
the hardware requirements.
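
If you go that route, it is the standard MySQL master/slave replication
setup; a minimal sketch follows (server IDs, hostname, replication user
and password are all placeholders):

    # master (the existing DB server), my.cnf
    [mysqld]
    server-id = 1
    log-bin   = mysql-bin

    # slave (running alongside PHP-FPM), my.cnf
    [mysqld]
    server-id = 2
    read_only = 1

    -- on the master, create the replication account
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.0.%' IDENTIFIED BY 'secret';

    -- on the slave, point it at the master and start replicating;
    -- the log file and position come from SHOW MASTER STATUS on the master
    CHANGE MASTER TO
        MASTER_HOST='db.internal',
        MASTER_USER='repl',
        MASTER_PASSWORD='secret',
        MASTER_LOG_FILE='mysql-bin.000001',
        MASTER_LOG_POS=98;
    START SLAVE;

The application then has to be taught to send writes to the master and
reads to the local slave, which is where most of the work goes.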

Once you’ve split out your delivery, application and database layers,
you can then scale the bottleneck areas (e.g. add more DB servers, more
PHP servers, more front ends, or caching in front of the front ends).

We’ve probably overcomplicated the setup, but since we’re running
fairly old hardware I think it’s OK to be overcautious. Would love to
see this all scaled down into one nice blade server setup. :)

----- Original Message -----
From: “Floren M.” [email protected]
To: [email protected]
Sent: Thursday, April 02, 2009 4:42 PM
Subject: nginx load balancer help

I forgot to mention that I run CentOS 5.2 64-bit on all servers.

Hello,

This is the hardware I have:

  • Web server 1 (nginx + spawn-fcgi with PHP), IP: 1.1.1.1
  • Web server 2 (spawn-fcgi with PHP), IP: 1.1.1.2
  • Web server 3 (spawn-fcgi with PHP), IP: 1.1.1.3
  • 1 DB server (mysql)

Relevant piece of the nginx config (on 1.1.1.1):

upstream phpcgi {
    server 1.1.1.2:9000   weight=5 max_fails=10 fail_timeout=30s;
    server 1.1.1.3:9000   weight=5 max_fails=10 fail_timeout=30s;
    server 127.0.0.1:9000 weight=5 max_fails=10 fail_timeout=30s;
}
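
(For illustration, a location block that hands PHP requests to that
pool might look like the sketch below; /var/www/html is a placeholder
document root, and whatever path is used has to exist on 1.1.1.2 and
1.1.1.3 as well, since spawn-fcgi runs the script from the disk of
whichever backend gets the request:)

    location ~ \.php$ {
        include fastcgi_params;
        # path as seen by the backend that runs the script
        fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
        fastcgi_pass phpcgi;
    }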

Running siege tests, I got this output:

siege aborted due to excessive socket failure; you
can change the failure threshold in $HOME/.siegerc

Transactions:               567 hits
Availability:             31.78 %
Elapsed time:            214.11 secs
Data transferred:         14.38 MB
Response time:             0.78 secs
Transaction rate:          2.65 trans/sec
Throughput:                0.07 MB/sec
Concurrency:               2.07
Successful transactions:    610
Failed transactions:       1217
Longest transaction:       5.18
Shortest transaction:      0.05

Availability and transaction rate are too low. The problem, I think, is
that requests from the php-fcgi backends to the MySQL server are very
slow (pings from all backends to the MySQL server average 0.2 ms).

During the siege test, the load average on the frontend (1.1.1.1) and
the backend servers (1.1.1.2 and 1.1.1.3) was around 1. MySQL load is
OK.

The question is:
How have you achieved good performance between the backends and the
MySQL server? Is this just a network problem, or could there be other
causes?

Best regards,

Posted at Nginx Forum: