Load balancing PHP via Nginx

So the time has come for us to add a second web server to our
configuration to help with the number of connections we’re getting. I’m
looking for some basic recommendations on configuring nginx. That is:

  1. Do I run exactly the same configuration on both boxes and load
     balance externally (i.e. nginx + php-fpm on each box + a dedicated
     MySQL server), or
  2. Do I run nginx + php-fpm on box A and route additional FastCGI
     requests to box B?

What would the configuration look like? How do I preserve sessions? We
are currently using memcached for session management and could place
that on the dedicated data server. What’s the “recommended” methodology?
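(For reference, pointing PHP sessions at a memcached instance on a dedicated data server is only a php.ini change; here is a sketch using the pecl memcache extension, with a placeholder host and port:)

```ini
; php.ini sketch: store sessions in memcached via the pecl "memcache"
; extension; the host/port below are placeholders for the data server
session.save_handler = memcache
session.save_path = "tcp://10.0.0.20:11211"
```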

Thanks!!!

I would recommend using haproxy if you have the budget for a separate
box as a load balancer. As for doing it via the 2nd method… why would
you want to do that? And the underlying assumption here - can nginx do
that? How would it determine when the load has been reached for “this”
box? (so that the rest become “additional” FastCGI requests)

-jf
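(A minimal haproxy configuration along the lines suggested above might look roughly like the following sketch; the addresses, backend names, and timeouts are illustrative placeholders, not from an actual setup:)

```
# haproxy sketch: dedicated load balancer round-robining HTTP
# to nginx on two web boxes (all addresses are placeholders)
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend webfarm

backend webfarm
    balance roundrobin
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```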

Maybe I’m not explaining myself correctly, maybe your suggestions are
the right way to go, but I see a lot of nginx examples such as this:

    upstream phpproviders {
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
    }

In this example, different port numbers are used, but you could use
different IP addresses instead.

Inside the location / block you would specify:

    proxy_pass http://phpproviders;

nginx in its simplest (default) mode would round-robin the requests.
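(One note for PHP backends specifically: the upstream block is usually paired with fastcgi_pass, speaking FastCGI directly to php-fpm, rather than proxy_pass. A sketch along those lines — the addresses and the root path are placeholders, and php-fpm on box B must have the same code at the same path for this to work:)

```nginx
# nginx sketch: round-robin FastCGI requests across php-fpm on
# both boxes (addresses and the root path are placeholders)
upstream phpproviders {
    server 127.0.0.1:9000;   # php-fpm on box A (this box)
    server 10.0.0.12:9000;   # php-fpm on box B
}

server {
    listen 80;
    root /var/www/app;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass phpproviders;
    }
}
```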

Is this not a good type of methodology?

Thanks

On Thu, Sep 3, 2009 at 11:10 AM, Ilan B. [email protected] wrote:

Maybe I’m not explaining myself correctly, maybe your suggestions are the
right way to go, but I see a lot of nginx examples such as this:
    upstream phpproviders {
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
    }

:) yeah, that works fine. I just saw the phrase “additional (fastcgi)
requests” - and immediately thought you meant to refer to a priority
system… (i.e. where “all requests go to this box. Until it’s loaded.
Then send the additional requests to that other box!”)

It depends really on what you want/need. If you want a simple setup,
this could do. And if there is nothing requiring you to stick each
request to any particular server (since you have session management in
memcache; assuming you have enough memory, and don’t have to forcibly
retire sessions ahead of their intended expiry time!), then this could
very well work for you.

-Jeff

The problem that we’re experiencing is that our single web server is
getting “flooded” (not in a bad way) with a lot of incoming connections;
our site is growing (yay!). So I’m trying to figure out the best way to
accommodate the growth. In our case, nginx itself is humming along just
fine, but PHP is choking on requests, both due to the quantity of
requests (it can’t process them fast enough, so there’s a growing queue)
and due to delays in the database (I think). So I’m upgrading the
database and looking to add another web server box, both to offload some
of the load and to serve as a backup box, etc.

What I don’t know how to do “well” yet is how to maintain the same code
base across the 2 boxes (will using a shared directory work?)
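(Since the symptom is a growing PHP queue, it may also be worth checking the php-fpm pool limits before, or while, adding hardware; a sketch with purely illustrative numbers:)

```ini
; php-fpm pool sketch: values are illustrative, not recommendations.
; Size max_children roughly as (RAM available to PHP) / (RAM per worker).
pm = dynamic
pm.max_children = 50
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 15
```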

Using nginx to do simple load balancing is, I think, the right way to go
for us for now. Obviously I’ll have to deal with session management
issues primarily, which can be handled either via a shared memcache
configuration (can you have 2 memcache servers running that share the
same space? – or on a single server) or via the database.
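(On the two-memcache-servers question: with the pecl memcache extension you can list several servers in session.save_path and the client hashes each session to one of them; the nodes do not replicate each other, so losing one node loses the sessions hashed to it. A sketch with placeholder hosts:)

```ini
; php.ini sketch: sessions hashed across two memcached nodes
; (placeholder hosts; the nodes do not replicate one another)
session.save_handler = memcache
session.save_path = "tcp://10.0.0.20:11211,tcp://10.0.0.21:11211"
```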

Any suggestions that anyone has to help things along would be greatly
appreciated.

I just love how easy nginx is in terms of configuration and how fast it
is; really, that hasn’t been the issue. I now have to get PHP to process
things more quickly :-).

On Thu, Sep 3, 2009 at 11:40 AM, Ilan B. [email protected] wrote:

The problem that we’re experiencing is that our single web server is
getting “flooded” (not in a bad way) with a lot of incoming connections, our
site is growing (yey). So I’m trying to figure out the best way to
accommodate the growth. In our case, nginx itself is humming along just
fine, but PHP is choking on requests both due to quantity of requests (it
can’t process them fast enough so there’s growing queue) as well as due to
delays in database (I think). So I’m upgrading the database and looking to
add another web server box both to off load some of the load, as well as
serve as a backup box, etc.

Well, it looks like you need to figure out where your bottleneck is.

What I don’t know how to do “well” yet, is how to maintain the code base
the same across the 2 boxes (will using a shared directory work?)

Sure. Better cache those files locally! This is where you’ll have to do
some work (i.e., “research”) into which shared file system to use.

-jf

