Hardware load balancer + apache 2.2 proxy + mongrels

hi,

we’re currently planning the deployment of a new rails application.
everything will be behind a hardware load balancer (existing
infrastructure). now the question is which setup to use:

  1. hardware load balancer talks to (one or more) apache 2.2 with
    mod_proxy balancer, each talks to many mongrel processes.
    that would give us all of apache’s features to use.
    disadvantage: we wonder if the load-balancer behind load-balancer setup
    would be a good idea?

  2. have the hardware load-balancer talk to the mongrels directly.
    disadvantage: there’s no full blown web server to do some configs that
    you don’t want to handle in rails (mod_rewrite, mod_access_xxx, etc.
    etc.)
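for illustration, the apache side of option 1 would look roughly like this (just a sketch - the balancer name and mongrel ports are made-up examples, not a tested config):

```apache
# hypothetical apache 2.2 vhost fragment; ports are examples
<Proxy balancer://mongrels>
  BalancerMember http://127.0.0.1:8000
  BalancerMember http://127.0.0.1:8001
  BalancerMember http://127.0.0.1:8002
</Proxy>

ProxyPass / balancer://mongrels/
ProxyPassReverse / balancer://mongrels/
```

the hardware lb would then only see the apache instances, while apache distributes requests among the local mongrels.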

does anyone have experience with one of the setups above? we’d be very
happy if we could learn from that.

cheers,
phillip

Option 1 should work fine as long as you maintain session state in the
DB or via some other machine-independent means (which you are probably
already doing if you are using multiple mongrel processes).
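In Rails 1.x that typically means something like the following in config/environment.rb (a hypothetical fragment; it assumes the standard ActiveRecord session store with its sessions table already created):

```ruby
# config/environment.rb (Rails 1.x) -- hypothetical fragment
# keep sessions in the database so any mongrel on any node can read them
Rails::Initializer.run do |config|
  config.action_controller.session_store = :active_record_store
end
```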

V/r
Anthony E.


Anthony E. wrote:

Option 1 should work fine as long as you maintain session state in the
DB or via some other machine-independent means (which you are probably
already doing if you are using multiple mongrel processes).

yes, sessions will be held in a store which all mongrels will share.

we’re reluctant to go with option 1, as the hw load balancer will not
“see” the mongrel instances it has to balance, so it doesn’t get
direct feedback. so we suspect the total throughput would be lower. of
course the obvious thing would be to configure both setups and measure,
but i was hoping someone already had that experience.

thanks,
phil

On Dec 8, 2006, at 7:14 AM, Phillip O. wrote:

we’re reluctant to go with option 1, as the hw load balancer will not
“see” the mongrel instances it has to balance, so it doesn’t get
direct feedback. so we suspect the total throughput would be lower. of
course the obvious thing would be to configure both setups and measure,
but i was hoping someone already had that experience.

thanks,
phil

Hey Phil-

We are doing pretty much number 1 except with nginx instead of
apache. Our setup is like this:

hardware load balancers -> nginx on each node -> mongrel_cluster.
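The mongrel_cluster side of each node is driven by a small YAML file, something like this (ports and paths here are examples, not our real values):

```yaml
# hypothetical config/mongrel_cluster.yml for one app node
cwd: /var/www/app/current
environment: production
address: 127.0.0.1
port: 8000       # first port; servers below binds 8000-8002
servers: 3
pid_file: tmp/pids/mongrel.pid
```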

This works superbly. Unless there is some specific feature of apache
that you have to have, I recommend using nginx instead of
apache 2.2. The nginx proxy module is much faster than mod_proxy_balancer.
Also nginx uses almost no resources, compared to apache, which
uses resources heavily. We have close to 100 nodes set up exactly like
this and have had very good results from it.

We did experiments with going directly to mongrel from the load
balancers but the performance difference was minimal. And nginx
serves static files very fast, much faster than plain mongrel.
Overall it’s a performance win to use nginx in front of mongrel
instead of going straight to mongrel, because static files are served many
times faster. You really don’t want mongrel serving any static files
if possible.
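A stripped-down sketch of the nginx side (ports and paths are hypothetical; the -f check is what keeps static files away from mongrel):

```nginx
# hypothetical nginx fragment for one app node
upstream mongrel_cluster {
  server 127.0.0.1:8000;
  server 127.0.0.1:8001;
  server 127.0.0.1:8002;
}

server {
  listen 80;
  root /var/www/app/current/public;

  location / {
    # serve the file directly if it exists on disk,
    # otherwise proxy to the mongrels
    if (-f $request_filename) {
      break;
    }
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    proxy_pass http://mongrel_cluster;
  }
}
```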

Cheers-
– Ezra Z.
– Lead Rails Evangelist
– Engine Y., Serious Rails Hosting

On 12/8/06, Phillip O. [email protected] wrote:

  1. hardware load balancer talks to (one or more) apache 2.2 with
    mod_proxy balancer, each talks to many mongrel processes.
    that would give us all of apache’s features to use.
    disadvantage: we wonder if the load-balancer behind load-balancer setup
    would be a good idea?

We (http://pivotalsf.com) have a client/project that uses an f5 load
balancer to spread load across three web/app servers running apache
2.2 backed by a mongrel cluster (mysql and solr on separate hosts).
This works great. Our only (minor) problem with this has been getting
the lb’s config to properly detect 503s from apache. I highly
recommend apache for this.

Hope this helps,

pt.

Parker T.


hi ezra, hi parker,

thank you very much for sharing your setups and the reasoning behind them!
it’s good to know that both setups i suggested can work well :)

i found an article by Rob Orsini where he addresses the mongrel static
file “problem” by handing static requests directly to lighty.
http://blog.tupleshop.com/2006/7/8/deploying-rails-with-pound-in-front-of-mongrel-lighttpd-and-apache
now one could adapt that for use with a hw lb as follows:
every app server runs a couple of mongrels and one lighty. the hw lb
hands dynamic requests to the mongrels, static ones to lighty.
no idea if this works, but it would remove the additional nginx proxying
from the equation. otoh, it would make the hw lb rules more complicated.
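the lighty half of that idea might look something like this (pure guesswork on my side - port, docroot and mime types are made up):

```lighttpd
# hypothetical lighttpd.conf fragment for the static-file server
server.port          = 8080
server.document-root = "/var/www/app/current/public"
mimetype.assign      = ( ".css" => "text/css",
                         ".js"  => "text/javascript",
                         ".png" => "image/png",
                         ".jpg" => "image/jpeg" )
```

the hw lb would then route requests by path prefix (e.g. /images, /stylesheets, /javascripts) to port 8080 and everything else to the mongrel ports.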

cheers,
phillip

On 12/8/06, Ezra Z. [email protected] wrote:

hardware load balancers → nginx on each node → mongrel_cluster.

Another option would be:

hardware load balancers → litespeed(s)

I may be in the minority with that suggestion – however my experience
with Litespeed has been very rewarding. Haven’t figured out why it
doesn’t come up more often.

Joe

Joe N. wrote:

On 12/8/06, Ezra Z. [email protected] wrote:

hardware load balancers → nginx on each node → mongrel_cluster.

Another option would be:

hardware load balancers → litespeed(s)

good point, thanks for reminding me.

I may be in the minority with that suggestion – however my experience
with Litespeed has been very rewarding. Haven’t figured out why it
doesn’t come up more often.

i think the fact that it’s a commercial solution gives it less exposure
(i.e. coverage) in the rails community - where very good open source
solutions exist. that’s not to say that it wouldn’t be a good choice.

cheers,
phillip