SwitchTower for production?

Hello,

The company that I work for will be developing a large web-based survey
(for a government institution) somewhere in Q1/Q2 of 2006. I’m
investigating the technological possibilities and Rails is certainly a
candidate. The survey application will have to perform under very high
peak load, and the exact specifications of the hardware are not known
at this point. I’m assuming we’ll need to do some serious load
balancing. Is SwitchTower stable enough to be put to this task?

More specifically, here’s one scenario/setup that we’re considering:

  • 5x dual AMD Opteron 175 w/ 2GB RAM, 1MB L2 cache, 10k RPM hard disk
  • OpenBSD 3.8
  • Lighttpd + FastCGI
  • MySQL 3.22
  • Rails 1.0 + Switchtower

With such a setup, how many concurrent users will we be able to
handle? Or, what kind of performance may we expect from such a system?
I’m asking because another company recently failed to keep their
servers up during the peak moments of a project similar to ours.
Would Memcached be a solution?
(tips and ideas are welcome)

Thanks

Gijs N.

Gijs,

SwitchTower is an administration tool commonly used for deploying web
apps. It is not a load-balancing tool. All it lets you do is execute
tasks remotely (and in parallel) on multiple machines, via ssh.
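
For illustration, a bare-bones recipe (in config/deploy.rb; the
application name, repository, and hostnames below are hypothetical)
looks something like this:

    # config/deploy.rb -- a minimal SwitchTower recipe sketch
    set :application, "survey"
    set :repository,  "svn+ssh://svn.example.com/survey/trunk"

    role :web, "web1.example.com"
    role :app, "app1.example.com", "app2.example.com"
    role :db,  "db1.example.com", :primary => true

    # A custom task: runs the command on every :app host, in parallel, via ssh
    task :show_uptime, :roles => :app do
      run "uptime"
    end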

Please let me know if I’ve misunderstood your question.

  • Jamis

Hi Jamis,

No, I think I must’ve misunderstood SwitchTower’s purpose.
So it’s all FastCGI that handles the load-balancing then?

Gijs

On Dec 16, 2005, at 9:15 AM, Gijs N. wrote:

Hi Jamis,

No, I think I must’ve misunderstood SwitchTower’s purpose.
So it’s all FastCGI that handles the load-balancing then?

If you are using Apache-managed fast-cgi listeners, apache will do
some limited load balancing among the available listeners. However,
if apache manages your listeners, the listeners must be on the same
machine as apache itself, which means your web server also becomes
your web server. This in turn means your application is limited to a
single machine, which will limit your ability to scale.
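
(For reference, a sketch of that Apache-managed setup with mod_fastcgi;
the paths and process count are hypothetical:)

    # httpd.conf -- mod_fastcgi spawns and manages five local listeners
    FastCgiServer /var/www/survey/public/dispatch.fcgi -processes 5
    AddHandler fastcgi-script .fcgi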

A more scalable approach is to put the listeners on a separate
machine (spawning them with a utility like lighttpd’s spawn-fcgi
program) and then point apache there. The problem with THAT approach,
though, is that apache only lets you configure a single external
fastcgi listener per application. (Lighttpd is more flexible in this
respect, letting you configure multiple external listeners per app.)
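
A sketch of that layout, with hypothetical addresses and paths: spawn
the external listeners on the app box, then list them all in
lighttpd.conf on the web box:

    # on the app server: start two external Rails listeners
    spawn-fcgi -a 10.0.0.2 -p 8000 -f /var/www/survey/public/dispatch.fcgi
    spawn-fcgi -a 10.0.0.2 -p 8001 -f /var/www/survey/public/dispatch.fcgi

    # lighttpd.conf on the web server: multiple external listeners per app,
    # balanced by lighttpd itself
    fastcgi.server = ( ".fcgi" =>
      ( ( "host" => "10.0.0.2", "port" => 8000 ),
        ( "host" => "10.0.0.2", "port" => 8001 ) )
    )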

However, if you use a software load-balancer like haproxy[1] or
balance[2], you can point apache (or lighttpd) at the load-balancer,
and then point the load balancer at your listeners. This approach
even lets you have multiple app servers, with listeners on each of
them. (Note, however, that haproxy doesn’t play very nicely with
fastcgi, because it just does a naive round-robin around all listeners,
whether the listener is currently servicing a request or not. This
means that multiple requests can be queued up behind one long-running
request, even if another listener would be able to service the
request sooner. I’m still investigating balance, but it seems like it
might play nicer with fastcgi.)
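
For concreteness, a sketch of the haproxy side (syntax varies by
version; addresses are hypothetical), with the rough balance
equivalent as a one-liner:

    # haproxy.cfg -- round-robin TCP proxying to two external listeners
    listen rails_fcgi 127.0.0.1:8000
        mode tcp
        balance roundrobin
        server app1 10.0.0.2:9000
        server app2 10.0.0.3:9000

    # roughly equivalent with balance:
    #   balance 8000 10.0.0.2:9000 10.0.0.3:9000

You’d then point the web server’s single external-listener slot at
127.0.0.1:8000 instead of at a listener directly.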

  • Jamis

[1] http://w.ods.org/tools/haproxy/
[2] Balance, by Inlab Networks

On Dec 16, 2005, at 9:38 AM, Jamis B. wrote:

machine as apache itself, which means your web server also becomes
your web server. This in turn means your application is limited to
a single machine, which will limit your ability to scale.

duh. “web server also becomes your web server.” Brilliant
observation, Jamis. ;) I meant, “your web server also becomes your
APP server.”

  • Jamis

Jamis B. wrote:

If you are using Apache-managed fast-cgi listeners, apache will do
some limited load balancing among the available listeners. However,
if apache manages your listeners, the listeners must be on the same
machine as apache itself, which means your web server also becomes
your web server. This in turn means your application is limited to a
single machine, which will limit your ability to scale.

Unless, of course, you use some approach ranging from load
“distribution” (say, round-robin DNS at its most simplistic) to a real
load balancer in front of multiple Apache (or Lighttpd) instances, each
on its own machine, each managing its own FCGI processes. Then your app
definitely isn’t limited to a single machine.

There’s a matrix between “wide” (where web and app serving are each done
on a set of machines) and “deep” (where you put each function on a
separate machine, including potentially haproxy or balance on its own
dedicated machine(s)). And each project’s reality, both requirements and
budget, will find its own optimal mix.

There’s also another issue to throw into the mix: managing failures.
With more moving parts (i.e., spawning off external fcgi procs with
spawn-fcgi and using haproxy), there are more places where you have to
manage failure when it happens. Not that it’s that bad to have to
manage, but it does take attention to detail.

Scaling is a bitch no matter how you slice it. :) The nice part about
figuring out how to scale a particular rails app is that you can change
models mid-stream and go with it since you’ve already built on a
platform that’s ready for shared-nothing deployments.

On 12/17/05, Gijs N. [email protected] wrote:

I’m asking because another company recently failed to keep their
servers up during the peak moments of a project similar to ours.
Would Memcached be a solution?
(tips and ideas are welcome)

In general, your post is extremely unclear. There are no silver bullets
(memcached, lighttpd, fastcgi).
You need to figure out what exactly you want, where the weak points
are, etc. In fact, you need to work out the infrastructure first,
before choosing the specific solutions.
For example:

  • Do you need (can you buy) a hardware load balancer (BIG-IP etc.)? If
    not, what kind of software will you use for load balancing (pound etc.)?
  • How will you deal with the failover stuff (heartbeat, whackamole etc.)?
  • Will the new system be standalone (so you can choose lighttpd or
    apache), or part of an existing one (so you’ll need to use the already
    working server)?
  • Will you scale vertically (buying more expensive boxes) or
    horizontally (adding new, cheaper boxes)?
  • What exactly do you want to cache? Sessions? The results of evaluating
    templates (HTML pages)?

Take a look at http://danga.com/. They have very good solutions and
documents about growing LiveJournal.

If you decide to go with Rails, what about two boxes running lighttpd,
connected with heartbeat? In the lighttpd config you specify the
FastCGI listeners by IP and port (not by socket). Add several boxes
with Rails listeners (started with spinner, spawner, or daemontools),
working on a shared IP address with LVS (which will give you load
balancing/failover). For the caching it can be memcached, but from my
experiments the Rails memcached store can work with only one server
(people who know more, please correct me; I’m interested to see Rails
with memcached on several servers). So maybe a better idea is to keep
your sessions in the ActiveRecord store.
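
For what it’s worth, switching to the ActiveRecord session store in
Rails 1.0 is a one-line config change plus a rake task (a sketch; the
generated environment.rb carries the same hint as a comment):

    # config/environment.rb, inside the Rails::Initializer.run block:
    # use the database for sessions instead of the file system
    config.action_controller.session_store = :active_record_store

    # then create the sessions table:
    #   rake create_sessions_table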

Lighttpd can help you to have the web server and application servers on
different boxes.
FastCGI/SCGI can help you to speed up your application.
SwitchTower can help you deploy the application to the suitable boxes.
Memcached/DRb/ActiveRecord can help you with sessions.
Heartbeat can help you with high availability. It will be interesting to
see if somebody has already done experiments with whackamole (no load
directors; all boxes watch each other).
LVS can help you with failover/load balancing. In fact you can even use
lighttpd itself for this (load balancing, but not failover, I think).
And only you can help yourself, by creating the right infrastructure to
use all the blocks together.

And last but not least: buy a copy of “Agile Web Development with
Rails” [ http://www.pragmaticprogrammer.com/titles/rails/ ] and read
it; there are a lot of good points about Rails maintenance, scaling,
etc.