I am researching solutions for the “how do you squeeze as many Rails apps as
possible onto a cluster” problem.
Environment constraints are as follows:
- 4 commodity web servers (2 CPUs, 8 GB of RAM each)
- shared file storage and database (big, fast, not a bottleneck)
- multiple Rails apps running on it
- normally, the load is insignificant, but from time to time any of the
apps can get a big, unpredictable spike in load that takes (say) 8
Mongrels to handle.
The bottleneck, apparently, is RAM. At 100 MB per Mongrel process, you
can only fit 320 Mongrel processes on those boxes, and under the specified
parameters you can only handle 40 apps on the hardware described above.
Meanwhile, traditional shared hosting can handle thousands of sites under
the same set of constraints.
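The arithmetic above can be checked with a quick back-of-the-envelope script; all the numbers are the assumptions already stated (100 MB per Mongrel, 8 Mongrels per spiking app):

```ruby
# Capacity math from the constraints above. Numbers are the stated
# assumptions, not measurements.
servers            = 4
ram_per_server     = 8_000   # MB (8 GB, using round decimal units)
ram_per_mongrel    = 100     # MB per loaded Rails + Mongrel process
mongrels_per_spike = 8       # processes needed to absorb one app's spike

total_mongrels = (servers * ram_per_server) / ram_per_mongrel
max_apps       = total_mongrels / mongrels_per_spike

puts "cluster fits #{total_mongrels} Mongrels"          # 320
puts "worst case, only #{max_apps} apps are safe"       # 40
```

The "40 apps" figure is the pessimistic case where every app must be provisioned for its spike simultaneously; overcommitting is exactly what the rest of this post is about.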
We could use the lighty + FastCGI combo, but it has a bad reputation. I
don’t know if that’s because of bugs in the implementation, or because it’s
just not designed for these scenarios (if not, what’s the limitation, and
can it be fixed?)
If anybody knows a ready-made solution to this problem, please let me know.
The last thing I want to do is reinvent the wheel.
If anybody knows a load balancer smart enough to start and kill multiple
processes across the entire cluster, based on demand per application,
let me know about that, too.
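In case it helps clarify what I mean, here is a minimal sketch of such a supervisor in plain Ruby: a control loop that polls per-app demand and starts or kills backend processes to match. The demand source and the spawn/kill mechanics are hypothetical stubs; a real version would exec Mongrels on the least loaded box and update the front-end proxy's routing table.

```ruby
# Sketch of a demand-driven per-app process supervisor (all names
# hypothetical). One instance would run per cluster.
class AppSupervisor
  MIN_WORKERS = 0
  MAX_WORKERS = 8   # the per-app spike ceiling from the constraints above

  # demand_source: a hash-like object mapping app name => queued requests
  def initialize(demand_source)
    @demand_source = demand_source
    @workers = Hash.new { |h, k| h[k] = [] }  # app name => worker handles
  end

  # One control-loop iteration: reconcile running workers with demand.
  def tick
    @demand_source.each do |app, queued|
      want = [[queued, MIN_WORKERS].max, MAX_WORKERS].min
      have = @workers[app].size
      if want > have
        (want - have).times { @workers[app] << spawn_worker(app) }
      elsif have > want
        (have - want).times { kill_worker(app, @workers[app].pop) }
      end
    end
  end

  def worker_count(app)
    @workers[app].size
  end

  private

  # Stubs: a real implementation would fork/exec a Mongrel and send
  # TERM to scale down; here a worker is just an opaque token.
  def spawn_worker(app)
    "#{app}-worker-#{rand(10_000)}"
  end

  def kill_worker(app, worker)
    # no-op in this sketch
  end
end
```

Running `tick` in a loop every few seconds would give the elastic start/kill behavior I described, capped at the 8-Mongrel spike budget per app.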
Finally, I’ve been thinking about making Rails execution within Mongrel
concurrent by spawning multiple Rails processes as children of Mongrel and
talking to them through local pipes (just like FastCGI does, but as a
Ruby-specific solution). This might allow a single Mongrel to scale 3-4x
better than it does now, and also to scale down if no requests have come
in for the last, say, 10 minutes. A “blank” Ruby process only takes 7 MB
of RAM, and a “blank” Mongrel is not much more (I haven’t checked yet).
I wonder if this makes sense, or if I’m just crazy.
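The fork-and-pipe idea can be sketched in a few lines of Ruby. Here the front process (standing in for Mongrel) forks a child and exchanges requests and responses over a pair of pipes; Rails itself is elided (the handler block stands in for dispatching), the one-line-per-request framing is made up for illustration, and the 10-minute idle scale-down is omitted:

```ruby
# Sketch of a forked child worker behind a pipe pair. The parent writes
# requests down one pipe and reads responses back on the other.
class PipedWorker
  def initialize(&handler)
    parent_read, child_write = IO.pipe   # child -> parent (responses)
    child_read, parent_write = IO.pipe   # parent -> child (requests)

    @pid = fork do
      # Child: close the parent's ends, then serve one request per line.
      parent_read.close
      parent_write.close
      while (line = child_read.gets)
        child_write.puts(handler.call(line.chomp))
      end
    end

    # Parent: close the child's ends and keep ours.
    child_read.close
    child_write.close
    @out = parent_write
    @in  = parent_read
  end

  # Send one request and block for its response.
  def handle(request)
    @out.puts(request)
    @in.gets.chomp
  end

  def shutdown
    @out.close
    @in.close
    Process.wait(@pid)
  end
end

worker = PipedWorker.new { |req| "handled: #{req}" }
puts worker.handle("GET /")
worker.shutdown
```

A pool of these children under one Mongrel, plus an idle timer that calls `shutdown`, is essentially the scale-up/scale-down scheme described above. (This relies on `fork`, so it is Unix-only.)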
I think we can implement (and open-source) any solution that needs months
rather than years of effort.