Many apps == Many processes?

But, right now, they are mutually exclusive: there’s no way to use
Rails (and therefore gain its benefits) without incurring this cost.

Tomas J. - What you need is LiteSpeed, as Peter suggested
originally. It spawns and manages application processes. If a
process is idle for X seconds, the process dies (you set X, obviously).
You can even choose not to spawn Rails processes on a server restart,
in which case they would be spawned when the application gets hit for
the first time. If you get very little traffic, you will probably see
very few processes running at any given time, even if you’re hosting 30
Rails apps.

Joe
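
A rough Ruby sketch of that idle-timeout idea (just an illustration of
the concept, not LiteSpeed’s actual mechanism; the 300-second timeout
and the process table are made up):

# Illustration only: a supervisor keeping spawned app processes in a hash
# of app_name => { pid: ..., last_hit: Time }, and reaping idle ones.
IDLE_TIMEOUT = 300  # seconds; the "X" from the description above

def reap_idle(processes)
  now = Time.now
  idle = processes.select { |_, info| now - info[:last_hit] > IDLE_TIMEOUT }
  idle.each do |app, info|
    Process.kill("TERM", info[:pid])  # let the idle Rails process exit
    processes.delete(app)             # it will be respawned on the next hit
  end
end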

Joe: I’ll have to look into LiteSpeed, but based on what I’ve read,
it seems LiteSpeed isn’t capable of having apps share Ruby processes
between themselves either (which doesn’t surprise me at all).

Some responses in this thread suggest that is the case:

I feel your pain and understand your issue.

I think, however, that this is part of being ‘opinionated software’ … it
really is about developer productivity, not server productivity. Again, I
know you’re asking to improve server productivity and it’s a noble cause … I
just don’t see the Rails core guys caring or wanting to work on it anytime
soon.

Second, it’s also about using the right tool for the job, and Rails just
isn’t the right tool for web ‘pages’ or really small apps shared-hosted
with hundreds of other accounts. Sure, we all want to use it in that
space, because we know it and it is great for productivity, but the
very overhead you’re discussing is why it’s the wrong tool in the
‘small’ space.

I’ll be right there with you cheering on the solution when/if it comes,
but for right now, the answer you seek doesn’t exist.

On 30 dec 2006, at 08.52, Tomas J. wrote:

Joe: I’ll have to look into LiteSpeed, but based on what I’ve read,
it seems LiteSpeed isn’t capable of having apps share Ruby processes
between themselves either (which doesn’t surprise me at all).

I can’t think of a good reason why you would want/need that (not
that there isn’t one, maybe). But I don’t think the problem you’re
seeking to solve now is answered by sharing Ruby processes.

On 12/30/06, Peter B. [email protected] wrote:

Tomas, there is no point in sharing Ruby processes, when they are
spawned and killed according to need.

Exactly. Tomas, think in terms of total memory. Traditionally, if you
have 30 apps, 1 process each at 20 MB of RAM, you’re using 600 MB of RAM
on your server. Now say you figured out that on average only 1 of your
apps is “in use”. If you were running LiteSpeed, that would mean on
average you would see 20 MB of RAM dedicated to Rails apps.

You should still make sure your server can run everything at the
same time… otherwise you would be dependent on under-use which just
isn’t good design in any server environment.
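
Putting Joe’s numbers in one place (the 20 MB per process is of course
only a rough estimate):

# Back-of-the-envelope figures from the paragraph above
apps            = 30
mb_per_process  = 20                                # rough size of one Rails process
always_resident = apps * mb_per_process             # => 600 MB if everything stays loaded
avg_active_apps = 1
on_demand       = avg_active_apps * mb_per_process  # => 20 MB on average with idle reaping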

I suppose I might have jumped to conclusions regarding the nature of
the problem. The facts are that the server is running perhaps 30 Ruby
processes, and the server is equipped with 1.5 GB of RAM. I’m not able
to check more details right now, but I’ll definitely have to make a
proper check later.

Regards,
Tomas J.

Tomas, there is no point in sharing Ruby processes, when they are
spawned and killed according to need.

Joe N. wrote:

Exactly. Tomas, think in terms of total memory. Traditionally, if you
have 30 apps, 1 process each at 20 MB of RAM, you’re using 600 MB of RAM
on your server. Now say you figured out that on average only 1 of your
apps is “in use”. If you were running LiteSpeed, that would mean on
average you would see 20 MB of RAM dedicated to Rails apps.

Peter B. wrote:

Tomas, there is no point in sharing Ruby processes, when they are
spawned and killed according to need.

Spawning a Rails process is about as expensive as handling 20 requests.
Let’s say you have 60 apps, each gets hit twice per minute with uniform
distribution, so the server has to handle about 2 requests per second. No
big deal. But if you do on-demand spawning and, in the worst case, every
hit spawns a fresh process, it has to handle the load equivalent of
2 × (20 + 1) = 42 requests per second.

Of course, this is a worst-case scenario, but you get the point.
On-demand spawning is no silver bullet. In many cases it is better to
just load everything and leave it to the OS to swap unneeded
applications to disk if memory gets low.
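
The same worst-case arithmetic spelled out (the 20-requests spawn cost
is the estimate from above):

# Load equivalent when every hit also pays for a process spawn
apps         = 60
hits_per_min = 2                              # per app
spawn_cost   = 20                             # one spawn ~ handling 20 requests
steady_load  = apps * hits_per_min / 60.0     # => 2.0 requests per second
worst_case   = steady_load * (1 + spawn_cost) # => 42.0 request-equivalents per second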

Peter B. wrote:

Well, if processes were spawned every time they are needed and then
killed again after use, each time, you would have something very
inefficient.

As for silver bullets, I’ve never seen one within the field of
computer science, so that’s not what we’re talking about here.

Tomas, there is no point in sharing Ruby processes, when they are
spawned and killed according to need.

I just wanted to show you that there IS a point in sharing Ruby
processes between apps. When you have a high number of small apps that
get hit with roughly equal probability, you gain almost nothing with
dynamic process spawning. That’s a common scenario for 99-cent mass web
hosters.

We’re running Apache 1.3.something plus FastCGI; is this setup
especially bad at handling Ruby processes?

Is LiteSpeed recommended for serving static content too or just
dynamic stuff via proxy?

No, and you don’t want to. That is why processes were invented in the
first place. They provide separation.

Unix has a good set of tools to manage and monitor processes - which
would have to be duplicated inside any application server.

Why bother complicating matters with another layer of software that may
go wrong? Memory is cheap and swap space is very cheap. Unix is a
superb multi-user operating system, providing wonderful memory
separation, process management and reasonable security controls. Why
would you want to use it as if it were an early version of MS-DOS?

For ease of management though, I would suggest using Mongrel rather
than FCGI. It’s just a lot easier when everything is talking HTTP.

Well, if processes were spawned every time they are needed and then
killed again after use, each time, you would have something very
inefficient. But that’s not the way it works. Your example is about
as extreme as is possible.

Apache as a web server manages to spawn worker processes pretty
efficiently and can sustain high hit rates. If processes are left idle
for a long time, they are killed; otherwise they are kept alive to serve
future requests. If all processes are busy and another request comes
in, another process may be spawned, depending on configuration, to
handle that request, and that process is subject to the same life-
span restriction as the others.

That’s very common knowledge. And yes, there is an overhead involved
in firing up another process. You don’t want to do it too often. But
that’s elementary resource management.

LiteSpeed does exactly the same thing, but with Rails processes.
However, it also uses an FCGI implementation of its own to drastically
reduce the startup overhead. The result is that you can let it handle
the resulting resource allocation dynamically, without being overly
bothered by process startup.

As for silver bullets, I’ve never seen one within the field of
computer science, so that’s not what we’re talking about here.

/ Peter

Neil W. wrote:

For ease of management though, I would suggest using Mongrel rather
than FCGI. It’s just a lot easier when everything is talking HTTP.

If memory IS an issue then that’s not a good idea; Mongrel uses almost
twice as much as FCGI/LSAPI.