Using fork to conserve memory

On Sun, 4 Feb 2007, Daniel DeLorme wrote:

Interesting. I had thought about db connections but hadn’t followed the reasoning through to file descriptors and other shared resources. True, this might be quite tricky and problematic but, as khaines said, not necessarily a showstopper.

i don’t think it’s a show stopper for a roll-your-own solution for a specific app, but it’s impossible from outside an app to know, for instance, whether the code that’s being wrapped did something like ‘flush fd 7’ or not. here’s a simple example: say the code you are about to fork did this

lockfile.flock File::LOCK_EX

now you cannot fork. well, you can, but the children can neither inherit nor reacquire this resource. there are many other resources with similar patterns: binding to a socket, for example.
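a hedged sketch of the lock problem above (the lockfile here is a made-up temp file, not anything from the original post): the parent takes an exclusive flock, forks, and a child that re-opens the file cannot re-acquire the lock.

```ruby
require "tempfile"

lock = Tempfile.new("flock-demo")        # hypothetical lockfile for the demo
lock.flock(File::LOCK_EX)                # parent now holds the exclusive lock

pid = fork do
  fresh = File.open(lock.path)           # a new open file description in the child
  # LOCK_NB: return false instead of blocking when the lock is already held
  exit!(fresh.flock(File::LOCK_EX | File::LOCK_NB) == false ? 0 : 1)
end
Process.wait(pid)

child_got_lock = ($?.exitstatus != 0)
puts "child re-acquired the lock: #{child_got_lock}"
```

(the fd the child inherits from the parent does still share the parent’s lock; it’s a *fresh* open, as after an exec or a restart, that loses out.)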

Indeed, I imagine (hope) that the code of a .so file would be shared between
processes. But I very much doubt the same holds true for .rb files. And I
doubt that compiled modules are more than a small fraction of the code.

in fact linux will cache .rb files: you can prove this to yourself by running a find on ‘/usr’. then do it again; the second time it will be loads faster because the filesystem caches pages (fs dependent of course, but this will be true on 90% of linux boxes). the pages only get flushed if they’re marked dirty (written to), which in the case of ruby libs is virtually never.
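a rough ruby version of the same experiment (the glob paths are guesses; the first pass may itself already be warm if the files were touched recently, so the gap is only suggestive):

```ruby
require "benchmark"

# read a pile of files twice; the second pass is usually much faster
# because the page cache is warm by then
files = Dir.glob("/usr/lib/ruby/**/*.rb").first(200)
files = Dir.glob("/etc/**/*").select { |f| File.file?(f) }.first(200) if files.empty?

first_pass  = Benchmark.realtime { files.each { |f| File.read(f) rescue nil } }
second_pass = Benchmark.realtime { files.each { |f| File.read(f) rescue nil } }
printf "first: %.4fs  second: %.4fs\n", first_pass, second_pass
```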

this behaviour is actually buggy when nfs is used to mount the .rb files. on our cluster an install of a ruby library sometimes results in two versions on the machine: one in memory and one on disk. for some reason the cache gets corrupted via nfs and the new library never takes effect until a reboot. the point here being that the .rb files are definitely cached.

3.times do   # (loop header lost in quoting; a plausible reconstruction)
  break if Process.fork.nil?
end
require "/path/to/rails/app/config/environment.rb"
sleep 10

and tell me which one you like better :wink:

the second, because the db connection will work in it :wink:

seriously, i think it’ll be hard to track all the resources - but it
would be
great to see it done!

cheers.

-a

On Sun, 4 Feb 2007, Daniel DeLorme wrote:

Ok, so from the answers so far I gather that something like this hasn’t really been done before. Ara’s acgi was quite similar to what I was hoping for, except without the concurrent connections.

I guess I’ll just have to scratch my own itch.

it’d be great if you wanted to hack on acgi. my plans were to

  • re-write using apache-apr so the c code would be platform independent

  • support multiple backends. my plan was to set up, say, 3 listeners and simply choose one at random when a request came in. this should be very simple to code and scale linearly with the number of servers.
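the random-backend idea above can be sketched in a few lines of ruby (the backends, ports, and payloads here are all made up for the demo; real acgi backends would be separate processes, not threads):

```ruby
require "socket"

# three toy "backends" listening on whatever free ports the os hands out
servers = Array.new(3) { TCPServer.new("127.0.0.1", 0) }  # port 0 = any free port
ports   = servers.map { |s| s.addr[1] }

threads = servers.map do |srv|
  Thread.new do
    loop do
      client = srv.accept
      client.write("ok from backend #{srv.addr[1]}")
      client.close
    end
  end
end

# the dispatcher: pick a backend at random per request; with independent
# backends this scales roughly linearly with the number of servers
def dispatch(ports)
  TCPSocket.open("127.0.0.1", ports.sample) { |s| s.read }
end

replies = Array.new(5) { dispatch(ports) }
threads.each(&:kill)
puts replies
```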

the cool thing about acgi, of course, is that it’s 100% web server independent and doesn’t even require special config. it’s also totally transparent to the rails app, which would merely switch FCGI -> ACGI. the simplicity means that one could ftp upload a rails app to ANY server so long as it had cgi support, and the rails app would be fast. no config. no fastcgi. no mongrel. nothing needed.

-a