I was hoping someone could point me to a resource, or better yet, help
me solve a 'hogging' of memory by the Mongrels on my server.
I've got an account on RimuHosting, a Red Hat distribution, and I use
Mongrel with nginx as a load balancer. All in all, Mongrel (mongrel_rails
cluster) is a great server: easy to configure, and it works like it
needs to work.
But... for no apparent reason, it takes up enormous amounts of memory
without any need for it. The amount of memory it takes seems linear in
how much spare memory is on the server: it now takes 37 MB for each
daemon, whereas before I upgraded the memory on my server (added 96 MB)
it used to take 20 MB. That also seemed like a lot of memory usage,
which is why I upgraded the memory in the server in the first place.
A catch-22 or something of this sort: if I upgrade the memory, the
memory usage grows, and then I need to upgrade the memory in the server
again, and then... etc.
Is there any way to control this? How do I configure it? Has anyone
else bumped into memory-usage problems with Mongrel? This is driving me
nuts.
Why is 37 MB surprising? The Ruby interpreter takes up nearly 30 of
those (depending on your build). If you're looking to economize on
memory utilization, consider serving multiple apps from a single Mongrel
cluster using the --prefix option.
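For illustration, a rough sketch of what that can look like with the mongrel_rails command line (the app paths and ports below are made up, not from this thread):

```shell
# Start one Mongrel per app, each mounted under its own URL prefix.
# /var/www/app1 and /var/www/app2 are hypothetical application roots.
cd /var/www/app1
mongrel_rails start -d -e production -p 8000 --prefix /app1

cd /var/www/app2
mongrel_rails start -d -e production -p 8001 --prefix /app2
```

Each process still carries its own Ruby interpreter, but --prefix lets the front-end proxy route several apps through whichever ports you expose.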
BTW: I'm squeezing 3 Mongrel instances, MySQL, Postfix/Dovecot, and
Apache with mod_proxy and PHP5 into 127 blocks, or some 129 MB. How many
pages per second are you trying to serve, out of how many apps?
Paul Johnson-18 wrote:
> cluster) is a great server, easy to config, and works like it needs to
> catch-22 or something of this sort - if i upgrade the memory, the memory
My recommendation probably won't be popular, but I believe you need at
least one Mongrel per Rails app; if I'm expecting some measure of
concurrent requests, I'll start at two and move up if need be. The key
is how fast your app can turn a request around, as Rails is
one-in-one-out, so requests are processed serially unless you add
Mongrels.
> Why is 37 MB surprising? The ruby interpreter takes up nearly 30 of those
> (depending on your build). If you're looking to economize on memory
> utilization, consider serving multiple apps from a single mongrel cluster
> using the --prefix option.
Can you expand a bit on this? I didn’t know it was possible to run
multiple applications with a single cluster.
From what I've seen of --prefix on Mongrel, I can run two apps under
one domain (domain.com/app1 + domain.com/app2), which is very cool and
quite nice.
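For what it's worth, the nginx side of a two-prefix setup might look roughly like this (ports 8000/8001 and the domain are assumptions, not from this thread):

```nginx
# Route each URL prefix to the Mongrel that was started with the
# matching --prefix option.
server {
    listen 80;
    server_name domain.com;

    location /app1 {
        proxy_pass http://127.0.0.1:8000;
    }
    location /app2 {
        proxy_pass http://127.0.0.1:8001;
    }
}
```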
...The question is whether I can run one Mongrel instance on two
different domains, i.e. have one Mongrel cluster serving both.
...That way I suppose I could save more memory, if the cause of all the
Mongrel memory hogging is the number of instances running (at 37 MB
each, running five instances takes a lot more memory than running
one...).
But either way, I still can't figure out why the Mongrels are using
more memory now (37 MB) after the upgrade, rather than staying at the
20 MB they used before the upgrade (upgrade = +96 MB).
I'm kind of stuck on understanding how much memory each Mongrel takes,
and what factors determine this usage. Any help, or pointers to a good
resource on the net, would be much appreciated...
harper: This is possibly because of virtual memory usage. Earlier, some
of the memory the Ruby process used might have been VM; now it has more
memory to consume, so it uses more physical memory. It probably didn't
change the overall memory footprint. Just a thought.
You can. Just start Mongrels on multiple ports with the --prefix option
(identical except for the --port) and use a software load balancer (pen
is good) in front of all of them. You might be able to get
mongrel_cluster to make all this simpler to manage, but I haven't tried
that out.
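A minimal sketch of that approach, assuming two Mongrels for the same app and pen balancing a front port across them (ports and the prefix are hypothetical):

```shell
# Two identical Mongrels for one app, differing only in --port.
mongrel_rails start -d -e production -p 8000 --prefix /app
mongrel_rails start -d -e production -p 8001 --prefix /app

# pen listens on 8080 and round-robins across the two back-ends.
pen 8080 127.0.0.1:8000 127.0.0.1:8001
```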
> harper: This is possibly because of virtual memory usage. Earlier some
> of the memory the ruby process used might have been VM, now it has more
> memory to consume so it uses more physical memory. It probably didn't
> change the overall memory footprint. Just a thought.
Hey Vishnu,
Hmmm... seems you were right; I ran free -m and found out that 95 MB
were in VM before I upgraded (whereas afterwards I added 96 MB).
Anyway, with that problem out of the way, an even more essential issue:
I'm running 5 Mongrels (one on each of five apps), and altogether the
memory usage seems kind of high... maybe even a little too much. Each
Mongrel instance takes up its own amount of memory, regardless of
whether any request is coming through or not.
I've heard about running one instance on two sites (via --prefix?), but
I was wondering if I could run a cluster of 2-3 Mongrels across all five
sites I'm running (keeping them on different domains, etc.)?
I don't know what the best approach to clusters/Mongrels+nginx is in
terms of keeping the memory use under control (no hogging!), but the way
my Mongrels are configured, I'm using too much memory, and it's costing
me too much for what I think it's giving me.
> You can. Just start mongrels on multiple ports with the --prefix
> option (identical except for the --port) and use a s/w load balancer
> (pen is good) to use all of these. You might be able to get
> mongrel_cluster to make it simpler for you to manage all this but I
> haven't tried that out.
Does anyone have an example configuration of this? Sharing a single
cluster among various domains would be awesome.
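Not a tested configuration, but a sketch of how the nginx side of a shared cluster might look, with two domains proxying to the same back-ends (the domain names and ports are made up for illustration):

```nginx
# One pool of Mongrels shared by several virtual hosts.
upstream mongrels {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}

server {
    listen 80;
    server_name site1.com;
    location / {
        proxy_set_header Host $host;   # let the app see which domain was hit
        proxy_pass http://mongrels;
    }
}

server {
    listen 80;
    server_name site2.com;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://mongrels;
    }
}
```

Note that all domains would then hit the same Rails app, since each Mongrel process serves exactly one app; the app would have to dispatch on the Host header itself.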
Thanks,
Andre