I have a production setup that isn’t live yet (<500 visitors per day)
but the mongrels are already misbehaving.
monit has been configured to restart mongrels that are consuming more
than 170MB. What I observe is that the mongrels quickly hit this
limit, typically within 10-20 minutes of starting up. I relaxed the
memory cap for one mongrel and set it to 500MB, but the result is the
same: it hits the monit memory limit almost as fast. Log rotation is
in place, and the logs are truncated once they grow past 10MB.
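For reference, the monit rule is along these lines — a sketch only; the pid-file path, port, and cycle count below are placeholders, not the actual config:

```
check process mongrel_8000 with pidfile /var/run/mongrel.8000.pid
  if totalmem > 170.0 MB for 2 cycles then restart
```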
gettext is used heavily on almost all pages.
Page caching and fragment caching are used almost everywhere they can
be.
On my local development machine, I find that I can easily hit 200MB
with less than 50 clicks. I typically keep my mongrel running all the
time during development. As of today my dev mongrel has been up since
yesterday and I just found out that it is now at 800MB, with just me
poking at it over the last 36 hours or so.
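Before reaching for bleak_house, a cheap way to confirm this kind of growth is to diff live-object counts via ObjectSpace between two points (say, before and after a batch of clicks). This is just an illustrative standalone sketch — the helper names are made up here, and the simulated leak stands in for real request handling:

```ruby
GC.start  # settle the heap so the diff below is meaningful

# Count live objects per class.
def object_counts
  counts = Hash.new(0)
  ObjectSpace.each_object(Object) { |obj| counts[obj.class] += 1 }
  counts
end

# Report only classes whose counts changed between two snapshots.
def diff_counts(before, after)
  delta = {}
  (before.keys | after.keys).each do |klass|
    d = after[klass].to_i - before[klass].to_i
    delta[klass] = d unless d.zero?
  end
  delta
end

before = object_counts
leaky  = Array.new(10_000) { "leaked string" }  # stand-in for a real leak
after  = object_counts
delta  = diff_counts(before, after)
puts "String delta: #{delta[String]}"  # roughly 10,000 extra Strings
```

Classes whose counts climb steadily across requests and never fall back after a GC are the ones worth chasing.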
I (well, my teammates and I) are looking at the following steps to
see what's going on:
- run bleak_house to see where the leaks are coming from
- run on JRuby and see whether the same memory leak pattern appears.
If it does, our own code is the prime suspect; if not, the problem
more likely lies in one of the gems or libraries we're using.
I am also hoping that these will help:
- install the latest ruby 1.8.6 from source (patch level 114?)
- install the latest postgres gem (0.7.9?)
I’d very much appreciate any pointers on other possible actions to
take.
Here are more details on the production server:
Dual Intel Xeon 2GHz
64-bit CentOS 5.1
ruby 1.8.6 (2007-03-13 patchlevel 0) [x86_64-linux]
actionmailer (2.0.2, 1.3.6, 1.3.3)
actionpack (2.0.2, 1.13.6, 1.13.3)
actionwebservice (1.2.6, 1.2.3)
activerecord (2.0.2, 1.15.6, 1.15.3)
activesupport (2.0.2, 1.4.4, 1.4.2)
rails (2.0.2, 1.2.6, 1.2.3)