Mongrel and memory usage

Hello,
I’m running a Rails application that must sort and manipulate a lot of
data loaded in memory.
The Rails app runs on two Mongrel processes.
When I first load the app, both are at 32 MB in memory.
After some days, both are between 200 MB and 300 MB.

My question is: is there some kind of garbage collector in Mongrel?
I never see the two Mongrel processes’ memory footprint decrease.
Is it normal?

I use Mongrel 1.0.1 with Rails 1.2.3 on Debian.

Best regards,
Thomas.

On 11/5/07, Thomas B. [email protected] wrote:

I’m running a Rails application that must sort and manipulate a lot of
data loaded in memory.
The Rails app runs on two Mongrel processes.
When I first load the app, both are at 32 MB in memory.
After some days, both are between 200 MB and 300 MB.

My question is: is there some kind of garbage collector in Mongrel?
I never see the two Mongrel processes’ memory footprint decrease.
Is it normal?

Ruby is a garbage-collected language; it uses a conservative
mark-and-sweep collector.
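
If you want to watch it work, here’s a rough sketch (plain Ruby 1.8,
using nothing but ObjectSpace and GC from core):

  # Count live objects, force a collection, count again.
  # ObjectSpace.each_object returns the number of objects visited.
  before = ObjectSpace.each_object { }
  GC.start
  after = ObjectSpace.each_object { }
  puts "objects before GC: #{before}, after GC: #{after}"

The collector also runs on its own whenever the heap fills up;
GC.start just forces a pass.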

Memory usage like that is probably not a Mongrel issue (unless you are
generating very large responses in your application). It’s likely
an issue with your code. What version of Ruby are you using? Are you
using any extensions?

Kirk H.

On 11/5/07, Kirk H. [email protected] wrote:

On 11/5/07, Thomas B. [email protected] wrote:

I never see the two Mongrel processes’ memory footprint decrease.
Is it normal?
Ruby is a garbage-collected language; it uses a conservative
mark-and-sweep collector.

But Ruby processes never release memory back to the operating system.
So the fact that their RSS never goes down is normal.

In normal circumstances, Mongrel should grow to somewhere around
60-120 MB and stay there. 300 MB and growing is a sure sign you have a
memory leak somewhere.


Alexey V.
CruiseControl.rb [http://cruisecontrolrb.thoughtworks.com]
RubyWorks [http://rubyworks.thoughtworks.com]

We’re seeing that all the time with our Rails apps. I’m looking at
four processes right now in the 700 to 900 MB range.

My first guess is that it’s something in Rails or our app. After all,
that’s where most of the code is. You might try running requests
through WEBrick on a test server to see if the leak still occurs. If
so, then you know at least part of it is Rails.

There are always nightly restarts ;) Not my choice of how to do
things, but hey, it’ll have to hold till I can fix bigger things.

What’s the Ruby GC like? Circular references a problem?

Hello Kirk,

Thanks for your answer.
I’m using ruby 1.8.5 (2006-08-25) [i486-linux].
The Rails app uses these plugins:

  • acts_as_taggable_on_steroids
  • attachment_fu
  • exception_notification
  • localization

What kind of issue in my code could use that much memory?
If I load lots of records with ActiveRecord, aren’t they “unloaded” at
some point?

Thanks in advance for your help.
Thomas.

On 11/5/07, Thomas B. [email protected] wrote:

If I load lots of records with ActiveRecord, aren’t they “unloaded” at
some point?

Does your code or any of those plugins use Array#shift? There was a
bug with Array#shift, still present in 1.8.5, that left stuff inside
the array data structure after a shift, so those objects didn’t get
GCd when they should have. It’s a sneaky bug that can easily eat a lot
of memory.

Otherwise, start a test instance of your application and probe it to
see whether certain actions cause the memory growth. That would help
you pinpoint where the likely problems are. Just use ab or httperf to
send a large number of requests to specific URLs in your app, and see
how RAM usage changes as you do that, as in the sketch below.
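
For example, run something like “ab -n 10000 -c 10
http://localhost:3000/some/action” (the URL is a placeholder) against
a test instance while a trivial script polls the process's RSS:

  # watch_rss.rb -- poll a process's resident set size on Linux.
  # Usage: ruby watch_rss.rb <mongrel_pid>
  pid = ARGV[0] or abort "usage: watch_rss.rb <pid>"
  loop do
    rss_kb = `ps -o rss= -p #{pid}`.strip
    puts "#{Time.now.strftime('%H:%M:%S')}  #{rss_kb} KB"
    sleep 5
  end

If one URL makes the number climb and never level off, that’s where to
dig.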

Kirk H.

Hi guys,

Along the lines of Thomas’s question, I’ve noticed that my Mongrel
Rails processes start at around 50 MB and creep up to around 100 MB
(or a little over) pretty soon after being used. Is this something
other folks are seeing (i.e. standard Rails overhead), or does it
sound specific to my app?

Also, if anyone has any tips on finding memory leaks in Mongrel,
they’d be much appreciated. I’ve played with watching ObjectSpace. Is
this the best way?

Kirk: thanks for the tip on Array.shift with Ruby 1.8.5. I’ll keep
an eye out for this.

Thanks,
Pete

On 11/5/07, Pete DeLaurentis [email protected] wrote:

Hi guys,

Along the lines of Thomas’s question, I’ve noticed that my Mongrel
Rails processes start at around 50 MB and creep up to around 100 MB
(or a little over) pretty soon after being used. Is this something
other folks are seeing (i.e. standard Rails overhead), or does it
sound specific to my app?

There is probably a jump after the first request, then a slow creep
upward for a bit, then it should stabilize. If it never stabilizes,
then you have something in your code somewhere that is leaking.

Also, if anyone has any tips on finding memory leaks in Mongrel,
they’d be much appreciated. I’ve played with watching ObjectSpace. Is
this the best way?

There are some tools that help, but yeah, mostly it’s by using
ObjectSpace and looking through your code. If the code uses a C
extension, though, it’s easy for the extension to have a leak that
doesn’t show up that way. I originally found the Array#shift leak by
using valgrind on Ruby, since all of that is C code.
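
The usual ObjectSpace trick is to tally live objects by class between
requests and watch for classes whose counts only ever grow. A minimal
sketch (what you exercise between the two tallies is up to you):

  # Tally live objects by class; run before and after exercising
  # the suspect actions, then diff the two tallies.
  def object_counts
    counts = Hash.new(0)
    ObjectSpace.each_object { |obj| counts[obj.class] += 1 }
    counts
  end

  before = object_counts
  # ... hit the suspect actions with ab/httperf here ...
  GC.start
  after = object_counts
  (before.keys | after.keys).each do |klass|
    growth = after[klass] - before[klass]
    puts "#{klass}: +#{growth}" if growth > 0
  end

Anything that keeps growing across many request cycles is a suspect; a
one-time jump is usually just caching or lazy loading.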

Kirk: thanks for the tip on Array.shift with Ruby 1.8.5. I’ll keep
an eye out for this.

If this bites you, you can migrate to the most recent 1.8.6, or you
can change your code to not use shift. Generally when shift is used,
push is sticking things onto the back of the array while shift pulls
them off the front. Changing that to use unshift and pop gets around
the problem.
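
In other words, a minimal sketch of the swap (queue and job are
placeholder names):

  # Leaky pattern on Ruby <= 1.8.5: append to the back, shift off the front.
  queue.push(job)       # producer
  job = queue.shift     # consumer -- #shift leaves the old slots pinned

  # Workaround: reverse direction -- prepend to the front, pop off the back.
  queue.unshift(job)    # producer
  job = queue.pop       # consumer -- same FIFO behavior, no leak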

Kirk H.

On 11/5/07, Alexey V. [email protected] wrote:

In normal circumstances, Mongrel should grow to somewhere around
60-120 MB and stay there.

Hello Alexey,

300 MB and growing is a sure sign you have a memory leak somewhere.

What would you suggest I investigate?

Thanks,
Thomas.

On 11/5/07, Pete DeLaurentis [email protected] wrote:

Along the lines of Thomas’s question, I’ve noticed that my Mongrel
Rails processes start at around 50 MB and creep up to around 100 MB

Set --num-procs lower than the default of 1024, and it won’t be
happening.


Alexey V.
CruiseControl.rb [http://cruisecontrolrb.thoughtworks.com]
RubyWorks [http://rubyworks.thoughtworks.com]

On 11/5/07, Alexey V. [email protected] wrote:

On 11/5/07, Pete DeLaurentis [email protected] wrote:

Along the lines of Thomas’s question, I’ve noticed that my Mongrel
Rails processes start at around 50 MB and creep up to around 100 MB

Set --num-procs lower than the default of 1024, and it won’t be
happening.

It depends. One cause of that sort of creeping memory usage is having
an app that sees large numbers of concurrent threads, as you know, but
it’s not the only cause.

If concurrent threads ARE a memory usage problem, one might try using
evented_mongrel out of the Swiftiply package:
http://swiftiply.swiftcore.org

Just run it in a test environment and see if it helps. For some apps,
it makes a big difference in that thread related RAM creep.

Kirk H.

P.S. Yes, I WILL have the patch to fix it for Mongrel > 1.0.1 today.
The end of my week/weekend got very busy with things that don’t
involve computer screens.

What is a good value for --num-procs for Rails applications, since
these are single-threaded? Does it depend on how fast the application
responds to users?

Thanks,
Pete

On 11/5/07, Pete DeLaurentis [email protected] wrote:

What is a good value for --num-procs for Rails applications, since
these are single-threaded? Does it depend on how fast the application
responds to users?

It’s application-specific. Your sweet spot is big enough that you
don’t starve for capacity during load bursts, when more traffic is
temporarily coming in than you are clearing, but small enough that you
don’t waste resources. Experimentation will probably be required to
find the best balance.

If you try evented_mongrel, you don’t need to worry about num_procs.
It’s irrelevant for evented_mongrel.

Kirk H.

Which image processor are you using for attachment_fu? If you’re
using RMagick, it is notorious for memory leaks. Look at mini_magick
or ImageScience as a replacement.

==
Will G.

If you’re using attachment_fu and send_file, then Mongrel is handling
the sending of files. I had the same problem, spiking memory usage,
until I switched to using x_send_file. It pushes the file downloads to
Apache instead of Mongrel. My memory usage has never spiked since.

The XSendFile plugin
http://tn123.ath.cx/mod_xsendfile/

Plugin to simplify using x-sendfile…
http://john.guen.in/past/2007/4/17/send_files_faster_with_xsendfile/

(e)

Hi Kirk,

I’m wondering if we’re being hit by this issue in our application. We
generate a lot of thumbnails on the fly and use send_file to transfer
the data back to the browsers.

Checking the Rails docs for send_file, it indicates that unless you
use the option :stream => false, the file will be read into a
4096-byte buffer and streamed to the client.
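
In code, the two modes look roughly like this (the path and type are
placeholders):

  # Default (:stream => true): read and send in 4096-byte chunks.
  send_file thumbnail_path, :type => 'image/png'

  # :stream => false: read the whole file into memory, then send it.
  send_file thumbnail_path, :type => 'image/png', :stream => false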

Is this a bug in send_file?

Cheers

Dave

Hi Kirk,

Does Mongrel need to be multi-threaded at all if you’re working with
Rails applications?

I use Lighttpd’s mod_proxy_core to distribute incoming requests
between 8 mongrels. If mongrel A is working on another request, I
want mongrel B to pick up the request right away.

If all 8 mongrels are busy, I believe Lighty retries the cycle a few
times. So, who needs threads? I’m guessing this is a naive
question, but I’d appreciate it if you’d set me straight.

Once I get a breather in my release schedule, I plan to look at a
switch to evented_mongrel. Performance benchmarks and community
feedback look very good. But I still need to get a better grasp on how
it works and the differences from standard Mongrel.

Thanks,
Pete

On 11/5/07, Matte E. [email protected] wrote:

If you’re using attachment_fu and send_file, then Mongrel is handling
the sending of files. I had the same problem, spiking memory usage,
until I switched to using x_send_file. It pushes the file downloads to
Apache instead of Mongrel. My memory usage has never spiked since.

This falls under the category of creating HTTP responses. If you are
using send_file within Mongrel, then the response object that is
created will contain all of the file contents. If your file is small
to moderately sized, that’s no big deal, but if you start pushing
large files around, it will have an impact on your RAM usage. Pushing
huge files via send_file necessarily implies huge RAM usage.

Don’t do that. x_send_file is one way to avoid doing that.
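
The switch is mostly a one-method change. A sketch, assuming the
XSendFile plugin linked above and a front end (Apache with
mod_xsendfile, say) configured to honor the X-Sendfile header; the
model and action names are made up:

  # Before: Mongrel buffers the whole file into the HTTP response.
  def download
    send_file @attachment.full_filename, :type => @attachment.content_type
  end

  # After: Rails emits only an X-Sendfile header, and the front-end
  # web server streams the file off disk itself.
  def download
    x_send_file @attachment.full_filename, :type => @attachment.content_type
  end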

Kirk H.

Hello,

Thanks, everybody, for all this information.
I’ll run some tests and keep you posted. I won’t have time to run them
this week, but I won’t forget to post the results to the list.

Best,
Thomas.

On 11/5/07, Steve M. [email protected] wrote:

Thanks Kirk - I guess I’m totally OT at this point, but I hadn’t heard
about this bug before. From your description, this is a problem
specific to the underlying C code implementing shift, which is not
found in related functions? So “array.slice!(0)” would be identical in
function to shift but not contain this leak?

Yeah. It looked to me like whoever wrote the original array.c code
just forgot something, because it’s just #shift that has the problem.
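
If you want to see it on a given interpreter, a quick experiment
(array size and string length are arbitrary):

  # Drain a large array with #shift, then see how many Strings
  # survive a GC. On an affected 1.8.5 the shifted-off strings stay
  # pinned inside the array's buffer; with slice!(0) they don't.
  a = Array.new(100_000) { "x" * 100 }
  a.shift until a.empty?         # try a.slice!(0) here for comparison
  GC.start
  live = ObjectSpace.each_object(String) { }
  puts "live Strings after drain + GC: #{live}"

The absolute count includes everything else in the process, so compare
the shift run against the slice!(0) run rather than reading either
number in isolation.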

This bug was fixed, but not until 1.8.6. I know it is fixed as of at
least the last couple of patch releases. I am unsure if it was fixed
in the original 1.8.6 release, however.

Kirk H.