Managing memory consumption

Hi all,

I’ve got a framework [1] that transfers files either locally or over
XML::RPC. In either case, memory consumption is significantly higher than
I would think it should be, but in the case of XML::RPC it’s so high that
I can’t actually test it on VMware images because they don’t have enough
RAM (and I don’t have enough RAM to allocate all of them enough RAM).

Is there anything I can do to force deallocation of memory when I know
I’m done with it?

I’m confident it’s not a memory leak on my part, because memory usage
levels off after the first full iteration (well, I know it levels off if
I don’t use XML::RPC, but I get failures with it so I can’t test that),
even if I recreate all of the objects from scratch on each loop (and I’m
not using any class variables or globals or anything).

The application, in this case, supports two methods: Send a file and
receive a checksum in return, or send a checksum and receive a file in
return.
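In case it helps to have it concretely, the two operations could be sketched like this; note this is a minimal in-memory sketch, and `FileService`, `put`, and `get` are illustrative names, not the framework’s actual API:

```ruby
require 'digest/md5'

# Minimal sketch of the two operations described above (names are mine).
class FileService
  def initialize
    @store = {}   # checksum => file contents
  end

  # Send a file, receive its checksum in return.
  def put(contents)
    sum = Digest::MD5.hexdigest(contents)
    @store[sum] = contents
    sum
  end

  # Send a checksum, receive the matching file in return (or nil).
  def get(checksum)
    @store[checksum]
  end
end
```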

Here’s a basic plot of what my memory consumption looks like when copying
my vim binary around (it’s about 3.5 MB):

    Start the test method:              16124K
    Make a copy of the file:            18604K

In the 'send' method on the client:
    Read the file in the 'send' method: 19388K
    Base64 encode the file:             27708K

In the 'send' method on the server:
    Base64 decode the file:             31020K
    Read in the server-side file
      for comparison:                   34284K

After this point, no matter how many times I run this loop, memory never
grows or shrinks. I’ve tried setting variables to “” to encourage memory
shrinkage, but (unsurprisingly) this doesn’t help.
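One thing that can at least lower the peak is not holding the raw file and the full Base64 string at the same time, by encoding in fixed-size chunks. This is only a sketch under my own assumptions: `encode_file_chunked` is a hypothetical helper, and `Base64.strict_encode64` assumes a reasonably modern Ruby:

```ruby
require 'base64'

# Encode a file piecewise. The chunk size must be a multiple of 3 so that
# every chunk except the last encodes without internal '=' padding, which
# makes the concatenation identical to encoding the whole file at once.
def encode_file_chunked(path, chunk = 3 * 1024 * 16)
  out = ''
  File.open(path, 'rb') do |f|
    while (piece = f.read(chunk))
      out << Base64.strict_encode64(piece)
    end
  end
  out
end
```

The peak then scales with the chunk size plus the growing output string, rather than two full copies of the file.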

If I add XML::RPC into the mix (with the same file and the same basic
process), the ‘call’ method in xmlrpc/client grows the memory from 28124K
to 76132K in one huge jump (between the encode/decode steps). Looking at
the code, it’s obvious why it’s happening: the data is being copied
multiple times. Still, I can’t have this system using up 76MB of RAM for
every type of XML::RPC client I have in memory, and if I send a file
larger than 3.5MB the RAM usage climbs multiplicatively.
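If the protocol can change, one mitigation is to cap the size of any single call by splitting the payload into segments and sending each in its own request. Only the splitting logic is sketched here, under my own assumptions; the segment size and function name are made up, and the actual XML-RPC calls are not shown:

```ruby
# Split data into bounded segments so each XML-RPC call carries a fixed
# maximum payload; peak memory per call then scales with the segment
# size rather than with the file size.
SEGMENT = 256 * 1024

def split_segments(data, size = SEGMENT)
  segments = []
  offset = 0
  while offset < data.bytesize
    segments << data.byteslice(offset, size)
    offset += size
  end
  segments
end
```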

Again, this is for one file, and this is just one part of my application.
This is also a long-running application, so it’s important that its
average memory usage be as low as possible.

At no point do I see memory usage decrease, even if I send 4 smaller
files after my relatively large one.

So, is there anything I can do to mitigate RAM usage here? Anything I can
tell Ruby that will make it let go of that RAM?

I’ve tried adding GC.start at the beginning and end of all of these
methods, and it does not appear to make any kind of difference (which I
suppose is a good thing), but I’m at a loss as to how to proceed.
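For what it’s worth, the usual pattern is to drop the last reference to the big buffer before calling GC.start, so the buffer is actually collectable; the slots can then be reused, though on 1.8 the heap pages stay mapped, so the process size still won’t shrink. A sketch (the method name is mine):

```ruby
require 'digest/md5'

# Return only the checksum; drop the reference to the file contents
# before asking the GC to run, so the large buffer can be collected.
def checksum_and_release(path)
  data = File.binread(path)
  sum  = Digest::MD5.hexdigest(data)
  data = nil   # last reference gone
  GC.start     # slots become reusable; the pages usually stay with the process
  sum
end
```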

Any ideas?

1 - http://reductivelabs.com/projects/puppet


Luke K. wrote:

> Here’s a basic plot of what my memory consumption looks like when copying my
>     Base64 decode the file:             31020K
>     Read in the server-side file
>       for comparison:                   34284K
>
> After this point, no matter how many times I run this loop, memory never
> grows or shrinks. I’ve tried setting variables to “” to encourage memory
> shrinkage, but (unsurprisingly) this doesn’t help.

That’s because Ruby’s GC doesn’t do compaction. I found this out the hard
way too; thanks to Guy Decoux and H. Yamamoto for pointing it out to me.

> Any ideas?

I had a similar question a while back on ruby-core regarding GC and
memory usage in Ruby. In my case it concerned the Mysql adapter, which
never actually released some of its memory.

Here is a comment from H. Yamamoto:

H> Because rb_gc_mark_locations marks all objects in the range of
H> STACK_END to rb_gc_stack_start.
H> If GC.start runs inside the block, the block needs more stack, so more
H> objects can be wrongly marked as alive. (As you can see, the last
H> pause is outside of the block, so less stack is used; the huge array
H> goes out of stack range, and it is freed.)

H> And Eric H. is right. ([ruby-talk:181568])
H> The memory a heap slot uses is greater than the memory the C string
H> itself uses.
H> Even after the array is freed, heaps_slot itself still eats 26MB of
H> memory, and we cannot reduce it because ruby’s GC cannot do compaction.

Follow this thread for the full story:
http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/7442

For the immediate reasoning, see Guy Decoux’s post,
http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/7458
and read the comments, where Guy explains what is happening.
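The grow-but-never-shrink behavior is easy to observe directly. In a modern CRuby (GC.stat postdates 1.8 by a long way, so this is an assumption about your interpreter) something like this shows heap pages being added under allocation pressure but rarely handed back:

```ruby
# Watch heap pages grow under allocation pressure. After the array is
# dropped and GC runs, the page count rarely falls back to the starting
# value, because CRuby seldom returns heap pages to the OS.
before = GC.stat[:heap_allocated_pages]
arr    = Array.new(500_000) { |i| "item-#{i}" }
during = GC.stat[:heap_allocated_pages]
arr = nil
GC.start
after  = GC.stat[:heap_allocated_pages]
# Typically during > before, while after stays close to during.
```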

My problem:
http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/7463
See Guy’s reasoning:
http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/7467

Hope this helps,

Zach


Well, it turns out this extra memory consumption only happens on my
Solaris 10 x86 VMware image running 1.8.2 (I downloaded the blastwave.org
compiled version).

Guess I’ll compile one myself and see if I get the same problems (ugh,
compiling anything in a VMware image).