Hi all,
I’ve got a framework [1] that transfers files either locally or over
XML::RPC. In either case, memory consumption is significantly higher than
I would think it should be, but in the case of XML::RPC it’s so high that
I can’t actually test it on VMWare images because they don’t have enough
RAM (and I don’t have enough RAM to allocate all of them enough RAM).

Is there anything I can do to force deallocation of memory when I know
I’m done with it?
I’m confident it’s not a memory leak on my part, because memory usage
levels off after the first full iteration (well, I know it levels off if
I don’t use XML::RPC; I get failures with it, so I can’t test that), even
if I recreate all of the objects from scratch on each loop (and I’m not
using any class variables or globals or anything).
The application, in this case, supports two methods: send a file and
receive a checksum in return, or send a checksum and receive a file in
return.
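For reference, those two operations can be sketched roughly like this. This is a minimal local sketch, not the actual framework code; the class and method names are invented, and a real deployment would register these as XML-RPC handlers:

```ruby
require "digest/md5"

# Hypothetical sketch of the two operations, without the RPC layer.
class FileStore
  def initialize
    @files = {} # checksum => file contents
  end

  # Operation 1: send a file, receive its checksum in return.
  def send_file(data)
    checksum = Digest::MD5.hexdigest(data)
    @files[checksum] = data
    checksum
  end

  # Operation 2: send a checksum, receive the matching file in return.
  def fetch_file(checksum)
    @files.fetch(checksum)
  end
end

store = FileStore.new
sum = store.send_file("hello world")
store.fetch_file(sum) # => "hello world"
```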
Here’s a basic plot of what my memory consumption looks like when copying
my vim binary around (it’s about 3.5 MB):

Start the test method:                16124K
Make a copy of the file:              18604K
In the 'send' method on the client:
  Read the file:                      19388K
  Base64 encode the file:             27708K
In the 'send' method on the server:
  Base64 decode the file:             31020K
  Read in the server-side file
  for comparison:                     34284K
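Each of those steps materializes a complete new String, which is why usage climbs at every line. A simplified sketch of the chain (using the stdlib Base64 module; the data here is just a stand-in for the file contents):

```ruby
require "base64"

# Each step below allocates a brand-new String roughly the size of the
# file; the Base64-encoded text is about 33% larger than the raw bytes.
data    = "pretend these are file bytes " * 1000 # stand-in for the file
encoded = Base64.encode64(data)    # client side: encode (a second full copy)
decoded = Base64.decode64(encoded) # server side: decode (a third full copy)
same    = (decoded == data)        # byte-for-byte comparison # => true
```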
After this point, no matter how many times I run this loop, memory never
grows or shrinks. I’ve tried setting variables to “” to encourage memory
shrinkage, but (unsurprisingly) this doesn’t help.
If I add XML::RPC into the mix (with the same file and the same basic
process), the ‘call’ method in xmlrpc/client grows the memory from 28124K
to 76132K in one huge jump (between the encode/decode steps). Looking at
the code, it’s obvious why it’s happening: the data is being copied
multiple times. Still, I can’t have this system using up 76MB of RAM for
every type of XML::RPC client I have in memory, and if I send a file
larger than 3.5MB, the RAM usage climbs multiplicatively.
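One common way to bound that per-call cost (a suggestion on my part, not something the framework does today) is to transfer the file in fixed-size pieces across several smaller RPC calls, so no single request body ever holds the whole file. A rough local sketch of the idea, with invented method names and an arbitrary chunk size:

```ruby
require "base64"
require "tempfile"

CHUNK_SIZE = 64 * 1024 # 64 KB per call; an arbitrary choice

# Sender side: stream the file and yield one Base64-encoded chunk at a
# time; in real use each chunk would be its own small RPC call.
def each_encoded_chunk(path, chunk_size = CHUNK_SIZE)
  File.open(path, "rb") do |f|
    while (chunk = f.read(chunk_size))
      yield Base64.encode64(chunk)
    end
  end
end

# Receiver side: decode each chunk as it arrives and append it to the
# destination file, keeping only one chunk live at a time.
def write_decoded_chunks(chunks, dest)
  File.open(dest, "wb") do |out|
    chunks.each { |c| out.write(Base64.decode64(c)) }
  end
end

# Demo with temp files standing in for the real source/destination.
src  = Tempfile.new("src");  src.write("some file contents " * 5000); src.close
dest = Tempfile.new("dest"); dest.close

chunks = []
each_encoded_chunk(src.path) { |c| chunks << c }
write_decoded_chunks(chunks, dest.path)
File.binread(dest.path) == File.binread(src.path) # => true
```

The demo collects the chunks into an array only so it can run locally; the point is that the encoded working set per call is bounded by CHUNK_SIZE rather than the file size.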
Again, this is for one file, and this is just one part of my application.
This is also a long-running application, so it’s important that its
average memory usage be as low as possible.

At no point do I see memory usage decrease, even if I send 4 smaller
files after my relatively large one.
So, is there anything I can do to mitigate RAM usage here? Anything I can
tell Ruby that will make it let go of that RAM?

I’ve tried adding GC.start at the beginning and end of all of these
methods, and it does not appear to make any kind of difference (which I
suppose is a good thing), but I’m at a loss as to how to proceed.
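For what it’s worth, the pattern I’m using is roughly this: drop every reference (nil rather than “”, since “” still allocates a live String) and then call GC.start. Whether the interpreter actually returns the freed pages to the OS afterward seems to be up to the underlying allocator, which may be why I see no change:

```ruby
big = "x" * (10 * 1024 * 1024) # ~10 MB String
# ... use big ...
big = nil # drop the only reference; "" would still keep a live object
GC.start  # ask Ruby to collect now; even if the object is freed, the
          # interpreter may not hand the pages back to the OS
```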
Any ideas?