Forum: Ruby Memory Leak Madness

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately no longer have the time to support and maintain the forum. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
Brandon C. (Guest)
on 2007-01-03 16:46
I'm having one hell of a time trying to find and stop a memory leak in a
ruby daemon. It starts off using a tiny 14 MB of RAM and grows to
150+ MB after a day. I've read through many forum and blog posts and
tried several profilers. I don't see anything unusual.

I even tried this guy
http://scottstuff.net/blog/articles/2006/08/17/mem...

That showed my overall object count going up and down as normal, but
memory was not being released as objects came and went.

I've gone through my code and made sure I'm squashing any unused
objects. Well... I set them to nil; I just guessed that would help. I'm
also calling .clear on any arrays and hashes. I noticed this helps with
garbage collection. I also added a thread that does nothing but sleep
and run garbage collection every 5 minutes, although I'm pretty sure
this isn't helping much.
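Roughly, the cleanup I'm describing looks like this (a sketch with placeholder data, not my actual daemon code):

```ruby
# Sketch of the manual cleanup described above; all names are placeholders.
rows = (1..1000).map { |i| "row #{i}" }  # stand-in for objects built from DB rows
rows.clear   # empty the array so its contents become collectable
rows = nil   # drop the reference to the array itself
GC.start     # force a collection (the interpreter normally schedules this itself)
```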

All I've managed to do is greatly slow the memory leak.  Before, the
daemon would consume 100+ MB in a few hours.

There are two things my daemon makes heavy use of that are out of my
control: hpricot and dbi (talking to MS SQL Server). I suspect one or
both of them may be my problem. I'll write to both projects for advice,
but is there anything else I can do so Ruby will let go of unused
objects?
Wilson B. (Guest)
on 2007-01-03 17:46
(Received via mailing list)
On 1/3/07, Brandon C. <removed_email_address@domain.invalid> wrote:
>
> There are two things my daemon makes heavy use of that are out of my
> control: hpricot and dbi (talking to MS SQL Server). I suspect one or
> both of them may be my problem. I'll write to both projects for
> advice, but is there anything else I can do so Ruby will let go of
> unused objects?

Try loading this library when your daemon starts up:
http://moonbase.rydia.net/mental/blog/programming/...

DBI almost certainly makes use of locking, and fastthread cleans that
up significantly.
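Loading it is just a require at your daemon's entry point; a rescue keeps things working where the gem isn't installed (a sketch, not code from the linked page):

```ruby
# Try to load fastthread so its C implementations replace the pure-Ruby
# Mutex/ConditionVariable/Queue from thread.rb; fall back if it's absent.
begin
  require 'fastthread'
rescue LoadError
  # gem not installed; the pure-Ruby primitives will be used instead
end
```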

Just a thought.
Robert K. (Guest)
on 2007-01-03 17:57
(Received via mailing list)
On 03.01.2007 15:46, Brandon C. wrote:
>
> I've gone through my code and made sure I'm squashing any unused
> objects. Well... I set them to nil; I just guessed that would help.
> I'm also calling .clear on any arrays and hashes. I noticed this helps
> with garbage collection. I also added a thread that does nothing but
> sleep and run garbage collection every 5 minutes, although I'm pretty
> sure this isn't helping much.
>
> All I've managed to do is greatly slow the memory leak.  Before, the
> daemon would consume 100+ MB in a few hours.

Is there some point where it stabilizes after a day or so?

> There are two things my daemon makes heavy use of that are out of my
> control: hpricot and dbi (talking to MS SQL Server). I suspect one or
> both of them may be my problem. I'll write to both projects for
> advice, but is there anything else I can do so Ruby will let go of
> unused objects?

It may actually be doing that.  The memory footprint you see on the OS
side need not be directly related to the number of objects currently in
memory.  I am not sure whether Ruby ever gives memory back to the OS;
if not, the effect you see might be left over from a point where many
objects were needed at once.

I definitely would not add a thread that does GC.  The interpreter
will take care of this.

I'd probably do some object statistics (per class) and also summarize
String, Array and Hash sizes.  If you encounter a significant increase
in volume somewhere then this might indicate where the problem lies.
You could try something like this:

require 'pp'

def stats_2
  counts = Hash.new(0)
  sizes = Hash.new(0)

  ObjectSpace.each_object(Object) do |obj|
    counts[obj.class] += 1

    if obj.respond_to?(:size) && obj.method(:size).arity == 0
      sizes[obj.class] += obj.size
    end
  end

  pp "counts", counts, "sizes", sizes
end

Signal.trap :INT do
  stats_2
end


Kind regards

	robert
Brandon C. (Guest)
on 2007-01-03 18:03
Do you mean this by locking?

@mutex.synchronize do
  @myarray << station
end

Because I do that in two spots, although I do .clear it out frequently.
unknown (Guest)
on 2007-01-03 18:19
(Received via mailing list)
On Thu, 4 Jan 2007, Robert K. wrote:

>
> It may actually be doing that.  The memory footprint you see on the OS
> side need not be directly related to the number of objects currently
> in memory.  I am not sure whether Ruby ever gives memory back to the
> OS; if not, the effect you see might be left over from a point where
> many objects were needed at once.
>

silly robert - don't you know that free always returns memory to the OS!

   http://groups-beta.google.com/group/comp.lang.ruby...

;-)

-a
Robert K. (Guest)
on 2007-01-03 18:31
(Received via mailing list)
On 03.01.2007 17:17, removed_email_address@domain.invalid wrote:
>
> 
http://groups-beta.google.com/group/comp.lang.ruby...
>
> ;-)

That says it all... :-)

	sillybert
Brandon C. (Guest)
on 2007-01-03 18:31
Robert K. wrote:
>
> Is there some point where it stabilizes after a day or so?
>

Nope... it keeps going like... like Pac-Man. *wocka* *wocka* *wocka*.
Although it does hover at various intervals before it grows, and before
I made my changes it would grow pretty steadily. When you watch the
memory count for the process, it takes three steps forward and two steps
back, but the end result is always an increase.

Good times!!

> I definitively would not add a thread that does GC.  The interpreter
> will take care of this.

Ya...I took that out. It felt wrong to do, but I wanted to see what
would happen.


>
> I'd probably do some object statistics (per class) and also summarize
> String, Array and Hash sizes.

I'm pretty sure that's what the MemoryProfiler here does:
http://scottstuff.net/blog/articles/2006/08/17/mem...

It takes that information, and writes the changes to log files at
specified intervals.

Here is what my daemon does....
It has a couple of threads: Thread1 gets work, and Thread2 performs
work.

Thread 1:

- Thread1 gets rows from a database every N seconds
- it builds a series of objects based on the database rows and appends
them to an array; they are called "work items". The array is protected
by a mutex-sync because Thread2 picks items off of it
- it sleeps for a while, then repeats the process

Thread 2:

- Thread two gets a "work item" from the above array. An item is
"popped" off the array by calling .shift from within a mutex-sync block
- Hpricot pulls information down from URLs inside the "work item"
- the information gets put into hashes, which get dumped to disk via
YAML::dump

And that's it. It's a pretty small daemon.
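In outline, the shared array and lock look something like this (a sketch with placeholder data, not the actual daemon):

```ruby
work_items = []    # shared array of "work items"
mutex = Mutex.new

# Thread 1: builds work items (stand-ins for objects made from DB rows)
# and appends them under the lock.
producer = Thread.new do
  3.times do |i|
    mutex.synchronize { work_items << "work item #{i}" }
  end
end
producer.join

# Thread 2: pops items off with shift, also under the lock.
processed = []
consumer = Thread.new do
  until work_items.empty?
    item = mutex.synchronize { work_items.shift }
    processed << item if item
  end
end
consumer.join
```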
Jan S. (Guest)
on 2007-01-03 18:55
(Received via mailing list)
On 1/3/07, Brandon C. <removed_email_address@domain.invalid> wrote:
> -it sleeps for a while then repeats the process
>
> Thread 2:
>
> - Thread two gets a "work item" from the above array. An item is
> "popped" off the array by calling .shift from within a mutex-sync block
> - Hpricot pulls information down from URLs inside the "work item"
> - the information gets put into hashes, which get dumped to disk via
> YAML::dump

Some time ago there was a problem with Array#shift leaking memory.
The proposed solution at the time was to replace push/shift pairs with
unshift/pop (i.e. entering the data in the opposite direction). I
think it should be fixed now, but I'm not sure (1.8.5-p0 had this bug;
maybe 1.8.5-p2 doesn't).
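Concretely, the workaround keeps FIFO order but enters data from the other end (illustration only):

```ruby
# push/shift FIFO: add at the tail, remove from the head.
a = []
a.push 1
a.push 2
first_out = a.shift       # 1 leaves first

# unshift/pop FIFO: add at the head, remove from the tail.
# Same first-in-first-out order, but avoids the old Array#shift leak.
b = []
b.unshift 1
b.unshift 2
same_first_out = b.pop    # also 1
```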
unknown (Guest)
on 2007-01-03 19:09
(Received via mailing list)
On Thu, 4 Jan 2007, Brandon C. wrote:

> Do you mean this by locking?
>
> @mutex.synchronize do
>  @myarray << station
> end
>
> Because I do that in two spots, although I do .clear it out
> frequently.

Do you have situations where you end up with a lot of threads?  Do you
then have a lot of threads all using the same Mutex?  There's an issue
with the way Ruby handles the memory allocated to an array when values
are shifted off of it.  In short, Mutex uses an array to manage the
queue of waiting threads.  It pushes threads onto it and shifts them off
of it, and if you have a lot of threads, you will see what seems to be
inexplicable memory usage as a consequence.  Also be aware that if you
use shift() on arrays elsewhere, your arrays are using more memory than
you think.

The current best fix is to use the fastthread library if you can.  It
replaces the Ruby threading support items like Mutex with much faster C
versions.

If, for whatever reason, you can't do this, you can override the
definitions of the Mutex lock() and unlock() methods to use unshift and
pop instead of push and shift for placing threads into the waiting queue
and taking them off.  This doesn't hold a candle to all that's being
done with fastthread, but it does eliminate a bad RAM usage issue if you
really can't use fastthread for whatever reason.

Something like this:

class Mutex

  def lock
    while (Thread.critical = true; @locked)
      @waiting.unshift Thread.current
      Thread.stop
    end
    @locked = true
    Thread.critical = false
    self
  end

  def unlock
    return unless @locked
    Thread.critical = true
    @locked = false
    begin
      t = @waiting.pop
      t.wakeup if t
    rescue ThreadError
      retry
    end
    Thread.critical = false
    begin
      t.run if t
    rescue ThreadError
    end
    self
  end

end


Kirk H.
unknown (Guest)
on 2007-01-03 19:41
(Received via mailing list)
I think you said you are using Windows/MS SQL Server.
The ADO driver leaks memory like you wouldn't believe.
In my Rails app I switched to ODBC and it runs much better.
Brandon C. (Guest)
on 2007-01-03 20:11
unknown wrote:
>
> Do you have situations where you end up with a lot of threads?  Do you
> then have a lot of threads all using the same Mutex?

Up to 15 threads work the queue. One thread adds work items; the others
pull items out of the queue. All threads access the queue via this
object. Here is the latest code (the work queue is initialized once and
is accessible to all threads):

class StationWorkQueue

  def initialize
    @mutex = Mutex.new
    @stations = []
  end

  def add_station(station)
    @mutex.synchronize do
      @stations.clear if @stations.size == 0
      @stations.unshift(station)
    end
    station = nil
    return nil
  end

  def get_station
    return @stations.pop
  end

  def size
    @mutex.synchronize do
      return @stations.size
    end
  end
end

By the way, my worker threads don't die. Is there any kind of memory
issue with that? They run in a loop: they keep calling
StationWorkQueue#get_station, perform work, then sleep for a short
while so the CPU doesn't spike, and then go looking for more work. They
drain the queue pretty fast too.
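A worker's loop looks roughly like this (a sketch; the seed data stands in for real work items, and the bounded loop replaces my endless one):

```ruby
queue = ["a", "b", "c"]   # stand-in for the shared StationWorkQueue
mutex = Mutex.new
done  = []

workers = 3.times.map do
  Thread.new do
    loop do
      station = mutex.synchronize { queue.pop }
      break unless station              # real workers would sleep and poll again
      mutex.synchronize { done << station }
      sleep 0.01                        # short pause so the CPU doesn't spike
    end
  end
end
workers.each(&:join)
```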

unshift/pop doesn't seem to stop the memory consumption, but it's been
slowed down a great deal. It's only gained a few MB over the last 30
minutes. I'll let this run for a few hours and see what happens. I'll
also try fastthread.

I may also try replacing my simple work queue with reliable-msg. That
was my original plan, so I could scale this across multiple processes
and/or machines, but I just wanted the satisfaction of seeing something
done fast so I took a stab at it this way.

I also created a second daemon to load the YAML files, look at the data
and decide which database it should get plopped into. That one leaks
memory like crazy, and it's simpler than the first daemon. So ya....I'll
take that other suggestion and try ODBC over ADO.

Fun fun!
Eric H. (Guest)
on 2007-01-03 20:23
(Received via mailing list)
On Jan 3, 2007, at 07:55, Robert K. wrote:
> On 03.01.2007 15:46, Brandon C. wrote:
>> http://scottstuff.net/blog/articles/2006/08/17/memory-leak-profiling-with-rails
>
> I'd probably do some object statistics (per class) and also
> summarize String, Array and Hash sizes.  If you encounter a
> significant increase in volume somewhere then this might indicate
> where the problem lies. You could try something like this:

Um...

The link above is a better implementation of what you wrote here.

--
Eric H. - removed_email_address@domain.invalid - http://blog.segment7.net

I LIT YOUR GEM ON FIRE!
Rob M. (Guest)
on 2007-01-03 21:02
(Received via mailing list)
removed_email_address@domain.invalid wrote:
> I think you said you are using Windows/MS SQL Server.
> The ADO driver leaks memory like you wouldn't believe.
> In my Rails app I switched to ODBC and it runs much better.

This was my experience using the ADOdb PHP module and PostgreSQL as
well, but it's perhaps unrelated. In fact, I think that was identified
as coming from _any_ PHP OO code in certain pre-5 versions of PHP. I can
confirm that anything that used classes/OO in PHP before 5.0 on RH
Enterprise leaked memory pretty badly.
Aníbal (Guest)
on 2007-01-04 15:10
(Received via mailing list)
Brandon,

    Some time ago we had the same issue with the MySQL adapter in a
Rails app: under our Windows boxes it was running fine, but when
deployed to a *nix box a memory leak started and never ended.

    The problem was fixed by updating the MySQL adapter, but it was
driving us nuts until we found it to be the culprit.

    Hope it helps, and good luck

--
Aníbal Rojas
http://www.rubycorner.com
http://www.hasmanydevelopers.com/
Brandon C. (Guest)
on 2007-01-04 23:03
Thanks for all the advice. My problem appears to be solved, but I'll
keep my daemon running for a few days and keep track of its memory
usage.

I followed several suggestions, so I'm not sure exactly what helped, but
I think switching the database connection from ADO to ODBC had the
biggest impact.
MenTaLguY (Guest)
on 2007-01-06 20:18
(Received via mailing list)
Incidentally, if you're using fastthread, I'd be interested to know how
memory consumption and performance compare for you with and without
$fastthread_avoid_mem_pools set to true.  I'm trying to weigh the
performance advantages and disadvantages of using memory pools.

-mental
MenTaLguY (Guest)
on 2007-01-06 20:30
(Received via mailing list)
On Thu, 2007-01-04 at 03:11 +0900, Brandon C. wrote:
> class StationWorkQueue
> ...
> end
>
> unshift/pop doesn't seem to stop the memory consumption, but it's been
> slowed down a great deal. It's only gained a few MB over the last 30
> minutes. I'll let this run for a few hours and see what happens. I'll
> also try fastthread.

I'd recommend using fastthread's Queue class.  thread.rb's Queue is
almost exactly like your StationWorkQueue, and fastthread's version
should be nicer about memory consumption (and faster).
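With a Queue the hand-rolled locking disappears entirely; something like this (a sketch; on 1.8 you'd `require 'thread'` first):

```ruby
# Queue is a thread-safe FIFO; pop blocks until an item is available,
# so there is no need for a separate Mutex around the array.
stations = Queue.new

producer = Thread.new do
  5.times { |i| stations << "station #{i}" }
end

drained = []
consumer = Thread.new do
  5.times { drained << stations.pop }
end

[producer, consumer].each(&:join)
```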

-mental
Brandon C. (Guest)
on 2007-01-07 00:01
I have not tried fastthread yet, but I may. It turns out I do still
have a small memory leak. It only creeps up a few megabytes per day, but
I still want to fix it.

I'm going to use fastthread or replace the whole concept of an in memory
queue with reliable-msg.
Brandon C. (Guest)
on 2007-01-13 20:31
Thanks again for all the help. My problems are gone. I still haven't
tried fastthread but that's next on my list.

Here is what I did:
- paid a little more attention to nil-ing/.clear-ing arrays and hashes I
no longer need
- traded most threads for forking (with win32-process)
- queued all work in reliable-msg rather than an in-memory array

to-do:
- swap all remaining native Ruby threads for fastthread for a little
performance boost.

My application uses a lot more RAM because of forking, but I've gained
speed and scalability with forks and reliable-msg. I can fork more
processes as needed and spread them out over many computers.

The only thing I can't figure out is how to expose the reliable-msg
server to outside IP addresses (it only binds to localhost) but I'll do
more reading.
This topic is locked and cannot be replied to.