We have a background process kicked off via rake that failed with an
OutOfMemoryError. I’m confused by the backtrace, as it doesn’t point
to rake or our code. A call to an ActiveRecord finder method is the
highest-level call other than JRuby internals. Is part of the
backtrace missing? Is this expected?
The end of the backtrace is as follows:
… [lots of backtrace here] …
[expected to see our code, then rake internals here, but nothing more]
In case it’s useful, the exception is
“Java::JavaLang::OutOfMemoryError (GC overhead limit exceeded)” and at
the lowest level the Error bubbled up out of some new_relic method
tracing stuff that’s aliased into find_by_sql.
Thanks for any pointers.
Is it possible that it’s loading up ActiveRecord (from a require or
something like that) and the heap you have is too small to contain it?
“GC overhead limit exceeded” generally means that the GC is extremely
busy — by default, spending more than 98% of total time in GC — while
recovering very little memory (less than 2% of the heap per collection).
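For reference, those thresholds are HotSpot defaults and can be tuned, or the check disabled outright (which usually just delays the inevitable heap-exhaustion OOM). A sketch, with a made-up script name:

```shell
# HotSpot flags behind the "GC overhead limit exceeded" check.
# GCTimeLimit: percent of total time spent in GC before the error fires (default 98).
# GCHeapFreeLimit: minimum percent of heap a collection must free (default 2).
java -Xmx1024m -XX:GCTimeLimit=98 -XX:GCHeapFreeLimit=2 -jar jruby.jar worker.rb

# Or disable the check entirely -- the process will still OOM, just later:
java -Xmx1024m -XX:-UseGCOverheadLimit -jar jruby.jar worker.rb
```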
Yeah, I’d say you might be hitting a default JVM heap size. The
“jruby” command will set the JVM max heap to 500MB, but if running
something like “java -jar jruby.jar” the default is much, much smaller
(like 64MB or something absurdly small).
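If that’s what’s happening, a quick fix is to pass a bigger max heap through the jruby launcher (the -J passthrough and JRUBY_OPTS are standard JRuby; the rake task name below is made up):

```shell
# -J hands the flag straight to the underlying JVM:
jruby -J-Xmx1024m -S rake background:task

# Or set it once for every jruby invocation in this shell:
export JRUBY_OPTS="-J-Xmx1024m"
```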
Now you say you’re kicking off a background process, but on JRuby 1.6
and earlier that may be running in the same JVM, depending on how
you’re launching it. That could be bumping up against the current
process’s heap maximum.
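One way to see what the running process actually got is to ask the JVM itself from inside JRuby (java.lang.Runtime is the standard JVM API; this only works under JRuby, not MRI):

```shell
# Prints the max heap, in MB, that this JVM was launched with:
jruby -e 'puts java.lang.Runtime.runtime.max_memory / (1024 * 1024)'
```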
Give us more info on how you’re running it…this is probably a simple
thing to fix.
The heap is plenty big for ActiveRecord to load. This failure comes after
it’s been processing (including many AR queries) for a while.
To be clear, we definitely have memory issues in the code. I was just
surprised to see a backtrace that makes it look like JittedMethod.call is
the top-level call in the JVM. In subsequent testing we saw an OOM error
with no backtrace at all, so I guess you can’t reliably get that sort of
information when the JVM is in such a state.
– typed with my thumbs