Hi List,
We have been pushing our application with some performance testing of late, and during these tests we are experiencing some PermGen out-of-memory problems using JRuby 1.2.0 + Rails 2.2.2. Our application is a multithreaded app: a main thread with a number of worker threads. The worker threads are fairly simple really, mainly making use of the ActiveRecord libraries, and then we simply let them die.
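In rough outline each worker looks something like this (a simplified, illustrative sketch rather than our real code):

  # Main thread spawns short-lived workers, waits for them, then lets them die.
  jobs = (1..5).to_a          # stand-in for whatever records the workers operate on
  workers = jobs.map do |job|
    Thread.new(job) do |j|
      # worker body: in our app this is mostly ActiveRecord reads and writes
      sleep 0.1               # placeholder for the real work
    end
  end
  workers.each { |t| t.join }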
For our testing we have increased the heap size, and only now are we encountering a PermGen problem. I have read the article by Nick S. (JRuby and the Permanent Generation), but what isn’t clear is how to estimate the required PermGen size: when is enough enough? Is there a reason why we are only encountering the problem after increasing the heap size?
We have not yet changed the default PermGen sizes, so they are at the default (64m max?), as we want to understand what is going on before we start increasing things across the board.
I guess my question boils down to a few things:
1/ Do we have a problem? A leak? What is a good strategy for detecting this leak? Or is it a known problem solved with a larger PermGen space?
2/ If more PermGen space is the answer, how can we estimate the requirements?
Any info would be great,
Cheers,
Mik.
Hi Mik,
I’m not an expert on JRuby on Rails, so I might be totally wrong, but add -XX:+PrintGCDetails to your JVM options; that way you will be able to see what’s going on in real time.
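For example, if you start the app with the plain jruby launcher, JVM options can be passed through with -J (just a sketch, adapt it to however you really start your server):

  jruby -J-XX:+PrintGCDetails -J-XX:+PrintGCTimeStamps -J-Xloggc:gc.log script/server

Inside an application server the same flags would go into that server’s own JVM options instead.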
Also, if your app works OK without adding heap space, that suggests it’s not leaking (at least not in the main heap).
GC in Java is a very wide subject; what I can recommend is reading more about it, and upgrading JRuby, because it is continually being tuned to use less and less memory.
There are also more specific switches to control each generation.
I once had a problem where adding more heap space also increased the space for the new generation, and in effect that was what was throwing an exception.
Hope that helps,
Paweł Wielgus.
Hi Mik,
As Pawel says, GC can be a mysterious beast. It wouldn’t surprise me that by increasing the heap size you also increase PermGen requirements. JRuby uses soft references in a few places to control how much to cache, so when you increase the heap you also increase how much is cached, since the soft references don’t get collected as quickly. I can’t guarantee that your PermGen issues are a direct consequence of this, but it’s possible.
I guess my question boils down to a few things:
1/ Do we have a problem? A leak? What is a good strategy for detecting this leak? Or is it a known problem solved with a larger PermGen space?
If you have a leak, increasing PermGen would only delay the OOM condition, so you can try increasing it to 128M and see what happens.
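For example, with the plain jruby launcher (an app server would take the same flag in its own JVM options; 128m here is just a starting point):

  jruby -J-XX:MaxPermSize=128m script/server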
2/ If more PermGen space is the answer, how can we estimate the requirements?
It’s pretty much a wet-finger-in-the-wind game. We use a 256m PermGen ceiling for kenai.com, and I wouldn’t expect too many Rails apps to get any bigger than that. If they do, I’d suspect JRuby to be at fault.
/Nick
Mik,
I’ve noticed that when I redeploy my JRoR application (no matter the app server: Tomcat, GlassFish, JBoss) I am leaking about 20 megs of PermGen. See this bug for more details:
http://jira.codehaus.org/browse/JRUBY-3281
It’s marked as “Not A Bug”, but it certainly is still occurring.
Also, when monitoring my app server over JMX I can clearly see JRuby runtimes piling up on a redeploy as well. I assume the two are connected.
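If anyone wants to see the same thing without JMX, the Sun JDK’s jmap can dump per-classloader PermGen statistics (HotSpot-specific, and the output format varies a bit between JDK versions):

  jmap -permstat <pid>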
-Chad J.
Thanks, that is some useful information. We know that we do have a leak in the main code; we are using JFreeChart and it ‘seems’ to be coming from there, so we may have to live with this for the time being. We will increase the PermGen size and see where we get.
Cheers,
Mik.
From the GlassFish end of things, we’ve noticed three things that cause this.
The first is Grizzly caching Request objects in a thread local. While that’s good for performance reasons, it means that a reference to the JRuby runtime will stick around until the cache is cleared. There should be a fix in the next released version of GlassFish (both gem and server); in the meantime you can “manually” clear the cache by sending several requests to the server after undeploy, such as to a non-existent context root.
We’ve also seen some issues around JRuby unregistering itself as a secure communications provider (through JRuby-openssl). Similarly, that ends up keeping a reference to a JRuby runtime, so that things aren’t collected. I know that the JRuby people are working on getting a fix together for that; in the meantime the default JRuby limited openssl doesn’t seem to have this issue.
Lastly, there is http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4957990, which likely affects all of the listed application servers. As mentioned in the bug, increasing the PermGen size (while having class unloading enabled) will give correct behavior and no leak. I’m working with one of the JDK team members to get a fix for this finally integrated into the JDK.
Tomcat and JBoss may have other, or additional, causes of PermGen leakage. I haven’t looked into this using them, so I can’t provide much information there.
Chad J. wrote:
Does this “manual clear” need to occur after the undeploy but before the subsequent deploy of the app?
No. The issue is that Grizzly worker threads don’t clear their caches on undeploy, so cached data persists after undeployment until each worker thread has processed a new request. The “manual clear” is just a way to make each worker thread process a new request. Under normal usage the ratio of deployments to requests is low enough that it isn’t an issue, but in development environments, where the ratio of deployments to requests can approach 1:1, it is.
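If you want to automate that, a trivial loop from JRuby (or plain Ruby) does the job; the port and context root here are made up, so substitute your own:

  require 'net/http'
  require 'uri'

  # Hit a non-existent context root enough times that every Grizzly worker
  # thread handles a request and drops its cached Request object.
  uri = URI.parse('http://localhost:8080/no-such-app/')
  50.times { Net::HTTP.get_response(uri) }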
Do I need to go out of my way to enable class unloading? Is there a special JVM option that forces this feature to be enabled?
Yes. By default, the HotSpot JVM doesn’t bother garbage collecting the permanent generation at all, since it’s assumed that very few classloaders will become collectible. Since each application gets its own classloader, application servers are an obvious exception to this. You’ll want to run with the following JVM options, which will enable class unloading:
-XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled
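In GlassFish those typically end up as jvm-options entries in domain.xml (or are added with asadmin create-jvm-options); roughly like this, with the PermGen ceiling bumped at the same time (256m is only an example value):

  <jvm-options>-XX:+UseConcMarkSweepGC</jvm-options>
  <jvm-options>-XX:+CMSClassUnloadingEnabled</jvm-options>
  <jvm-options>-XX:MaxPermSize=256m</jvm-options>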
On Thu, Jun 25, 2009 at 11:41 AM, Jacob K. [email protected] wrote:
From the GlassFish end of things, we’ve noticed three things that cause this.
Excellent, thanks for sounding off on this issue Jacob!
Does this “manual clear” need to occur after the undeploy but before the subsequent deploy of the app?
In my case I am using the “limited openssl” option, so I don’t think this one is affecting me.
Do I need to go out of my way to enable class unloading? Is there a special JVM option that forces this feature to be enabled?
Thanks!
-CJ