Still experiencing weird problems where 1 of my 4 processors intermittently hits 100% CPU and Tomcat completely locks up my Rails app: no further requests can come in, even with spare CPU cycles. Can anyone help diagnose what process is preventing JRuby from handling any requests? I have a copy of the kill -3 here
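For anyone reproducing this, a kill -3 dump can be taken along these lines (a sketch: the PID and the catalina.out location are placeholders, not values from this setup):

```shell
# Send SIGQUIT (signal 3) to the Tomcat JVM. The thread dump is written to
# Tomcat's stdout, usually logs/catalina.out, not to your terminal.
PID=12345                                        # placeholder: Tomcat's PID
kill -3 "$PID" 2>/dev/null || true
CATALINA_OUT="${CATALINA_HOME:-/opt/tomcat}/logs/catalina.out"   # assumed path
if [ -f "$CATALINA_OUT" ]; then
    tail -n 200 "$CATALINA_OUT"                  # the dump appears at the end
fi
```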
Seems there are a lot of WAITING threads, but I'm not sure where to track this down, or if anyone has any suggestions on how to get better granularity on the threads that are locking everything up.
I do notice this, but I'm not sure how to track down this thread:
waiting on <0x00002aaac1d37b88> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:116)
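One standard trick for that granularity (not specific to JRuby; PID and thread id below are placeholders): use top's per-thread view to find the native thread pinned at 100%, convert its id to hex, and match it against the nid= field in the kill -3 dump.

```shell
PID=12345                                        # placeholder: Tomcat's PID
top -b -H -n 1 -p "$PID" 2>/dev/null | head -20  # per-thread CPU view

# Suppose thread 30521 is the one at 100%. Thread dumps list native thread
# ids in hex as nid=0x..., so convert the id and grep the dump for it:
NID=$(printf '0x%x' 30521)
echo "$NID"
grep "nid=$NID" catalina.out 2>/dev/null || true # assumed dump location
```

The grep hit gives you the Java-level stack of exactly the thread that top says is burning CPU.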
We had a similar problem in the past. In our case we would see some
jruby threads going into 100% cpu utilization and never finishing the
request. Other idle threads were able to process requests, though. The
bug was in the juno library and was fixed soon after we reported it.
I’d suggest you take consecutive jstack traces and compare the stacks of
the running threads to see if they are always in the same operation. I
usually take 10 snapshots 1 second apart.
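In script form, that snapshot loop might look like this (jstack ships with the JDK; the PID is a placeholder):

```shell
PID=12345                          # placeholder: the Tomcat JVM's PID
i=1
while [ "$i" -le 10 ]; do
    jstack "$PID" > "jstack.$i.txt" 2>/dev/null || true
    sleep 1
    i=$((i + 1))
done

# Threads whose stacks are identical across all ten files are the suspects:
grep -l 'RUNNABLE' jstack.*.txt 2>/dev/null || true
```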
HTH,
fdo
AD wrote:
Seems there are a lot of WAITING threads but not sure where to track this down, or if anyone has any suggestions on how to get better granularity on the threads that are locking everything up.
I do notice this but not sure how to track down this thread:
waiting on <0x00002aaac1d37b88> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:116)
Thanks, how did you figure out it was the juno library? I am trying to analyze the link below to the stack trace but not really able to follow where the issue might lie. Is there a more informative way (than kill -3) to find out what is hanging inside Tomcat? We have a few apps running under Tomcat and I'm not sure if the issue is in JRuby or in an HTTP request to another app running under Tomcat.
Thanks
Adam
thanks, how did you figure out it was the juno library?
All the methods at the top of the stack (including the one executing) were from that library. Besides, we correlated these problems with the activities and complaints of our doc writer (mostly a wiki user). When I compared 10 consecutive snapshots of the stack, the stack for those troubled threads didn't change at all, meaning they had been executing the same method for 10 seconds.
I am trying to analyze the link below to the stack trace but not
really able to follow where the issue might lie. Is there a more
informative way (than kill -3) to find out what is hanging inside
Tomcat ?
If by any chance you are using Solaris, there are a couple of things you could do with DTrace probes to gain insight into what the process is doing. For example, you can start intercepting calls to a given Java method and inspect its arguments, which can give you some clues. But that is involved and mostly at the OS level. I can give you more details if you want to follow this route.
Have you tried deploying in something other than Tomcat, just to see if the same problem happens?
We have a few apps running under Tomcat and not sure if the issue is in JRuby or an HTTP request to another app running under Tomcat.
HTH,
fdo
To unsubscribe from this list, please visit:
http://xircles.codehaus.org/manage_email