I have three Mongrel processes running with a 600 MB heap each. Eventually,
after a few hours (a few tens of thousands of requests), memory consumption
grows and fills the heap entirely. Then I get a Java out-of-memory error and
the Mongrel exits.
I am using JRuby for the actual deployment. Is anyone else experiencing
this?
Out of curiosity, do you use ActiveRecord in your app and have multiple
threads using transactions?
I recently found out that in the version of ActiveRecord I am using (I think
it's from the Rails 1.2 series), the transaction method uses trap('TERM') to
roll back the transaction, and restores the previous handler on a successful
transaction. However, this causes a severe memory leak when multiple threads
use it, because the handlers are not restored properly, and some of them are
permanently leaked.
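From memory, the pattern looks roughly like this. This is a sketch, not the
actual ActiveRecord source, and rollback! is just a stand-in for whatever the
real rollback call is:

    def transaction
      # Save whatever handler was installed before and put our rollback
      # handler in its place. trap returns the previous handler, so in a
      # single thread this round-trips cleanly.
      previous = trap('TERM') { rollback! }
      yield
    ensure
      # Restore the saved handler. With two threads interleaved, thread B
      # can save A's temporary handler as its "previous" one and re-install
      # it here after A has already restored the real one. A's block, and
      # everything it closes over, then stays referenced forever. Each such
      # interleaving leaks one more handler closure.
      trap('TERM', previous)
    end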
Peter
Sharkie wrote:
I have three Mongrel processes running with a 600 MB heap each. Eventually,
after a few hours (a few tens of thousands of requests), memory consumption
grows and fills the heap entirely. Then I get a Java out-of-memory error and
the Mongrel exits.
I am using JRuby for the actual deployment. Is anyone else experiencing this?
After some discussion on IRC it seems like this could be a new effect of
running JRuby + Mongrel under high load. So the running theory goes like
this:
- Mongrel spins up a new thread for each incoming request, regardless of
  how many have already been spun up; see the sketch after this list (I
  know there's some kind of throttling, but I think it's TTL-based, not
  count-based... anyone confirm?)
- If more requests are coming in than the server can handle, the backlog
  of requests causes more and more threads to be spun up
- Eventually the number of threads exceeds the thread-count or thread-stack
  maximum allowed by the JVM's settings, and it starts punting on new
  threads.
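If that's right, the failure mode is the plain unbounded thread-per-connection
pattern. A rough sketch of what I mean (not Mongrel's actual accept loop, and
handle_request is a made-up stand-in):

    require 'socket'

    server = TCPServer.new(3000)
    loop do
      client = server.accept
      # One fresh thread per connection, with no upper bound. On JRuby each
      # of these is a native JVM thread with its own stack, so a request
      # backlog turns directly into native thread (and stack) exhaustion.
      Thread.new(client) do |socket|
        handle_request(socket)
        socket.close
      end
    end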
I suggested Sharkie try running with the JRuby thread pool instead. He
enabled it and set it to a max of 50, which should in theory queue up
Ruby threads without spinning up too many native threads, as well as
reduce the cost per new connection/thread. So far it seems to be OK.
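For anyone who wants to try the same thing: the pool is controlled by JRuby
runtime properties, so the invocation looks roughly like this (property names
as documented for JRuby; the mongrel_rails command line here is just an
example):

    # Cap the native thread pool at 50; new Ruby threads then wait for a
    # pooled native thread instead of each getting their own.
    jruby -J-Djruby.thread.pool.enabled=true \
          -J-Djruby.thread.pool.max=50 \
          -S mongrel_rails start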
This is certainly something we should look into. If it is the case that
Mongrel will continue to spin up threads for incoming requests, even to
the point of blowing the thread/stack max, we may want to write up a
simple howto on properly throttling it. If the thread pool works out
well, we may want to start recommending that people running Mongrel use it.
We may even want to consider making all threads run through the pool and
having the pool on by default with some reasonably large number like 50
or 100.
Thoughts?
Hi, I am the guy talking to headius. My nick on IRC is Jomyoot.
Will report back if this really saves my day.