Re: warbler-rack timeout q / circuit breaker

I tried this with 0.9.3 and 1.1.6 Final and it worked as you described.

The timeout works well as a circuit breaker now.

See the second screenshot on the page below for the jconsole output:
https://trisano.csinitiative.net/wiki/Dec26Jruby116FinalRack093

That screenshot shows an expensive transaction under heavy load. The
memory leak present in 1.1.6 RC2 has clearly been fixed in Final, as
you can see threads dying off now.

I think the max runtimes setting is working now. I set min to 2 and
max to 6. I saw 6 db connections (we are using Rails), so I assume that
the pool grew. It would be nice if jruby-rack logged as it grew the
pool. It looks like this piece of pooling logic uses the
java.util.concurrent package, so I presume that is why it's not logged
(do I have this right?).

Mike

===============

Thanks Nick.

Will do. Glad I asked.

I’ll let you know of anything interesting in the results.

Mike

Nick S. wrote:

On Sun, Dec 21, 2008 at 8:07 AM, Mike H. <mike@csinitiative…

I just upgraded to the latest warbler/rack (0.9.12/0.9.3). I'll do
some testing on it, but wanted to run something past you …

On our last round of load testing we saw our app, TriSano, go into a
predictable death spiral once it got to a certain point of load.

I'm hoping to come up with some form of a "circuit breaker" to avoid
this. What seems to happen is that as threads build up and wait, the
app bleeds resources and can never recover. I realize there is a bug
fixed around this in 1.1.6 (threads/memory), but I would still like
this configured so that rather than allowing threads to build up, the
circuit breaker kicks in and we tell the user to try again later (and
the thread queue stays low).
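This isn't jruby-rack's implementation, but the circuit-breaker behavior described above can be sketched as a small Rack middleware (the class name and threshold are made up for illustration): once too many requests are already in flight, new ones get an immediate 503 "try again later" instead of queuing up.

```ruby
require 'thread'

# Hypothetical middleware illustrating the circuit-breaker idea:
# reject requests when too many are already being processed,
# so the thread queue stays low instead of building up.
class OverloadBreaker
  def initialize(app, max_in_flight: 10)
    @app = app
    @max = max_in_flight
    @count = 0
    @lock = Mutex.new
  end

  def call(env)
    admitted = @lock.synchronize do
      if @count < @max
        @count += 1
        true
      else
        false
      end
    end
    unless admitted
      # Breaker is open: tell the client to try again later.
      return [503, { 'Retry-After' => '5' }, ['Server busy, try again later']]
    end
    begin
      @app.call(env)   # breaker is closed: handle the request normally
    ensure
      @lock.synchronize { @count -= 1 }
    end
  end
end
```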

I observed in rack 0.9.2 that the rack timeout didn't kick in the way
I expected. Under load I couldn't get it to reject requests. I
configured it to 5 seconds, but it wouldn't reject requests that were
clearly waiting this long.

My real question is: is the Warbler timeout a good choice for a
circuit breaker? How have you observed it to work in cases like this?

Please do another round of load testing if you can. There was a bug
in the runtime pooling code that prevented it from hitting the maximum
number of runtimes; instead, each new request would attempt to
create another runtime when one wasn't available. The new release of
jruby-rack (0.9.3) should fix that, and hopefully you will be able
to observe the timeout. Do report a bug in the new JRuby-Rack JIRA
if you still can't get the timeout behavior to happen.

Also note you need at least two runtimes for pooling to be used.

/Nick


To unsubscribe from this list, please visit:

http://xircles.codehaus.org/manage_email


Mike H. wrote:

I think the max runtimes setting is working now. I set min to 2 and
max to 6. I saw 6 db connections (we are using Rails), so I assume that
the pool grew. It would be nice if jruby-rack logged as it grew the
pool. It looks like this piece of pooling logic uses the
java.util.concurrent package, so I presume that is why it's not logged
(do I have this right?).

Mike

We switched to 1.1.6 final and jruby-rack-0.9.3, and have noticed the
same thing. In particular, the MBeans > org.jruby > Runtime control in
jconsole now shows a limited number of runtimes under load, rather
than a tolerably huge number. Also, memory and threads no longer seem
to get out of control.


Tommy “tolerably huge?” McGuire
[email protected]

