I’ve actually been looking at these benchmarks over the past week while
prototyping the next generation of TorqueBox. So far I’ve been looking
mainly at the JSON serialization test and the plaintext test to get a
feel for raw throughput.
All the JRuby tests are run on top of Resin. I’m not sure why Resin was
chosen instead of Puma, Trinidad, TorqueBox / torquebox-lite, or
something else more sane. That’s the biggest problem with at least the
JSON and plaintext tests.
TechEmpower gets 69k req/s on their i7 hardware for the ‘rack-ruby’ JSON
test that uses Ruby 2.0, Unicorn, and Nginx.
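For context, that rack-ruby JSON test is essentially a one-handler Rack app. Here’s a minimal sketch of what such an app looks like (the canonical {"message": "Hello, World!"} payload is an assumption; the exact handler in the benchmark repo may differ slightly):

```ruby
require 'json'

# A bare Rack app: a callable that takes the env hash and returns
# the [status, headers, body] triplet. The JSON test just serializes
# a tiny hash on every request.
app = lambda do |_env|
  body = { 'message' => 'Hello, World!' }.to_json
  [200, { 'Content-Type' => 'application/json' }, [body]]
end

# Exercise the app directly with an empty env hash, no server needed:
status, headers, body = app.call({})
puts status      # => 200
puts body.first  # => {"message":"Hello, World!"}
```

Under Unicorn or any of the JRuby servers mentioned here, the same callable would be handed to the server via a rackup file instead of being called directly.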
As a baseline, on my local laptop I get 32k req/s for that same
rack-ruby test with the same versions of MRI Ruby, Unicorn, and Nginx.
On that same local laptop with torquebox-lite, I get 46k req/s. A nice
improvement for sure, but not anything amazing.
Still on the same laptop, using some unpushed prototype code that will
replace torquebox-lite, I get 71k req/s. This prototype is based on
JBoss Undertow, which you’ll notice is at or near the top of most of
these benchmarks. Perhaps it’s time to put a bow on this server and get
it out there for people to play with?
So, we should definitely work up a pull request to use a better JRuby
server option. And it doesn’t have to be TorqueBox by any means - these
are just the simple examples I had locally.
One optimization in all my local JRuby tests is that I set the JRuby
option “-Xjruby.ji.objectProxyCache=false”, which will be the default in
JRuby 9k. At lower throughput levels it doesn’t make much of a
difference, but once you get over 50k req/s or so, the overhead from the
object proxy cache really starts to show up.
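For anyone wanting to reproduce this, here’s how the option can be passed on the command line (this echoes the flag exactly as given above; “app.ru” is a hypothetical rackup file, not something from the benchmark repo):

```shell
# Pass the JRuby option directly when launching the app:
jruby -Xjruby.ji.objectProxyCache=false -S rackup app.ru

# Equivalent system-property form, handed to the JVM via -J:
jruby -J-Djruby.ji.objectProxyCache=false -S rackup app.ru
```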