ActiveRecord 2.3.5's create & find slower than ActiveRecord 2.1.2?

Can anyone help me understand what’s going on here? We updated one of
our apps from 2.1.2 to 2.3.5, hoping largely to get a little
performance boost since I know there were a lot of performance
improvements in the 2.3.x line. However, we discovered instead that
our test suite is now running around 40% slower. Looking around
online we found some reports of slowness with the built-in memcache
client, but we’re not using it. So we started reducing code looking
for the performance difference and eventually arrived at what appears
to be around a 35% slowdown in AR::B.find and around a 20% slowdown in
AR::B.create.

To test this we created two bare Rails apps, one on 2.1.2 and the
other on 2.3.5. Both were set to use MySQL for the database:

$ rails-2.1.2 bench212 -d mysql
$ rails-2.3.5 bench235 -d mysql

Then we ran this in each:
$ ./script/generate model Foo
$ rake db:create
$ rake db:migrate

Then we dropped this benchmark script ( AR::B benchmark · GitHub ) into
the script directory of each. The results are pretty consistent:

Rails 2.3.5
25000 finds (7330.0ms) (7.33)
25000 creates (32942.4ms) (32.942)

Rails 2.1.2
25000 finds (5.37473)
25000 creates (26.60233)
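The gist itself isn’t inlined here, but a minimal sketch of such a script (assuming Ruby’s standard Benchmark module and the generated Foo model; the actual gist may differ) might look like:

```ruby
require 'benchmark'

# Times a block n times and prints in roughly the same shape as the
# results above.
def bench(label, n)
  secs = Benchmark.realtime { n.times { yield } }
  puts format('%d %s (%.1fms) (%.3f)', n, label, secs * 1000, secs)
  secs
end

# Inside a Rails app (e.g. via script/runner) the blocks would hit the
# Foo model:
#   bench('creates', 25_000) { Foo.create }
#   ids = Foo.find(:all).map(&:id)
#   bench('finds', 25_000) { Foo.find(ids[rand(ids.size)]) }
# Placeholder work here so the sketch runs outside Rails:
bench('creates', 1_000) { Object.new }
bench('finds',   1_000) { 1 + 1 }
```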

Looking at the profile trees, the only difference that stands out
is that in 2.3.5, 3.11% of the time is spent in
ActiveRecord::ConnectionAdapters::ConnectionHandler#retrieve_connection_pool,
which doesn’t appear in 2.1.2.

We’ve tried this on two different machines (one a Mac running a
MacPorts-installed Ruby, the other an x86-64 Ubuntu box running stock
Ruby 1.8) and see similar results on each. Is this slowdown expected?
Or is there something flawed in our benchmarks?

Thanks!

Ben

On Jan 14, 5:30 pm, Ben Tucker [email protected] wrote:

Might be worth asking on rails-core - this is a pretty small benchmark
that others can hopefully corroborate…

(FWIW I’ve also seen a performance hit on one of the apps I work with
when going from 2.2.x to 2.3.x but haven’t got as far as narrowing it
down to a small example like this.)

Fred

On Jan 14, 12:44 pm, Frederick C. [email protected] wrote:

Might be worth asking on rails-core - this is a pretty small benchmark
that others can hopefully corroborate…

Thanks. I just posted over there as well.

-Ben

On Thu, Jan 14, 2010 at 9:30 AM, Ben T. [email protected] wrote:

Ben, I have the following numbers:

Configuration: Mac OS X 10.5.8 (PowerMac G5 2.5 GHz Quad)
Ruby: 1.9.1p376

Rails: 2.3.5 with MySQL adapter:

25000 finds (16.475510) # without Model.transaction do … end
25000 finds (16.259721) # with Model.transaction do … end
25000 creates (35.011375) # with Model.transaction do … end
25000 creates (3.031440) # without Model.transaction do … end, using AR-Extensions to perform a bulk import
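For reference, the AR-Extensions bulk import in that last row boils down to a single Model.import call. The sketch below uses a stand-in Foo class (not the real gem API) just to show the shape of the call:

```ruby
# With the ar-extensions gem (or its successor, activerecord-import)
# you would write roughly:
#
#   columns = [:created_at, :updated_at]
#   Foo.import columns, rows   # one multi-row INSERT for all rows
#
# Stand-in class so this sketch runs outside Rails; a real import
# builds a single multi-row INSERT statement.
class Foo
  def self.import(columns, values)
    # Report how many rows would go into the one INSERT.
    values.size
  end
end

rows = Array.new(25_000) { [Time.now, Time.now] }
inserted = Foo.import([:created_at, :updated_at], rows)
puts "#{inserted} rows in a single INSERT"
```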

Next, if your tests were actually modeling a real-world scenario, you
would be using the following techniques within your app:

a) ActiveRecord::Extensions for bulk imports

b) Memcached for caching, because you shouldn’t keep performing the
same query that returns the same result set. For example, if 100,000
people generated this query, your database would be hit 100,000 times
instead of once.

c) Wrap bulk operations that hit the database in
Model.transaction do … end, because without it a transaction is
generated for each create operation.

d) Queries may also be concurrent rather than purely sequential as
they appear in your example code. Using something like Apache JMeter,
you can do proper load testing (i.e. simulate many users doing
different things at various intervals).

In short, for a test to be viable, one must create realistic tests
that model your application’s real-world behavior in order to analyze
its performance.
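To make point (c) concrete: in the app it is just a matter of wrapping the loop, e.g. `Foo.transaction do … end`. The sketch below uses a stand-in FakeDB class (not the real ActiveRecord API) purely to count how many transactions each approach opens:

```ruby
# By default ActiveRecord opens one transaction per create; wrapping
# the batch in Model.transaction uses a single one. In a Rails app:
#
#   Foo.transaction do
#     25_000.times { Foo.create }
#   end
#
# FakeDB is a stand-in that just counts BEGIN/COMMIT pairs.
class FakeDB
  attr_reader :transactions

  def initialize
    @transactions = 0
  end

  def transaction
    @transactions += 1 # one BEGIN/COMMIT pair
    yield
  end
end

# Without the wrapper: one transaction per create.
db = FakeDB.new
1_000.times { db.transaction { :create } }
per_row = db.transactions

# With the wrapper: one transaction for the whole batch.
db = FakeDB.new
db.transaction { 1_000.times { :create } }
batched = db.transactions

puts "per-row: #{per_row} transactions, batched: #{batched}"
```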

-Conrad


On Thu, Jan 14, 2010 at 10:36 PM, Conrad T. [email protected]
wrote:

you would be using the following techniques within your app:
Sorry, I should have explained the intent a bit more. Our goal with
this test wasn’t to model a real-world scenario (we’re not doing
25,000 creates in the test suite or app, obviously). The point is that
AR::B.find & AR::B.create certainly appear significantly slower in
2.3.5 than in 2.1.2.

a) ActiveRecord::Extensions for bulk imports
b) Memcached for caching, because you shouldn’t keep performing the same query
that returns the same result set. For example, if 100,000 people
generated this query, your database would be hit 100,000 times
instead of once.
c) Wrap bulk operations that hit the database in
Model.transaction do … end, because without it a transaction is
generated for each create operation.
d) Queries may also be concurrent rather than purely sequential as they appear
in your example code. Using something like

Thanks for these. They’re all things we’re actually doing in the app
already. Of course there are still things we could optimize more
(there always are), but I wanted to find out whether this apparent
slowdown in AR::B.find & AR::B.create was known.

-Ben