I have a very large rails app running on 4 instances of mongrel on a P4
server with 1GB RAM. Not the absolute best setup, but the server has
been optimized and the application has been running extremely fast for
the past few months.
I’m having one serious problem, however… There’s a specific action that
pushes MySQL to 99% CPU on the server and in most cases doesn’t even
load; I end up with a “520 Proxy Error, Reason: Error reading from
remote server” error from mongrel.
The action in question is really like any other; it grabs images tagged
with a specific tag and paginates them:
I can’t figure out what’s going on, it seems to run fine on my
localhost, but simply kills the production server. The rails log looks
normal, the query is very large after the associations, but I have
similar queries that run just fine.
How could I go about figuring out what the problem is?
Any help would be extremely appreciated!
How many images/tags are in the database? paginate is not very
efficient
when it comes to large datasets as it grabs them all (at least that’s my
memory).
I’d take a look at the dev log and the SQL being generated and see
what it is that’s going on. Perhaps pass those queries into mysql
prefixed with "EXPLAIN " to see if it’s using your indexes or not…
How could I go about figuring out what the problem is?
The problem is that paginate uses :limit and :offset which can’t work
well with eager joins (your :include => ‘tags’) since all the matching
rows (perhaps thousands or millions) have to be pulled into Ruby,
parsed, then limited and offset. To put it mildly, this does not
scale.
Remove the :include => 'tags' to regain nearly all your performance at
the minor cost of 12 additional queries (one per image on the page) to
pull tags.
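A toy pure-Ruby model of the difference (the row count and page size are invented for illustration): with the eager join, every matching row is materialized in Ruby before the page slice is taken, whereas a database-side LIMIT/OFFSET only ever returns one page of rows.

```ruby
PER_PAGE = 12

# Pretend these are the ids of every image matching the tag.
all_matching = (1..10_000).to_a

# What paginate + :include effectively does: pull everything into
# Ruby, then slice one page out -- cost grows with the whole result set.
def slice_in_ruby(rows, page, per_page)
  rows[(page - 1) * per_page, per_page]
end

# What a plain LIMIT/OFFSET query does: only one page of rows ever
# leaves the database (simulated here with a lazy enumerator).
def slice_in_db(rows, page, per_page)
  rows.lazy.drop((page - 1) * per_page).first(per_page)
end

slice_in_ruby(all_matching, 3, PER_PAGE) # => ids 25 through 36
```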
Generally speaking, to troubleshoot database issues, look at the slow
queries in your production.log and use EXPLAIN in MySQL to
see why they’re performing poorly. Luckily, in this case, it’s just a
matter of returning way too much data.
I would also recommend using the paginator gem instead of the built-in
pagination of Rails. It is much more efficient. Since I swapped them
out, I’ve had no problems.
Amazing! Each post got me a bit closer to understanding the problem.
It was definitely missing indexes; there are about 4,000 tags in the
table, hence it was doing a full scan to find matching results.
I added 2 indexes on the images_tags table, on image_id and tag_id,
which seems to have drastically sped things up. However, there’s still
room for improvement.
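For reference, in a migration those two indexes would be declared with something like add_index :images_tags, :image_id and add_index :images_tags, :tag_id (assuming that table name). A toy pure-Ruby model of why they help, with invented data:

```ruby
# 4,000 fake join-table rows.
images_tags = (1..4_000).map { |i| { :image_id => i, :tag_id => i % 40 } }

# Without an index: a full scan compares every row against the condition.
full_scan = images_tags.select { |row| row[:tag_id] == 7 }

# With an index: build a lookup structure once (roughly what the B-tree
# does), then each query is a direct fetch instead of 4,000 comparisons.
index_on_tag_id = images_tags.group_by { |row| row[:tag_id] }
indexed_lookup  = index_on_tag_id[7]

full_scan == indexed_lookup # => true -- same rows, far less work per query
```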
The query analyzer plugin is awesome, since it breaks down all your
queries in the production log.
Also, if you store all your images in the database, this could be the
main cause of the slowness: sending back so much data can take a
while, even if the actual query is quick. I don’t know if it would be
possible, but consider using the Amazon S3 service if your number of
images and requests will increase. Then just keep a reference to the
file names in the database; this should improve the efficiency more.