I have a Ruby application where part of the functionality requires running
very time-consuming queries against a db slave. My problem is that when
these queries are run, the application stalls until they finish. The
queries are done like @result = Model.find(:all, :conditions =>
conditions, :joins => joins), and they use the available indexes as far
as possible.
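For illustration, the calls are shaped roughly like this (the Order
model, the conditions and the join below are made-up stand-ins for the
real, application-specific ones):

  # hypothetical example of the long-running finder described above
  conditions = ["orders.created_at > ?", 1.year.ago]
  joins      = "INNER JOIN line_items ON line_items.order_id = orders.id"
  @result    = Order.find(:all, :conditions => conditions, :joins => joins)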
So, I need to find a way to allow my rails application to manage
simultaneous queries. Problem is, I don’t really know where to start /
where the issue lies. I know it’s a configuration issue somewhere, but I
don’t know if it’s my web server or application that needs to be
reconfigured.
On 15 Apr 2008, at 12:43, Markus Svennerstål wrote:
So, I need to find a way to allow my rails application to manage
simultaneous queries. Problem is, I don’t really know where to start /
where the issue lies. I know it’s a configuration issue somewhere, but I
don’t know if it’s my web server or application that needs to be
reconfigured.
This is a fundamental issue with rails: a single rails instance (i.e. a
mongrel, for example) cannot handle more than one request at a time. One
approach is to offload such tasks to something like backgroundrb so that
the mongrel is freed up straight away.
Fred
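As a rough sketch of that approach, a BackgrounDRb worker could look
something like the following. (Worker and method names are illustrative,
and the exact base class and result-passing API differ between
BackgrounDRb versions, so treat this as a sketch rather than the
definitive interface.)

  # Sketch of a worker that runs the slow query off the request cycle.
  class QueryWorker < BackgrounDRb::MetaWorker
    set_worker_name :query_worker

    def run_query(args)
      result = Model.find(:all, :conditions => args[:conditions],
                                :joins      => args[:joins])
      # stash the result where the app can poll for it later
      cache[:query_result] = result
    end
  end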
Hmm, I should’ve probably mentioned this in the first post. But I have a
dev environment (it used to be the live env. until I got a dedicated
machine for it). In the development environment, parallel queries work
perfectly fine… But in my live environment, the problem above happens.
I’ve checked with my db admins, and there’s no max_connections limit
set for the mysql users for either application. The applications also
use the same source code and the same httpd configuration. Some details
regarding the setup for the two:
On 15 Apr 2008, at 15:23, Markus Svennerstål wrote:
You’ll see similar issues eventually; it’s just that it’s a fastcgi
worker that gets blocked instead. Depending on your settings, apache may
decide to fire up another fastcgi worker, or it will wait for one to
become available.
Do you think this could be handled with apache MPM worker or something
like that? I reckon backgroundrb could do it, but it would be a lot of
development work since it’s not cronjob-style requests that block the
application, but ad-hoc queries that dig through our databases for
info.
Don’t know really, but I will say that backgroundrb has been great for
that purpose for us. You just need a worker that does your database
digging and returns the results. Your controller just fires off the
work request to backgroundrb and then shows a view which ajax-polls
backgroundrb until it’s done, before displaying the result data
retrieved from backgroundrb.
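In controller terms that pattern might look roughly like this, assuming
the MiddleMan worker/async_*/ask_result interface of the BackgrounDRb
releases from that era (controller, keys, conditions and joins are all
illustrative, and real code should not feed raw request params into
:conditions or :joins):

  class ReportsController < ApplicationController
    def create
      # fire-and-forget: hand the slow query to the worker, return at once
      conditions = ["orders.created_at > ?", 1.year.ago]   # placeholder
      joins      = "INNER JOIN line_items ON line_items.order_id = orders.id"
      MiddleMan.worker(:query_worker).async_run_query(
        :arg => { :conditions => conditions, :joins => joins })
      render :action => "waiting"   # this view ajax-polls the status action
    end

    def status
      # nil until the worker has written its result into the cache
      @result = MiddleMan.worker(:query_worker).ask_result(:query_result)
      @result ? render(:partial => "results") : render(:text => "pending")
    end
  end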
On Apr 15, 10:23 am, Markus Svennerstål <rails-mailing-l…@andreas-s.net> wrote:
Thanks for the info btw, I did some
further testing, and I discovered that it actually doesn’t stall until
there are two queries blocking the application… I also discovered that
the same applies when running two queries in the dev environment…
Look at your queries and locking, especially if you’re using MyISAM
tables with MySQL: MyISAM only has table-level locking, so your
long-running SELECT holds a read lock on the whole table, blocking any
writes (and a queued write in turn blocks later reads). The fact that it
locks up once two queries get going tells me you might have resource
contention or a deadlock. If you have a table where frequent writes and
frequent reads occur all the time, especially if you have selects and
updates within single transactions, you should think about switching the
table to InnoDB instead, or setting up another table and doing some of
the heavy lifting for your query with INSERT triggers to preprocess some
of the information.
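The engine switch itself, if you go that route, is a one-line ALTER; as
a sketch from a Rails migration (the reports table name is hypothetical):

  class ConvertReportsToInnodb < ActiveRecord::Migration
    def self.up
      # plain DDL; MySQL rebuilds the table with the new engine
      execute "ALTER TABLE reports ENGINE = InnoDB"
    end

    def self.down
      execute "ALTER TABLE reports ENGINE = MyISAM"
    end
  end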
On 15 Apr 2008, at 14:44, Markus Svennerstål wrote:
It could be blind luck: if the request isn’t routed to the mongrel that
is busy with the long-running process, then you won’t see the problem.
Fred
I don’t use mongrel; I use Apache with mod_fastcgi… Maybe it’s the same
issue, but I’ve been observing the problem for about two months and it
persists through reboots etc. Thanks for the info btw, I did some
further testing, and I discovered that it actually doesn’t stall until
there are two queries blocking the application… I also discovered that
the same applies when running two queries in the dev environment…
Look at your queries and locking, especially if you’re using MyISAM
tables with MySQL: MyISAM only has table-level locking, so your
long-running SELECT holds a read lock on the whole table. [...] you
should think about switching the table to InnoDB instead.
Good point. You can still get deadlocks with InnoDB (they’ll time out
eventually). From your mysql prompt,

  SHOW FULL PROCESSLIST;
  SHOW INNODB STATUS;

might show you useful info.
Fred
I had a look at our tables and verified that all the relevant ones are
InnoDB. Deadlocking isn’t a problem for my application because it uses a
slave (so it’s all SELECT statements), beyond the fact that overly large
queries can lock some tables long enough to stop replication. So it’s
not a db issue…
Anyway, I have already started working on a fix with backgroundrb… I
did some basic testing using it via rake, and the jobs seem to run even
when the app is stalled. Thanks for your help.