Rails Concurrent Requests

Hi, I'm building a large Rails application, and I really like the way Rails handles the usual problems of building a web app.
Yesterday I found myself in a deadlock. Why? Because my Rails app has a feature that calls another app to create some things, and that app has a listener that calls my app back to say "this was created". Those are two separate features. So in one request I was calling the other app and waiting for its response, while the other app was calling me… and I found that my Rails app cannot handle two requests at the same time. I started looking for the cause of that problem; at first I thought it was development mode, but I quickly realized that was not it. I've been googling for an answer, but I keep finding strange answers that I can't believe. That's why I'm writing here.

Is it true that a single Rails app instance can't handle more than one request at the same time?

I've looked at the Passenger solution for this, and the Mongrel one, and others, but the solutions are app pools and things like that, and nobody explains this properly. I need a direct answer to this question.

I know some workarounds, like clustering or pools, but that's not what I want right now.

Thanks!

On Oct 17, 3:20 pm, German [email protected] wrote:

Is it true that a single Rails app instance can't handle more than one request at the same time?

It used to be true, but as of 2.2.2 you can turn on thread safe mode. How much you benefit depends on which Ruby interpreter you use, what your app does, etc.

Fred
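For reference, turning that mode on is a one-line change in the environment config; a minimal sketch below, assuming the usual Rails 2.2 file layout:

    # config/environments/production.rb (Rails 2.2+)
    # Drops the global dispatch lock so a single app instance can serve
    # requests on several threads at once. Your own code (and any plugins)
    # must be thread safe, and the app server has to actually dispatch on
    # multiple threads for this to make a difference.
    config.threadsafe!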

First of all, thanks Frederick for the fast response!

I've been trying this, and I found a BIG performance improvement. On the other hand, I can't believe this is a feature that I have to enable and is not the default. What is the use of a web application that can't handle two requests at the same time? IMO that is useless.
Are there any concerns about this? Is this feature ready for production, or is it experimental?

I've read in some places that you must be sure that your code is thread safe, and that is obvious. Is there any other "problem" with that feature?

Thanks

On Oct 17, 1:32 pm, Frederick C. [email protected]

I would use JRuby, because it implements Ruby threads with Java threads, so it is thread safe and can use multiple cores like any J2EE web app. You can see this video about JRuby features:

(including threads and GC)

Riccardo.
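To see the difference Riccardo is pointing at, here is a small plain-Ruby sketch (no Rails; the workload is made up for illustration): under JRuby each Thread.new below is backed by a JVM thread, so the busy loops can occupy several cores at once, while MRI 1.8 time-slices them inside a single native thread.

    # Four threads of CPU-bound busy work.
    threads = (1..4).map do
      Thread.new do
        1_000_000.times { Math.sqrt(rand) }
      end
    end
    # Wait for all workers to finish.
    threads.each { |t| t.join }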

On Oct 17, 11:53 pm, Frederick C. [email protected]

The problem is that my app uses a lot of external partners… connecting to web services, so a single request can take between 2 and 8 seconds… imagine 100 people using my app :S…
Can you elaborate a bit on this "no multicore" issue? How can one thread block the entire interpreter if I have config.threadsafe! on?

On Oct 17, 8:53 pm, Frederick C. [email protected]

On Oct 17, 10:28 pm, German [email protected] wrote:

First of all, thanks Frederick for the fast response!

I've been trying this, and I found a BIG performance improvement. On the other hand, I can't believe this is a feature that I have to enable and is not the default. What is the use of a web application that can't handle two requests at the same time? IMO that is useless.

An awful lot of Rails applications seem to get along just fine (especially as MRI's threading means that you don't take advantage of multiple cores, one thread can block the entire VM, etc.).

Fred

On Sun, Oct 18, 2009 at 2:54 PM, German [email protected] wrote:

The problem is that my app uses a lot of external partners… connecting to web services, so a single request can take between 2 and 8 seconds… imagine 100 people using my app :S…
Can you elaborate a bit on this "no multicore" issue? How can one thread block the entire interpreter if I have config.threadsafe! on?

In short, each thread must obtain the Global Interpreter Lock (GIL) before it can execute. Thus, enabling config.threadsafe! will let you accept more than one request at a time, but they are not processed in parallel; rather, the interpreter runs them in FIFO (i.e. basic queue) order.

-Conrad
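A quick way to see the effect Conrad describes is to time the same CPU-bound work done serially and then split across two threads (the workload here is made up for illustration). On MRI the two timings come out roughly equal, because only the thread holding the lock makes progress at any instant; on JRuby the threaded run finishes in close to half the time on a multi-core machine.

    require 'benchmark'

    work = lambda { 2_000_000.times { Math.sqrt(rand) } }

    # Same total work, first serially, then split over two threads.
    puts Benchmark.measure { 2.times { work.call } }
    puts Benchmark.measure {
      threads = (1..2).map { Thread.new { work.call } }
      threads.each { |t| t.join }
    }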

On Oct 18, 10:54 pm, German [email protected] wrote:

The problem is that my app uses a lot of external partners… connecting to web services, so a single request can take between 2 and 8 seconds… imagine 100 people using my app :S…
Can you elaborate a bit on this "no multicore" issue? How can one thread block the entire interpreter if I have config.threadsafe! on?

Because MRI's threading isn't great - Ruby threads don't map to native threads. Pure Ruby code shouldn't block the entire interpreter, but C code can (e.g. a MySQL query). JRuby is much better in this respect.

Fred
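That distinction matters for the slow web-service calls mentioned earlier. Net::HTTP is implemented in pure Ruby on top of the socket layer, so while one thread waits on the network the scheduler can run the others, and several slow calls overlap; a C extension that blocks without yielding (the old mysql driver was the classic example) would stall them all. A rough sketch, with placeholder URLs:

    require 'net/http'
    require 'uri'
    require 'benchmark'

    # Three slow external calls issued from three threads. The elapsed
    # time is close to the slowest single call, not the sum of all three,
    # because the socket waits let the other threads run.
    urls = %w[http://example.com/ http://example.org/ http://example.net/]
    puts Benchmark.measure {
      threads = urls.map do |url|
        Thread.new { Net::HTTP.get(URI.parse(url)) }
      end
      threads.each { |t| t.join }
    }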

German wrote:

Is there a solution for this issue? I'm using a Postgres DB. As far as I understand what you're saying, every access to a native library will be locked, and only one thread at a time will get through.

Yes: use one of the pooled solutions like Passenger. I know you may
think it silly, but at the moment, it’s the best way if you’re not using
JRuby.

On Oct 18, 8:06 pm, Frederick C. [email protected]

Best,

Marnen Laibow-Koser
http://www.marnen.org
[email protected]

Is there a solution for this issue? I'm using a Postgres DB. As far as I understand what you're saying, every access to a native library will be locked, and only one thread at a time will get through.

On Oct 18, 8:06 pm, Frederick C. [email protected]

Thanks, I have a Passenger environment with a pool of apps… but this scheme is eating my server… what if I have 10,000 requests per second… is the solution JRuby?

On Oct 19, 11:29 am, Marnen Laibow-Koser <rails-mailing-l…@andreas-

On the other hand, clustering or pooling are solutions for reliability, not for performance. Clustering does not give better performance; in fact, most of the time it gives you less performance, and one of the reasons is the load balancer. It's true that every application has a limit on the requests it can handle, all servers have that limit, and that is where you need a pool or a cluster to "solve" the problem, but if my limit is "one request", I should think I have another problem…

Ok Robert, you are right.

So, if I have calls to native C libraries, my threads will be blocked there. So the only way to get concurrency there is with more than one application?

On Oct 19, 2:35 pm, Robert W. [email protected]

German wrote:

On the other hand, clustering or pooling are solutions for reliability, not for performance.

I disagree. Clustering is primarily for scalability, not reliability (although it provides that as well). Performance actually contributes little to scalability; increasing performance has limited usefulness in an application's ability to scale. Clustering can provide theoretically unlimited scalability.

Clustering does not give better performance; in fact, most of the time it gives you less performance, and one of the reasons is the load balancer.

While there may be some small overhead introduced by the load balancer, it certainly does not offset the scalability advantages it provides. If you added just one additional instance of the application through a load balancer, the balancer would have to slow requests down by half to lose overall scalability, and I'm sure it does not introduce that level of overhead; it is more likely negligible.