On 3/31/06 1:13 AM, “Tom M.” [email protected] wrote:
> So, -n 10 would allow 1 Rails request, and 9 concurrent “other”
> requests.
Yes, this is true. Here’s a concrete example. Let’s say you do a -n of 4
(which is really too low) and you have one Rails controller/action that
takes 1 minute. Here’s a faked timeline:
Activity
1 Enters mongrel, sends header (thread switch)
2 Enters mongrel, sends header, locks Rails, routed to controller/action
2 Processes for 60 seconds.
1 Held in queue waiting for #2
3 Enters mongrel, sends header, gets parsed. Blocked by #2.
4 Enters mongrel, sends header, gets parsed. Blocked by #2.
5 Enters mongrel, count of concurrent is > 4, close socket.
6 Enters mongrel, count of concurrent is > 4, close socket.
2 Finishes, releases Rails lock.
1 Gets Rails lock, routed to controller/action.
2 Response goes out (notice that locking isn’t stopping the response).
3 Blocked by #1 now.
4 Blocked by #1 now.
The numbers are just request/client numbers to show you how they’d interact.
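As a rough sketch of that model (the Mutex and method names here are illustrative, not Mongrel’s actual internals): header parsing runs on any thread, but only one request at a time gets inside Rails, and the response goes out after the lock is released.

```ruby
# Illustrative sketch only: one global lock around Rails dispatch,
# everything else free-threaded. RAILS_LOCK and handle_request are
# made-up names, not Mongrel's real API.
RAILS_LOCK = Mutex.new

def handle_request(id, work_seconds, log)
  log << "#{id} parsed"          # parsing happens outside the lock
  RAILS_LOCK.synchronize do      # only one request inside Rails at a time
    log << "#{id} in rails"
    sleep work_seconds           # the controller/action runs here
  end
  log << "#{id} response out"    # response streams after the lock drops
end

log = Queue.new
threads = [1, 2, 3].map do |id|
  Thread.new { handle_request(id, 0.01, log) }
end
threads.each(&:join)
```

Every request logs “parsed” without waiting, but the “in rails” lines can never overlap, which is exactly why a long action stalls the whole queue behind it.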
In general the only thing that’s locked is Rails. Every other part of
Mongrel is as thread-safe as Ruby allows. This is why, if you have a bunch
of long-running requests, you’ll need a larger number of backend mongrel
handlers to deal with it.
Now, let’s look at your other problem of having a Rails controller call
back into the same mongrel’s Rails again. Rails is locked and you’re
processing that request. Inside this locked request you then make another
request back to Rails. This request comes in, gets to the lock, and stops.
You’ve basically created a deadlock, since your controller/action is
waiting on your HTTP client to finish, but the HTTP client can’t finish
until the controller/action exits.
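Here’s a small sketch of that circular wait. A real callback request would block on the lock forever, so this example uses Mutex#try_lock for the inner request purely so it terminates and can report what would have happened; inner_request is a made-up stand-in for the second HTTP request.

```ruby
# Sketch of the self-call deadlock: the outer "controller" holds the
# Rails lock while it waits on an inner request that needs the same lock.
RAILS_LOCK = Mutex.new

def inner_request
  if RAILS_LOCK.try_lock        # a real blocking lock would hang here forever
    begin
      :response
    ensure
      RAILS_LOCK.unlock
    end
  else
    :would_deadlock             # the lock is held by the outer request
  end
end

result = RAILS_LOCK.synchronize do
  # The controller/action is running; now it calls back into the same mongrel
  # and waits for the answer -- but the answer needs the lock it holds.
  Thread.new { inner_request }.value
end

p result  # => :would_deadlock
```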
The solution: with any web application you really should design long-running
requests to use a queuing system rather than make the client wait. On any
web server platform you’ll eventually fill up the reasonable number of
threads you can handle or sockets that can be open, and your performance
will go to nothing.
A better approach is to set up a DRb server that handles work you give it,
like a queue. You pass this DRb server “stuff to do”, and then quickly
return to the user with a status. Throw in some fancy ajax that then checks
a second controller to see if the DRb request is finished and displays this
to the user. When it’s done, your ajax then whips over to the “it’s done”
action and displays the results from the DRb server.
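A rough sketch of that queue pattern, with a plain Ruby object standing in for the DRb server (JobServer, enqueue, and status are invented names for illustration; a real setup would expose this object over druby:// and call it from the controllers):

```ruby
# Illustrative job server: enqueue returns a ticket immediately, work runs
# in the background, and a status action polls for the result.
class JobServer
  def initialize
    @results = {}
    @lock    = Mutex.new
    @next_id = 0
  end

  # Accept work, hand back a ticket right away, run the job in the background.
  def enqueue(&job)
    id = @lock.synchronize { @next_id += 1 }
    Thread.new do
      value = job.call
      @lock.synchronize { @results[id] = value }
    end
    id
  end

  # The "is it done yet?" call the ajax-polled controller would make.
  def status(id)
    @lock.synchronize do
      @results.key?(id) ? [:done, @results[id]] : [:pending, nil]
    end
  end
end

server = JobServer.new
ticket = server.enqueue { sleep 0.05; 6 * 7 }  # pretend this is the slow work
sleep 0.2                                       # the ajax poll loop goes here
p server.status(ticket)                         # => [:done, 42]
```

The point is that the controller that calls enqueue returns to the user immediately with the ticket, so no request ever sits on the Rails lock for a minute.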
Hope that helps.
Zed A. Shaw
http://mongrel.rubyforge.org/