Lighty+Mongrel: More than one connection per client?

I have an action which requests another URL from the same site (for
testing purposes), but it keeps timing out. If I use an off-site URL,
the action works fine. I also notice that I’m only able to request one
page at a time from my server - the rest just spin until the previous
requests complete. Is this some setting in Lighty, Mongrel, or my
browser (Firefox - I played around with the pipelining settings, to no
effect)?

Joe


Each Mongrel has 1 Rails process inside it. Just like WEBrick. If you
need concurrent access, you’ll need multiple mongrels. Just like you
need multiple FCGIs if you want the same with lighttpd.

To use multiple mongrels, you’ll need to place them behind a proxy
like Apache or lighttpd. This will benefit you in any case because
those web servers will then handle static files.
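For what it’s worth, a minimal lighttpd fragment along those lines might look like this (the ports, the balance strategy, and the static-file pattern are made up for illustration — check your lighttpd version’s mod_proxy docs for the exact options):

```
# Hypothetical lighttpd.conf fragment: send dynamic requests to a
# pool of mongrels; lighttpd serves static files itself.
server.modules += ( "mod_proxy" )

$HTTP["url"] !~ "\.(css|js|gif|jpg|png)$" {
  proxy.balance = "fair"
  proxy.server  = ( "" => (
    ( "host" => "127.0.0.1", "port" => 8000 ),
    ( "host" => "127.0.0.1", "port" => 8001 ),
    ( "host" => "127.0.0.1", "port" => 8002 )
  ) )
}
```

Each port corresponds to one running mongrel, so three mongrels here means up to three Rails requests in flight at once.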

David Heinemeier H.
http://www.loudthinking.com – Broadcasting Brain
http://www.basecamphq.com – Online project management
http://www.backpackit.com – Personal information manager
http://www.rubyonrails.com – Web-application framework

I do have multiple Mongrels served via Lighty’s mod_proxy ;). Mongrel’s
-n option should also affect concurrency. I don’t know what’s up with
this.

Joe

On Mar 30, 2006, at 9:47 PM, Joe wrote:

> I do have multiple Mongrels served via Lighty’s mod_proxy ;). Mongrel’s
> -n option should also affect concurrency. I don’t know what’s up with
> this.

-n is for general HTTP requests. Rails requests must be locked because
Rails’ internals are not thread-safe.

So, -n 10 would allow one Rails request and 9 concurrent “other”
requests.


– Tom M.

Joe,

Not sure what it is you’re really experiencing, but the -n option changed
back in the 0.3.11 release. There is no longer a single fixed set of
threads; instead, one thread per request is fired off and allowed to
finish. -n now means “only allow N concurrent requests”. This means that
if you set it to 20, only 20 concurrent requests can come in, and if a
new request comes in that would put it at 21, then the socket is closed
rather rudely.

The recommended setting (incidentally the default) is 850. This was
based on testing with Rails and on how select defaults to 1024 file
descriptors.
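As a toy sketch of what that cap means (this is simplified Ruby for illustration, not Mongrel’s actual accept loop):

```ruby
# Toy model of the "-n" cap described above: requests under the cap
# are handled (each would get its own thread); anything that would
# exceed it has its socket closed immediately instead of being queued.
MAX_CONCURRENT = 4   # i.e. "-n 4"; the real default is 850

active = 0
log = []
(1..6).each do |request_id|
  if active < MAX_CONCURRENT
    active += 1                          # request admitted, thread fired off
    log << [request_id, :handled]
  else
    log << [request_id, :socket_closed]  # the "rather rude" close
  end
end
puts log.inspect
```

With -n 4 and six simultaneous arrivals, requests 5 and 6 get dropped rather than waiting.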

If you can, try to catch me on irc.freenode.org in the #rubyonrails
channel as zedas. I’ll help you out there.

Zed

Ahh… thanks, I think I understand now!

I had just created an action that posts data to another site, and I set
up a test action to log what was being posted; the first action’s post
(via Net::HTTP) to that test action was what kept timing out. I figured
there was probably a deadlock issue, but also figured maybe a different
Mongrel process could handle it (I have 5 of them, and the site doesn’t
have much traffic right now). Maybe I’ll have to see about DRb.

Joe

On 3/31/06 1:13 AM, “Tom M.” [email protected] wrote:

> So, -n 10 would allow one Rails request and 9 concurrent “other”
> requests.

Yes, this is true. Here’s a concrete example. Let’s say you do a -n of 4
(which is really too low) and you have one Rails controller/action that
takes 1 minute. Here’s a faked timeline:

Request  Activity
   1     Enters mongrel, sends header (thread switch)
   2     Enters mongrel, sends header, locks Rails, routed to controller/action
   2     Processes for 60 seconds.
   1     Held in queue waiting for #2.
   3     Enters mongrel, sends header, gets parsed. Blocked by #2.
   4     Enters mongrel, sends header, gets parsed. Blocked by #2.
   5     Enters mongrel, count of concurrent is > 4, close socket.
   6     Enters mongrel, count of concurrent is > 4, close socket.
   2     Finishes, releases Rails lock.
   1     Gets Rails lock, routed to controller/action.
   2     Response goes out (notice that locking isn’t stopping the response).
   3     Blocked by #1 now.
   4     Blocked by #1 now.

The numbers are just request/client numbers to show you how they’d
interact. In general, the only thing that’s locked is Rails. Every other
part of Mongrel is as thread-safe as Ruby allows. This is why, if you
have a bunch of long-running requests, you’ll need a larger number of
backend mongrel handlers to deal with it.

Now, let’s look at your other problem of having a Rails controller call
back onto the same mongrel’s Rails again. Rails is locked and you’re
processing that request. Inside this locked request you then do another
request back to Rails. This request comes in, gets to the lock, and
stops. You’ve basically created a deadlock, since your controller/action
is waiting on your HTTP client to finish, but the HTTP client can’t
finish until the controller/action exits.
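The shape of that deadlock can be sketched in a few lines of Ruby (simplified: a single Mutex stands in for the one Rails lock a mongrel holds; the names here are made up for illustration):

```ruby
# Sketch only: RAILS_LOCK stands in for the per-mongrel Rails lock.
# The outer "request" holds the lock; the inner request it triggers
# against the same mongrel can never acquire it.
RAILS_LOCK = Mutex.new

result = nil
RAILS_LOCK.synchronize do    # outer controller/action running
  inner = Thread.new do
    # the callback request arriving at the same mongrel:
    RAILS_LOCK.try_lock ? :served : :stuck_waiting_on_lock
  end
  result = inner.value
end
puts result                  # => stuck_waiting_on_lock
```

Here the inner request gives up immediately (try_lock), so the program finishes; a real Net::HTTP call would block on the lock forever, which is exactly the timeout Joe is seeing.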

The solution: with any web application you really should design
long-running requests to use a queuing system rather than make the
client wait. On any web server platform you’ll eventually fill up the
reasonable number of threads you can handle, or sockets that can be
open, and your performance will go to nothing.

A better approach is to set up a DRb server that handles work you give
it, like a queue. You pass this DRb server “stuff to do”, and then
quickly return to the user with a status. Throw in some fancy Ajax that
then checks a second controller to see if the DRb request is finished
and displays this to the user. When it’s done, your Ajax then whips over
to the “it’s done” action and displays the results from the DRb server.
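A bare-bones sketch of that pattern using Ruby’s stdlib DRb (the class name, port, and the stand-in “slow work” are all invented for illustration — this is the enqueue-then-poll idea, not production code):

```ruby
require 'drb/drb'

# A minimal work queue exposed over DRb. A background thread drains
# the queue; clients enqueue jobs and poll for results by id.
class JobQueue
  def initialize
    @results = {}
    @lock    = Mutex.new
    @work    = Queue.new
    Thread.new do
      loop do
        id, payload = @work.pop
        value = payload.reverse              # stand-in for slow work
        @lock.synchronize { @results[id] = value }
      end
    end
  end

  def enqueue(id, payload)
    @lock.synchronize { @results[id] = :pending }
    @work << [id, payload]
  end

  def status(id)
    @lock.synchronize { @results[id] }
  end
end

DRB_URI = 'druby://127.0.0.1:8787'
DRb.start_service(DRB_URI, JobQueue.new)     # the standalone DRb server

# What the Rails side would do (normally from another process):
queue = DRbObject.new_with_uri(DRB_URI)
queue.enqueue('job-1', 'mongrel')            # controller: hand off, return fast
sleep 0.05 while queue.status('job-1') == :pending   # the Ajax poll
puts queue.status('job-1')
```

The controller action only does the enqueue and returns immediately; the polling loop at the end is what a second “is it done yet?” action would do on each Ajax hit, so no Rails lock is ever held across the slow work.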

Hope that helps.

Zed A. Shaw

http://mongrel.rubyforge.org/