Design flaw? - num_processors, accept/close

Rails instances themselves are almost always single-threaded, whereas
Mongrel, and its acceptor, are multithreaded.
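
(To make that concrete, here is a toy sketch, not Mongrel's actual source, of a multithreaded acceptor in front of a single-threaded "Rails" guarded by a mutex. Every accepted connection gets its own thread, but they all queue on the one lock:)

  require 'socket'
  require 'thread'

  guard  = Mutex.new                       # stand-in for Mongrel's Rails guard
  server = TCPServer.new('127.0.0.1', 9000)

  loop do
    # The acceptor happily keeps accepting...
    Thread.new(server.accept) do |client|
      # ...but every request still waits its turn to run "Rails".
      guard.synchronize do
        sleep 1                            # pretend this is a Rails action
        client.write("HTTP/1.1 200 OK\r\nContent-Length: 3\r\n\r\nok\n")
      end
      client.close
    end
  end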

In a situation with long-running Rails pages this presents a problem for
mod_proxy_balancer.

If num_processors is greater than 1 (default: 950), then Mongrel will
gladly accept incoming requests and queue them if its Rails instance is
currently busy. So even though there are non-busy Mongrel instances,
a busy one can accept a new request and queue it behind a long-running
request.

I tried setting num_processors to 1. But it looks like this is less
than ideal – I need to dig into mod_proxy_balancer to be sure. At
first glance, though, it appears this replaces the queuing problem with
a proxy error. That’s because Mongrel still accepts the incoming
request – only to close the new socket immediately if Rails is busy.

Once again, I do need to set up a test and see exactly how
mod_proxy_balancer handles this… but…

If I understand the problem correctly, then one solution might be moving
lines 721 thru 734 into a loop, possibly in its own method, which does
something like this:

def myaccept
  while true
    # Check first to see if we can handle the request. Let the client
    # worry about connect timeouts.
    return @socket.accept if @workers.list.length < @num_processors

    # Otherwise, reap dead workers until the count drops back below the
    # limit, then go around and accept.
    while reap_dead_workers >= @num_processors
      sleep @loop_throttle
    end
  end
end

720       @acceptor = Thread.new do
721         while true
722           begin
723             client = @socket.accept
724
725             if $tcp_cork_opts
726               client.setsockopt(*$tcp_cork_opts) rescue nil
727             end
728
729             worker_list = @workers.list
730
731             if worker_list.length >= @num_processors
732               STDERR.puts "Server overloaded with #{worker_list.length} processors (#@num_processors max). Dropping connection."
733               client.close rescue Object
734               reap_dead_workers("max processors")
735             else
736               thread = Thread.new(client) {|c| process_client(c) }
737               thread[:started_on] = Time.now
738               @workers.add(thread)
739
740               sleep @timeout/100 if @timeout > 0
741             end

Mod_proxy_balancer is just a weighted round-robin, and doesn’t
consider actual worker load, so I don’t think this will help you. Have
you looked at Evented Mongrel?

Evan

But it is precisely because of mod_proxy_balancer’s round-robin
algorithm that I think the fix would work. If we give
mod_proxy_balancer the option of timing out on connect, it will iterate
to the next mongrel instance in the pool.

Of course, I should look at Evented Mongrel, and swiftiply.

But still, my original question remains. I think that Mongrel would
play much more nicely with mod_proxy_balancer out-of-the-box if it
refused to call accept() until worker_list.length has been reduced. I
personally prefer that to request queuing, and certainly to
“accept then drop without warning”.

The wildcard, of course, is what mod_proxy_balancer does in the drop
without warning case – if it gracefully moves on to the next Mongrel
server in its balancer pool, then all is well, and I’m making a fuss
about nothing.

Here’s an armchair scenario to better illustrate why I think a fix would
work. Again, I need to test to confirm that mod_proxy_balancer doesn’t
currently handle the situation gracefully –

Consider:

  • A pool of 10 Mongrels behind mod_proxy_balancer.
  • One Mongrel, say #5, gets a request that takes one minute to run
    (e.g., a complex report).
  • The system as a whole receives 10 requests per second.

What happens (I think) with the current code and mod_proxy_balancer:

  • Mongrel instance #5 will continue receiving a new request every
    second.
  • Over the one-minute period, 10% of requests will either be
    • queued and unnecessarily delayed (num_processors > 60), or
    • picked up and dropped without warning (num_processors == 1)
    (rough arithmetic for that 10% figure is sketched just below).
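
To spell out that 10% figure, here is the back-of-the-envelope arithmetic (the numbers are just the assumptions from the scenario above):

  pool_size        = 10   # Mongrels behind mod_proxy_balancer
  requests_per_sec = 10   # system-wide load
  stuck_seconds    = 60   # how long instance #5 is tied up

  total_requests = requests_per_sec * stuck_seconds    # => 600
  sent_to_stuck  = total_requests / pool_size           # round-robin => 60
  share          = sent_to_stuck.to_f / total_requests  # => 0.10, i.e. 10%

  puts "#{(share * 100).round}% of requests land on the busy Mongrel"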

What should happen if mongrel does not invoke “accept” when all workers
are busy:

  • Mongrel instance #5 will continue getting new connection requests
    every second
  • mod_proxy_balancer connect() will time out
  • mod_proxy_balancer will continue cycling through the pool till it
    finds an available Mongrel instance

Again, if all is well under the current scenario – if Apache
mod_proxy_balancer gracefully moves on to another Mongrel instance after
the accept/drop – then I’ve just made a big fuss over a really dumb
question…

Oh, I misunderstood your code.

I don’t think mod_proxy_balancer gracefully moves on so perhaps you
are right. On the other hand, I thought when a worker timed out it got
removed from the pool permanently. I can’t seem to verify that one way
or the other in the Apache docs, though.

Evan

Ah, no, they are only marked as non-operational until the retry timeout
has elapsed. So I guess if you had extremely small timeouts in both
Apache and Mongrel it would work OK.

Someone else respond, because clearly I don’t know what I’m talking
about. :)

Evan

Typo – the following is incorrect:

With the current Mongrel code, BalancerMember max > 1 and Mongrel
num_processors > 1 triggers the accept/close bug.

should be:

With the current Mongrel code, BalancerMember max > 1 and Mongrel
num_processors = 1 triggers the accept/close bug.

====

I’ve discovered a setting in mod_proxy_balancer that prevents the
Mongrel/Rails request queuing vs. accept/close problem from ever being
reached.

For each BalancerMember:

  • max=1 – caps the number of connections Apache will open to that
    BalancerMember at 1
  • acquire=N – the maximum amount of time (N seconds) to wait to
    acquire a connection to a BalancerMember

So, at a minimum:

BalancerMember http://foo max=1 acquire=1

and I’m using

BalancerMember http://127.0.0.1:9000 max=1 keepalive=on acquire=1 timeout=1
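
Put in context, the full balancer block looks something like this (the balancer name and extra ports are placeholders, not my exact config):

  <Proxy balancer://mongrel_cluster>
    BalancerMember http://127.0.0.1:9000 max=1 keepalive=on acquire=1 timeout=1
    BalancerMember http://127.0.0.1:9001 max=1 keepalive=on acquire=1 timeout=1
    BalancerMember http://127.0.0.1:9002 max=1 keepalive=on acquire=1 timeout=1
  </Proxy>

  ProxyPass / balancer://mongrel_cluster/
  ProxyPassReverse / balancer://mongrel_cluster/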

=====

I experimented with three mongrel servers, and tied one up for 60
seconds at a time calling “sleep” in a handler.

Without the “acquire” parameter, mod_proxy_balancer’s simple round-robin
scheme blocked waiting when it reached a busy BalancerMember,
effectively queuing the request. With “acquire” set, the balancer
stepped over the busy BalancerMember and continued searching through its
round-robin cycle.
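
For anyone who wants to reproduce the test, a handler along these lines ties up one Mongrel for 60 seconds per hit (a sketch of the sort of thing I used; the URI and port are arbitrary):

  require 'rubygems'
  require 'mongrel'

  # Deliberately slow handler: holds its Mongrel busy for 60 seconds per hit.
  class SleepHandler < Mongrel::HttpHandler
    def process(request, response)
      sleep 60
      response.start(200) do |head, out|
        head["Content-Type"] = "text/plain"
        out.write("done sleeping\n")
      end
    end
  end

  server = Mongrel::HttpServer.new("127.0.0.1", "9000")
  server.register("/sleep", SleepHandler.new)
  server.run.join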

So, whether or not Mongrel’s accept/close and request queuing are
issues, there is a setting in mod_proxy_balancer that prevents either
problem from being triggered.

At a bare minimum, for a single-threaded process running in Mongrel

BalancerMember http://127.0.0.1:9000 max=1 acquire=1
BalancerMember http://127.0.0.1:9001 max=1 acquire=1

With all BalancerMembers busy, Apache returns a 503 Server Busy, which
is a heck of a lot more appropriate than a 502 proxy error.

======

It turns out that having Mongrel reap threads before calling accept both
prevents queueing in Mongrel and prevents Mongrel’s accept/close behavior.

But BalancerMembers in mod_proxy_balancer will still need “acquire” to
be set – otherwise proxy client threads will sit around waiting for
Mongrel to call accept – effectively queuing requests in Apache.

Since max=1 acquire=1 steps around the queuing problem altogether, the
reap-before-accept fix, though more correct, is of no practical benefit.

====

With the current Mongrel code, BalancerMember max > 1 and Mongrel
num_processors > 1 triggers the accept/close bug.

Likewise, BalancerMember max > 1 with Mongrel num_processors > 1 runs
into Mongrel’s request queuing…

====

Conclusion —

I’d like to see Mongrel return a 503 Server Busy when an incoming
request hits the num_processor limit.
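
For illustration, the overload branch quoted earlier (lines 731 thru 734) could write a minimal 503 before closing the socket. This is just a sketch of the idea, untested and not a patch against any particular Mongrel version:

  if worker_list.length >= @num_processors
    STDERR.puts "Server overloaded with #{worker_list.length} processors (#@num_processors max). Rejecting connection."
    # Tell the client (and the proxy in front of us) why we are hanging up,
    # instead of silently closing the socket.
    client.write("HTTP/1.1 503 Service Unavailable\r\n" \
                 "Content-Type: text/plain\r\n" \
                 "Content-Length: 12\r\n" \
                 "Connection: close\r\n\r\n" \
                 "Server busy\n") rescue nil
    client.close rescue nil
    reap_dead_workers("max processors")
  else
    # (unchanged: hand the client off to a worker thread as before)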

For practical use, the fix is to configure mod_proxy_balancer so that it
shields against encountering either issue.

Very cool. Can you do a little performance testing to see if it’s more
efficient under various loads than the current way? I would expect it
to make a small but significant difference when you’re near the CPU
saturation point, but not much if you’re below (enough free resources
already) or above (requests will get piled up regardless). It may be
worse in the overloaded situation because no one’s request will get
through – the queue might grow indefinitely instead of getting
truncated.

The 503 behavior seems reasonable.

Evan

We recently ran into exactly this issue. Some Rails requests were making
external requests that were taking 5 minutes (networking issues out of
our control). If your request got queued behind one of these stuck
Mongrels, the experience was terrible. I experimented with adjusting the
mod_proxy_balancer settings to try to get it to fail over to the next
Mongrel (I had hoped that min, max, and smax could all be set to one,
forcing only one connection to a Mongrel at a time), but this didn’t
seem to work.

Solution – I stuck lighttpd in between. Lighttpd has a proxying
algorithm that does exactly this – round-robin, but to the worker with
the lightest load.
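
Roughly, the relevant lighttpd config looks like this (a sketch; hosts and ports are placeholders, and the exact syntax should be checked against your lighttpd version):

  server.modules += ( "mod_proxy" )

  # "fair" sends each request to the least-loaded backend,
  # instead of blind round-robin.
  proxy.balance = "fair"
  proxy.server  = ( "" =>
    ( ( "host" => "127.0.0.1", "port" => 9000 ),
      ( "host" => "127.0.0.1", "port" => 9001 ),
      ( "host" => "127.0.0.1", "port" => 9002 ) ) )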

I’d love to hear that there’s a way to use mod_proxy_balancer, but I
couldn’t get it to work.

–Brian

On Mon, 15 Oct 2007 12:51:47 -0400
“Evan W.” [email protected] wrote:

Ah, no, they are only marked as non-operational until the retry timeout
has elapsed. So I guess if you had extremely small timeouts in both
Apache and Mongrel it would work OK.

Someone else respond, because clearly I don’t know what I’m talking about. :)

I’m confused, isn’t the point of a balancer that it tries all available
backends multiple times before giving up? If m_p_b is aborting on the
first accept that’s denied then it’s broken. It should try every one,
and possibly twice or three times before giving up. Otherwise it’s not
really a “balancer”, but more of a “robinder”.

Also, the proposed solution probably won’t work. If my crufty late
night brain is right, this would mean that the backend will attempt a
connect to a “sleeping” mongrel and either have to wait until the TCP
timeout or just get blocked. Eventually you’re back at the same problem
that you have tons of requests piling up, they’re just piled up in the
OS tcp stack where no useful work can be done. At least piling them in
mongrel means some IO is getting processed.

And, it sounds like nobody is actually trying these proposed solutions.
Anyone got some metrics? Tried Lucas Carlson’s Dr. Proxy yet? Other
solutions? Evented mongrel?


Zed A. Shaw

At least piling them in mongrel means some IO is getting processed.

Ok, that’s the real issue then. When you have a heavy queuing
situation, Ruby can at least schedule the IO among the green threads
whereas Apache has to keep them serialized waiting for a worker to
open up.

Evan

On Mon, 15 Oct 2007 16:43:34 -0700
“Brian Williams” [email protected] wrote:

We recently ran into exactly this issue. Some Rails requests were making
external requests that were taking 5 minutes (networking issues out of
our control).

Now that’s a design flaw. If you’re expecting the UI user to wait for a
backend request that takes 5 minutes, then you need to redesign the
workflow and interface. Do it like asynchronous email, where the user
“sends a request”, “awaits a reply”, “reads the reply”, and doesn’t deal
with the backend processing chain of events.

If done right, you’ll even get a performance boost and you can
distribute the load of these requests out to other servers. It’s also a
model most users are familiar with from SMTP processing.


Zed A. Shaw

On 15 Oct 2007, at 21:52, Robert M. wrote:

I’ve discovered a setting in mod_proxy_balancer that prevents the
Mongrel/Rails request queuing vs. accept/close problem from ever
being reached.

Thanks for that, Robert. We’ve hit exactly the same issue in the
past, but have been unable to find a way to persuade
mod_proxy_balancer to do the right thing. I posted about this issue
here a year or so ago:

http://rubyforge.org/pipermail/mongrel-users/2006-September/001653.html

But was unable to get anyone to take it seriously :(


paul.butcher->msgCount++

Snetterton, Castle Combe, Cadwell Park…
Who says I have a one track mind?

LinkedIn: https://www.linkedin.com/in/paulbutcher
MSN: [email protected]
AIM: paulrabutcher
Skype: paulrabutcher

On 10/15/07, Zed A. Shaw [email protected] wrote:

Tried Lucas Carlson’s Dr. Proxy yet? Other solutions? Evented mongrel?

HAProxy (and some other proxies smarter than mod_proxy_balancer)
solves this problem by letting you set the maximum number of requests
outstanding to any node in the cluster. Setting it to 1 means that it
will only ask a Mongrel instance to serve a request when it’s not
already doing so. Which makes perfect sense with Rails
(single-threaded), especially if you do have something else to serve
static content in this setup.
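
(Roughly like this, as a sketch; the backend names and ports are made up, and the exact option set depends on your HAProxy version:)

  listen mongrels 0.0.0.0:8080
      balance roundrobin
      # maxconn 1: HAProxy queues further requests itself rather than
      # handing a second concurrent request to a busy Mongrel.
      server mongrel0 127.0.0.1:9000 maxconn 1 check
      server mongrel1 127.0.0.1:9001 maxconn 1 check
      server mongrel2 127.0.0.1:9002 maxconn 1 check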

Setting num_processors to 1 is only possible when you have a proxy
that can restrict itself from sending more than one request per
Mongrel. Otherwise, if I remember correctly, you replace occasional
delays with HTTP 503s. Not a good trade-off.

Setting num_processors low has a positive side effect of restricting
how far your Mongrel will grow in memory when put under strain even
for a short period. It grows in memory by allocating RAM to new
threads (that then pile up on a Rails mutex). With, say, 10 Mongrels
and a default num_processors = 1024, allocating memory for 1024 * 10
threads means hundreds of Megabytes of RAM.

I usually set num_processors to something a bit bigger than 1 (say,
5), just so that monitoring can hit it at the same time when load
balancer does.


Alexey V.
CruiseControl.rb [http://cruisecontrolrb.thoughtworks.com]
RubyWorks [http://rubyworks.thoughtworks.com]

What’r y’all usin’ for load generation / perf metrics tools?

This is a huge area and I wonder if you narrow down to certain things
for smoke tests or such.

On 16 Oct 2007, at 06:49, Zed A. Shaw wrote:

for a backend request that takes 5 minutes then you need to
redesign the workflow and interface. Do it like asynchronous email
where the user “sends a request”, “awaits a reply”, “reads the
reply”, and doesn’t deal with the backend processing chain of events.

Zed, you’re being obtuse. Of course that isn’t what Brian means. What
he’s doing is giving a pathological example to illustrate just how
badly the mod_proxy_balancer/mongrel/rails combination behaves when
things go wrong.

Yes, you can mask the problem to some extent by mucking about with
your application (and in fact that’s what we’ve done here), but
that’s missing the point.

It is not unreasonable to expect that some actions performed by an
application are “fast” and some are “slow”. It’s further not
unreasonable to expect a very large difference between the fastest
and the slowest actions (if one action takes 10ms and another takes
1s, that’s not unreasonable – but it is a two-order-of-magnitude
difference).

With the obvious setup, fast actions will be delayed behind slow
actions. This is a Bad Thing.

Furthermore, people are fallible. If I happen to accidentally
introduce an action into my system which takes 10s, yes I’ve screwed
up and should fix it. But is it reasonable for the fact that I have a
single (possibly very rare) action which takes 10s to mean that all
the other fast actions are affected? Even when most of my mongrels
are idle?

Of course, this isn’t really a problem with Mongrel. It’s a problem
with Ruby (which doesn’t know what the word “thread” means) and Rails
(which doesn’t even manage to successfully make use of the brain-dead
version of threading which Ruby does support).


paul.butcher->msgCount++

Snetterton, Castle Combe, Cadwell Park…
Who says I have a one track mind?

LinkedIn: https://www.linkedin.com/in/paulbutcher
MSN: [email protected]
AIM: paulrabutcher
Skype: paulrabutcher

What settings did you use in m_p_b?

The trick to making it work was “acquire”, “max”, and probably
“timeout”.

On Tue, 16 Oct 2007 12:49:51 +0100
Paul B. [email protected] wrote:

Now that’s a design flaw. If you’re expecting the UI user to wait
Yes, you can mask the problem to some extent by mucking about with
your application (and in fact that’s what we’ve done here), but
that’s missing the point.

No, as usual performance panic has set in and you’re not looking at the
problem in the best way to solve it. EVERYTHING takes time. No amount
of super fast assembler based multiplexed evented code will get around
that. In his example he also was relying on an external service. It is
a classic mistake to make the user wait for a remote service and all
of your backend processes to finish before they see the end of the HTTP
request.

What people constantly do though, is they assume that the boundary of
their transactions must also match the single boundary of one HTTP
request. If you break this so that presentation of the process is
decoupled from the actual process then you don’t have a problem of the
user eating up a web server.

But, I’m sure nobody will ever convince programmers of this. They love
to run around “performance tuning” stuff instead of just redesigning the
system so it appears fast.


Zed A. Shaw

Alexey V. wrote:

On 10/15/07, Zed A. Shaw [email protected] wrote:

Tried Lucas Carlson’s Dr. Proxy yet? Other solutions? Evented mongrel?

HAProxy (and some other proxies smarter than mod_proxy_balancer)
solves this problem by allowing to set the maximum number of requests
outstanding to any node in the cluster.
But m_p_b is correct in this!!! It’s the “max” attribute to
BalancerMember.

It’s just a pain to discover the correct combination of parameters!

Setting it to 1 means that it
will only ask a Mongrel instance to serve a request when it’s not
already doing so
But mpb IS doing this correctly, as you specify! It’s a matter of
combining the “max” and “acquire” attrs on BalancerMember. Perhaps what
needs changing is the documentation, or making this the default mpb
behavior (or both!).
Which makes perfect sense with Rails
(single-threaded), especially if you do have something else to serve
static content in this setup.

Setting num_processors to 1 is only possible when you have a proxy
that can restrict itself from sending more than one request per
Mongrel.

Which we do in m_p_b, via the “max” attribute to BalancerMember

Otherwise, if I remember correctly, you replace occasional
delays with HTTP 503s. Not a good trade-off.

The 503s would only be generated in the case of incorrect mpb
settings. A 503 “server busy” coming from the Mongrel back-end gives
developers and admins a better idea of what’s really happening.

Consider: the back end has reached maximum capacity. Saying “Hey
503! I’m at max capacity” is better than the current action – open and
close with no indication of what’s wrong.

Setting num_processors low has a positive side effect of restricting
how far your Mongrel will grow in memory when put under strain even

5), just so that monitoring can hit it at the same time when load
balancer does.

Excellent idea!

On 10/15/07, Zed A. Shaw [email protected] wrote:

Now that’s a design flaw. If you’re expecting the UI user to wait for a
backend request that takes 5 minutes, then you need to redesign the
workflow and interface. Do it like asynchronous email, where the user
“sends a request”, “awaits a reply”, “reads the reply”, and doesn’t deal
with the backend processing chain of events.

If done right, you’ll even get a performance boost and you can distribute
the load of these requests out to other servers. It’s also a model most
users are familiar with from SMTP processing.

Just to clarify, we were accessing a web service that typically returns
results in < 1 second. But due to network issues out of our control,
these requests were going into a black hole, and waiting for TCP
timeouts. Admittedly, since this was to an external service, we could
shift to a model where all updates are asynchronous, but this doesn’t
help in the cases that Paul mentions, such as slower reporting queries
or programmer-error slow actions which then end up degrading the
experience for all users of the site.

Assuming we did switch to an asynchronous model, I would think it would
be more like: show me the latest FOO, trigger a backend update to get
the latest FOO, return the last cached FOO. Or if you know what FOO is,
you periodically update it, and don’t bother triggering an update.
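
As a rough sketch of that shape (FooCache and FooRefreshJob here are hypothetical stand-ins for whatever cache store and background-job mechanism you use, not real libraries):

  # Hypothetical controller action for the "show me the latest FOO" flow.
  def latest_foo
    cached = FooCache.read(:latest)   # last successfully fetched result, or nil
    FooRefreshJob.enqueue             # kick off the slow external fetch out-of-band

    if cached
      render :text => cached                                  # stale but instant
    else
      render :text => "Fetching results...", :status => 202   # nothing cached yet
    end
  end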

The first request would then return something like ‘Fetching results’,
right?

–Brian