ANNC: qrp - queueing reverse proxy

Hi, I’ve started a new project that uses Mongrel. It basically
lets you defer requests until other Mongrels in your (Rails)
pool become free.

Rubyforge project page:
http://rubyforge.org/projects/qrp/

gems, tarballs and git repo in case they haven’t hit the mirrors yet:
http://bogomips.org/ruby/

I should also add that nginx 0.6.7 or later is required for the
“backup” feature I mention below in the README:

Queueing Reverse Proxy (qrp)

Ever pick the wrong line at the checkout counters in a crowded store?
This is what happens to HTTP requests when you mix a multi-threaded
Mongrel with Rails, which is single-threaded.

qrp aims to be the simplest (worse-is-better) solution, with the
lowest adverse impact on an existing setup.

Background:

An existing Rails site running Mongrels, with nginx proxying to them.

Unlike Apache, nginx fully buffers HTTP requests from clients before
passing them off to Mongrels, which allows Mongrels to dedicate more
cycles to running Rails itself.

Problem:

Rails is single-threaded; this is (probably) not easily fixable.

By default, Mongrel will accept and queue requests while Rails is
handling another request.

Some Rails actions take longer than others; sometimes several
seconds are required to respond to a single HTTP request.

This problem is exacerbated if the Rails application queries
third-party servers for information.

Any queued requests inside Mongrel running Rails must wait until a
slow Rails action has finished before they can run.

If another Mongrel in the pool becomes free, then the requests that
got queued behind a still-busy Mongrel would still be stuck and unable
to get to the free Mongrel.

Disabling concurrency on the Rails Mongrels (with “num_processors: 1”
in the config)[1] will cause excess clients to be rejected outright,
and users will see 502 (Bad Gateway) errors.

Returning Bad Gateway errors to clients is bad; a slightly slower
site is still better than a broken one.

The developers also lack the resources to migrate to a thread-safe
platform (such as Merb or Waves) at the moment.

Solution:

Disabling concurrency in the Mongrels running Rails is part of the
solution.

Then set up a qrp instance or two as backup members in your nginx
configuration.

Connections will normally go directly from nginx to Rails Mongrels (as
before). However if all your regular Mongrels are busy, then nginx
will send requests to the backup qrp instance(s).

Once a request gets to qrp, qrp will retry all the members in a
given pool until a connection can be made and a response is returned.
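As a hedged sketch of that retry behavior (this is NOT the actual qrp
source; the host/port pairs, method name, and backoff delay are
assumptions for illustration), the core loop might look like:

```ruby
# Hypothetical sketch of the retry idea -- not the real qrp code.
require 'socket'

def connect_to_free_upstream(upstreams, delay = 0.1)
  loop do
    upstreams.each do |host, port|
      begin
        # With "num_processors: 1", a busy Mongrel refuses extra
        # connections, so a successful connect means this member is free.
        return TCPSocket.new(host, port)
      rescue SystemCallError
        next # refused or down; try the next pool member
      end
    end
    sleep delay # whole pool is busy; back off briefly, then retry
  end
end
```

A real proxy would then forward the buffered request over the returned
socket and relay the response back to nginx.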

This avoids extra data copies of requests for the common (non-busy)
case, and requires few changes to any existing infrastructure.

Having fail_timeout=0 in the nginx config for every member of the
Rails pool will allow nginx to immediately re-add a Rails Mongrel to
the pool once the Rails Mongrel has finished processing.

— highlights of the nginx config:
upstream mongrel {
  server 0:3000 fail_timeout=0; # Rails
  server 0:3001 fail_timeout=0; # Rails
  server 0:3002 fail_timeout=0; # Rails
  server 0:3003 fail_timeout=0; # Rails
  server 0:3500 backup; # qrp
  server 0:3501 backup; # qrp
}
— highlights of the qrp config:

same Rails upstreams as in the nginx config

upstreams:

  • 0:3000
  • 0:3001
  • 0:3002
  • 0:3003

— highlight of the mongrel config[1]:
num_processors: 1

Other existing solutions (and why I chose qrp):

Fair-proxy balancer patch for nginx - this can keep new connections
away from busy Mongrels, but if all (concurrency-disabled) Mongrels in
your pool get busy, then you’ll still get 502 errors.

HAProxy - This will queue requests for you, but only if it makes all
the connections to the backends itself. This means you cannot make
other HTTP connections to the backends without confusing HAProxy;
which (IMHO) defeats the purpose of using HTTP over a custom
protocol.

Swiftiply - admittedly I haven’t tried it. It seems to require
changes to our current infrastructure in deployment and monitoring
tools. Additionally, the extra layer between nginx and Mongrel hurts
performance for every request, not just those that get unlucky.
This also seems to take away the flexibility of being able to talk to
any individual Mongrel process using plain HTTP.

Footnotes:

[1] - The current version of mongrel (1.1.3) does not handle the
-n/--num-procs command-line option, and hence the current
mongrel_cluster (1.0.5) is broken with it:
http://mongrel.rubyforge.org/ticket/14

   A better solution would be to use mongrel_cow_cluster (also a
   development of mine) as it handles the "num_processors:"
   directive correctly in the config file and also supports rolling
   restarts.

/EOF

Hi,

we have a project where mongrel 1.0.1 has been in use in production
for a few months now. Everything works fine, except we run into marked
performance problems once the setup has been running a couple of weeks
without a restart. General performance slows down quite a bit, but
restarting everything brings it back to normal.

So we are looking at areas that could be relevant and therefore
considering upgrading mongrel to 1.0.5.

Can anyone provide insight on whether this could help our performance
issues?

Thanks

Matthew

Hi Eric,

This is very interesting - thanks for your notification on qrp. I have
a question for you. I believe there is an nginx module called “fair
proxy” which is supposed to be intelligent about queuing requests so
that only “free mongrels” (i.e. mongrels without an active Rails task
running) receive requests.

Ezra blogged it here:

http://brainspl.at/articles/2007/11/09/a-fair-proxy-balancer-for-nginx-and-mongrel

I wonder how what you’re working on differs and/or improves on this
system (obviously the architecture is different but I’m wondering about
performance/effect)?

Would there be a reason to run both? Would your tool be preferred in
some circumstances to the nginx fair proxy balancer? If so, what kind
of circumstances? Or do they basically solve the same problem at
different points in the stack?

Thanks for any additional detail on your very interesting project!

Sincerely,

Steve

On Mon, Feb 25, 2008 at 6:57 AM, Matthew L.ham
[email protected] wrote:

Can anyone provide insight on whether this could help our performance
issues?

Maybe no one will comment on this, but most “long running process”
problems are not (directly) Mongrel-related; they are Rails-related.

It seems quite common to blame Mongrel because it hides behind a simple
“mongrel_rails” script or a cluster of mongrels. But after all,
Mongrel is running your Rails application. Period.

Mongrel offers to Rails a layer to process the HTTP protocol and serve
the CGI-like information to the Rails Dispatcher, nothing more,
nothing less.

Unless you provide/create custom handlers (in Mongrel), that’s the
most Mongrel does for Rails.

I’ve seen several memory leaks from Rails and tried to pinpoint some
of them, ending up repeating work done in the past by others that
never got integrated into Ruby or Rails core.

For example, the Benchmark::realtime “abuse” that Rails does and the
huge amount of memory it allocated in the past (now fixed in Ruby
trunk and the 1_8 branch).

Migrating to Mongrel 1.0.5 wouldn’t fix that, and you would keep
blaming Mongrel for the problem.

I’d suggest you plan a monit/god sweep-and-respawn strategy for your
Rails processes.

Besides that, I guess it’s time for you to start looking at the
things your application does that make it slower over time:

  • Use of image-processing plugins/extensions/gems like RMagick
  • Files opened without being properly closed (File.open without a
    block, or no file#close after use)
  • String concatenation using += instead of <<
  • Misused queries over associations in your ActiveRecord models
    (using .length to count an associated array instead of .size)

Those are the more common mistakes.
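To make the contrasts in that list concrete, here is a small
illustrative sketch in plain Ruby (no Rails; file names are arbitrary,
and the .length vs .size point only applies to ActiveRecord
associations, noted in the comment):

```ruby
# 1. File handling: the block form closes the handle even if an
#    exception is raised inside it.
File.open('demo.log', 'w') { |f| f.puts 'entry' }  # auto-closed
File.delete('demo.log')

# 2. String building: << appends in place; += allocates a brand-new
#    String on every iteration, churning the garbage collector.
buf = ''.dup
1000.times { buf << 'x' }  # one object, mutated in place

# 3. Counting: on plain arrays .length and .size are identical, but on
#    an ActiveRecord association .length loads every row just to count
#    them, while .size can use a cached counter or SELECT COUNT(*).
```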

HTH,

Luis L.
Multimedia systems

A common mistake that people make when trying to design
something completely foolproof is to underestimate
the ingenuity of complete fools.
Douglas Adams

Steve M. [email protected] wrote:

Hi Steven,

I noted it in the README and original announcement:

Other existing solutions (and why I chose qrp):

Fair-proxy balancer patch for nginx - this can keep new connections
away from busy Mongrels, but if all (concurrency-disabled) Mongrels in
your pool get busy, then you’ll still get 502 errors.

I wonder how what you’re working on differs and/or improves on this
system (obviously the architecture is different but I’m wondering about
performance/effect)?

Would there be a reason to run both? Would your tool be preferred in
some circumstances to the nginx fair proxy balancer? If so, what kind
of circumstances? Or do they basically solve the same problem at
different points in the stack?

I believe my solution is better for the worst-case scenario when all
Mongrels are busy servicing requests and another new request comes in.

The fair proxy balancer would give 502s if I disabled concurrency in the
Mongrels.

Leaving concurrency in the Mongrels while Rails is single-threaded is
not a comfortable solution, either; mainly because requests can still
end up waiting on the current one.

Imagine this scenario with the fair proxy balancer + Mongrel
concurrency:

10 mongrels, and 10 concurrent requests => all good, requests would be
evenly distributed

If there are 20 concurrent requests on 10 Mongrels, then each Mongrel
would be processing two requests. If one of the first ten requests is
a slow one, then the second request on that Mongrel could be stuck
waiting even after the other 18 requests have happily finished.
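That scenario can be modeled with a few lines of Ruby (a rough sketch;
the 10s and 0.2s durations and the round-robin assignment are assumed
for illustration, each Mongrel working through its queue serially):

```ruby
# Compute how long each request waits in a Mongrel's queue before
# its own processing starts.
def queue_waits(durations, pool_size)
  clocks = Array.new(pool_size, 0.0)  # when each Mongrel next frees up
  durations.each_with_index.map do |dur, i|
    m = i % pool_size          # round-robin assignment
    wait = clocks[m]           # time spent queued behind earlier work
    clocks[m] += dur
    wait
  end
end

# Request 0 takes 10s; the other 19 take 0.2s each.
durations = Array.new(20) { |i| i.zero? ? 10.0 : 0.2 }
waits = queue_waits(durations, 10)
waits[10]  # => 10.0 (stuck behind the slow request on Mongrel 0)
waits[11]  # => 0.2  (queued behind a fast request elsewhere)
```

So one unlucky fast request pays the full 10s penalty even though nine
other Mongrels went idle long before.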

This is why I’m not comfortable having concurrency in Mongrel + Rails.

I haven’t benchmarked the two, but if I had to bet money on it, I’d say
nginx fair balancer + Mongrel concurrency would be slightly faster in
the average case than my solution. It’s just the odd pathological case
I’m worried about.

So the average processing time of a request going from 200ms to 150ms
doesn’t mean a whole lot to me; it’s the user stuck behind a 10s
request, when all they wanted was a 200ms request, that bothers me.

Running both could work, but I’d prefer to keep nginx as stupidly simple
as possible. I actually considered adding retry/queueing logic to
nginx, but it would be a more complex addition to the event loop. I
actually discovered how the “backup” directive worked when reading the
nginx source and investigating adding queueing to nginx.

Mine relies on nginx 0.6.7 or later for the “backup” directive. Using
nginx 0.6.x may be too scary for some people, but the site I wrote qrp
for has been using nginx 0.6.x since before I was hired to work on it,
and it works pretty well.

Thanks for any additional detail on your very interesting project!

You’re welcome :>

On Mon, Feb 25, 2008 at 2:49 PM, Matthew L.ham
[email protected] wrote:

Hi Luis,

I wasn’t blaming mongrel for the performance problems in general. I
realize that we need to look at the Rails application as well.

Don’t worry, I know you didn’t, but that post will serve as a pointer
for future rants ;)

Anyway…

My question was more along the lines of whether we would see any
additional gains in using a newer version of mongrel.

I’ve not seen a “huge” difference between these versions
(performance-wise).

In any case, it fixed a few issues with mongrel_rails parameters and
some security issues with DirHandler, though the latter only matter if
you serve directly from Mongrel rather than behind Apache or nginx.

Forgot to mention: check what you are logging in your log/ files, and
how big they became… Ruby doesn’t play nice with big files (nor do
most of the dynamic languages I know of).

Thanks for all your points.

No problem man, don’t take them personally :)


Luis L.
Multimedia systems


Hi Luis,

I wasn’t blaming mongrel for the performance problems in general. I
realize that we need to look at the Rails application as well.

My question was more along the lines of whether we would see any
additional gains in using a newer version of mongrel.

Thanks for all your points.

Matthew

On 25.02.2008, at 17:22, Luis L. wrote:



Mongrel-users mailing list
[email protected]
http://rubyforge.org/mailman/listinfo/mongrel-users


Matthew L.ham
Geschäftsführer / Managing Director
Tel: +49 (0) 5251 6948 293
Mobile: +49 (0)172 5749305

Indiginox GmbH
Frankfurter Weg 6, 33106 Paderborn, Germany
HRB Paderborn 8130
Geschäftsführer / Managing Directors:
Matthew L.ham / Ashley Steele