Ruby on Rails performance / Why is Mongrel popular?

I’ve been a satisfied Ruby on Rails developer for quite some time now.
Recently I’m considering using Rails again for a new website. But this
website has many visitors - 40k unique visitors per day (and about
200k page loads per day) - so I’m worried about performance and memory
usage.

I wrote a little test application - it has one controller, ‘Person’,
and one method, ‘test’. The method ‘test’ is not implemented; I only
put a file ‘test.rhtml’ in the ‘views/person’ folder. test.rhtml
contains the text “hello world”.
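
For reference, the whole test application amounts to roughly the
following (file names taken from the description above; a standard
Rails 1.x layout is assumed):

  # app/controllers/person_controller.rb
  class PersonController < ApplicationController
    # no 'test' action defined - Rails still renders the matching view
  end

  # app/views/person/test.rhtml
  hello world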

I started Rails as follows:
./script/server &>/dev/null
(output is redirected to /dev/null so that I won’t be testing the
performance of my terminal emulator)
Rails uses lighttpd as its web server. I ran httperf as follows:
httperf --num-conn=100 --server=localhost --port=3000 \
  --uri=/person/test --timeout=5 --hog
And the result is alarmingly low: the request rate is only 15.5 per
second!

For comparison, I also wrote a ‘hello world’ PHP script (test.php) -
the script doesn’t contain any PHP calls, only the text ‘hello world’:
httperf --num-conn=100 --server=localhost --port=80 --uri=/test.php \
  --timeout=5 --hog

Request rate: 1018.3 req/s (1.0 ms/req)

It seems Rails is roughly 65 times slower. I can load balance the
website, but that costs money - lots of money. I don’t have that much
money. Am I doing something wrong, or is Rails really that slow?

I’ve also tried Mongrel instead of lighttpd, but there was no
performance increase. Speaking of Mongrel, why is it so popular? Why
is everybody moving away from lighttpd and toward Mongrel?

  • Last time I checked, Mongrel can only process one request at the
    same time. So suppose a Rails request needs 5 seconds to complete,
    then all other clients will have to wait 5 seconds as well. Lighttpd
    seems to spawn a new Rails process if the current one hasn’t finished.
  • I also read that people use Apache (or lighttpd) as a load balancer,
    and proxies requests to several Mongrel instances. But doesn’t that
    waste memory like mad? Each Mongrel process seems to need 31 MB at
    startup (and this is for an empty ‘hello world’ Rails app). If you
    have n Mongrel processes then you need at least n*31 MB of memory!
    Almost no memory is shared because the Ruby parse tree is not shared
    between Ruby instances, unlike native shared libraries.

Can someone provide me with the answer to these questions?

Shaoxia wrote:

For comparison, I also wrote a ‘hello world’ PHP script (test.php) -
the script doesn’t contain any PHP calls, only the text ‘hello world’:
httperf --num-conn=100 --server=localhost --port=80 --uri=/test.php \
  --timeout=5 --hog

Request rate: 1018.3 req/s (1.0 ms/req)

It seems Rails is roughly 65 times slower. I can load balance the
website, but that costs money - lots of money. I don’t have that much
money. Am I doing something wrong, or is Rails really that slow?

You are running in development mode. Try production mode. Also, 200k
page views spread over a 12-hour period is about 4.6 page views per
second. That’s hardly many page views per second.
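
If you started the server the same way as above, switching Rails into
production mode is just a flag or an environment variable (shown here
for the script/server launcher of that era; adjust to taste):

  ./script/server -e production &>/dev/null
  # or, equivalently:
  RAILS_ENV=production ./script/server &>/dev/null

Production mode stops Rails from reloading your application classes on
every request, which is where most of the development-mode overhead
comes from.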

On my 2.0 GHz Athlon 64 (single-core) Linux box I get 15.6 req/s with
Mongrel in development mode and 103.7 req/s in production mode with a
test “hello world” RHTML file (no embedded Ruby), using your above
httperf command.

I’ve also tried Mongrel instead of lighttpd, but there was no
performance increase. Speaking of Mongrel, why is it so popular? Why
is everybody moving away from lighttpd and toward Mongrel?

People are saying that Mongrel is more reliable and that it’s easier
to set up in a load-balanced cluster behind Apache 2.2 with
mod_proxy_balancer. I never did the FastCGI/lighttpd setup so I can’t
comment on that personally, but I’ve been using Mongrel on my
low-traffic site with no issues at all.
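
Such a cluster looks roughly like this in the Apache 2.2 configuration
(mod_proxy, mod_proxy_http and mod_proxy_balancer loaded; the backend
ports are placeholders for wherever your Mongrels listen):

  <Proxy balancer://mongrel_cluster>
    BalancerMember http://127.0.0.1:8000
    BalancerMember http://127.0.0.1:8001
    BalancerMember http://127.0.0.1:8002
    BalancerMember http://127.0.0.1:8003
  </Proxy>

  ProxyPass / balancer://mongrel_cluster/
  ProxyPassReverse / balancer://mongrel_cluster/

Apache spreads incoming requests across the members, so one slow
request only ties up one Mongrel rather than the whole site.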

If you have n Mongrel processes then you need at least n*31 MB of
memory! Almost no memory is shared because the Ruby parse tree is not
shared between Ruby instances, unlike native shared libraries.

Can someone provide me with the answer to these questions?

Well, “waste memory like mad” is a subjective thing, but yes, if you
run a large cluster of load-balanced Mongrel processes behind Apache
2.2 mod_proxy_balancer you will use up a good chunk of memory. It’s
not clear that with 200k page views a day you need to do that.


Michael W.

I can load balance the website, but that costs money - lots of money.
I don’t have that much money. Am I doing something wrong, or is Rails
really that slow?

Rails has a built-in caching mechanism that should help. Have a look
at the caches_page method.
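
A minimal sketch of page caching for the test controller from the
original post (controller and action names come from that post; the
perform_caching line is the usual production default, shown here as an
assumption about your environment config):

  # app/controllers/person_controller.rb
  class PersonController < ApplicationController
    caches_page :test   # first request writes public/person/test.html

    def test
    end
  end

  # config/environments/production.rb
  config.action_controller.perform_caching = true

Once the cached file exists, the front-end web server can serve it as
a plain static file without touching Rails at all.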

You can also arrange to have static content served directly through
Apache without touching Rails at all.
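
One common way to do that (assuming the balancer from the earlier
snippet, and Apache with mod_rewrite enabled) is to point DocumentRoot
at the application’s public/ directory and only proxy requests that
don’t match an existing file:

  DocumentRoot /path/to/railsapp/public
  RewriteEngine On
  # serve the file straight from public/ if one exists...
  RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
  # ...otherwise hand the request to the Mongrel cluster
  RewriteRule ^/(.*)$ balancer://mongrel_cluster%{REQUEST_URI} [P,QSA,L]

This also picks up anything written by caches_page, since those pages
land in public/ as ordinary .html files.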

  • Last time I checked, Mongrel can only process one request at the
    same time. So suppose a Rails request needs 5 seconds to complete,
    then all other clients will have to wait 5 seconds as well. Lighttpd
    seems to spawn a new Rails process if the current one hasn’t finished.

That’s right. I’m using mongrel_cluster+monit with four mongrels (a
sample configuration is sketched below) - but I don’t get 200k hits a
day.

Both Lighttpd and Apache can be configured to spawn new processes as
needed, instead of keeping them around. This means you eat the
significant startup overhead at request time, though. Even back when I
was using Apache+FastCGI I preferred to have all the dispatch.fcgi
processes start with the server.
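
For the record, a four-Mongrel mongrel_cluster setup is driven by a
small YAML file along these lines (paths, ports, and the generator
command here are illustrative, not taken from the post):

  # config/mongrel_cluster.yml, generated with something like:
  #   mongrel_rails cluster::configure -e production -p 8000 -N 4
  cwd: /path/to/railsapp
  environment: production
  address: 127.0.0.1
  port: "8000"        # first port; the other instances take 8001-8003
  servers: 4
  pid_file: tmp/pids/mongrel.pid

The whole cluster is then started and stopped as one unit with
mongrel_rails cluster::start and cluster::stop, and monit typically
watches the resulting pid files to restart any Mongrel that dies.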

Can someone provide me with the answer to these questions?

No magic bullet, I’m afraid. Caching, performance tuning, and a lot of
free memory…

Gwyn.