Why is rendering still so slow under Apache?

<% for demand in @demands %>
  <% cache(:action => 'list', :part => demand.id) do -%>
    <%= render :partial => 'article', :object => demand %>
  <% end -%>
<% end %>

Under WEBrick the list renders very quickly, but under Apache 2.2 +
mongrel_cluster the rendering still takes a long time, accounting for
about 95% of the total request time.

Here is the relevant part of httpd.conf:

RewriteEngine On

# Rewrite the root URL to check for a cached index page
RewriteRule ^/$ /index.html [QSA]

# Rewrite extensionless URLs to check for Rails page-cached files
RewriteRule ^([^.]+)$ $1.html [QSA]

# Anything that doesn't exist as a static file goes to the mongrel cluster
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
RewriteRule ^/(.*)$ balancer://myapp_cluster%{REQUEST_URI} [P,QSA,L]
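
For context, balancer://myapp_cluster is defined elsewhere in httpd.conf,
roughly like this (a sketch assuming two mongrels on 127.0.0.1 ports
8000-8001; the real ports come from the mongrel_cluster config):

  <Proxy balancer://myapp_cluster>
    # IP addresses rather than hostnames, so no DNS lookup per request
    BalancerMember http://127.0.0.1:8000
    BalancerMember http://127.0.0.1:8001
  </Proxy>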

Overall, of the three caching mechanisms (page, action, and fragment),
only page caching works normally under Apache 2.2 + mongrel_cluster; the
other two don't improve the speed at all, though they do under WEBrick
on Windows.
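
One thing I wonder about: as far as I know, the default fragment cache
store is an in-process MemoryStore, so each mongrel in the cluster keeps
its own private copy of every fragment, while WEBrick is a single
process. Would switching to a store shared across processes help? A
minimal sketch, assuming Rails 1.1 and an illustrative cache path:

  # config/environment.rb -- share fragments between mongrels through
  # the filesystem instead of each process's private MemoryStore
  ActionController::Base.fragment_cache_store =
    :file_store, "#{RAILS_ROOT}/tmp/cache"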

Can anyone help me with this issue?

Jonathan

Hey Jonathan

I'm experiencing similar issues, actually. However, I'm running a
cluster with Pound up front and a distributed lighttpd and mongrel
setup. The app is lightning fast on the local network, but it takes
forever and ever to render when coming in externally.

I'm currently looking into it, as it might well be a config problem or
something to do with how I've written it. Might you be able to show your
controller code as well? I'm just thinking that if we both compare what
is running slowly, maybe there will be a common denominator, like a
large query or something.

Cheers

Tim


Hi Tim,
My controller code is pretty simple:
require 'fileutils'

class DemandsController < ApplicationController

  def list
    @pageCount = 10
    demandCount = Demand.count(:conditions => "status = 'publish'")
    @pages = Paginator.new self, demandCount, @pageCount, @params[:page]
    @demands = Demand.find(:all, :conditions => "status = 'publish'",
                           :order => 'created_at DESC',
                           :limit => @pageCount,
                           :offset => @pages.current.offset)
    render :layout => false
  end

end

I think it may be due to some config problem… I set everything up
following this guide:
http://rubyforge.org/pipermail/mongrel-users/2006-July/000757.html
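
For reference, the mongrel_cluster.yml from that guide looks roughly
like this (the values here are illustrative, not my exact config):

  # config/mongrel_cluster.yml -- two mongrels on consecutive ports,
  # matching the BalancerMember entries in httpd.conf
  port: 8000
  servers: 2
  environment: production
  address: 127.0.0.1
  pid_file: log/mongrel.pid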

Tim P. wrote:

I see, there's quite a bit going on there. It's similar to mine in that
sense; I have a reasonably big multi-table query going on, so there is
obviously some load from that. Have you got the site you're having
problems with on an external IP or address, so we can take a look?

Have you run performance tests on it? Checked the caching? I've done
none of this, but I'm just throwing it out there as it might be a
solution for both of us. I'm not aware of any issues with caching not
working under mongrel. Are you on the mongrel mailing list?

There has been quite a bit of discussion lately on the mailing list
about performance and how to wring every last ounce out of it. I've put
the posts below…
The second one probably has the most info.

Thank you, Tim.
In fact, my app is quite complicated and is about to be deployed.
Development was done under Windows and WEBrick, but now that it is going
into production on Linux, the request time has not improved much, so
increasing the response speed would be a great help. Everything you
listed will have to be done to find out what is hurting the performance.

It sounds like there might be a slow DNS resolver between your
proxies and backends. If it is fast locally, then there must be
something amiss in the server config that makes it slow. Making sure to
use IP addresses instead of domain names in your Pound and lighttpd
configs can make a big difference if slow DNS is involved.
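
A quick way to check for a slow resolver from the proxy box (the
hostname here is a placeholder for whatever your configs actually point
at):

  # time one DNS lookup of a backend hostname; more than a few
  # milliseconds here gets paid on every connection that resolves names
  require 'resolv'
  require 'benchmark'
  puts Benchmark.realtime { Resolv.getaddress("backend.example.com") }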

-Ezra



The best way to find your “utilization sweet spot” is to do something
like the following:

  1. Write a simple Rails app with a test controller that just returns the
    word "test" with render :text (see the sketch after this list). This is
    the fastest little Rails action you could have, so make sure your
    configuration is tight and this is the fastest. Do this with just one
    mongrel and measure it using ab or httperf.
  2. Once you have a single mongrel running well, proceed to add mongrels
    and retest (make sure you increase the concurrency too) until adding
    mongrels doesn't improve performance of this fastest action.
  3. This is probably your sweet spot; now just run these tests on the
    various actions you have in your real app and see how everything works
    for RAM and CPU usage.
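
A minimal version of the step-1 controller might look like this (the
name is illustrative):

  # app/controllers/test_controller.rb -- the trivial baseline action;
  # no database or template work, so it measures the stack, not the app
  class TestController < ApplicationController
    def index
      render :text => "test"
    end
  end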

After that you’ll have to get into different configs, tuning your OS,
etc. but at least you’ll have found a good start.

Also, retest this same scenario when new versions of rails come out or
you deploy a new version of your app. Every time the conditions of your
last test change you need to re-run them. It’s just like unit testing,
you gotta keep doing it or it’s pointless.


Zed A. Shaw


There is no set number that is "best", since that depends on factors like
the type of application, the hardware you run on, how dynamic the
application is, etc.

I've found that 8-12 mongrel processes per CPU is about right, but I
determined this by starting with 1 and then doing the following:

  1. You'll need a URL to a small file that is served by your Apache
    server and is not served by Mongrel at all. This URL will be your
    "best possible baseline".

  2. Build your baseline measurement first. Using httperf, measure the
    speed of the URL from #1 above so that you know how fast you could
    possibly get if you served everything static under ideal conditions.

a) ***** Make sure you do this from a different machine over an ideal
network. Not your damn wifi over a phone line through sixteen poorly
configured routers. Right next to the box you're testing, with a fast
switch and only one hop, is the best test situation. This removes
network latency from your test as a confounding factor. *****

  3. Pick a page that's a good representative page for your application.
    Make sure you disable logins to make this test easier to run. Hit this
    Rails page and compare it to your baseline page.

a) If your Rails measurement is FASTER than your baseline
measurement then you screwed up. Rails shouldn't be faster than a file
off your static server. Check your config.

b) If your Rails measurement is horribly slow compared to baseline,
then you've got some config work to do before you even start tuning the
number of processes. Repeat this test until one mongrel is as fast as
possible.

  4. Once you've got a Rails page going at a reasonable speed, you'll
    want to increase the --rate setting to make sure that it can handle
    the reported rate.

  5. Finally, you alternate between adding a mongrel process and running
    test #4 with the next highest rate you'd get. You basically stop when
    adding one more mongrel doesn't improve your rate.

a) Make sure you run one round of test #4 to get the server "warmed
up", and then run the real one. Hell, run like 5 or 6 just to make sure
you're not getting a possibly bad reading.

b) Example: you run #4 and find out the --rate one mongrel can support
is 120 req/second. You add another mongrel and run the test again with
--rate 240. It handles this just fine, so you add another and get --rate
360. Ok, you try another one and it dies. Giving --rate 480 gets
you only a rate of 100. Your server has hit its max and broken. Try
tuning the --rate down at this point and see if it's totally busted
(like, 4 mongrels only gets you --rate 380) or if it's pretty close
to 480.

That should do it. A good practice is to also look at the CPUs on the
server with top and see what kind of thrashing you give the server.

HTTPERF

Here are the commands I use for each test, but read the man page for
httperf so that you learn to use it. It's an important tool, and just
cut-pasting what I have here is not going to do it for you.

#2 && #3) httperf --server www.theserver.com --port 80 --uri /tested --num-conns <10 second count>

#4) httperf --server www.theserver.com --port 80 --uri /tested --num-conns <10 second count> --rate <reported req/sec>

Where <10 second count> means you put in enough connections to make the
test run for 10 seconds. Start off with something like 100 and keep
raising it until the test runs for 10 seconds.
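
For example, if a run reports roughly 120 requests/second, about 1200
connections gives a 10-second test (the host and URI are the
placeholders from above):

  httperf --server www.theserver.com --port 80 --uri /tested --num-conns 1200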

Where <reported req/sec> means you put in whatever httperf said the
estimated requests/second were in #3. What you're doing here is seeing
if it really can handle that much concurrency. Try raising it up and
dropping it down to see the impact of higher loads on performance.

Have fun.


Zed A. Shaw

Ok cool, well let me know what results you get; I would be most
interested.

Cheers

Tim