MySQL database. That’s why they are now using a cluster. That figure
probably included requests for static content like stylesheets and
images as well.
That’s a busy forum, but depending on how they use AJAX I could see it
reaching that.
But I thought most of that gets cached by the browser anyway? If the
clients are frequent users of that portal, wouldn’t the browser skip
reloading that stuff and request only the dynamic content?!
Guess it depends… on our site most of the pages include their own CSS
and specific images since the content is pretty different from section
to section, but yeah, after a while browsers should cache that.
But my point was that lighttpd/nginx/apache/etc. should be able to serve
thousands of requests per second for static content, so that really
shouldn’t factor into things unless you’re YouTube or Flickr.
However, I’m trying to avoid heavy database load with an intelligent
caching strategy. I hope that mongrel will help me with this.
I’m not sure how mongrel would help specifically with this. Rails page
caching and fragment caching would, though. Look into memcache as well,
and the various plugins that tie memcache into AR’s find methods and
Rails caching in general. Memcache is a life saver for us.
So with caching like that the database won’t get overloaded?!
Does ANYONE know more performance figures for Rails-based websites? I
would be interested in how many requests they can serve and what type
of hardware and software setup they are using. Too bad the author of
“Agile Web Development with Rails” stopped talking about these figures
after the first book.
Sometime last year our corp site did 8,996,175 pages and 63,571,374
requests in one day. Divided over the 86,400 seconds in a day, that
works out to um… about 100 pages/sec and 735 requests/sec.
And while I don’t trust Alexa exactly, for comparison with some other
Rails sites…
http://img312.imageshack.us/img312/3569/alexavd9.png
We did this with 20 servers running Apache and 4 mongrels each, plus
three separate media servers (for video). None of the servers were
overworked (load < 1), so I imagine we could have gotten by with a lot
fewer, but the year before we were on PHP and got slammed, and our
traffic triples about this time every year, so we didn’t want to take
any chances.
-philip