Question on hardware for site

For a medium-sized website, does one machine for database server
(either mysql or postgres) and another for the web server (lighttpd)
sound ok? How much traffic could I expect to handle? (I know it
depends on the application, but some rough estimates would be nice)

Joe

For a medium sized site, the machines should probably be between 18 and
24 inches to make sure you can serve everyone… Oh, not a pizza question?
Sorry.

This has been gone over so many times, it ain’t funny. Read the
archives. Test load for your app in development. Start small, add more.
You don’t need a cluster of Suns to serve a Web app (at least most Web
apps).
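In the spirit of "test load for your app": a minimal sketch of a concurrent smoke test in Ruby. The in-process TCPServer below is only a stand-in so the example is self-contained; against a real app you would point the threads at its actual URL, and the thread/request counts here are arbitrary.

```ruby
require "socket"
require "net/http"

# Throwaway one-line HTTP responder (stand-in for the real app).
server = TCPServer.new("127.0.0.1", 0)
port = server.addr[1]
Thread.new do
  loop do
    client = server.accept
    client.readpartial(4096)
    body = "ok"
    client.write("HTTP/1.1 200 OK\r\nContent-Length: #{body.size}\r\nConnection: close\r\n\r\n#{body}")
    client.close
  end
end

CONCURRENCY = 5    # parallel clients
REQUESTS    = 20   # requests per client

start = Time.now
threads = CONCURRENCY.times.map do
  Thread.new do
    # Count successful responses from this client thread.
    REQUESTS.times.count do
      Net::HTTP.get_response("127.0.0.1", "/", port).code == "200"
    end
  end
end
ok = threads.sum(&:value)
elapsed = Time.now - start

puts "#{ok} successful requests in #{elapsed.round(2)}s (#{(ok / elapsed).round} req/s)"
```

Crude, but enough to spot whether req/s craters as you raise CONCURRENCY before any real hardware decisions get made.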

On Mar 15, 2006, at 2:14 PM, Joe Van D. wrote:

Joe-

Just for some comparisons. I'm not sure what kind of app you are
going to be running, but two servers split like you mention will get
you a long way. http://yakimaherald.com serves 80,000+ page views/
day, with around half of those being fully dynamic Rails requests. The
site runs on a dual G5 Xserve and the db runs on a separate machine.
The Xserve is not even taxed at all. It runs at about 15% load most
of the time. It runs the main website you can see on 6 standalone
fcgi listeners, and also 5 other intranet Rails apps, each with one
fcgi listener.
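A quick back-of-envelope on Ezra's numbers shows why that box loafs along at 15%; these are averages only, and real traffic peaks well above them.

```ruby
# 80,000+ page views/day, about half fully dynamic (Ezra's figures).
views_per_day   = 80_000
dynamic_share   = 0.5
seconds_per_day = 24 * 60 * 60

avg_req_per_sec     = views_per_day / seconds_per_day.to_f
avg_dynamic_per_sec = avg_req_per_sec * dynamic_share

puts format("average: %.2f req/s overall, %.2f dynamic req/s",
            avg_req_per_sec, avg_dynamic_per_sec)
```

That works out to under one request per second on average; even if peak hours run 5-10x the average, it is only a few dynamic requests per second.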

You can definitely serve a lot of requests with two servers, one for
db and one for web and app. It does depend on how much caching vs
dynamic, and if you do a lot of RMagick or not, and things like that.

Please feel free to contact me off the list and I can give you some
more specs of a few other Rails clusters I admin. A few of them are two
machines like you are looking at, and a few of them are 3-machine setups.

Cheers-
-Ezra Z.
Yakima Herald-Republic
WebMaster

509-577-7732
[email protected]

Hi Joe ~

I would definitely check out the 37 signals philosophy on scaling for
web apps. Here is a sample chapter from Getting Real:

The entire book can be purchased in PDF here:

Quality.

~ Ben


Ben R.
http://www.benr75.com

On Mar 15, 2006, at 5:18 PM, Ezra Z. wrote:

Just for some comparisons. I’m not sure what kind of app you are
going to be running, but two servers split like you mention will get
you a long way. http://yakimaherald.com serves 80,000+ page
views/day, with around half of those being fully dynamic Rails
requests. The site runs on a dual G5 Xserve and the db runs on a
separate machine. The Xserve is not even taxed at all. It runs at
about 15% load most of the time.

Does this mean uptime would show .15?

Traditional UNIX performance gurus would tell you that a load of 4.0
is maximum processor utilization, which would make .15 load 3.75%. :)
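Tom's arithmetic, spelled out. The 4.0 ceiling is his rule of thumb, not a universal constant; what counts as "fully loaded" depends on core count and workload.

```ruby
load_avg = 0.15   # the load average in question
max_load = 4.0    # Tom's assumed "fully utilized" ceiling

utilization_pct = load_avg / max_load * 100
puts "#{utilization_pct}% of 'maximum' utilization"
# 0.15 / 4.0 * 100 == 3.75
```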


– Tom M.

On 3/15/06, Ben R. [email protected] wrote:

Getting Real

The entire book can be purchased in PDF here: https://gettingreal.37signals.com/

I don’t think having a separate computer for database and web is
really considered “scaling up”. Having the database hosted on
multiple machines, or having a load-balanced webserver, I’d consider
that to be excessive scaling at the beginning.

I’m going to need a third computer anyways to securely hold CC details
(unless I can do this on the database server, and only have the DB and
the CC service running on it).


I agree. Not excessive and will definitely handle a lot of users.
Just pointing out a good read…

~ Ben



Rails mailing list
[email protected]
http://lists.rubyonrails.org/mailman/listinfo/rails


Ben R.
http://www.benr75.com

Joe,

As Ezra said, you can go a long way with two desktop-class systems:
one running Web+app and the other running the database.

Once you have that setup in place, there’s typically a lot you can do
to squeeze extra performance out of it. Both MySQL and (particularly)
Postgres allow for lots of performance enhancements over and above
their default configs; I’ve regularly squeezed 10x performance
improvements out of Postgres simply by following the tuning guidelines
you can find at the Postgres Web site. I think the default Postgres
config was put in place in the days when 300MHz, 64MB machines were
leading-edge gear, and hasn’t been updated for years.
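For flavor only, a sketch of the kind of postgresql.conf settings those tuning guidelines cover. The parameter names are real Postgres settings, but the values below are illustrative placeholders; the right numbers depend entirely on your Postgres version and RAM, so follow the guides rather than copying these.

```
# Illustrative values only -- consult the Postgres tuning guidelines.
shared_buffers       = 10000    # 8KB pages in 8.x; defaults were tiny
effective_cache_size = 50000    # hint to the planner about OS file cache
work_mem             = 8192     # per-sort/hash working memory, in kB
```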

Beyond that, and still on the database, you can optimise your
MySQL/Postgres indexes to cover the most common data searches; you can
select your table types in MySQL to give you the appropriate tradeoff
between speed and data integrity; you can implement partial indexes in
Postgres that dramatically improve the performance of specific, common
searches, particularly if you’re doing AJAX-y type progressive matches
on text data; you can split your data into separate tablespaces under
Postgres to improve performance. There’s a whole lot of stuff you
can do to make these two databases perform faster. Beyond that, you
can re-implement some of your Rails SQL as stored procedures or views,
which can give you a significant performance improvement in specific
cases.
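To make the partial-index idea concrete, here is a sketch with made-up table and column names; in a Rails migration you would hand this DDL to the connection via execute, but here we just build and print the statements (whether the LIKE actually uses the index also depends on locale/operator class, so treat it as a shape, not a guarantee).

```ruby
ddl = <<SQL
-- Index only the rows the hot query can ever touch:
CREATE INDEX idx_articles_published_title
    ON articles (lower(title))
 WHERE published = true;
SQL

query = <<SQL
-- A progressive, AJAX-style prefix match whose WHERE clause
-- matches the index predicate:
SELECT id, title
  FROM articles
 WHERE published = true
   AND lower(title) LIKE 'rail%';
SQL

puts ddl, query
```

Because the index only covers published rows, it stays small and cheap to maintain even if the table accumulates lots of drafts.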

On the app side, you can generally cache a lot of static content. One
thing to watch for is that Web server threads can get locked up
trickle-feeding data to Web browsers over slow links; if you can have
the Web server dump that data to e.g. squid, and let squid trickle
feed it to the Web browser, your Web server threads will be freed up
much much sooner and each thread will be able to process new incoming
requests. If you do the maths about how long it takes to send a Web
page full of data to a Web browser over a 56k dialup link, you’ll
realise that a Web server thread could be locked up for >10 seconds
quite easily.
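Doing that maths explicitly, with a made-up but plausible page size:

```ruby
page_bytes    = 70 * 1024   # ~70 KB of HTML + inline content (hypothetical)
line_bits_sec = 56_000      # 56k modem, ignoring compression

seconds = page_bytes * 8 / line_bits_sec.to_f
puts format("%.1f seconds of thread time per page", seconds)
# 70 * 1024 * 8 / 56000.0 ~= 10.2
```

Ten-plus seconds per slow client is exactly why handing the finished response to a buffering proxy like squid frees the app thread almost immediately.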

This stuff is all documented if you search for it, and generally not
hard to implement. As others on this list have said repeatedly,
scalability on LAMP is a solved problem these days. The key is to get
a working environment in place, profile it, hunt down the bottlenecks
and start to address them. You need to define what your workload will
look like, then have a way of simulating it, plus you need to have a
way of measuring performance and identifying bottlenecks in your
architecture. Once you’ve got the bottlenecks identified, you can
start to knock them on the head until your performance reaches an
acceptable level; it will reach a point of diminishing returns, where
you’re eventually spending lots of effort to squeeze out that last
1-2% improvement, so you need to be able to define a cutoff point
where things are “fast enough”.

You can get a long way with a simple 2-system setup, but you need to
do your homework to get the most out of it.

…And yes, I do this stuff for a living ;->

Regards

Dave M.

On 3/15/06, Joe Van D. [email protected] wrote:

For a medium-sized website, does one machine for database server
(either mysql or postgres) and another for the web server (lighttpd)
sound ok? How much traffic could I expect to handle? (I know it
depends on the application, but some rough estimates would be nice)

I guess what I’m really interested to hear about is how many users I
could simultaneously serve. We’re planning some events that may get
a ton of traffic, and we’d need to be able to serve large numbers of
people at the same time. I’ll figure out how many simultaneous people
we need to serve and try to work out the hardware details.
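One rough way to turn "simultaneous users" into hardware numbers is Little's law (concurrent requests = arrival rate × time per request). A sketch, with all figures hypothetical:

```ruby
users_on_site     = 1_000   # people active during the event (made up)
think_time_sec    = 10.0    # average pause between clicks per user (made up)
response_time_sec = 0.2     # server time per dynamic request (made up)

arrival_rate = users_on_site / think_time_sec    # requests/sec hitting the app
in_flight    = arrival_rate * response_time_sec  # requests being worked at once

puts format("%.0f req/s, ~%.0f requests in flight", arrival_rate, in_flight)
```

The point: 1,000 "simultaneous" users clicking every ~10 seconds is 100 req/s, but at 200ms each only about 20 requests are actually in flight, which is roughly 20 fcgi listeners' worth of work, not 1,000.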

Joe

On Mar 15, 2006, at 6:41 PM, Tom M. wrote:

Does this mean uptime would show .15?

Traditional UNIX performance gurus would tell you that a load of
4.0 is maximum processor utilization, which would make .15 load
3.75%. :)

– Tom M.

No, I meant 15% ;):

CPU usage: 8.4% user, 12.4% sys, 79.2% idle

Although today I guess it’s more like 20% ;)

-Ezra Z.
Yakima Herald-Republic
WebMaster

509-577-7732
[email protected]

Thanks, Ezra!


– Tom M.