On Wed, Oct 03, 2007 at 10:19:57PM +0900, Charles Oliver N. wrote:
> servers and sure Ruby scales just fine !
> administration and cooling costs once the application written must be
> deployed to thousands of users.
For very small operations, this is true.
> An application that scales poorly will require more hardware. Hardware
> is cheap, but power and administrative resources are not. If you need 10
> servers to run a poorly-scaling language/platform versus some smaller
> number of servers to run other “faster/more scalable”
> languages/platforms, you’re paying a continuously higher cost to keep
> those servers running. Better scaling means fewer servers and lower
> continuous costs.
Actually, when people talk about something scaling well or poorly,
they’re usually talking about whether it scales linearly or demands
ever-increasing amounts of some resource. Something that scales very
well gains roughly the same additional capability from each new unit of
a given resource as it did from the units already employed. This is
usually counted starting after an initial base resource cost.
For instance, if you have minimal electricity needs for lighting, air
conditioning, and a security system, plus your network infrastructure,
and none of that will need to be upgraded within the foreseeable future,
you start counting your electricity resource usage when you start
throwing webservers into the mix (for a somewhat simplified example).
If you simply add one more webserver to increase load handling by a
fixed number of additional concurrent connections, you have linear
(good) scaling.
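To put some made-up numbers on that webserver example (the figures below
are purely illustrative, nothing from this thread), here is a quick Ruby
sketch of what linear scaling looks like once the base cost is paid:

  # Illustrative only: assume each webserver you add handles a fixed
  # number of concurrent connections beyond the base infrastructure.
  CONNECTIONS_PER_SERVER = 500   # hypothetical capacity per webserver

  def servers_needed(target_connections)
    (target_connections.to_f / CONNECTIONS_PER_SERVER).ceil
  end

  [1_000, 10_000, 100_000].each do |load|
    puts "#{load} connections -> #{servers_needed(load)} servers"
  end
  # Ten times the load needs ten times the servers: linear (good) scaling.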
On the other hand, if you have a system plagued by interdependencies and
other issues that make your scaling needs non-linear, that kind of
resource cost can get very expensive. Obviously, software design
decisions are part of what determines the linearity of your scaling
capabilities, but those decisions often come down to factors like
choosing a language that makes development easier, a framework that is
already well-designed for scaling, and so on. A language that compiles
to relatively high-performance binaries, or one that is compiled to
bytecode and executed by an optimizing VM, can help, but that doesn’t
magically make your software scale linearly. That depends on how the
software was designed in the first place.
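Here is a hypothetical sketch of that point, continuing the made-up
numbers from above (the coordination-overhead model is mine, not
something anyone in this thread measured): if every server gives up a
little capacity for each peer it has to coordinate with, a faster
runtime raises the numbers but doesn’t change the shape of the curve.

  # Hypothetical model: each server starts with BASE capacity but gives
  # up OVERHEAD connections' worth of work per peer it coordinates with.
  BASE     = 500    # illustrative per-server capacity
  OVERHEAD = 10     # illustrative per-peer coordination cost

  def cluster_capacity(servers, runtime_speedup = 1.0)
    per_server = BASE * runtime_speedup - OVERHEAD * (servers - 1)
    servers * [per_server, 0].max
  end

  [5, 10, 25, 50].each do |n|
    puts "#{n} servers: #{cluster_capacity(n).round} connections " \
         "(2x faster runtime: #{cluster_capacity(n, 2.0).round})"
  end
  # The faster runtime helps, but total capacity still flattens out and
  # eventually drops; the design, not the language, sets that shape.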
Throwing more programmers at the problem certainly won’t result in a
system that scales linearly either. What a larger number of programmers
on a single project often does, in fact, is ensure that scaling
characteristics across the project are less consistent. You may end up
with one particular part of the overall software serving as a scaling
bottleneck because its design characteristics are sufficiently different
from the rest that it requires either a refactor or ever-increasing
resources as scaling needs get more extreme. Oh, and there’s one more
thing . . .
> Even the most inexpensive and quickly-developed application’s savings
> will be completely overshadowed if deployment to a large datacenter
> results in unreasonably high month-on-month expenses.
Even the cheapest hardware and energy requirements will quickly become
astronomically expensive if you have to keep throwing more programmers
at the system. The more difficult a system is to maintain, the faster
the needed programmer resources grow. That’s the key: programming
resources don’t tend to scale linearly. Hardware resources, except in
very poor examples of software design, usually do.
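To make the shape of that argument concrete (every figure here is
invented for illustration; nothing in this thread gives real numbers),
compare a hardware bill that grows in step with load against a
maintenance bill that grows faster than linearly:

  # Invented figures, purely to illustrate the shape of the argument.
  SERVER_MONTHLY = 300       # hypothetical cost per server per month
  DEV_MONTHLY    = 10_000    # hypothetical cost per programmer per month

  def monthly_cost(load, programmer_growth_exponent)
    servers     = load                                # scales linearly
    programmers = load**programmer_growth_exponent    # may not
    servers * SERVER_MONTHLY + programmers * DEV_MONTHLY
  end

  [1, 4, 16].each do |load|
    maintainable   = monthly_cost(load, 1.0)   # programmers track load
    unmaintainable = monthly_cost(load, 1.5)   # programmers outpace load
    puts "load x#{load}: ~$#{maintainable.round} vs ~$#{unmaintainable.round}"
  end
  # The hardware term stays proportional to load; the programmer term is
  # what blows the budget when maintenance effort grows non-linearly.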