Recent Criticism about Ruby (Scalability, etc.)

Ruby will eventually (I think) become something more useful than PHP,
mostly because it is a language, like other cool ones (Lua, Erlang,
etc.) that attracts some very smart people.

Everyone seems to be focusing on the cost/benefit analysis. Have we
forgotten how much fun it is to program?

Todd

On Thu, Oct 04, 2007 at 07:10:05AM +0900, Jay L. wrote:

programmers (or programmer time) at the problem, that’s certainly
“realistic” in my estimation.

A lot depends on your application requirements. If you design it from the
ground up to be “shared nothing”, then you may well be lucky enough to
truly HAVE shared nothing. But you’ll also have a pretty limited feature
set.

“Feature-rich” is overrated. Anyone who tries to be everything to
everyone will end up being not the right thing to pretty much everyone.
You only get into the kind of trouble you describe when you try too hard
to get everyone interested.

What’s the big buzzword today? Social networking. What did we used to
call that? “Community.” What was the single biggest sticky-paper
community feature? Buddy lists. Who does buddy lists besides the Big Guys
(who can throw money at it) and the really small guys (who fit on a single
server)? Nobody. Why? Doesn’t scale linearly. Think about what it takes
to offer a feature that, for every simultaneous user, checks the list of
every other simultaneous user for people you know. Shared-nothing that.
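
To make the arithmetic concrete, here is a rough Ruby sketch of the naive
version of that check; the names in it (online_users, buddies_of) are made
up for illustration, not anything AOL actually ran:

    # Naive "who's online?" check: for every simultaneous user, scan every
    # other simultaneous user against their buddy list. With N users that is
    # O(N^2) comparisons per refresh -- tolerable on one box, brutal at millions.
    def online_buddies(online_users, buddies_of)
      notifications = Hash.new { |h, k| h[k] = [] }
      online_users.each do |user|
        online_users.each do |other|
          next if user == other
          notifications[user] << other if buddies_of.fetch(user, []).include?(other)
        end
      end
      notifications
    end

    buddies = { "jim" => ["sue"], "sue" => ["jim", "bob"], "bob" => [] }
    p online_buddies(["jim", "sue", "bob"], buddies)
    # => {"jim"=>["sue"], "sue"=>["jim", "bob"]}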

The answer to that, from where I’m sitting, is to choose between focusing
on “social networking” and focusing on something else. If you’re just
adding it as “yet another feature” to your application to become
buzzword-compliant, you’ll become another dot-com startup has-been. Of
course, there’s also always the business strategy of “look successful,
sell to someone big” without actually turning a positive buck along the
way – and if that’s what you want to do, you’re on your own.

My area of expertise was the AOL mail system. And, looking back, there
were a number of core features we offered that simply couldn’t be done in a
shared-nothing world over slow phone lines:

anything that makes any assumptions at all about the state of any database
you’re interacting with or relational integrity or any other transaction in
the system, ever. Including whether the disk drive holding the transaction
you just wrote to disk has disappeared in a puff of head crash.

You don’t always have to write shared-nothing code to get near-linear
scalability – and it’s true that near-linear scalability is something
that only exists within certain ranges before you hit a cost or resource
requirement spike, but if you’re smart you plan ahead for those kinds of
things. Things don’t always go as planned, of course, but if you’re
smart you plan for that, too, by setting aside “money for a rainy day”
and ensuring that, short of your main datacenters and every off-site
backup in the world being eliminated by meteor strikes simultaneously,
any major scaling issues will not require a sudden “right now” fix.

It was always the little things that bit us. Know why AOL screen names are
often “Jim293852”? Well, it started out as “The name ‘Jim’ is already
taken. Would you like ‘Jim2’?”. Guess how well that scales when the first
available Jim is “Jim35000”? Not very.
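
Here is a minimal Ruby sketch of that suggestion scheme, assuming nothing
more than a membership test against the set of taken names (the data and
numbers are illustrative, not how AOL actually stored accounts):

    require 'set'

    # Suggest the first free "base + number" name. The cost is one lookup per
    # candidate, so once Jim2..Jim35000 are gone, every new Jim costs ~35,000
    # lookups before a suggestion comes back.
    def suggest_name(base, taken)
      return base unless taken.include?(base)
      n = 2
      n += 1 while taken.include?("#{base}#{n}")
      "#{base}#{n}"
    end

    taken = (["Jim"] + (2..35_000).map { |i| "Jim#{i}" }).to_set
    puts suggest_name("Jim", taken)   # => Jim35001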

I’m curious how any of this is meant to support the position that a
faster-executing programming language that imposes greater hurdles on
programmer productivity will be a better investment for scalability than
designing a system that can absorb greater loads by adding hardware
resources.

Pop-quiz: Which of your core features would you have to eliminate with
three million simultaneous users?

Hopefully, by the time you have that many users, you’re making enough
money to be able to manage that many users. If not, your business model
sucks.

On Thu, 4 Oct 2007 06:19:19 +0900, Chad P. [email protected]
wrote:

In other words, I was assuming reasonably good code as a baseline,
and you were assuming reasonably bad code as a baseline. These
incompatible assumptions may be influenced by our respective work
environments.

Yes, I think that’s the crux of it.

I’ll expand on my position, then:

  1. Hire good programmers.

  2. Have them write good code.

  3. Throw hardware at the scaling problem, because your good code
    written by good programmers can handle it.

Perhaps we’ve also got slightly different audiences in mind –
“hire good programmers” is not advice I’d normally give to a
programmer.

-mental

On Fri, 5 Oct 2007 03:29:59 +0900, Chad P. [email protected]
wrote:

Adjusted for programmers:

  1. Be(come) a good programmer.

  2. Write good code.

  3. Let your boss throw hardware at the scaling problem, because the
    good code you wrote as a good programmer can handle it.

Better?

Fair enough. :)

-mental

On Fri, Oct 05, 2007 at 02:52:05AM +0900, MenTaLguY wrote:

  1. Hire good programmers.

  2. Have them write good code.

  3. Throw hardware at the scaling problem, because your good code
    written by good programmers can handle it.

Perhaps we’ve also got slightly different audiences in mind –
“hire good programmers” is not advice I’d normally give to a
programmer.

Adjusted for programmers:

  1. Be(come) a good programmer.

  2. Write good code.

  3. Let your boss throw hardware at the scaling problem, because the
    good code you wrote as a good programmer can handle it.

Better?

On Fri, 5 Oct 2007 02:03:42 +0900, Chad P. wrote:

truly HAVE shared nothing. But you’ll also have a pretty limited feature
set.

“Feature-rich” is overrated. Anyone who tries to be everything to
everyone will end up being not the right thing to pretty much everyone.
You only get into the kind of trouble you describe when you try too hard
to get everyone interested.

*chuckle* I do believe that’s the first time in recorded history that
anyone has accused AOL of being feature-rich.

The point I was making with all the features you snipped was that it
doesn’t take wild, pie-in-the-sky everything-to-everyone features to
prevent your application from scaling linearly. Any little thing can trip
you up. Most of the features I listed were either small facets of behavior
or byproducts of other design decisions. And some of them (e.g. saving
disk space) were actually “scaling” features themselves; what helps you
scale to 100x (fitting on the available disk drives) may hinder you at
10000x (when your servers are in different data centers).

adding it as “yet another feature” to your application to become
buzzword-compliant, you’ll become another dot-com startup has-been. Of
course, there’s also always the business strategy of “look successful,
sell to someone big” without actually turning a positive buck along the
way – and if that’s what you want to do, you’re on your own.

I’m not really sure what that has to do with… well, with anything. But
then, my point probably wasn’t all that clear to you, either. The point
was that there’s always a tension between feature-set and scalability.
Sometimes, you go too far in one direction or the other. Looking back, I
probably didn’t need to be as adamant as I was that “‘Your mail has been
sent’ means your mail has been sent”; the rest of the Internet has learned
to cope with “Your mail has, in all probability, been sent or is about to
be really soon, generally speaking.”

On the other hand, I find it remarkable that a huge crop of “social
networking” sites have become immensely popular without the most obvious
social networking feature - who else is here? - because that feature just
doesn’t scale. It would be like if e-commerce grew to its current levels
without real-time credit card processing, or if Flickr only let you upload
ASCII-art of photos because photos are too big to store.

You don’t always have to write shared-nothing code to get near-linear
scalability – and it’s true that near-linear scalability is something
that only exists within certain ranges before you hit a cost or resource
requirement spike, but if you’re smart you plan ahead for those kinds of
things. Things don’t always go as planned, of course, but if you’re
smart you plan for that, too, by setting aside “money for a rainy day”
and ensuring that, short of your main datacenters and every off-site
backup in the world being eliminated by meteor strikes simultaneously,
any major scaling issues will not require a sudden “right now” fix.

I both agree and violently disagree. The problem with “if you’re smart you
plan ahead” is that (a) you often won’t know what your pain points will be
until shortly before you hit them, (b) even if you do, they may not be in
your control, and (c) you don’t always know how fast you’re going to grow.
It would be foolish for me to invest in a large Arizona data center in case
the traffic to my last-updated-in-2005 blog spikes 10000x next year. (And
I do keep promising my financial advisor that I’m selling the data center.)

But sometimes externalities do hit your business; it becomes “steam engine
time”, and you’re the steam engine. Verizon née AT&T née the Bell
Telephone Company had, literally, over a hundred years of experience
telling them how much their business would grow each year. And that worked
very well until 1995, when all of a sudden the online world was booming,
people were leaving their phones off the hook even when they weren’t home,
and suddenly they ran out of dial tones.

Luckily for them, they were the phone company; nobody had anywhere to run.
But if they were in a competitive business, they’d be toast, because
someone would fill the need that they couldn’t. Remember Friendster?
Great idea, great product, right time, couldn’t scale as quickly as their
user base, slow site, toast.

That said, I agree that most major scaling issues do not require a “right
now” fix. If you know what to measure, you will know what to fix next.
Don’t measure queue depth; if you’re queuing, it’s too late. Measure
percent busy. Fifteen years ago, “top” only told you your “load average” -
that’s a queue depth. Now it tells you percent busy, and you can predict
when you’ll be out of capacity. You should always know what percent of
your CPU, your disk, your network, your any-resource-here is being used.
And that includes time, too; if you have batch runs that take longer than
24 hours to process 24 hours’ worth of data, you ain’t catching up any time
soon. But it takes experience to learn where the knee of the curve is.
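
As a back-of-the-envelope illustration of that prediction (the utilization
figure and growth rate below are invented, purely to show the arithmetic),
a few lines of Ruby are enough:

    # Given today's percent-busy and a steady monthly growth rate, estimate
    # how many months remain before the resource is saturated.
    def months_until_saturated(percent_busy_now, monthly_growth)
      months = 0
      busy = percent_busy_now
      while busy < 100.0
        busy *= (1.0 + monthly_growth)
        months += 1
      end
      months
    end

    puts months_until_saturated(40.0, 0.10)   # CPU 40% busy, +10%/month => 10

    # The batch-window version of the same test: if a day's data takes more
    # than a day to process, you are falling behind and will never catch up.
    hours_to_process_one_day = 26
    puts "falling behind" if hours_to_process_one_day > 24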

You can also control the demand side to some extent. Google really knows
how to do it well; they restrict growth by burying new features deep in
Google Labs, or only showing them to a small percentage of users, or using
an invite system.

I’m curious how any of this is meant to support the position that a
faster-executing programming language that imposes greater hurdles on
programmer productivity will be a better investment for scalability than
designing a system that can absorb greater loads by adding hardware
resources.

It isn’t. Elsewhere in this thread, in fact, I was arguing that programmer
productivity is far more important to orders-of-magnitude scalability than
raw language performance. I agree with you on that. Where I disagreed was
that it was realistic for “many purposes” to assume linear scalability.
Show me any site design and I’ll show you a dozen places it falls over at a
few orders of magnitude.

On Fri, Oct 05, 2007 at 06:15:05AM +0900, Jay L. wrote:

ground up to be “shared nothing”, then you may well be lucky enough to
truly HAVE shared nothing. But you’ll also have a pretty limited feature
set.

“Feature-rich” is overrated. Anyone who tries to be everything to
everyone will end up being not the right thing to pretty much everyone.
You only get into the kind of trouble you describe when you try too hard
to get everyone interested.

*chuckle* I do believe that’s the first time in recorded history that
anyone has accused AOL of being feature-rich.

AOL has always been “feature-rich”. It just isn’t the right thing for
almost anyone at all, because when you get that “feature-rich” you get
very feature-targeted – in that you’re catering only to the people who
want all, or most, of what you provide. People who want little or none
of what you provide (beyond basics) will go somewhere else, because that
“somewhere else” doesn’t impose a whole lot of overhead. The fact that
AOL features have often been broken, slow, and in-the-way kludgey never
changed the fact that there were a lot of them – and, in fact, it’s in
large part the sheer weight of features that made the feature set so
unusable to so many people.

Trying to trap people in an AOL-only internet, rather than letting them
seamlessly out into the Internet, was a “feature” – it was just a
feature pretty much nobody wanted. Most of AOL’s features have tended to
be much like that.

The point I was making with all the features you snipped was that it
doesn’t take wild, pie-in-the-sky everything-to-everyone features to
prevent your application from scaling linearly. Any little thing can trip
you up. Most of the features I listed were either small facets of behavior
or byproducts of other design decisions. And some of them (e.g. saving
disk space) were actually “scaling” features themselves; what helps you
scale to 100x (fitting on the available disk drives) may hinder you at
10000x (when your servers are in different data centers).

. . . but you can get pretty close to linear scalability within specific
ranges of scaling, especially if you avoid massive feature lists.
Sure, they don’t have to be “wild, pie-in-the-sky” features, but you
missed my point with that statement. My point wasn’t that one feature is
“everything to everyone”, but that seventy features is trying to provide
exactly that without doing any one thing that, examined in a vacuum,
looks unreasonable.

In other words, if you want to minimize scalability hurdles, one of the
most important things you can do is pick a focus area.

on “social networking” and focusing on something else. If you’re just
adding it as “yet another feature” to your application to become
buzzword-compliant, you’ll become another dot-com startup has-been. Of
course, there’s also always the business strategy of “look successful,
sell to someone big” without actually turning a positive buck along the
way – and if that’s what you want to do, you’re on your own.

I’m not really sure what that has to do with… well, with anything. But
then, my point probably wasn’t all that clear to you, either. The point
was that there’s always a tension between feature-set and scalability.

That was sorta my point, too – except that I was saying that since
there’s a tension, you need to pick a direction, and if your direction
kills scalability that’s your own fault and not disproof of the fact that
near-linear scalability is possible. The fact of the matter is that the
same things that break linearity of scalability for your software are
the things that break linearity of scalability for everything else,
too. You may start watching your “everything to everyone” business plan
circling the drain, now.

On the other hand, I find it remarkable that a huge crop of “social
networking” sites have become immensely popular without the most obvious
social networking feature - who else is here? - because that feature just
doesn’t scale. It would be like if e-commerce grew to its current levels
without real-time credit card processing, or if Flickr only let you upload
ASCII-art of photos because photos are too big to store.

I don’t find that so odd. The “most obvious” social networking feature
is actually not all that great a feature for a new business venture. It
was solved a long damned time ago with technologies like IRC. It’s not
new. The other stuff being implemented by all these “social networking”
sites is new, at least in an Internet context – or presented in a new
manner.

Telephone Company had, literally, over a hundred years of experience
telling them how much their business would grow each year. And that worked
very well until 1995, when all of a sudden the online world was booming,
people were leaving their phones off the hook even when they weren’t home,
and suddenly they ran out of dial tones.

Luckily for them, they were the phone company; nobody had anywhere to run.
But if they were in a competitive business, they’d be toast, because
someone would fill the need that they couldn’t. Remember Friendster?
Great idea, great product, right time, couldn’t scale as quickly as their
user base, slow site, toast.

I don’t generally like to be so harsh, but . . . if you plan badly, your
plan fails. Sorry to burst the bubble for anyone who thinks that hard
work and good intentions should automatically translate into success.
This is not something that can be blamed on the potential scalability of
well-written software. The blame for that rests entirely at the feet of
those who made the planning decisions in the first place.

that it was realistic for “many purposes” to assume linear scalability.
Show me any site design and I’ll show you a dozen places it falls over at a
few orders of magnitude.

Show me where it falls over at a few orders of magnitude, and I’ll show
you software that is being misused – or, looked at from the other
direction, miswritten – if it’s being written well at all. If it’s not
being written well at all, that’s pretty much irrelevant to my point
anyway, since poor software development can kill anything.

This is why focus is important: when you’re trying to be all things to
all people (the all-singing, all-dancing, dish-washing performing
monkey), there’s no give in any area to make compromises so that in other
areas it’ll scale, because there’s nothing you don’t need out of a system.

Todd B. wrote:

Ruby will eventually (I think) become something more useful than PHP,
mostly because it is a language, like other cool ones (Lua, Erlang,
etc.) that attracts some very smart people.

There are some languages that can only be used by “very smart people”.
APL comes to mind, and I suspect there are those who could make the same
case for Forth, Haskell and Prolog. For “most of us”, languages like
Python, Perl, PHP, Ruby and Lua are great because they’re easy to
learn even if you’re not a “very smart person”.

Everyone seems to be focusing on the cost/benefit analysis. Have we
forgot about how fun it is to program?

I haven’t … but I don’t think fun is language-specific. I can’t think
of a single programming language I hated using, but then, I never used
RPG. :) That one I think would have sucked.

On Fri, 5 Oct 2007 06:41:13 +0900, Chad P. wrote:

being written well at all, that’s pretty much irrelevant to my point
anyway, since poor software development can kill anything.

Then I’m afraid we just have to disagree. My experience scaling a site
from five hundred simultaneous users to three million tells me that
everything - everything - falls over at a few orders of magnitude. I
would be curious to hear your experiences where it didn’t.

Now, granted, the state of the art has advanced quite a bit in the past
decade, to say the least. So maybe you’re just used to working with
software that’s already been rewritten to handle eBay-sized needs, and as
far as you know, it’s always just worked that way.

But it didn’t. When I was playing this game for real, instead of on a
newsgroup, abso-freaking-lutely nothing scaled that way. I’m not talking
toy software; I’m not talking homegrown software. I’m talking HP-UX. BIND.
Sendmail. MMDF. Apache. Solaris. Cisco. Stratus. Tandem. Auspex.
Network Appliance. EMC. Various TCP stacks. Various filesystems.
Sybase. Oracle. Informix. In short, just about everything. We never ran
an OS or a piece of software that we didn’t have to modify, or get the
vendor to modify, at least once to handle the load.

If you want to just write off all that as “bad software” by definition -
hey, if it didn’t scale, it’s bad software! - then you’re missing the
point. Even if it is bad software by today’s standards, it certainly
wasn’t at the time. Which means software that we think is good today may,
in fact, not be. Which means: You need more than pithy sayings about
business plans to write scalable software.

Again, I’m curious to hear your real-world experiences, since they differ
greatly from mine. Maybe everything’s changed now. Tell me some stories
about what didn’t break.

On Fri, 5 Oct 2007, M. Edward (Ed) Borasky wrote:

There are some languages that can only be used by “very smart people”. APL
comes to mind, and I suspect there are those who could make the same case
for Forth, Haskell and Prolog. For “most of us”, languages like Python,
Perl, PHP, Ruby and Lua are great because they’re easy to learn even if
you’re not a “very smart person”.

Huh? Forth was one of the easiest languages for me to learn. Ruby has
been a LOT more work.

– Matt
It’s not what I know that counts.
It’s what I can remember in time to use.

On Fri, 5 Oct 2007 06:41:13 +0900, Chad P. wrote:

*chuckle* I do believe that’s the first time in recorded history that
anyone has accused AOL of being feature-rich.

AOL has always been “feature-rich”.

OK, the first time got a chuckle. The second time gets a “what AOL were
you smoking?”

Seriously. You’re a techie, not in AOL’s targeted “novice” market, so I
assume most of your interactions with AOL were (a) commercials involving
Batman, (b) free DVDs in your cereal box, and (c) friends and family that
asked you for help in uninstalling it. So I can understand if you got the
wrong impression of AOL, and all the shiny lights impressed you.

Experience is subjective, and all that, so I can never know the AOL you
used. All I know is the AOL I wrote. And it had like a dozen features
visible to the user.

Seriously. The features boiled down to:

  • Get connected (by some modem bank near you, hopefully)
  • Get registered (and choose a pricing plan that you like)
  • Get mail, with a friendly voice, and send some, too
  • See a welcome screen (“portal!”) that told you what might be
    interesting
  • Go to chat rooms
  • Send instant messages (and later see a buddy list)
  • Message boards and software libraries
  • Navigate graphical or text forms, all of which either led you to other
    forms, or text articles, or gateways to outside information services
    (news, weather, etc.)
  • Eventually, ta-da: Go to the web, or at least a small, embedded version
    of the web.

That’s about it. The vast majority of the server-side software deals with
the implementation details of it all; drilling down to specify exact
behaviour for each use case, or “drilling out” to see what reporting,
billing, maintenance, scaling, security, etc. requirements are implied by
the design.

The AOL software, even in its current “most advanced ever” form, still
doesn’t have a fraction of the features that your average web browser, mail
client, or - heck - text editor has. The reason it’s “so easy to use it
[was] #1” is that any request to add any feature went through a very
thorough vetting to predict how many people would actually want that
feature, and how many novices would be confused by it. That’s not a recipe
for feature bloat.

Trying to trap people in an AOL-only internet, rather than letting them
seamlessly out into the Internet, was a “feature” – it was just a
feature pretty much nobody wanted. Most of AOL’s features have tended to
be much like that.

Well, there were a few things that led AOL down that path.

The first was the most pragmatic: AOL was launched and well-established
before the idea of “commercial internet access to the masses” was even
considered feasible. I don’t remember exactly what year the rules changed
to allow commercial providers to resell Internet access, but AOL’s
infrastructure was starting to take shape in 1982, and our core servers
didn’t even speak TCP/IP well. So the bolt-on nature of most of the early
Internet-related features was a function of legacy design.

The second was the very quick realization that we WERE the Eternal
September everybody feared. If AOL handed everyone a fresh copy of
Internet Explorer, put them through two weeks of Computer Camp, gave them a
list of web sites, and said “go!”, the world would have been crushed by the
weight. AOL actually had to provide the cushion, in the form of caching
proxies - which, yes, had many of their own flaws.

I know that on the engineering side, as the large-scale Internet grew up
around us, we spent most of our time trying to figure out the best ways to
leverage the new technology and retire our decade-old kludges. L2TP
replaced many parts of the internal client-to-server protocol, and the
back-end mail system now natively supports a lot more IMAP functionality
than it was ever designed to do.

But I can babble about AOL all day and see your criticisms and top them
tenfold, but nobody really cares about that ancient history.

I use AOL as an example in this scaling thread because people are talking
about what it takes to scale an application, and we seem to have agreed
that we’re now talking about orders-of-magnitude scaling, not just “Should
I define more mongrels” scaling. And I’ve got lots of experience in
knowing what does and doesn’t scale, and what the early warning signs are
and what they aren’t.

And the point I keep trying to make, though you keep deflecting it, is that
there ARE no guarantees that if you “do everything right” with a focused
business plan and a lightweight feature set, you’re golden for scaling. Go
back to the little list of AOL mail features I posted, and realize that not
a single one of them scales linearly. And realize that you might well
implement a feature like that in a different system, maybe even without
thinking about it, because it’s not a big deal. (Let’s select a nicer
screen name for someone if they can’t find the one they want.)

And those features are the ones that bite you, more so than the “you tried
to be everything to everyone” features that seem to worry you. AOL wasn’t
an Enterprise-Grade Solution System Provider Framework for Communicating
Entity Value Relations among Stakeholders; that type of “a something to do
somethings” overgeneralization stays in the enterprise world. AOL was just
a way to talk to people.

Chad P. wrote:

  1. Hire good programmers.
  1. Be(come) a good programmer.

  2. Write good code.

  3. Let your boss throw hardware at the scaling problem, because the
    good code you wrote as a good programmer can handle it.

Better?

I’m with Jay on this one – no matter how good the programmers and the
code are, there are limits to scalability. Remember, I’m a performance
engineer. ;)

Then again, there are much better ways to explain scalability and
capacity planning than the way some authors do it. I won’t mention any
names, of course …

On Fri, Oct 05, 2007 at 09:30:05AM +0900, Jay L. wrote:

Now, granted, the state of the art has advanced quite a bit in the past
decade, to say the least. So maybe you’re just used to working with
software that’s already been rewritten to handle eBay-sized needs, and as
far as you know, it’s always just worked that way.

I think you’re assuming I’m talking about software that is written, put
into operation, and never touched again by programmers. I’m not. I’d
like to be very clear about this:

Scalability assumes maintenance – lots of maintenance. Over time,
you end up with software that may very well contain no more than about
2% original code (probably less).

The world itself is changing, and with web startups popping up all over
the place more and more people are thinking about their millionth user
about the time they have zero users (because the software isn’t even in
usable alpha testing state yet). Things like BIND, et al., weren’t
really planned ahead that way (and, for that matter, neither were things
like TCP/IP – thus the IPv6 vapornet we’ve been hearing so much about).
These days, every time someone talks about writing software, they talk
about making sure it doesn’t crash and burn when they “hit the big time”.

When one of these things takes off on the web these days, it takes off
fast. It will require rewriting and tweaking of components, but in the
midst of all this it has to be able to scale easily from day one without
rewriting the entire thing in a dark room from scratch then hot-swapping
it into operation. That sort of thing doesn’t work, in part because it
means you lose a lot of resources on the production software maintenance
and in part because when you do that you suddenly discover a lot of bugs
in your bug-free software.

Part of writing scalable software is writing software that can be
upgraded piecemeal, as needed. Couple that with the ability to throw
hardware at it, without missing a step, and you’ve got a winner.
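
As a minimal sketch of that combination (nobody’s real architecture, and
SessionStore here is just an in-memory stand-in for whatever shared service
you would actually use), keeping per-user state behind a small interface is
what lets “add another box” work at all:

    # A tiny store interface. In a real deployment this would wrap a shared
    # service (a database, memcached, etc.) rather than a local hash.
    class SessionStore
      def initialize
        @data = {}
      end

      def get(key)
        @data[key]
      end

      def set(key, value)
        @data[key] = value
      end
    end

    # The handler keeps no state of its own, so any number of copies can run
    # on any number of machines, all pointed at the same store, and each piece
    # can be rewritten or swapped out independently.
    class CounterHandler
      def initialize(store)
        @store = store
      end

      def handle(user_id)
        count = (@store.get(user_id) || 0) + 1
        @store.set(user_id, count)
        "Hello #{user_id}, visit number #{count}"
      end
    end

    store    = SessionStore.new
    handlers = Array.new(3) { CounterHandler.new(store) }   # "three servers"
    puts handlers[0].handle("jim")   # => Hello jim, visit number 1
    puts handlers[2].handle("jim")   # => Hello jim, visit number 2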

If you want to just write off all that as “bad software” by definition -
hey, if it didn’t scale, it’s bad software! - then you’re missing the
point. Even if it is bad software by today’s standards, it certainly
wasn’t at the time. Which means software that we think is good today may,
in fact, not be. Which means: You need more than pithy sayings about
business plans to write scalable software.

You’re misunderstanding (or misrepresenting) what I’ve said. Something
isn’t bad software because it ran into a limit on scalability. It may be
bad at scaling – and if scaling was the point of the software design,
that is bad(ly written) software. If scaling wasn’t the point of the
software design, and you find that it’s being used in a situation where
that scalability is needed, either your design decisions were poor (in
retrospect at least), or you’re “misusing” it.

I thought I already made that point.

Again, I’m curious to hear your real-world experiences, since they differ
greatly from mine. Maybe everything’s changed now. Tell me some stories
about what didn’t break.

Everything “breaks”. If it only breaks a little at a time, and you have
a plan in place for dealing with those little breaks – and you’re
lucky – then you might scale smoothly.

If not, it wasn’t scalable.

On Fri, Oct 05, 2007 at 11:40:04AM +0900, Jay L. wrote:

Seriously. You’re a techie, not in AOL’s targeted “novice” market, so I
assume most of your interactions with AOL were (a) commercials involving
Batman, (b) free DVDs in your cereal box, and (c) friends and family that
asked you for help in uninstalling it. So I can understand if you got the
wrong impression of AOL, and all the shiny lights impressed you.

So now I’m stupid and easily distracted by shiny things. Thanks.

Experience is subjective, and all that, so I can never know the AOL you
used. All I know is the AOL I wrote. And it had like a dozen features
visible to the user.

What was this – circa '87? Sorry, I wasn’t familiar with AOL prior to
the early '90s in any sense. I guess I should have been more specific in
my use of the word “always”. Perhaps that means I’m easily distracted by
pretty things, and unable to apply critical thought to concepts like
“organic expansion of a chatroom network is a feature”.

And the point I keep trying to make, though you keep deflecting it, is that
there ARE no guarantees that if you “do everything right” with a focused
business plan and a lightweight feature set, you’re golden for scaling. Go
back to the little list of AOL mail features I posted, and realize that not
a single one of them scales linearly. And realize that you might well
implement a feature like that in a different system, maybe even without
thinking about it, because it’s not a big deal. (Let’s select a nicer
screen name for someone if they can’t find the one they want.)

I never said there were guarantees – but I can see how you’d make the
assumption that my statements led in that direction, what with the fact
it would fit in so well with my obvious stupidity. That, or you’re
imagining I said things I didn’t because that’s easier to dispute.

On Fri, Oct 05, 2007 at 11:47:08AM +0900, M. Edward (Ed) Borasky wrote:

I’m with Jay on this one – no matter how good the programmers and the
code are, there are limits to scalability. Remember, I’m a performance
engineer. ;)

There are always limits – but I’m talking about scalability in the sense
of “scalability within the realm of reason”. Obviously, I’m not
suggesting that reddit is ready to take on the complete userbase of the
Andromeda Galactic Empire added to the already weighty traffic it gets
from one measly little planet when Digg manages to piss off most of its
user-base on a censorship lark.

There are similarly limits to Moore’s Law (insofar as it has ever really
been “true”), but I don’t think we’ve approached them yet (again, insofar
as it has ever really been applicable).

Then again, there are much better ways to explain scalability and
capacity planning than the way some authors do it. I won’t mention any
names, of course …

I’m afraid you must be getting a little too subtle for me.

On Fri, Oct 05, 2007 at 11:29:48AM +0900, M. Edward (Ed) Borasky wrote:

There are some languages that can only be used by “very smart people”.
APL comes to mind, and I suspect there are those who could make the same
case for Forth, Haskell and Prolog. For “most of us”, languages like
Python, Perl, PHP, Ruby and Lua are great because they’re easy to learn
even if you’re not a “very smart person”.

(OT) My first full-time IT job back when I was 24 or so was to write
assembly line startup personnel planning software using VS APL 4.0 on
VM/CMS (and using GDDM for graphics). Thanks for making me feel good about
that. :)

On Fri, 5 Oct 2007 11:29:48 +0900, M. Edward (Ed) Borasky wrote:

I haven’t … but I don’t think fun is language-specific. I can’t think
of a single programming language I hated using, but then, I never used
RPG. :) That one I think would have sucked.

Having just jumped back to PL/I for a few weeks… that one’s no fun at ALL.

I mean, it’s good for what it does; it’s a lot like “C with the pain taken
out and some sugar”. But I don’t say “Whee, I’m programming in PL/I!” And
I don’t stop to marvel at how elegant some code turned out thanks to some
PL/I construct. Doesn’t happen.

I was recently asked what languages were my favorite before Ruby. They had
to repeat the question. I never had the concept of a favorite language
before Ruby.

Chad P. wrote:

Then again, there are much better ways to explain scalability and
capacity planning than the way some authors do it. I won’t mention any
names, of course …

I’m afraid you must be getting a little too subtle for me.

It does not refer to anyone on this list.

Jay L. wrote:

I don’t stop to marvel at how elegant some code turned out thanks to some
PL/I construct. Doesn’t happen.

I was recently asked what languages were my favorite before Ruby. They had
to repeat the question. I never had the concept of a favorite language
before Ruby.

  1. I sometimes refer to Ruby as a happy marriage of Java and Perl. PL/I,
    on the other hand, was a shotgun wedding of FORTRAN and COBOL. :)

  2. I have the concept of a “second-favorite language”. My favorite
    language is the one I get paid to write, and my second-favorite is the
    one I’m learning at any given time.

So I think my second-favorite language of all time – one that I enjoyed
more than any other but never got paid to write – is a dead tie between
Lisp 1.5 and Forth. It seems more likely I’ll get paid to write Ruby
than either of them. :)

On Fri, 5 Oct 2007 12:29:48 +0900, M. Edward (Ed) Borasky wrote:

  1. I sometimes refer to Ruby as a happy marriage of Java and Perl. PL/I,
    on the other hand, was a shotgun wedding of FORTRAN and COBOL. :)

Does that mean my claim of having gotten through life without ever learning
COBOL or FORTRAN is untrue?? Oh no.