The parens in Lisp dialects are there for a reason... macros

“Francis C.” [email protected] writes:

Even though the Church and Turing models of computation are unified
(and were unified long before digital computers existed), there still
seems to be something intuitively attractive to many programmers about
stuffing values into little boxes and being able to go get the values
out of the boxes later. I’m not sure if this is fundamental, or a
side-effect of the fact that our practical computing machines are all
Turing machines and most programmers learn something about “memory”
and “secondary storage” early in their training. This explains why
pure functional programming is stylistically so challenging in
practice, whether or not you believe it’s challenging in theory.

Turing machines derive from a model of a human working with pencil and
paper. That’s pretty fundamental.

Steve

M. Edward (Ed) Borasky wrote:

Ah, but you can buy or download for free really good compilers for
functional languages. See the conference announcement I posted a couple
of days ago. Maybe you’d even like to come visit the City of Roses? :slight_smile:

I’m awfully impressed by the recent quality of the compilers for
lambda-languages, especially Lisp. This was far from true a dozen years
ago when I was going through the exercise. Also, I was a professional
compiler-writer at the time. I ditched that career about five minutes
after I read the Dr. Dobb’s article announcing Java, in May 1995. (Well,
not completely. That summer I wrote a bytecode->native x86 compiler. But
the handwriting was on the wall.)

Interesting, your comment about Rails. (Although now we’re mixing CoC up
with my intuitions about fundamentally restricting language power.) You
notice how people talk about Rails as the “DSL of the web”?

Part of what I’m interested in is permitting arbitrary Ruby code to run
in IoC containers with DI. That does imply some restrictions on how you
write your classes, not least so the framework can automatically give
you diagnostics and management tools. The simple fact that people use
TDD effectively in the agile world shows that there is something to
this: you’re writing code-objects that are designed to be testable.
Well, I’d like to discover what are the best practices for writing
code-objects that are designed to be distributable and composable. (Of
course we already know about an approach that won’t work –
convention-based reflection in the manner of JavaBeans.)
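
To make that concrete, here’s the sort of restriction I have in mind,
sketched in Ruby with made-up class names: the object declares the
collaborators it needs rather than constructing them, so a container (or
a test harness) can supply and inspect them.

```ruby
# A minimal sketch (class names are hypothetical) of constructor-based
# dependency injection: the class declares its collaborators instead of
# constructing them, so a container or a test can supply them.
class ReportGenerator
  def initialize(data_source, formatter)
    @data_source = data_source   # injected collaborator
    @formatter   = formatter     # injected collaborator
  end

  def run
    @formatter.format(@data_source.fetch)
  end
end

# In a test (or a container), stand-ins compose just as easily as the
# real objects would:
class FakeSource
  def fetch
    [1, 2, 3]
  end
end

class FakeFormatter
  def format(rows)
    "formatted #{rows.size} rows"
  end
end

puts ReportGenerator.new(FakeSource.new, FakeFormatter.new).run
# => formatted 3 rows
```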

On Wed, Aug 09, 2006 at 12:53:49AM +0900, M. Edward (Ed) Borasky wrote:

But of course, the Rails folks rediscovered all of this – the power
that comes from restricting some freedoms. Maybe the Python motto –
“There’s only one way to do it” is the right thing after all.

It’s not restriction that does this for you – it’s well-designed
default abstractions. With some languages and for some tasks, those
defaults must exist in your head, and with others they exist in the
toolset (the language interpreter, the framework, et cetera). In either
case, they need not be restrictions, per se. Restrictions are
inviolable defaults, and Python’s inviolable defaults are reasonably
well-designed, but they’re still limiting when one must do something
that is best accomplished in a manner incompatible with those defaults.

Perl has a “horn of plenty” approach to defaults, where you pick and
choose your default behavior set from an incredibly large menu of
discrete defaults to suit your personal taste and your problem domain.
This has its ups and downs: on the upside, you have more flexibility
when you must accomplish something not covered by the vision of the guys
that created the language’s defaults and, on the downside, you have a
language so complex that you’re almost guaranteed to end up using a
suboptimal set of chosen default abstractions for your task because you
don’t know the entire language (Larry Wall has, himself, said he doesn’t
know all of Perl).

Lisp, in my understanding, achieves greater flexibility than Perl by
making it absurdly easy to arbitrarily generate the defaults that best
suit the task at hand. Ruby, meanwhile, is a great compromise between
all three of those approaches: there are default abstractions that are
clearly defined, and it encourages the programmer to use them for
solving problems, but it makes it possible to use a plethora of other
abstractions that are nondefault and still available to the programmer.
When that’s not enough, it provides macros for building your own
defaults from “scratch” with notable ease.

Obviously, this is all oversimplified, and I’m hampered by varying
levels of familiarity with the various languages discussed when I
explain my take on them, but I’m pretty sure nobody knows everything
about all these languages so I’ll just chalk it up to “the human
condition”.

Steven L. wrote:

Turing machines derive from a model of a human working with pencil and
paper. That’s pretty fundamental.

Steve

So then you would argue that imperative languages are fundamentally more
natural for people to program in than functional languages?
Notwithstanding that they are mathematically equivalent?

I’m not an FP partisan myself so I won’t argue with you, but I have a
feeling some Lisp and OCaml fanboys are firing up their blowtorches for
you :wink:

Chad P. wrote:

It’s not restriction that does this for you – it’s well-designed
default abstractions. With some languages and for some tasks, those
defaults must exist in your head, and with others they exist in the
toolset (the language interpreter, the framework, et cetera). In either
case, they need not be restrictions, per se. Restrictions are
inviolable defaults, and Python’s inviolable defaults are reasonably
well-designed, but they’re still limiting when one must do something
that is best accomplished in a manner incompatible with those defaults.

That’s all perfectly fine. Every language gives you a set of metaphors
with which you can model the real-world domains you care about, and
that’s why we use languages in the first place. Because you get leverage
from their conventions and restrictions. And of course this applies to
level-3 languages like English and French, as well as level-1 languages.

What I was reaching for was some insight into why full-powered
(Turing-complete) computer languages are so difficult to write automated
tools and workbenches for. The challenge is not how humans can reason
about programs (undecidability is not a big hurdle for us), but how
programs can reason about programs. And my (unproved) intuition is that
mathematically restricted DSLs (whether they are Lisp programs, Ruby
programs, graphical widgets, physical constructions, or any other such
equivalent thing) may be so much easier to tool that they can give a
vast productivity improvement.

Low-grade example: I’ve been working on a component-based
web-development framework in Ruby, and most of it is DSL-driven. So far,
it’s been remarkably easy to build graphical tools for it.
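
To give a flavor of why (this is a made-up sketch, not my actual
framework): when the DSL’s declarations are recorded as plain data on
the class, a graphical tool can read them back without having to parse
or execute arbitrary Ruby.

```ruby
# A hypothetical declarative component DSL. Each declaration is stored
# as plain data on the class, so a tool can walk the declarations
# instead of analyzing code.
class Component
  def self.declarations
    @declarations ||= []
  end

  def self.field(name, opts = {})
    declarations << { kind: :field, name: name, opts: opts }
    attr_accessor name
  end

  def self.action(name, opts = {})
    declarations << { kind: :action, name: name, opts: opts }
  end
end

class LoginBox < Component
  field  :username, label: 'User name'
  field  :password, label: 'Password', secret: true
  action :submit,   label: 'Log in'
end

# A tool (form painter, inspector, doc generator) only needs the data:
LoginBox.declarations.each do |d|
  puts "#{d[:kind]}: #{d[:name]} #{d[:opts].inspect}"
end
```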

On Tue, Aug 08, 2006 at 10:31:30PM +0900, M. Edward (Ed) Borasky wrote:

programming style, and the term “factoring”, are prominent in that work
(which by the way is available online now as a PDF). I think it’s
something all programmers of a certain level of maturity do regardless
of language.

That’s true, but the advantage to Lisp, it seems to me, isn’t merely a
perception that has grown from “we did it first” syndrome. I think it
boils down to the fact of Greenspun’s Tenth Rule:

Greenspun’s Tenth Rule of Programming: “Any sufficiently complicated C
or Fortran program contains an ad hoc, informally-specified, bug-ridden,
slow implementation of half of Common Lisp.”

More generally, any sufficiently complicated program written in a
language that doesn’t have the same facilities for (as I put it earlier)
jargon-definition as Lisp (which means basically all languages, still,
though some like Ruby are getting awfully close) must essentially
implement part of a Lisp semantic structure for certain types of highly
complex problems. In the case of something sufficiently low-level, such
as C or Fortran, that pretty much means a very scattered Lisp
implementation for particularly complex problems of any type.

For a significantly Lispy language like Ruby, however, there’s a
nontrivial percentage of potential problem domains for which Ruby
already contains all the necessary behaviors of Lisp to solve them in
the most efficient manner possible, and in addition to that Ruby
implements some excellent features that are not default to Lisp but
might be created within Lisp to solve certain types of problems. This
makes Ruby more convenient for solving some problems and (roughly)
equally convenient for solving other problems. There is then a fairly
small set of problem types for which one would have to reimplement Lisp
to solve the problem most efficiently in Ruby.

The key, of course, is to pick the language that best suits what you do
when programming: if your language of choice already does (almost)
everything for which you’d have to create your own syntax in Lisp, you
probably have the right language for the job. If not, you might need to
find a different language that does so – or use Lisp to abstract the
problem sufficiently to make it easy to solve.

The way this all works can be analogized to trigonometry: in trig, there
are three basic “laws” from which all other rules for doing trigonometry
are derived. Lisp is like those “laws” with a syntax designed to
simplify the process of derivation. Other languages (such as Fortran,
to pick a target at semi-random) are like a set of useful derivations
that you’ve memorized to take a test in a trig class. One can work
backward from those derivations to arrive at the original three “laws”
of trigonometry, and from there arrive at other derivations that can be
used to solve certain problems, and that tends to be necessary in a lot
of complex problem solving to some degree at least. Some languages
provide closer approximations to those three basic principles by
default, so that less work is necessary to solve problems not central to
the design of the language (I’m looking at you, Ruby), and so on.

These opinions are not representative of the management. No programmers
were harmed in the making of this lengthy ramble. We now return you to
your regularly scheduled programming. Hah. Programming. Get it?

On Wed, Aug 09, 2006 at 03:00:51AM +0900, Francis C. wrote:

Low-grade example: I’ve been working on a component-based
web-development framework in Ruby, and most of it is DSL-driven. So far,
it’s been remarkably easy to build graphical tools for it.

I think object-oriented programming is just a very limited form of what
we’re really trying to accomplish when we do things correctly:
componentized abstraction. It’s easiest to solve complex problems when
we create abstractions for dealing with them, and abstractions
themselves are often complex problems. Thus, we build abstractions with
which to build abstractions, edging closer to the actual solution to the
problem. In practice, this pretty much boils down to creating a
language within one’s language – a task to which Lisp is particularly
suited, and to which Ruby’s design devotes significant attention as
well. A language like Ruby allows rather more to be done without having
to create such abstractions than a “pure” Lisp, however, which makes it
a lot more approachable, and it may thus be a better compromise for the
programming of our time for most problems we’re attempting to solve by
programming.

Abstractions are like levers: you use a longer lever to move a bigger
problem. Something like Lisp is an incredibly efficient lever-creator.
That’s my take, at any rate. The way this relates to Turing
completeness is in the fact that when creating abstractions specific to
your problem, you don’t need Turing completeness. Trying to implement
it within your abstraction is a waste of time. Turing completeness
might, however, be more important for the overarching framework within
which you are creating these DSLs that are your abstractions.

An important distinction should be made, as well, between Turing
completeness and Turing equivalency. A “pure” Lisp, for instance, can
be used to create a Turing complete language, but isn’t precisely a
Turing complete language itself. Similarly, a “pure” Turing machine can
be used to create a Lisp-complete language, but isn’t precisely a Lisp
itself. Thus, Lisp is Turing equivalent, and a Turing machine is Lisp
equivalent, but neither is “complete” within the realm of the other.
Without being a computer science PhD that knows all, I flatter myself to
think I understand the relationship between the two: each is just a
different way of approaching the same problem. The Turing way of doing
it is like trying to ensure you have all the tools you need to solve any
problem with relative ease, while the Lisp way of doing things is more
like trying to ensure you have all the tools you need to create tools
that provide optimal ease of accomplishing the task at hand. Please
excuse the flawed oversimplification.

On Wed, Aug 09, 2006 at 02:47:12AM +0900, Francis C. wrote:

I’m not an FP partisan myself so I won’t argue with you, but I have a
feeling some Lisp and OCaml fanboys are firing up their blowtorches for
you :wink:

I don’t know what Steven meant, but I know what I’d mean if I said that:

Turing machines and Lisp machines are equivalently fundamental. After
all, I can work with mathematical functions with pencil and paper, too.

gabriele renzi wrote:

Francis C. wrote:
notice that to prove some properties of a program you don’t necessarily
need a functional language.
For example, you may love to take a look at Spark[1], a statically
verifiable subset of Ada, or at Vault[2].

I’m not surprised that someone would mention Diana, the Ada subset with
a formal semantics. People have been working on things like this for
decades, and all of the work is interesting. However, I’m more
interested in the economic value of high-quality software. And the
economic dimensions include such things as widespread availability of
reasonably-priced talent, high productivity, good stability in
production, etc. I think that because of Rails, Ruby may be poised to
break out into the programming mainstream, which is interesting because
of Ruby’s underappreciated potential to fundamentally improve the
dynamics of software delivery.

Best wishes, Gabriele, and thank you for writing :wink:

Francis C. wrote:

I’ve always been fascinated by one specific aspect of functional
languages: in theory, it’s possible to use tools to prove programs
correct. What a productivity win that would be.

notice that to prove some properties of a program you don’t necessarily
need a functional language.
For example, you may love to take a look at Spark[1], a statically
verifiable subset of Ada, or at Vault[2].

Other languages can prove correctness for “some” functionalities.
I had the chance to work with NesC[3], which is a nifty language for
embedded systems, basically C+concurrency+components, where the compiler
checks that there are no race conditions in concurrent code (it gets
some false positives, but no wrong code passes).

[1] http://www.praxis-his.com/sparkada/intro.asp
[2] http://research.microsoft.com/vault/intro_page.htm
[3] http://nescc.sourceforge.net/

Chad P. wrote:

On Wed, Aug 09, 2006 at 02:47:12AM +0900, Francis C. wrote:

I don’t know what Steven meant, but I know what I’d mean if I said
that:

Turing machines and Lisp machines are equivalently fundamental. After
all, I can work with mathematical functions with pencil and paper, too.

You may be right, but my guess is that Steve was trying to say something
like this:

It’s fundamentally more natural for people to approach many computations
by imagining a working set of discrete values and constructing recipes
for manipulating them in discrete ways. It’s fundamentally less natural
to construct what amounts to a mathematical field to do the same thing.
A perhaps poor analogy: most people have an easier time with arithmetic
on natural numbers than they do with group theory (even though they are
mathematically unified).

The contentious word in that paragraph is “natural,” and it gets to what
I care about the most (writing all the software that needs to get
written with measurably high quality at a reasonable cost in dollars and
time). Is the Turing approach more “natural” or just more “familiar”
than the Church approach (even though they are mathematically unified)?

Francis C. wrote:

The simple fact that people use
TDD effectively in the agile world shows that there is something to
this: you’re writing code-objects that are designed to be testable.
Well, I’d like to discover what are the best practices for writing
code-objects that are designed to be distributable and composable.

Well, the “big deal” these days is Robin Milner’s “Pi Calculus”, which
is an extension of his Calculus of Communicating Systems (CCS) to
include “mobile” processes. It’s essentially a process algebra. As you
might expect, it imposes some heavy declaration burdens on the
programmer in return for being able to prove properties about one’s
code. Still, it’s getting a lot of hype – i.e., companies are being
started to “productize” it, etc.

Francis C. wrote:

It’s fundamentally more natural for people to approach many computations
by imagining a working set of discrete values and constructing recipes
for manipulating them in discrete ways. It’s fundamentally less natural
to construct what amounts to a mathematical field to do the same thing.
A perhaps poor analogy: most people have an easier time with arithmetic
on natural numbers than they do with group theory (even though they are
mathematically unified).

Well … actually I think the “new math” experiment aka disaster showed
the contrary. It was just as easy for grade school children to learn
set theory, binary arithmetic, symbolic logic, group theory, etc., as
it was for them to learn addition, subtraction, multiplication,
division, etc. But it was all useless when they got into the real world
and couldn’t balance a checkbook. :slight_smile:

The contentious word in that paragraph is “natural,” and it gets to what
I care about the most (writing all the software that needs to get
written with measurably high quality at a reasonable cost in dollars and
time). Is the Turing approach more “natural” or just more “familiar”
than the Church approach (even though they are mathematically unified)?

Well … first of all, let’s look at why there are computers at all. I
won’t expound on it here, since I posted my opinion at
http://borasky-research.blogspot.com/2006/06/why-are-there-computers-attempt-to-be.html.
But given that posting, I think the reason we followed the Turing and
Von Neumann path rather than the Church/Curry/etc. path is simply that
Turing and Von Neumann were both instrumental in applying computer
technology to winning World War II, and Church wasn’t. (At least as far
as I know, Church wasn’t … if that’s incorrect, please let me know.)

To be a little more accurate, most of the things in computer technology
we remember Von Neumann for occurred after WW II and so properly
belong to the early days of the Cold War. Moving on, when it comes to
“writing all the software that needs to get written”, there just aren’t
any silver bullets, free lunches, etc. There aren’t any “global optima”
and I’m not sure there are even local optima. In short, we satisfice.
The IAS machines were “good enough”, the Cray 1 was “good enough”, Lisp
1.5 was “good enough”, (holding nose) FORTRAN II was “good enough”, and
Ruby 1.8 is “good enough.” Ruby 2.0 will be better than 1.8 but it will
still be “good enough.”

gabriele renzi wrote:

notice that to prove some properties of a program you don’t
necessarily need a functional language.
For example, you may love to take a look at Spark[1], a statically
verifiable subset of Ada, or at Vault[2].

IIRC the original exercise in proving a program correct was in fact
not for a functional language, but for a flowchart. Was it Floyd??

Other languages can prove correctness for “some” functionalities.
I had the chance to work with NesC[3], which is a nifty language for
embedded systems, basically C+concurrency+components, where the
compiler checks that there are no race conditions in concurrent code
(it gets some false positives, but no wrong code passes).

I think my philosophy boils down to this:

  1. A false positive – an indication from the tool that a program
    contains a race when it in fact does not – is an indication that the
    program is too difficult to read and should be rewritten so it generates
    a true negative.
  2. By the same token, failure of the verifying tool to prove correctness
    of a program in an economically viable amount of time should also be
    taken as an indication that the program is too difficult to read and
    should be rewritten.

Steven L. wrote:

pure functional programming is stylistically so challenging in
practice, whether or not you believe it’s challenging in theory.

Turing machines derive from a model of a human working with pencil and
paper. That’s pretty fundamental.

First of all, while a Turing machine is a great “experimental animal”,
our “practical computing machines” are Von Neumann machines rather than
Turing machines. And Von Neumann machines derive from a model of a human
working with a mechanical desk calculator. In fact, the people – rooms
full of people – who operated them were called “computers”. Properly
speaking, a Von Neumann machine is an “electronic” computer or
“automatic” computer – a machine doing what people used to do.

Second, I don’t think the actual unification of Church and Turing models
occurred long before digital computers existed. The basic Turing and
Gödel and Church papers were written in the early 1930s, and by the late
1930s there were working relay-based digital computers. Vacuum tube
machines started showing up (in public, anyhow) in the early 1950s and
in “war rooms” shortly after the end of WW II.

Chad P. wrote:

(Larry Wall has, himself, said he doesn’t
know all of Perl).

At one point, I pretty much knew “all of Lisp 1.5”. When I had an
opportunity to do some (Macintosh) Common Lisp programming two summers
ago, I discovered that, although a lot of “new stuff” had been tacked
on, Lisp 1.5 still works!! I don’t think K&R C “still works”, I don’t
think FORTRAN I “still works”, although FORTRAN IV probably does. And I
know one piece of Perl 4 that stopped working somewhere in the 5.6 –
5.8 range – byte semantics. :slight_smile: The part of Perl that Larry Wall does
know may not “still work”. :slight_smile:

Lisp, in my understanding, achieves greater flexibility than Perl by
making it absurdly easy to arbitrarily generate the defaults that best
suit the task at hand. Ruby, meanwhile, is a great compromise between
all three of those approaches: there are default abstractions that are
clearly defined, and it encourages the programmer to use them for
solving problems, but it makes it possible to use a plethora of other
abstractions that are nondefault and still available to the programmer.
When that’s not enough, it provides macros for building your own
defaults from “scratch” with notable ease.

With all of the rantings of the Lispniks about how wonderful Lisp macros
are, I’ve actually never seen what Rubyists refer to as “macros”. Where
could I find that – Ruby constructs called “macros” by Ruby
programmers?

On Wed, Aug 09, 2006 at 01:56:14PM +0900, M. Edward (Ed) Borasky wrote:

With all of the rantings of the Lispniks about how wonderful Lisp macros
are, I’ve actually never seen what Rubyists refer to as “macros”. Where
could I find that – Ruby constructs called “macros” by Ruby programmers?

I’m no expert in the subject (haven’t been so well-blessed as to be
working with a language that provides the mechanisms of proper macros
for most of my programming experience), but my impression is that what
differentiates a “proper macro” from . . . other stuff . . . is the fact
that with Lisp you can treat a collection of characters in your program
as a string, an array-like list, an atomic Thing, or code, all without
having to do any special gymnastics. In other words, there’s no
differentiation between code and data, as I understand it.

I at least know you can do such stuff, whether or not that’s actually
the defining characteristic of Lisp macros – but it sure seems like a
good point of differentiation from “other stuff”. I suppose Ruby’s
macros must be similarly differentiated from other Ruby code.
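
From what I’ve seen, what Ruby people usually mean by a “macro” is a
class-level method that writes ordinary methods at class-definition
time – attr_accessor, Rails’ has_many, and so on. A toy re-creation of
attr_accessor shows the shape of it (just a sketch):

```ruby
# A toy re-creation of attr_accessor: a class-level method that runs at
# class-definition time and writes ordinary instance methods. This is
# the kind of thing Rubyists tend to call a "macro".
module MyMacros
  def my_attr_accessor(*names)
    names.each do |name|
      define_method(name) { instance_variable_get("@#{name}") }
      define_method("#{name}=") { |value| instance_variable_set("@#{name}", value) }
    end
  end
end

class Point
  extend MyMacros
  my_attr_accessor :x, :y   # the "macro call": generates x, x=, y, y=
end

p = Point.new
p.x = 3
p.y = 4
puts p.x + p.y   # => 7
```

It’s still not a Lisp macro in the code-is-data sense – the class is
manipulating its own object model at runtime rather than transforming
source text – but it scratches much of the same itch.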

On Wed, 2006-08-09 at 13:41 +0900, M. Edward (Ed) Borasky wrote:

By the way, one of the classic “demonstrations of Artificial
Intelligence”, ELIZA, was written not in Lisp but in a list-processing
language patterned on Lisp but implemented in (pregnant pause) FORTRAN.
But the SLIP interpreter, rather than being “sufficiently complicated”
was in fact “delightfully simple.”

Way OT here, but Joseph Weizenbaum (who wrote ELIZA) also wrote an
excellent book “Computer Power and Human Reason”. Here’s a review:

http://www.ischool.utexas.edu/~palmquis/courses/reviews/amy.htm

Definitely worth picking up if you can find a copy somewhere…

Yours,

Tom

Chad P. wrote:

perception that has grown from “we did it first” syndrome. I think it
boils down to the fact of Greenspun’s Tenth Rule:

Greenspun’s Tenth Rule of Programming: “Any sufficiently complicated C
or Fortran program contains an ad hoc, informally-specified, bug-ridden,
slow implementation of half of Common Lisp.”

Well … I think I disagree with this, having written some moderately
complicated FORTRAN programs and having read through many more. What
they all almost always contain, however, is a domain-specific language
that controls where they find their inputs, what processing they do on
them, where they put their outputs and in what formats, etc., on what we
used to call “control cards”. Some of the parsers and DSLs of this
nature are quite complex and rich, but they are in no sense of the word
“Lisp”.

Curiously enough, if you read Chuck Moore’s description of the evolution
of FORTH, you’ll find something similar – every sufficiently
complicated FORTRAN program contained a huge “case” statement which was
a “language interpreter”.
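
In Ruby terms, the kind of thing Moore was describing is no more than
this (a toy sketch, not any real control-card format):

```ruby
# A toy "control card" interpreter: one big case statement dispatching
# on the command word of each line. The card names here are made up.
def run_control_cards(cards)
  state = { input: nil, output: nil, options: [] }
  cards.each_line do |card|
    word, *args = card.split
    case word
    when 'INPUT'  then state[:input]  = args.first
    when 'OUTPUT' then state[:output] = args.first
    when 'OPTION' then state[:options].concat(args)
    when 'RUN'    then puts "process #{state[:input]} -> #{state[:output]} #{state[:options].inspect}"
    when nil      then next              # blank card
    else warn "unknown card: #{word}"
    end
  end
end

run_control_cards(<<~CARDS)
  INPUT  payroll.dat
  OPTION SORT SUMMARY
  OUTPUT report.txt
  RUN
CARDS
```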

By the way, one of the classic “demonstrations of Artificial
Intelligence”, ELIZA, was written not in Lisp but in a list-processing
language patterned on Lisp but implemented in (pregnant pause) FORTRAN.
But the SLIP interpreter, rather than being “sufficiently complicated”
was in fact “delightfully simple.”

M. Edward (Ed) Borasky wrote:

First of all, while a Turing machine is a great “experimental animal”,
our “practical computing machines” are Von Neumann machines rather than
Turing machines. And Von Neumann machines derive from a model of a human
working with a mechanical desk calculator. In fact, the people – rooms
full of people – who operated them were called “computers”. Properly
speaking, a Von Neumann machine is an “electronic” computer or
“automatic” computer – a machine doing what people used to do.

Second, I don’t think the actual unification of Church and Turing models
occurred long before digital computers existed. The basic Turing and
Gödel and Church papers were written in the early 1930s, and by the late
1930s there were working relay-based digital computers. Vacuum tube
machines started showing up (in public, anyhow) in the early 1950s and
in “war rooms” shortly after the end of WW II.

Ed, I’ll certainly defer to the memory of a guy who was there ;-). I
wasn’t, but here’s my recollection of the history: You’re absolutely
right about the big stuff being done by Turing and Church in the early
30s (and don’t forget Gödel’s papers published in the middle of the
decade, although he was addressing different questions). Turing came to
Princeton on sabbatical for the 1937-38 academic year, and there he and
Church worked together, and they successfully unified Church’s lambda
calculus with Turing’s tape-based universal machine that
winter.

By 1948, von Neumann had suggested what became known as the “von Neumann
architecture” (the stored-program digital computer), which is the model for
all of the “computers” we recognize as such today. The ten-year interval
between 1938 and 1948 is what I had in mind when I said “long before.”

You’re also right about the early impetus for mechanized calculation. It
was ballistic equations for artillery, and also the hydrodynamics of
chemical explosives for plutonium-based atomic bombs. The various
machines that were invented to solve these problems basically had
programs hard-wired into them, with solder. von Neumann’s suggestion was
to store what later came to be called “software” directly into the “memory
cells” of the machines, and in so doing he invented the model that made
computers valuable for general use.

But note how completely reminiscent both models (the “ENIAC” style and
the “stored-program” style) are of classic Turing machines! Computer memory is
just a tessellation structure, and a computer program (even a
hard-soldered one) is a linear sequence of values corresponding
precisely to Turing’s “tape.”
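
To make the correspondence concrete, here’s a toy Turing machine in Ruby
(purely illustrative): the “tape” is an ordinary array of cells, the
“program” is a table of transitions, and the whole computation is just
reads and writes at an index.

```ruby
# A toy Turing machine: the "tape" is an array of cells, the "program"
# a table mapping (state, symbol) to (symbol to write, direction, next state).
def run_tm(tape, rules, state = :scan, head = 0)
  until state == :halt
    write, move, state = rules.fetch([state, tape[head]])
    tape[head] = write
    head += (move == :right ? 1 : -1)
  end
  tape
end

# This machine just flips 0s and 1s until it runs off the end of the tape.
rules = {
  [:scan, 0]   => [1,   :right, :scan],
  [:scan, 1]   => [0,   :right, :scan],
  [:scan, nil] => [nil, :right, :halt]   # blank cell: stop
}

p run_tm([0, 1, 1, 0], rules)   # => [1, 0, 0, 1, nil]
```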

The fact is that a Turing machine is described with components that not
only have direct physical analogs (his 29-state tape can easily be
modeled with five of what we now call “bits”, and you will please pardon
the pun on “analogs”), but these physical analogs (electronic circuits)
turned out to be both easy to make and blazingly fast, once the
transistor came into wide use in the mid-Fifties. None of this is true
of a logical computing machine based on lambda-calculus. And that,
logically enough, is the reason why for the very longest time, languages
like Lisp suffered from horrible performance and incredible memory
requirements. And these problems have only recently been solved, but
only by extreme cleverness on the part of compiler writers.

Whereas almost 50 years ago, John Backus and his crew already had
Fortran compilers that could execute numerical methods at a pretty high
fraction of the available machine speed. (Interestingly, according to
Backus’ notes, the first Fortran compiler took 18 man-years to build,
and most of that time went into hand-writing the front-end, a job that
became automated and near-trivial by the mid-Seventies.)

When I was a kid, I spent all my allowances buying TTL gates and
hand-wiring digital logic. Then when I first got my hands on a computer,
the process of learning machine language and then assembler was
near-instantaneous, because I knew exactly what those instructions were
doing physically in the hardware. Memory-addressing and pointers made
complete sense from the first moment. I was shocked when I first
encountered C, years later, that it was possible to program computers at
such a high level! My point is that putting values into memory cells,
moving them around, and adding and subtracting them is not only the
easiest known way to build a useful (Turing-complete) computer, but also
may be the easiest way for most people to model computations. As an
economic proposition, then, regardless of all this theory and history,
I’m no longer interested in FP for mainstream use. Ruby, on the other
hand, is a step in the right direction.

Aside: I used to think the fad that swept our university CS departments
10 years or so ago of teaching only Scheme was extraordinarily wasteful
and damaging. They thought that teaching people how to understand
computation using the “more-pure” functional style was preferable to
just teaching people how to program. Well, compared to what they
replaced it with (algorithm cookbooks in Java), it may not have been so
bad.

But now that I’ve declared Lisp to be a fascinating part of computing’s
past, the question is: what lies ahead? It
would be charitable to compare our current computer technology, and the
applications we put it to, to the model T and the candlestick telephone.
The more interesting challenge is to go beyond the solved problem of
Turing-complete computations, and figure out how to linguistically and
metaphorically express the large-scale information-driven processes that
humans work with every day. Someone upthread started hinting at that.
There are already some beautiful but flawed examples of this new
thinking (Erlang for one, the original conception of Smalltalk for
another), but it’s still very, very early.