On 8/10/06, Steven L. [email protected] wrote:
I guess I'm probably too late to stop the flamewar, but I was merely
answering your question and not advocating for one side or the other.
I think FP is great! And to prove it, I'll quote someone who had a
different experience from yours.
Ok, so how about this for a hypothesis?
The thing that makes FP wonderful is the fact that everything is a
function. Meaning, everything is a mapping between one single domain
element (which can be a tuple) and one single functional result (which
can be a tuple). There are no side effects, and lists are the primary
way of thinking of sets. All kinds of nice things fall out of this, like
a natural approach to concurrency ("futures" and "promises"), and much
more graceful debugging.
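
To make that concrete, here's a rough Haskell sketch I threw together
(my own toy example, not anything from the thread; a real futures library
would do this better): a pure function plus a hand-rolled future built
from forkIO and an MVar. Because the function has no side effects, it
doesn't matter which thread evaluates it or when.

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, takeMVar)

-- A pure function: one domain element (here a tuple) in, one result out,
-- no side effects anywhere.
price :: (Int, Double) -> Double
price (qty, unitCost) = fromIntegral qty * unitCost

-- A hand-rolled "future": start evaluating a value on another thread and
-- hand back the box ("promise") it will eventually be put in.
future :: a -> IO (MVar a)
future x = do
  box <- newEmptyMVar
  _ <- forkIO (putMVar box $! x)   -- force the value on the worker thread
  return box

main :: IO ()
main = do
  f <- future (price (3, 9.99))    -- fire off the computation
  -- ... the caller is free to do other work here ...
  result <- takeMVar f             -- block only when the value is needed
  print result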
On the other hand, the FP approach breaks down fast when programs get
big, because the hardware we currently know how to build isn't a good
match for handling functions; it's a good match for handling values.
(Please spare me the inevitable "we built a four-million-line Lisp
program that's running our company and it's been in production for 25
years" story: those are the exceptions that prove the rule.)
But what else can you think of that presents an analogy to FP's virtues?
What springs to my mind are distributed systems. Back in the Nineties we
all saw how miserably the object-oriented approach fails when you try to
scale it across process spaces. But systems based on idempotent
message-passing scale beautifully and easily (think of Erlang), because
all of the functionality you can access in the network behaves like
functions, not like objects with state.
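
As a toy illustration of what I mean by idempotent message-passing (again
my own sketch, in Haskell rather than Erlang, with made-up names): each
message carries its own identity, so the handler is a pure function of
(state, message), and delivering a message twice changes nothing.

import qualified Data.Map.Strict as Map

-- Each message carries its own identity, so the handler behaves like a
-- function of (state, message) rather than an object that mutates on
-- every call.
data Request = Request { reqId :: Int, amount :: Double }

type Ledger = Map.Map Int Double   -- request id -> recorded amount

-- Idempotent handler: replaying a message leaves the ledger unchanged.
handle :: Ledger -> Request -> Ledger
handle ledger (Request rid amt)
  | rid `Map.member` ledger = ledger            -- duplicate delivery: no-op
  | otherwise               = Map.insert rid amt ledger

main :: IO ()
main = do
  let msgs   = [Request 1 10.0, Request 2 5.0, Request 1 10.0]  -- #1 retried
      ledger = foldl handle Map.empty msgs
  print ledger   -- fromList [(1,10.0),(2,5.0)]

Because a retry is a no-op, you can fan messages out across machines and
re-send on failure without any coordination.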
So here's my hypothesis: find a radically different discipline for
building large systems, in which "components" are developed using non-FP
languages like Java and Ruby and even C++. These components are much
coarser-grained than the ones you normally think of from
object-orientation. They are more like relatively large subsystems, but
self-contained and fully unit-testable. The idea is that the upper size
limit for these functional blocks is correlated with the point at which
(speaking informally) they become "too hard" to debug.
Now take these components, which have side effects like writing to
databases, sending purchase orders, and so on, and knit them together
into larger systems using a pure functional language. This captures the
intuition that no program should ever get too "large" in terms of
function points, but also the distinction between functional components
(which are most easily written imperatively) and large-scale process
definitions (which are most easily written functionally).
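
Here's a rough sketch of that knitting-together, once more in Haskell and
with hypothetical names: the coarse-grained components do their side
effects (database write, purchase order) behind simple function-shaped
interfaces, and the large-scale process is nothing but their composition.

-- Hypothetical stand-ins for coarse-grained, side-effecting components;
-- in practice they might be written imperatively in Java, Ruby, or C++
-- and reached over the network.
newtype Order        = Order String
newtype OrderId      = OrderId Int
newtype Confirmation = Confirmation String deriving (Show)

writeOrderToDb :: Order -> IO OrderId
writeOrderToDb (Order desc) = do
  putStrLn ("db: stored " ++ desc)   -- the side effect stays inside the component
  return (OrderId 42)

sendPurchaseOrder :: OrderId -> IO Confirmation
sendPurchaseOrder (OrderId n) = do
  putStrLn ("po: sent for order " ++ show n)
  return (Confirmation ("PO-" ++ show n))

-- The large-scale process definition is plain composition: the glue stays
-- functional in shape even though each piece does I/O.
placeOrder :: Order -> IO Confirmation
placeOrder order = writeOrderToDb order >>= sendPurchaseOrder

main :: IO ()
main = placeOrder (Order "3 widgets") >>= print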
So the kneejerk response is: "that's what REST is for, stupid." Yes,
that's a great starting point. But I want to take a step beyond, to a
pure FP language that can express complex interactions among components
without requiring complex synchronization. See the world as functions
rather than as resources.
Now that's a pretty half-assed and informal proposal, but give it some
thought before you flame back. What could be wrong with it? Well, it
might turn out that large-scale processes don't work well when modeled
as functions; they might actually need persistent state in order to be
meaningful. But maybe not.