The parens in Lisp dialects are there for a reason... macros

another), but it’s still very, very early.
Avast ye landlubbers, it is not early, it is too late! We are caught in
the Straits of Lots of Microprocessor Real Estate and are doomed! To port
lies the Land of Threads, where many a ship will wreck for indeterminate
reasons (
http://www.computer.org/portal/site/computer/menuitem.5d61c1d591162e4b0ef1bd108bcd45f3/index.jsp?path=computer/homepage/0506&file=cover.xml&xsl=article.xsl).
To starboard lies the Land of Oz, where multiparadigm core languages (
http://www.info.ucl.ac.be/~pvr/book.html) may save us. Ahead, if we make
it, is plain old Ruby running downwind, powered by a single, immensely
fast core named Deep Thought. Now, when Deep Thought and Ruby calculate
the answer (which everyone knows is 42) to the ultimate question, the
ultimate answer will be that, since the ultimate question is (as everyone
knows) “What is 6 x 9?”, we must redesign Deep Thought (are you following
this?) to reach the Singularity (The Singularity Is Near - Wikipedia).
Don’t know if this area is inhabited by Daleks or the Borg, but I think
I’ll take the Land of Oz. Hard to starboard, matey!!

DavyB

On 8/9/06, Francis C. [email protected] wrote:

near-instantaneous, because I knew exactly what those instructions were

If the “natural” way of modeling computation is that of moving around
bits in memory and doing trivial operations on them, we wouldn’t be
building all these kinds of fancy abstractions, now would we? Also,
Ruby is full of higher order functions, so saying FP is not for
mainstream use but Ruby is a step in the right direction seems a bit
odd.

Aside: I used to think the fad that swept our university CS departments
10 years or so ago of teaching only Scheme was extraordinarily wasteful
and damaging. They thought that teaching people how to understand
computation using the “more-pure” functional style was preferable to
just teaching people how to program. Well, compared to what they
replaced it with (algorithm cookbooks in Java), it may not have been so
bad.

Only teaching one language is probably not the way to go, but I see
Scheme as an excellent beginner language. It contains a simple core
and yet a bunch of powerful abstractions that are useful to learn. It
doesn’t lock someone into a single mindset. Java, on the other hand,
is probably making it more difficult for people to grasp the essence
of programming, by making them think that anything that doesn’t have a
hundred classes is a failure.


There’s always room for improvement. Should people not have invented
the clock because someone had solved the problem with an hourglass or
whatever?

On 8/9/06, Simen E. [email protected] wrote:

If the “natural” way of modeling computation is that of moving around
bits in memory and doing trivial operations on them, we wouldn’t be
building all these kinds of fancy abstractions, now would we? Also,
Ruby is full of higher order functions, so saying FP is not for
mainstream use but Ruby is a step in the right direction seems a bit
odd.

Give that a bit more thought. Most of the “fancy abstractions” are
attempts to map aspects of particular problem domains into language
constructs. And what are the typical things we do with our fancy
abstractions? They are mostly about transport and communications. We
create purchase orders and store them in databases, then we email them
to vendors. We make accounting calculations and then generate reports.
We send email and instant messages to each other. These operations are
undertaken primarily for the side effects they generate (or
alternatively, the persistent changes they make to the computational
environment). Constructing mathematical fields to solve these
problems rather than scripting them imperatively is arguably a less
natural approach. True functional languages have no side effects at
all; languages that do have them (SML, for example) are considered hybrids.

Now you and many others may argue with much justification that true FP
is just as valid for constructing nontrivial programs and in many ways
better. Fair enough. But in the real world, I’ve become convinced that
more programmers will do “better” work (where “better” is defined in
economic terms as high quality at low cost) with imperative languages.
And the reasons why are certainly of interest, but only of theoretical
interest.

Ruby’s “higher order functions”: aren’t you confusing the true
functional style with the simple ability of Ruby (and many other
imperative languages) to treat functions and closures as objects?
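That distinction is easy to make concrete. A minimal sketch in plain Ruby
(nothing here beyond the standard language): closures and methods are
reified as objects that wrap executable code, and they freely produce side
effects, which is exactly why they aren’t mathematical functions.

    # A closure, captured as a Proc object; note the side effect.
    counter = 0
    bump = proc { counter += 1 }   # closes over the local `counter`
    bump.call
    bump.call
    puts counter                   # => 2 (the closure mutated captured state)

    # A bound method, also reified as an ordinary object.
    shout = "hello".method(:upcase)
    puts shout.call                # => HELLO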

On 8/9/06, Francis C. [email protected] wrote:

constructs. And what are the typical things we do with our fancy

Understanding why something works and how it works and how we might
make it work more easily seems worth making a science about. For the
record, I find adding complexity to satisfy mathematics at best an
academic exercise.

Now you and many others may argue with much justification that true FP
is just as valid for constructing nontrivial programs and in many ways
better. Fair enough. But in the real world, I’ve become convinced that
more programmers will do “better” work (where “better” is defined in
economic terms as high quality at low cost) with imperative languages.
And the reasons why are certainly of interest, but only of theoretical
interest.

Pure FP seems to complicate making complex systems. I’ve been trying
to learn Haskell, but lifting values in and out of layers of monads to
achieve simple state seems needlessly complicated. I believe Clean has
an alternative solution to state in a pure functional language – I
don’t know if it’s any better. Anyway, my point is that non-pure FP
languages produce shorter and more readable – more “pure” and
“elegant” if you like – solutions to many problems. I’m no hardcore
FP advocate. There is no one solution that fits all problems.
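The Haskell complaint is hard to show without Haskell itself, but the
underlying contrast (threading state explicitly through pure functions
versus simply mutating a variable) can be loosely sketched in Ruby; this
is only an analogy to the flavor of pure state handling, not the State
monad, and `deposit` is a made-up example function:

    # Pure style: state goes in as an argument and comes out as a result;
    # nothing is ever mutated in place.
    def deposit(balance, amount)
      balance + amount             # returns a new balance
    end

    balance = deposit(deposit(0, 100), 50)
    puts balance                   # => 150

    # Imperative style: the same bookkeeping with a mutable variable.
    balance = 0
    balance += 100
    balance += 50
    puts balance                   # => 150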

Ruby’s “higher order functions”: aren’t you confusing the true
functional style with the simple ability of Ruby (and many other
imperative languages) to treat functions and closures as objects?

True, they’re not really mathematical functions, but how are map, each,
inject, etc. not higher-order functions? Ruby favors functions (well,
methods) that take blocks (a.k.a. closures). The definition of
higher-order function from Wikipedia:

In mathematics and computer science, higher-order functions or
functionals are functions which do at least one of the following:

* take one or more functions as an input
* output a function

According to this definition, Ruby’s standard library is filled with
higher order functions, as are the libraries of Ocaml, Scheme etc.
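Both clauses of that definition are easy to demonstrate in a few lines of
Ruby. `compose` below is a hypothetical helper written for illustration,
not part of the standard library:

    # Takes two functions as input and returns a new function:
    # higher-order on both counts of the Wikipedia definition.
    def compose(f, g)
      lambda { |x| f.call(g.call(x)) }
    end

    double = lambda { |x| x * 2 }
    incr   = lambda { |x| x + 1 }

    puts compose(incr, double).call(20)          # => 41

    # And the standard library versions: map and inject take blocks.
    puts [1, 2, 3].map { |x| x * 2 }.inject(:+)  # => 12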

On Wed, 9 Aug 2006, Francis C. wrote:

functional languages don’t expose any side effects at all. […] And that
reinforces all the more my intuition that pure FP is not well-suited for
mainstream professional programming.

Years ago, I spent a lot of time trying to teach professional
programmers to grasp the functional style. And most people couldn’t even
let go of the concept of variables as little boxes holding readable and
writable values. That’s when I gave up on FP.

interesting. but then you have ocaml, which is a bastard of oop/fp and
imperative features - what’s wrong there?

cheers.

-a

Simen E. wrote:

In mathematics and computer science, higher-order functions or
functionals are functions which do at least one of the following:

* take one or more functions as an input
* output a function

According to this definition, Ruby’s standard library is filled with
higher order functions, as are the libraries of Ocaml, Scheme etc.

This comment is extremely interesting. It is actually not the case
that Ruby’s “functions” are functions in the mathematical sense. They
are actually objects that point to blocks of executable code or to
closures. They have the ability to execute instructions that modify the
contents of memory or produce other side effects so as to make visible
the results of operations that simulate mathematical functions.

But true functional languages directly expose composable mathematical
functions (mappings from possibly infinite domains to ranges). Pure
functional languages don’t expose any side effects at all. They are
extremely effective at modeling computations (I once wrote a complete,
correct qsort in 7 lines of ML) but not very good at executing
communications or data transport tasks, which are all about side
effects.
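The seven lines of ML aren’t reproduced in the thread, but a rough Ruby
equivalent of such a side-effect-free quicksort, as a sketch of what
“modeling a computation” looks like, would be:

    # A purely functional quicksort: no in-place mutation, just new lists.
    def qsort(xs)
      return [] if xs.empty?
      pivot, *rest = xs
      qsort(rest.select { |x| x < pivot }) +
        [pivot] +
        qsort(rest.select { |x| x >= pivot })
    end

    puts qsort([3, 1, 4, 1, 5, 9, 2, 6]).inspect  # => [1, 1, 2, 3, 4, 5, 6, 9]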

I’ve often been mystified when people talk about Ruby’s metaprogramming,
closures, lambdas, etc. as constituting a “functional” programming
style. Your comment is striking because it made me realize that perhaps
many people who learn languages like Scheme are actually trying to apply
the imperative, Turing-style computational model to languages that don’t
support it. And that reinforces all the more my intuition that pure FP
is not well-suited for mainstream professional programming.

Years ago, I spent a lot of time trying to teach professional
programmers to grasp the functional style. And most people couldn’t even
let go of the concept of variables as little boxes holding readable and
writable values. That’s when I gave up on FP.

Francis C. [email protected] writes:

And my (unproved) intuition is that
mathematically restricted DSLs (whether they are Lisp programs, Ruby
programs, graphical widgets, physical constructions, or any other such
equivalent thing) may be so much easier to tool that they can give a
vast productivity improvement.

I believe that there are certainly programming constructs that can be
easier to reason about than the standard type of programming, but I’m
not sure how ease-of-reasoning relates to Turing completeness. For
instance, Haskell is certainly Turing-complete as a language, yet
chunks of Haskell code seem like they should be very easy to reason
about. On the other hand, to take a pathological example, Befunge-93
is not Turing-complete, but I doubt it’s any easier to reason about
than, say, Brainfuck, which is Turing complete.

I have a theory that a big advantage to tool makers is the ability to
perform some sort of detailed type inference, where the types that can
be inferred are useful in the problem domain. (e.g. it does no good
to be able to infer a type of “int”, when what you really need to know
is which of twenty different types all typedef’ed to int this is
supposed to be) So far as I can tell, the ease of useful type
inference is at least a different question from whether the language
being considered is Turing complete, and may in fact be completely
orthogonal to the question of Turing completeness.

On 8/9/06, Francis C. [email protected] wrote:

This comment is extremely interesting. It is actually not the case
that Ruby’s “functions” are functions in the mathematical sense. They
are actually objects that point to blocks of executable code or to
closures. They have the ability to execute instructions that modify the
contents of memory or produce other side effects so as to make visible
the results of operations that simulate mathematical functions.

Yes, in the post you quoted I said that Ruby functions are not true
mathematical functions. My point still holds: functions (or methods, or
whatever you’d like to call them) that operate on closures, regardless
of whether the closures are themselves purely functional, are a very
functional idea. Whether you call it a “higher-order method” or
something else, it’s still a functional idea.

But true functional languages directly expose composable mathematical
functions (mappings from possibly infinite domains to ranges). Pure
functional languages don’t expose any side effects at all. They are
extremely effective at modeling computations (I once wrote a complete,
correct qsort in 7 lines of ML) but not very good at executing
communications or data transport tasks, which are all about side
effects.

What about Erlang? It’s not purely functional, but it certainly favors
a functional style and is used extensively at tasks that seem exactly
like the ones you describe.

I’ve often been mystified when people talk about Ruby’s metaprogramming,
closures, lambdas, etc. as constituting a “functional” programming
style. Your comment is striking because it made me realize that perhaps
many people who learn languages like Scheme are actually trying to apply
the imperative, Turing-style computational model to languages that don’t
support it. And that reinforces all the more my intuition that pure FP
is not well-suited for mainstream professional programming.

Perhaps not pure FP. I can agree with you on that. But a functional
style and ideas like closures, higher-order functions, pattern
matching etc. are all good tools which the mainstream could benefit
greatly from.

Years ago, I spent a lot of time trying to teach professional
programmers to grasp the functional style. And most people couldn’t even
let go of the concept of variables as little boxes holding readable and
writable values. That’s when I gave up on FP.



But as long as you understand FP, there’s no reason to give up on
it. Surely there are other intelligent programmers out there who
understand it too. I’m not saying we should abandon imperative
constructs altogether, as you seem to imply.

On Wed, Aug 09, 2006 at 11:57:03PM +0900, Francis C. wrote:

I’ve often been mystified when people talk about Ruby’s metaprogramming,
closures, lambdas, etc. as constituting a “functional” programming
style. Your comment is striking because it made me realize that perhaps
many people who learn languages like Scheme are actually trying to apply
the imperative, Turing-style computational model to languages that don’t
support it. And that reinforces all the more my intuition that pure FP
is not well-suited for mainstream professional programming.

  1. A functional “style” is not the same as a functional “language”.
    The adoption of certain coding techniques that arose in functional
    languages for use in a language that is more object oriented and
    imperative in syntax in no way changes the fact that those techniques
    come from a programming style that arose due in large part to a
    functional syntax.

  2. I agree that pure functional programming is not well-suited to
    mainstream professional computing, but then, neither is pure
    imperative programming. That doesn’t change the fact that a whole lot
    of functional programming syntactic characteristics can be crammed into
    a language before passing the point of diminishing returns. In fact, I
    think that something as simple as teaching students to use prefix
    arithmetic notation in grade school would turn the tables such that
    functional programming syntax would probably be by far the “most
    natural” to nascent programmers by quite a wide margin.
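As a toy illustration of that prefix notation, here is a hypothetical
sketch in Ruby, with nested arrays standing in for Lisp-style
s-expressions (`eval_prefix` is invented for this example):

    # Evaluate a prefix expression such as (+ 1 (* 2 3) 4),
    # written as nested arrays of an operator symbol followed by operands.
    def eval_prefix(expr)
      return expr unless expr.is_a?(Array)
      op, *args = expr
      args.map { |a| eval_prefix(a) }.reduce(op)
    end

    puts eval_prefix([:+, 1, [:*, 2, 3], 4])   # => 11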

On Wed, Aug 09, 2006 at 10:08:51PM +0900, Francis C. wrote:

Ruby’s “higher order functions”: aren’t you confusing the true
functional style with the simple ability of Ruby (and many other
imperative languages) to treat functions and closures as objects?

Minor tangential quibble:
A closure, in essence, is an object by definition (definition of
“object”, that is). Perhaps Jonathan Rees’ menu of OOP features would
help make my point: Object-Oriented

I think what you mean is something more like “treat closures as the type
of object specified by a language’s non-closure-centered object model”,
which is not quite the same thing.

My apologies if someone has posted this already – I just received it:

http://www.americanscientist.org/template/AssetDetail/assetid/51982

"

The Semicolon Wars

Every programmer knows there is one true programming language. A new
one every week

Brian Hayes
http://www.americanscientist.org/template/AuthorDetail/authorid/490;jsessionid=baagDBWk-NRVHf"

Chad P. wrote:

On Wed, Aug 09, 2006 at 11:57:03PM +0900, Francis C. wrote:

I’ve often been mystified when people talk about Ruby’s metaprogramming,
closures, lambdas, etc. as constituting a “functional” programming
style. Your comment is striking because it made me realize that perhaps
many people who learn languages like Scheme are actually trying to apply
the imperative, Turing-style computational model to languages that don’t
support it. And that reinforces all the more my intuition that pure FP
is not well-suited for mainstream professional programming.

  1. A functional “style” is not the same as a functional “language”.
    The adoption of certain coding techniques that arose in functional
    languages for use in a language that is more object oriented and
    imperative in syntax in no way changes the fact that those techniques
    come from a programming style that arose due in large part to a
    functional syntax.

  2. I agree that pure functional programming is not well-suited to
    mainstream professional computing, but then, neither is pure
    imperative programming. That doesn’t change the fact that a whole lot
    of functional programming syntactic characteristics can be crammed into
    a language before passing the point of diminishing returns. In fact, I
    think that something as simple as teaching students to use prefix
    arithmetic notation in grade school would turn the tables such that
    functional programming syntax would probably be by far the “most
    natural” to nascent programmers by quite a wide margin.

This thread has probably gone on long enough ;-) but as I read it, what
you’re saying here amounts simply to “let’s take what’s good from each
approach and apply it generally.” Can’t argue with that.

I find it very revealing that you see the primary (borrowable) value in
functional languages as coming primarily from syntactic elements.
Again I may be misreading you, but you can get syntactic innovations
from anywhere. The thing that really distinguishes functional programs
is that you can reason mathematically (and, one hopes, automatically)
about their correctness, rather than being stuck with the whole sorry
round of unit-test/debug/regress/repeat that we use with Ruby and every
other non-functional language. Apart from the reference to Ada/Diana (a
sub-language with a rigorous formal semantics), I haven’t heard anyone
on this thread try to relate this benefit of functional languages to
non-functional ones.

To answer someone else, the ability to automatically process programs is
why I’m interested in mini-languages (DSLs) written in Ruby that are
non-Turing complete, because they may be decidable, if designed
properly. That opens the way (in theory) to the kind of automatic
correctness analyzers that you see in pure functional programming. The
potential economic benefit is staggering. What proportion of your
programming time do you spend debugging today?
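A hypothetical sketch of such a mini-language (every name below is
invented): programs are finite lists of declarations, with no loops,
recursion, or arbitrary computation, so every program terminates and
properties reduce to simple data queries. The caveat, raised later in
the thread, is that this stays non-Turing-complete only if users refrain
from smuggling arbitrary Ruby into the block.

    # A declarative mini-language: a "program" is just a finite list of
    # steps, so analyzing it is a matter of inspecting data.
    class Pipeline
      Step = Struct.new(:name, :options)

      def self.define(&block)
        pipeline = new
        pipeline.instance_eval(&block)
        pipeline
      end

      def initialize
        @steps = []
      end

      def step(name, options = {})
        @steps << Step.new(name, options)
      end

      # A "correctness analyzer" reduced to a query over the step list:
      # does this program ever write anything?
      def writes_output?
        @steps.any? { |s| s.name == :write }
      end
    end

    flow = Pipeline.define do
      step :read,  :from => "orders.csv"
      step :write, :to   => "report.txt"
    end

    puts flow.writes_output?   # => true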

If people see the distinctions between “functional” and “non-functional”
languages purely in syntactic terms, then this whole discussion is
really no deeper than yet another Ruby vs. Python debate.

On 8/10/06, Francis C. [email protected] wrote:

This thread has probably gone on long enough ;-)

Probably.

on this thread try to relate this benefit of functional languages to
non-functional ones.

You can reason mathematically about the correctness of a pure FP
program, but not about the time and effort it took to create it. I see
productivity and maintainability as better arguments for using
functional languages.

To answer someone else, the ability to automatically process programs is
why I’m interested in mini-languages (DSLs) written in Ruby that are
non-Turing complete, because they may be decidable, if designed
properly.

Any internal DSL has access to the whole of the parent language, and
so it cannot be non-Turing-complete as long as the parent language is
Turing-complete. Restricting access to features of the parent language
to get this automatic correctness tester (or whatever) to work seems
plain silly. Besides, there’s no mathematical notation for a
programmer’s intent, and probably never will be, so a processor will
probably never be able to prove that a program does what its author
intended.

If people see the distinctions between “functional” and “non-functional”
languages purely in syntactic terms, then this whole discussion is
really no deeper than yet another Ruby vs. Python debate.

True.

Simen E. wrote:

On 8/10/06, Francis C. [email protected] wrote:

This thread has probably gone on long enough ;-)

Probably.
Well … I’ve sort of stood back the past day, except for spotting some
semi-hemi-demi-quasi-relevant web links courtesy of the ACM and
del.icio.us. :-) But the fundamental discussion here is highly
interesting, and certainly relevant if Ruby is to be a “significantly
better language” than its scripting cousins Perl, Python and PHP, or
other programming languages in general. Among the popular languages,
Ruby seems to be more “computer-science-aware” than any of the others.

As I’ve noted, I got interested in proving programs correct in the early
1970s, when I was in graduate school but not as a computer science
student. I continued that interest, collecting books on the subject and
attempting to introduce the concepts at my workplace when I became a
professional programmer again upon my escape from graduate school. I
still have most of the books, although I haven’t looked at them
recently.

I still look at the literature on this subject occasionally, although
I’m much more interested in the literature on the Markov/queuing/Petri
net/performance modeling arena of computer science now than the
“correctness” arena.

The thing that really distinguishes functional programs
is that you can reason mathematically (and, one hopes, automatically)
about their correctness, rather than being stuck with the whole sorry
round of unit-test/debug/regress/repeat that we use with Ruby and every
other non-functional language. Apart from the reference to Ada/Diana (a
sub-language with a rigorous formal semantics), I haven’t heard anyone
on this thread try to relate this benefit of functional languages to
non-functional ones.

I don’t think it’s necessarily either functional syntax or semantics
that makes this possible, but restrictions on the size of programs,
strong and static typing, restrictions on non-reversible operations like
assignment, etc. Yes, Dijkstra’s maxim, “Testing can only show the
presence of defects, never their absence,” is true. But I have seen
(tiny) FORTRAN programs proved correct, (tiny) Lisp programs proved
correct, and as I noted a post or two ago, I think the original effort
applied to a program expressed as a (small) flowchart.

and probably never will be, so a processor will probably never be able
to prove that a program does what its author intended.

Well …

  1. No programmer I’ve ever known has stood by idly while someone took
    Turing completeness away from him or her, either deliberately or
    accidentally. :-)
  2. Part of the whole “science” of proving programs correct is designing
    ways for the customer to formally specify what a program must do to be
    correct. A processor not only has to know what the programmer intended
    but what the customer wanted as well. :-)

In any event, the academic thread that appears to have the strongest
legs these days in the whole realm of proving reliability of our
software is the Pi-calculus. Most of what I’ve seen (which is the
abstract of one paper!) is in reference to Erlang. See
http://www.erlang.se/workshop/2006/. I’m not at all familiar with Erlang
the language, although I’m quite familiar with Erlang the queueing
theorist. :-) I downloaded and installed it, though.

M. Edward (Ed) Borasky wrote:

Every programmer knows there is one true programming language. A new
one every week

Brian Hayes
http://www.americanscientist.org/template/AuthorDetail/authorid/490;jsessionid=baagDBWk-NRVHf

Yet another mirroring of our discussions on languages. This one has
some good things to say about Ruby. :-)

On 8/10/06, M. Edward (Ed) Borasky [email protected] wrote:

  1. No programmer I’ve ever known has stood by idly while someone took
    Turing completeness away from him or her, either deliberately or
    accidentally. :-)
  2. Part of the whole “science” of proving programs correct is designing
    ways for the customer to formally specify what a program must do to be
    correct. A processor not only has to know what the programmer intended
    but what the customer wanted as well. :-)

Ed, the biggest problem with abstruse conversations like this is when
you get two or three guys who enable each other and keep it going ;-).

To take a step back from CS, here’s the high-level thing that really
troubles me: why is there such an impedance mismatch between the
things we need to achieve in software and the available tools? Why do
I always feel like it takes mountains of “code” to do things that can
be described without too much difficulty? I’m not saying that
genuinely complex problems should be reducible beyond what makes
sense. I’m also not underrating the difficulty of good software design
or the difficulty of communicating well with the clients. But I’d
rather that most of the effort go into those areas. As things stand,
I have to solve a lot of computer-language-generated problems that do
not contribute value to the solution. It’s like our machines have too
much friction.

I would gladly give up Turing-completeness in order to get a
(domain-specific) language that was more expressive (less
“frictional”) in terms of the problems I need to solve. I’ve been
trying to present an intuition (which I don’t know how to prove or
disprove) that the practical power required to solve circumscribed
problems does not come from Turing completeness, which of course is
required to solve general computing problems. Much depends on how you
define (and constrain) the problem space, and we need to get better
and more disciplined about that.

If this sounds like the old Lisp mantra of “design a language to fit
your problem and then just use that language,” well, yes. How does one
do it effectively, though? You mentioned techniques for making
general-purpose programs easier to reason about, and they boiled down
to making them “smaller” in multiple dimensions. But our problem
domains are big and getting bigger. So something is missing. Back to
impedance mismatch again.

I’m going to try to have the discipline to stand back from this thread
now because until I can prove some of this crap I’ve been spouting, I
don’t think I’m adding much value.

On Thu, Aug 10, 2006 at 11:26:01PM +0900, Francis C. wrote:

I have to solve a lot of computer-language-generated problems that do
not contribute value to the solution. It’s like our machines have too
much friction.

I would gladly give up Turing-completeness in order to get a
(domain-specific) language that was more expressive (less
“frictional”) in terms of the problems I need to solve.

For that, you pretty much have to choose the language that is least
“frictional” for the specific task you’ve undertaken. For certain web
development tasks, nothing beats PHP – PHP programming mostly consists
of looking up already extant core language functions and gluing them
together with some control structures. For something that doesn’t
neatly, and roughly exactly, fit the mold of any of PHP’s core
functions, you tend to find yourself battling the language quite a lot,
fighting with its half-baked semantic structure and trying to squeeze
blood from its very anemic syntax. Perl provides a (slightly less, but
still) very accessible analog to the PHP core function bucket in the
form of CPAN, but you have to choose your modules wisely – meanwhile,
Perl’s semantic structure is better designed and more enabling, while
its syntax is far, far richer, but you have to be careful about how you
write programs so that you don’t do something you won’t understand later
(like, tomorrow) and be unable to maintain. It’s easy to write
maintainable Perl, contrary to the common “wisdom”, but it takes
attention to detail to do so.

Lisp is about the best language you’ll ever find for metaprogramming.
If you’re going to be working on a large project with many people,
you’re better off using a language that handles OOP better than Perl’s
tacked-on object model. Ruby manages this excellently, as does
Smalltalk. There are other reasons to use each as well, but I’m getting
tired of describing languages, so I’ll keep this short.

The point is that you’re probably best served for any nontrivial
programming task by choosing the right language for the job from among
Turing-equivalent/Turing-complete languages that already exist. Each
has its own benefits in terms of enabling easier solutions for the
problems at hand, and to get some of that enabling you have to trade off
with other enabling (at least, you do if you want your language
interpreter to finish parsing code this century).

On Wed, Aug 09, 2006 at 01:41:32PM +0900, M. Edward (Ed) Borasky wrote:

they all almost always contain, however, is a domain-specific language
that controls where they find their inputs, what processing they do on
them, where they put their outputs and in what formats, etc., on what we
used to call “control cards”. Some of the parsers and DSLs of this
nature are quite complex and rich, but they are in no sense of the word
“Lisp”.

I’m not really sure how to address this in a manner that gets my
understanding of Greenspun’s Tenth Rule across (that’s why it has taken
me this long to reply). I guess, to solve the problem of figuring out
what’s going on with the complicated FORTRAN problems you indicate, you
could hand them to a long-time Lisp programmer who also knows some
FORTRAN and ask him or her to point out any Lisplike semantics you’ve
implemented in the process of developing fully-contained domain specific
languages as part of solving your problem, assuming there are any such
examples of Greenspun’s Tenth Rule in action. Perhaps there aren’t.

Maybe we should ask Greenspun.

Curiously enough, if you read Chuck Moore’s description of the evolution
of FORTH, you’ll find something similar – every sufficiently
complicated FORTRAN program contained a huge “case” statement which was
a “language interpreter”.

Not to be insulting, or anything, but perhaps Greenspun’s Tenth Rule
should have specified that these FORTRAN programs were “well-written”.
If you’re implementing a nontrivial domain-specific language within a
program entirely by way of case/switch statements, I shudder to
contemplate the source code. Then again, I’ve never written a complex
program in FORTRAN, nor bothered to figure out how I might do so –
perhaps that’s the only way to do it.
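For illustration, here is a toy of that “huge case statement as language
interpreter” pattern, sketched in Ruby rather than FORTRAN (the opcodes
and `run` are invented for this example):

    # A tiny interpreter: the program is data, and one case statement
    # dispatches on each opcode -- Greenspun’s pattern in miniature.
    def run(program, env = {})
      program.each do |op, *args|
        case op
        when :set   then env[args[0]]  = args[1]
        when :add   then env[args[0]] += args[1]
        when :print then puts env[args[0]]
        else raise "unknown opcode: #{op}"
        end
      end
      env
    end

    run [[:set, :x, 40], [:add, :x, 2], [:print, :x]]   # prints 42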

By the way, one of the classic “demonstrations of Artificial
Intelligence”, ELIZA, was written not in Lisp but in a list-processing
language patterned on Lisp but implemented in (pregnant pause) FORTRAN.
But the SLIP interpreter, rather than being “sufficiently complicated”
was in fact “delightfully simple.”

I’m pretty sure Greenspun didn’t mean to say that creating a (partial)
Lisp implementation was itself complicated – only that the problems
that necessitate doing so are complicated.

Francis C. [email protected] writes:

I’m not an FP partisan myself so I won’t argue with you, but I have a
feeling some Lisp and Ocaml fanboys are firing up their blowtorches for
you ;-)

I guess I’m probably too late to stop the flamewar, but I was merely
answering your question and not advocating for one side or the other.
I think FP is great! And to prove it, I’ll quote someone who had a
different experience from yours. ;-)

We’ve had very little trouble getting non-Lisp programmers to read
and understand and extend our Lisp code. The only real problem is
that the training most programmers have in Lisp has taught them to
code very inefficiently, without paying any attention to the
compiler. Of course, with things like STL and Java, I think
programmers of other languages are also becoming pretty ignorant.

“Carl de Marcken: Inside Orbitz”

Steve

On 8/10/06, Steven L. [email protected] wrote:

I guess I’m probably too late to stop the flamewar, but I was merely
answering your question and not advocating for one side or the other.
I think FP is great! And to prove it, I’ll quote someone who had a
different experience from yours. ;-)

Ok, so how about this for a hypothesis?

The thing that makes FP wonderful is the fact that everything is a
function. Meaning, everything is a mapping between one single domain
element (which can be a tuple) and one single functional result (which
can be a tuple). There are no side effects, and lists are the primary
way of thinking of sets. All kinds of nice things fall out of this,
like a natural approach to concurrency (“futures” and “promises”) and
much more graceful debugging.

On the other hand, the FP approach breaks down fast when programs get
big, because the hardware we currently know how to build isn’t a good
match for handling functions; it’s a good match for handling values.
(Please spare me the inevitable “we built a four-million-line Lisp
program that’s running our company and it’s been in production for 25
years” story: those are the exceptions that prove the rule.)

But what else can you think of that presents an analogy to FP’s virtues?
What springs to my mind are distributed systems. Back in the Nineties we
all saw how miserably the object-oriented approach fails when you try to
scale it across process spaces. But systems based on idempotent
message-passing scale beautifully and easily (think of Erlang), because
all of the functionality you can access in the network behaves like
functions, not like objects with state.

So here’s my hypothesis: find a radically different discipline for
building large systems in which “components” are developed using non-FP
languages like Java and Ruby and even C++. These components are much
coarser-grained than the ones you normally think of from object
orientation. They are more like relatively large subsystems, but
self-contained and fully unit-testable. The idea is that the upper size
limit for these functional blocks is correlated with the point at which
(speaking informally) they become “too hard” to debug.

Now take these components, which have side effects like writing to
databases, sending purchase orders, etc., and knit them together into
larger systems using a pure functional language. This captures the
intuition that no program should ever get too “large” in terms of
function points, but also the distinction between functional components
(which are most easily written imperatively) and large-scale process
definitions (which are most easily written functionally).
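A rough sketch of what that might look like, with every name below
invented purely for illustration: the side-effecting components are
opaque blocks, and the large-scale process is nothing but their
composition.

    # Coarse-grained, side-effecting "components" wrapped as functions;
    # the top-level process is pure composition, with all state inside.
    fetch_orders = lambda do |date|
      # imagine a database query here
      [{ :vendor => "Acme", :total => 100 }, { :vendor => "Acme", :total => 50 }]
    end

    summarize = lambda do |orders|
      # a genuinely pure step: data in, data out
      orders.group_by { |o| o[:vendor] }
    end

    deliver = lambda do |report|
      # imagine email or a message queue here
      puts report.inspect
    end

    daily_run = lambda { |date| deliver.call(summarize.call(fetch_orders.call(date))) }
    daily_run.call("2006-08-10")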

So the kneejerk response is: “that’s what REST is for, stupid.” Yes,
that’s a great starting point. But I want to take a step beyond, to a
pure FP language that can express complex interactions among components
without requiring complex synchronization. See the world as functions
rather than as resources.

Now that’s a pretty half-assed and informal proposal, but give it some
thought before you flame back. What could be wrong with it? Well, it
might turn out that large-scale processes don’t work well when modeled
as functions; they might actually need persistent state in order to be
meaningful. But maybe not.