On Mon, Aug 01, 2011 at 04:24:43PM +0900, Mike S. wrote:
I think this is closer to the truth:
Talk:Second-generation programming language - Wikipedia
<https://en.wikipedia.org/wiki/Talk:Second-generation_programming_language>
In my experience, the language “generation” talk is (as noted early on that page) basically marketing. People use it to try to say their language is better than yours. People who are serious about software development and language design tend to refer to languages as being more or less “high-level” or “abstracted”, or as being more or less “domain-specific”, while people who are more serious about selling you on an idea will sometimes refer to a language as a “4GL” as if that makes it good somehow.
From what I’ve seen, when someone who actually knows what (s)he’s doing and cares about getting something done uses the term “4GL”, it is usually in a sarcastic or derogatory way, meaning “not a real programming language”. Suffice it to say that there is a lot of skepticism out there about the generational jargon for language classification. Your mileage may vary.
Third generation came in to mark the first languages a level above assembler. Fourth generation was applied to things like Focus, which dealt with typical DP tasks by removing some of the chores, particularly handling empty datasets or the starts/ends of datasets. That theme would lead you to call SQL or PL/SQL 4GLs, but of course people now expect more of a programming language, so they would reject these as ‘languages’.
More of a language than PL/SQL...? You’re aware that PL/SQL is actually a Turing-complete programming language – right? Granted, I wouldn’t want to use it for general-purpose programming, but it is entirely capable of such (ab)use.
Of course, I seem to recall that the idea of third generation languages as a term of jargon mostly arose as we approached the marketing hype around upcoming so-called fourth generation languages. Businesses started thinking about how to sell people on the idea of non-programmers being able to do all of your programming, and eventually ended up with what they called 4GLs, which actual programmers looked at with severe suspicion because of the way they tended to give people with no skill the ability to create something with no value (in their view, at least) while still effectively devaluing the contributions of software developers in the eyes of middle managers. I think 3GL has mostly arisen as a term used to denigrate anything in common usage that is not a 4GL according to the marketing geniuses trying to sell you a drag-and-drop automation system.
Back when languages like C and its descendants started to appear, I think everyone basically just called them “high level languages”, and not “third generation languages”. Maybe all of this is just my perception, based on the people with whom I interacted at the time and the reading that I did over the years. I suppose I might have completely missed a lot of people using the term 3GL years ago, before the rise of 4GLs as products.
Perhaps a little ironically, I have seen one 4GL that might actually provide some of what it promises as a way to make it possible for non-programmers to do some programming – some, I say. I speak of Google’s Android App Inventor. It has some limitations that make it unsuitable for some purposes, and unenticing to me as a developer, but it really does allow some pretty arbitrary software development goals to be achieved by someone who is not familiar, or comfortable, with traditional programming tasks involving the work of writing source code.
Things moved away from this pattern. We seemed to revert to lower-level languages like C, but added in all sorts of powerful features and libraries. Ruby with its iterators deals with some things in a way 4GLs did.
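For instance, here is a minimal Ruby sketch of that chore-removal theme (the dataset is invented for the example): the iterator absorbs the empty-dataset and start/end bookkeeping that something like Focus handled for you.

    # Hypothetical dataset; in a 4GL the iteration chores below would be implied.
    sales = [120, 95, 210]

    # No explicit index variable, no length check, no off-by-one risk:
    sales.each_with_index { |amount, i| puts "row #{i}: #{amount}" }

    # An empty dataset is not a special case; the fold simply returns its seed:
    total = [].inject(0) { |sum, n| sum + n }   # => 0
    puts total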
So while the terminology is obsolete, what is interesting is why the
author wanted to distinguish Ruby from C#. As I recall, 4GLs were
interpreted and thus capable of dynamic programming. Perhaps that’s what
he/she was getting at.
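As an aside, if “dynamic programming” here means modifying the program while it runs, Ruby does that comfortably. A minimal sketch, with the Report class and its field list invented purely for illustration:

    class Report
      # In a 4GL these accessors might be implied by a data dictionary;
      # here we generate them at runtime instead of writing them out.
      %w[title author].each do |field|
        define_method(field)       { instance_variable_get("@#{field}") }
        define_method("#{field}=") { |v| instance_variable_set("@#{field}", v) }
      end
    end

    r = Report.new
    r.title = "Quarterly figures"
    puts r.title   # => Quarterly figures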
That’s a very strange way to distinguish between a so-called 3GL and a so-called 4GL, and it seems to disagree with every other means of differentiating between them that I’ve ever encountered. Interestingly, it would make Objective Caml both a 3GL and a 4GL, depending on how you use it, as well as something in between – because the “official” implementation of OCaml comes with a compiler, a bytecode VM, an interpreter, and even a REPL. How do you classify Common Lisp using such criteria – a language available in a plethora of implementations, including compilers at one extreme and interpreters at the other?
How do bytecode VMs fit into this system of categorization anyway? Ruby is moving very heavily toward VMs and away from basic interpreters, including the new reference implementation for 1.9.x, Rubinius, and so on. Does that make it no longer suitable for identification as a 4GL according to the criteria of someone basing categorization on interpreter vs. compiler implementations?
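As a concrete aside: on MRI 1.9 (the YARV VM) you can watch “interpreted” Ruby get compiled to bytecode before it runs, which to my mind says interpreter-vs-compiler is a property of the implementation rather than the language:

    # MRI 1.9+ only; the RubyVM constant is specific to YARV.
    iseq = RubyVM::InstructionSequence.compile("1 + 2")
    puts iseq.disasm   # the bytecode YARV will execute
    puts iseq.eval     # => 3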