The May 2006 issue of the IEEE Computer Society magazine ‘Computer’ has a
thought-provoking article on threading and concurrency (p. 33).
The author makes three points:
1. Threading is an error-prone method of parallelizing a program.
Basically, his thesis is that thread packages support non-deterministic
coding and then add methods of constraining the non-determinism, and
that such methods are brain-twistingly difficult to write and read,
difficult to test, and harbor latent bugs that don’t appear for years
and are correspondingly nearly impossible to reproduce.
2. Better methods of expressing concurrency exist and have been implemented
in many obscure languages, but they still don’t have mainstream acceptance.
3. The author advocates use of ‘coordination’ or ‘composition’ languages
on top of existing general-purpose languages to express concurrency.
These coordination languages still have a lot of work yet to be done. My
thought is perhaps Ruby can express concurrency cleanly without
needing another language.
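As a purely hypothetical sketch of the kind of thing I mean (the ‘future’
helper below is my own invention for illustration, not an existing Ruby
feature), the programmer would say only what may run concurrently and the
language would hide the thread plumbing:

require 'thread'

# Hypothetical helper: run a block on its own thread and return a
# callable that blocks until the result is ready.
def future(&work)
  t = Thread.new(&work)
  lambda { t.value }
end

a = future { sleep 0.2; 6 }   # both blocks run concurrently
b = future { sleep 0.2; 7 }
puts a.call * b.call          # waits for both, prints 42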
I thought this idea might appeal to Matz, language-aficionado that he
is, and that Ruby has demonstrated with Rake and Rails that not having
multiple languages in a development environment has benefits.
My point in posting this message is to ask the Ruby community if it is
worth thinking about laying some foundations in Ruby 2.0 and YARV to
elegantly support other methods of expressing concurrency. Perhaps this
work won’t show results until Ruby 3.0, but reserving some keywords in
the grammar and some hooks in the VM may yield dividends in the future.
It is clear to me that single processor machines are becoming quaint,
and that the new norm in desktop machines will be multi-core, multi-chip
SMP and NUMA machines along with clusters for servers.
In this new environment, if Ruby can seamlessly and cleanly take
advantage of available concurrent resources, it will be a huge win for
Ruby over other popular languages. My hope is that the Ruby VM will take
care of each architecture’s concurrency ugliness behind the scenes,
leaving the fun stuff in front.
Yes, I’m posting this essentially anonymously. I’m new to Ruby, and
rusty at coding and threading. I’m intrigued by the idea of having fun
again and I want Ruby to be the best language it can be. I did search
the archives for discussions of concurrency and parallelism - I didn’t
find very much. I also want to be able to attend Ruby events without
needing a paper bag on my head if it turns out that this is a pointless
post. I trust the community won’t flame me too badly.
In message “Re: Beyond threads? Better concurrency methods?”
on Thu, 20 Jul 2006 02:35:13 +0900, anmus [email protected] writes:
|The recent IEEE Computer society magazine ‘Computer’ May 2006 has a
|thought-provoking article on threading and concurrency (p. 33).
|I thought this idea might appeal to Matz, language-aficionado that he
|is, and that Ruby has demonstrated with Rake and Rails that not having
|multiple languages in a development environment has benefits.
Interesting. But I don’t have IEEE Computer magazine at hand. Could
anyone point me to further information about this ‘coordination’?
These coordination languages still have a lot of work yet to be done. My
thought is perhaps Ruby can express concurrency cleanly without needing
another language.
I read the article and somewhat vaguely remember what the author
recommended. I think “still have a lot of work yet to be done” is a
gross understatement.
I thought this idea might appeal to Matz, language-aficionado that he
is, and that Ruby has demonstrated with Rake and Rails that not having
multiple languages in a development environment has benefits.
My point in posting this message is to ask the Ruby community if it is
worth thinking about laying some foundations in Ruby 2.0 and YARV to
elegantly support other methods of expressing concurrency. Perhaps
this work won’t show results until Ruby 3.0, but reserving some
keywords in the grammar and some hooks in the VM may yield dividends
in the future.
Uh … the primitives need to be in the OS for most
“concurrency/parallelism” implementations. Keywords and virtual machines
come after that. And for the primitives to be in the OS, they need to be
in the hardware. The paradigms supported by today’s hardware and
operating systems are the paradigms that have a track record for the
most part.
It is clear to me that single processor machines are becoming quaint,
and that the new norm in desktop machines will be multi-core,
multi-chip SMP and NUMA machines along with clusters for servers.
And the stories I’m seeing in the trade press are that “parallel
programming” is no easier today than it was when Gene Amdahl first
published his law. There aren’t any silver bullets.
In this new environment, if Ruby can seamlessly and cleanly take
advantage of available concurrent resources, it will be a huge win for
Ruby over other popular languages. My hope is that the Ruby VM will
take care of each architecture’s concurrency ugliness behind the
scenes, leaving the fun stuff in front.
Strangely enough, I don’t recall ever seeing a real programming
language, to be distinguished from academic ones, that ever handled
parallelism in a manner other than as calls to run-time libraries. Ruby
already has that.
Well, actually, there was one … Occam for the Transputer. Some
companies actually built products around this, although they were not
economically viable. Ruby seems to be too well established for it to
suffer this unhappy fate.
Yes, I’m posting this essentially anonymously. I’m new to Ruby, and
rusty at coding and threading. I’m intrigued by the idea of having fun
again and I want Ruby to be the best language it can be. I did search
the archives for discussions of concurrency and parallelism - I didn’t
find very much. I also want to be able to attend Ruby events without
needing a paper bag on my head if it turns out that this is a
pointless post. I trust the community won’t flame me too badly.
No, it’s not a pointless post by any stretch of the imagination. I think
most of us over a certain level of experience in programming have these
dreams. In my own career, so far I’ve had the dreams of automatically
proving programs correct, widespread adoption of formal semantics in
programming languages, functional languages and programming styles
dominating the practice, literate programming, the ability to write
programs faster, etc. All of these dreams have fallen to the tyranny of
“good enough”, and I suspect “seamless supercomputing” is another one.
So I write my code the way I know how, hope that others can read it, try
to keep it simple enough that I can convince myself it’s correct, and
try to reserve the time to refactor.
My point in posting this message is to ask the Ruby community if it is
worth thinking about laying some foundations in Ruby 2.0 and YARV to
elegantly support other methods of expressing concurrency. Perhaps
this work won’t show results until Ruby 3.0, but reserving some
keywords in the grammar and some hooks in the VM may yield dividends
in the future.
Uh … the primitives need to be in the OS for most
“concurrency/parallelism” implementations.
I’ve got an SMP kernel.
Keywords and virtual machines
come after that. And for the primitives to be in the OS, they need to be
in the hardware.
I’ve got a dual core processor.
The paradigms supported by today’s hardware and
operating systems are the paradigms that have a track record for the
most part.
Some support for parallelism already seems to be in place at the OS
and hardware levels. I think the point of the article (which I only
read a summary of) was that we need better ways of describing
parallelism (better ways than threads).
In this new environment, if Ruby can seamlessly and cleanly take
advantage of available concurrent resources, it will be a huge win for
Ruby over other popular languages. My hope is that the Ruby VM will
take care of each architecture’s concurrency ugliness behind the
scenes, leaving the fun stuff in front.
Strangely enough, I don’t recall ever seeing a real programming
language, to be distinguished from academic ones, that ever handled
parallelism in a manner other than as calls to run-time libraries. Ruby
already has that.
In the hardware world there are HDLs (hardware description languages)
which model parallelism using an RTL/dataflow model. Oddly enough,
the hardware folks are trying to figure out how to use C/C++ to model
hardware. I’m wondering if they’re going the wrong direction; C/C++
don’t seem to be a good fit for hardware design from what I’ve seen so
far. Maybe we need to introduce dataflow concepts into general-purpose
programming languages (project plug: see RHDL: http://rhdl.rubyforge.org/ ).
The basic idea is that in an HDL everything is happening at once; all
statements outside of a process block execute concurrently. Inside a
process they execute as they would in a normal programming language,
but all of the processes are considered to be executing in parallel.
Processes get triggered by changes in signals. Of course HDL
simulators often make use of threads or continuations (RHDL uses
continuations which in turn are implemented as threads in Ruby).
Hardware is inherently parallel. You can think of logic gates as
being simple little processors: outputs change when inputs change.
It’s dataflow. That’s why HDLs were developed in the mid-’80s to model
hardware.
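To make that concrete, here is a toy dataflow sketch in plain Ruby (my
own illustration, not RHDL’s actual API): a wire notifies its dependent
processes whenever its value changes, so gates re-evaluate automatically,
much like signal events in an HDL.

class Wire
  attr_reader :value

  def initialize(value = 0)
    @value = value
    @processes = []
  end

  # register a "process" that is sensitive to this wire
  def on_change(&process)
    @processes << process
  end

  def value=(v)
    return if v == @value           # only real transitions fire events
    @value = v
    @processes.each { |p| p.call }  # propagate, like an HDL event
  end
end

a = Wire.new
b = Wire.new
y = Wire.new

# an AND gate: its body re-runs whenever a or b changes (its sensitivity list)
and_gate = lambda { y.value = a.value & b.value }
[a, b].each { |w| w.on_change(&and_gate) }

a.value = 1
b.value = 1
puts y.value   # => 1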
“M. Edward (Ed) Borasky” [email protected] wrote on 20/07/2006
05:08:35:
Strangely enough, I don’t recall ever seeing a real programming
language, to be distinguished from academic ones, that ever handled
parallelism in a manner other than as calls to run-time libraries. Ruby
already has that.
Well, actually, there was one … Occam for the Transputer. Some
companies actually built products around this, although they were not
economically viable. Ruby seems to be too well established for it to
suffer this unhappy fate.
I spent some great years programming the Transputer in OCCAM
commercially.
It would be good to see some ideas from CSP in Ruby…
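Just to sketch what I mean (this Channel class is an illustration, not
an existing library): a SizedQueue of size one is not a true zero-buffer
rendezvous, but it is close enough to show the CSP flavour of processes
communicating only through channels.

require 'thread'

class Channel
  def initialize
    @q = SizedQueue.new(1)   # a sender blocks until the value is taken
  end

  def put(v)
    @q.push(v)
  end

  def take
    @q.pop
  end
end

ch = Channel.new

producer = Thread.new do
  5.times { |i| ch.put(i) }
  ch.put(:done)
end

consumer = Thread.new do
  while (v = ch.take) != :done
    puts "received #{v}"
  end
end

producer.join
consumer.join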
Well, actually, there was one … Occam for the Transputer. Some
companies actually built products around this, although they were not
economically viable. Ruby seems to be too well established for it to
suffer this unhappy fate.
I spent some great years programming the Transputer in OCCAM commercially.
It would be good to see some ideas from CSP in Ruby…
And I spent four terrible years at a company called Floating Point
Systems that bet the farm on an Occam/Transputer hypercube and ended up
becoming one of the Portland area’s larger disemployers. To be fair, I
liked the Transputer and Occam, but the guys with the big bags of
nickels bought other stuff from other companies.
Anyhow, CSP is “old hat” – this year’s “silver bullet” is the
PI-Calculus, a close relative of Hoare’s CSP and a direct descendant of
Milner’s CCS. I like the PI-Calculus just as much as I liked CSP, Occam,
Concurrent Pascal, CCS, Linda/Rinda and all the other theoretical
computer science approaches. In my own field, performance engineering, I
like the CCS derivative, Jane Hillston’s PEPA. But what are the guys
with the big bags of nickels buying?
Thank you. Unfortunately I have to buy the article for $19.00, which
seems too expensive for a single article.
Weird. I didn’t have to pay for it. I’ve seen that Srinivas has already
sent you the URL of the article.
Yeah, it’s really weird. If you go to the main page, the article is there,
available for free. But if you go to Past Issues > May 2006 you can only
read the summary, with an option to buy the article as a PDF. Talk about
consistency…
Anyhow, CSP is “old hat” – this year’s “silver bullet” is the
PI-Calculus, a close relative of Hoare’s CSP and a direct descendant of
Milner’s CCS. I like the PI-Calculus just as much as I liked CSP, Occam,
Concurrent Pascal, CCS, Linda/Rinda and all the other theoretical
computer science approaches. In my own field, performance engineering, I
like the CCS derivative, Jane Hillston’s PEPA. But what are the guys
with the big bags of nickels buying?
Erlang doesn’t count? True, I don’t know of anyone using it besides
Ericsson, but they use it in real commercial products that other
companies pay large amounts of money for.
Erlang’s sort of a strange one. Obviously, it can’t be called
academic, and the programming model is entirely based on concurrency,
but at the same time, it doesn’t support native threading. I’ve always
thought that is a strange choice. They have really fast user-space
threading, but it will never use multiple processors unless you’re
running multiple Erlang VMs, and then you have to explicitly start your
thread on the VM of your choice, from what I understand. Still, I love
the language. I wish ruby allowed overloading of the ! operator, just
so I could implement a ruby thread wrapper that could accept messages
with Erlang syntax
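Something along these lines, with << standing in for Erlang’s ! (just a
sketch of the idea, not an existing library):

require 'thread'

class Actor
  def initialize(&behaviour)
    @mailbox = Queue.new                       # thread-safe FIFO of messages
    @thread  = Thread.new do
      loop { behaviour.call(@mailbox.pop) }    # block until a message arrives
    end
  end

  # Ruby won't let us redefine '!', so '<<' stands in for Erlang's send
  def <<(message)
    @mailbox.push(message)
    self
  end
end

logger = Actor.new { |msg| puts "got: #{msg.inspect}" }
logger << [:info, "hello"]    # roughly: Logger ! {info, "hello"}
sleep 0.1                     # give the actor's thread a moment to run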
And for the primitives to be in the OS, they need to be in the
hardware. The paradigms supported by today’s
hardware and operating systems are the paradigms that have a track
record for the most part.
Well, the Scala language is hosted on the JVM, so the only support its
underlying “OS” gives it is traditional threads. However, because of the
abstraction capabilities in the language, they’ve got a concurrency
model similar to Erlang’s. I definitely think that the Ruby community
should look to see what additional concurrency abstractions can be
swiped from elsewhere.
And the stories I’m seeing in the trade press are that “parallel
programming” is no easier today than it was when Gene Amdahl first
published his law. There aren’t any silver bullets.
“no silver bullets” is different from “this is the best we can do.”
Structured programming wasn’t the silver bullet it was promoted as at
first, but I doubt anyone thinks it wasn’t an improvement.
Strangely enough, I don’t recall ever seeing a real programming
language, to be distinguished from academic ones, that ever handled
parallelism in a manner other than as calls to run-time
libraries. Ruby already has that.
Erlang doesn’t count? True, I don’t know of anyone using it besides
Ericsson, but they use it in real commercial products that other
companies pay large amounts of money for.
And what about Ada? I seem to remember that someone somewhere was
building production systems for some rather extreme operating
environments in Ada…
Also, I’d object to the idea that Java handles parallelism only with
calls to the run-time library: the concepts of threads, synchronized
blocks, and object monitors are built into the language
syntax even if there is some code in a run-time library to implement
some aspects of java.lang.Thread.
Strangely enough, I don’t recall ever seeing a real programming
language, to be distinguished from academic ones, that ever handled
parallelism in a manner other than as calls to run-time
libraries. Ruby already has that.
Erlang doesn’t count? True, I don’t know of anyone using it besides
Ericsson…
A couple of other notable projects built with Erlang are the
“Wings 3D” [1,2,3] 3D modeling package and the “ejabberd” [4]
database-backed Jabber server. Earlier this year Jabber.org (one of the
larger Jabber servers, with 180K users) started using the Erlang-based
server for its own use [5], so it’s a pretty mature product.
Erlang is actually a pretty nice general purpose language; and
indeed was designed to support parallelism without threads[6].
I’d have to say Erlang’s an excellent answer to Ed’s comment.
To Ed’s “real programming language … that handled parallelism in a
manner other than as calls to run-time libraries”, doesn’t C with
OpenMP also count?
#include <omp.h>

int main(int argc, char *argv[])
{
    int i, a[100];

#pragma omp parallel for   /* split the loop iterations across the available cores */
    for (i = 0; i < 100; i++)
        a[i] = 2 * i;

    return 0;
}
Though admittedly I’ve only seen OpenMP in academic projects so far.
This is a little old. I’m glad that the hotbed of Occam programming
(Kent) is still working with it.
Besides its explicit parallelism, Occam’s other claim to fame is its
basis in formal methods. Many years ago, silicon chip designs were
‘derived’ from Occam program descriptions. When INMOS was bought by EMI
and then absorbed into STM, the idea of deriving silicon went silent. My
hypothesis was that STM was using Occam ++ internally as its silver
bullet design methodology. Complex chips need to work with a minimum of
tweaks in the manufacturing process.