Huge performance gap

Reggie Mr wrote:

Austin Z. wrote:

On 7/1/06, Reggie Mr [email protected] wrote:

Here is a simple graph of performance by different platforms.

I can’t think of a more useless “test”, other than anything put out by the Alioth shootout.

Sure, it’s a simple test, but that doesn’t make it useless. Systemic performance testing is the most relevant, but “unit” performance testing also has its use.

I would agree…except Ruby did VERY poorly in this “useless” test.

Ruby didn’t do poorly. With fastcgi, it compares with PHP5. I think that’s quite respectable. What did poorly was RoR. ruby+fastcgi has good performance, but it drops by over an order of magnitude if you add rails to the mix. Now that is quite telling.

Daniel


We recently did a simple hello world test with Rails on a very low-end machine and compared it with a Ruby framework that we built for our commercial apps. Both apps had no database, and simply served the phrase “Hello, world” with a text/plain mime type. The test client was running localhost to minimize TCP and network effects. Rails was running in fast-cgi mode (one process for the whole run) and our framework was running in CGI mode (one fork per request).

Rails did 20 pages per second. The other app did 200 per second.
(Straight-run apache with a cached static page of similar size could
probably do 1000/second or more on this machine.)

Bear in mind, both of these frameworks are Ruby. This tells me the
comparison to other languages is misleading at best.

On 7/2/06, Austin Z. [email protected] wrote:

On 7/2/06, Reggie Mr [email protected] wrote:

I would agree…except Ruby did VERY poorly in this “useless” test.

No except. It’s a useless test. I wouldn’t trust a single thing about
it. What this is essentially measuring, especially with CGI-style
output, is startup time. Ruby does have a slower start-up time than
other options.

Austin makes a good point; I’d expect they’d all blow away Java in CGI mode :)

On Sun, Jul 02, 2006 at 07:38:42AM +0900, Charles O Nutter wrote:

Ruby has great potential to make these same kinds of optimizations at
runtime, and as I understand it, YARV will do quite a bit of “smart”
optimization at runtime.

IIRC it just used inline caches for … constant lookup. No inline caches for method calls, let alone PICs. And no inlining either.

Some opcodes are (statically) optimized: hardcoded operations are used (instead of a full method call) for Fixnum, Float, [String, Array, Hash… for the opcodes for which they make sense] (tested in roughly that order) if the corresponding methods have not been redefined (this code invalidation wasn’t implemented last time I read the sources though, but it’s been a while). And IIRC there was also some sort of optimization for Integer#times and some other methods one often uses in synthetic benchmarks.

But at any rate it was far from doing lots of dynamic optimizations.

Hello Charles,
CON> Ruby has great potential to make these same kinds of optimizations
CON> at runtime, and as I understand it, YARV will do quite a bit of
CON> “smart” optimization at runtime.

The last time I looked into papers about YARV, it did nothing about this. It optimizes the control flow but doesn’t do anything about data-based optimizations. So YARV is pretty simple; it’s just that the current implementation of Ruby is so weak that there is a lot of room for simple optimizations.

Okay, this was one year ago, but I doubt a lot has changed, as there is still no official YARV release today.

On 7/2/06, Mauricio F. [email protected] wrote:

Some opcodes are (statically) optimized: hardcoded operations are used
methods one often uses in synthetic benchmarks.

But at any rate it was far from doing lots of dynamic optimizations.

Well, that’s too bad, but many of the optimizations you mention do sound similar to what we’re doing in JRuby. Of course, JRuby has other issues to tidy up before these optimizations will be very fruitful (like our remaining yet-to-be-implemented-in-Java native libraries) but it’s good to see we’re going down similar paths. We’re also planning on doing some mixed-mode JIT, however, once I find time to work on the compilation side of things. All told there should be plenty of excellent VM options for Ruby in the future.

Robert K. wrote:

I’m not sure whether you read Charles’ excellent posting about the properties of a VM. All optimizations you mention are static, which is reflected in the fact that they are based on statistical information of a large set of applications, i.e. there is basically just one application that those optimizations can target. A VM on the other hand (and this is especially true for the JVM) has more precise information about the current application’s behavior and thus can target optimizations better.

So, in fact, does a CISC chip with millions of transistors at its disposal. :) Real machines are pretty smart too, at least the ones from Intel are. The point of my comment was the emphasis on statistical properties of applications. Since this is the area I’ve spent quite a bit of time in, it’s a more natural approach to me than, say, the niceties of discrete math required to design an optimizing compiler or interpreter.

In the end, most of the “interesting” discrete math problems in
optimization are either totally unsolvable or NP complete, and you end
up making statistical / probabilistic compromises anyhow. You end up
solving problems you can solve for people who behave reasonably
rationally, and you try to design your hardware, OS, compilers,
interpreters and languages so rational behavior is rewarded with
satisfactory performance, not necessarily optimal performance. And you
try to design so that irrational behavior is detected and prevented from
injuring the rational people.

I’ll try an example: consider method inlining. With C++ you can have methods inlined at compile time. This will lead to code bloat, and the developer will have to decide which methods he wants inlined. This takes time, because he has to do tests and profile the application. Even then it might be that his tests do not reflect the production behavior due to some error in the setup or wrong assumptions about the data to be processed, etc. In the worst case method inlining can have an adverse effect on performance due to the increased size of the memory image.

Not to mention what happens with the “tiny” caches most of today’s machines have.

Another advantage of a VM is that it makes instrumentation of code much easier. Current JVMs have a sophisticated API that provides runtime statistical data to analysis tools. With compiled applications you typically have to compile for instrumentation. So you’re effectively profiling a different application (although the overhead may be negligible).

Don’t get me wrong, the Sun Intel x86 JVM is a marvelous piece of software engineering. Considering how many person-years of tweaking it’s had, that’s not surprising. But the original goal of Java and the reason for using a VM was “write once, run anywhere”. “Anywhere” no longer includes the Alpha, and may have never included the MIPS or HP PA-RISC. IIRC “anywhere” no longer includes MacOS. And since I’ve never tested it, I don’t know for a fact that the Solaris/SPARC version of the JVM is as highly tuned as the Intel one.

To bring this back to Ruby, my recommendations stand:

  1. Focus on building a smart(er) interpreter rather than an extra
    virtual machine layer.
  2. Focus optimizations on the Intel x86 and x86-64 architectures for the
    “community” projects. Leverage off of GCC for all platforms; i.e.,
    don’t use Microsoft’s compilers on Windows. And don’t be afraid of a
    little assembler code. It works for Linux, it works for ATLAS
    (Automatically Tuned Linear Algebra Subroutines) and I suspect there’s
    some in the Sun JVM.
  3. Focus on Windows, Linux and MacOS for complete Ruby environments for
    the “community” projects.


M. Edward (Ed) Borasky

Ruby has a built-in profiler. Fair enough, let’s run it, it would be interesting. You started a new thread, but my comment was part of a different thread comparing (and disparaging) Ruby against other languages (and frameworks) typically used for Web development. And my point was that Ruby itself isn’t the problem.

My experience suggests that Ruby’s performance “problems” are negligible with small working sets and are very serious with large ones (program size and number of code points seem to matter relatively little). My hypothesis (no proof adduced) is that this is a necessary consequence of Ruby’s extremely dynamic nature and is only somewhat amenable to improvements like YARV and automatic optimizations (as the many Java-like VM proponents suggest). So in my own work I tend to design small Ruby processes performing carefully-circumscribed tasks and knitting them together with a message-passing framework. I think you can make Ruby perform just as fast as anything else, but a style change is required.

And of course the point of the effort is to get Ruby’s productivity improvements without losing too much at the other end. Time-to-market is a measurable quality dimension too.

On 7/2/06, M. Edward (Ed) Borasky [email protected] wrote:

So, in fact, does a CISC chip with millions of transistors at its
disposal. :) Real machines are pretty smart too, at least the ones from
Intel are. The point of my comment was the emphasis on statistical
properties of applications. Since this is the area I’ve spent quite a
bit of time in, it’s a more natural approach to me than, say, the
niceties of discrete math required to design an optimizing compiler or
interpreter.

Which VMs also benefit from when they compile to native code. Isn’t that why we compile, JIT or AOT, in the first place?

VMs also benefit from online profiling before compiling, to ensure the generated code is closer to optimal. That runtime profiling allows a VM to leverage the underlying processor better than you could by just guessing at it up front, since it makes decisions based on realtime data rather than statistical averages. Yes, there are times it has to guess or go with a “typical” model, but as execution proceeds it can adjust compilation parameters to re-optimize code.

There’s a body of research on this stuff online; I don’t really need to
defend it.

Don’t get me wrong, the Sun Intel x86 JVM is a marvelous piece of

software engineering. Considering how many person-years of tweaking it’s
had, that’s not surprising. But the original goal of Java and the
reason for using a VM was “write once, run anywhere”. “Anywhere” no
longer includes the Alpha, and may have never included the MIPS or
HP PA-RISC. IIRC “anywhere” no longer includes MacOS. And since I’ve
never tested it, I don’t know for a fact that the Solaris/SPARC version
of the JVM is as highly tuned as the Intel one.

JVM discussions are fairly OT, but I have to knock this one down. Sun has JVM implementations for x86, x86-64, Sparc, and Itanium, running Solaris, Linux or Windows (except Linux on Sparc). IBM has JVMs for Linux on IA32, AMD64, POWER 64-bit, and z-Series 31-bit and 64-bit. Apple has a JVM for OS X on PowerPC and for x86. There’s a whole slew of open source JVMs listed on the GNU Classpath site, and bunches of other commercial JVMs for everything from absurdly small devices (like aJile’s native Java chips, ajile.com) to absurdly large ones (Azul Systems network-attached processing, azulsystems.com).

Fighting the VM tide seems a little silly to me. YARV is on the right track.

On 7/2/06, Robert M. [email protected] wrote:

Is your CGI loading and initializing the Ruby interpreter each time it’s invoked?

No, it isn’t. The web server is in Ruby, so it’s integrated into the framework. (Sorry, it was discourteous of me not to actually answer your question in my original response.)

We did this benchmark with a production config of Rails. I think Rails just does a tremendous amount of work, which isn’t surprising considering how much value it adds. There may be subdomains of web development that could benefit from a different set of feature choices than the ones Rails made.

Hello M.,

MEEB> HP-PARISC. IIRC “anywhere” no longer includes MacOS. And since
MEEB> I’ve never tested it, I don’t know for a fact that the
MEEB> Solaris/SPARC version of the JVM is as highly tuned as the Intel one.

It runs much better on Solaris/SPARC than on XXX/Intel.

What tools exist for profiling Ruby?

What might answer a lot of these questions is something along the lines of a call tree showing time spent (clock/sys/user) or CPU cycles for each node of the tree (node and node+children).

Other questions:

Does a Rails development environment do more checking and recompiling than a Rails production environment? If so, by how much does that affect the results?

Is your CGI loading and initializing the Ruby interpreter each time it’s invoked?

Francis C. wrote:

Ruby has a built-in profiler. Fair enough, let’s run it, it would be interesting. You started a new thread, but my comment was part of a different thread comparing (and disparaging) Ruby against other languages (and frameworks) typically used for Web development. And my point was that Ruby itself isn’t the problem.

Agreed, and in fact, my intent was to point the profiler at the
framework itself to locate the problem.

And thank you for the rest of your post. It’s going to save me a lot
of agony going forward!!

Francis C. wrote:

My experience suggests that Ruby’s performance “problems” are negligible with small working sets and are very serious with large ones (program size and number of code points seem to matter relatively little). My hypothesis (no proof adduced) is that this is a necessary consequence of Ruby’s extremely dynamic nature and is only somewhat amenable to improvements like YARV and automatic optimizations (as the many Java-like VM proponents suggest).

Interesting … maybe we shouldn’t be profiling Ruby code with the Ruby profiler, but profiling the Ruby interpreter with “gprof” or “oprofile”. I had assumed that had already been done, though. :)

I personally don’t think it’s a “necessary consequence of Ruby’s extremely dynamic nature.” There are a couple of things it could be:

  1. Page faulting with large working sets. There are things you can do to
    the interpreter to enhance locality and minimize page faulting, but if
    you have two 256 MB working sets in a 256 MB real memory, something’s
    gotta give.

  2. Some process in the run-time environment that grows faster than N log
    N, where N is the number of bytes in the working set. Again, putting on
    my statistician’s hat, you want the interpreter to exhibit N log N or
    better behavior on the average.

So in my own work I tend to design small Ruby processes performing carefully-circumscribed tasks and knitting them together with a message-passing framework. I think you can make Ruby perform just as fast as anything else but a style change is required.

And of course the point of the effort is to get Ruby’s productivity improvements without losing too much at the other end. Time-to-market is a measurable quality dimension too.

I think this is good advice regardless of the language or the application. Still, that does pass some of the burden on to the interpreter and OS, and it doesn’t mean we shouldn’t use your large working set codes as test cases to make the Ruby run-time better. :)


M. Edward (Ed) Borasky

On 7/2/06, M. Edward (Ed) Borasky [email protected] wrote:

N, where N is the number of bytes in the working set. Again, putting on
my statistician’s hat, you want the interpreter to exhibit N log N or
better behavior on the average.

Ok, but these are problems that affect any program regardless of what it’s written in. If that’s your theory, then you still need to explain why Ruby in particular seems to be so slow ;-).

I could figure this out if I were frisky enough (but someone probably already knows), but it seems like Ruby takes a Smalltalk-like approach to method dispatch. Meaning, it searches for the method to send a message to, on behalf of each object. Whereas a language like Java views method dispatch as calling a function pointer in a dispatch table that is associated with each class, and can easily be optimized. That’s what I meant by Ruby’s “extremely dynamic nature.” And the fact that classes and even objects are totally open throughout runtime makes it all the more challenging. As a former language designer, I have a hard time imagining how you would automatically optimize such fluid data structures at runtime. You mentioned page faulting, but it’s even more important (especially on multiprocessors) not to miss L1 or L2 caches or mispredict branches either. If you’re writing C++, you have control over this, but not in Ruby.

The more I work with Ruby, the more I find myself metaprogramming almost everything I do. This seems to put such a burden on Ruby’s runtime that I’m looking for simpler and more automatic ways to run Ruby objects in automatically-distributed containers, to minimize the working sets. The problem is worth solving because the productivity upside is just so attractive.

On 7/2/06, M. Edward (Ed) Borasky [email protected] wrote:

YARV and automatic optimizations (as the many Java-like VM proponents
suggest).
Interesting … maybe we shouldn’t be profiling Ruby code with the Ruby profiler, but profiling the Ruby interpreter with “gprof” or “oprofile”. I had assumed that had already been done, though. :)

I heard a rumor that Ruby’s heavy use of setjmp/longjmp interferes with profiling in some way, but I could have been misinformed or confused. Can anyone confirm that?

Francis C. wrote:

  2. Some process in the run-time environment that grows faster than N log
    N, where N is the number of bytes in the working set. Again, putting on
    my statistician’s hat, you want the interpreter to exhibit N log N or
    better behavior on the average.

Ok, but these are problems that affect any program regardless of what it’s written in.

1 affects any program regardless of what it’s written in. 2 could be either some fundamental constraint of the language semantics (which I doubt) or an optimization opportunity in the run-time to deal more efficiently with the semantics of the language.
I could figure this out if I were frisky enough (but someone probably already knows), but it seems like Ruby takes a Smalltalk-like approach to method-dispatch. Meaning, it searches for the method to send a message to, on behalf of each object.

Ah … now searching is something we can optimize!

Whereas a language like Java views method-dispatch as calling a function pointer in a dispatch table that is associated with each class, and can easily be optimized. That’s what I meant by Ruby’s “extremely dynamic nature.” And the fact that classes and even objects are totally open throughout runtime makes it all the more challenging. As a former language designer, I have a hard time imagining how you would automatically optimize such fluid data structures at runtime.

I suspect the dynamic nature means you have to keep more data structures, and that they need to be larger, but it’s still pretty much known techniques in computer science.
You mentioned page faulting, but it’s even more important (especially on multiprocessors) not to miss L1 or L2 caches or mispredict branches either. If you’re writing C++, you have control over this, but not in Ruby.

Yes, you’ve given this task to the run time environment. At least one poster claims, and I have no reason to doubt him, that the Sun JVM is smart enough to do this kind of thing, though I don’t recall this specific task being given as one that it does in fact do. If the Sun JVM can do it, a Ruby interpreter should be able to do it as well.

The more I work with Ruby, the more I find myself metaprogramming almost everything I do. This seems to put such a burden on Ruby’s runtime that I’m looking for simpler and more automatic ways to run Ruby objects in automatically-distributed containers, to minimize the working sets. The problem is worth solving because the productivity upside is just so attractive.
I’m not sure what you mean here, both in terms of “objects in automatically-distributed containers” and “productivity upside”. Are you looking for something like “lightweight processes/threads” or what is known as “tasks” in classic FORTH? Little chunks of code, sort of like an interrupt service routine, that do a little bit of work, stick some results somewhere and then give up the processor to some “master scheduler”?

I don’t know Ruby well enough to figure out how to do that sort of thing. Then again, if I wanted to write something that was an ideal FORTH application, I’d probably write it in FORTH. :)

In any event, I’m working on a Ruby project in my spare time, and I can certainly dig into the workings of the Ruby run-time if I find that it’s too slow. The application area is matrix calculation for the most part, so I expect “mathn”, “rational”, “complex” and “matrix” are going to be the bottlenecks. I suspect the places where Ruby needs to be tuned underneath will stick out like the proverbial sore thumbs for the kind of application I have in mind.


M. Edward (Ed) Borasky

On 7/2/06, M. Edward (Ed) Borasky [email protected] wrote:

…could be either some fundamental constraint of the language semantics (which I doubt) or an optimization opportunity in the run-time to deal more efficiently with the semantics of the language.

If the opportunity is there, why hasn’t someone seen it yet? I’ll take even incremental improvements, but it seems unlikely that something really major has been missed.

Ah … now searching is something we can optimize!

Searching can be improved but even so, it’s a lot of work to do at runtime. Languages that treat method dispatch as lookups into indexed tables have a big edge. Even Python does this.

If the Sun JVM can do it, a Ruby interpreter should be able to do it as well.

No knock against Sun’s engineers, some of the sharpest folks in the business. But that poster was referring to the Solaris/Sparc JVM, which in my experience is perhaps the least well-executed JVM around. Ugh. There’s a limit to the amount of server RAM I’m willing to buy, power, and cool just to run bloatware.

I’m not sure what you mean here, both in terms of “objects in automatically-distributed containers” and “productivity-upside”. Are you looking for something like “lightweight processes/threads” or what is known as “tasks” in classic FORTH?

Nothing like that. I’m trying to make my life easier, not harder ;-). The main reason I’m attracted to Ruby is the promise of developing a lot more of the unbelievably large amount of code that has to get written while reducing the critical dependency on high-quality personnel, which is a highly-constrained resource in uncertain supply. (I’m eliding some of my thinking here of course, so you may well challenge that statement.)

I’d like to run plain old Ruby objects in container processes that know how to distribute loads adaptively, keep individual process sizes small, and interoperate with objects written in other languages. In general I work with applications that require extremely high throughputs but can tolerate relatively large latencies (milliseconds as opposed to microseconds) as long as all the system resources are fully utilized. I want to take advantage of the coming multiprocessor hardware, but I don’t want to do it with multithreading (life’s too short for that).

The application area is matrix calculation for the most part,

That sounds like the kind of thing I would use Ruby only to prototype. But you’ve been around the block a few times, so who am I to say? ;)

Francis C. wrote:

…it seems unlikely that something really major has been missed.

As far as I know, at least since I’ve been reading this list, you’re the first person to come up with a “clue”, in the form of noticing that big working sets were slower than small ones. That’s something a performance engineer can take his or her measuring and analysis tools and do something with, unlike “it’s slower than Python”. :)
Ah … now searching is something we can optimize!

Searching can be improved but even so, it’s a lot of work to do at runtime. Languages that treat method dispatch as lookups into indexed tables have a big edge. Even Python does this.
Is that language-specific or interpreter-specific? I don’t know much
about Ruby and I know even less about Python. Does Python build the
tables once at “compile time”, or is it dynamic enough to require table
rebuilds at run time?
No knock against Sun’s engineers, some of the sharpest folks in the business. But that poster was referring to the Solaris/Sparc JVM, which in my experience is perhaps the least well-executed JVM around. Ugh. There’s a limit to the amount of server RAM I’m willing to buy, power, and cool just to run bloatware.

Ah, but there’s also a limit to how many developer-hours you’re willing to buy as well, so you’ve chosen to use Ruby rather than C. :) Of course, C is pretty much as fast as it’s going to get, but the Ruby run-time is probably “laden with low-hanging fruit”.
Nothing like that. I’m trying to make my life easier, not harder ;-). The main reason I’m attracted to Ruby is the promise of developing a lot more of the unbelievably large amount of code that has to get written while reducing the critical dependency on high-quality personnel, which is a highly-constrained resource in uncertain supply. (I’m eliding some of my thinking here of course, so you may well challenge that statement.)

Developing “a large amount of code?” Why is there so much code required? Are there a lot of detailed special cases that can’t be made into data?

…I want to take advantage of the coming multiprocessor hardware, but I don’t want to do it with multithreading (life’s too short for that).

Boy, you sure don’t ask for much! :) But … hang on for a moment … let me type a few magic words in a terminal window:

$ cd ~/PDFs/Pragmatic
$ acroread Pickaxe.pdf

Searching for “Rinda”, we find, on page 706:

“Library Rinda … Tuplespace Implementation

“Tuplespaces are a distributed blackboard system. Processes may add tuples to the blackboard, and other processes may remove tuples from the blackboard that match a certain pattern. Originally presented by David Gelernter, tuplespaces offer an interesting scheme for distributed cooperation among heterogeneous processes.

“Rinda, the Ruby implementation of tuplespaces, offers some interesting additions to the concept. In particular, the Rinda implementation uses the === operator to match tuples. This means that tuples may be matched using regular expressions, the classes of their elements, as well as the element values.”

The application area is matrix calculation for the most part,

That sounds like the kind of thing I would use Ruby only to prototype. But you’ve been around the block a few times, so who am I to say? ;)

Yes … the “natural” implementation of it is using Axiom for the prototyping and R as the execution engine. But I want to use Ruby for both, as a learning exercise and as a benchmark of Ruby’s math capabilities. I know it’s going to be slow – matrices are stored as arrays and the built-in LU decomposition is done using “BigDecimal” operations, for example.

It will be interesting – to me, anyhow – to see just how much slower the built-in Ruby LU decomposition is than the same process using the ATLAS library, which contains assembly language kernels. Think of ATLAS as a “virtual machine” for numerical linear algebra. :) In the end, I’ll wish I had used ATLAS right from the beginning. But maybe Ruby will get better in the process.

There are other things this application needs to do besides number crunching and symbolic math. It has to have a GUI, draw various graphs, both node-edge type and 2d/3d plot type, and handle objects large enough that they probably will end up in a PostgreSQL database. Curiously enough, all of these exist as add-on packages in the R library already. :) But I think Ruby is more “natural” for those pieces of the application, and I can always “shell out to R” if I need to.


M. Edward (Ed) Borasky

On 7/2/06, M. Edward (Ed) Borasky [email protected] wrote:

There’s a limit to the amount of server RAM I’m willing to buy, power, and cool just to run bloatware.

Ah, but there’s also a limit to how many developer-hours you’re willing to buy as well, so you’ve chosen to use Ruby rather than C. :) Of course, C is pretty much as fast as it’s going to get, but the Ruby run-time is probably “laden with low-hanging fruit”.
You’ve made several interesting points here in just two sentences. Part of my business is building appliances, and another part is running them in farms. I’m getting really aware of the marginal costs of production computing. Pure machine cycles are still getting cheaper (not nearly as fast as they once did) but the ancillary costs of running a cycle are NOT getting cheaper. This is starting to have an effect on the standard calculus and it’s no longer unambiguously true that “iron is cheaper than programmers.”

That’s a big discussion on its own, but then you go on to make your point about the Ruby runtime. I’ve expressed the intuition (it’s nothing more than that) that re-basing Ruby on a VM will not solve much of the real-world performance problems we experience with Ruby. If I’m right and you’re right, then we all may indeed be missing some opportunities just by looking in the wrong place.

Developing “a large amount of code?” Why is there so much code required? Are there a lot of detailed special cases that can’t be made into data?

No, I just meant the business needs for software are enormous and getting larger every day. Every extra day it takes you to come to market (and I’m talking about internal applications, not just commercial ones) is forgone business value. We have so much software we need to write in my little company that I’m obsessing over finding much better ways to do it. (I know this is not a new problem. That doesn’t mean it’s not an urgent one.)

Boy, you sure don’t ask for much! :)

If you don’t ask for much, you won’t get it ;-)

Rinda/Linda: yes, but. I’ve been working with Linda spaces and actor spaces for over ten years now. It’s a high-level abstraction which is interesting and powerful, but we need a few steps in between before it (or something based on it, or inspired by it) can be used for large-scale development.

Your application: you’ve neatly described a split between things that belong in Ruby and things (the raw matrix calculations) that belong in something else. It’s real nice that Ruby makes this easy to do.