2006/7/2, M. Edward (Ed) Borasky [email protected]:
> disposal.
> Real machines are pretty smart too, at least the ones from
> Intel are.
True, they do quite a lot of smart things nowadays. I feel, however,
that optimizing at a higher level of abstraction can yield bigger
improvements (e.g. removing an operation from a loop entirely vs. just
making that operation as fast as possible).
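A toy Ruby example of what I mean (method names made up for
illustration):

  # Naive version: a Regexp object is built on every iteration.
  def count_errors_slow(lines)
    n = 0
    lines.each { |line| n += 1 if line =~ Regexp.new("ERROR") }
    n
  end

  # Higher-level optimization: hoist the invariant work out of the loop.
  def count_errors_fast(lines)
    pattern = Regexp.new("ERROR")   # built once
    n = 0
    lines.each { |line| n += 1 if line =~ pattern }
    n
  end

No amount of making Regexp.new itself faster can beat not calling it N
times in the first place.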
> The point of my comment was the emphasis on statistical
> properties of applications. Since this is the area I’ve spent quite a
> bit of time in, it’s a more natural approach to me than, say, the
> niceties of discrete math required to design an optimizing compiler or
> interpreter.
Well, VMs also use statistical data - but it’s derived from a
different set of data points.
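A deliberately naive Ruby sketch of the kind of statistic I mean (all
names are invented): count how often a piece of code runs and swap in a
specialized implementation once it turns out to be hot - something a
static compiler cannot know in advance:

  # Toy adaptive dispatcher: run the generic implementation until the
  # call count shows this spot is "hot", then switch to a specialized one.
  class AdaptiveCall
    THRESHOLD = 1_000

    def initialize(generic, specialized)
      @generic, @specialized = generic, specialized
      @calls = 0
    end

    def call(*args)
      @calls += 1
      impl = @calls > THRESHOLD ? @specialized : @generic
      impl.call(*args)
    end
  end

Real VMs collect far richer profiles (types, branch frequencies, call
targets), but the principle is the same.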
> In the end, most of the “interesting” discrete math problems in
> optimization are either totally unsolvable or NP complete, and you end
> up making statistical / probabilistic compromises anyhow. You end up
> solving problems you can solve for people who behave reasonably
> rationally, and you try to design your hardware, OS, compilers,
> interpreters and languages so rational behavior is rewarded with
> satisfactory performance, not necessarily optimal performance. And you
> try to design so that irrational behavior is detected and prevented from
> injuring the rational people.
Agree.
> Don’t get me wrong, the Sun Intel x86 JVM is a marvelous piece of
> software engineering. Considering how many person-years of tweaking it’s
> had, that’s not surprising. But the original goal of Java and the
> reason for using a VM was “write once, run anywhere”. “Anywhere” no
> longer includes the Alpha, and may have never included the MIPS or
> HP-PARISC. IIRC “anywhere” no longer includes MacOS. And since I’ve
> never tested it, I don’t know for a fact that the Solaris/SPARC version
> of the JVM is as highly tuned as the Intel one.
I once had a link to an article from Sun’s own developers in which they
admitted that their Solaris JVM compares rather badly with the
Windows version… Unfortunately I cannot dig it up at the moment.
> To bring this back to Ruby, my recommendations stand:
We’re probably closer together than it seemed:
> - Focus on building a smart(er) interpreter rather than an extra
> virtual machine layer.
I don’t care what it’s called or whether it uses bytecode or whatnot.
My basic point was that a runtime environment (aka VM, aka interpreter)
is a good architecture because it provides better options for runtime
optimization.
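One concrete illustration (a toy Ruby sketch of an inline cache; the
class and method names are mine): the runtime remembers the receiver’s
class at a call site and skips the full method lookup as long as that
class stays stable:

  # Toy inline cache: remember which class was last seen at a call site
  # and reuse the looked-up method until a different class shows up.
  class InlineCache
    def initialize(selector)
      @selector = selector
      @cached_class = nil
      @cached_method = nil
    end

    def call(receiver, *args)
      if receiver.class != @cached_class
        # Cache miss: do the (slow) lookup once and remember the result.
        @cached_class  = receiver.class
        @cached_method = @cached_class.instance_method(@selector)
      end
      @cached_method.bind(receiver).call(*args)
    end
  end

  cache = InlineCache.new(:upcase)
  cache.call("foo")   # miss: looks up String#upcase
  cache.call("bar")   # hit: reuses the cached lookup

A real VM does this below the language level, of course, but the point
stands: the information this optimization needs only exists at runtime.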
> - Focus optimizations on the Intel x86 and x86-64 architectures for the
> “community” projects. Leverage off of GCC for all platforms; i.e.,
> don’t use Microsoft’s compilers on Windows.
I can’t comment on MS compilers vs. GCC - all I’ve heard in the past is
that some compilers yield better performance characteristics than
others, so the platform’s native compiler seems to have an edge there.
> And don’t be afraid of a
> little assembler code. It works for Linux, it works for ATLAS
> (Automatically Tuned Linear Algebra Software) and I suspect there’s
> some in the Sun JVM.
Yes.
> - Focus on Windows, Linux and MacOS for complete Ruby environments for
> the “community” projects.
Sounds reasonable. For more server-oriented apps Solaris might be an
option, too. But I have the feeling that it’s on the decline…
Kind regards
robert