Ruby Vs. Java

Chad P. wrote:

I suspect you mean C++. I don’t recall seeing much in the way of
evidence that Java had quite gotten as fast as C – which is still about
twice as fast as C++ for many purposes.

C++ is only grossly slower than C in one situation: Someone used C++ as a
Very High Level Language, to write a big object model.

For number crunching, C++ Template Metaprogramming can compete with C,
Assembler, and even Fortran.

And, as always, nothing beats simply picking the correct algorithm,
first…
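
For instance (a rough sketch in Ruby, not a careful benchmark), the same
membership test is dominated by the choice of data structure, not by raw
interpreter speed:

require 'set'
require 'benchmark'

# Illustrative only: look up 10,000 random values, first by scanning an
# Array (linear), then via a hash-based Set (near constant time). The
# exact numbers depend on the interpreter and machine.
values  = (1..50_000).to_a
lookups = Array.new(10_000) { rand(100_000) }

Benchmark.bm(6) do |bm|
  bm.report("Array") { lookups.each { |v| values.include?(v) } }
  bm.report("Set")   { set = Set.new(values); lookups.each { |v| set.include?(v) } }
end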

If you want to understand how Ruby can be made to run fast, I recommend
reading about the Strongtalk VM and Urs Hölzle’s thesis,
“Adaptive Optimization for Self: Reconciling High
Performance with Exploratory Programming”,
available on the web as
http://www.cs.ucsb.edu/labs/oocsb/papers/urs-thesis.html
http://www.cs.ucsb.edu/labs/oocsb/papers/hoelzle-thesis.pdf
http://citeseer.ist.psu.edu/rd/38028164%2C51308%2C1%2C0.25%2CDownload/http://citeseer.ist.psu.edu/cache/papers/cs/3280/http:zSzzSzwww.sunlabs.comzSzresearchzSzselfzSzpaperszSzhoelzle-thesis.pdf/hlzle94adaptive.pdf

Robert

On Aug 26, 7:56 am, “Phlip” [email protected] wrote:
-snip-

… and use
Lua. It’s harder to program than Ruby but much easier than Java, and its
speed can compete with C.

Compete and lose.

“Lua is a tiny and simple language, partly because it does not try to
do what C is already good for, such as sheer performance, low-level
operations, or interface with third-party software. Lua relies on C
for those tasks.”

Preface xiii, Programming in Lua

On Aug 28, 5:59 pm, Charles Oliver N. [email protected]
wrote:

… Server VM)
that executes consistently faster than Ruby 1.8 and in some cases faster
than Ruby 1.9. In general, the difficult task has been structuring the
bytecode and the call pipeline in such a way as to allow HotSpot to do
its optimization.

This also means that Java and JRuby and similar adaptive optimizing
runtimes require some “warm-up time”. Java code will get faster as it
executes, but for short benchmarks it will usually be much slower than
its full potential. The same applies to JRuby, and as a result JRuby
will be better for longer-running processes (unless, of course, you
don’t mind it being a little slow early on).

I’m a bit confused about what you might mean; help me understand.

Do you mean small benchmark programs will be “much slower” when run
once rather than run 100 times? How much slower - 0.1x, 10x, 1000x?

Do you mean we should not assume small program performance is a
reasonable estimate of large program performance?

Incidentally, I don’t think “warm-up time” works as a description of
adaptive optimization - it makes it sound like a one-time thing,
rather than continual profiling, decompilation, and recompilation
adapting to the current hotspot.
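
For instance, here is a toy Ruby sketch of a profile that shifts mid-run
(what the VM does with it internally is my assumption; the script only
creates the shift):

# Two phases, and the method that is "hot" changes between them. An
# adaptive VM keeps profiling, so it can concentrate on count_up during
# phase 1 and then shift its attention to count_down during phase 2.
def count_up
  i = 0
  i += 1 while i < 5_000_000
end

def count_down
  i = 5_000_000
  i -= 1 while i > 0
end

10.times { count_up }     # phase 1: count_up is the hotspot
10.times { count_down }   # phase 2: the hotspot moves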

Isaac G. wrote:

I’m a bit confused about what you might mean; help me understand.

Absolutely!

Do you mean small benchmark programs will be “much slower” when run
once rather than run 100 times? How much slower - 0.1x, 10x, 1000x?

A description of JRuby internals will help here.

JRuby starts running almost all code in interpreted mode right now. This
is partially because the compiler is incomplete, and can’t handle all
syntax, but also partially because parsing + compilation + classloading
costs more than just parsing, frequently so much more that performance
gains during execution are outweighed.

So JRuby currently has the bytecode compiler in JIT mode. As methods are
called, the number of invocations is recorded. After a certain
threshold, they are compiled. We do not do any adaptive optimization in
the compiler at present, though we do a few ahead-of-time optimizations
by inspecting the AST. Compiled code does not (with very few exceptions)
ever deoptimize.
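
To picture the trigger, here is a toy Ruby sketch of that counting scheme
(the threshold number and the bookkeeping are invented for illustration;
the real machinery is internal to JRuby):

# Toy model only: each method starts out interpreted, its calls are
# counted, and once the count crosses a threshold it is handed to the
# compiler. JRuby's actual threshold is a tunable setting, not this
# constant.
JIT_THRESHOLD = 50

class ToyMethod
  def initialize(name, &body)
    @name, @body, @calls, @compiled = name, body, 0, false
  end

  def call(*args)
    @calls += 1
    if !@compiled && @calls >= JIT_THRESHOLD
      @compiled = true                        # stand-in for "emit bytecode"
      puts "#{@name}: compiled after #{@calls} calls"
    end
    @body.call(*args)                         # same result either way
  end
end

m = ToyMethod.new("looper") { |n| (1..n).inject(0) { |sum, i| sum + i } }
100.times { m.call(10) }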

Because of the JRuby JIT, we must balance our compilation triggers with
the JVM’s. Ideally, we get things compiled quickly enough for HotSpot to
take over and make a big difference without compiling too many methods
or compiling them too frequently and having a negative impact on
performance.

Do you mean we should not assume small program performance is a
reasonable estimate of large program performance?

It depends how small. For example, if the top-level of a script includes
a while loop of a million iterations, it will not be indicative of an
app that has such loops in methods, as part of an object model, and so
on, because that top-level may not ever compile to bytecode (since it’s
only called once) or may only execute once and never be JITed by the
JVM. Soon, when the compiler is complete, we could theoretically compile
scripts on load, but it remains to be seen if that will incur more
overhead than it is worth. And it still wouldn’t solve the problem of
long-running methods or scripts that are only invoked once.

As an example, try running the two following scripts in JRuby and
comparing the results:

SCRIPT1:

t = Time.now
i = 0
while i < 10_000_000
  i += 1
end
puts Time.now - t

t = Time.now
i = 0
while i < 10_000_000
  i += 1
end
puts Time.now - t

t = Time.now
i = 0
while i < 10_000_000
  i += 1
end
puts Time.now - t

t = Time.now
i = 0
while i < 10_000_000
  i += 1
end
puts Time.now - t

t = Time.now
i = 0
while i < 10_000_000
  i += 1
end
puts Time.now - t

SCRIPT2:

def looper
  i = 0
  while i < 10_000_000
    i += 1
  end
end

5.times {
  t = Time.now
  looper
  puts Time.now - t
}

My results:

SCRIPT1:
9.389
9.194
9.207
9.198
9.191

SCRIPT2:
9.128
9.012
2.001
1.822
1.823

This is fairly typical. And this should also be of interest to you for
the alioth shootout benchmarks; simply re-running the same script in a
loop will not allow JRuby or HotSpot to really get moving, since each
run through the script will define new classes and new methods that
must “warm up” again. You must leave the methods defined and re-run only
the work portion of the benchmark.
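
In other words, shape the harness roughly like this (the warm-up and
iteration counts here are arbitrary):

# Define the work once, time repeated calls to the same method, and
# treat the first couple of runs as warm-up rather than as results.
def work
  i = 0
  while i < 10_000_000
    i += 1
  end
end

2.times { work }            # warm-up runs, not reported

5.times do
  t = Time.now
  work
  puts Time.now - t         # only these timings count
end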

Incidentally, I don’t think “warm-up time” works as a description of
adaptive optimization - it makes it sound like a one-time thing,
rather than continual profiling, decompilation, and recompilation
adapting to the current hotspot.

In our case, it’s a bit of both. There’s some warm-up time for JRuby to
compile to bytecode, and then there’s the adaptive optimization of
HotSpot which is a bit of a black box to us. We are working to reduce
JRuby’s warm up time to get HotSpot in the picture sooner.

  • Charlie

On Aug 29, 11:53 am, Charles Oliver N. [email protected]
wrote:

In our case, it’s a bit of both. There’s some warm-up time for JRuby to
compile to bytecode, and then there’s the adaptive optimization of
HotSpot which is a bit of a black box to us. We are working to reduce
JRuby’s warm up time to get HotSpot in the picture sooner.

  • Charlie

Now my impression is that I misunderstood how different JRuby’s
characteristics are from Java’s. I suspect that whenever you mention
“adaptive optimization” or HotSpot, I’ll get confused about what JRuby
is actually doing (and in any case JRuby is going to be doing things
differently in the future).

You should know better than to post code snippet timings :)
What if you have:

t = Time.now
looper
puts Time.now - t
t = Time.now
looper
puts Time.now - t


As for the alioth shootout / benchmarks game, we don’t re-run in a
loop because Clean and Haskell compilers notice that once around the
loop is enough to get the answer :)

(When we poked at the JVM in the FAQ, the “started once” examples are just
like your SCRIPT2 - main renamed to test, and called within a timing
loop.)

Isaac G. wrote:

t = Time.now
looper
puts Time.now - t
t = Time.now
looper
puts Time.now - t


This would produce results roughly like the second version. The key
point is to have the same physical code called multiple times.

As for the alioth shootout / benchmarks game, we don’t re-run in a
loop because Clean and Haskell compilers notice that once around the
loop is enough to get the answer :)

(When we poked at the JVM in the FAQ, the “started once” examples are just
like your SCRIPT2 - main renamed to test, and called within a timing
loop.)

That sounds good then. I’ll be looking forward to the next run against
JRuby 1.1 when we get it finished :)

  • Charlie

On Thu, Aug 30, 2007 at 03:20:11AM +0900, r wrote:

As for Ruby and web apps–it’s 9 months later…had the training got

cd /usr/local/Zend/htdocs

shudder