Alexis R. wrote:
one is a relatively low-level language, compiled to native machine code,
factor of over 80 times. Besides, I implemented the same code in Java
too, which isn’t native code either and also runs in a virtual machine,
and it executed in about the same time as C++.
Most modern Java implementations (on full computers, not PDAs and the
like) are /not/ interpreted. The virtual machine compiles the bytecode
into native machine code on the fly (just-in-time compilation), so hot
code ends up running at native speed.
Furthermore, even when interpreted, Java has typed variables. A Java int
is always a 32-bit 2’s-complement integer. “i = j + k;”, where each of
i, j, and k is an int, is a simple operation involving about three
instructions in either the Java Virtual Machine or the real machine. A
Ruby variable could be an integer, a big-integer, a floating-point
number, a character string, or even something to which “+” doesn’t
apply, and, every time an expression is evaluated, all of that has to
be resolved at run time. The flexibility this gives you is
considerable. But it comes at a price. If the bottleneck in the
program is the speed of your disk, or of your IP connection, that price
probably doesn’t matter. But if you’re doing substantial calculations in
RAM, the flexibility may not be worth the cost.
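To make the contrast concrete, here is a small Ruby sketch (my own illustration, not from the original post) of how the one expression "i = j + k" means different things depending on the run-time classes of j and k — exactly the dispatch work a Java int addition never has to do:

```ruby
# The same source expression, i = j + k, resolved at run time.
j, k = 2, 3
i = j + k                # small-integer addition
puts i                   # => 5

j, k = 2**100, 3**50     # silently promoted to arbitrary precision
i = j + k                # big-integer addition, no overflow
puts i

j, k = "foo", "bar"
i = j + k                # the same '+' is now string concatenation
puts i                   # => "foobar"

begin
  i = "foo" + 3          # and here '+' doesn't apply at all
rescue TypeError => e
  puts "TypeError: #{e.message}"
end
```

Every one of those outcomes is decided only when the line executes, which is the per-operation overhead the paragraph above is describing.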
You can’t always generalize, though. Ruby is faster than Java at finding
perfect numbers (probably because Ruby’s implementation of big integers
is faster than Java’s), and both are considerably faster than Perl
(probably because Perl forces /all/ numbers in a calculation to be big
integers as soon as any one is). And GNU Common LISP is faster than
Ruby.
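For reference, a naive Ruby version of a perfect-number search of the kind described above (a sketch of my own, assuming Ruby 2.5+ for Integer.sqrt — not necessarily the code the poster benchmarked):

```ruby
# A number is perfect when it equals the sum of its proper divisors.
def perfect?(n)
  return false if n < 2
  sum = 1                       # 1 divides everything
  (2..Integer.sqrt(n)).each do |d|
    if n % d == 0
      sum += d
      sum += n / d unless d == n / d   # count the paired divisor once
    end
  end
  sum == n
end

puts (2..10_000).select { |n| perfect?(n) }.inspect
# => [6, 28, 496, 8128]
```

Ruby’s transparent promotion to big integers means the same method keeps working unchanged as n grows past the machine-word range, which is where the relative speed of each language’s big-integer implementation starts to dominate.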