Comparison of Ruby, Python, PHP, Groovy, etc.

A comparison of scripting languages on a fractal geometry benchmark; these
are the languages I tested:

Java
Lua 5.1.4
PHP 5.3.0
Python 2.6.2
Python 3.1.1
Jython 2.5.0
Groovy 1.6.3
JRuby 1.3.1
Ruby 1.9.1 p129
Ruby 1.8.6 p368
Ruby 1.8.6 p111
IronRuby 0.9.0
IronPython 2.0.2
Perl 5.10.0

Let me know your comments.

http://mastrodonato.info/index.php/2009/08/comparison-script-languages-for-the-fractal-geometry/?lang=en
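For context, the benchmark in question is the classic ASCII Mandelbrot renderer. Here is a minimal Ruby sketch reconstructed from the Lisp and inlined-C versions that appear later in this thread; the exact scripts used in the published comparison may differ in details:

```ruby
BAILOUT = 16
MAX_ITERATIONS = 1000

# Returns the iteration count at which the point escapes the bailout
# radius, or 0 if it stays bounded for MAX_ITERATIONS iterations.
def iterate(x, y)
  cr = y - 0.5
  ci = x
  zr = zi = 0.0
  i = 0
  while i < MAX_ITERATIONS
    i += 1
    temp = zr * zi
    zr2 = zr * zr
    zi2 = zi * zi
    zr = zr2 - zi2 + cr
    zi = temp + temp + ci
    return i if zi2 + zr2 > BAILOUT
  end
  0
end

puts "Rendering..."
(-39..39).each do |y|
  (-39..39).each do |x|
    print iterate(x / 40.0, y / 40.0).zero? ? "*" : " "
  end
  print "\n"
end
```

The interesting part for a benchmark is the tight inner loop of floating-point multiplications, which is exactly where interpreter overhead shows up.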

Are those executables compiled with identical compilers + compile flags?

Two things I notice.

It seems Ruby 1.9 indeed managed to keep up with the Python versions
better than the older Ruby versions did. And it seems to be (almost) as
fast as Perl on that test.

The other thing, which is very strange, is that the IronRuby
implementation is significantly slower than IronPython, whereas the gap
between the other implementations isn't nearly as large. What is wrong here?

IronRuby 0.9.0     6.038 s   39x
IronPython 2.0.2   0.978 s    6x

vs

Ruby 1.9.1 p129    2.688 s   18x
Python 3.1.1       1.566 s   10x

Urabe S. [email protected] writes:

Are those executables compiled with identical compilers + compile flags?

The question is understandable, but these implementations might very
well be written in different languages, so it doesn't really matter.
We can assume that they are, as much as possible, if they come from a
common distribution.

Groovy 1.6.3
http://mastrodonato.info/index.php/2009/08/comparison-script-languages-for-the-fractal-geometry/?lang=en
I added a comment to the web site, but I'm not sure it was taken into
account (I didn't get the same feedback as for a second, shorter
comment). So here it is again:

For completeness, could you please try Common Lisp too?

You could use sbcl 1.0.29 (MS-Windows port in progress) at:
http://prdownloads.sourceforge.net/sbcl/sbcl-1.0.29-x86-windows-binary.msi

-------(bench1.lisp)----------------------------------------------------
(declaim (optimize (speed 3) (space 2) (debug 0) (safety 0)))
(declaim (ftype (function (single-float single-float) fixnum) iterate))

(defparameter bailout 16.0)
(defparameter max-iterations 1000)

(defun bench1 ()
  (format t "Rendering...~%") (force-output)
  (loop :for y fixnum :from -39 :to 39 :do
     (terpri)
     (loop :for x fixnum :from -39 :to 39 :do
        (princ (if (zerop (iterate (the single-float (/ x 40.0))
                                   (the single-float (/ y 40.0))))
                   "*"
                   " "))))
  (finish-output))

(defun iterate (x y)
  (declare (single-float x y))
  (loop
     :with cr single-float = (- y 0.5)
     :with ci single-float = x
     :with zi single-float = 0.0
     :with zr single-float = 0.0
     :for i fixnum :from 0 :below max-iterations
     :do (let ((temp (* zr zi))
               (zr2 (* zr zr))
               (zi2 (* zi zi)))
           (declare (single-float temp zr2 zi2))
           (setf zr (+ (- zr2 zi2) cr)
                 zi (+ temp temp ci))
           (when (< (the single-float bailout)
                    (the single-float (+ zi2 zr2)))
             (return-from iterate i)))
     :finally (return-from iterate 0)))

(time (bench1))

Run the test with:

sbcl --no-userinit --eval '(load (compile-file "bench1.lisp"))' --eval '(quit)'

Pascal J. Bourguignon wrote:

Urabe S. [email protected] writes:

Are those executables compiled with identical compilers + compile flags?

The question is understandable, but these implementations might very
well be written in different languages, so it doesn't really matter.
We can assume that they are, as much as possible, if they come from a
common distribution.

It does matter very much. At least, identical compilers should be used for
each of the underlying languages the implementations are written in. The
report says the test was run on Windows XP, so I suspect there is no such
thing as "a common distribution" there.

Urabe S. wrote:

Are those executables compiled with identical compilers + compile flags?

Ruby p111 is the mswin32 build (one-click installer); the others are
mingw32 builds downloaded from Downloads.
The Iron and Java versions, Python, Groovy, PHP … were downloaded from
their main sites. The Perl exe is from the Strawberry Perl installation.
Java was compiled with NetBeans.

Marc H. wrote:

Two things I notice.

It seems Ruby 1.9 indeed managed to keep up with the Python versions
better than the older Ruby versions did. And it seems to be (almost) as
fast as Perl on that test.

The other thing, which is very strange, is that the IronRuby
implementation is significantly slower than IronPython, whereas the gap
between the other implementations isn't nearly as large. What is wrong here?

  1. Python is still faster, but version 1.9.1 catches up very well with
     good results and even beats Perl; its script was heavily optimized to
     get that result: 2.7 s against 4 s for the unoptimized version.

  2. I don't think there is any correlation between these versions: IronRuby,
     IronPython, Ruby, and Python are different projects with different
     development.

Pascal J. Bourguignon wrote:

I added a comment to the web site, but I'm not sure it was taken into
account (I didn't get the same feedback as for a second, shorter
comment). So here it is again:

For completeness, could you please try Common Lisp too?

I only got the second comment. Anyway, I'll add Lisp ASAP, and thanks for
your work.

Lua 5.1.4

Does this make use of LuaJIT [1]?

BTW, I'm not so sure that the type declarations in Groovy make the code
run faster. IIRC, with older versions they simply introduced type checks
that had the adverse effect of slowing it down.

[1] http://luajit.org

Urabe S. wrote:

So the next thing you should do is recompile everything yourself, to get a
uniform compilation environment among them. (cut …)

Yes, I could do that, but it would be a different comparison. I don't think
there are many people who compile from source themselves on Windows.

Marco M. wrote:

Urabe S. wrote:

Are those executables compiled with identical compilers + compile flags?

Ruby p111 is the mswin32 build (one-click installer); the others are
mingw32 builds downloaded from Downloads.
The Iron and Java versions, Python, Groovy, PHP … were downloaded from
their main sites. The Perl exe is from the Strawberry Perl installation.
Java was compiled with NetBeans.

So the next thing you should do is recompile everything yourself, to get a
uniform compilation environment among them. Fairness is the most essential
part when you want to take people on an emotional yo-yo with a benchmark
like that.

And luckily, all the nominated implementations are open source.

On Mon, Aug 24, 2009 at 8:47 AM, Marco
Mastrodonato[email protected] wrote:

Urabe S. wrote:

So the next thing you should do is recompile everything yourself, to get a
uniform compilation environment among them. (cut …)

Yes, I could do that, but it would be a different comparison. I don't think
there are many people who compile from source themselves on Windows.

Another thing to consider when running the benchmarks is to not write
the program output to the console (e.g. ./program > /dev/null or
program.exe > NUL). Presumably the intention is to compare the speed of
the implementations calculating the result, not to test how fast the
desktop environment can display it. While it may not make a significant
difference to the results in all cases, it is best to remove that
variable from the test.

For example, in one case I saw a benchmark that claimed something to
the effect of “look BRANDX is not slow: this program in BRANDX is only
slightly slower than C!”. But, on closer examination it took the C
program less than 1 second to calculate the result and it took the
BRANDX program a few seconds to calculate the result; but, in both
cases, it took cmd.exe many seconds to display the result.

Using your Java program* (10 runs):
Min. 1st Qu. Median Mean 3rd Qu. Max. Std-dev
158.0 178.8 196.5 195.6 205.5 249.0 27.37882

and, sending the output to /dev/null:
Min. 1st Qu. Median Mean 3rd Qu. Max. Std-dev
106.0 108.2 109.0 109.0 110.0 112.0 1.699673

[*] Modified to send the time output to standard error
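Shell redirection aside, another way to take console speed out of the measurement is to build the whole frame in memory, time only the computation, and write it out in a single call afterwards. A minimal Ruby sketch of that pattern follows; the inner condition is a placeholder standing in for the real iterate() so the sketch stays self-contained:

```ruby
require 'benchmark'

# Build the whole frame in memory. The circle test below is only a
# placeholder for the benchmark's real per-point computation.
def render_frame
  out = +""
  (-39..39).each do |y|
    (-39..39).each do |x|
      out << ((x * x + y * y < 1000) ? "*" : " ")
    end
    out << "\n"
  end
  out
end

frame = nil
elapsed = Benchmark.realtime { frame = render_frame }
$stdout.write(frame)                  # I/O happens outside the timing
STDERR.puts "Elapsed %.3f" % elapsed
```

This keeps the interactive feel of watching the frame appear while guaranteeing the reported time measures computation only.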

unknown wrote:

Another thing to consider when running the benchmarks is to not write
the program output to the console

unknown, thanks for your advice. I agree with you; I didn't mean to test
the stdout write speed. Honestly, I tried writing into a variable and
sending it to output only at the end… I saw there was no difference (with
the Ruby script) and went back to the original, which is nicer because you
can immediately get a feel for the speed. In your test, instead, there is
a noticeable difference.

Marco M. wrote:

Urabe S. wrote:

So the next thing you should do is recompile everything yourself, to get a
uniform compilation environment among them. (cut …)

Yes, I could do that, but it would be a different comparison. I don't think
there are many people who compile from source themselves on Windows.

If you really need speed, recompiling is the easiest way to achieve it. Is
it really worth comparing those executables? Their "slowness" might come
from a bad compilation. I don't know about the other languages, but the
1.8.6-p111 versus 1.8.6-p368 case is (I believe) due to a difference in
their compilers. What is the point of that article, then? Are you really
comparing the languages, and not the compilers behind them?

Urabe S. wrote:

If you really need speed, recompiling is the easiest way to achieve it.

The aim is a comparison of the languages using the downloadable packages,
without the need to compile every interpreter. I could agree with you
about a "real comparison", but this is the practice of 99.5% of Windows
users.

@Pascal
Sorry, I'm having some trouble with Lisp. Do I have to use the txt format?
Or do I have to compile it first? Take a look:

C:\Lavoro\Progetti\Test\Bench\multilanguage>sbcl --no-userinit --eval
'(load (compile-file "bench1.lisp"))' --eval '(quit)'
This is SBCL 1.0.29, an implementation of ANSI Common Lisp.
More information about SBCL is available at http://www.sbcl.org/.

SBCL is free software, provided as is, with absolutely no warranty.
It is mostly in the public domain; some portions are provided under
BSD-style licenses. See the CREDITS and COPYING files in the
distribution for more information.

This is experimental prerelease support for the Windows platform: use
at your own risk. “Your Kitten of Death awaits!”

debugger invoked on a END-OF-FILE:
end of file on #<SB-IMPL::STRING-INPUT-STREAM {23B59BD1}>

Type HELP for debugger help, or (SB-EXT:QUIT) to exit from SBCL.

restarts (invokable by number or by possibly-abbreviated name):
0: [CONTINUE] Ignore runtime option --eval "'(load".
1: [ABORT ] Skip rest of --eval and --load options.
2: Skip to toplevel READ/EVAL/PRINT loop.
3: [QUIT ] Quit SBCL (calling #'QUIT, killing the process).

(SB-IMPL::STRING-INCH #<SB-IMPL::STRING-INPUT-STREAM {23B59BD1}> T NIL)
0]
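For what it's worth, the backtrace above looks like a shell-quoting problem rather than a Lisp one: the debugger reports receiving the literal option `--eval "'(load"`, which suggests cmd.exe passed the single quotes through instead of treating them as quoting characters. A possible workaround on Windows (an untested guess) is double quotes outside and escaped double quotes inside:

```shell
sbcl --no-userinit --eval "(load (compile-file \"bench1.lisp\"))" --eval "(quit)"
```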

Marco M. wrote:

Jruby 1.3.1
Ruby 1.9.1 p129
Ruby 1.8.6 p368
Ruby 1.8.6 p111
IronRuby 0.9.0
IronPython 2.0.2
Perl 5.10.0

Let me know yours comment

http://mastrodonato.info/index.php/2009/08/comparison-script-languages-for-the-fractal-geometry/?lang=en

A decent and honest attempt, but there are too many variables involved.
Even a quick look at the Perl code shows it could use a little fine-tuning
(it looks like v4 code in a lot of ways (from 15 years ago), improperly
using local, etc.), and there are wasteful assignments (not a big deal).
I'd recommend posting the code in the respective newsgroups, asking for
advice on how to speed it up or write it more efficiently, and seeing what
people come up with. Implementing essentially the same code the same way
in different languages may show a slower elapsed time, but if you recoded
it to take advantage of each language, you would perhaps see some (slight,
but important) differences in your results.

Don't get me wrong, I appreciate what you're doing here and it's a good
attempt; I just think there are too many variables. I haven't run Windows
for a long time, but I wouldn't be surprised if a *nix variant produced
different results (I know it does for me on Linux using a very comparable
system). For that matter, you might offer something in C and C++ to
compare to compiled Java, unless you believe those aren't viable
comparison languages for some reason? (If so, know that I and many others
develop online and offline using them, because they are faster, and
sometimes that's important with heavily trafficked sites where every tiny,
otherwise trivial thing makes a difference.)

On Tue, Aug 25, 2009 at 5:20 PM, Nathan Keel[email protected] wrote:

Groovy 1.6.3
http://mastrodonato.info/index.php/2009/08/comparison-script-languages-for-the-fractal-geometry/?lang=en

A decent and honest attempt, but there are too many variables involved.
… Don't get me wrong, I appreciate what you're doing here and it's a good
attempt; I just think there are too many variables…
You might offer something in C and C++ to compare to compiled Java, unless
you believe those aren't viable comparison languages for some reason?

OpenJDK Client VM (build 14.0-b08, mixed mode, sharing)

java Bench1 > /dev/null
Java Elapsed 0.08
Java Elapsed 0.079
Java Elapsed 0.079

ruby 1.9.2dev (2009-08-25 trunk 24642) [i686-linux]

ruby bench1.rb > /dev/null
Ruby Elapsed 3.515
Ruby Elapsed 3.352
Ruby Elapsed 3.523

jruby 1.3.0 (ruby 1.8.6p287) (2009-06-03 5dc2e22) (OpenJDK Client VM
1.6.0_0) [i386-java]

jruby bench1.rb > /dev/null
Ruby Elapsed 4.185
Ruby Elapsed 3.760
Ruby Elapsed 3.626

ruby 1.9.2dev (2009-08-25 trunk 24642) [i686-linux]

ruby -rubygems bench2.rb > /dev/null
Ruby Elapsed 0.059
Ruby Elapsed 0.058
Ruby Elapsed 0.060

jruby 1.3.0 (ruby 1.8.6p287) (2009-06-03 5dc2e22) (OpenJDK Client VM
1.6.0_0) [i386-java]

jruby -rubygems bench2.rb > /dev/null
Ruby Elapsed 0.409
Ruby Elapsed 0.410
Ruby Elapsed 0.412

require 'ffi-inliner'

BAILOUT = 16
MAX_ITERATIONS = 1000

class Bench2
  extend Inliner

  def initialize
    puts "Rendering..."
    for y in -39..39
      for x in -39..39
        print iterate(x / 40.0, y / 40.0) == 0 ? "*" : " "
      end
      print "\n"
    end
  end

  inline <<-EO
    int n;

    int iterate(double x, double y)
    {
      int i = 0;
      double zi = 0.0;
      double zr = 0.0;
      double zi2, zr2, temp;
      double ci = x;
      double cr = y - 0.5;
      while (i < #{MAX_ITERATIONS}) {
        i++;
        temp = zr * zi;
        zr2 = zr * zr;
        zi2 = zi * zi;
        zr = zr2 - zi2 + cr;
        zi = temp + temp + ci;
        if (zi2 + zr2 > #{BAILOUT}) return i;
      }
      return 0;
    }
  EO
end

time = Time.now
Bench2.new
STDERR.puts "Ruby Elapsed %.3f" % (Time.now - time)

In this case, the original Ruby version of the method looked so much
like C that I don’t think you lose much in readability by inlining
(except for loss of vim’s syntax highlighting).

A comparison of scripting languages on a fractal geometry benchmark; these
are the languages I tested:

http://shootout.alioth.debian.org/u32q/benchmark.php?test=mandelbrot

OpenJDK Client VM (build 14.0-b08, mixed mode, sharing)
java Bench1 > /dev/null

Java Elapsed 0.08
Java Elapsed 0.079
Java Elapsed 0.079

ruby 1.9.2dev (2009-08-25 trunk 24642) [i686-linux]
ruby -rubygems bench2.rb > /dev/null

Ruby Elapsed 0.059
Ruby Elapsed 0.058
Ruby Elapsed 0.060

I guess this wasn’t the first run when the inline code got compiled?

Anyway, since the runtime is so short, in the case of the java version
you’re to some extent measuring the JVM startup time.

Another thing: if I'm not totally mistaken, a Ruby Float is a double in
the Java world.
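(Not mistaken: MRI's Float is a C double, i.e. IEEE 754 binary64, the same representation as Java's `double`. This can be checked from Ruby itself:)

```ruby
# Ruby's Float constants expose the underlying IEEE 754 binary64 layout,
# which matches Java's `double`: a 53-bit mantissa and max exponent 1024.
puts Float::MANT_DIG  # 53
puts Float::MAX_EXP   # 1024
```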

On Wed, Aug 26, 2009 at 1:25 AM, lith[email protected] wrote:

ruby 1.9.2dev (2009-08-25 trunk 24642) [i686-linux]> ruby -rubygems bench2.rb > /dev/null

Ruby Elapsed 0.059
Ruby Elapsed 0.058
Ruby Elapsed 0.060

I guess this wasn’t the first run when the inline code got compiled?

Right, compilation time is not included. The time measurements were
taken inside the program before and after the method call.

Anyway, since the runtime is so short, in the case of the java version
you’re to some extent measuring the JVM startup time.

As above, the timings are from inside the program: so, they don’t
include the startup time, but the program did not allow for JIT warmup
time.
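One common way to allow for JIT warmup is to run the workload several times in the same process and report only the later runs; a minimal sketch, where the workload is a stand-in rather than the actual benchmark body:

```ruby
require 'benchmark'

# Stand-in workload; in practice this would be the benchmark body.
def workload
  (1..50_000).reduce(0) { |acc, i| acc + i }
end

# Time several in-process runs; the first ones include interpreter/JIT
# warmup, so report only the later ("warm") ones.
times = Array.new(5) { Benchmark.realtime { workload } }
warm = times.drop(2)
STDERR.puts "warm avg %.4f s" % (warm.sum / warm.size)
```

On JRuby in particular, the first couple of runs tend to be noticeably slower than the steady state, so discarding them gives a fairer picture of the JIT-compiled speed.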

I just ran a few of these benchmarks on my machine. I got different
results.
Linux 2.6.28-11-generic GNU/Linux
Intel Core 2 Duo 2 GHz, 2 MB L2 cache, 3 GB DDR RAM

Language             Time for 100 iterations   Times slower than java -server
java1.6 -server      0.18                      1
Ruby1.8              7.78                      44.07
Ruby1.9.2            4.2                       23.78
Jruby                2.5                       14.16
Jruby1.3.1 --server  2.31                      13.1
java1.6 -client      0.18                      1.01
python 2.6.2         3.04                      17.21

JRuby is the fastest Ruby here, at about 13 times slower than Java. I am
sure there are other command-line options that might allow JRuby to
perform faster, which I am not aware of. This also shows that Ruby 1.9.2
is slower than Python 2.6.2.

On Mon, Aug 24, 2009 at 10:43 PM, Marco M. <