Ruby Vs. Java

On Aug 26, 2007, at 6:33 PM, Michael G. wrote:

On Aug 26, 2007, at 11:13 , Lionel B. wrote:

Real (wo)men program in machine language with a hex editor only.

You use a hex editor? Real hackers use only ones and zeros—and
that’s only if they have ones.

I once saw a cartoon about real programmers where a guy was sitting
in front of a terminal with a keyboard that had only two keys,
labelled “0” and “1”, hahaha.

– fxn

Uma G. wrote:

Funny enough… but isn’t this exactly the way telegraphs used to work?

That was the previous system; only one key.

On Aug 26, 2007, at 10:03 PM, Phlip wrote:

That was the previous system; only one key.
One key, with relatively short or long key presses and pauses between words.
Then as now, the error was usually between the desk and the chair.

On 8/27/07, Marc H. [email protected] wrote:

"I’m going to try both Java and Ruby out before I choose. "

But Java is faster.
It’s also uglier and less fun than Ruby.

If you need proper speed, just stick to the static
languages. If you want the fun, go with Ruby. :>

If you want a JVM static language, I’d strongly suggest Scala rather
than Java.

martin

Michael G. wrote:

On Aug 26, 2007, at 11:13 , Lionel B. wrote:

Real (wo)men program in machine language with a hex editor only.

You use a hex editor? Real hackers use only ones and zeros—and that’s
only if they have ones.

You had zeros? I worked for a company that could not afford zeros. We
had to use the letter “O”.

In the olden days, programs used to be a combination of assembly and a
low-level compiled language like C or Pascal. Lotus was actually
written wholly in assembler back in the day. The comparison of C to
assembler was, in the early days, much like the comparison of Java to
Ruby today. However, the speed comparison of assembler to C did not
sustain its original conclusion for long. As compilers improved, it
became possible to write wholly in C and have it execute faster than
pure assembler.

Personal anecdote #1: actual testing

I was working at Quantum at the time (hard disk maker) and this very
controversy arose. The managers listened to the philosophical debate
for a while and decided to settle it. We had volunteers from each side
to write code in their idiom doing theoretical and practical tests of
speed. By theoretical I mean do the operation X times and see how fast
it is. By practical, I mean that you had to see how many seeks,
read/writes, etc. you could get in a certain time with the different
algorithms.

The pure language, no assembler, won hands down.
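
To make the two kinds of test concrete, here is a minimal Ruby sketch of the same idea (the workload, names, and numbers are made up for illustration): time a fixed number of repetitions for the “theoretical” measurement, and count how many operations finish inside a fixed window for the “practical” one.

```ruby
require 'benchmark'

# Hypothetical stand-in for the routine under test.
def do_x
  1_000.times { |i| i * i }
end

# "Theoretical" test: run the operation a fixed number of times and time it.
elapsed = Benchmark.realtime { 10_000.times { do_x } }
puts "10,000 iterations took #{elapsed} seconds"

# "Practical" test: count how many operations finish in a fixed window.
deadline = Time.now + 5
count = 0
while Time.now < deadline
  do_x
  count += 1
end
puts "#{count} operations completed in 5 seconds"
```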

Personal anecdote #2: compiler comparisons

I was a sysop on CompuServe for the Borland topics for a while and this
came up again in various threads. There were several assembler devotees
who were pushing their notions. They maintained that they could write
an assembler routine in fewer lines than anything that a compiled
language could manage. They tried several examples. In each case,
someone wrote a Pascal method, compiled it, then looked at the generated
assembler and posted it. The compiled version was always shorter than
the one written in assembler.

Does this prove that it will always be so? Nah. It proves that trying
to out-compile the compiler these days is a waste. You MAY get one line
less here or there, but only with tremendous effort, so why bother? Is
assembler useless? Nah, but its reputation is overrated.

What does this mean in this context?

The speed of execution is not as simple as looking at the stopwatch.
With hardware prices coming down, down, down, you can get muscle machines
to pick up any such slack. But that costs money. Well, so does
development. If you can develop a lot faster in Ruby, is it worth it to
get another machine and run some of the parts in threads to get the job
done faster? If it is not, then speed may not be as critical as you
think.

There are many ways to get things moving faster. Here is one I saw
recently that I consider quite helpful. It also has one of my
favorite lines I have ever read in a technical article:

“However, all these caching measures won’t hide a basic problem: you are
performing lots of database queries, and it’s harshing your mellow.”

How can you top that???
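
As a concrete illustration of the point in that quote, here is a minimal Ruby sketch of the simplest kind of caching: memoizing a query result so repeated lookups stop hitting the database. The class, the db object, and the query are all hypothetical.

```ruby
# Minimal memoization sketch; UserDirectory, the db object and the SQL
# are all made up for illustration.
class UserDirectory
  def initialize(db)
    @db    = db
    @cache = {}
  end

  # The first call for a given id runs the query; later calls are served
  # from the in-memory hash. (A nil result would be re-queried, since ||=
  # treats nil as "not cached"; fine for a sketch.)
  def find_user(id)
    @cache[id] ||= @db.query("SELECT * FROM users WHERE id = #{id.to_i}")
  end
end
```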

In short, Java might run faster, but that should not make the
difference. You can use hardware and design to make up for that. After
that is done, it is the time to develop and maintain the code that
matters most.

FWIW, and IMHO

Lloyd L. wrote:

Michael G. wrote:

On Aug 26, 2007, at 11:13 , Lionel B. wrote:

Real (wo)men program in machine language with a hex editor only.
You use a hex editor? Real hackers use only ones and zeros—and that’s
only if they have ones.

You had zeros? I worked for a company that could not afford zeros. We
had to use the letter “O”.

Be careful … I understand Scott Adams is rather nasty about people
stealing his better Dilbert lines. :-) But … in truth, although ILLIAC
I assembler did have symbolic addressing, it did not have symbolic op
codes … the machine language of the system was so simple and logical
that they weren’t needed.

Other notes on that bygone day – programmers used hexadecimal notation,
but it was called “sexadecimal”. And after 9 you had either K, S, N, J,
F, L or +, -, N, J, F, L. Those happened to be the right codes on the
five-hole paper tape that the Teletypes punched.

Lloyd L. wrote:

I was working at Quantum at the time (hard disk maker) and this very
controversy arose. The managers listened to the philosophical debate
for a while and decided to settle it. We had volunteers from each side
to write code in their idiom doing theoretical and practical tests of
speed. By theoretical I mean do the operation X times and see how fast
it is. By practical, I mean that you had to see how many seeks,
read/writes, etc. you could get in a certain time with the different
algorithms.

The pure language, no assembler, won hands down.

I’m really curious about two things:

  1. The processor architecture, and
  2. The language.

There once was an architecture called VLIW, embodied in a
mini-supercomputer called Multiflow. This architecture was so
complicated that it literally had to have a compiler – no human could
even program it, let alone optimize code for it. The compiler used a
technique called “trace scheduling” to do this.

The punchline is that the optimization problem for this beast was
NP-complete. Now most compiler optimization problems are NP-complete
once you express them as true combinatorial optimization, and the good
folks at Multiflow weren’t oblivious to that fact. However, their
approximations were still slow relative to what simpler architectures
required, and Multiflow went out of business. They disappeared without a
trace.

On Aug 26, 2007, at 6:39 PM, Nick el wrote:

I hate lag and I want as little of it as possible. But I guess that
also depends on the quality of the code.

Lag is also largely caused by a DoS of packets to the server. Want a
fix? Use multiple servers (like WoW), and get LOTS of bandwidth.

---------------------------------------------------------------|
~Ari
“I don’t suffer from insanity. I enjoy every minute of it” --1337est
man alive

On 8/26/07, Terry P. [email protected] wrote:

I remember when Tcl became popular in the early 90’s. On the performance
issue, John Ousterhout, Tcl’s creator, argued that its performance was
actually very good, because you would write very few lines of Tcl and
most of the work would be done by the underlying compiled-code
implementation. Indeed, Tcl scripts are very succinct.

I suspect the same could be true in Ruby. Because Ruby code tends to be
more succinct than Java code, there is potentially greater room for
optimization (ignoring other issues due to language differences…), as
less time needs to be spent in the “interpretation of characters”. Just
a thought…

dean
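
Ousterhout’s argument carries over to Ruby fairly directly: the succinct built-in methods are implemented in C inside the interpreter, so leaning on them usually beats spelling the same loop out in Ruby code. A rough, hypothetical comparison sketch:

```ruby
require 'benchmark'

numbers = Array.new(1_000_000) { rand(1_000_000) }

Benchmark.bm(12) do |bm|
  # The same scan written out in Ruby, element by element.
  bm.report('ruby loop') do
    max = numbers.first
    numbers.each { |n| max = n if n > max }
  end

  # One succinct call; the scan runs in the interpreter's C implementation.
  bm.report('Array#max') { numbers.max }
end
```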

On 2007-08-27, M. Edward (Ed) Borasky [email protected] wrote:

Lloyd L. wrote:

You had zeros? I worked for a company that could not afford zeros.
We had to use the letter “O”.

Be careful … I understand Scott Adams is rather nasty about people
stealing his better Dilbert lines. :-)

I thought people sent him his best lines. :-) :-)

Jeremy H.

On Mon, 27 Aug 2007, Phlip wrote:

Uma G. wrote:

Funny enough… but isn’t this exactly the way telegraphs used to work?

That was the previous system; only one key.

Ummm, I actually use an iambic keyer. One side for dots and the other
for
dashes.

– Matt
It’s not what I know that counts.
It’s what I can remember in time to use.

M. Edward (Ed) Borasky wrote:

Lloyd L. wrote:

I was working at Quantum at the time (hard disk maker) and this very
controversy arose. The managers listened to the philosophical debate
for a while and decided to settle it. We had volunteers from each side
to write code in their idiom doing theoretical and practical tests of
speed. By theoretical I mean do the operation X times and see how fast
it is. By practical, I mean that you had to see how many seeks,
read/writes, etc. you could get in a certain time with the different
algorithms.

The pure language, no assembler, won hands down.

I’m really curious about two things:

  1. The processor architecture, and
  2. The language.

  1. It was on a 386, 16 MHz with no math coprocessor. (How is THAT for
    old???)

  2. We were using straight C and Microsoft’s 5.1 compiler on DOS 3.1, if
    memory serves.

Note: I was the one who did the theoretical programming. My buddy did
the practical and he did it with a kind of cheat. He would write
code, then look at the assembler that the compiler produced. He would
tweak the code and look again. Whatever produced the fewest assembler
lines is what he used. His code was more than an order of magnitude
faster.

Theoretical speed is somewhat like the velocity metrics on high
performance cars. Just because it goes 0-60 in 4 seconds does not mean
that you can get it to do that. It is much the same with Java vs. Ruby.
Many articles I have read, and with which I agree, say that development
and maintenance are the biggest costs for most projects. Getting things
up and earning revenue as fast as possible is not to be underestimated.

On Tue, Aug 28, 2007 at 12:33:20PM +0900, Kenneth McDonald wrote:

Java is much faster than Ruby (on average), and can now approach or even
match the speed of C in many cases. Of course, you’ll spend more time
implementing those fast bits of code in Java.

I suspect you mean C++. I don’t recall seeing much in the way of
evidence that Java had quite gotten as fast as C – which is still about
twice as fast as C++ for many purposes.

Nick N. wrote:

Which programming language is faster - Ruby or Java?

This is one of the things that will decide whether I use Ruby or Java so
help is appreciated greatly.

Thanks.

Java is much faster than Ruby (on average), and can now approach or even
match the speed of C in many cases. Of course, you’ll spend more time
implementing those fast bits of code in Java.

I didn’t look at all of the replies to the original note, but none of
the ones I did read mentioned JRuby. Worth checking out; use Ruby
(executed by the Java virtual machine) for the non-performance-critical
parts of your application, and Java for the parts that require speed.

Ken
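
For anyone who has not seen JRuby’s Java integration, here is a minimal sketch of the kind of mixing Ken describes (run it with the jruby executable; the particular collection and method are chosen only for illustration):

```ruby
require 'java'   # enables JRuby's Java integration; run this with jruby

# Use a Java collection directly from Ruby code.
list = java.util.ArrayList.new
%w[ruby java jruby].each { |word| list.add(word) }
puts list.size                          # => 3

# Call a static Java method.
puts java.lang.System.currentTimeMillis
```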

Java’s speed comes largely from its pre-compiling. It was quite slow in
the early days. Is Ruby likely to get such a boon in the foreseeable
future? That would certainly be something that pointy haired managers
could boldly hold forth in meetings for consideration.

On Aug 28, 2007, at 2:18 PM, Lloyd L. wrote:

Java’s speed comes largely from its pre-compiling. It was quite
slow in
the early days. Is Ruby likely to get such a boon in the foreseeable
future? That would certainly be something that pointy haired managers
could boldly hold forth in meetings for consideration.

http://blog.grayproductions.net/articles/the_ruby_vm_episode_v

James Edward G. II

Lloyd L. wrote:

Java’s speed comes largely from its pre-compiling. It was quite slow in
the early days. Is Ruby likely to get such a boon in the foreseeable
future? That would certainly be something that pointy haired managers
could boldly hold forth in meetings for consideration.

Um, no. Java’s speed comes principally from
a) its non-object numeric types, and
b) sophisticated VMs that do adaptive optimization (see the Sun HotSpot
Server VM).

If by precompiling you mean compiling to bytecode then, no, this won’t
of itself give great speed. Java has always compiled to bytecode, but the
early Sun reference VM - a bytecode interpreter - was still slow. It’s
easy to write slow bytecode interpreters. YARV for Ruby is currently
also a slow bytecode interpreter.

If you want to understand how Ruby can be made to run fast I recommend
reading about the Strongtalk VM and reading Urs Hölzle’s thesis,
“Adaptive Optimization for Self: Reconciling High
Performance with Exploratory Programming”,
available on the web as
http://www.cs.ucsb.edu/labs/oocsb/papers/urs-thesis.html
http://www.cs.ucsb.edu/labs/oocsb/papers/hoelzle-thesis.pdf

HTH
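
Point (a) is easiest to see in a tight numeric loop. In a sketch like the one below (the loop itself is arbitrary), MRI dispatches a method call on a Fixnum object for every addition and comparison, whereas the equivalent Java loop runs on primitive ints with no object overhead:

```ruby
require 'benchmark'

# Every += and < below goes through Fixnum method dispatch in MRI;
# the equivalent Java loop runs on primitive ints.
elapsed = Benchmark.realtime do
  sum = 0
  i = 0
  while i < 10_000_000
    sum += i
    i += 1
  end
end
puts "summed ten million integers in #{elapsed} seconds"
```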

Eliot M. wrote:

If by precompiling you mean compiling to bytecode then, no, this won’t
of itself give great speed. Java has always compiled to bytecode, but the
early Sun reference VM - a bytecode interpreter - was still slow. It’s
easy to write slow bytecode interpreters. YARV for Ruby is currently
also a slow bytecode interpreter.

This is precisely why we’ve been able to get very good performance in
JRuby. Though we still have an interpreted mode, which runs slower than
Ruby 1.8, we also have a nearly complete Ruby-to-JVM-bytecode compiler
that executes consistently faster than Ruby 1.8 and in some cases faster
than Ruby 1.9. In general, the difficult task has been structuring the
bytecode and the call pipeline in such a way as to allow HotSpot to do
its optimization.

This also means that Java and JRuby and similar adaptive optimizing
runtimes require some “warm-up time”. Java code will get faster as it
executes, but for short benchmarks it will usually be much slower than
its full potential. The same applies to JRuby, and as a result JRuby
will be better for longer-running processes (unless, of course, you
don’t mind it being a little slow early on).
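
The warm-up effect is easy to observe with a benchmark that times the same block over and over; on an adaptive-optimizing runtime such as JRuby the later passes should come out noticeably faster than the first, while on MRI the times stay roughly flat. A minimal sketch (the workload is arbitrary):

```ruby
require 'benchmark'

# Arbitrary pure-Ruby workload; any hot loop will do.
def work
  (1..50_000).inject(0) { |sum, n| sum + n }
end

5.times do |pass|
  t = Benchmark.realtime { 200.times { work } }
  puts "pass #{pass + 1}: #{t} seconds"
end
```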

As far as Ruby vs Java…why not just use both: www.jruby.org

  • Charlie