Question on bottleneck of ruby

On Fri, Sep 28, 2007 at 07:31:17PM +0900, Byung-Hee HWANG wrote:

well-controlled processes, you’re probably climbing the walls and getting
ready to throw things at me right now! But this is why I was talking about a
value-tradeoff. If I’m right, then there are a lot of opportunities to
create capturable business value that traditional methodologies (including
fast-prototype-followed-by-extensive-rewrite) simply can’t touch.

For these cases, Ruby is uniquely valuable.

It seems like you are angry. I can feel that you like Ruby very much.

[…snip…]

I rather strongly suspect that you are not a psychologist – and that,
even if you were, you would not be trying to diagnose people over the
Internet. Please stick to the discussion topic rather than attempting to
assign motivations and emotions to others involved in the discussion as
an alternative to making a salient point.

On 9/28/07, Francis C. [email protected] wrote:

In some respects, guilty as charged. (I deny the point about
non-understandability. If you’re going to develop like this, then writing
documentation and unit tests must dominate the development effort, perhaps
by 10-to-1. Otherwise, you end up with nothing usable.)
This, however (although 10 might be an exaggeration), is a good thing
more often than not.

Robert

Chad P. wrote:

Perl for some of that, to avoid extremely long waits (for some definition
of “extremely long”) for basic sysadmin utilities.

I third it. I don’t like the slow startup time of JRuby any more than
you all would. We’ll do what we can to fix that.

  • Charlie

On Fri, Sep 28, 2007 at 12:37:11PM +0900, M. Edward (Ed) Borasky wrote:

Which is why they teach data structures in computer science class. It’s
all about fast search, I think. That’s one of the big gripes I have with
“lazy” interpretation. If you don’t do stuff until you have to do it, it
only pays off if you end up never having to do it. :-)

My understanding is that lazy evaluation can actually be of distinct
benefit in simplifying attempts to code for concurrency. I haven’t
really investigated the matter personally, having little call to write
software that would benefit from concurrency, but I imagine that will
change in time. That being the case, I will surely enjoy the benefits of
lazy evaluation at that time, should my understanding of its benefits to
concurrency not prove to be based on faulty information.
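
To make the idea concrete, here is a minimal sketch of deferred
evaluation in plain Ruby (the Lazy class and the memoization approach
are illustrative inventions for this thread, not any particular
implementation's mechanism):

    # A tiny "lazy value": the block is not run until (and unless) #value
    # is called, and the result is memoized so it runs at most once.
    class Lazy
      def initialize(&computation)
        @computation = computation
        @evaluated   = false
      end

      def value
        unless @evaluated
          @value     = @computation.call  # pay the cost only on first use
          @evaluated = true
        end
        @value
      end
    end

    expensive = Lazy.new { (1..5_000_000).inject(:+) }  # nothing computed yet

    # If this branch is never taken, the sum is never computed at all:
    # the "only pays off if you never need it" case mentioned above.
    puts expensive.value if ARGV.include?("--sum")

Note that in a threaded program the first evaluation would need a mutex
to be safe; the deeper concurrency benefits people claim come from the
functional, effect-free style that lazy evaluation encourages, which is
exactly the part I have not investigated.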

subjects. People have “always” prototyped in “slow but productive”
completely in a compiled language or going bankrupt buying hardware.
There are a great many use cases for Ruby where there will never come a
time that runtime performance is that important. Probably 80% of the
code I write, minimum, falls into that category. Under such
circumstances, a reasonably quick startup time and a decent algorithm
make much more of a difference than long-running performance and a
binary- or bytecode-compiled language with a reputation for performance.
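
As a rough illustration of the "decent algorithm beats raw speed" point,
here is a small, entirely hypothetical benchmark one could run (the
scenario and the sizes are made up):

    require 'benchmark'

    # A sysadmin-flavored task: check 10,000 usernames against a
    # 20,000-entry blacklist. The data structure dominates the cost.
    blacklist = (1..20_000).map { |i| "user#{i}" }
    lookups   = (1..10_000).map { |i| "user#{i * 2}" }

    as_array = blacklist                            # O(n) scan per lookup
    as_hash  = {}
    blacklist.each { |u| as_hash[u] = true }        # O(1) expected per lookup

    Benchmark.bm(16) do |bm|
      bm.report("Array#include?") { lookups.each { |u| as_array.include?(u) } }
      bm.report("Hash#key?")      { lookups.each { |u| as_hash.key?(u) } }
    end

On any reasonable machine the hash version should win by orders of
magnitude, a gap that no amount of rewriting the linear scan in a
"faster" language will close.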

That doesn’t mean I wouldn’t like to see Ruby’s performance improved
significantly in the future. It just means that if Ruby never approaches
the performance of C, or the long-running performance characteristics of
an optimizing VM like Java’s, it will in no way hamper my ability to put
Ruby to good use without having to plan for the day when I have to
rewrite everything – because that day will never come in at least the
vast majority of cases.

In fact, in cases where rapid coding up front in a way that requires a
high-level language is very important, and high-performance software
will become very important given time, my preference would not be to
prototype in Ruby (or Perl, or UCBLogo, or whatever) anyway. It’d be to
use something like OCaml, with excellent performance characteristics in
binary-compiled form, decent long-running performance in
bytecode-compiled form running on its VM, and convenient source code
access using the interpreter, with the ability to test stuff on the fly
in its “toplevel” interactive interpreter. Use a tool to suit the job at
hand.

A lot of the time, in my work and play, that tool is Ruby – and will
never require a rewrite in a “faster” language.

On 9/28/07, Robert D. [email protected] wrote:

This, however (although 10 might be an exaggeration), is a good thing
more often than not.

10-to-1 is no exaggeration, and may indeed be understating it. For a
software micro-effort to throw off capturable business value, the
documentation is almost more important than the program. That’s
precisely because the analysis of fleeting value-opportunities is also
fleeting. I’ve gotten quite used to putting short (< 2000 lines) Ruby
scripts into production and having them run trouble-free (except for
slow performance and high memory consumption). I’ve gotten just as used
to completely (and willfully) forgetting all the technical analysis
about them after they’re written. I can do this to a very limited
extent in Java, but I’ve never managed it in Python.

On 9/28/07, Byung-Hee HWANG [email protected] wrote:

value-tradeoff. If I’m right, then there are a lot of opportunities to
create capturable business value that traditional methodologies
(including fast-prototype-followed-by-extensive-rewrite) simply can’t
touch.

For these cases, Ruby is uniquely valuable.

It seems like you are angry. I can feel that you like Ruby very much.

Angry? Well, no, not in particular. I reserve anger for injustice,
malfeasance, and incompetence. In short, for things that people do,
not that machines do.

;-)

On 9/27/07, Clifford H. [email protected] wrote:

The other performance factor (related to a different discussion) that
makes byte-code interpretation faster than AST-interpretation is that
with byte-code, you get much better locality of reference, so your
cache+memory system works much better. This is a very significant
factor that justifies some recent complaints.

Depending on circumstances, byte-codes can actually be faster than
machine code.

An example.

Many years ago, Digitalk produced a Smalltalk implementation for PCs.
They continually had to answer questions about the performance of
byte-code interpretation. When they came out with a version for 32-bit
PCs, they decided to expand everything to x86 machine code, so that
they could say that their Smalltalk was compiled instead of
interpreted.

What they learned was that the machine code version actually ran
slower due to locality of reference and paging.
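
To sketch the shape of the two approaches (a toy illustration in Ruby,
not how any real VM lays out memory): the AST walker recurses through
heap objects scattered by the allocator, while the byte-code loop
marches through a single flat array.

    # AST interpretation: each node is a separate heap object, and
    # evaluation chases references from node to node.
    Node = Struct.new(:op, :left, :right) do
      def evaluate
        case op
        when :lit then left
        when :add then left.evaluate + right.evaluate
        when :mul then left.evaluate * right.evaluate
        end
      end
    end

    ast = Node.new(:add, Node.new(:lit, 2),
                   Node.new(:mul, Node.new(:lit, 3), Node.new(:lit, 4)))
    puts ast.evaluate   # => 14

    # Byte-code interpretation: the same program flattened into one array,
    # walked sequentially with an operand stack.
    CODE  = [[:push, 2], [:push, 3], [:push, 4], [:mul], [:add]]
    stack = []
    CODE.each do |insn, arg|
      case insn
      when :push then stack.push(arg)
      when :add  then b, a = stack.pop, stack.pop; stack.push(a + b)
      when :mul  then b, a = stack.pop, stack.pop; stack.push(a * b)
      end
    end
    puts stack.pop      # => 14

The cache effect itself obviously cannot be measured from a Ruby toy;
the sketch only shows why consecutive byte-code steps touch adjacent
memory while consecutive AST steps generally do not.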

Of course tuning the performance of anything, and in particular a
dynamic language implementation, is as much of an art as a science,
and requires constant experimentation and willingness to overcome
one’s assumptions.


Rick DeNatale

My blog on Ruby
http://talklikeaduck.denhaven2.com/