For performance, write it in C

On Thu, 27 Jul 2006, Francis C. wrote:

In regard to YARV: I get a creepy feeling about anything that is
considered by most of the world to be the prospective answer to all
their problems. And as a former language designer, I have some reasons
to believe that a VM will not be Ruby’s performance panacea.

one of the reasons i’ve been pushing so hard for an msys based ruby is that
having a ‘compilable’ ruby on all platforms might open up development on
jit type things like ruby inline - which is pretty dang neat.

2 cts.

-a

On Thu, Jul 27, 2006 at 04:27:57AM +0900, Ashley M. wrote:

Anyway, my question really is that I thought a VM was a prerequisite
for JIT? Is that not the case? And if the YARV VM is not the way to
go, what is?

The canonical example for comparison, I suppose, is the Java VM vs. the
Perl JIT compiler. In Java, the source is compiled to bytecode and
stored. In Perl, the source remains in source form, and is stored as
ASCII (or whatever). When execution happens with Java, the VM actually
interprets the bytecode. Java bytecode is compiled for a virtual
computer system (the “virtual machine”), which then runs the code as
though it were native binary compiled for this virtual machine. That
virtual machine is, from the perspective of the OS, an interpreter,
however. Thus, Java is generally half-compiled and half-interpreted,
which speeds up the interpretation process.

When execution happens in Perl 5.x, on the other hand, a compiler runs
at execution time, compiling executable binary code from the source. It
does so in stages, however, to allow for the dynamic runtime effects of
Perl to take place – which is one reason the JIT compiler is generally
preferable to a compiler of persistent binary executables in the style
of C. Perl is, thus, technically a compiled language, and not an
interpreted language like Ruby.

Something akin to bytecode compilation could be used to improve upon the
execution speed of Perl programs without diverging from the
JIT-compilation execution it currently uses and also without giving up
any of the dynamic runtime capabilities of Perl. This would involve
running the first (couple of) pass(es) of the compiler to produce a
persistent binary compiled file with the dynamic elements still left in
an uncompiled form, to be JIT-compiled at execution time. That would
probably grant the best performance available for a dynamic language,
and would avoid the overhead of a VM implementation. It would, however,
require some pretty clever programmers to implement in a sane fashion.

I’m not entirely certain that would be appropriate for Ruby, considering
how much of the language ends up being dynamic in implementation, but it
bothers me that it doesn’t even seem to be up for discussion. In fact,
Perl is heading in the direction of a VM implementation with Perl 6,
despite the performance successes of the Perl 5.x compiler. Rather than
improve upon an implementation that is working brilliantly, they seem
intent upon tossing it out and creating a different implementation
altogether that, as far as I can see, doesn’t hold out much hope for
improvement. I could, of course, be wrong about that, but that’s how it
looks from where I’m standing.

It just looks to me like everyone’s chasing VMs. While the nontrivial
problems with Java’s VM are in many cases specific to the Java VM (the
Smalltalk VMs have tended to be rather better designed, for instance),
there are still issues inherent in the VM approach as currently
envisioned, and as such it leaves sort of a bad taste in my mouth.

I think I’ve rambled. I’ll stop now.

On Thu, Jul 27, 2006 at 04:59:13AM +0900, Francis C. wrote:

orders of magnitude faster. You can even try to break your computation
up into multiple stages, and stream the intermediate results out to
temporary files. As ugly as that sounds, it will be far faster.

One of these days, I’ll actually know enough Ruby to be sure of what
language constructs work for what purposes in terms of performance. I
rather suspect there are prettier AND better-performing options than
using temporary files to store data during computation, however.
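For what it’s worth, the staged approach Francis describes is easy to sketch in Ruby with the stdlib Tempfile class; the transformation and reduction steps below are invented purely for illustration:

```ruby
require 'tempfile'

# Stand-in for a large data source; real input would be streamed too.
input = ["3", "1", "2"]

# Stage 1: transform each record and stream the results to a temp file,
# so the full intermediate set never has to live in memory.
stage1 = Tempfile.new('stage1')
input.each { |line| stage1.puts(line.to_i * 10) }
stage1.rewind

# Stage 2: read the intermediate results back and reduce them.
total = stage1.each_line.map(&:to_i).inject(0, :+)

p total   # => 60
```

The same pattern chains for as many stages as needed, with each stage reading the previous stage’s file line by line.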

Ron M wrote:

Not really. In C you can quite easily use inline assembly
to use your chip’s MMX/SSE/VIS/AltiVec extensions and if
you need more, interface to your GPU if you want to use it
as a coprocessor.

I don’t know of any good way of doing those in Java except
by writing native extensions in C or directly with an assembler.

Last I played with Java it didn’t have a working cross-platform
mmap, and if that’s still true, the awesome NArray+mmap Ruby extension
floating around is a good real-world example of this flexibility.

Your point about Java here is very well-taken. I’d add that you don’t
even really need to drop into asm to get most of the benefits you’re
talking about. C compilers are really very good at optimizing, and I
think you’ll get nearly all of the available performance benefits from
well-written C alone. (I’ve written at least a million lines of
production asm code in my life, as well as a pile of commercial
compilers for various languages.) It goes back to economics again. A
very few applications will gain so much incremental value from the extra
5-10% performance boost that you get from hand-tuned asm, that it’s
worth the vastly higher cost (development, maintenance, and loss of
portability) of doing the asm. A tiny number of pretty unusual apps
(graphics processing, perhaps) will get a lot more than 10% from asm.

The performance increment in going from Ruby to C is in many cases a
lot more than 10%, in fact it can easily be 10,000%.

On Thu, 27 Jul 2006, Ashley M. wrote:

I’m late to this conversation but I’ve been interested in Ruby performance
lately. I just had to write a script to process about 1-1.5GB of CSV data

Just as a sidenote to this conversation, if you are not using FasterCSV,
take a look at it. http://rubyforge.org/projects/fastercsv

Using it may dramatically speed your script.
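For anyone curious, a minimal sketch of the kind of row-at-a-time usage being suggested; FasterCSV’s `foreach` interface is the same one that later shipped as the stdlib CSV library, which is what’s shown here, and the sample data is made up:

```ruby
require 'csv'       # FasterCSV's interface; same API as stdlib CSV in 1.9+
require 'tempfile'

# A tiny sample file standing in for one of the large CSV inputs.
sample = Tempfile.new(['sample', '.csv'])
sample.write("id,name\n1,foo\n2,bar\n")
sample.close

rows = []
# Stream the file row by row instead of slurping it all into memory.
CSV.foreach(sample.path) do |row|
  rows << row       # each row arrives as an Array of String fields
end

p rows.length   # => 3
```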

Kirk H.

Chad P. wrote:

On Thu, Jul 27, 2006 at 04:59:13AM +0900, Francis C. wrote:

orders of magnitude faster. You can even try to break your computation
up into multiple stages, and stream the intermediate results out to
temporary files. As ugly as that sounds, it will be far faster.

One of these days, I’ll actually know enough Ruby to be sure of what
language constructs work for what purposes in terms of performance. I
rather suspect there are prettier AND better-performing options than
using temporary files to store data during computation, however.

Ashley was talking about 1GB+ datasets, iirc. I’d love to see an
in-memory data structure (Ruby or otherwise) that can slug a few of
those around without breathing hard. And on most machines, you’re going
through the disk anyway with a dataset that large, as it thrashes your
virtual memory. So why not take advantage of the tunings that are built
into the I/O channel?

If I’m using C, I always handle datasets that big with the kernel vm
functions- generally faster than the I/O functions. I don’t know how to
do that portably in Ruby (yet).

Chad P. wrote:

On Wed, Jul 26, 2006 at 11:29:06PM +0900, Ryan McGovern wrote:
-snip-
For those keen on functional programming syntax, Haskell is a better
choice than Java for performance: in fact, the only thing keeping
Haskell from performing as well as C, from what I understand, is the
current state of processor design. Similarly, O’Caml is one of the
fastest non-C languages available: it consistently, in a wide range of
benchmark tests and real-world anecdotal comparisons, executes “at least
half as quickly” as C, which is faster than it sounds.

For those keen on functional programming, Clean produces small fast
executables.

The OP is right, though: if execution speed is your top priority, use C.
Java is an also-ran – what people generally mean when they say that
Java is almost as fast as C is that a given application written in both
C and Java “also runs in under a second” in Java, or something to that
effect. While that may be true, there’s a significant difference
between 0.023 seconds and 0.8 seconds (for hypothetical example).

That sounds wrong to me - I hear positive comments about Java
performance for long-running programs, not for programs that run in
under a second.

Isaac G. wrote:

That sounds wrong to me - I hear positive comments about Java
performance for long-running programs, not for programs that run in
under a second.

JIT is the key to a lot of that. Performance depends greatly on
the compiler, the JVM, the algorithm, etc.

I won a bet once from a friend. We wrote comparable programs in
Java and C++ (some arbitrary math in a loop running a bazillion
times).

With defaults on both compiles, the Java was actually faster
than the C++. Even I didn’t expect that. But as I said, this
sort of thing is highly dependent on many different factors.
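That sort of microbenchmark is easy to reproduce; a rough Ruby sketch using the stdlib Benchmark module, with an invented loop body and a far smaller count than “a bazillion”:

```ruby
require 'benchmark'

N = 100_000

# Some arbitrary math in a loop, in the spirit of the bet.
each_time = Benchmark.realtime do
  sum = 0
  (1..N).each { |i| sum += i * i }
end

# The same computation through inject, to compare two Ruby idioms.
inject_time = Benchmark.realtime do
  (1..N).inject(0) { |s, i| s + i * i }
end

puts format("each:   %.4fs", each_time)
puts format("inject: %.4fs", inject_time)
```

As Hal says, the results depend heavily on the runtime and the workload, so numbers from a toy loop like this generalize poorly.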

Hal

On Jul 26, 2006, at 9:31 pm, Francis C. wrote:

functions- generally faster than the I/O functions. I don’t know how to
do that portably in Ruby (yet).

I think the total data size is about 1.5GB, but the individual files
are smaller, the largest being a few hundred MB. The most rows in a
file is ~15,000,000 I think. The server I run it on has 2GB RAM (an
Athlon 3500+ running FreeBSD/amd64, so the hardware is not really an
issue)… it can get all the way through without swapping (just!)

The processing is pretty trivial, and mainly involves incrementing
some ID columns so we can merge datasets together, adding a text
column to the start of every row, and eliminating a few duplicates.
The output file is gzipped (sending the output of CSV::Writer through
GzipWriter). I could probably rewrite it so that most files are
output a line at a time, and call out to the command line gzip. Only
the small files need to be stored in RAM for duplicate removal,
others are guaranteed unique. At the time I didn’t think using RAM
would give such a huge performance hit (lesson learnt).
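A minimal sketch of that line-at-a-time rewrite, streaming rows straight through Zlib::GzipWriter instead of building the output in RAM (the dataset label, ID offset, and sample rows are all invented):

```ruby
require 'zlib'
require 'csv'
require 'tempfile'

# Hypothetical offset applied to ID columns when merging datasets.
ID_OFFSET = 1000

tmp = Tempfile.new(['out', '.csv.gz'])

# Stream each row straight through the gzip writer, one line at a time,
# so nothing but the current row is held in memory.
Zlib::GzipWriter.open(tmp.path) do |gz|
  [["1", "alice"], ["2", "bob"]].each do |id, name|
    gz.write(["datasetA", id.to_i + ID_OFFSET, name].to_csv)
  end
end

# Read it back to check what was written.
content = Zlib::GzipReader.open(tmp.path) { |gz| gz.read }
puts content
```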

I might also look into Kirk’s suggestion of FasterCSV. If all this
doesn’t improve things, there’s always the option of going dual-core
and forking to do independent files.

However… the script can be run at night so even in its current
state it’s acceptable. It will only need serious work if we start
adding many more datasets into the routine (we’re using two out of a
conceivable 4 or 5, I think). In that case we could justify buying a
faster CPU if it got out of hand, rather than rewrite it in C. But
that’s more a reflection of hardware prices than my wages :-)

I have yet to write anything in Ruby that was less than twice as fast
to code as it would have been in bourne-sh/Java/whatever, never mind
twice as fun or maintainable. I recently rewrote an 830 line Java/
Hibernate web service client as 67 lines of Ruby, in about an hour.
With that kind of productivity, performance can go to hell!

Ashley

On 7/26/06, Ron M [email protected] wrote:

as a coprocessor.

I don’t know of any good way of doing those in Java except
by writing native extensions in C or directly with an assembler.

So you’re saying that when writing an extension to Ruby in C you can also
manually write assembly to speed up specific aspects of your code, and that
when writing an extension in Java you’d have to manually write assembly to
speed up specific aspects of your code? The great hassle of writing Java
Native Interface code aside (which is really a one-time cost), what exactly
is the difference here?

On any platform, Java included, you can eventually call out to C code to do
some processor-specific or really performance-intensive task. Java doesn’t
make it as easy as Ruby, but it also performs quite a bit better than Ruby
for most cases. It’s only in rare cases that you actually need to write
native code to make a Java app perform well. However in the Ruby world, that
tends to be the stock answer…if it’s not fast enough, give up on Ruby!

I can absolutely appreciate the gains shown by moving targeted pieces of
code from Ruby to C. In those examples, Ruby’s power is grossly
underutilized, so the conversion to a less feature-rich language with less
overhead makes a great deal of sense. However I would challenge the Ruby
community at large to expect more from Ruby proper before giving up the
dream of highly-performant Ruby code and plunging into the C.

On Thu, 27 Jul 2006, Ashley M. wrote:

rewrite it so that most files are output a line at a time, and call out to
datasets into the routine (we’re using two out of a conceivable 4 or 5, I
think). In that case we could justify buying a faster CPU if it got out of
hand, rather than rewrite it in C. But that’s more a reflection of hardware
prices than my wages :-)

I have yet to write anything in Ruby that was less than twice as fast to code
as it would have been in bourne-sh/Java/whatever, never mind twice as fun or
maintainable. I recently rewrote an 830 line Java/Hibernate web service
client as 67 lines of Ruby, in about an hour. With that kind of
productivity, performance can go to hell!

i process tons of big csv files and use this approach:

  • parse the first line, remember cell count

  • foreach line

    • attempt parsing using simple split, iff that fails fall back to
      csv.rb methods
something like

n_fields = nil

f.each do |line|
  fields = line.split %r/,/
  n_fields ||= fields.size

  if fields.size != n_fields
    fields = parse_with_csv_lib line
  end

  ...
end

this obviously won’t work with csv files that have cells spanning lines, but
for simple stuff it can speed up parsing in a huge way.

-a

Charles O Nutter wrote:

I would challenge the Ruby
community at large to expect more from Ruby proper before giving up the
dream of highly-performant Ruby code and plunging into the C.

Much depends on what is wanted from the language. My friends know me for
a person who will gladly walk a very long way to get an incremental
performance improvement in any program. But I don’t dream of
highly-performant Ruby code. I dream of highly-scalable applications
that can work with many different kinds of data seamlessly and link
business people and their customers together in newer, faster, more
secure ways than have ever been imagined before. I want to be able to
turn almost any kind of data, wherever it is, into actionable
information and combine it flexibly with any other data. I want to be
able to simply drop any piece of new code into a network and
automatically have it start working with other components in the
(global) network. I want a language system that can gracefully and
powerfully model all of these new kinds of interactions without
requiring top-down analysis of impossibly large problem domains and
rigid program-by-contract regimes. Ruby has unique characteristics,
among all other languages that I know, that qualify it for a first
approach to my particular dream. Among these are the excellent
metaprogramming support, the open classes, the adaptability to tooling,
and (yes) the generally-acceptable performance.

If one’s goal is to get a program that will take the least amount of
time to plow through some vector mathematics problem, then by all means
let’s have the language-performance discussion. But to me, most of these
compute-intensive tasks are problems that have been being addressed by
smart people ever since Fortran came along. We don’t necessarily need
Ruby to solve them.

We do need Ruby to solve a very different set of next-generation
problems, for which C and Java (and even Perl and Python) are very
poorly suited.

On Jul 26, 2006, at 9:11 pm, Chad P. wrote:

It just looks to me like everyone’s chasing VMs. While the nontrivial
problems with Java’s VM are in many cases specific to the Java VM (the
Smalltalk VMs have tended to be rather better designed, for instance),
there are still issues inherent in the VM approach as currently
envisioned, and as such it leaves sort of a bad taste in my mouth.

Chad…

Just out of curiosity (since I don’t know much about this subject),
what do you think of the approach Microsoft took with the CLR? From
what I read it’s very similar to the JVM except it compiles directly
to native code, and makes linking to native libraries easier. I
assume this is closer to JVM behaviour than Perl 5 behaviour. Is
there anything to be learnt from it for Ruby?

Ashley

On 7/26/06, Chad P. [email protected] wrote:

which speeds up the interpretation process.
Half true. The Java VM could be called “half-compiled and half-interpreted”
at runtime for only a short time, and only if you do not consider VM
bytecodes to be a valid “compiled” state. However most bytecode is very
quickly compiled into processor-native code, making those bits fully
compiled. After a long enough runtime (not very long in actuality), all Java
code is running as native code for the target processor (with various
degrees of optimization and overhead).

The difference from AOT compilation with GCC or .NET is that Java’s
compiler can make determinations based on runtime profiling about how to
compile that “last mile” in the most optimal way possible. The bytecode
compilation does, as you say, primarily speed up the interpretation process.
However it’s far from the whole story, and the runtime JITing of bytecode
into native code is where the magic lives. To miss that is to miss the
greatest single feature of the JVM.

When execution happens in Perl 5.x, on the other hand, a compiler runs

at execution time, compiling executable binary code from the source. It
does so in stages, however, to allow for the dynamic runtime effects of
Perl to take place – which is one reason the JIT compiler is generally
preferable to a compiler of persistent binary executables in the style
of C. Perl is, thus, technically a compiled language, and not an
interpreted language like Ruby.

I am not familiar with Perl’s compiler. Does it compile to processor-native
code or to an intermediate bytecode of some kind?

We’re also juggling terms pretty loosely here. A compiler converts
human-readable code into machine-readable code. If the “machine” is a VM,
then you’re fully compiling. If the VM code later gets compiled into “real
machine” code, that’s another compile cycle. Compilation isn’t as cut and
dried as you make it out to be, and claiming that, for example, Java is
“half compiled” is just plain wrong.

Something akin to bytecode compilation could be used to improve upon the

execution speed of Perl programs without diverging from the
JIT-compilation execution it currently uses and also without giving up
any of the dynamic runtime capabilities of Perl. This would involve
running the first (couple of) pass(es) of the compiler to produce a
persistent binary compiled file with the dynamic elements still left in
an uncompiled form, to be JIT-compiled at execution time. That would
probably grant the best performance available for a dynamic language,
and would avoid the overhead of a VM implementation. It would, however,
require some pretty clever programmers to implement in a sane fashion.

There are a lot of clever programmers out there.

I’m not entirely certain that would be appropriate for Ruby, considering

how much of the language ends up being dynamic in implementation, but it
bothers me that it doesn’t even seem to be up for discussion. In fact,
Perl is heading in the direction of a VM implementation with Perl 6,
despite the performance successes of the Perl 5.x compiler. Rather than
improve upon an implementation that is working brilliantly, they seem
intent upon tossing it out and creating a different implementation
altogether that, as far as I can see, doesn’t hold out much hope for
improvement. I could, of course, be wrong about that, but that’s how it
looks from where I’m standing.

Having worked heavily on a Ruby implementation, I can say for certain that
99% of Ruby code is static. There are some dynamic bits, especially within
Rails where methods are juggled about like flaming swords, but even these
dynamic bits eventually settle into mostly-static sections of code.
Compilation of Ruby code into either bytecode for a fast interpreter engine
like YARV or into bytecode for a VM like Java is therefore perfectly valid
and very effective. Preliminary compiler results for JRuby show a boost of
50% performance over previous versions, and that’s without optimizing many
of the more expensive Ruby operations (call logic, block management).
Whether a VM is present (as in JRuby) or not (as may be the case with YARV),
eliminating the overhead of per-node interpretation is a big positive. JRuby
will also feature a JIT compiler to allow running arbitrary .rb files
directly, optimizing them as necessary and as seems valid based on runtime
characteristics. I don’t know if YARV will do the same, but it’s a good
idea.

It just looks to me like everyone’s chasing VMs. While the nontrivial

problems with Java’s VM are in many cases specific to the Java VM (the
Smalltalk VMs have tended to be rather better designed, for instance),
there are still issues inherent in the VM approach as currently
envisioned, and as such it leaves sort of a bad taste in my mouth.

The whole VM thing is such a small issue. Ruby itself is really just a VM,
where its instructions are the elements in its AST. The definition of a VM
is sufficiently vague to include most other interpreters in the same
family. Perhaps you are specifically referring to VMs that provide a set of
“processor-like” fine-grained operations, attempting to simulate some sort
of magical imaginary hardware? That would describe the Java VM pretty well,
though in actuality there are real processors that run Java bytecodes
natively as well. Whether or not a language runs on top of a VM is
irrelevant, especially considering JRuby is a mostly-compatible version of
Ruby running on top of a VM. It matters much more that translation to
whatever underlying machine…virtual or otherwise…is as optimal and
clean as possible.

On Thu, Jul 27, 2006 at 06:16:25AM +0900, Ashley M. wrote:

I have yet to write anything in Ruby was less than twice as fast to
code as it would have been in bourne-sh/Java/whatever, never mind
twice as fun or maintainable. I recently rewrote an 830 line Java/
Hibernate web service client as 67 lines of Ruby, in about an hour.
With that kind of productivity, performance can go to hell!

With a 92% cut in code weight, I can certainly sympathize with that
sentiment. Wow.

On Thu, Jul 27, 2006 at 06:24:49AM +0900, Charles O Nutter wrote:

however. Thus, Java is generally half-compiled and half-interpreted,
which speeds up the interpretation process.

Half true. The Java VM could be called “half-compiled and half-interpreted”
at runtime for only a short time, and only if you do not consider VM
bytecodes to be a valid “compiled” state. However most bytecode is very
quickly compiled into processor-native code, making those bits fully
compiled. After a long enough runtime (not very long in actuality), all Java
code is running as native code for the target processor (with various
degrees of optimization and overhead).

True . . . but this results in fairly abysmal performance, all things
considered, for short runs. Also, see below regarding dynamic
programming.

The difference from AOT compilation with GCC or .NET is that Java’s
compiler can make determinations based on runtime profiling about how to
compile that “last mile” in the most optimal way possible. The bytecode
compilation does, as you say, primarily speed up the interpretation process.
However it’s far from the whole story, and the runtime JITing of bytecode
into native code is where the magic lives. To miss that is to miss the
greatest single feature of the JVM.

This also is true, but that benefit is entirely unusable for highly
dynamic code, unfortunately – and, in fact, even bytecode compilation
might be a bit too much to ask for too-dynamic code. I suppose it’s
something for pointier heads than mine, since I’m not actually a
compiler-writer or language-designer (yet). It’s also worth noting that
this isn’t accomplishing anything that isn’t also accomplished by the
Perl JIT compiler.

code or to an intermediate bytecode of some kind?
There is no intermediate bytecode step for Perl, as far as I’m aware.
It’s not a question I’ve directly asked one of the Perl internals
maintainers, but everything I know about the Perl compiler confirms my
belief that it simply does compilation to machine code.

We’re also juggling terms pretty loosely here. A compiler converts
human-readable code into machine-readable code. If the “machine” is a VM,
then you’re fully compiling. If the VM code later gets compiled into “real
machine” code, that’s another compile cycle. Compilation isn’t as cut and
dried as you make it out to be, and claiming that, for example, Java is
“half compiled” is just plain wrong.

Let’s call it “virtually compiled”, then, since it’s being compiled to
code that is readable by a “virtual machine” – or, better yet, we can
call it bytecode and say that it’s not fully compiled to physical
machine-readable code, which is what I was trying to explain in the
first place.

require some pretty clever programmers to implement in a sane fashion.

There are a lot of clever programmers out there.

True, of course. The problem is getting them to work on a given
problem.

Having worked heavily on a Ruby implementation, I can say for certain that
99% of Ruby code is static. There are some dynamic bits, especially within
Rails where methods are juggled about like flaming swords, but even these
dynamic bits eventually settle into mostly-static sections of code.

I love that imagery, with the flaming sword juggling. Thanks.

idea.
I’m sure a VM or similar approach (and, frankly, I do prefer the
fast-interpreter approach over the VM approach) would provide ample
opportunity to improve upon Ruby’s current performance, but that doesn’t
necessarily mean it’s better than other approaches to improving
performance. That’s where I was aiming.

Ruby running on top of a VM. It matters much more that translation to
whatever underlying machine…virtual or otherwise…is as optimal and
clean as possible.

A dividing line between “interpreter” and “VM” has always seemed rather
more clear to me than you make it sound. Yes, I do refer to a
simulation of an “imaginary” (or, more to the point, “virtual”) machine,
as opposed to a process that interprets code. Oh, wait, there’s that
really, really obvious dividing line I keep seeing.

The use (or lack) of a VM does indeed matter: it’s an implementation
detail, and implementation details make a rather significant difference
in performance. The ability of the parser to quickly execute what’s fed
to it is important, as you indicate, but so too is the ability of the
parser to run quickly itself – unless your program is actually compiled
to machine-native code for the hardware, in which case the lack of need
for the parser to execute at all at runtime is significant.

On Thu, Jul 27, 2006 at 06:33:08AM +0900, Ashley M. wrote:

Just out of curiosity (since I don’t know much about this subject),
what do you think of the approach Microsoft took with the CLR? From
what I read it’s very similar to the JVM except it compiles directly
to native code, and makes linking to native libraries easier. I
assume this is closer to JVM behaviour than Perl 5 behaviour. Is
there anything to be learnt from it for Ruby?

I’m not as familiar with what’s going on under the hood of the CLR as
the JVM, but from what I do know it exhibits both advantages and
disadvantages in comparison with the Java VM. Thus far, the evidence
seems to be leaning in the direction of the CLR’s advantages over the
JVM coming into play more often than the disadvantages, however, which
seems to indicate that the compromises that were made may have been the
“right” compromises, as far as this comparison goes.

In fact, the CLR seems in some ways to be a compromise between
Perl-style JIT compilation and Java-style bytecode compilation with
runtime VM-interpretation (there really needs to be a term for what a VM
does separate from either compilation or interpretation, since what it
does generally isn’t strictly either of them). There may well be
something to learn from that for future Ruby implementations, though I’d
warn away from trying to take the “all languages compile to the same
intermediate bytecode” approach that the CLR takes – it tries to be too
many things at once, basically, and ends up introducing some
inefficiencies in that sense. If you want to do everything CLR does,
with Ruby, then port Ruby to the CLR, but if you want to simply gain
performance benefits from studying up on the CLR, make sure you
cherry-pick the bits that are relevant to the task at hand.

I think Ruby would probably best benefit from something somewhere
between the Perl compiler’s behavior and the CLR compiler.
Specifically, compile all the static algorithm behavior in your code to
something persistent, link in all the rest as uncompiled (though perhaps
parse-tree compiled, which is almost but not quite the same as bytecode
compiled) code, and let that be machine-code compiled at runtime. This
might even conceivably be broken into two separate compilers to minimize
the last-stage compiler size needed on client systems and to optimize
each part to operate as quickly as possible.

Run all this stuff past a true expert before charging off to implement
it, of course. I’m an armchair theorist.

On 7/26/06, Francis C. [email protected] wrote:

We do need Ruby to solve a very different set of next-generation
problems, for which C and Java (and even Perl and Python) are very
poorly suited.

I agree, Francis, and I’d add that exactly those areas where people seem to
frequently have performance concerns are areas where Ruby’s best features
are practically ignored. Doing large-scale vector transformations does not
require the unique ability to override Fixnum operations or treat numbers as
objects, and so the benefits of those features are completely wasted while
still bringing along their own baggage. Obviously as a JRuby developer I’m
advocating using the right tool for the job, be it Java or Ruby or C. I also
know that Ruby can do better, and I’m hoping we’ll see improvements sooner
rather than later.

On Thu, Jul 27, 2006 at 08:25:11AM +0900, Csaba H. wrote:

Hence:
print x, bar() ==> 5 6
print x, bar() ==> 6 6

Is it just me, or are there no proper closures in that example code?

On 2006-07-26, Sean O’Halpin [email protected] wrote:

implemented. For example, Ruby has real closures, Python doesn’t. I

Even if OT, just for the sake of correctness: let me remark that Python
does have closures. Local functions (ones defined within another
function’s body) are scoped lexically.

It’s just sort of an anti-POLA (and inconvenient, as-is) piece of
semantics that variables get reinitialized upon assignment.

Hence:

def foo():
    x = 5
    def bar():
        x = 6
        return x
    bar()
    return x, bar

x, bar = foo()
print x, bar()   # ==> 5 6

def foo():
    _x = [5]
    def bar():
        _x[0] = 6
        return _x[0]
    bar()
    return _x[0], bar

x, bar = foo()
print x, bar()   # ==> 6 6
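For contrast, the same example in Ruby, where the block genuinely closes over the enclosing variable instead of creating a fresh local on assignment:

```ruby
def foo
  x = 5
  bar = lambda do
    x = 6      # assigns to the captured x, not a new local
    x
  end
  bar.call
  [x, bar]
end

x, bar = foo
print x, " ", bar.call, "\n"   # ==> 6 6
```

No boxing-in-a-list workaround is needed; the closure and the enclosing scope share the same variable binding.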

Regards,
Csaba