Couple quick questions about YARV

I know YARV is far from finished, but:

  1. any idea what the up-front cost for starting up YARV will be? I’m
    just hoping we won’t see 1 second of disk churn that we’ve come to
    expect from the JVM.

  2. will existing Ruby extensions continue to work?

Josh

Hi,

In message “Re: couple quick questions about YARV”
on Mon, 13 Feb 2006 08:28:32 +0900, Joshua H.
[email protected] writes:

|1. any idea what the up-front cost for starting up YARV will be? I’m
|just hoping we won’t see 1 second of disk churn that we’ve come to
|expect from the JVM.

% ruby1.8 -ve 0
ruby 1.8.4 (2005-12-24) [i486-linux]
-e:1: warning: useless use of a literal in void context

real 0m0.009s
user 0m0.004s
sys 0m0.001s

% ruby.yarv -ve 0
ruby 1.9.0 (2006-02-09) [i686-linux]
YARVCore 0.3.3 (rev: 389) [opts: ]
warning: useless use of a literal in void context

real 0m0.006s
user 0m0.004s
sys 0m0.001s

|2. will existing Ruby extensions continue to work?

Currently, yes. No promise for the future.

						matz.

Not to hijack the OP’s thread, but a follow-up question:

What speed-up can realistically be expected? I’ve seen Koichi’s slides
and they look impressive, but if I’m not mistaken the optimizations
looked to be concerned with basic math operations and so on. What about
function calls / blocks / object instantiations / hash and array
implementation / … ? Is YARV going to muscle those around as well,
or not really?
If so (or if not), what performance improvements would be within reach?

thanks,
stijn

Hi,

In message “Re: couple quick questions about YARV”
on Mon, 13 Feb 2006 13:23:24 +0900, “stijn”
[email protected] writes:

|what speed-up can realistically be expected?

YARV runs much faster on calls and basic operations. But it runs
slower on eval()'ing. Ko1 is now working on it, I think.

						matz.

I’ve just tried:

$ cat bm.rb
def m
  yield
end

i = 0
1000000.times do
  i = m { m { i } + 1 }
end

puts i
$ time ruby bm.rb
1000000

real 0m11.721s
user 0m9.927s
sys 0m0.081s
$ time ruby.yarv bm.rb
1000000

real 0m2.376s
user 0m2.275s
sys 0m0.026s

It’s a silly example, but it looks very impressive anyway.

Kent.


2006/2/13, Kent S. [email protected]:

i = m { m{ i } + 1}
1000000

real 0m2.376s
user 0m2.275s
sys 0m0.026s

It’s a silly example, but it looks very impressive anyway.

Kent.

Can anyone post a benchmark using a simple eval()?

greetings, Dirk
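Not a measurement from the YARV build, but a minimal sketch of such a
benchmark (using the stdlib Benchmark module; the expression and the
iteration count are arbitrary) could look like:

```ruby
require 'benchmark'

N = 100_000

Benchmark.bm(8) do |bm|
  # Plain method call, for comparison.
  bm.report("direct:") { N.times { [1, 2, 3].length } }
  # The same expression parsed and evaluated anew on every iteration.
  bm.report("eval:")   { N.times { eval("[1, 2, 3].length") } }
end
```

The eval case has to re-process the source string each time, which is
the part matz says YARV currently handles more slowly.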

On Monday, 13 February 2006 05:32, Yukihiro M. wrote:

Hi,

In message “Re: couple quick questions about YARV”

on Mon, 13 Feb 2006 13:23:24 +0900, "stijn" <[email protected]> 

writes:

|what speed-up can realistically be expected?

YARV runs much faster on calls and basic operations. But it runs
slower on eval()'ing. Ko1 is now working on it, I think.

  					matz.

I wonder if that will be an incentive for people to use standard
reflection features instead of eval hacks whenever possible; for me,
the latter currently come out faster than the former in trivial
benchmarks.

Is something like Smalltalk-ish “clean” blocks being pondered, to make
calls like define_method less prone to leaking memory by keeping local
scopes from being GCed?

David V.
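For a concrete contrast between the two styles (a sketch with a
hypothetical class, not code from YARV), the same accessor can be built
either by eval’ing a source string or by reflection with define_method:

```ruby
class Point
  def initialize(x)
    @x = x
  end

  # eval-hack style: build source text and evaluate it.
  class_eval <<-RUBY
    def x_via_eval
      @x
    end
  RUBY

  # Reflection style: no string parsing involved. Note the block
  # closes over the class body's scope -- the memory concern above.
  define_method(:x_via_reflection) { @x }
end

pt = Point.new(42)
pt.x_via_eval        # => 42
pt.x_via_reflection  # => 42
```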

On Feb 12, 2006, at 8:12 PM, Yukihiro M. wrote:

[snip evidence of YARV starting faster than Ruby 1.8]
Excellent! I’m really glad to see that.

For a slightly more radical question: are there any long-term plans
to revisit the garbage-collection strategy? For interactive
programs, it is really undesirable to have the whole program lock up
from time to time (while the garbage collector runs). I know Java
has implemented some alternative garbage collection algorithms to
support more real-time use. And there have been some papers
published about multithreaded garbage collection (like [0])

Thoughts?

Josh

[0] http://portal.acm.org/citation.cfm?id=286878


A way to do partial, time-limited GC runs would be nice, but I have no
idea about feasibility. Run GC for 2ms, then do more stuff, and when
the alloc limit is hit, run GC for 2ms again. A single 20ms GC run is
a visible glitch, but twenty 2ms runs spread over twenty frames is
practically invisible.
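There is no incremental mode in the current collector, but interactive
programs sometimes approximate the effect by forcing a full GC.start
every frame while the accumulated garbage is still small, trading one
long pause for many short ones (a sketch; render_frame is a
hypothetical stand-in for a frame’s workload):

```ruby
# Hypothetical per-frame workload: allocates a frame's worth of garbage.
def render_frame
  Array.new(1_000) { |i| i.to_s }
end

20.times do
  render_frame
  # Collect eagerly each frame: each full GC is cheap because only
  # one frame's worth of garbage has accumulated since the last run.
  GC.start
end
```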

Ilmari H. wrote:

twenty 2ms runs spread over twenty frames is practically invisible.
I think the easier way would be, again, to integrate Boehm’s GC, which
claims to support incremental collection, but matz tried it many years
ago and I recall he wasn’t satisfied (search ruby-talk for details).
Maybe things have changed.