Why does this code leak?

On Jan 10, 2008, at 5:26 AM, Robert D. wrote:

class Foo
  def initialize
    ObjectSpace.define_finalizer self, lambda{}
  end
end

(42/6).times {
  GC.start
  p "Foo" => ObjectSpace.each_object(Foo){}
  Foo.new
}

When you create the lambda, what is the value of “self” inside the
lambda?

The answer is that it is going to be the object in which the lambda
was created. In the code above, this would be the object that you are
trying to finalize – i.e. an instance of Foo. Since the lambda has a
reference to the Foo instance, that instance will always be marked by
the GC, and hence, it will never be garbage collected.
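To see the capture directly, here is a small sketch (my own, not from the original post; the class and method names are made up) that pulls self back out of an otherwise empty lambda:

class Foo
  def make_lambda
    lambda {}                    # empty body, but the method's binding is captured
  end
end

prc = Foo.new.make_lambda
p eval("self", prc.binding)      # => #<Foo:0x...> -- the Foo instance is still reachable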

You can verify this by adding a puts statement inside the lambda …

$ cat a.rb
class Foo
  def initialize
    ObjectSpace.define_finalizer self, lambda {puts self.object_id}
  end
end

10.times {
  GC.start
  Foo.new
  p "Foo" => ObjectSpace.each_object(Foo){}
}

$ ruby a.rb
{"Foo"=>1}
{"Foo"=>2}
{"Foo"=>3}
{"Foo"=>4}
{"Foo"=>5}
{"Foo"=>6}
{"Foo"=>7}
{"Foo"=>8}
{"Foo"=>9}
{"Foo"=>10}

The object ID is never printed; hence, the finalizer is never called.

Now let’s define the finalizer lambda outside the scope of the
instance we are trying to finalize. This prevents the lambda from
having a reference to the Foo instance.

$ cat a.rb
Finalizer = lambda do |object_id|
  puts object_id
end

class Foo
  def initialize
    ObjectSpace.define_finalizer self, Finalizer
  end
end

10.times {
  GC.start
  Foo.new
  p "Foo" => ObjectSpace.each_object(Foo){}
}

$ ruby a.rb
{"Foo"=>1}
89480
{"Foo"=>1}
{"Foo"=>2}
89480
{"Foo"=>2}
84800
{"Foo"=>2}
89480
{"Foo"=>2}
84800
{"Foo"=>2}
89480
{"Foo"=>2}
84800
{"Foo"=>2}
{"Foo"=>3}
84780
84800
89480

You’ll notice that the Foo instance count does not grow (yes, it is
shown as non-zero at the end of the program). But you’ll also notice
that the finalizer is called exactly 10 times. Even though the last
Foo instance count shows 3 objects remaining, they are cleaned up as
shown by their object IDs being printed out by our finalizer.

The lesson here is that you always need to create your finalizer Proc
at the Class level, not at the instance level.
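If the finalizer needs per-instance data, one way to keep self out of the closure (a sketch of my own, assuming you only need plain values such as a timestamp) is a class-level factory method:

class Foo
  def self.finalizer(created_at)
    # Only the timestamp is closed over; self here is the class Foo, not the instance.
    lambda { |object_id| puts "Foo #{object_id} (created #{created_at}) finalized" }
  end

  def initialize
    ObjectSpace.define_finalizer(self, self.class.finalizer(Time.now))
  end
end

Because the lambda is built inside a class method, the instance never enters its binding, so it can still be collected.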

The ruby garbage collector is conservative, but it will clean up after
you just fine.

Blessings,
TwP

On Jan 10, 2008, at 1:03 PM, Rick DeNatale wrote:


Note that we are in class Class so self when the lambda is created is
not the new instance but Class itself. In this case it looks as if
the lambda (or something else) is holding on to the binding of the
caller of the finalize method where object is bound to the object to
be finalized.

Hmmm … I get the same results as my previous example:

$ cat a.rb
class Class
  def leaky_finalizer
    lambda {|object_id|
      puts "#{object_id} #{local_variables.inspect} #{instance_variables.inspect}"
    }
  end

  def new(*a, &b)
    object = allocate
    object.send :initialize, *a, &b
    object
  ensure
    ObjectSpace.define_finalizer object, leaky_finalizer
  end
end

class Foo; end

10.times {
  GC.start
  Foo.new
  p "Foo" => ObjectSpace.each_object(Foo){}
}

$ ruby a.rb
{"Foo"=>1}
{"Foo"=>2}
{"Foo"=>3}
84800 ["object_id"] []
{"Foo"=>3}
89480 ["object_id"] []
84470 ["object_id"] []
{"Foo"=>2}
{"Foo"=>3}
84360 ["object_id"] []
83880 ["object_id"] []
{"Foo"=>2}
89480 ["object_id"] []
{"Foo"=>2}
83740 ["object_id"] []
{"Foo"=>2}
84730 ["object_id"] []
{"Foo"=>2}
84770 ["object_id"] []
84800 ["object_id"] []

It looks like everything is getting cleaned up – just not as quickly
as one would assume. But by the end of the program, all 10 finalizers
have been called.

Blessings,
TwP

On Jan 10, 2008 9:03 PM, Rick DeNatale [email protected] wrote:

When you create the lambda, what is the value of "self" inside the lambda?

The answer is that it is going to be the object in which the lambda
was created. In the code above, this would be the object that you are
trying to finalize – i.e. an instance of Foo. Since the lambda has a
reference to the Foo instance, that instance will always be marked by
the GC, and hence, it will never be garbage collected.

Right,
I honestly fail to see why the closure should take a reference to the object. I am with Ara here: there is no need to keep a reference to the object, and this is not only my humble opinion but also that of Ruby 1.9 ;). If it is not a bug, it is at least odd behavior.

Robert

http://ruby-smalltalk.blogspot.com/


Whereof one cannot speak, thereof one must be silent.
Ludwig Wittgenstein

On 1/10/08, Tim P. [email protected] wrote:

On Jan 10, 2008, at 5:26 AM, Robert D. wrote:

class Foo
  def initialize
    ObjectSpace.define_finalizer self, lambda{}
  end
end

When you create the lambda, what is the value of “self” inside the
lambda?

The answer is that it is going to be the object in which the lambda
was created. In the code above, this would be the object that you are
trying to finalize – i.e. an instance of Foo. Since the lambda has a
reference to the Foo instance, that instance will always be marked by
the GC, and hence, it will never be garbage collected.

Right, this analysis is correct for Robert’s code, and I was thinking
the same thing about Ara’s “leaky_finalizer” code as well, but that
code, here simplified, doesn’t have the same problem as far as I can
tell:

class Class

  def leaky_finalizer
    lambda{}
  end

  def new *a, &b
    object = allocate
    object.send :initialize, *a, &b
    object
  ensure
    ObjectSpace.define_finalizer object, leaky_finalizer
  end
end

Note that we are in class Class so self when the lambda is created is
not the new instance but Class itself. In this case it looks as if
the lambda (or something else) is holding on to the binding of the
caller of the finalize method where object is bound to the object to
be finalized.


Rick DeNatale

My blog on Ruby
http://talklikeaduck.denhaven2.com/

On 1/10/08, Robert D. [email protected] wrote:

I honestly fail to see why the closure should take a reference to the object. I am with Ara here: there is no need to keep a reference to the object, and this is not only my humble opinion but also that of Ruby 1.9 ;). If it is not a bug, it is at least odd behavior.

Because the code which creates the proc doesn’t do any analysis of
what’s inside the block.

It's like the story about the guy who checked out of a hotel and protested the mini-bar charge on his bill, saying, "I didn't use anything from the mini-bar." The hotel manager said, "I'm sorry sir, it's hotel policy: the mini-bar was available for your use, and we have to charge a fee to maintain it."

The man hesitated a second and quickly wrote out a bill for $100 and
presented it to the Hotel Manager, who asked “What’s this for?”

The man said “For sleeping with my wife.”

The Hotel manager said, “I didn’t sleep with your wife!”

To which the man said, “But she was available for your use, and she’s
much more expensive to maintain than that mini-bar.”

Seriously, it might be possible for the Ruby parser to mark the AST or byte-codes representing a block to indicate whether or not it needed to be a closure, and perhaps even to limit what actually got bound, but as far as I know it doesn't.

I’m also of the opinion that expecting objects to be reclaimed as
rapidly as ‘logically’ possible might not be the best trade-off in
designing a GC anyway.


Rick DeNatale

My blog on Ruby
http://talklikeaduck.denhaven2.com/

On Jan 10, 2008 10:15 PM, Robert D. [email protected] wrote:

I honestly fail to see why the closure should take a reference to the object. I am with Ara here: there is no need to keep a reference to the object, and this is not only my humble opinion but also that of Ruby 1.9 ;). If it is not a bug, it is at least odd behavior.

I thought the same at first. Read
Ola Bini: Programming Language Synchronicity: Ruby closures and memory usage
and you should change your opinion.

Two comments from the post:
“Many languages have compilers that perform analysis on the closures
and capture only the variables it actually closes on; This includes
the “self” variable of the instance. This is not currently the case
with Ruby? Is there any reason why the compiler could not be
implemented in such a way?”

Ola response:
“Tomas: yes, there is a marvelous reason for this: eval. There is no
way for the parser to know which variables will be used at any point.
The price you pay for a VERY dynamic language.”
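A tiny sketch of the eval problem Ola is pointing at (hypothetical code, not from the post): the block's source never mentions the local, yet eval can still reach it at call time, so the parser cannot safely prune the binding.

def make_block
  secret = 42
  lambda { |code| eval(code) }   # nothing in this block's text mentions 'secret'
end

blk = make_block
p blk.call("secret")             # => 42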


Radosław Bułat

http://radarek.jogger.pl - my blog

On Jan 10, 2008 10:26 PM, Radosław Bułat [email protected] wrote:

I honestly fail to see why the closure should take a reference to the object. I am with Ara here: there is no need to keep a reference to the object, and this is not only my humble opinion but also that of Ruby 1.9 ;). If it is not a bug, it is at least odd behavior.

I thought the same at first. Read
Ola Bini: Programming Language Synchronicity: Ruby closures and memory usage
and you should change your opinion.
I read it and I feel that it has nothing to do with what we are discussing here. Ola states - incorrectly, I believe, although I admit he is bright - that blocks need to keep references to self, but why he omits…
Sorry if I am being stupid, but I still cannot see any need to capture self in a block.

"There is no way for the parser to know which variables will be used at any point. The price you pay for a VERY dynamic language."

Answering this, and to Rick (I have read your reply in the meantime):

I was not referring to making an analysis of what to capture (an interesting discussion too, of course) but to not capturing self in any case!

As said above, I would love to know a case where this is needed; if there is none, one should probably file a bug report against Ruby 1.9.

Cheers
Robert




http://ruby-smalltalk.blogspot.com/


Whereof one cannot speak, thereof one must be silent.
Ludwig Wittgenstein

On 1/10/08, Radosław Bułat [email protected] wrote:

On Jan 10, 2008 10:15 PM, Robert D. [email protected] wrote:

“Many languages have compilers that perform analysis on the closures
and capture only the variables it actually closes on; This includes
the “self” variable of the instance. This is not currently the case
with Ruby? Is there any reason why the compiler could not be
implemented in such a way?”

Ola response:
“Tomas: yes, there is a marvelous reason for this: eval. There is no
way for the parser to know which variables will be used at any point.
The price you pay for a VERY dynamic language.”

But this is a mis-analysis. Just because you can dynamically eval a string at run-time doesn't mean that the compiler can't perform analysis on the code producing a block, whether that's done at 'compile-time' or 'execution-time', which in Ruby are actually the same time anyway.

Doing optional optimizations based on analysis of the actual code is something commonly done in advanced implementations of dynamic languages. Some languages do this optimization later than others. Smalltalk, for example, DOES have what is in effect separate compile and execute times, and Smalltalk sometimes restricts the program so as to allow optimizations such as turning

  expression ifTrue: [x] ifFalse: [y]

into test and branch logic instead of the method send it appears to be. Other languages, like Self, do these optimizations much later, often AFTER partial execution, deferring them until the VM notices that a particular code path is frequently executed and would benefit. These kinds of techniques go far beyond what would be required for the Ruby compiler/VM to analyze a block for references before reifying a proc.

Having said all this, I would urge caution, because such
implementation approaches work best when accomplished by careful
cost-benefit analysis.


Rick DeNatale

My blog on Ruby
http://talklikeaduck.denhaven2.com/

On Jan 10, 2008 11:00 PM, Rick DeNatale [email protected] wrote:

Having said all this, I would urge caution, because such
implementation approaches work best when accomplished by careful
cost-benefit analysis.
Agreed, but do you think that this kind of indeterminism is acceptable upon an explicit call to GC.start? I am not sure.
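A small illustration of that indeterminism (my own sketch using the stdlib weakref, not code from the thread):

require 'weakref'

obj = Object.new
ref = WeakRef.new(obj)
obj = nil

GC.start
# With a conservative collector there is no hard guarantee either way here:
# a stale copy of the reference on the stack or in a register keeps the object alive.
p ref.weakref_alive?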

Cheers
Robert


Whereof one cannot speak, thereof one must be silent.
Ludwig Wittgenstein

On Fri, 11 Jan 2008 07:09:04 +0900, “Rick DeNatale”
[email protected] wrote:

Now since the VM doesn't look inside the block when creating a proc, it has to assume that the binding of the context in which the block was created has to be captured.

Also, even if the VM did look inside the block to see which variables were captured, it would have to keep all of them around anyway, because they have to remain accessible given that the binding is exposed via Proc#binding.

(That’s more or less the main reason why us JRuby folks aren’t big fans
of Proc#binding…)
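A short sketch of what that means in practice (my example, with invented names): even a completely empty proc keeps every local of its creation context reachable through its binding.

def make_empty_proc
  big_buffer = "x" * 10_000_000   # never mentioned inside the block
  proc {}
end

pr = make_empty_proc
# The local is still alive and reachable through the proc's binding:
p eval("big_buffer", pr.binding).size   # => 10000000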

-mental

On 1/10/08, Robert D. [email protected] wrote:

As said above, I would love to know a case where this is needed; if there is none, one should probably file a bug report against Ruby 1.9.

Imagine this code:

class Foo
  def initialize
    creation_time = Time.now
    ObjectSpace.define_finalizer self, lambda { |object_id|
      puts "An Object has died (#{creation_time}-#{Time.now}) R.I.P."
    }
  end
end

Here in the block, puts really means self.puts, so the block needs to capture the binding of self, as well as the binding of the local creation_time.

Now since the VM doesn't look inside the block when creating a proc, it has to assume that the binding of the context in which the block was created has to be captured.
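A slightly fuller sketch of that point (class and method names invented for illustration): a bare method call in a block is really a call on the captured self, so dropping self would break it.

class Reporter
  def initialize
    @tag = "reporter"
  end

  def report_block
    # 'emit' below really means self.emit, and @tag lives on self too,
    # so the block has to carry the Reporter instance around with it.
    lambda { emit("done") }
  end

  private

  def emit(message)
    puts "#{@tag}: #{message}"
  end
end

Reporter.new.report_block.call   # prints "reporter: done"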


Rick DeNatale

My blog on Ruby
http://talklikeaduck.denhaven2.com/

On Jan 10, 2008 11:09 PM, Rick DeNatale [email protected] wrote:
Thanks for your time, Rick.
I have just written lots of test code and there is no need to post it; it is clear that self is captured in the closure. (Probably very useful too.)
This happens for 1.9 too, so why did we get the following?

class Foo
  def initialize
    creation_time = Time.now
    ObjectSpace.define_finalizer self, lambda { |object_id|
      puts "An Object has died (#{creation_time}-#{Time.now}) R.I.P."
    }
  end
end

I guess the finalizer is not used and thus the lambda is thrown away:

682/183 > cat leak.rb && ruby1.9 leak.rb

# vim: sw=2 ts=2 ft=ruby expandtab tw=0 nu syn:

Foo = Class::new{
  def initialize
    ObjectSpace.define_finalizer self, lambda {p :finalized}
  end
}

(42/7).times {
  Foo.new
  GC.start
  p "Foo" => ObjectSpace.each_object(Foo){}
}
{"Foo"=>1}
{"Foo"=>1}
{"Foo"=>1}
{"Foo"=>1}
{"Foo"=>1}
{"Foo"=>1}

Bingo!!!
Robert

Whereof one cannot speak, thereof one must be silent.
Ludwig Wittgenstein

On 1/10/08, MenTaLguY [email protected] wrote:

On Fri, 11 Jan 2008 07:09:04 +0900, “Rick DeNatale” [email protected] wrote:

Now since the VM doesn't look inside the block when creating a proc, it has to assume that the binding of the context in which the block was created has to be captured.

Also, even if the VM did look inside the block to see which variables were captured, it would have to keep all of them around anyway, because they have to remain accessible given that the binding is exposed via Proc#binding.

Good observation!

$ qri Proc#binding
----------------------------------------------------------- Proc#binding
prc.binding => binding


 Returns the binding associated with prc. Note that Kernel#eval
 accepts either a Proc or a Binding object as its second parameter.

    def fred(param)
      proc {}
    end

    b = fred(99)
    eval("param", b.binding)   #=> 99
    eval("param", b)           #=> 99

Any optimization of procs by making them less than a full closure, even those representing an empty block, would break this 'specification'.

On the other hand Ruby 1.9 made changes to much less obscure
specifications!


Rick DeNatale

My blog on Ruby
http://talklikeaduck.denhaven2.com/

This discussion reminds me of how such little details can have
significant effects. Having the Proc#binding method seems to me to be
somewhat similar to the “classical” Smalltalk dependency design.

This was one of, if not the, first examples of the Observer pattern.

Smalltalk defines methods on Object which allow dependents (observers) to be added to any object. An object notifies its observers when it changes via self.changed, which sends the message update to each dependent with the changed object as the parameter.

Since this could be used with any object, but was actually used with few objects, the implementation in the Object class stored the list of dependents in a global identity dictionary (a hash which uses identity rather than equality in comparing keys) keyed on the object.

What this means is that as long as an object has any dependents, it and its dependents can't be GCed, even though nothing outside of the dependency graph refers to any of those objects.

For Smalltalk applications which actually used dependents it was
common practice to override the method used to find the collection of
dependents and keep it in an instance variable in the object itself
rather than using the global identity dictionary. I just looked at
the Squeak image and there’s a subclass of Object called Model whose
sole purpose is to do this.

Interestingly, if one were to do this in Ruby, the default implementation could easily use an instance variable, since in Ruby, unlike Smalltalk, instance variables don't take up any space in an object until they are actually needed, i.e.:

class Object

  def dependents
    # Defer actually creating a dependents iv until we have at least one dependent
    @dependents || []
  end

  def add_dependent(dependent)
    (@dependents ||= []) << dependent
  end

  def changed
    self.dependents.each {|dependent| dependent.update(self)}
  end

end
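As a quick usage sketch (my example, names invented), note that the @dependents ivar only shows up in the object once the first dependent is added:

class Counter
  attr_reader :value

  def initialize
    @value = 0
  end

  def increment
    @value += 1
    changed                    # notify dependents, as in the Smalltalk protocol
  end
end

class ChangeLogger
  def update(subject)
    puts "counter is now #{subject.value}"
  end
end

c = Counter.new
p c.instance_variables         # only @value so far; no @dependents slot yet
c.add_dependent(ChangeLogger.new)
p c.instance_variables         # now @dependents has appeared, allocated on demand
c.increment                    # prints "counter is now 1"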

This dynamic instance variable allocation is one of the reasons I now prefer Ruby to Smalltalk, despite a long relationship with the latter.


Rick DeNatale

My blog on Ruby
http://talklikeaduck.denhaven2.com/

On Fri, 11 Jan 2008 08:41:38 +0900, “Rick DeNatale”
[email protected] wrote:

Any optimization of procs by making them less than a full closure, even those representing an empty block, would break this 'specification'.

On the other hand Ruby 1.9 made changes to much less obscure
specifications!

Well… my impression was that Matz wasn’t sold on the idea of changing
it
at the time it was discussed on ruby-core.

-mental

On Jan 11, 2008 1:19 AM, Rick DeNatale [email protected] wrote:

This dynamic instance variable allocation is one of the reasons I now prefer Ruby to Smalltalk, despite a long relationship with the latter.

This is indeed a feature I like a lot; its suppression was discussed once, but it is still there. OTOH, who knows, maybe Squeak will have it tomorrow. Do you think that would be possible with the current VM?

Cheers
Robert


Whereof one cannot speak, thereof one must be silent.
Ludwig Wittgenstein

On 1/11/08, Robert D. [email protected] wrote:

On Jan 11, 2008 1:19 AM, Rick DeNatale [email protected] wrote:

This dynamic instance variable allocation is one of the reasons I now prefer Ruby to Smalltalk, despite a long relationship with the latter.

This is indeed a feature I like a lot; its suppression was discussed once, but it is still there. OTOH, who knows, maybe Squeak will have it tomorrow. Do you think that would be possible with the current VM?

Well, just about anything is possible; as we used to say, it's a Simple Matter of Programming.

On the other hand, I doubt that it would be practical to do this with Squeak or other ST implementations of which I'm aware. It's pretty fundamental to the design of the VM that instance variables are bound at class definition time to an offset from the beginning of the object. The byte code is optimized for fetching and storing such iv references. When you change a class definition in Smalltalk, say by adding an iv, the IDE recompiles all the methods of the class and any subclasses, since this causes ivs to move around in the object instance. Most ST implementations then mutate any existing instances as well.

Dave Ungar, after starting work on Self, used to amuse himself by
going to various Smalltalk implementations, adding an instance
variable to Object and seeing how long the system lived.

I just tried this with Squeak: I got a warning that Object can't be changed, with the option to proceed anyway, then a second warning with a proceed option, after which it started churning away recompiling all the classes in the image. It got through about 30 of the 1500 or so and hung.

Ruby is more like Self than Smalltalk in this regard. In Ruby, IVs are implemented as values in a hash keyed by the iv name. In Self, the whole object is basically a collection of named slots, and methods are just executable objects referenced by some of these slots.

So in Smalltalk, the class holds both a format descriptor of its instances and a method dictionary used to find instance methods. In Ruby, the instance layout is in the object itself and is self-described via the hash, while the method dictionary remains in the klass. In Self, everything is in the 'instance': there are no formal classes, but there is a notion of delegation via a special reference slot which is used to find a named slot that is not in the current object.
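In Ruby terms that per-object, name-keyed layout is easy to see (a trivial sketch of my own):

obj = Object.new
p obj.instance_variables                  # nothing reserved up front

obj.instance_variable_set(:@answer, 42)   # the slot is created by name, at runtime,
p obj.instance_variable_get(:@answer)     # => 42
p obj.instance_variables                  # and only this one object gains it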


Rick DeNatale

My blog on Ruby
http://talklikeaduck.denhaven2.com/

On Jan 11, 2008 11:33 AM, Rick DeNatale [email protected] wrote:

In Ruby, the instance layout is in the object itself and is self-described via the hash, while the method dictionary remains in the klass. In Self, everything is in the 'instance': there are no formal classes, but there is a notion of delegation via a special reference slot which is used to find a named slot that is not in the current object.
Very interesting stuff. I thought that in the Blue Book there were ivars in predefined slots (16) and others were added in a dictionary (of course a very rare case), so somehow I wondered if dynamic ivars could just be added to the dictionary. But I am afraid that I am completely OT now; anyway, thanks a lot for your time, Rick.

Robert




http://ruby-smalltalk.blogspot.com/


Whereof one cannot speak, thereof one must be silent.
Ludwig Wittgenstein