Tail Call Optimization (Tail Recursion)

I did some googling to find out if Ruby supports tail call optimization,
as I wanted to write tail-recursive functions. All posts I found
indicated that 1.9 supports this, but it is not enabled by default.
However, all those posts were about two years old, so I’m wondering what
is the current situation: do I still need to upgrade to 1.9 to get that
capability? Or is there something in 1.8 I can enable?

Terry M. wrote in post #993440:

I did some googling to find out if Ruby supports tail call optimization,
as I wanted to write tail-recursive functions. All posts I found
indicated that 1.9 supports this, but it is not enabled by default.
However, all those posts were about two years old, so I’m wondering what
is the current situation: do I still need to upgrade to 1.9 to get that
capability? Or is there something in 1.8 I can enable?

Couldn’t you test that by creating two recursive functions: one with a
tail call and one without, and then benchmarking them?

7stud – wrote:

Terry M. wrote in post #993440:

I did some googling to find out if Ruby supports tail call optimization,
as I wanted to write tail-recursive functions. All posts I found
indicated that 1.9 supports this, but it is not enabled by default.
However, all those posts were about two years old, so I’m wondering what
is the current situation: do I still need to upgrade to 1.9 to get that
capability? Or is there something in 1.8 I can enable?

Couldn’t you test that by creating two recursive functions: one with a
tail call and one without, and then benchmarking them?

def fib n, a = 0, b = 1
  if n == 0
    b
  else
    a, b = b, a + b
    fib n - 1, a, b
  end
end

1.upto(9){|n|
  p [n, fib(n)]
}

p fib( 50_000 )

==>

[1, 1]
[2, 2]
[3, 3]
[4, 5]
[5, 8]
[6, 13]
[7, 21]
[8, 34]
[9, 55]
try4.rb:23:in `fib': stack level too deep (SystemStackError)
        from try4.rb:23:in `fib'
        from try4.rb:31

irb(main):004:0> VERSION
=> "1.8.7"

it seems like fib(50000) is not only too big for MRI Ruby; Python, Lua and Rubinius panic on it as well…

jRuby also complains but points to instructions to increase the stack…

Otherwise, I have MacRuby, Racket and Haskell on their tail-recursive fib-solving routine; no results yet, but all are still working…

2011/4/18 WJ [email protected]

There’s a flag that you can enable to turn it on.

Also, http://timelessrepo.com/tailin-ruby
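
For reference, a minimal sketch of what enabling that flag looks like on MRI 1.9, based on my reading of the article linked above; the option names and the need to compile the code after setting them are assumptions worth checking against your own Ruby:

# Sketch only -- verify on your Ruby version. Only code compiled *after*
# the option is set gets the optimization, hence the string eval.
RubyVM::InstructionSequence.compile_option = {
  :tailcall_optimization => true,
  :trace_instruction     => false   # tracing has to be off for TCO
}

RubyVM::InstructionSequence.new(<<-CODE).eval
  def fib(n, a = 0, b = 1)
    return b if n == 0
    fib(n - 1, b, a + b)            # tail position, so the frame is reused
  end
CODE

p fib(50_000).to_s.length           # completes instead of blowing the stack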

Louis-Philippe wrote:

Otherwise, I have MacRuby, Racket and Haskell on their tail-recursive fib-solving routine; no results yet, but all are still working…

I’m surprised that they are still working. Gambit Scheme takes
very little time for this.

(define (fib n a b)
  (if (zero? n)
      b
      (fib (- n 1) b (+ a b))))

(time
 (let ((bigfib (fib 50000 0 1)))
   #t))

==>

453 ms real time
454 ms cpu time (438 user, 16 system)
762 collections accounting for 141 ms real time (125 user, 16 system)
119074992 bytes allocated
no minor faults
no major faults

On 2011-04-18, at 13:40, Louis-Philippe wrote:

it seems like fib(50000) is not only too big for MRI Ruby; Python, Lua and Rubinius panic on it as well…

There was some traffic on the Python mailing list maybe a year ago; it seems that Python will not implement TR optimization in the near (or likely far) future; Python’s BDFL seems unconvinced of its value. AFAIK no TR PEP has ever been written. Lua’s TR is very specific: I believe the recursive call has to stand as a statement on its own. When I tried this with Lua 5.2.0 (alpha), I saw every evidence of TR optimization.

-- reconstructed header: the post lost it; same tail-recursive shape as above
function fib(n, a, b)
  a, b = a or 0, b or 1
  if n == 0 then return b end
  return fib(n - 1, b, a + b)   -- the tail call stands alone as `return f(...)`
end
print(fib(1), fib(3), fib(10), fib(100), fib(1000), fib(10000))
1 3 89 5.7314784401382e+20 7.0330367711423e+208 inf
print(fib(1000000))
inf

TR optimization can’t easily be done on the JVM (and probably on .NET),
because it interferes with the code verification process the JVM does.
Accordingly, I doubt TR can be made mandatory for Ruby implementations,
which is what I’d like. However, a boolean function or built-in constant
that says whether it’s performed would be very nice, so one can test
this before running a program.
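
For illustration, a rough sketch of that kind of check (the method name is made up, and the probe can only detect TCO if the probe itself was compiled with the optimization turned on):

# Illustrative probe only -- Ruby has no built-in constant for this. It
# makes a deep tail-recursive call and sees whether the stack survives.
def deep_tail(n)
  return true if n.zero?
  deep_tail(n - 1)               # tail position
end

tco_enabled = begin
                deep_tail(500_000)   # deep enough to overflow a normal stack
              rescue SystemStackError
                false
              end

puts "tail call optimization appears #{tco_enabled ? 'enabled' : 'disabled'}"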

– vincent

Guido has said that any interpreter that implements TCO is not Python.

On 2011-04-18, at 18:42, Steve K. wrote:

Guido has said that any interpreter that implements TCO is not Python.

I recall that quote, and it totally mystifies me. Of course, he IS BDFL,
and has the right to say what is and is not Python. But it seems like a
very peculiar comment, and, based on some of the message traffic I saw
at the time, he seems to misunderstand what this optimization is. There
are good reasons (relating to the JVM, which I mentioned earlier) why
one might decide NOT to make it mandatory, but his absolute refusal
seems somewhat doctrinaire to me.

If every environment in which Ruby (or Python) runs supported TCO, then
there would be little reason to ban it. If that isn’t practical, then a
way of discerning whether it’s supported in a given implementation seems
appropriate.

That said, very few tail recursive methods would likely be written in
Ruby, given the many other fine ways of looping. So my reason for
wanting it has more to do with a small set of cases where it’s more
expressive than other techniques, rather than something that
substantially changes the way we write programs.

In reality, the more useful benefit of TCO comes when it’s not a
recursive procedure, but simply invoking another method, and thus using
less stack. That leads, for example, to a very nice and clean way of
writing state machines.
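
To make that concrete, here is a small illustrative sketch of my own (not from this thread): each state is a method whose final action is calling the next state, so with TCO enabled an arbitrarily long chain of transitions needs only constant stack, while without it every transition burns a frame.

# Toy even/odd recognizer in the state-machine style described above; the
# method names are made up. With TCO each mutual tail call reuses the
# current frame; without it a large n eventually raises SystemStackError.
def even_state(n)
  return :accept if n.zero?
  odd_state(n - 1)          # tail call into the next state
end

def odd_state(n)
  return :reject if n.zero?
  even_state(n - 1)         # tail call back
end

p even_state(10)            # => :accept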

– vincent

I’m surprised that they are still working. Gambit Scheme takes
very little time for this.

and they still are :wink: as I was not using the same implementation of fib… mine was not tail recursive:

def fib(n)
  if n > 1
    fib(n-1) + fib(n-2)
  else
    n
  end
end

opposed to the tail recursive implementation cited above:

def tail_fib n, a = 0, b = 1
  if n == 0
    b
  else
    a, b = b, a + b
    tail_fib n - 1, a, b
  end
end

the thing is the tail recursive one doesn’t really yield the full sequence, as it omits the 0, and so:

fib(30) => 832040
tail_fib(30) => 1346269

anyway, it’s not that relevant in the current scope, except that with this tail recursive implementation it can be confirmed that MRI, Rubinius and JRuby again fail to complete, while MacRuby does just fine.

I recall that quote, and it totally mystifies me.

He makes a sensible argument, it’s just not the particular set of tradeoffs that I’d choose. But that’s why I don’t use Python, and he does. :slight_smile:

Essentially, his argument boils down to this: with TCO, you can write
programs that need TCO to function. And then you have Python code that
works on some interpreters and not others. And that’s bad.

In reality, the more useful benefit of TCO comes when it’s not a recursive procedure, but simply invoking another method, and thus using less stack.

This, along with what Michael says, is also a pretty nice thing. Fits the ‘optimization’ name much better. :slight_smile:

Steve K. wrote in post #993760:

In reality, the more useful benefit of TCO comes when it’s not a recursive procedure, but simply invoking another method, and thus using less stack.

Of course you will then lose the backtrace. Also, stack is cheap to
allocate and release, and the amount used will be small unless your code
is hundreds of method calls deep.

On Apr 18, 2011, at 10:51 PM, Vincent M. wrote:

In reality, the more useful benefit of TCO comes when it’s not a recursive procedure, but simply invoking another method, and thus using less stack. That leads, for example, to a very nice and clean way of writing state machines.

This is the most compelling argument for me, especially regarding Ruby: modern Ruby approaches emphasize heavy refactoring to small methods, and since nearly everything else you do is a method call, I’m willing to bet TCO could kick in a fair amount in the tail of those small methods. It might add up. But that’s just my gut instinct; I’ve never toyed with the TCO options.

Michael E.
[email protected]
http://carboni.ca/

On Tue, Apr 19, 2011 at 10:48:51PM +0900, Steve K. wrote:

He makes a sensible argument, it’s just not the particular set of
tradeoffs that I’d choose. But that’s why I don’t use Python, and he
does. :slight_smile:

Essentially, his argument boils down to this: with TCO, you can write
programs that need TCO to function. And then you have Python code
that works on some interpreters and not others. And that’s bad.

That occurred to me as a possible reason for making the statement Guido
made, and it’s a reasonable statement to make for that reason when
speaking of a current version of the language, really. That argument
doesn’t apply to reasons to exclude tail call optimization from future
versions, though.

There are techniques to retain a backtrace, even with TCO.
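
One simple illustration of such a technique (a sketch of my own, not necessarily what is meant here) is to thread an explicit trail of call sites through the tail calls, so the history survives even though the stack frames would not:

# Sketch: the trail argument stands in for the frames TCO would discard.
def countdown(n, trail = [])
  trail << "countdown(#{n})"
  raise "boom at #{n}, reached via #{trail.join(' -> ')}" if n == 3
  return :done if n.zero?
  countdown(n - 1, trail)   # tail call; the trail carries the history
end

begin
  countdown(6)
rescue RuntimeError => e
  puts e.message   # boom at 3, reached via countdown(6) -> countdown(5) -> countdown(4) -> countdown(3)
end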

Louis-Philippe wrote:

the thing is the tail recursive one doesn’t really yield the full sequence, as it omits the 0, and so:

fib(30) => 832040
tail_fib(30) => 1346269

Corrected:

;; Gambit Scheme

(define (fib n)
  (define (%fib n a b)
    (if (= 1 n)
        b
        (%fib (- n 1) b (+ a b))))
  (%fib n 0 1))

(println "30th fibonacci number is " (fib 30))

(define bigfib 0)
(time
 (set! bigfib (fib 50000)))
(let* ((str (number->string bigfib))
       (len (string-length str)))
  (println "Result of (fib 50000) has " len " digits:")
  (println (substring str 0 9) " … " (substring str (- len 9) len)))

==>

30th fibonacci number is 832040
(time (set! bigfib (fib 50000)))
562 ms real time
516 ms cpu time (500 user, 16 system)
807 collections accounting for 157 ms real time (188 user, 0 system)
119070200 bytes allocated
no minor faults
no major faults
Result of (fib 50000) has 10450 digits:
107777348 … 373553125

On Wed, Apr 20, 2011 at 6:35 AM, WJ [email protected] wrote:

def fib n
  a,b = 0,1
  2.upto(n){
    a,b = b, a+b
  }
  b
end

Just for the fun of it, a version with a Hash as “function” for faster
repeated access.

13:35:36 Temp$ ruby19 fib.rb
user system total real
0 def(50000) 0.187000 0.000000 0.187000 ( 0.184000)
0 hash[50000] 0.313000 0.047000 0.360000 ( 0.350000)
0 def(40000) 0.234000 0.000000 0.234000 ( 0.241000)
0 hash[40000] 0.000000 0.000000 0.000000 ( 0.000000)
1 def(50000) 0.359000 0.000000 0.359000 ( 0.361000)
1 hash[50000] 0.000000 0.000000 0.000000 ( 0.000000)
1 def(40000) 0.250000 0.000000 0.250000 ( 0.240000)
1 hash[40000] 0.000000 0.000000 0.000000 ( 0.000000)
2 def(50000) 0.360000 0.000000 0.360000 ( 0.365000)
2 hash[50000] 0.000000 0.000000 0.000000 ( 0.000000)
2 def(40000) 0.234000 0.000000 0.234000 ( 0.240000)
2 hash[40000] 0.000000 0.000000 0.000000 ( 0.000000)
3 def(50000) 0.375000 0.000000 0.375000 ( 0.363000)
3 hash[50000] 0.000000 0.000000 0.000000 ( 0.000000)
3 def(40000) 0.235000 0.000000 0.235000 ( 0.239000)
3 hash[40000] 0.000000 0.000000 0.000000 ( 0.000000)
4 def(50000) 0.359000 0.000000 0.359000 ( 0.357000)
4 hash[50000] 0.000000 0.000000 0.000000 ( 0.000000)
4 def(40000) 0.234000 0.000000 0.234000 ( 0.241000)
4 hash[40000] 0.000000 0.000000 0.000000 ( 0.000000)
13:35:53 Temp$ cat fib.rb

require 'benchmark'

def fib n
  a,b = 0,1
  2.upto(n){
    a,b = b, a+b
  }
  b
end

f2 = Hash.new do |h,n|
  l = nil

  (h.keys.max + 1).upto n do |i|
    h[i] = l = h[i-1] + h[i-2]
  end

  l
end.merge(0=>0, 1=>1)

Benchmark.bm 15 do |x|
  5.times do |i|
    [50_000, 40_000].each do |n|
      x.report "#{i} def(#{n})" do
        fib n
      end

      x.report "#{i} hash[#{n}]" do
        f2[n]
      end
    end
  end

end
13:35:56 Temp$

Cheers

robert

WJ wrote:

(time (set! bigfib (fib 50000)))
562 ms real time
516 ms cpu time (500 user, 16 system)

This Ruby version runs in about 0.6 seconds:

def fib n
  a,b = 0,1
  2.upto(n){
    a,b = b, a+b
  }
  b
end