Numeric Maze (#60)

As with so many of our Ruby Quiz problems, this is another pathfinding
challenge. We're probably pretty used to seeing by now that pruning the
search space is usually a big win in these cases, and this problem is no
exception. Let's talk a little about exactly why that is.

When you think about the operations allowed by this quiz, one thing that
becomes obvious is that n * 2 and n / 2 are opposites. If you do one and
then the other, you end up right back at the number you had before you
applied either operation. That kind of busy work isn't helpful. In fact,
visiting any number a second time is pointless, since we've already seen an
equal or shorter path to the same place.
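
Here's a tiny illustration of the idea (my own snippet, not from any
submitted solution):

require 'set'

n = 7
moves = [n * 2, n + 2]     # doubling and adding two always apply
moves << n / 2 if n.even?  # halving is only allowed for even numbers
p moves                    # => [14, 9]

visited = Set.new([n])
p visited.add?(n * 2)       # => #<Set: {7, 14}>  (14 is new, worth a look)
p visited.add?((n * 2) / 2) # => nil  (doubling then halving lands on 7 again)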

There's a lot more duplication in the problem than just the opposite
operations, though. Let's start working 2 to 9 by hand, so we can see that:

2
  2, 4  (double)
    2, 4, 8  (double)
      2, 4, 8, 16  (double)
      2, 4, 8, 4   (halve)
      2, 4, 8, 10  (add two)
    2, 4, 2  (halve)
      2, 4, 2, 4  (double)
      2, 4, 2, 1  (halve)
      2, 4, 2, 4  (add two)
    2, 4, 6  (add two)
      2, 4, 6, 12  (double)
      2, 4, 6, 3   (halve)
      2, 4, 6, 8   (add two)
  2, 1  (halve)
    2, 1, 2  (double)
      2, 1, 2, 4  (double)
      2, 1, 2, 1  (halve)
      2, 1, 2, 4  (add two)
    2, 1, 3  (add two)
      2, 1, 3, 6  (double)
      2, 1, 3, 5  (add two)
  2, 4  (add two)
    2, 4, 8  (double)
      2, 4, 8, 16  (double)
      2, 4, 8, 4   (halve)
      2, 4, 8, 10  (add two)
    2, 4, 2  (halve)
      2, 4, 2, 4  (double)
      2, 4, 2, 1  (halve)
      2, 4, 2, 4  (add two)
    2, 4, 6  (add two)
      2, 4, 6, 12  (double)
      2, 4, 6, 3   (halve)
      2, 4, 6, 8   (add two)

That's a lot of paths already, and we're not even there yet. Now let's look
at the exact same tree with one simple rule of pruning applied: we can toss
out any operation that results in a number we have already seen. Watch how
that changes things:

2
  2, 4  (double)
    2, 4, 8  (double)
      2, 4, 8, 16  (double)
      2, 4, 8, 10  (add two)
    2, 4, 6  (add two)
      2, 4, 6, 12  (double)
      2, 4, 6, 3   (halve)
  2, 1  (halve)
    2, 1, 3  (add two)
      2, 1, 3, 5  (add two)

Those two trees go to the same depth and both represent the same set of
numbers. However, the second one is less than a third of the work (11 nodes
instead of 35). Imagine how much we can save as the numbers keep growing
and growing.

Another important optimization involves limits. Even with our simple 2-to-9
example, we can be up to 64 after only five operations (2, 4, 8, 16, 32,
64). 64 is a long way from 9 and probably not helping us get there. We can
impose upper and lower bounds on the numbers, or even limit the number of
steps a path may take. (Florian Pflug made a great post in the quiz thread
about the latter.) The only thing to be careful of with an optimization
like this is to make sure you don't impose a limit tight enough to cut off
an optimal solution.
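
For example, a cutoff like the one Tristan uses in the solution below,
twice the larger endpoint plus a little headroom, throws away runaway
numbers cheaply (the candidate values here are made up for illustration):

start, finish = 2, 9
limit = ([start, finish].max << 1) + 4   # => 22 for our 2-to-9 example

candidates = [16, 32, 18, 64]
p candidates.reject { |n| n > limit }    # => [16, 18]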

Many other optimizations were used. Some, like storing instance data in
Integer objects, have that questionable code smell and are probably best
avoided. Other solutions, while super fast on huge inputs, did not produce
the shortest path in all cases. Finally, many optimizations involved timing
various elements of Ruby syntax for minor gains here and there, but that's
more detail than we need to go into in this summary. Given that, let's
examine a nice solution, by Tristan Allwood, using the two optimizations
described above:

require 'set'

class MazeSolver

  def solve start, finish
    visited = Set.new

    # upper limit: twice the larger endpoint, plus a little headroom
    tul, tll = if start > finish
                 [(start << 1) + 4, nil]
               else
                 [(finish << 1) + 4, nil]
               end

    solve_it [[start]], finish, visited, tul, tll
  end

  # expands every path by one operation per call (a breadth-first search)
  def solve_it lpos, target, visited, tul, tll
    n = []
    lpos.each do |vs|
      v = vs.last
      next if tul and v > tul
      next if tll and v < tll

      return vs if v == target

      d  = v << 1                      # double
      h  = v >> 1 unless (v & 1) == 1  # halve (even numbers only)
      p2 = v + 2                       # plus 2

      n << (vs.clone << d)  if visited.add? d
      n << (vs.clone << h)  if h and visited.add? h
      n << (vs.clone << p2) if visited.add? p2
    end

    return solve_it(n, target, visited, tul, tll)
  end
end

if __FILE__ == $0
  puts MazeSolver.new.solve(ARGV[0].to_i, ARGV[1].to_i).join(" ")
end

if __FILE__ == $0
  puts MazeSolver.new.solve(ARGV[0].to_i, ARGV[1].to_i).join(" ")
end

Tristan's solution makes use of bit operations, because they tend to be
faster than multiplication and division. All you need to know about these
is that n << 1 == n * 2 and n >> 1 == n / 2.
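
If you want to convince yourself, here's a quick check to run in irb (my
snippet; the problem only ever deals with nonnegative integers):

n = 21
p(n << 1)           # => 42, same as n * 2
p(n >> 1)           # => 10, same as n / 2 (integer division)
p(n << 1 == n * 2)  # => true
p(n >> 1 == n / 2)  # => true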

The primary interface method for the code above is solve(). It takes the
start and finish numbers. As you can see, it sets up a visited Set object
to keep track of the numbers we've seen, assigns upper and lower limits,
then hands off to solve_it().

In solve_it(), each path of numbers is walked and expanded by the three
operations. Note the calls to visited.add?() before new paths are added:
this is the optimization keeping us from revisiting numbers. The "next if
tul and v > tul" line skips to the next iteration if we've passed the upper
limit; that's the other big optimization. After another level of paths has
been added, solve_it() just recurses to find the next set of operations.
The recursion only ever goes as deep as there are steps in the solution, so
there's not much danger of overrunning the stack for problems we can
reasonably solve.
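
Set#add?() is pulling double duty there: it records the number and reports
whether it was new. A quick illustration:

require 'set'

visited = Set.new
p visited.add?(4)  # => #<Set: {4}>  (truthy: 4 was new, keep the path)
p visited.add?(4)  # => nil          (already seen, prune the path)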

The final if statement of the program kicks everything off using the two
arguments passed on the command line.
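
Assuming the code is saved as maze_solver.rb (my name for the file), a run
for our 2-to-9 example looks like this; tracing the expansion order by
hand, I get the path below, though any equally short path would be just as
correct:

$ ruby maze_solver.rb 2 9
2 4 8 16 18 9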

My thanks to the many, many participants who generated great solutions and
discussion, especially all you new guys! I also need to thank the quiz
creator, who was very involved in the discussion and gave me a bunch of
tips for this summary.

Tomorrow, we have a dice rolling challenge for all you RPG players out
there…

> Tristan's solution makes use of bit operations, because they tend to be
> faster than multiplication and division. All you need to know about these
> is that n << 1 == n * 2 and n >> 1 == n / 2.

Are they really faster? Ruby bits are not directly in CPU registers.

My rule of thumb is that every method call in Ruby takes a huge amount of
time, whether it is a bitshift or a multiplication (or even a regexp
check). For the record, I did a quick test:

kero@pc67140460:~/tmp$ time ruby -e '1_000_000.times { |i| i << 1 }'
real 0m2.683s
user 0m1.970s
sys 0m0.710s
kero@pc67140460:~/tmp$ time ruby -e '1_000_000.times { |i| i << 1 }'
real 0m2.687s
user 0m1.880s
sys 0m0.810s
kero@pc67140460:~/tmp$ time ruby -e '1_000_000.times { |i| i * 2 }'
real 0m2.684s
user 0m2.080s
sys 0m0.600s
kero@pc67140460:~/tmp$ time ruby -e '1_000_000.times { |i| i * 2 }'
real 0m2.689s
user 0m2.060s
sys 0m0.640s

No significant differences whatsoever.

Bye,
Kero.

PS: fun quiz, but that should be clear with all my postings :)

On Jan 5, 2006, at 9:37 AM, Kero wrote:

>> Tristan's solution makes use of bit operations, because they tend to be
>> faster than multiplication and division. All you need to know about
>> these is that n << 1 == n * 2 and n >> 1 == n / 2.
>
> Are they really faster?

Good point. Thanks for showing us the real story.

James Edward G. II

On Jan 5, 2006, at 10:37 AM, Kero wrote:

> time, whether it is a bitshift or a multiplication (or even a regexp
> check).
> For the record, I did a quick test:

As I suspected. Stupid C idioms. :)

~ ryan ~

On Jan 5, 2006, at 5:37 AM, Ruby Q. wrote:

> My thanks to the many, many participants who generated great solutions
> and discussion, especially all you new guys! I also need to thank the
> quiz creator, who was very involved in the discussion and gave me a bunch
> of tips for this summary.

I think that the purely binary pattern/bit-twiddling based solutions
should at least get an honorable mention.

Great quiz!

–Steve

The shift operator comes in handy and can save a lot of time.
These are equivalent operations:

time ruby -e '1_000_000.times { || 2**31 }'

real 0m33.225s
user 0m32.452s
sys 0m0.015s

time ruby -e '1_000_000.times { || 1<<31 }'

real 0m3.469s
user 0m3.421s
sys 0m0.015s

time ruby -e '1_000_000.times { || 2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2 }'

real 0m13.839s
user 0m13.656s
sys 0m0.015s

Hey guys,

I've just come back from a 2 day holiday and discovered my code on
rubyquiz.com… Wow! For my first entry on Ruby Quiz (and I'm fairly new to
Ruby to boot) I'm quite happy :) Cheers all!

I've noticed a few comments regarding my use of bitshift. I didn't expect
the shifts to be faster (nor intend them to be); I just tend to think in
terms of the binary idioms when it comes to +ve odd numbers and mult/div
by 2. The fact that I felt the compulsion to write comments next to them
indicating what I was doing should probably be an indication that I should
have been using more standard notation…

Anyways, thanks again for making my day :)

Cheers,

Tristan

Lou V. wrote:

> real 0m3.469s

Kero wrote:

> This is an implementation detail,
>
> the ** operator of Fixnum calls rb_big_pow converting self to a Bignum
> regardless of the value. '<<' checks the size of the result and uses
> real bitshifting if possible.

While this was interesting to me, it's not the explanation of the effect
above, because the value 2**31 is always a Bignum; it's that the
implementation of the ** operator in Bignum isn't very fast (while shifting
the internal representation of the Bignum is faster).

To prove my point that this isn't really because bitshifts are faster, try
this:

1_000_000.times { || 2.0**30 }

At least on my machine this is faster than any of your three examples
above. (If you choose smaller values, '<<' will be the fastest, because it
doesn't convert to Bignums, as stated above.)

cheers

Simon

Then I guess I just love implementation details. ;)

ruby -e "puts (229).class"
Fixnum
time ruby -e '1_000_000.times { 2
29 }’

real 0m25.788s
user 0m25.686s
sys 0m0.031s

time ruby -e ‘1_000_000.times { 2.0**29 }’

real 0m1.233s
user 0m1.234s
sys 0m0.000s

ruby -e “puts (2.0**29).class”
Float
time ruby -e ‘1_000_000.times { 1<<29 }’

real 0m0.559s
user 0m0.577s
sys 0m0.000s

ruby -e “puts (1<<29).class”
Fixnum
time ruby -e ‘1_000_000.times {
22222222222222222222222222222 }’

real 0m5.970s
user 0m5.983s
sys 0m0.015s

ruby -e “puts
(22222222222222222222222222222).class”
Fixnum

Kero [email protected] writes:

>> Tristan's solution makes use of bit operations, because they tend to be
>> faster than multiplication and division. All you need to know about
>> these is that n << 1 == n * 2 and n >> 1 == n / 2.
>
> Are they really faster? Ruby bits are not directly in CPU registers.

IIRC, a bitshift on modern Intel processors is extremely inefficient
because it has to throw away all kinds of assumptions. I even think n << 1
gets compiled to n+n by a good compiler. (Can't test at the moment, I'm on
PPC only here.)

> user 0m1.880s
> No significant differences whatsoever.

These microbenchmarks are totally pointless; you'd need to write them in
assembler, since you don't know which code your compiler generated for <<.

> Bye,
> Kero.
>
> PS: fun quiz, but that should be clear with all my postings :)

Absolutely, I enjoyed it, even if I didn't send in my solution: a
breadth-first search like so many others. (I used the cute trick of
int.send(X, 2) for X being :/, :* or :+, which I hadn't seen yet
anywhere…)
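
For anyone who missed it in the thread, that trick looks something like
this (my reconstruction of the idea, not Kero's actual code):

# build each successor of n by sending an operator symbol to it
def successors(n)
  ops = [:*, :+]
  ops << :/ if n.even?            # only halve when it divides evenly
  ops.map { |op| n.send(op, 2) }  # n * 2, n + 2, and maybe n / 2
end

p successors(8)  # => [16, 10, 4]
p successors(7)  # => [14, 9]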

On 2006-01-05, Simon Kröger [email protected] wrote:

> Lou V. wrote:
>
>> The shift operator comes in handy and can save a lot of time.
>> These are equivalent operations:
>> [snip 3 examples with considerable speed differences]
>
> This is an implementation detail,

perhaps.
I'd say ** takes more time; not that bitshifts take less time.

> the ** operator of Fixnum calls rb_big_pow converting self to a Bignum
> regardless of the value. '<<' checks the size of the result and uses
> real bitshifting if possible.

worse:

kero@chmeee:~$ time ruby -e '100_000.times { || 200**10 }'

real 0m1.740s
user 0m1.540s
sys 0m0.072s

kero@chmeee:~$ time ruby -e '100_000.times { || 2**10 }'

real 0m2.122s
user 0m1.928s
sys 0m0.064s

The float value ranks first with about 0.5 seconds runtime.

If Fixnum#** is slower than Bignum#**, that's not a detail, that's plain
weird :)

Something to keep in mind…

Bye,
Kero.