Float addition problem / Processor bug?

Hi,

Check out this weird IRB output:

=== START ===
irb(main):001:0> a = 2.95 + 2.95 + 2.95
=> 8.85
irb(main):002:0> puts 8.85 - a
-1.77635683940025e-15
=> nil
=== END ===

I’m running “ruby 1.8.6 (2007-06-07 patchlevel 36) [i486-linux]” on
Xubuntu beta with this processor:

=== START cat /proc/cpuinfo ===
processor : 0
vendor_id : GenuineIntel
cpu family : 15
model : 2
model name : Intel® Pentium® 4 CPU 2.26GHz
stepping : 4
cpu MHz : 2271.951
cache size : 512 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm up
bogomips : 4547.85
clflush size : 64
=== END cat /proc/cpuinfo ===

Is it a processor bug, a simple rounding problem or a ruby error?

How do I proceed?

Thanks,

Dinkel

On 19.08.2007 19:49, Christian L. wrote:

Is it a processor bug, a simple rounding problem or a ruby error?

The second one. This is a standard rounding issue. The short
explanation is that you see the numbers in decimal system while the
computer uses binary representation for floats.
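
You can see the hidden digits by asking for more decimal places than
Float#to_s normally shows:

=== START irb output ===
irb(main):001:0> "%.17f" % 2.95
=> "2.95000000000000018"
irb(main):002:0> "%.17f" % (2.95 + 2.95 + 2.95)
=> "8.85000000000000142"
irb(main):003:0> "%.17f" % 8.85
=> "8.84999999999999964"
=== END irb output ===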

How do I proceed?

Live with it or use other data types like BigDecimal or Rational.

irb(main):001:0> require 'bigdecimal'
=> true
irb(main):002:0> x = BigDecimal.new "2.95"
=> #<BigDecimal:7ff8def8,'0.295E1',8(8)>
irb(main):003:0> a = x + x + x
=> #<BigDecimal:7ff82d78,'0.885E1',8(16)>
irb(main):004:0> 8.85 - a
=> 0.0
irb(main):005:0> (8.85 - a).class
=> Float
irb(main):007:0> BigDecimal.new("8.85") - a
=> #<BigDecimal:7ff70128,'0.0',4(16)>
irb(main):008:0>

Kind regards

robert

Is it a processor bug, a simple rounding problem or a ruby error?

The second one. This is a standard rounding issue. The short
explanation is that you see the numbers in decimal system while the
computer uses binary representation for floats.

I kind of knew it ;-) - but still thank you Robert for confirming!

How do I proceed?

Live with it or use other data types like BigDecimal or Rational.

[ SNIP irb code ]

I changed my app to use BigDecimal now and it works fine this way.

However I don’t like this Float behaviour of Ruby - in particular that
the String representation of such values looks correct, while testing
for equality says ‘false’ - like this:

=== START irb output ===
irb(main):014:0> a = 2.95 + 2.95 + 2.95
=> 8.85
irb(main):015:0> puts a
8.85
=> nil
irb(main):016:0> puts a == 8.85
false
=> nil
=== END irb output ===
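
If you stay with Float, the usual workaround is to compare against a
small tolerance instead of testing for exact equality (a minimal
sketch - the 1.0e-9 threshold is an arbitrary choice you would tune to
your data):

=== START irb output ===
irb(main):017:0> ((2.95 + 2.95 + 2.95) - 8.85).abs < 1.0e-9
=> true
=== END irb output ===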

With whole numbers the conversion from Fixnum to Bignum is transparent
to the user and makes life a lot easier. Wouldn’t it be possible to have
the same behaviour with floating point numbers?

Dinkel

Christian L. [email protected] wrote:

Is it a processor bug, a simple rounding problem or a ruby error?

Good reading:

http://docs.sun.com/source/806-3568/ncg_goldberg.html

m.

Christian L. wrote:

However I don’t like this Float behaviour of Ruby - in particular that
the String representation of such values looks correct, while testing
for equality says ‘false’ - like this:

That’s no “ruby” problem. You have to understand what floats are (and
they are that way everywhere, not just in Ruby): approximations. Think
of it this way: try to represent 1/3 (0.333…) precisely in decimal. It
doesn’t work. Or try to represent pi precisely in decimal. That doesn’t
work either. In the same way, a float can never represent some numbers
precisely.
Also see http://wiki.rubygarden.org/Ruby/page/show/RubyLangFAQ
Maybe do some research of your own on floats, e.g. read a bit about them
on wikipedia.
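
(For what it’s worth, the Rational suggestion handles exactly this 1/3
case, since a Rational stores an exact fraction; on 1.8 it needs an
explicit require:)

=== START irb output ===
irb(main):001:0> require 'rational'
=> true
irb(main):002:0> Rational(1, 3) * 3 == 1
=> true
=== END irb output ===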

Regards
Stefan

Christian L. wrote:

I know that this is no Ruby-specific problem and I can live with it in
a lower-level language like C, where a ‘float’ is represented by 32/64
bits depending on your processor type, the same as an ‘int’ is stored
as a 32 bit (?) value.

If you have an idea on how to do that, I’m sure the core developers
would love to hear it. The nature of the problem is not the same as
with Fixnums vs. Bignums: with non-integral values, knowledge about
your problem is required. Ruby provides several ways to deal with them,
but it is up to you to choose the best way. There are Rational,
BigDecimal and Float. Personally I don’t see how the system could
manage to automatically select “the best” variant, as that depends on
your needs. As far as I see it, those are not algorithmically
ascertainable.

What is IMHO arguable is what literals should default to, e.g. whether
2.95 should mean Float("2.95") or BigDecimal("2.95") or
Rational(295, 100).
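
A quick irb comparison of the three, using the 1.8 spellings
(BigDecimal.new and the bundled rational library):

=== START irb output ===
irb(main):001:0> require 'bigdecimal'; require 'rational'
=> true
irb(main):002:0> Float("2.95") * 3 == Float("8.85")
=> false
irb(main):003:0> BigDecimal.new("2.95") * 3 == BigDecimal.new("8.85")
=> true
irb(main):004:0> Rational(295, 100) * 3 == Rational(885, 100)
=> true
=== END irb output ===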

I didn’t put a lot of thinking into it, so there might be many issues
to solve. But again, it would be nice to have this behaviour in a
programming language that - generally very successfully - follows the
principle of least surprise (POLS).

Yes, it would be nice. It’s just, as far as I can see, impossible. But
some people love to solve seemingly impossible problems, so who knows…

Regards
Stefan

However I don’t like this Float behaviour of Ruby - in particular that
the String representation of such values looks correct, while testing
for equality says ‘false’ - like this:

That’s no “ruby” problem. You have to understand what floats are (and
they are that way everywhere, not just in Ruby): approximations. Think
of it this way: try to represent 1/3 (0.333…) precisely in decimal. It
doesn’t work. Or try to represent pi precisely in decimal. That doesn’t
work either. In the same way, a float can never represent some numbers
precisely.
Also see http://wiki.rubygarden.org/Ruby/page/show/RubyLangFAQ
Maybe do some research of your own on floats, e.g. read a bit about them
on wikipedia.

I know that this is no Ruby-specific problem and I can live with it in
a lower-level language like C, where a ‘float’ is represented by 32/64
bits depending on your processor type, the same as an ‘int’ is stored
as a 32 bit (?) value.

As far as I understand Ruby, a Fixnum is stored as a C ‘int’, but if
you are working with higher/lower numbers (Bignums), Ruby abstracts
from the C ‘int’ data-type to whatever else (possibly just multiple C
‘int’s concatenated together).

So my question was why a similar behaviour couldn’t be achieved with
floating point values. Here’s my utopia of what I would like to see:

Let’s say you have a string representation of a floating point value
like “0.125” (1/8). This value has an EXACT representation as a C
‘float’ value, therefore it is stored like that. If you have “0.295”
there is NO exact representation in a C ‘float’ and therefore the value
needs to be stored otherwise (like BigDecimal does it). The arithmetic
calculations would be chosen accordingly (normal processor instructions
on ‘float’s, or abstractions over them where needed).
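
Just to make the idea concrete, here is a rough, purely hypothetical
sketch of such a dispatch. It cheats by using Float#to_r and
Rational("…") on strings, which exist in Ruby 1.9+ but not in 1.8:

=== START ruby sketch ===
require 'bigdecimal'

# Hypothetical helper: keep a plain Float when the decimal literal has
# an exact binary representation, fall back to BigDecimal otherwise.
def exact_number(str)
  f = Float(str)
  # Float#to_r recovers the exact value the Float really stores; if it
  # equals the decimal fraction, the Float representation is exact.
  f.to_r == Rational(str) ? f : BigDecimal(str)
end

exact_number("0.125").class  #=> Float      (1/8 is exact in binary)
exact_number("0.295").class  #=> BigDecimal (0.295 is not)
=== END ruby sketch ===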

I didn’t put a lot of thinking into it, so there might be many issues
to solve. But again, it would be nice to have this behaviour in a
programming language that - generally very successfully - follows the
principle of least surprise (POLS).

Stefan R. wrote:

it is up to you to choose the best way. There are Rational, BigDecimal

principle of least surprise (POLS).

Yes, it would be nice. It’s just, as far as I can see, impossible. But
some people love to solve seemingly impossible problems, so who knows…

Regards
Stefan

1. As far as the “principle of least surprise” is concerned, the fact
that floating point arithmetic on most systems is in binary, and hence
numbers like 0.1 are not exactly representable, is surprising only to
those who have never been trained in the use of floating point
arithmetic. If Ruby is your first language, or the first one you’ve
used floating point in, I can understand the surprise. But to my
knowledge Ruby floating point behaves no more “surprisingly” than that
in any other language.

2. At the risk of angering the duck typing crowd, I’ve been a numerical
programmer since Fortran II, in which one got automatic “promotion” of
integers to floating point values in expressions and assignments, but
not much else in the way of conveniences. In short, I expect to have to
declare the types of numbers!

I expect to have to specify whether a number is integer (fixnum),
multi-precision integer (bignum), floating, double precision, complex,
rational or big decimal if such a thing exists in the language. It is
surprising to me when I don’t need to specify that. :-) For that
matter, I also expect to have to declare fixed sizes for
multidimensional arrays of constant numerical type.

In return for these expectations, I expect the compiler / interpreter /
runtime to provide me with optimization. Ruby doesn’t do that part of
it, and may never do it, since it’s easy to offload numeric processing
from Ruby to C, where that kind of magic can happen at full warp speed.
:-)