Bug in converting float to int?


#1

I just tried this on 1.8.6. Is this a bug or am I missing something?

a=10.12
=> 10.12

(a*100).to_i
=> 1011

And if this is in fact intended behavior, how can I actually get the
results I want? (Converting dollars and cents into cents)


#2

Oh and it looks like this only happens with floats ending in .12.

Ruby thinks 12 hundredths is not as big as its buddies?? :slight_smile:


#3

Jeff V. wrote:

I just tried this on 1.8.6. Is this a bug or am I missing something?

a=10.12
=> 10.12

(a*100).to_i
=> 1011

And if this is in fact intended behavior, how can I actually get the
results I want? (Converting dollars and cents into cents)

A FAQ.

irb(main):001:0> sprintf("%.16f", 100*10.12)
=> "1011.9999999999998863"

http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
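The sprintf output shows the product sitting just under 1012, which also points at the immediate fix: truncation drops the fraction, while rounding recovers it. A minimal sketch in plain Ruby (not from the original reply):

```ruby
a = 10.12
product = a * 100

puts sprintf("%.16f", product)  # "1011.9999999999998863", just under 1012
puts product.to_i               # 1011, since to_i truncates toward zero
puts product.round              # 1012, since round picks the nearest integer
```

Rounding works here, but as the replies below discuss, it is not a complete answer for money.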


#4

On Feb 13, 2009, at 12:41 AM, Jeff V. wrote:

I just tried this on 1.8.6. Is this a bug or am I missing something?

a=10.12
=> 10.12

(a*100).to_i
=> 1011

Ruby floating point values are stored in binary format.
10.12 (decimal) cannot be represented exactly in binary:

sprintf("%0.50f", 10.12)
=> "10.11999999999999921840299066388979554176330566406250"

The default conversion of float to string (e.g., for output) rounds
to 6 (or maybe 7) places and truncates trailing zeros:

sprintf("%0.6f", a).sub(/0+\Z/, '')
=> "10.12"

Float#to_i truncates:

sprintf("%0.50f", 10.12 * 100 )
=> "1011.99999999999988631316227838397026062011718750000000"

(10.12 * 100).to_i
=> 1011
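Truncation toward zero also means to_i and floor disagree for negative values, which is worth knowing before picking a conversion. A quick sketch of the four conversions (plain Ruby, example values mine):

```ruby
[3.7, -3.7].each do |f|
  # to_i truncates toward zero; floor and ceil go down/up; round goes to nearest
  puts "%5.1f  to_i=%2d  floor=%2d  ceil=%2d  round=%2d" %
       [f, f.to_i, f.floor, f.ceil, f.round]
end
```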


#5

Gary W. wrote:

Ruby floating point values are stored in binary format.
10.12 (decimal) cannot be represented exactly in binary:

sprintf("%0.50f", 10.12)
=> "10.11999999999999921840299066388979554176330566406250"

Clifford H. wrote:

Avoid using floating point. Seriously. Some might advise you to round:
(a*100).round => 1012, but that’s unreliable, even with some quite
small number of digits.

Store cents as an integer, or use BigDecimal.

Thanks everyone, quite interesting info there. It looks like I will be
doing my dollars to cents math a little differently.

Clifford, can you elaborate on why round is unreliable?


#6

Jeff V. wrote:

I just tried this on 1.8.6. Is this a bug or am I missing something?

a=10.12
=> 10.12

(a*100).to_i
=> 1011

And if this is in fact intended behavior, how can I actually get the
results I want? (Converting dollars and cents into cents)

Just to tack on my 1.9997438 cents :wink:

We had a big go-round about this a while back. This behavior is not
specific to Ruby. The reason Ruby behaves like this is that it adheres
to the IEEE standard for floating point numbers, which is common to most
programming languages. A standard so old that it was first pressed into
clay tablets back when computers were powered by oxen and water wheels.
Ruby keeps this standard because it’s fast and it reflects the way that
computers really work with numbers. There are plenty of alternatives for
dealing with floating point numbers which are usually offered as high
precision scientific numerical packages. But these are almost always
slower than the IEEE standard. And, while they are less lousy (I imagine
most could keep two decimal places straight) they are still prone to
precision error, just because computers are, at their very core, integer
only.


#7

Raphael C. wrote:

Just to tack on my 1.9997438 cents :wink:

We had a big go-round about this a while back. This behavior is not
specific to Ruby. The reason Ruby behaves like this is that it adheres
to the IEEE standard for floating point numbers, which is common to most
programming languages. A standard so old that it was first pressed into
clay tablets back when computers were powered by oxen and water wheels.
Ruby keeps this standard because it’s fast and it reflects the way that
computers really work with numbers. There are plenty of alternatives for
dealing with floating point numbers which are usually offered as high
precision scientific numerical packages. But these are almost always
slower than the IEEE standard. And, while they are less lousy (I imagine
most could keep two decimal places straight) they are still prone to
precision error, just because computers are, at their very core, integer
only.

Just for “fun”, I tried the following on g++ 4.01 (PPC version)

#include <iostream>

using namespace std;

int main(int argc, char* argv[]) {
    float x = 10.12;
    int y = (int) (x * 100.0);
    cout << x << " , " << y << endl;
    return(0);
}

which generates the familiar output…

10.12 , 1011


#8

Jeff V. wrote:

I just tried this on 1.8.6. Is this a bug or am I missing something?

Wikipedia is good, but I think the best-known article on this is here:
http://docs.sun.com/source/806-3568/ncg_goldberg.html, entitled
“What Every Computer Scientist Should Know About Floating-Point Arithmetic”.

And if this is in fact intended behavior, how can I actually get the
results I want? (Converting dollars and cents into cents)

Avoid using floating point. Seriously. Some might advise you to round:
(a*100).round => 1012, but that’s unreliable, even with some quite
small number of digits.

Store cents as an integer, or use BigDecimal.
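Both suggestions take only a few lines. An illustrative sketch (not code from the thread; BigDecimal is in Ruby's standard library):

```ruby
require 'bigdecimal'

# Option 1: exact decimal arithmetic. Constructing BigDecimal from the
# *string* "10.12" keeps the value exact; no binary float is involved.
cents = (BigDecimal("10.12") * 100).to_i    # 1012

# Option 2: never leave integers. Parse dollars and cents straight from
# the text; ljust pads "10.5"-style input so "5" becomes "50" cents.
dollars, frac = "10.12".split(".")
frac = (frac || "").ljust(2, "0")
cents_too = dollars.to_i * 100 + frac.to_i  # 1012
```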

Clifford H…


#9

Jeff V. wrote:

Clifford, can you elaborate on why round is unreliable?

If you’ve read the Goldberg paper, I doubt I can add anything.

Conversions to/from ASCII are never totally symmetrical.
You can convert to float and back to string and get a
different result, for the simple reason that it’s not an
exact one-to-one mapping. For some of those patterns,
the change in the last digit will be propagated by the
rounding process, resulting in a change to your value.
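The asymmetry is bounded, though: printing an IEEE double with 17 significant decimal digits always round-trips back to the same double, while shorter forms may not. A sketch in plain Ruby (not from the original reply):

```ruby
f = 10.12
s17 = "%.17g" % f   # enough digits to round-trip any IEEE double exactly
s6  = "%.6g"  % f   # the short "pretty" form

puts s17
puts s6             # "10.12"
puts s17.to_f == f  # true: no information was lost
```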

I recall learning this in 1986 on an HP500, which converted
516.12 to double and back as 516.11 - only five significant
figures.

Clifford H…