0.06 == 0.06 returns false in Ruby?

Hi

I wrote a simple test program. Basically, the program asks the user to
enter floats; the entered values are used as keys for a Hash (the
values are irrelevant).

The program tries to find the lowest untaken key/float (floats are used
as keys).

please see the attachment.

please run the program, and enter the following:
0.01 - OK
0.02 - OK
0.03 - OK
0.04 - OK
0.05 - OK
0.06 - Problem

Can anyone explain to me what’s going on here?

thanks
jason

Jason G. wrote:

please run the prog, and enter the following:
0.01 - OK
0.02 - OK
0.03 - OK
0.04 - OK
0.05 - OK
0.06 - Problem

Can anyone explain to me what’s going on here?

You should never compare floating point numbers for equality in any
language. This is true at least for C, C++, .NET, and Java, to name a
few. It just won’t be accurate. Unlike integers, floats are not stored
as their true Platonic forms ;). If you really wanted to see how the
digits differed, you could print both numbers to a large number of
digits (printf("%.20f", my_number)). You don’t even need to do any math
to see the roundoff error.

printf("%.50f", 1.1)
1.10000000000000008881784197001252323389053344726562

To compare floats, you must ask whether they are within a certain
threshold of each other.
Epsilon = 0.00000000001
return (num1-num2).abs < Epsilon # num1 == num2
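Wrapped up as a runnable Ruby method (the name nearly_equal? is my own, not from the post), the idea looks like this:

```ruby
EPSILON = 0.00000000001

# True if num1 and num2 differ by less than our tolerance.
def nearly_equal?(num1, num2)
  (num1 - num2).abs < EPSILON
end

puts nearly_equal?(0.05 + 0.01, 0.06)  # => true
puts(0.05 + 0.01 == 0.06)              # => false
```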

Dan

From: Jason G. [mailto:[email protected]]

Subject: 0.06 == 0.06 returns false in Ruby?

http://www.ruby-forum.com/attachment/193/test.rb

oops, careful w your statement/subject :)

obviously, 0.06 == 0.06. any language can vouch for that.

irb(main):008:0> 0.06 == 0.06
=> true

the problem crops up here (it triggers when you do operations),

irb(main):009:0> 0.05+0.01 == 0.06
=> false

you can check the diff

irb(main):010:0> (0.05+0.01) - 0.06
=> 6.93889390390723e-18

as mentioned by Dan, be careful when comparing floats. And as to the
subject of precision, there is what we call significant digits…

this floating-point problem is a faq and is very surprising in such a
high-level language as ruby. can we address this? maybe create a flag
like $EPSILON=0 or something, or maybe a flag to revert to rational or
bigdecimal, like $FLOAT_PROCESSOR=RATIONAL…

just a thought.

kind regards -botp

thanks for the help

There is also a class called BigDecimal (or something like that) if
you want to have really accurate numbers and no floating-point errors.

Dan Z. wrote:

To compare floats, you must ask whether they are within a certain
threshold of each other.
Epsilon = 0.00000000001
return (num1-num2).abs < Epsilon # num1 == num2

Dan

Oops, I didn’t realize that Ruby comes with an Epsilon. This should be a
fine test for equality:

return (num1-num2).abs < Float::EPSILON

Now, whether Float#==(other) should make this check might be worth
thinking about, but I really have no opinion on the matter–I’m used to
not comparing floats like this.

Dan

2007/8/31, doug meyer [email protected]:

There is also a class called BigDecimal (or something like that) if
you want to have really accurate numbers and no floating-point errors.

+1

Peña wrote:
–snip–

Unfortunately, there is no easy solution to this problem. Here is a
catalog of often proposed solutions and why they do not work:

1 (proposed by doug meyer in this thread) Always use
(x-y).abs < Float::EPSILON
as a test for equality.

This won’t work because the rounding error can easily get bigger than
Float::EPSILON, especially when dealing with numbers that are bigger
than unity. e.g.
y = 100.1 + 0.3
y - 100.4 # => -1.421e-14, while Float::EPSILON = 2.22e-16

2 Always use
(x-y).abs < (x.abs + y.abs) * Float::EPSILON
as a test for equality.

Better than the first proposal, but won’t work if the rounding error
gets too large after a complex computation.
In addition, (1) and (2) suffer from the problem that x==y and y==z do
not imply x==z.
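A sketch of proposal (2) in Ruby, using Michael’s own numbers (the method name relatively_equal? is mine):

```ruby
# Proposal (2): scale the tolerance by the magnitude of the operands.
def relatively_equal?(x, y)
  (x - y).abs < (x.abs + y.abs) * Float::EPSILON
end

y = 100.1 + 0.3
puts((y - 100.4).abs < Float::EPSILON)  # => false -- proposal (1) fails here
puts relatively_equal?(y, 100.4)        # => true  -- proposal (2) still works
```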

3 Use BigDecimal

This only shifts the problem a few decimal places down, and tests for
equality will fail as with the normal floats.

4 Use Rationals

Works if you only have to deal with rational operations. But doesn’t
solve the following
x = sqrt(2)
y = x + 1
x + 0.2 == y - 0.8 # => false
In addition, rational arithmetic can produce huge numbers pretty fast,
and this will slow down computations enormously.
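For purely rational operations the exactness is easy to see (on the thread-era Ruby 1.8 this needed require 'rational'; Rational has been built in since 1.9):

```ruby
# Decimal fractions are exact as Rationals, so the original example works:
a = Rational(5, 100) + Rational(1, 100)
puts(a == Rational(6, 100))  # => true
puts a                       # prints 3/50, the reduced form of 6/100
```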

5 Use a symbolic math package

This could in theory solve the issue with equality, but in practice
there is no way to decide that two symbolic representations of a number
are the same, like
1 / (sqrt(2) - 1) == sqrt(2) + 1
Also, very, very slow.

6 Use interval arithmetic

Gives you strict bounds on your solution, but can’t answer x==y.

Summing up, when using floating point arithmetic there is no one true
way.
There is no substitute for understanding numbers and analyzing your
problem.

HTH,

Michael


Michael U. wrote:

6 Use interval arithmetic

Gives you strict bounds on your solution, but can’t answer x==y.

Summing up, when using floating point arithmetic there is no one true way.
There is no substitute for understanding numbers and analyzing your
problem.

Well … OK … but …

This whole floating-point thing comes up here on a weekly basis, and
I’ll bet it comes up on all the other language mailing lists too. No
matter how many times you repeat this, no matter how many web sites
explaining floating point arithmetic you point people to, etc., you are
still going to get people who don’t know how it works and have
expectations that aren’t realistic. An awful lot of calculators have
been built using decimal arithmetic just because there are a few fewer
“anomalies” that need to be explained.

People like me who do number crunching for a living know all this stuff
inside and out. I actually learned the basics of scientific computing in
scaled fixed-point arithmetic, and it’s only been in recent years (since
the Pentium, in fact) that just about every computer you’re likely to
touch has had floating point hardware. Before that, you were likely to
be dealing with slow and inaccurate libraries emulating the hardware
unless you were in a scientific research environment. And it’s also been
only a few more years since nearly all new architectures supported
(mostly) the IEEE floating point standard.

Before that, it was chaos – most 32-bit floating point arithmetic was
unusable except for data storage, the reigning supercomputers had
floating point units optimized for speed at the expense of correctness,
you actually had to pay for good math libraries and whole books of
garbage number crunching algorithms were popular best-sellers. In short,
even the folks who knew very well how it should be done made both
necessary compromises and serious mistakes. It took some brave souls
like William Kahan several years to get some of the more obvious garbage
out of “common practice”.

So give the newbies a break on this issue – the professionals have only
been doing it mostly right since about 1990. :)


Daniel DeLorme wrote:

Seriously though, since floating point calculations are approximative to
start with, what would be wrong with making them more intuitive
approximations? IMHO Float should use the above algorithm for == and
reserve normal floating point arithmetic for eql?

Daniel

Michael wrote very convincingly that there is no simple solution that
will work in all cases. I’m convinced, at least. If there is no solution
that works 100% of the time, we can’t give the illusion that there is.
To do so would be to teach bad programming practices to newcomers, and
that’s not fair.

The current “==” is okay because it works the way a moderately
experienced programmer would expect. A perfect “==” that could deal with
floats would be even better, but we aren’t gonna get that. A “==” that
seems like magic and almost always works is really pretty dangerous.

Dan

Michael U. wrote:

Better than the first proposal, but won’t work if the rounding error
gets too large after a complex computation.
In addition, (1) and (2) suffer from the problem that x==y and y==z do
not imply x==z.

But it would fix 99% of problems. It would be worth it just for the sake
of reducing those questions on the list :P

Seriously though, since floating point calculations are approximative to
start with, what would be wrong with making them more intuitive
approximations? IMHO Float should use the above algorithm for == and
reserve normal floating point arithmetic for eql?

Daniel

Hi,

On Friday, 31 Aug 2007 at 17:03:38 +0900, Daniel DeLorme wrote:

Unfortunately, there is no easy solution to this problem. Here is a
catalog of often proposed solutions and why they do not work:

But it would fix 99% of problems. It would be worth it just for the sake of
reducing those questions on the list :P

Seriously though, since floating point calculations are approximative to
start with, what would be wrong with making them more intuitive
approximations? IMHO Float should use the above algorithm for == and
reserve normal floating point arithmetic for eql?

I always liked being forced to decide which dimensions are countable
and which are continuous. This made my programming style much
clearer.

Bertram

From: Robert K. [mailto:[email protected]]

And who decides about the size of epsilon and which algorithm to
choose? There is no one size fits all answer to that hence leaving
Float#== the way it is (i.e. compare for exact identical values) is
the only viable option. Otherwise you would soon see similar
questions on the list, i.e., “how come it sometimes works and
sometimes it doesn’t”. And they become more difficult to answer as
the cases are likely less easy to spot / explain.

indeed :(

maybe i’m not “realistic” but i thought

0.05 + 0.01 == 0.06 => false

was “unrealistic” enough for simple, plain 2-decimal arithmetic.
Even people with zero knowledge of computers would laugh at it (yes,
try explaining it to your wife or kids, eg).

For simple math (eg those dealing w money):
i can live w slowness in simple math (quite a paradox if you ask me).
i can live w 1/3 == 0.3333333333 => false
or that sqrt(2) == 1.4142135624 => false
i use bigdecimal. bigdecimal handles 0.05 + 0.01 == 0.06 => true
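For example (BigDecimal takes the decimal string, so no binary rounding ever happens):

```ruby
require 'bigdecimal'

# Each literal is given as a string, so it is stored as exact decimal digits.
sum = BigDecimal("0.05") + BigDecimal("0.01")
puts(sum == BigDecimal("0.06"))  # => true
```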

For complex math,
i can live w slowness (no question there).
bigdecimal easily handles sqrt(2) at 100 digits: BigDecimal("2").sqrt(100) =>
#<BigDecimal:b7d74094,'0.1414213562 3730950488 0168872420 9698078569
6718753769 4807317667 9737990732 4784621070 3885038753 4327641572
7350138462 309122925E1',124(124)>
so yes, i still use bigdecimal for complex math.

so regardless of whether it’s simple or complex (or “highly precise” or
not), i use bigdecimal. counting and looping otoh are no problem for me
since fixnum/bignum handle this flawlessly. (Also, note that big RDBMSs
like Oracle and PostgreSQL use BCD and fixed-point math for numerics.)

So, my question probably is (maybe this could be addressed to Matz): How
can i make ruby use a particular arithmetic, like bigdecimal eg, so that
literals like 1.05, and operations like 1+1.01 are now handled as
bigdecimals.

thank you and kind regards -botp

2007/8/31, Daniel DeLorme [email protected]:

Seriously though, since floating point calculations are approximative to
start with, what would be wrong with making them more intuitive
approximations? IMHO Float should use the above algorithm for == and
reserve normal floating point arithmetic for eql?

And who decides about the size of epsilon and which algorithm to
choose? There is no one size fits all answer to that hence leaving
Float#== the way it is (i.e. compare for exact identical values) is
the only viable option. Otherwise you would soon see similar
questions on the list, i.e., “how come it sometimes works and
sometimes it doesn’t”. And they become more difficult to answer as
the cases are likely less easy to spot / explain.

Kind regards

robert

On 8/31/07, Peña, Botp [email protected] wrote:

maybe i’m not “realistic” but i thought

0.05 + 0.01 == 0.06 => false

was “unrealistic” enough for a simple and plain 2 decimal arithmetic. Even people w zero-know on computers would laugh about it (yes, try explaining it to your wife or kids, eg).
This seems indeed a very valid argument at first sight.
But I feel that it is not the case.
Ruby has Integers and Floats; it does not have fixed-digit decimals,
and that is all there is to discuss here. As long as we are discussing
Floats, Michael is just dead right. Now, saying that we should have
something which delivers
0.05 + 0.01 == 0.06 => true
is a slightly different issue.
Personally I do not miss it, because if I want decimals to be precise
to n digits I will just multiply by 10**n; at least the precision is
clear then.
But that is a matter of taste, I guess.
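Sketching the 10**n idea for money (n = 2, i.e. integer cents; the variable names are mine):

```ruby
# Keep money as integer cents; integer arithmetic is exact.
cents_a = 5   # represents 0.05
cents_b = 1   # represents 0.01
total = cents_a + cents_b
puts(total == 6)                    # => true
puts format("%.2f", total / 100.0)  # floats are used only for display
```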
Cheers
Robert

2007/8/31, Peña, Botp [email protected]:

maybe i’m not “realistic” but i thought

0.05 + 0.01 == 0.06 => false

was “unrealistic” enough for a simple and plain 2 decimal arithmetic. Even people w zero-know on computers would laugh about it (yes, try explaining it to your wife or kids, eg).

That’s probably the exact reason why it’s not your wife or kids who
write software, but people who are (hopefully) experts. :) If you study
computer sciences you’ll typically hit the topic of numeric issues at
some point.

so regardless of whether it’s simple or complex (or “highly precise” or not), i use bigdecimal. counting and looping otoh are no problem for me since fixnum/bignum handle this flawlessly. (Also, note that big RDBMSs like Oracle and PostgreSQL use BCD and fixed-point math for numerics.)

So, my question probably is (maybe this could be addressed to Matz): How can i make ruby use a particular arithmetic, like bigdecimal eg, so that literals like 1.05, and operations like 1+1.01 are now handled as bigdecimals.

Well, you could provide your formula as strings and convert it to
something that creates BigDecimals along the way, like

irb(main):015:0> "0.01+0.05".gsub(%r{\d+(?:\.\d*)?}, "BigDecimal.new('\\&')")
=> "BigDecimal.new('0.01')+BigDecimal.new('0.05')"
irb(main):016:0> eval("0.01+0.05".gsub(%r{\d+(?:\.\d*)?}, "BigDecimal.new('\\&')"))
=> #<BigDecimal:7ff6dd60,'0.6E-1',4(12)>

note this is of course not a proper solution since the regexp does not
match all valid floats

Kind regards

robert

Hi,
Try converting to strings e.g:
irb(main):004:0> (0.05 + 0.01).to_s == 0.06.to_s
=> true

BR
Davor

On 8/30/07, doug meyer [email protected] wrote:

There is also a class called BigDecimal (or something like that) if
you want to have really accurate numbers and no floating-point errors.

A warning on things like BigDecimal.
Unless I’m mistaken it’s still stored in two’s complement, which means
you’ll still end up with the same sort of floating point problems
(albeit further down), and the same numbers that you can’t express
exactly with a normal float, can’t be expressed exactly with a
BigDecimal.

You’d think some fixed point math libraries would help, but be
careful, because many of those also store in two’s complement.

–Kyle


Bertram S. wrote:

approximations? IMHO Float should use the above algorithm for == and
reserve normal floating point arithmetic for eql?

I always liked being forced to decide which dimensions are countable
and which are continuous. This made my programming style much
clearer.

Bertram

Well … since everything in computing is countable … :)

But seriously, real computing on real digital machines involves
translation from infinite and/or continuous semantics to very large
finite discrete processes. About the only thing that’s truly infinite is
the time it takes to complete

while true do
end

But there are some techniques not terribly well known that improve on
floating point arithmetic as normally implemented in hardware. They’re
too expensive for mass-market hardware, so they’re usually implemented
in software. Interval arithmetic has already been mentioned, but there
are some others. Try “A New Approach to Scientific Computation” by
Kulisch and Miranker.

Most of these are historical curiosities these days because of the
widespread distribution of high-performance open-source libraries for
exact and symbolic computation, such as GiNaC, GMP, CLN, etc. And it’s
pretty easy using SWIG to interface one or more of these to Ruby.


Robert K. wrote:

If you study computer sciences you’ll typically hit the topic of
numeric issues at some point.

Actually, unless you’re a (hard) science or engineering major, you
probably won’t. Numerical analysis/methods aren’t really considered part
of “computer science”. Computer science is mostly about discrete
mathematics, data structures, programming languages and their
interpreters and compilers, etc. And you probably won’t get it in a
“software engineering” program either. Applied mathematics is your best
shot, I think.
