# Floating point subtraction question

#1

Hi,

I’m a bit confused with some simple floating point arithmetic:

% ruby -e 'puts "#{(1.2 - 1.0) == 0.2}"'
false

% ruby -e 'puts "#{(1.5 - 1.0) == 0.5}"'
true

I’m sure there’s a perfectly logical reason for this, but I think I’ve
been staring at it for too long to see what it is.

Can anyone shed some light on this?

Nick

ps:
% ruby -v
ruby 1.8.3 (2005-06-23) [i486-linux]

#2

On Nov 27, 2005, at 3:41 PM, Nicholas R. wrote:

I’m a bit confused with some simple floating point arithmetic:

% ruby -e 'puts "#{(1.2 - 1.0) == 0.2}"'
false

% ruby -e 'puts "#{(1.5 - 1.0) == 0.5}"'
true

Slim:~ gavinkistner$ irb
irb(main):001:0> 1.2-1.0
=> 0.2
irb(main):002:0> (1.2-1.0)==0.2
=> false
irb(main):003:0> "%.31f" % [1.2-1.0]
=> "0.1999999999999999555910790149937"
irb(main):004:0> "%.31f" % [0.2]
=> "0.2000000000000000111022302462516"
irb(main):005:0> "%.31f" % [1.2]
=> "1.1999999999999999555910790149937"

Because many decimal fractions cannot be represented precisely as sums
of powers of 2.

For more info, try a plethora of online resources on floating point
math, such as:
http://en.wikipedia.org/wiki/Floating_point#Problems_with_floating-point

#3

On Nov 27, 2005, at 5:41 PM, Nicholas R. wrote:

I’m a bit confused with some simple floating point arithmetic:

% ruby -e 'puts "#{(1.2 - 1.0) == 0.2}"'
false

% ruby -e 'puts "#{(1.5 - 1.0) == 0.5}"'
true

I’m sure there’s a perfectly logical reason for this, but I think I’ve
been staring at it for too long to see what it is.

The short answer is that decimal literals like 1.2 cannot be
represented exactly as binary floating point values, so you get
all sorts of rounding errors. Take a look at
http://babbage.cs.qc.edu/courses/cs341/IEEE-754.html

In general it is a mistake to use == to compare floating point values
because of this mismatch. An alternative approach is to ask whether the
difference between the two values is less than some small delta:

use abs(a - b) < delta
for some suitable (small, near-zero) value of delta

For all the gory details read “What Every Computer Scientist Should
Know About Floating-Point Arithmetic”:
http://docs.sun.com/source/806-3568/ncg_goldberg.html
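In Ruby the comparison reads slightly differently, since `abs` is a method on the number rather than a free function. A small helper along those lines might look like this (the name `approx_equal?` and the default delta are just for illustration):

```ruby
# Compare two floats for "practical" equality within a small delta.
def approx_equal?(a, b, delta = 1e-9)
  (a - b).abs < delta
end

puts approx_equal?(1.2 - 1.0, 0.2)  # => true
puts((1.2 - 1.0) == 0.2)            # => false
```

The right delta depends on the magnitude of the values being compared; for numbers far from 1.0 a relative tolerance is usually a better choice than an absolute one.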

#4

John W. Kennedy wrote:

where epsilon is some really small number that serves for your purposes
as “practically equal”.

(IBM mainframes have hardware to handle decimal fractions directly, and
a very few languages – COBOL, RPG, PL/I, and Ada '95 come to mind –
have the extra features to make use of it. Some other languages, Java,
for example, include libraries to help you do the “calculate in cents”
trick. Ruby, however, does not at present include either facility.)

BASIC on the 8-bit Atari used binary-coded decimal.
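For what it’s worth, Ruby’s standard library does ship a `bigdecimal` extension that represents decimal fractions exactly; a minimal sketch:

```ruby
require 'bigdecimal'

# BigDecimal stores decimal digits directly, so 1.2 - 1.0 really is 0.2.
diff = BigDecimal("1.2") - BigDecimal("1.0")
puts diff == BigDecimal("0.2")  # => true
```

The trade-off is speed: BigDecimal arithmetic is done in software rather than by the floating point hardware.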

#5

Nicholas R. wrote:

I’m sure there’s a perfectly logical reason for this, but i think i’ve
been staring at it for too long to see what it is.

Can anyone shed some light on this?

Nick

ps:
% ruby -v
ruby 1.8.3 (2005-06-23) [i486-linux]

This has nothing to do with Ruby, itself, but is a basic characteristic
of nearly all languages and nearly all computers.

Remember how you can’t make 1/3 a decimal fraction, because it’s
0.3333333333333 carried out to infinity, so that if you add
1/3 + 1/3 + 1/3, you get 0.9999999 carried out to infinity, instead of
1.0? Well, computers (almost always) use binary fractions, and you
can’t make 1/10 a binary fraction, because it comes out as
0.000110011001100110011 carried out to infinity, and so you get the
same kind of problem.
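You can watch the 1/10 problem accumulate in Ruby itself: adding 0.1 ten times does not land exactly on 1.0.

```ruby
# Each 0.1 carries a tiny binary rounding error; ten of them add up.
sum = 0.0
10.times { sum += 0.1 }
puts sum == 1.0          # => false
printf("%.20f\n", sum)   # shows how far off the total is
```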

If you are doing science or engineering, it doesn’t matter, because
things in the real world don’t come out as neat decimal fractions,
anyway. But you do want to avoid checking for equality. Instead of

```ruby
x == y
```

look for

```ruby
(x - y).abs < epsilon
```

where epsilon is some really small number that serves for your purposes
as “practically equal”.

If, on the other hand, you are working with money, don’t use fractions.
Calculate in cents (or centimes, or new pence, or whatever), and only
divide by 100 (or whatever) at the last moment, when you’re printing
the result.