Weird floating point output

Folks,

While we had a discussion around the magic number 0.06, I tried to
output arbitrary floating point numbers and was puzzled about the
imprecision of my ruby executable (or a library it uses for this
task) on my Windows machine (Athlon XP, Ruby 1.8.6 win32 installer
from ruby-lang.org):

C:>ruby --version
ruby 1.8.6 (2007-03-13 patchlevel 0) [i386-mswin32]
C:>irb
irb(main):001:0> "%.60f" % 0.1
=> "0.100000000000000010000000000000000000000000000000000000000000"
irb(main):002:0> 0.1.to_s
=> "0.1"

I mean, irrespective of wrongly outputting 0.1 as “0.01” using
.to_s, it cannot even correctly calculate the number using floating
point printf-like string substitution! :(

When I let my Athlon64 calculate the same thing (Ruby 1.8.6 on
gentoo, 64 bit), it outputs the following which is correct:

$ ruby --version
ruby 1.8.6 (2007-03-13 patchlevel 0) [x86_64-linux]
$ irb
irb(main):001:0> "%.60f" % 0.1
=> "0.100000000000000005551115123125782702118158340454101562500000"

Is this a 32-bit-ism? Using the old 1.8.4 cygwin binary which came
along with cygwin a long time ago, I get the following output which
is not correct but a lot more precise than the output from 1.8.6-25:

$ /usr/bin/ruby.exe --version
ruby 1.8.4 (2005-12-24) [i386-cygwin]
$ irb
irb(main):001:0> "%.60f" % 0.1
=> "0.100000000000000005551115123125782702118158000000000000000000"

Just for comparison: All versions output the same if I give them the
precise binary representation of the number:

irb(main):001:0> [-4,-5,-8,-9,-12,-13,-16,-17,-20,-21,-24,-25,
-28,-29,-32,-33,-36,-37,-40,-41,-44,-45,-48,-49,-52,-53,-55
].inject(BigDecimal("0")) {|sum,ex| sum+BigDecimal.new("2.0")**ex}
=> #<BigDecimal:4bd9e90,'0.1000000000 0000000555 1115123125
7827021181 5834045410 15625E0',56(96)>

So, result:

  1. My AMD64-based self-compiled version works correctly. Absolute error = 0.

  2. The cygwin-based ruby from 2005 seems to put more effort into the
    calculation but produces an error of about 3.4e-43. Note that for
    the exact number 0.1, the correct floating point representation has
    an error of <2.0**-57, which is about 6.9e-18, so we can consider
    this error “OK”, although I specifically requested 60 decimal
    digits which it could not supply.

  3. The pure 32-bit Windows version distributed by ruby-lang.org
    gets the output completely wrong. It produces an error of 1e-17,
    which is more than the floating point representation itself
    introduces – for no good reason! While I specifically asked for 60
    digits (whereas 55 would be sufficient), I got only 17.

Note that any floating point number (based on IEEE 754) can be
exactly represented in decimal notation using “enough” digits
– here, 55.
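This claim is easy to check with exact arithmetic. A minimal sketch (assuming a modern Ruby where Float#to_r exists): it recovers the exact decimal value of the double nearest 0.1, with no printf involved:

```ruby
# Recover the exact decimal expansion of the double nearest 0.1.
# Float#to_r converts the stored bits to an exact Rational (no rounding).
exact = 0.1.to_r                     # Rational(3602879701896397, 2**55)
digits = (exact * 10**55).to_i.to_s  # 55 decimal digits suffice here
puts "0." + digits.rjust(55, "0")
# => 0.1000000000000000055511151231257827021181583404541015625
```

The denominator is 2**55, so the decimal expansion terminates after exactly 55 digits, matching the count above.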

Is there something I did wrong? Maybe there are some constants I can
tweak in the Float class so I get more precise values?

  • Matthias

Hi –

On Tue, 4 Sep 2007, Matthias Wächter wrote:

C:>irb
irb(main):001:0> "%.60f" % 0.1
=> "0.100000000000000010000000000000000000000000000000000000000000"
irb(main):002:0> 0.1.to_s
=> "0.1"

I mean, irrespective of wrongly outputting 0.1 as “0.01” using
.to_s, it cannot even correctly calculate the number using floating
point printf-like string substitution! :(

Where does it output “0.01”?

David

On 03.09.2007 19:51, [email protected] wrote:

On Tue, 4 Sep 2007, Matthias Wächter wrote:

irb(main):002:0> 0.1.to_s
=> "0.1"

I mean, irrespective of wrongly outputting 0.1 as “0.01” using
.to_s, it cannot even correctly calculate the number using floating
point printf-like string substitution! :(

Where does it output “0.01”?

Sigh – I should proofread everything.

The sentence should start with “I mean, irrespective of wrongly
outputting 0.1 as “0.1” using .to_s, …”

What I mean is that any float stored as an IEEE 754 double (64 bit),
like 0.1 with hex notation 0x3FB999999999999A, should have a distinct
string representation such that (a==b) == (a.to_s==b.to_s). Note
that the next higher double, 0.1+2.0**-56 with hex notation
0x3FB999999999999B, has the same .to_s representation, which is not good.
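For what it's worth, a quick check of this property (on a modern Ruby, where Float#to_s produces a shortest round-trip representation, unlike the 1.8 interpreters discussed here):

```ruby
# The two neighbouring doubles from the text above.
a = 0.1
b = a + 2.0**-56   # exactly one ulp above 0.1, so a different double
p a == b           # => false
# In modern Ruby, Float#to_s is round-trip safe, so the strings differ:
p a.to_s           # => "0.1"
p b.to_s           # => "0.10000000000000002"
```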

But that is discussed in another thread.

  • Matthias

Stefan R. schrieb:

Matthias Wächter wrote:

What I mean is that any float stored as an IEEE 754 double (64 bit),
like 0.1 with hex notation 0x3FB999999999999A, should have a distinct
string representation such that (a==b) == (a.to_s==b.to_s). Note
that the next higher double, 0.1+2.0**-56 with hex notation
0x3FB999999999999B, has the same .to_s representation, which is not good.
I disagree. As long as you’re calculating, you are working with floats
anyway, so you have maximum precision. Comparing floats should be done
using delta comparison anyway too.

Which, as discussed in the 0.06 thread, is hard to accomplish
nevertheless. Anyway.

Comparing floats as strings is, to put it mildly, stupid.

It was the core meaning of “if two numbers are not identical, I want
to see a difference in the string representation, too”. Nothing more.

For any other (primary) object/class this holds true, but not for
floats. This is unacceptable, period. Suppose Array#inspect only
output the first three and the final element, putting a “…” in between.

For printing a result, it only makes sense to print at
maximum precision in very, very rare cases. In most cases,
printing with 5 or 6 places is enough. If you need more, you can
always resort to sprintf/String#%.

Which brings me/us back from this argument to the primary question of
how precisely sprintf/String#% can output strings. Do you have any
feelings about the top posting as well?

Does no one care that, on some platforms, Ruby is not able to
correctly output floating point numbers at the requested precision?

what does ruby -e 'print "%.60f" % 0.1' output on your platform?

Again, it’s not a matter of taste how a floating point value is
converted back to decimal. This can be done unambiguously, especially
if enough positions after the decimal point are requested, as in
“%.60f”.

  • Matthias

Matthias Wächter wrote:

Which, as discussed in the 0.06 thread, is hard to accomplish nevertheless.

Can you give some examples of pairs of floats that are hard
to compare using delta comparison?

Matthias Wächter wrote:

On 03.09.2007 19:51, [email protected] wrote:

On Tue, 4 Sep 2007, Matthias Wächter wrote:

irb(main):002:0> 0.1.to_s
=> "0.1"

I mean, irrespective of wrongly outputting 0.1 as “0.01” using
.to_s, it cannot even correctly calculate the number using floating
point printf-like string substitution! :(

Where does it output “0.01”?

Sigh – I should proofread everything.

The sentence should start with “I mean, irrespective of wrongly
outputting 0.1 as “0.1” using .to_s, …”

What I mean is that any float stored as an IEEE 754 double (64 bit),
like 0.1 with hex notation 0x3FB999999999999A, should have a distinct
string representation such that (a==b) == (a.to_s==b.to_s). Note
that the next higher double, 0.1+2.0**-56 with hex notation
0x3FB999999999999B, has the same .to_s representation, which is not good.

But that is discussed in another thread.

  • Matthias

I disagree. As long as you’re calculating, you are working with floats
anyway, so you have maximum precision. Comparing floats should be done
using delta comparison anyway, too. Comparing floats as strings is, to
put it mildly, stupid. For printing a result, it only makes sense to
print at maximum precision in very, very rare cases. In most cases,
printing with 5 or 6 places is enough. If you need more, you can
always resort to sprintf/String#%.

Regards
Stefan

On Sep 3, 5:32 pm, Stefan R. [email protected] wrote:

using delta comparison anyway too.
end

(0.01 + 0.05).in_delta?(0.06) # => true
0.01 + 0.05 =~ 0.06 # => true # can’t set the delta, is similarly evil
as ==

class Numeric
  def in_delta?(other, delta=Float::EPSILON*16) # choose the delta wisely
    (self-other).abs < delta
  end
  alias =~ in_delta?
end

p 1.23456789e4.in_delta?( 1.234567892e4 )
p 1.23456789e-1.in_delta?( 1.234567892e-1 )
p 1.23456789e-6.in_delta?( 1.234567892e-6 )
p 1.23456789e-11.in_delta?( 1.234567892e-11 )

==== output ====
false
false
true
true

Matthias Wächter wrote:

Stefan R. schrieb:

Matthias Wächter wrote:

What I mean is that any float stored as a IEEE 754 double (64 bit),
like 0.1 with hex notation 0x3FB999999999999A should have a distinct
string representation such that (a==b) == (a.to_s==b.to_s). Note
that the next higher double 0.1+2.0**-56 with hex notation
0x3FB999999999999B has the same .to_s representation, which is not good.
I disagree. As long as you’re calculating, you are working with floats
anyway, so you have maximum precision. Comparing floats should be done
using delta comparison anyway too.

Which, as discussed in the 0.06 thread, is hard to accomplish
nevertheless. Anyway.

Is it? That’s news to me.

class Numeric
  def in_delta?(other, delta=Float::EPSILON*16) # choose the delta wisely
    (self-other).abs < delta
  end
  alias =~ in_delta?
end

(0.01 + 0.05).in_delta?(0.06) # => true
0.01 + 0.05 =~ 0.06 # => true # can’t set the delta, so similarly evil
as ==

Comparing floats as strings is, to put it mildly, stupid.

It was the core meaning of “if two numbers are not identical, I want
to see a difference in the string representation, too”. Nothing more.

And that’s where you obviously don’t understand floats. Floats are
approximations. Never test approximations for identity. That’s
destined to fail. If you want identity, use an exact system, as
mentioned often enough by now I think – e.g. Rational, BigDecimal or
whatever else floats your boat.
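As a concrete illustration of that advice (a small sketch using only the standard library): the 0.06 example from the start of the thread fails under Float but holds under Rational and BigDecimal:

```ruby
require "bigdecimal"

# Binary floats: 0.05 + 0.01 is not the double nearest to 0.06.
p 0.05 + 0.01 == 0.06                                           # => false

# Exact systems represent the decimal values precisely.
p Rational(5, 100) + Rational(1, 100) == Rational(6, 100)       # => true
p BigDecimal("0.05") + BigDecimal("0.01") == BigDecimal("0.06") # => true
```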

Does no one care that, on some platforms, Ruby is not able to
correctly output floating point numbers at the requested precision?

Ruby? Are you sure you found the culprit there? Are you sure Ruby
doesn’t just wrap whatever float libs are available on the platform?
Because in that case, it’s your platform that is to blame, not Ruby. I
don’t know, and I won’t go into the source to verify it, but I
wouldn’t just go around blaming Ruby without being sure.
I understood that floats are approximations, and it doesn’t bother me
much (maybe it would if I did scientific calculations - but then I’d
probably know a bit more about it and/or use different means) whether
the approximation is 0.000000000000010% or 0.000000000000006% off
(that’s the difference in your 0.1 example - notice something?).

what does ruby -e 'print "%.60f" % 0.1' output on your platform?
Why would it matter?

Again, it’s not a matter of taste how a floating point value is
converted back to decimal. This can be done unambiguously, especially
if enough positions after the decimal point are requested, as in
“%.60f”.

It’s just rather pointless, as at some point in your calculation you
already lost precision.
So now let me ask you: what do you do that you need 60 places? Are you
sure floats are the right tool?

Regards
Stefan

On Sep 3, 9:02 pm, Stefan R. [email protected] wrote:

def in_delta?(other, delta=Float::EPSILON*16) # choose the delta wisely
[…] the quotient of the two numbers (its proximity to 1).

Yes, having to manually choose the delta for each comparison is not
good.

Here is a modification of a method mentioned
by Michael U. earlier.

def eq( x, y )
  x == y or
    (x-y).abs < (x.abs + y.abs) * Float::EPSILON * 4
end

DATA.each{|s|
  strings = s.chomp.split(";")
  floats = strings.map{|s| eval(s) }
  puts strings.join( " == " ) +
    " : #{ floats[0]==floats[1] } #{ eq( *floats ) }"
}

__END__
(0.05+0.01);0.06
(0.34+0.01);0.35
0.0;0.0
1.23456789e9;1.234567892e9
1.23456789e4;1.234567892e4
1.23456789e-1;1.234567892e-1
1.23456789e-6;1.234567892e-6
1.23456789e-11;1.234567892e-11
1.23456789012345e9;1.234567890123452e9
1.23456789012345e4;1.234567890123452e4
1.23456789012345e-1;1.234567890123452e-1
1.23456789012345e-6;1.234567890123452e-6
1.23456789012345e-11;1.234567890123452e-11

==== output ====
(0.05+0.01) == 0.06 : false true
(0.34+0.01) == 0.35 : false true
0.0 == 0.0 : true true
1.23456789e9 == 1.234567892e9 : false false
1.23456789e4 == 1.234567892e4 : false false
1.23456789e-1 == 1.234567892e-1 : false false
1.23456789e-6 == 1.234567892e-6 : false false
1.23456789e-11 == 1.234567892e-11 : false false
1.23456789012345e9 == 1.234567890123452e9 : false true
1.23456789012345e4 == 1.234567890123452e4 : false true
1.23456789012345e-1 == 1.234567890123452e-1 : false true
1.23456789012345e-6 == 1.234567890123452e-6 : false true
1.23456789012345e-11 == 1.234567890123452e-11 : false true

William J. wrote:

On Sep 3, 5:32 pm, Stefan R. [email protected] wrote:

using delta comparison anyway too.
end

(0.01 + 0.05).in_delta?(0.06) # => true
0.01 + 0.05 =~ 0.06 # => true # can’t set the delta, is similarly evil
as ==

class Numeric
  def in_delta?(other, delta=Float::EPSILON*16) # choose the delta wisely
    (self-other).abs < delta
  end
  alias =~ in_delta?
end

p 1.23456789e4.in_delta?( 1.234567892e4 )
p 1.23456789e-1.in_delta?( 1.234567892e-1 )
p 1.23456789e-6.in_delta?( 1.234567892e-6 )
p 1.23456789e-11.in_delta?( 1.234567892e-11 )

==== output ====
false
false
true
true

That’s why: choose your delta wisely. It is also why =~ is only
marginally less surprising than == and still evil.
Maybe one could improve it a bit by deriving the delta from the
magnitude (hello, dear log). Another approach might be to use the
quotient of the two numbers (its proximity to 1).
With both suggestions I have no idea about hazardous situations; maybe
somebody with either more recent CS training or daily use of floats
can shed some light on that. Also, it is too late at night to have
clear thoughts about that.
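The magnitude idea can be sketched as a relative comparison, scaling the tolerance by the larger operand (rel_eq? is a hypothetical helper name, and the factor 8 is an arbitrary choice):

```ruby
# Relative comparison: scale the tolerance by the larger magnitude,
# so the test behaves the same across exponent ranges.
def rel_eq?(x, y, tol = Float::EPSILON * 8)
  return true if x == y                    # also covers 0.0 == 0.0
  (x - y).abs <= tol * [x.abs, y.abs].max
end

p rel_eq?(0.05 + 0.01, 0.06)               # => true
p rel_eq?(1.23456789e4,   1.234567892e4)   # => false
p rel_eq?(1.23456789e-11, 1.234567892e-11) # => false, unlike the fixed delta
```

Unlike the fixed-delta version above, the same relative difference gives the same verdict at e4 and at e-11.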

Regards
Stefan

On 9/4/07, Matthias Wächter [email protected] wrote:

Now that we know the precise value of the float back in decimal, we
can make three cases:

2. I am not interested in the high precision of converting double
back to a decimal string representation, but I am interested in a
string representation that allows me, like for any other object, to
distinguish two different binary float values.

It is not true that “for any other object” a to_s call provides a
string representation which will allow disambiguation of all distinct
values. That’s not what to_s is for - to_s is about what to use in the
default case when a string representation is called for. I personally
would be put off if 0.1.to_s returned anything but “0.1” by default.

  3. I am not interested in high-precision output as I know that
    arbitrary decimal values are handled using approximations after all,
    and I don’t want to be bothered with this level of detail.

to_s is for just this sort of default case, which covers the vast,
vast majority of use of the language.

If you want to start talking about Float#inspect, you may find more
listeners. :)

-A

William J. wrote:

[…]
floats = strings.map{|s| eval(s) }
puts strings.join( " == " ) +
  " : #{ floats[0]==floats[1] } #{ eq( *floats ) }"
}
–snip–

Please note that the formula I gave has a bug in it:
The condition

(x-y).abs < (x.abs + y.abs) * Delta

is always false if y == 0 (and Delta < 1 :)
Better use something along the lines of

(x-y).abs < (x.abs + y.abs + 1.0) * Delta

Still, there is no Delta that would fit all applications. The one
you gave may be good for the examples you had, but I doubt it would
be fitting for more complex computations (where the rounding error
gets larger than 4*Float::EPSILON).

Regards,

Michael

On Sep 12, 5:13 am, Michael U. [email protected] wrote:

[…]
(x-y).abs < (x.abs + y.abs) * Delta

is always false if y == 0 (and Delta < 1 :)
Better use something along the lines of

(x-y).abs < (x.abs + y.abs + 1.0) * Delta

This works pretty well, but seems to have a problem
with very small numbers. In the program below, this
method (eq1) incorrectly reports that these are equal:

1.23456789e-11 == 1.234567892e-11
1.23456789e-16 == 1.234567892e-16

The function eq2 is more complex, but it seems more
consistent.

def eq1( x, y, epsilon = Float::EPSILON * 8 )
  x == y or
    (x-y).abs < (x.abs + y.abs + 1.0) * epsilon
end

def eq2( x, y, epsilon = Float::EPSILON * 8 )
  return true if x == y
  return true if
    (i = [x,y].index( 0.0 )) && ( [x,y][1-i].abs < epsilon )
  return (x-y).abs < (x.abs + y.abs) * epsilon
end

DATA.each{|s|
  strings = s.chomp.split(/ *; */)
  equality = ( strings.pop == "=" ).to_s
  floats = strings.map{|s| eval(s) }
  results = [ :eq1, :eq2 ].map{|fun|
    send( fun, *floats ).to_s }.map{|s|
    if s == equality; s else s.upcase end }
  puts strings.map{|s|
    s +
      if p = s.index('e'); " " * (4 - (s.size - p))
      else ""
      end }.
    join( " == " ).rjust(45) + " " + results.
    map{|s| s.ljust(5)}.join( " " )
}

__END__
(0.05+0.01) ; 0.06 ; =
(0.34+0.01) ; 0.35 ; =
0.0 ; 0.0 ; =
1e-15 ; 0.0 ; =
0.0 ; 1e-15 ; =
1.23456789e14 ; 1.234567892e14 ; !
1.23456789e9 ; 1.234567892e9 ; !
1.23456789e4 ; 1.234567892e4 ; !
1.23456789e-1 ; 1.234567892e-1 ; !
1.23456789e-6 ; 1.234567892e-6 ; !
1.23456789e-11 ; 1.234567892e-11 ; !
1.23456789e-16 ; 1.234567892e-16 ; !
1.23456789012345e14 ; 1.234567890123452e14 ; =
1.23456789012345e9 ; 1.234567890123452e9 ; =
1.23456789012345e4 ; 1.234567890123452e4 ; =
1.23456789012345e-1 ; 1.234567890123452e-1 ; =
1.23456789012345e-6 ; 1.234567890123452e-6 ; =
1.23456789012345e-11 ; 1.234567890123452e-11 ; =
1.23456789012345e-16 ; 1.234567890123452e-16 ; =

==== output ====
(0.05+0.01) == 0.06 true true
(0.34+0.01) == 0.35 true true
0.0 == 0.0 true true
1e-15 == 0.0 true true
0.0 == 1e-15 true true
1.23456789e14 == 1.234567892e14 false false
1.23456789e9 == 1.234567892e9 false false
1.23456789e4 == 1.234567892e4 false false
1.23456789e-1 == 1.234567892e-1 false false
1.23456789e-6 == 1.234567892e-6 false false
1.23456789e-11 == 1.234567892e-11 TRUE false
1.23456789e-16 == 1.234567892e-16 TRUE false
1.23456789012345e14 == 1.234567890123452e14 true true
1.23456789012345e9 == 1.234567890123452e9 true true
1.23456789012345e4 == 1.234567890123452e4 true true
1.23456789012345e-1 == 1.234567890123452e-1 true true
1.23456789012345e-6 == 1.234567890123452e-6 true true
1.23456789012345e-11 == 1.234567890123452e-11 true true
1.23456789012345e-16 == 1.234567890123452e-16 true true

On 04.09.2007 00:32, Stefan R. wrote:

Matthias Wächter wrote:

It was the core meaning of “if two numbers are not identical, I want
to see a difference in the string representation, too”. Nothing more.

And that’s where you don’t understand floats obviously. Floats are
approximations. Never test approximations for identity. That’s
destined to fail. If you want identity, use an exact system as mentioned
often enough by now I think. E.g. Rational, BigDecimal or whatever else
floats your boat.

Right, floats are approximations of decimals. But they stand for
themselves. A float or double is a precise representation of a
base-2-encoded floating point number. If I ask for a float for “0.25”,
I expect it to be precise, not an approximation. That is part of
IEEE 754. Certainly, for “0.1” it is an approximation, but the float
(say, double) representation of 0.1, which is only an approximation
for 0.1, converted back to decimal is precisely
“0.1000000000000000055511151231257827021181583404541015625”, period.
Please accept that fact.

Now that we know the precise value of the float back in decimal, we
can make three cases:

  1. I am interested in the high precision of converting double back
    to a decimal string representation.

Well, then use “%.55g” or something like that to get it back. But
certainly, for smaller numbers this might not be enough anyway. A
full string representation of the float representation of 1e-200
requires a format of “%.515g” to be shown completely. Interestingly,
Python allows no more than 109 decimal digits using “%.109g” and
throws an OverflowError exception from 110 digits onwards (!).

BTW: Float::MIN requires 715 digits for an exact representation.
Funny output awaits you from Python (2.4.4) when you ask for
“%.714e” % 0.1

  2. I am not interested in the high precision of converting double
    back to a decimal string representation, but I am interested in a
    string representation that allows me, like for any other object, to
    distinguish two different binary float values.

-> Here we have the point where Ruby lacks a feature: a
distinguishable string representation with minimum length (just what
Marshal.dump does). Note that 0.1.to_s doesn’t have to have 55
decimals in the output to be identifiable: for the given number 0.1,
the least significant float (i.e. double) digit has a weight of
2.0**-56, which is about 1.4e-17, so this is the magnitude where Ruby
could stop outputting the string representation.

So the double with binary representation 0x3FB999999999999A and the
next double with succeeding binary representations would have to
have these string representations:

For 0.1 (0x3FB999999999999A)
0.10000000000000000555 … exact
0.10000000000000001 distinguishable (output of Marshal.dump)

For 0.1+2.0**-56 (0x3FB999999999999B)
0.10000000000000001942 … exact
0.10000000000000002 distinguishable

For 0.1+2.0**-55 (0x3FB999999999999C)
0.10000000000000003330 … exact
0.10000000000000003 distinguishable

For 0.1+2.0**-55+2.0**-56 (0x3FB999999999999D)
0.10000000000000004718 … exact
0.10000000000000005 distinguishable (note there is no float
for an end digit of “4”)
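These “distinguishable” forms correspond to printing 17 significant digits, which is always enough to round-trip an IEEE 754 double; a quick sketch:

```ruby
# 17 significant digits (DBL_DIG + 2) round-trip any IEEE 754 double.
a = 0.1 + 2.0**-56   # the double 0x3FB999999999999B from the table above
s = "%.17g" % a
p s                  # => "0.10000000000000002"
p Float(s) == a      # => true: the string picks out exactly this double
```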

  3. I am not interested in high-precision output as I know that
    arbitrary decimal values are handled using approximations after all,
    and I don’t want to be bothered with this level of detail.

Well, there is an argument there, certainly. First you take exact
decimal “numbers” and put them into variables, which converts them to
an approximated binary floating point format with a high precision of
around 17 significant digits. Then you make calculations on them,
which might increase the error, and when the result is due for
output, you use “%.#{MY_precision}f” to get a result from the
imperfect calculation. So be it.

The question, the only question here is whether .to_s should
already make assumptions on the requested output precision.

Have I overlooked an overridable constant in Float so I can make
.to_s behave exactly like Marshal.dump?

Here is a naive function that compares binary and string
representations of slightly differing values:

base = 0.1
(-60...-47).each do |ex|
  b = base
  n = (b + 2**ex)
  an, ab = [n, b].collect{|m| m.to_s}
  mn, mb = [n, b].collect{|m|
    Marshal.dump(m).sub(/...([^\000])./, '\1')}
  ea, en = [[an, ab], [n, b]].collect{|l, r|
    l == r ? "==" : "!=" }
  puts "#{b}+2^#{ex}: #{ea}, #{en}, " +
    "String: #{an}#{ea}#{ab}, " +
    "Number: #{mn}#{en}#{mb}"
end

Just run this and watch the output: the Marshal.dump (i.e., the real
float) representation differs already for 0.1+2**-56, but .to_s first
changes at 0.1+2**-50. Tell me why .to_s drops about 2 decimal digits
from the result for no good reason.

Does no one care that, on some platforms, Ruby is not able to
correctly output floating point numbers at the requested precision?
Ruby? Are you sure you found the culprit there? Are you sure Ruby
doesn’t just wrap whatever float libs are available on the
platform?

Quite possibly. I don’t mind. Actually, win32 Python cannot output
more digits either, while both cygwin Python and the one on my
gentoo box correctly output the requested precision, so it looks like
a Windows (or VC++) library “feature”.

from marshal.c:

[…]
#ifdef DBL_DIG
#define FLOAT_DIG (DBL_DIG+2)
#else
#define FLOAT_DIG 17
#endif
[…]
/* xxx: should not use system’s sprintf(3) */
sprintf(buf, "%.*g", FLOAT_DIG, d);
[…]

First thing to see: if DBL_DIG is not given, it goes for 17 digits.
Second is this nice comment – I like that one :)

Now a quick look to numeric.c:

[…]
sprintf(buf, "%#.15g", value); /* ensure to print decimal point */
[…]

Now why this? Why output only 15 digits in to_s, but use 17 digits
when marshaling? Is there any good reason for that? Is 15 what
customers ask for to hide the rounding issues of floats, or is it a
bug?
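The practical effect of 15 vs. 17 digits can be seen directly (a quick sketch): with %.15g two adjacent doubles collapse to the same string, while with %.17g they stay distinct:

```ruby
a = 0.1
b = a + 2.0**-56                    # the next representable double
p(("%.15g" % a) == ("%.15g" % b))   # => true: 15 digits cannot tell them apart
p(("%.17g" % a) == ("%.17g" % b))   # => false: 17 digits can
```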

Because in that case, it’s your platform that is to blame, not
ruby. I don’t know it, and I won’t go into the source to verify it, but
I wouldn’t just go around and blame ruby without being sure.

Noting the “xxx:” comment above makes me believe that the Ruby
developers are already aware of imprecise libraries but didn’t have
the time to fix that (i.e., write their own or use a GNU version of
it).

I understood that floats are approximations, and it doesn’t bother me
much (maybe it would if I did scientific calculations - but then I’d
probably know a bit more about it and/or use different means) whether
the approximation is 0.000000000000010% or 0.000000000000006% off
(that’s the difference in your 0.1 example - notice something?).

Great words: “I don’t mind the precision if it looks precise
enough.” Either I can rely on correct IEEE 754 support or I can’t.

Again, it’s not a matter of taste how a floating point value is
converted back to decimal. This can be done unambiguously, especially
if enough positions after the decimal point are requested, as in
“%.60f”.
It’s just rather pointless as at some time in your calculation, you lost
precision already.

Who says that? Just because you always use base-10 numbers and can
live with the approximation? Another person might use precise binary
values, i.e. powers of two, and expect the programming language, the
libraries and the floating point processor to manage and output them
precisely. Even if the thread started with 0.1, which is stored in a
float as the 55-digit value given above, I expect to get this very
value back from the programming language.
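Dyadic values (sums of powers of two) are indeed exact in IEEE 754, so this expectation is reasonable for them; a small check:

```ruby
# Powers of two and their sums are represented exactly in binary floats.
p 0.25 + 0.25 == 0.5                             # => true, no rounding anywhere
p 2.0**-10 + 2.0**-20 == (2**10 + 1) / 2.0**20   # => true, still exact
p "%.20f" % 0.25                                 # => "0.25000000000000000000"
```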

So now let me ask you: what do you do that you need 60 places? Are you
sure floats are the right tool?

Floats are good for what floats are good for. What answer do you
expect from me?

  • Matthias