Ruby 1.9+, floats, and decimal

0.2-0.1
=> 0.1

1.2-0.1
=> 1.1

1.2-1.1
=> 0.0999999999999999

gotcha!
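
For contrast, a minimal sketch of the same arithmetic done exactly with
the stdlib's BigDecimal:

require 'bigdecimal'

(BigDecimal("1.2") - BigDecimal("1.1")).to_s("F")
=> "0.1"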

ok, I know what you’re thinking. this dead horse is double dead…

in the spirit of advancing to a better computing env, shouldn’t it be
time for ruby to default to “real” decimal instead of float?

arguments:

  1. ruby uses the dot-decimal notation, which for sure is a decimal
    notation (and ruby is not c)
  2. we remove 99% of the decimal gotchas
  3. there is always #to_f. we can put it to better use (if you want
    speed and not precision).
  4. I won’t ask again and again ;)

kind regards -botp
ps: this is not thunk. LOL :))

On Sun, Apr 18, 2010 at 8:07 PM, botp [email protected] wrote:

ok, I know what you’re thinking. this dead horse is double dead…

in the spirit of advancing to a better computing env, shouldn’t it be
time for ruby to default to “real” decimal instead of float?

I think decimal floating point (as supported in the 2008 version of
IEEE 754) would be a good idea. OTOH, to reduce the potential impact
on existing code – including on performance, where binary floating
point, since it has more prevalent hardware support, is likely to
outperform in most cases – it might be better to introduce a simple
syntax for decimal floating point literals, e.g. an integer or
floating point expression with a trailing (no intervening whitespace)
"d", so that 1.0d would be treated as a decimal floating point value.
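
A true literal suffix would need parser support, but a rough
approximation is possible in plain Ruby today. A hypothetical sketch
(the #d method is made up here, not an existing API):

require 'bigdecimal'

class Numeric
  # hypothetical stand-in for a "1.0d" literal, written as 1.0.d;
  # the receiver is already a binary float, so we recover the decimal
  # the programmer wrote via its shortest round-trip representation
  def d
    BigDecimal(to_s)
  end
end

(1.2.d - 1.1.d).to_s("F")
=> "0.1"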

On Mon, 19 Apr 2010 00:24:56 -0500, Christopher D.
[email protected] wrote:

On Sun, Apr 18, 2010 at 8:07 PM, botp [email protected] wrote:

shouldn’t it be time for ruby to default to “real” decimal instead
of float?

I think decimal floating point (as supported in the 2008 version of
IEEE 754) would be a good idea.

The decimal formats in 754-2008 are storage formats only … the
math specification is still binary.

OTOH, to reduce the potential impact on existing code – including
on performance, where binary floating point, since it has more
prevalent hardware support, is likely to outperform in most cases

More prevalent than what? There is no commercial architecture that
has a decimal FPU. Even integer BCD support is rare now.

– it might be better to introduce a simple syntax for decimal
floating point literals, e.g. an integer or floating point expression
with a trailing (no intervening whitespace) "d", so that 1.0d would be
treated as a decimal floating point value.

Just adding syntax won’t help - you’d need to automagically activate
the BigDecimal class as well.
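
For illustration, the class lives in the stdlib rather than the core,
so any decimal literal syntax would have to load it implicitly:

require 'bigdecimal'   # not loaded by default

BigDecimal("1.1").class
=> BigDecimal

1.1.class
=> Float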

George

On Sun, 18 Apr 2010 22:07:34 -0500, botp [email protected] wrote:

in the spirit of advancing to a better computing env, shouldn’t it be
time for ruby to default to “real” decimal instead of float?

arguments:

  1. ruby uses the dot-decimal notation, which for sure is a decimal
    notation (and ruby is not c)
  2. we remove 99% of the decimal gotchas
  3. there is always #to_f. we can put it to better use (if you want
    speed and not precision).

Default behavior? I don’t know whether it’s a good idea or not. On
one hand, it supports the principle of least surprise for newbies …
but on the other hand it keeps legions of programmers clueless about
reality.

Lisp defaults to precise arithmetic but falls back on imprecise binary
floating point under a whole bunch of corner conditions … and
imprecision is contagious … any operation that involves an imprecise
operand produces an imprecise result. The Lisp standard committee
spent a very great deal of time defining the conditions under which
arithmetic should produce precise or imprecise results. But the rules
are numerous and confusing … I’ve seen experienced Lisp programmers
panic when they trigger an imprecise result, or when they try to
integrate Lisp with libs in other languages (C, Fortran, etc.) that
don’t have precise arithmetic.

I can see supporting parallel numeric towers - one precise and an
imprecise, performance oriented one - and letting the programmer
decide which to use. But in that case, I think there should only be
explicit conversions between them.
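
Ruby 1.9 already has most of the explicit bridges such a design would
need; a quick sketch:

require 'bigdecimal'

0.1.to_r               # exact Rational of the binary float (Float#to_r is new in 1.9)
=> (3602879701896397/36028797018963968)

Rational(1, 3).to_f    # explicit drop into the imprecise tower
=> 0.3333333333333333

BigDecimal("0.1").to_f # likewise for decimals
=> 0.1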

But I also think programmers need to understand how floating point
hardware operates because, if they become professional, sooner or
later they will have to deal with it.

George

But I also think programmers need to understand how floating point
hardware operates because, if they become professional, sooner or
later they will have to deal with it.

Every programmer must be a professional?

Those who don’t understand it may not become programmers?

Honestly, we have moved far away from the ancient times of Assembly and C.

If information is important, “programmers” will know it sooner or later
anyway.

in the spirit of advancing to a better computing env, shouldn’t it be
time for ruby to default to “real” decimal instead of float?

+1

Then newbies wouldn’t be caught again and again. More experienced
programmers would know what they’re doing and could explicitly choose
floats when speed requires it.

in the spirit of advancing to a better computing env, shouldn’t it be
time for ruby to default to “real” decimal instead of float?

I think so.

OTOH, to reduce the potential impact on existing code – including on
performance, where binary floating point, since it has more prevalent
hardware support, is likely to outperform in most cases – it might be
better to introduce a simple syntax for decimal floating point
literals, e.g. an integer or floating point expression with a trailing
(no intervening whitespace) "d", so that 1.0d would be treated as a
decimal floating point value.

I like that, though I would prefer to default to BigDecimal and require
an extra 0.0f for floats. But that’s just me.

Of course, in reality switching to BigDecimal by default would break so
much existing code I can’t imagine them ever doing it.

Re: “it causes problems when you eventually mix BD’s with floats” I
would suggest trying to overcome these problems by keeping BigDecimals
by default when the two are mixed. One can check if a float value
“matches a known decimal” by something like:

require 'bigdecimal'

def float_to_big_decimal(f)
  if ("%f" % f).to_f == f
    # this float round-trips through plain "%f" formatting,
    # so it looks like a default decimal value
    BigDecimal("%f" % f)
  else
    # otherwise keep (nearly) all of the float's binary digits
    BigDecimal("%.20g" % f)
  end
end

it should work, but doesn’t seem to work:

BigDecimal.new("0.20000000000000007")
=> #<BigDecimal:1eeee78,'0.2000000000 0000006661E0',20(28)>
Is that last BigDecimal value a bug (the 6’s)?

That being said, computer science is so entrenched in normal IEEE floats
that I doubt it is realistic for BigDecimal to become the default.
We’re stuck.

So maybe the suggestion of adding the optional post-fix, like

1.1d

would be nice, but probably nobody would use it, since it’s not the
default. Sigh.

I can see supporting parallel numeric towers - one precise and an
imprecise, performance oriented one - and letting the programmer
decide which to use. But in that case, I think there should only be
explicit conversions between them.

Maybe a command line parameter could specify if you want to use “floats”
or “bigdecimals” by default. That has potential.

Also of note is that 1.9.2 currently displays floats “in their gory
details” if they don’t match a known decimal value. There’s also this
ticket:
http://redmine.ruby-lang.org/issues/show/2152
to display floats’ “gory details” as well as in readable form.
Currently we only have “gory details” in 1.9.2 and “readability” in
1.9.1, so the ticket is to suggest we want both. Go there and +1 it if
you want it :)

Cheers.
[rp]

On Mon, Apr 19, 2010 at 1:24 AM, Christopher D. [email protected]
wrote:

gotcha!
on performance, where binary floating point, since it has more
prevalent hardware support, is likely to outperform in most cases –
it might be better to introduce a simple syntax for decimal floating
point literals, e.g. an integer or floating point expression with a
trailing (no intervening whitespace) "d", so that 1.0d would be
treated as a decimal floating point value.

I’m afraid that this idea of making BigDecimal the default is based on
a naive view that BigDecimal is some kind of panacea. It’s not. And
a fixed-length decimal float would be even worse. For example:

1) Just because BigDecimal has what seems like better behavior in
some cases compared to a binary float, it has just as many problems in
other cases.

(BigDecimal("1") / BigDecimal("3")).to_s
=> "0.333333333333333333333333333333333333E0"

Both binary and decimal floats have an infinite number of real
numbers which they can’t express exactly.

2) Rounding errors in floating point get bigger when the base is
bigger: http://docs.sun.com/source/806-3568/ncg_goldberg.html

Although the latter point is somewhat obviated by BigDecimal which
uses a variable length, variable length does nothing for the first
point, and the length of a BigDecimal is practically bounded, since it
has to be represented in the finite storage of the platform.

And the variable length is a major reason why the performance of
BigDecimal arithmetic is so inferior to other representations.
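
The gap is easy to measure yourself (an untuned sketch; exact numbers
vary by platform and Ruby version):

require 'benchmark'
require 'bigdecimal'

a, b = 1.2, 1.1
x, y = BigDecimal("1.2"), BigDecimal("1.1")
n = 1_000_000

Benchmark.bm(10) do |bm|
  bm.report("Float")      { n.times { a - b } }
  bm.report("BigDecimal") { n.times { x - y } }
end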

There’s no getting around the fact that different number
representations have different trade-offs, and in many cases any
programmer will need to make choices based on those trade-offs.


Rick DeNatale

Blog: http://talklikeaduck.denhaven2.com/
Github: http://github.com/rubyredrick
Twitter: @RickDeNatale
WWR: http://www.workingwithrails.com/person/9021-rick-denatale
LinkedIn: http://www.linkedin.com/in/rickdenatale

On Tue, May 11, 2010 at 8:11 AM, Rick DeNatale [email protected]
wrote:

(BigDecimal("1") / BigDecimal("3")).to_s
=> "0.333333333333333333333333333333333333E0"

An extended example which might make the point a bit better:

((BigDecimal("1") / BigDecimal("3")) * BigDecimal("3")).to_s
=> "0.999999999999999999999999999999999999E0"

Rick DeNatale


On Tue, May 11, 2010 at 11:09 AM, Roger P. [email protected]
wrote:

Good point.
Maybe a good default would be Rational then?

Rational(1,3)
=> (1/3)

No, there is no universally good default. The best representation
depends on the problem, that’s just something that programmers will
ultimately need to face.

An extended example which might make the point a bit better:
((BigDecimal("1") / BigDecimal("3")) * BigDecimal("3")).to_s
=> "0.999999999999999999999999999999999999E0"

That one seemed to work ok. Was your point that it is limited to 36
decimals by default?

So you don’t expect (1/3) * 3 to equal 1?

You can’t represent 1/3 exactly as a float in base 10, as long as you
have a finite number of digits, and you can’t have more digits than
you have memory to hold.


Rick DeNatale


=> "0.999999999999999999999999999999999999E0"

That one seemed to work ok. Was your point that it is limited to 36
decimals by default?

So you don’t expect (1/3) * 3 to equal 1?

Oh, ok. I was a bit dense there. Rational could do better in this
instance. Are there situations where Rational would not be a good
default, except for speed reasons? (Just asking theoretically, not
proposing it at all.)
Thanks!
-rp

1) Just because BigDecimal has what seems like better behavior in
some cases compared to a binary float, it has just as many problems in
other cases.

(BigDecimal("1") / BigDecimal("3")).to_s
=> "0.333333333333333333333333333333333333E0"

Both binary and decimal floats have an infinite number of real
numbers which they can’t express exactly.

Good point.
Maybe a good default would be Rational then?

Rational(1,3)
=> (1/3)
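
And Rational stays exact under the arithmetic that trips up both kinds
of float:

Rational(1, 3) * 3
=> (1/1)

Rational(1, 3) * 3 == 1
=> true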

An extended example which might make the point a bit better:
((BigDecimal("1") / BigDecimal("3")) * BigDecimal("3")).to_s
=> "0.999999999999999999999999999999999999E0"

That one seemed to work ok. Was your point that it is limited to 36
decimals by default?

Thanks!
-rp

On 5/11/10, Roger P. [email protected] wrote:

Good point.
Maybe a good default would be Rational then?

Rational(1,3)
=> (1/3)

Rational can’t represent irrationals or transcendentals exactly. More
to the point, it has the same problems as BigDecimal: it’s slow and
intermediate results can end up taking up lots of memory (unless
rounded).
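
The memory point is easy to demonstrate; repeated arithmetic can square
the denominator at every step (a small sketch):

r = Rational(1, 3)
10.times { r = r * r + 1 }   # numerator and denominator never share a factor here
r.denominator                # => 3**1024, about 489 decimal digits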

I do wish there was a syntax for literal BigDecimals, Rationals,
Imaginaries, and Complex numbers, tho.

An extended example which might make the point a bit better:
((BigDecimal("1") / BigDecimal("3")) * BigDecimal("3")).to_s
=> "0.999999999999999999999999999999999999E0"

That one seemed to work ok. Was your point that it is limited to 36
decimals by default?

The correct answer would have been “1.0”.

On Wed, May 12, 2010 at 12:05 AM, Rick DeNatale
[email protected] wrote:

Neither floats (regardless of base) nor rationals can represent all
real numbers. Each pair of consecutive float values has an infinite
number of reals which fall in the crack between them. Neither can
exactly represent irrational values including Pi, e, sqrt(2) etc. etc.

I hope you are not exaggerating ;)
everyone knows that pi or e or sqrt(2) or 1/3 cannot be expressed
exactly in decimal form.

the problem with current float representation is that it balks at
plain and simple addition and subtraction operations :(
can we not just fix that, at least?

For general purpose languages, a combination of integers (possibly
including arbitrary-length integers, as in Ruby, Smalltalk, etc.) and
binary floats is probably the best choice for ‘default’
representations, along with letting the programmer choose alternative
representations for particular uses based on informed trade-offs.

agree totally.


Rick DeNatale

kind regards -botp

On Tue, May 11, 2010 at 11:37 AM, Roger P. [email protected]
wrote:

proposing it at all)
Thanks!

Neither floats (regardless of base) nor rationals can represent all
real numbers. Each pair of consecutive float values has an infinite
number of reals which fall in the crack between them. Neither can
exactly represent irrational values including Pi, e, sqrt(2) etc. etc.

For general purpose languages, a combination of integers (possibly
including arbitrary-length integers, as in Ruby, Smalltalk, etc.) and
binary floats is probably the best choice for ‘default’
representations, along with letting the programmer choose alternative
representations for particular uses based on informed trade-offs.
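
For illustration, Ruby’s integer half of that combination already
behaves this way: integers silently promote to arbitrary precision,
while floats stay fixed-width:

2**100       # integer arithmetic promotes to Bignum automatically
=> 1267650600228229401496703205376

2.0**100     # Float stays fixed at 64 bits
=> 1.2676506002282294e+30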


Rick DeNatale


On Tue, May 11, 2010 at 8:05 PM, botp [email protected] wrote:

On Wed, May 12, 2010 at 12:05 AM, Rick DeNatale [email protected] wrote:

Neither floats (regardless of base) nor rationals can represent all
real numbers. Each pair of consecutive float values has an infinite
number of reals which fall in the crack between them. Neither can
exactly represent irrational values including Pi, e, sqrt(2) etc. etc.

I hope you are not exaggerating ;)
everyone knows that pi or e or sqrt(2) or 1/3 cannot be expressed
exactly in decimal form.

And the first three can’t be represented numerically exactly in a
finite space in any base.

the problem with current float representation is that it balks at
plain and simple addition and subtraction operations :(
can we not just fix that, at least?

If I knew how, I’d be on a private island somewhere.

Floats have been around for a long time; FORTRAN was born in 1954.
I’ve been programming for about 71% of FORTRAN’s life span, and been
keeping an eye on Computer Science since then. Floats will always be
floats, and have the properties of floats.

Computer floats are really just the same kind of representation used
on slide rules with more digits. The real use case for floats is to
represent numbers with a very wide range of magnitudes, from very
large to very small in a limited number of digits/bytes. This is also
the purpose of scientific notation. Slide rule operators generally
work with a maximum of 3 digits, and slide rules had enough precision
to serve the purpose of engineers for decades.

When you add 1.2E10 and 9.9E-2 and preserve only 3 digits you get
1.2E10. With more digits there’s still a point where a smaller addend
gets lost.
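
In Ruby’s 64-bit doubles the same absorption happens once the
magnitudes are about 16 decimal orders apart:

big   = 1.2e10
small = 1.0e-7

big + small == big
=> true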

The properties of float representations have been studied for over 50
years, and although they are well understood by those who have
actually studied them, and IEEE has standardized their behavior, they
will always hold surprises for those who casually expect them to always
get the ‘right’ results. It’s not that the results aren’t right, but
that they are sometimes surprising to a naive observer.


Rick DeNatale


to introduce a simple syntax for decimal floating point literals,
e.g. an integer or floating point expression with a trailing (no
intervening whitespace) "d", so that 1.0d would be treated as a
decimal floating point value.

Another option (suggested by Caleb C. offline) would be to add
methods to Float, like
#to_bd
#to_r
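
Float#to_r is already core in 1.9, and bigdecimal/util in the stdlib
adds something close to the other (whether the subtraction below yields
exactly "0.1" depends on how to_d renders the float; newer versions go
through its shortest decimal form):

require 'bigdecimal'
require 'bigdecimal/util'   # adds Float#to_d and friends

1.1.to_d.class
=> BigDecimal

(1.2.to_d - 1.1.to_d).to_s("F")
=> "0.1"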

That being said, I personally wouldn’t mind if Ruby defaulted to using
Rational and used floats only upon request… :slight_smile:

-rp

On Fri, May 7, 2010 at 5:40 PM, George N. [email protected]
wrote:

The decimal formats in 754-2008 are storage formats only … the
math specification is still binary.

I haven’t read the spec myself, but every published reference to it
I’ve seen says it specifies decimal floating point arithmetic as well
as storage formats (the latter being somewhat pointless without the
former.)

Rick DeNatale wrote:

For general purpose languages, a combination of integers (possibly
including arbitrary-length integers, as in Ruby, Smalltalk, etc.) and
binary floats is probably the best choice for ‘default’
representations, along with letting the programmer choose alternative
representations for particular uses based on informed trade-offs.

And to add to that point: most languages are based on int and float, so
if those are the default types in ruby, it is easier to port code (or
your brain) to and from ruby. Why add another gotcha to the list that
newbies have to learn when coming to ruby from perl, c, java, js, …?

Decimals have much higher precision and are usually used in financial
applications that require a high degree of accuracy. Decimals are much
slower (up to 20x in some tests) than a double/float. Decimals and
floats/doubles cannot be compared without a cast, whereas floats and
doubles can. Decimals also allow the encoding of trailing zeros.

Float - 7 digits (32-bit)

Double - 15-16 digits (64-bit)

Decimal - 28-29 significant digits (128-bit)

The main difference is that floats and doubles are binary floating
point types, while a decimal stores the value as a floating decimal
point type.
The main difference is Floats and Doubles are binary floating point types and a Decimal will store the value as a floating decimal point type. So Decimals have much higher precision and are usually used within monetary (financial) applications that require a high degree of accuracy. But in performance wise Decimals are slower than double and float types.