The point is to illustrate why you might want to use BigDecimal (i.e.
so that 3.2 - 2.0 would in fact equal 1.2).
require 'bigdecimal'
x = BigDecimal("3.2")
y = BigDecimal("2.0")
z = BigDecimal("1.2")
p x - y == z ? "equal" : "not equal" # prints "equal"
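For contrast, here is what the same subtraction does with plain Floats
(no BigDecimal); the digits shown are what a typical Ruby build with
IEEE 754 doubles prints:
p 3.2 - 2.0        # prints something like 1.2000000000000002
p 3.2 - 2.0 == 1.2 # prints false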
I’m fairly new to Ruby and don’t do much programming, but when I saw
this example I was surprised that the default behavior is that
3.2 - 2.0 != 1.2.
To me, this violates the “Principal of least surprise”, but I guess it
isn’t a big deal because I don’t remember it being discussed in the
Programming Ruby book (but it certainly may have been).
Do other languages work this way?
Yep. This is pretty standard.
This article is tolerable, but rambles a bit:
Floating-point arithmetic - Wikipedia
There was a personal computer that had a language in ROM
that used binary-coded-decimal floating point.
Using that language, 3.2 - 2.0 yielded precisely 1.2.
I’m fairly new to Ruby and don’t do much programming, but when I saw
this example I was surprised that the default behavior is that
3.2 - 2.0 != 1.2.
To me, this violates the “Principal of least surprise”,
Do you mean the “principle of least surprise”? One word means primary,
the other means standard or canon. They sound the same, but that’s all
they have in common.
but I guess it
isn’t a big deal because I don’t remember it being discussed in the
Programming Ruby book (but it certainly may have been).
Do other languages work this way?
All of them that use binary internal storage, yes. A decimal value like
1.2 is a repeating fraction in binary and cannot be represented exactly
in a finite number of binary places. Sort of like 1/3 in either binary
or decimal.
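One quick way to see what is actually stored is to ask the Float for its
exact value (a small sketch; Float#to_r needs Ruby 1.9 or later, and the
digits shown assume IEEE 754 doubles):
p 1.2.to_r      # => (5404319552844595/4503599627370496), denominator is 2**52
p "%.20f" % 1.2 # => "1.19999999999999995559"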