Microrant on Ruby’s Math Skills

Typed by clumsy thumbs on tiny keys.

On 24 Jan 2012, at 02:28 AM, Chad P. [email protected] wrote:

On Tue, Jan 24, 2012 at 10:23:36AM +0900, Ryan D. wrote:

It’d be nice if you read the thread before chipping in. Adam P.
nailed this one. Here’s another example:

I don’t get it. I’ve been polite and reasonable – and evidently
overlooked one fucking statement.

massively irrelevant reply…

Yep, that Ryan D. was a massive prick, there. You called it! I’m not
even on the list any more: I’m just thumbing through this old thread I
bumped into in my inbox and “Whazam!” there’re two pricky posts by the
same “Ryan D.” author. Anyway, I’m going to go and eat soup. With my
son. How cool is that!?

Have fun storming the castle!

On Tue, Jan 24, 2012 at 12:58 PM, Florian G.
[email protected]wrote:

unacceptable for
me (especially, because float is the correct datatype there).
Changing such a primitive in a language is reckless.

Ha, just discovering the ‘to_d’ function in ‘bigdecimal/util’

It’s actually very close to what I wanted (15.78D):

$ irb
1.9.3p0 :001 > require 'bigdecimal'
=> true
1.9.3p0 :002 > require 'bigdecimal/util'
=> true
1.9.3p0 :003 > a = "15.78".to_d
=> #<BigDecimal:8b2d9e8,'0.1578E2',18(18)>
1.9.3p0 :004 > b = 2.to_d
=> #<BigDecimal:8b2a810,'0.2E1',9(36)>
1.9.3p0 :005 > a/b == 7.89.to_d
=> true
1.9.3p0 :006 > 1.234567890123.to_d
=> #<BigDecimal:8ba49bc,'0.1234567890 123E1',27(45)>

I’m all set here :-)

Peter

On Jan 24, 2012, at 9:01 AM, Peter V. wrote:

Ha, just discovering the ‘to_d’ function in ‘bigdecimal/util’
1.9.3p0 :005 > a/b == 7.89.to_d
=> true
1.9.3p0 :006 > 1.234567890123.to_d

Be careful with these last two. The Float => BigDecimal
conversion that is happening with to_d is doing some rounding:

("%.60f" % 7.89) #=>
“7.889999999999999680255768907954916357994079589843750000000000”
("%.60f" % 7.89).to_d == “7.89”.to_d #=> false

Gary W.

On 23.01.2012 11:33, Peter V. wrote:

Then we could also answer future questions about

“How come 1.1 - 1.0 != 0.1 ??”

with

“1.1B - 1B == 0.1B”

I’m curious to see what’s gonna happen once Desktop CPUs start to
support the nice decimal floating-point types of IEEE 754-2008
[IEEE 754-2008 revision - Wikipedia] … maybe Ruby
should already prepare syntactically for support of these types. And, in
theory, there is already a library from Intel
[Intel® Decimal Floating-Point Math Library]
one can use to do decimal floating-point math.

– Matthias

2012/1/25 Matthias W. [email protected]:

I’m curious to see what’s gonna happen once Desktop CPUs start to support
the nice decimal floating-point types of IEEE 754-2008
[IEEE 754-2008 revision - Wikipedia] … maybe Ruby
should already prepare syntactically for support of these types. And, in
theory, there is already a library from Intel

[Intel® Decimal Floating-Point Math Library]

one can use to do decimal floating-point math.

Wow, thanks for the update. I hope Ruby can indeed support that even
if I need to recompile manually.

-botp

Aren’t statistical calculations, where the current behavior would be
desired, a special rather than a general case, though? If this
particular gotcha is coming up every six months, and the set of tasks
that would benefit from the current behavior is smaller than the set
harmed by it, shouldn’t the default behavior be adjusted to account
for this?

On Wed, Jan 25, 2012 at 4:03 PM, Kevin [email protected] wrote:

Aren’t statistical calculations, where the current behavior would be
desired, a special rather than a general case, though?

It’s not only statistical calculations. Just think of statics
calculations, for example.

If this particular gotcha is coming up every six months, and the set
of tasks that would benefit from the current behavior is smaller than
the set harmed by it, shouldn’t the default behavior be adjusted to
account for this?

Math is such a fundamental thing that you do not want to risk breaking
anything at all by changing this. The standard is set, and other tools
are available which are only mildly awkward. So, as you say, we could
change it if the cost of keeping the old behavior were higher than the
cost of change. But the cost of keeping things as they are is actually
much lower than the cost of change (which unfortunately seems to be
constantly underestimated in SE); it’s merely the effort for everybody
new to the domain of numeric mathematics to learn its fundamental rules
(plus our explanations, and the little additional typing for BigDecimal
usage). And the good thing is: this knowledge is portable to other
languages as well. Compared to that, even the cost of estimating what
the change would break is dramatic.

Cheers

robert

“Standard is better than better.” -Anon.

Also, you don’t need a “to_f” to reproduce this – just enter “0.5 -
0.4 - 0.1” into any Ruby, Python or JavaScript interpreter – or a C
or Java program, for that matter – to get a perfectly consistent, and
consistently surprising, very very small negative number that is not
quite zero.
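
For instance, in irb (the tiny residue below is the standard IEEE 754
double result):

$ irb
1.9.3p0 :001 > 0.5 - 0.4 - 0.1
=> -2.7755575615628914e-17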

So how to test around this in unit tests? In RSpec, use be_within (née
be_close) [1]; in Wrong (which works inside many test frameworks), use
close_to? [2]

  • A

[1]

[2] wrong/lib/wrong/close_to.rb at master · sconover/wrong · GitHub
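
In practice that looks something like this (be_within is RSpec's stock
matcher; the Wrong call follows the form used later in this thread, with
the tolerance left to the library's default):

# RSpec (2.x-era "should" syntax)
(0.5 - 0.4).should be_within(1e-9).of(0.1)

# Wrong
include Wrong
assert { (0.5 - 0.4).close_to? 0.1 }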

On Wed, Jan 25, 2012 at 6:05 AM, Alex C. [email protected] wrote:

So how to test around this in unit tests? In RSpec, use be_within (née
be_close) [1]; in Wrong (which works inside many test frameworks), use
close_to? [2]

I’m really impressed with that Wrong library; well done!

Just throwing this out there: judging float equality based on
difference is incorrect. You should use ratio, or perhaps in some
cases a combination of both.

For instance, if my standard test library defines a default tolerance
of 0.00001, that seems pretty good, right? OK, but what if the floats
I’m testing are actually really close to zero?

assert { x.close_to? 0.0000000135 }

Well… x could have the value 0.0000000134999999999999994243561, in
that crazy way that floats behave. That’s clearly “close enough” to
the intended value, but it will fail the test because the default
tolerance is inappropriate for this case.

Of course, I can set a different tolerance for a given test, but the
deeper problem is this: numerate people use ratio instead of
difference to judge the proximity of one number to another, and that’s
how we should implement tests for float pseudo-equality. You
shouldn’t need any parameter then; the implementation should work no
matter the scale of the floats involved.

Discuss.
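
A minimal sketch of the ratio idea (the method name and tolerance are
mine, purely illustrative):

def close_by_ratio?(a, b, epsilon = 1e-9)
  (a / b - 1).abs <= epsilon
end

close_by_ratio?(0.0000000134999999999999994, 0.0000000135)  # => true, regardless of scale
close_by_ratio?(1.1, 1.0)                                   # => false: a genuine 10% difference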

On Tue, Jan 24, 2012 at 11:15 PM, Gavin S. [email protected]
wrote:

On Wed, Jan 25, 2012 at 6:05 AM, Alex C. [email protected] wrote:

So how to test around this in unit tests? In RSpec, use be_within (née
be_close) [1]; in Wrong (which works inside many test frameworks), use
close_to? [2]

I’m really impressed with that Wrong library; well done!

Glad you like it! It would work even better if MRI attached ASTs
and/or source code to procs/lambdas/methods, rather than merely
source_location. I’m thinking of logging a bug or two about that.

Just throwing this out there: judging float equality based on
difference is incorrect. You should use ratio, or perhaps in some
cases a combination of both.

Fascinating! So if we used division-and-proximity-to-1 instead of
subtraction-and-proximity-to-0 then we could possibly do away with the
tolerance parameter altogether… or at least redefine it.

Of course, your argument reverses itself when dealing with very large
numbers! Let’s say I’m dealing with Time. If I say time a should be
close to time b, then I probably want the same default precision (say,
10 seconds) no matter when I’m performing the test, but using ratios
will give me quite different tolerances depending on whether my
baseline is epoch (0 = 1/1/1970) or Time.now.

now = Time.now.to_i.to_f; (now/(now+10))
=> 0.9999999924671322
now = 1.to_f; (now/(now+10))
=> 0.09090909090909091

In any case… I will be happy to review your patch! :-)

  • A

Generally, you should use ratio instead of difference when comparing
floats, but in many cases - such as the time one provided by Alex - a
difference is obviously a better idea.

There is no silver bullet, just use whatever works for you in a
particular case.

– Matma R.

I have tried this, but recently discovered the same issues arise.

To review, instead of:

(b-d) <= a && (b+d) >= a

We use ratios:

(a / b - 1).abs <= d

But try 1.1, 1.0 and d=0.1

(1.1 / 1.0 - 1).abs <= 0.1

and it is false though it should be true, because

(1.1 / 1.0 - 1).abs #=> 0.10000000000000009

Which is exactly what instigated my micro-rant.

Any advice?

On 01/24/2012 11:15 PM, Gavin S. wrote:

It does and it’s broken! LOL!

2012/1/26 Bartosz Dziewoński [email protected]:

Generally, you should use ratio instead of difference when comparing
floats, but in many cases - such as the time one provided by Alex - a
difference is obviously a better idea.

There is no silver bullet, just use whatever works for you in a particular case.

I doubt that difference is a better idea “in many cases” and I don’t
think the time example is even valid, though I could be wrong. Got
any other examples?

As for no silver bullet, the problem of comparing floats arises from
engineering and should be solved by engineering, not by whatever works
in a particular case. So while I don’t have a silver bullet, my gut
feeling says there is one.

I’m sure in particular programs, near enough is good enough. If Bob’s
program thinks 39.5 and 39.47 are close enough to be called “equal”,
then that’s a test Bob needs to implement himself, in both his program
and his tests. It has nothing whatsoever to do with “float equality”
in a standard unit testing sense.

On Thu, Jan 26, 2012 at 6:31 AM, Joel VanderWerf
[email protected] wrote:

I’m really impressed with that Wrong library; well done!

Agreed. I’m only using it on a few experimental projects, so far.

Anyone else using Alex’s Wrong?

Having spent untold hours creating my own testing library
(whitestone), I’m afraid I am only admiring Wrong from a distance.

Credit where it’s due, of course: I admired assert2.0 from a distance as
well :-)

On Thu, Jan 26, 2012 at 5:53 AM, Intransition [email protected]
wrote:

Which is exactly what instigated my micro-rant. Any advice?

This isn’t a problem. The aim here is to test float equality. The
test should return true iff a human would look at the two floats and
say “yep, they’re meant to be the same thing, it’s just the electrical
engineering that got in the way”.

1.0 and 1.1 are not float-equal in this sense – not even close – and
0.1 is a ridiculous tolerance for testing float ratio. I’m sure you
were just experimenting, but what I’m saying is: your counterexample
doesn’t undermine the overall approach.
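
To make that concrete: the pair that matters here is (1.1 - 1.0) versus
0.1, and that pair sails through a ratio test (values are the standard
IEEE double results):

((1.1 - 1.0) / 0.1 - 1).abs   # => ~8.9e-16, well inside any sane tolerance
(1.1 / 1.0 - 1).abs           # => 0.10000000000000009, a genuine 10% difference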

On Thu, Jan 26, 2012 at 5:29 AM, Alex C. [email protected] wrote:

Of course, your argument reverses itself when dealing with very large
numbers!

I don’t think so. Floats are implemented using a coefficient and an
exponent. So are the following two floats essentially equal?

A: 6.30912402 E 59
B: 6.30912401999999999999 E 59

I’d say yes. What about these two?

A: 6.30912402 E 59
B: 6.30912401999999999999 E 58

Of course not! There is an order-of-magnitude difference. So perhaps
a unit testing float comparison should work like this (pseudo-code):

def float_equal?(a, b)
  c1, m1 = coefficient(a), exponent(a)
  c2, m2 = coefficient(b), exponent(b)
  m1 == m2 and (c1/c2 - 1).abs < 0.000000000001
end

If you take the magnitude away, then dealing with very large numbers
shouldn’t be a problem.
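
For what it’s worth, here is a runnable approximation of that
pseudo-code using Math.frexp, which splits a float into a binary
fraction and exponent. One caveat for both versions: two nearly-equal
floats that straddle a power of two (or ten, in the decimal version),
such as 1.9999999999 and 2.0, land in different exponents and would
wrongly fail the m1 == m2 test.

def float_equal?(a, b, epsilon = 1e-12)
  c1, m1 = Math.frexp(a)   # a == c1 * 2**m1, with c1 in [0.5, 1)
  c2, m2 = Math.frexp(b)
  m1 == m2 and (c1 / c2 - 1).abs < epsilon
end

float_equal?(1.1 - 1.0, 0.1)   # => true
float_equal?(1.1, 1.0)         # => false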

Let’s say I’m dealing with Time. If I say time a should be
close to time b, then I probably want the same default precision (say,
10 seconds) no matter when I’m performing the test, but using ratios
will give me quite different tolerances depending on whether my
baseline is epoch (0 = 1/1/1970) or Time.now.

If your application or library wants to know if two times are within 10
seconds of each other, then that’s a property of your code and has
nothing to do with float implementations. In other words, to compare
Time objects, use Time objects, not Float objects :-)
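
In Ruby that is a one-liner, since subtracting two Time objects yields
the difference in seconds as a Float (time_a and time_b are hypothetical
names):

(time_a - time_b).abs < 10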

In any case… I will be happy to review your patch! :-)

Hard to offer a patch to code I don’t even have installed :), but here
is an excerpt from my implementation. See whitestone/lib/whitestone/assertion_classes.rb at master · gsinclair/whitestone · GitHub, line
274, for context [1].

  def run
    if @actual.zero? or @expected.zero?
      # There's no scale, so we can only go on difference.
      (@actual - @expected).abs < @epsilon
    else
      # We go by ratio. The ratio of two equal numbers is one, so the ratio
      # of two practically-equal floats will be very nearly one.
      @ratio = (@actual/@expected - 1).abs
      @ratio < @epsilon
    end
  end

The problem with this is that it’s using @epsilon for two different
purposes: a “difference” epsilon and a “ratio” epsilon. That is
clearly wrong, but I just implemented something that would work for
me. I figured there must be a best-practice approach out there
somewhere that I could learn from. I firmly believe this problem
should be solved once and for all, that it won’t be by testing
difference, and that there should be a value for epsilon that is
justified by the engineering. [2]

I also believe the built-in Float class should provide methods to
assist us. It gives us inaccuracy, so it should give us the tools to
deal with it.

class Float
  def essentially_equal_to?(other)
    # Best-practice implementation here, with a scientifically valid
    # value for epsilon.
  end

  def within_delta_of?(other, delta)
    (self - other).abs < delta
    # No default value for delta because it is entirely
    # context-dependent. This is a convenience method only.
  end
end

a = 1.1 - 1.0
a.essentially_equal_to?(0.1) # true

4.7.within_delta_of?(4.9251, 0.2) # false

[1] Full link for posterity:

[2] While “one epsilon to rule them all” is appealing, the problem is
that the errors inherent in float representation get magnified by
computation. However, even raising two “essentially equal” floats to
the power of 50 doesn’t change their essential equality, assuming a
ratio of 1e-10 is good enough:

a = 0.1 # 0.1
b = 1.1 - 1.0 # 0.10000000000000009

xa = a ** 50 # 1.0000000000000027e-50
xb = b ** 50 # 1.0000000000000444e-50

proximity_ratio = (xa/xb - 1).abs
# 4.163336342344337e-14

proximity_ratio < 1e-10
# true

By the way, the proximity_ratio for the original a and b was
7.77156e-16, so I hastily conclude:

  • The engineering compromises in the representation of floats give
    us a proximity ratio of around 1e-15 (7.77156e-16 above).
  • Raising to an enormous power changes the proximity ratio to around
    1e-13 (4.163e-14 above).
  • A reasonable value for epsilon might therefore be 1e-12.

I expect this conclusion might depend on my choice of values for a and
b, though.

If you made it this far, congratulations.

Seriously? You blame the example? Reminds me of that old joke.

Patient: “Doctor, it hurts when I do this.”
Doctor: “Well, don’t do that!”

The problem is you’re looking at it only as an engineer might, dealing
with very small deltas. But as an implementer of such a method as
#close_to? or #approx?, I would never assume there are no valid
applications that are concerned with bulkier scales.

On Wed, Jan 25, 2012 at 7:36 PM, Gavin S. [email protected]
wrote:

Which is exactly what instigated my micro-rant. Any advice?

This isn’t a problem. The aim here is to test float equality. The
test should return true iff a human would look at the two floats and
say “yep, they’re meant to be the same thing, it’s just the electrical
engineering that got in the way”.

Yes, this is exactly the thing to care about. Thank you.

1.0 and 1.1 are not float-equal in this sense – not even close – and
0.1 is a ridiculous tolerance for testing float ratio. I’m sure you
were just experimenting, but what I’m saying is: your counterexample
doesn’t undermine the overall approach.

And yes, 0.1 is a ridiculous tolerance and this, to be blunt, is not so
much a “counter” example as a “dumb” one.