Bug in % (Float)?

On 01/09/07, Morton G. [email protected] wrote:

On Sep 1, 2007, at 7:50 AM, Calamitas wrote:
You may be right, but I am not convinced. That is, I believe there
might be a way to order the calculation so that in general the errors
cancel rather than reinforce.

Sure. That’s what every numerical algorithm designer tries to do. But
it is far from easy to do in general, and I am not at all sure that it
is really possible to have the errors cancel in every case. But even
if it is possible, figuring out what the best order is will likely be
very expensive to do programmatically and thus not suited for a
general-purpose language.

BTW, I just found an example where your m_mod_n function still gives
the “wrong” result.

1.0 / 3125.0 == 0.00032 # => true
m_mod_n(1.0, 0.00032) # => 0.000319999999999876

In this case, 1.0 / 0.00032 is not exactly 3125.0, i.e., / does not
counter the approximation of 0.00032 anymore.

The question you ask Ruby is what 1.0 % 0.1 is. That’s near a
discontinuity in %. You know that there is an error in representing
0.1, and it can go either side of the discontinuity. As it turns out,
it goes on the wrong side of the discontinuity as far as you are
concerned, but from the point of view of Ruby, that side has the
closest floating point number.
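
[Editor's note: a minimal Ruby sketch, not from the thread, to make the discontinuity concrete. It assumes standard IEEE 754 doubles, which is what Ruby floats are: the double nearest 0.1 is slightly larger than 1/10, so % lands just short of a full multiple, while division rounds the quotient back to exactly 10.0 and hides the error.]

```ruby
# The double nearest 0.1 is slightly LARGER than 1/10, so the exact
# remainder of 1.0 by that double falls just short of a full multiple:
puts 1.0 % 0.1          # => 0.09999999999999995 (not 0.0)

# Yet division rounds the quotient back to exactly 10.0, hiding the error:
puts 1.0 / 0.1          # => 10.0
puts 1.0 / 0.1 == 10.0  # => true
```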

The question is: can Ruby’s POV be changed to agree with mine? :-) I
understand that you are arguing that it can’t.

You can change Ruby’s POV, sure. It depends on what you want. The best
result for every atomic operation, or the best result overall. The
latter sounds ideal, but I don’t believe it can be done in general.
(If you can do that, I’m pretty sure you’ll be famous.) The former is
something that can be done and that I believe modern processors’
floating point units do or try to do. It does require the programmer
to think about error propagation, and that’s not easy, but unless you
can obtain the ideal of the best overall result, the programmer still
needs to think about error propagation, which may be harder because
the more magic that happens the more unpredictable things become.
Also, it will be tempting to not think about error propagation
and the software could run correctly for a long while, pass all
tests, but then break in the most unexpected and horrible way.

On Sep 2, 2007, at 7:32 AM, Calamitas wrote:

very expensive to do programmatically and thus not suited for a
general-purpose language.

BTW, I just found an example where your m_mod_n function still gives
the “wrong” result.

1.0 / 3125.0 == 0.00032 # => true
m_mod_n(1.0, 0.00032) # => 0.000319999999999876

In this case, 1.0 / 0.00032 is not exactly 3125.0, i.e., / does not
counter the approximation of 0.00032 anymore.

I’m not too surprised. It was bound to break for some sufficiently
small n; the discontinuities become denser as n → 0. My horror over
Ruby’s (actually, I suspect, the underlying C math library’s) result
with n = 0.1 is that 0.1 is so large. Even 0.00032 is a rather larger
n than I would like.

You can change Ruby’s POV, sure. It depends on what you want. The best
result for every atomic operation, or the best result overall. The
latter sounds ideal, but I don’t believe it can be done in general.
(If you can do that, I’m pretty sure you’ll be famous.)

Yeah. But someday when you tell your grandchildren about the eminent
mathematicians you once knew, my name isn’t going to come up :-)

The former is
something that can be done and that I believe modern processors’
floating point units do or try to do. It does require the programmer
to think about error propagation, and that’s not easy, but unless you
can obtain the ideal of the best overall result, the programmer still
needs to think about error propagation, which may be harder because
the more magic that happens the more unpredictable things become.
Also, it will be tempting to not think about error propagation
and the software could run correctly for a long while, pass all
tests, but then break in the most unexpected and horrible way.
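
[Editor's note: a classic instance of the silent error accumulation described above, added as an illustration rather than quoted from the thread. Summing 0.1 ten times with plain addition does not give exactly 1.0, even though each step looks harmless.]

```ruby
# Naive repeated addition accumulates the representation error in 0.1:
total = 0.0
10.times { total += 0.1 }
puts total          # => 0.9999999999999999
puts total == 1.0   # => false

# A test written as `total == 1.0` would pass with integers or exact
# decimals but fails here, in a way that is easy to miss until it matters.
```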

All too true. That’s why I use Mathematica rather than a
general-purpose language such as Ruby for serious numerical work. Because
there are things like Pisot numbers lurking in the mathematical
bushes, there will always be seemingly innocuous computations that go
sour when IEEE floats are used. I never meant to imply that I thought
otherwise. I am just wondering if there isn’t a better way to do %
with IEEE floats.

Regards, Morton

On 02/09/07, Morton G. [email protected] wrote:

On Sep 2, 2007, at 7:32 AM, Calamitas wrote:
I’m not too surprised. It was bound to break for some sufficiently
small n; the discontinuities become denser as n → 0. My horror over
Ruby’s (actually, I suspect, the underlying C math library’s) result
with n = 0.1 is that 0.1 is so large. Even 0.00032 is a rather larger
n than I would like.

Well, the smallest n for which m_mod_n(1.0, 1.0 / n) is way off is n =
93. If it’s any consolation though, among the first million n, there
are only 78161 for which m_mod_n doesn’t give something zeroish. There
are 497886 such n for which Ruby’s built-in % is off.
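
[Editor's note: m_mod_n itself is not shown in the thread, so this sketch can only reproduce the built-in % side of the tally, and only over the first thousand n rather than a million. The "zeroish" tolerance is an assumption; different thresholds give different totals.]

```ruby
# Count n in 1..1000 for which the built-in % result is not "zeroish",
# i.e. 1.0 % (1.0 / n) is not within TOL of 0.0. Mathematically the
# remainder of 1.0 by 1/n "should" be 0 for every integer n; a result
# near 1/n (just short of a full multiple) therefore counts as off.
TOL = 1e-9

off = (1..1000).count do |n|
  (1.0 % (1.0 / n)) > TOL
end

puts off  # prints the number of "off" n among the first thousand
```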

You can change Ruby’s POV, sure. It depends on what you want. The best
result for every atomic operation, or the best result overall. The
latter sounds ideal, but I don’t believe it can be done in general.
(If you can do that, I’m pretty sure you’ll be famous.)

Yeah. But someday when you tell your grandchildren about the eminent
mathematicians you once knew, my name isn’t going to come up :-)

Dang. I don’t know any eminent mathematicians, and I thought now
here’s my chance… Yeah well ;-)

All too true. That’s why I use Mathematica rather than a
general-purpose language such as Ruby for serious numerical work. Because
there are things like Pisot numbers lurking in the mathematical
bushes, there will always be seemingly innocuous computations that go
sour when IEEE floats are used. I never meant to imply that I thought
otherwise. I am just wondering if there isn’t a better way to do %
with IEEE floats.

That all depends on what ‘better’ means I guess.

Will you let me know what Wolfram Research answers?

Regards,
Peter

On Sep 2, 2007, at 7:57 PM, Calamitas wrote:

are only 78161 for which m_mod_n doesn’t give something zeroish. There
are 497886 such n for which Ruby’s built-in % is off.

That is a real consolation :-)

Seriously, to me, this is the most convincing argument you’ve come up
with in support of your position that finding a better method for
computing % with floats is a lost cause.

Will you let me know what Wolfram Research answers?

Certainly. I’ll pass it on directly to your gmail account. It may be
some time before I get an answer since they are a bit swamped right
now with problems arising from their new Mathematica 6 release, and
my query is a minor side-issue from their POV.

Regards, Morton

On 03/09/07, Morton G. [email protected] wrote:

computing % with floats is a lost cause.
In theory, I think it’s possible to do % in a way that ensures that
1.0 % (1.0 / n) equals 0.0 or something close to it. This could be
done by shifting the discontinuity slightly, but doing so introduces
a bias that may be detrimental to other applications.
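
[Editor's note: one way to sketch that "shifted discontinuity" in Ruby. fuzzy_mod and its eps tolerance are hypothetical names invented for this illustration, not code from the thread: snap the quotient to the nearest integer when it lands within a small relative tolerance, otherwise fall back to the built-in %.]

```ruby
# Hypothetical sketch: treat m as an exact multiple of n whenever the
# quotient m / n lands within eps (relative) of an integer; otherwise
# defer to the built-in %. This shifts the discontinuity slightly,
# which fixes cases like 1.0 % 0.1 but, as noted above, biases the
# result toward 0.0 near exact multiples.
def fuzzy_mod(m, n, eps = 1e-9)
  q = m / n
  if (q - q.round).abs <= eps * [1.0, q.abs].max
    0.0    # snapped: call it an exact multiple
  else
    m % n
  end
end

puts fuzzy_mod(1.0, 0.1)      # => 0.0  (built-in % gives 0.09999999999999995)
puts fuzzy_mod(1.0, 0.00032)  # => 0.0  (the case from earlier in the thread)
puts fuzzy_mod(7.5, 2.0)      # => 1.5  (far from a multiple: same as %)
```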

Will you let me know what Wolfram Research answers?

Certainly. I’ll pass it on directly to your gmail account. It may be
some time before I get an answer since they are a bit swamped right
now with problems arising from their new Mathematica 6 release, and
my query is a minor side-issue from their POV.

Ok, thanks.

Regards,
Peter