Python's current support for floating point is very poorly defined. In fact, all the language says is that it depends on what C does. C in turn says very little, which leaves the programmer with no practical way to reason about floating point in their program.
I believe this could be fixed, although not without significant effort, and not without (unavoidably) hurting floating point performance. First, though, I'd like to discuss NaNs.
NaN stands for “Not a Number.” It is used when we don't know how to represent the result of an operation, or rather when the format we're currently using can't represent the result. One common way to produce a NaN is to evaluate Infinity/Infinity. Anything finite divided by Infinity produces 0, and Infinity divided by anything finite produces Infinity, so Infinity/Infinity should produce both 0 and Infinity. Obviously it can't be both, so we say it produces a NaN instead. Or maybe it should raise an exception, but that's not what I want to discuss. I want to discuss how to handle a NaN once it is created.
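For concreteness, here is how this plays out in today's CPython (assuming a platform whose float() accepts “inf”); note that the NaN produced by Infinity/Infinity currently compares unequal even to itself:

    >>> inf = float("inf")
    >>> 1.0 / inf          # anything finite divided by Infinity gives 0
    0.0
    >>> nan = inf / inf    # Infinity/Infinity can't be represented: NaN
    >>> nan == nan         # today's behavior: the comparison is simply False
    False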
Most people “know” that IEEE Floating Point requires “any comparison involving a NaN must return False, even NaN==NaN.” However, from my digging on the web it seems this is not what IEEE requires at all! A post by DWCantrell[1] quotes:
Four mutually exclusive relations are possible: "less than," "equal," "greater than," and "unordered." The last case arises when at least one operand is NaN. Every NaN shall compare unordered with everything, including itself.
So what IEEE requires is that you treat NaN as “unordered.” In C the closest you can get is to treat all such comparisons as False, which is probably why people think returning False is what IEEE requires. But in Python we can do better, by raising an exception. Those who have a sane fallback could catch the exception, and the rest of us could rest easy knowing that our errors will be reported.
Unfortunately there's another problem. In Python, identity supersedes value: “a is b” automatically implies “a == b”. We may be able to treat NaN as “unordered” in some cases, but not all (notably those involving containers). But this raises the question: do we really want it to be “unordered” in all cases? We say “unordered” because we have no reasonable way to compare by value, and because IEEE Floating Point only deals with comparisons by value, “unordered” is the only answer it gives. However, basic math does use identity comparisons! “a = a” is obviously a true statement. So why should Python limit itself to value comparisons when the identity comparison is just as valid and provides us with more information? Thus, I believe Python should not only tolerate identity comparisons involving NaN, it should embrace them!
However, that means we need to preserve the identity of NaN objects. This is actually quite easy. We simply need to create a NaN type which cannot be interned (unlike other number types), and have floating point operations return it instead. All comparisons would raise an exception unless the NaN is being compared with itself. This should all be the default behavior for non-number types, which is not surprising since NaN is “not a number.”
As an aside, you may wonder whether this affects Numeric arrays. It does not. A copied NaN will not share identity with the original object, and thus will not compare equal to it. Numeric need only document that it always copies NaN objects.
One final note on NaNs. If identity is significant then you obviously don't want a global NaN constant. Instead I propose a makeNaN() function, which would create a unique object with every call.
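To make the idea concrete, here is a minimal sketch of what such a non-interned NaN type and the makeNaN() factory might look like. Only the name makeNaN() comes from the proposal above; the class body, the choice of ValueError, and the method details are illustrative assumptions, not a finished design:

    class NaN(object):
        """Sketch: a NaN type whose instances are never interned or shared."""
        def __eq__(self, other):
            if other is self:
                return True     # identity comparison still carries meaning
            raise ValueError("value comparison involving NaN")
        def __ne__(self, other):
            if other is self:
                return False
            raise ValueError("value comparison involving NaN")
        def __lt__(self, other):
            # NaN is unordered, so ordering comparisons always raise
            raise ValueError("ordering comparison involving NaN")
        __le__ = __gt__ = __ge__ = __lt__
        def __repr__(self):
            return "NaN"

    def makeNaN():
        """Create a unique NaN object; every call returns a distinct identity."""
        return NaN()

    a = makeNaN()
    b = makeNaN()
    assert a == a    # same identity, so the comparison is allowed
    # "a == b" would raise ValueError: distinct NaNs can't be compared by value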
Okay, back to floating point in general. What I believe we need to do is provide identical results on all platforms by default, with an option for faster computation when less stringent results are acceptable. Ideally there should also be a way to simulate those less stringent results even when they're not native to the platform, to allow testing and development for all platforms from a single computer.
The way I believe this should be done is using Context objects, similar (but not identical) to what decimal provides. A few notes:
- What should the size of the output be? I believe the simplest behavior is to have the output be exactly specified by the Context, and not be affected by the size of the inputs. This makes operations such as 1/somefloat(3) produce an obvious result. [Edit] It also means Infinity can be a singleton, maybe even a separate class with no concept of size.
- I believe Context objects should be created by a factory function and should be immutable. The reason is that altering a hardware context is actually quite expensive, and allowing attribute modification would be misleading. It's easier to swap two unchanging Context objects than to repeatedly recompute the state of a single changing Context object. Additionally, having Contexts be immutable means there is less exposed when the user wants to create their own Context class (to produce intervals, for instance).
- There should be sys.getfloatcontext() and sys._setfloatcontext() functions, providing access to per-thread contexts. Note that sys._setfloatcontext() has a leading underscore, indicating that it is private: only those hacking on the implementation are expected to use it, not those wanting to change the default context (use a private context instead).
- How to implement this all is probably the biggest issue. The best solution I have is to use LLVM to access the hardware in a portable way while circumventing C entirely. This has the added advantage that it could be reused by the PyPy project.
- The most common Contexts would be IEEE-like 32-bit, 64-bit, and 80-bit sizes. They would provide exact results for all operations and functions (even for sqrt() and the transcendental functions!).
- If exact results are not necessary then we could either provide separately named functions such as loosesqrt(), or provide a loose context where all functions are loose. We would then have to decide whether the functions are documented to provide results within a certain range, or whether they're defined to provide any result at all (random()?).
- A loose context may be permitted to arbitrarily pick the size of the objects it outputs, so long as those objects are stable once created. For instance, “c.divide(1, 3) == c.divide(1, 3)” may return one object in 32 bits and the other in 80 bits, thus causing the comparison to return False.
- [Edit] It may not be obvious, so I'll say it explicitly: math.sqrt() would become a wrapper for sys.getfloatcontext().sqrt() (see the sketch after this list).
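To tie these notes together, here is a rough, purely hypothetical sketch of the proposed API. None of this exists in the standard library; the names Context, getfloatcontext(), and _setfloatcontext() come from the proposal above, while the factory name makeContext(), the constructor arguments, the threading.local storage, and the fallback to the platform's 64-bit arithmetic are my own illustrative assumptions:

    import math
    import threading

    class Context(object):
        """Immutable context: the size of every result is fixed at creation."""
        def __init__(self, bits, loose=False):
            self._bits = bits      # e.g. 32, 64, or 80
            self._loose = loose    # loose contexts may relax exactness

        @property
        def bits(self):
            return self._bits

        def sqrt(self, x):
            # A real implementation would round exactly to self._bits
            # (via LLVM or similar); this sketch just defers to the
            # platform's 64-bit sqrt.
            return math.sqrt(x)

        def divide(self, x, y):
            return x / y           # likewise a stand-in for exact division

    def makeContext(bits, loose=False):
        """Factory for immutable Context objects (hypothetical name)."""
        return Context(bits, loose)

    _threadstate = threading.local()

    def getfloatcontext():
        """Stand-in for sys.getfloatcontext(): the per-thread context."""
        if not hasattr(_threadstate, "context"):
            _threadstate.context = makeContext(64)   # default IEEE-like context
        return _threadstate.context

    def _setfloatcontext(context):
        """Stand-in for sys._setfloatcontext(): swap in a new immutable context."""
        _threadstate.context = context

    # math.sqrt() would then reduce to a thin wrapper:
    def sqrt(x):
        return getfloatcontext().sqrt(x)

Because Contexts are immutable, changing behavior means swapping in a whole new object rather than mutating the current one, which matches the factory-plus-immutability design described above.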
[1] http://groups.google.ca/group/sci.math.num-analysis/msg/912e7246f03b4185?hl=en