@@ -127,7 +127,11 @@ with inexact values become comparable to one another::
Binary floating-point arithmetic holds many surprises like this. The problem
with "0.1" is explained in precise detail below, in the "Representation Error"
- section. See `The Perils of Floating Point <https://www.lahey.com/float.htm>`_
+ section. See `Examples of Floating Point Problems
+ <https://jvns.ca/blog/2023/01/13/examples-of-floating-point-problems/>`_ for
+ a pleasant summary of how binary floating-point works and the kinds of
+ problems commonly encountered in practice. Also see
+ `The Perils of Floating Point <https://www.lahey.com/float.htm>`_
for a more complete account of other common surprises.
As that says near the end, "there are no easy answers." Still, don't be unduly
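A quick doctest, offered as an illustrative aside rather than as part of the patch, showing the kind of surprise the paragraph above is talking about and how post-rounding makes inexact results comparable::

   >>> 0.1 + 0.2 == 0.3        # the stored doubles are only close to the written decimals
   False
   >>> round(0.1 + 0.2, 10) == round(0.3, 10)
   True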
@@ -151,7 +155,7 @@ Another form of exact arithmetic is supported by the :mod:`fractions` module
which implements arithmetic based on rational numbers (so the numbers like
1/3 can be represented exactly).
- If you are a heavy user of floating point operations you should take a look
+ If you are a heavy user of floating-point operations you should take a look
at the NumPy package and many other packages for mathematical and
statistical operations supplied by the SciPy project. See <https://scipy.org>.
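As an illustrative aside (not part of the patch), a minimal sketch of the exact rational arithmetic that the :mod:`fractions` module mentioned above provides::

   >>> from fractions import Fraction
   >>> Fraction(1, 3) + Fraction(1, 6)     # 1/3 is stored exactly, so the result is exact
   Fraction(1, 2)
   >>> sum([Fraction(1, 10)] * 10) == 1    # compare: sum([0.1] * 10) == 1.0 is False
   True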
@@ -211,12 +215,14 @@ decimal fractions cannot be represented exactly as binary (base 2) fractions.
This is the chief reason why Python (or Perl, C, C++, Java, Fortran, and many
others) often won't display the exact decimal number you expect.
- Why is that? 1/10 is not exactly representable as a binary fraction. Almost all
- machines today (November 2000) use IEEE-754 floating point arithmetic, and
- almost all platforms map Python floats to IEEE-754 "double precision". 754
- doubles contain 53 bits of precision, so on input the computer strives to
- convert 0.1 to the closest fraction it can of the form *J*/2**\ *N* where *J* is
- an integer containing exactly 53 bits. Rewriting ::
+ Why is that? 1/10 is not exactly representable as a binary fraction. Since at
+ least 2000, almost all machines use IEEE 754 binary floating-point arithmetic,
+ and almost all platforms map Python floats to IEEE 754 binary64 "double
+ precision" values. IEEE 754 binary64 values contain 53 bits of precision, so
+ on input the computer strives to convert 0.1 to the closest fraction it can of
+ the form *J*/2**\ *N* where *J* is an integer containing exactly 53 bits.
+ Rewriting
+ ::
1 / 10 ~= J / (2**N)
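As an aside (not part of the patch), :meth:`float.as_integer_ratio` shows this *J*/2**\ *N* form directly for the value stored for 0.1::

   >>> (0.1).as_integer_ratio()       # the exact stored value, as a reduced fraction
   (3602879701896397, 36028797018963968)
   >>> 36028797018963968 == 2 ** 55   # the denominator is a power of two, as described
   True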
@@ -243,7 +249,8 @@ by rounding up::
>>> q+1
7205759403792794
- Therefore the best possible approximation to 1/10 in 754 double precision is::
+ Therefore the best possible approximation to 1/10 in IEEE 754 double precision
+ is::
7205759403792794 / 2 ** 56
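A short check of that claim, offered as an illustrative aside rather than as part of the patch: the quoted fraction is exactly the double chosen for 0.1, while the neighbouring numerator gives a different double::

   >>> 7205759403792794 / 2 ** 56 == 0.1
   True
   >>> 7205759403792793 / 2 ** 56 == 0.1
   False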
@@ -256,7 +263,7 @@ if we had not rounded up, the quotient would have been a little bit smaller than
1/10. But in no case can it be *exactly* 1/10!
So the computer never "sees" 1/10: what it sees is the exact fraction given
- above, the best 754 double approximation it can get::
+ above, the best IEEE 754 double approximation it can get:
>>> 0.1 * 2 ** 55
3602879701896397.0
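As a final illustrative aside (not part of the patch), the :mod:`decimal` module can display the exact value that is actually stored for the literal 0.1::

   >>> from decimal import Decimal
   >>> Decimal(0.1)
   Decimal('0.1000000000000000055511151231257827021181583404541015625')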