@@ -127,7 +127,11 @@ with inexact values become comparable to one another::
 
 Binary floating-point arithmetic holds many surprises like this. The problem
 with "0.1" is explained in precise detail below, in the "Representation Error"
-section. See `The Perils of Floating Point <https://www.lahey.com/float.htm>`_
+section. See `Examples of Floating Point Problems
+<https://jvns.ca/blog/2023/01/13/examples-of-floating-point-problems/>`_ for
+a pleasant summary of how binary floating-point works and the kinds of
+problems commonly encountered in practice. Also see
+`The Perils of Floating Point <https://www.lahey.com/float.htm>`_
 for a more complete account of other common surprises.
 
 As that says near the end, "there are no easy answers." Still, don't be unduly
@@ -151,7 +155,7 @@ Another form of exact arithmetic is supported by the :mod:`fractions` module
 which implements arithmetic based on rational numbers (so the numbers like
 1/3 can be represented exactly).
 
-If you are a heavy user of floating point operations you should take a look
+If you are a heavy user of floating-point operations you should take a look
 at the NumPy package and many other packages for mathematical and
 statistical operations supplied by the SciPy project. See <https://scipy.org>.
 
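The exactness that :mod:`fractions` provides is easy to see in a short
interactive session; a minimal sketch (the float comparison assumes the usual
IEEE 754 binary64 floats)::

   >>> from fractions import Fraction
   >>> Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)   # exact rational arithmetic
   True
   >>> Fraction(1, 3) * 3 == 1                                # 1/3 is represented exactly
   True
   >>> 0.1 + 0.2 == 0.3                                       # binary floats are not exact
   False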
@@ -211,12 +215,14 @@ decimal fractions cannot be represented exactly as binary (base 2) fractions.
 This is the chief reason why Python (or Perl, C, C++, Java, Fortran, and many
 others) often won't display the exact decimal number you expect.
 
-Why is that? 1/10 is not exactly representable as a binary fraction. Almost all
-machines today (November 2000) use IEEE-754 floating point arithmetic, and
-almost all platforms map Python floats to IEEE-754 "double precision". 754
-doubles contain 53 bits of precision, so on input the computer strives to
-convert 0.1 to the closest fraction it can of the form *J*/2**\ *N* where *J* is
-an integer containing exactly 53 bits. Rewriting ::
+Why is that? 1/10 is not exactly representable as a binary fraction. Since at
+least 2000, almost all machines use IEEE 754 binary floating-point arithmetic,
+and almost all platforms map Python floats to IEEE 754 binary64 "double
+precision" values. IEEE 754 binary64 values contain 53 bits of precision, so
+on input the computer strives to convert 0.1 to the closest fraction it can of
+the form *J*/2**\ *N* where *J* is an integer containing exactly 53 bits.
+Rewriting
+::
 
    1 / 10 ~= J / (2**N)
 
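The "53 bits of precision" claimed here can be checked on any given machine;
on the near-universal IEEE 754 binary64 platforms, :data:`sys.float_info`
reports::

   >>> import sys
   >>> sys.float_info.mant_dig   # significand width of a Python float, in bits
   53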
@@ -243,7 +249,8 @@ by rounding up::
    >>> q+1
    7205759403792794
 
-Therefore the best possible approximation to 1/10 in 754 double precision is::
+Therefore the best possible approximation to 1/10 in IEEE 754 double precision
+is::
 
    7205759403792794 / 2 ** 56
 
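That best approximation can be double-checked with the :mod:`fractions`
module, which recovers the exact ratio stored in a float; a brief
verification, assuming IEEE 754 doubles::

   >>> from fractions import Fraction
   >>> Fraction(0.1) == Fraction(7205759403792794, 2 ** 56)   # exactly the value stored for 0.1
   True
   >>> best  = Fraction(7205759403792794, 2 ** 56)
   >>> other = Fraction(7205759403792793, 2 ** 56)            # the rounded-down candidate
   >>> abs(best - Fraction(1, 10)) < abs(other - Fraction(1, 10))
   True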
@@ -256,7 +263,7 @@ if we had not rounded up, the quotient would have been a little bit smaller than
 1/10. But in no case can it be *exactly* 1/10!
 
 So the computer never "sees" 1/10: what it sees is the exact fraction given
-above, the best 754 double approximation it can get::
+above, the best IEEE 754 double approximation it can get:
 
    >>> 0.1 * 2 ** 55
    3602879701896397.0
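Two other ways to display the same stored value (again assuming IEEE 754
doubles) are :meth:`float.as_integer_ratio` and the :mod:`decimal` module::

   >>> (0.1).as_integer_ratio()          # the reduced form of 7205759403792794 / 2**56
   (3602879701896397, 36028797018963968)
   >>> from decimal import Decimal
   >>> Decimal(0.1)                      # the exact decimal value of the stored fraction
   Decimal('0.1000000000000000055511151231257827021181583404541015625')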