.. testsetup::

   import math
   from fractions import Fraction

.. _tut-fp-issues:

**************************************************
Floating-Point Arithmetic: Issues and Limitations
**************************************************

.. sectionauthor:: Tim Peters <tim_one@users.sourceforge.net>
.. sectionauthor:: Raymond Hettinger <python at rcn dot com>

Floating-point numbers are represented in computer hardware as base 2 (binary)
fractions. For example, the **decimal** fraction ``0.625``
has value 6/10 + 2/100 + 5/1000, and in the same way the **binary** fraction ``0.101``
has value 1/2 + 0/4 + 1/8. These two fractions have identical values, the only
real difference being that the first is written in base 10 fractional notation,
and the second in base 2.

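
As a quick check (using a method covered later in this chapter), ``0.625``
happens to be one of the decimal fractions that *can* be stored exactly,
because its denominator is a power of two:

.. doctest::

   >>> (0.625).as_integer_ratio()   # exactly 5/8
   (5, 8)
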
Unfortunately, most decimal fractions cannot be represented exactly as binary
fractions. A consequence is that, in general, the decimal floating-point
numbers you enter are only approximated by the binary floating-point numbers
actually stored in the machine.

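
You can see this for yourself with two familiar decimals (a preview of the
examples worked through below); neither ``0.1`` nor ``0.2`` is stored exactly,
so their sum is not exactly ``0.3``:

.. doctest::

   >>> 0.1 + 0.2
   0.30000000000000004
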
The problem is easier to understand at first in base 10. Consider the fraction
1/3. You can approximate that as a base 10 fraction::

   0.3

or, better, ::

   0.33

or, better, ::

   0.333

and so on. No matter how many digits you're willing to write down, the result
will never be exactly 1/3, but will be an increasingly better approximation of
1/3.

In the same way, no matter how many base 2 digits you're willing to use, the
decimal value 0.1 cannot be represented exactly as a base 2 fraction. In base
2, 1/10 is the infinitely repeating fraction ::

   0.0001100110011001100110011001100110011001100110011...

Stop at any finite number of bits, and you get an approximation. On most
machines today, floats are approximated using a binary fraction with
the numerator using the first 53 bits starting with the most significant bit
and the denominator as a power of two. In the case of 1/10, the binary fraction
is ``3602879701896397 / 2 ** 55`` which is close to but not exactly
equal to the true value of 1/10.

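
You can recover this exact fraction directly from a float with
:meth:`float.as_integer_ratio` (described in more detail later in this
chapter); the denominator it reports is exactly ``2 ** 55``:

.. doctest::

   >>> (0.1).as_integer_ratio()
   (3602879701896397, 36028797018963968)
   >>> 36028797018963968 == 2 ** 55
   True
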
Many users are not aware of the approximation because of the way values are
displayed. Python only prints a decimal approximation to the true decimal
value of the binary approximation stored by the machine. On most machines, if
Python were to print the true decimal value of the binary approximation stored
for 0.1, it would have to display::

   >>> 0.1
   0.1000000000000000055511151231257827021181583404541015625

That is more digits than most people find useful, so Python keeps the number
of digits manageable by displaying a rounded value instead:

.. doctest::

   >>> 1 / 10
   0.1

Just remember, even though the printed result looks like the exact value
of 1/10, the actual stored value is the nearest representable binary fraction.

Interestingly, there are many different decimal numbers that share the same
nearest approximate binary fraction. For example, the numbers ``0.1`` and
``0.10000000000000001`` and
``0.1000000000000000055511151231257827021181583404541015625`` are all
approximated by ``3602879701896397 / 2 ** 55``. Since all of these decimal
values share the same approximation, any one of them could be displayed
while still preserving the invariant ``eval(repr(x)) == x``.

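
Because these different literals all map to the same stored binary value, they
compare equal once converted to floats:

.. doctest::

   >>> 0.1 == 0.10000000000000001
   True
   >>> 0.1 == 0.1000000000000000055511151231257827021181583404541015625
   True
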
Historically, the Python prompt and built-in :func:`repr` function would choose
the one with 17 significant digits, ``0.10000000000000001``. Starting with
Python 3.1, Python (on most systems) is now able to choose the shortest of
these and simply display ``0.1``.

Note that this is in the very nature of binary floating-point: this is not a bug
in Python, and it is not a bug in your code either. You'll see the same kind of
thing in all languages that support your hardware's floating-point arithmetic
(although some languages may not *display* the difference by default, or in all
output modes).

For more pleasant output, you may wish to use string formatting to produce a
limited number of significant digits:

.. doctest::

   >>> format(math.pi, '.12g')  # give 12 significant digits
   '3.14159265359'

   >>> format(math.pi, '.2f')   # give 2 digits after the point
   '3.14'

   >>> repr(math.pi)
   '3.141592653589793'

It's important to realize that this is, in a real sense, an illusion: you're
simply rounding the *display* of the true machine value.

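
For example, formatting the same stored value to one decimal place and to
twenty-five decimal places rounds the *display* differently, while the value
itself never changes:

.. doctest::

   >>> format(0.1, '.1f')
   '0.1'
   >>> format(0.1, '.25f')
   '0.1000000000000000055511151'
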
One illusion may beget another. For example, since 0.1 is not exactly 1/10,
summing three values of 0.1 may not yield exactly 0.3, either:

.. doctest::

   >>> 0.1 + 0.1 + 0.1 == 0.3
   False

Also, since 0.1 cannot get any closer to the exact value of 1/10 and
0.3 cannot get any closer to the exact value of 3/10, pre-rounding with
the :func:`round` function cannot help:

.. doctest::

   >>> round(0.1, 1) + round(0.1, 1) + round(0.1, 1) == round(0.3, 1)
   False

Though the numbers cannot be made closer to their intended exact values,
the :func:`math.isclose` function can be useful for comparing inexact values:

.. doctest::

   >>> math.isclose(0.1 + 0.1 + 0.1, 0.3)
   True

Alternatively, the :func:`round` function can be used to compare rough
approximations:

.. doctest::

   >>> round(math.pi, ndigits=2) == round(22 / 7, ndigits=2)
   True

Binary floating-point arithmetic holds many surprises like this. The problem
with "0.1" is explained in precise detail below, in the "Representation Error"
section. See `Examples of Floating Point Problems
<https://jvns.ca/blog/2023/01/13/examples-of-floating-point-problems/>`_ for
a pleasant summary of how binary floating point works and the kinds of
problems commonly encountered in practice. Also see
`The Perils of Floating Point <https://www.lahey.com/float.htm>`_
for a more complete account of other common surprises.

As that says near the end, "there are no easy answers." Still, don't be unduly
wary of floating-point! The errors in Python float operations are inherited
from the floating-point hardware, and on most machines are on the order of no
more than 1 part in 2\*\*53 per operation. That's more than adequate for most
tasks, but you do need to keep in mind that it's not decimal arithmetic and
that every float operation can suffer a new rounding error.

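
If you want to confirm the precision available on your own machine, the
:data:`sys.float_info` struct reports it; on a typical platform using IEEE-754
double precision the significand carries 53 bits:

.. doctest::

   >>> import sys
   >>> sys.float_info.mant_dig           # bits of precision in a float
   53
   >>> sys.float_info.epsilon == 2**-52  # gap between 1.0 and the next float
   True
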
While pathological cases do exist, for most casual use of floating-point
arithmetic you'll see the result you expect in the end if you simply round the
display of your final results to the number of decimal digits you expect.
:func:`str` usually suffices, and for finer control see the :meth:`str.format`
method's format specifiers in :ref:`formatstrings`.

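
For instance, :func:`str` already shows the short rounded form, and a
:meth:`str.format` specifier gives explicit control over the number of digits
displayed:

.. doctest::

   >>> str(0.1)
   '0.1'
   >>> '{:.5f}'.format(1 / 3)
   '0.33333'
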
For use cases which require exact decimal representation, try using the
:mod:`decimal` module which implements decimal arithmetic suitable for
accounting applications and high-precision applications.

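
For example, with :class:`decimal.Decimal` the earlier
``0.1 + 0.1 + 0.1 == 0.3`` surprise goes away, because the values are stored
in base 10 rather than base 2:

.. doctest::

   >>> from decimal import Decimal
   >>> Decimal('0.1') + Decimal('0.1') + Decimal('0.1') == Decimal('0.3')
   True
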
Another form of exact arithmetic is supported by the :mod:`fractions` module
which implements arithmetic based on rational numbers (so numbers like
1/3 can be represented exactly).

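
For example, three exact thirds really do add up to one:

.. doctest::

   >>> from fractions import Fraction
   >>> Fraction(1, 3) + Fraction(1, 3) + Fraction(1, 3)
   Fraction(1, 1)
   >>> Fraction(1, 3) * 3 == 1
   True
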
If you are a heavy user of floating-point operations, you should take a look
at the NumPy package and the many other packages for mathematical and
statistical operations supplied by the SciPy project. See <https://scipy.org>.

Python provides tools that may help on those rare occasions when you really
*do* want to know the exact value of a float. The
:meth:`float.as_integer_ratio` method expresses the value of a float as a
fraction:

.. doctest::

   >>> x = 3.14159
   >>> x.as_integer_ratio()
   (3537115888337719, 1125899906842624)

Since the ratio is exact, it can be used to losslessly recreate the
original value:

.. doctest::

   >>> x == 3537115888337719 / 1125899906842624
   True

The :meth:`float.hex` method expresses a float in hexadecimal (base
16), again giving the exact value stored by your computer:

.. doctest::

   >>> x.hex()
   '0x1.921f9f01b866ep+1'

This precise hexadecimal representation can be used to reconstruct
the float value exactly:

.. doctest::

   >>> x == float.fromhex('0x1.921f9f01b866ep+1')
   True

Since the representation is exact, it is useful for reliably porting values
across different versions of Python (platform independence) and exchanging
data with other languages that support the same format (such as Java and C99).

Another helpful tool is the :func:`sum` function which helps mitigate
loss-of-precision during summation. It uses extended precision for
intermediate rounding steps as values are added onto a running total.
That can make a difference in overall accuracy so that the errors do not
accumulate to the point where they affect the final total:

.. doctest::

   >>> 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 == 1.0
   False
   >>> sum([0.1] * 10) == 1.0
   True

The :func:`math.fsum` function goes further and tracks all of the "lost digits"
as values are added onto a running total so that the result has only a
single rounding. This is slower than :func:`sum` but will be more
accurate in uncommon cases where large magnitude inputs mostly cancel
each other out leaving a final sum near zero:

.. doctest::

   >>> arr = [-0.10430216751806065, -266310978.67179024, 143401161448607.16,
   ...        -143401161400469.7, 266262841.31058735, -0.003244936839808227]
   >>> float(sum(map(Fraction, arr)))   # Exact summation with single rounding
   8.042173697819788e-13
   >>> math.fsum(arr)                   # Single rounding
   8.042173697819788e-13
   >>> sum(arr)                         # Multiple roundings in extended precision
   8.042178034628478e-13
   >>> total = 0.0
   >>> for x in arr:
   ...     total += x                   # Multiple roundings in standard precision
   ...
   >>> total                            # Straight addition has no correct digits!
   -0.0051575902860057365


.. _tut-fp-error:

Representation Error
====================

This section explains the "0.1" example in detail, and shows how you can perform
an exact analysis of cases like this yourself. Basic familiarity with binary
floating-point representation is assumed.

:dfn:`Representation error` refers to the fact that some (most, actually)
decimal fractions cannot be represented exactly as binary (base 2) fractions.
This is the chief reason why Python (or Perl, C, C++, Java, Fortran, and many
others) often won't display the exact decimal number you expect.

Why is that? 1/10 is not exactly representable as a binary fraction. Almost all
machines today use IEEE-754 floating-point arithmetic, and almost all
platforms map Python floats to IEEE-754 "double precision". 754 doubles
contain 53 bits of precision, so on input the computer strives to
convert 0.1 to the closest fraction it can of the form *J*/2**\ *N* where *J* is
an integer containing exactly 53 bits. Rewriting ::

   1 / 10 ~= J / (2**N)

as ::

   J ~= 2**N / 10

and recalling that *J* has exactly 53 bits (is ``>= 2**52`` but ``< 2**53``),
the best value for *N* is 56:

.. doctest::

   >>> 2**52 <= 2**56 // 10 < 2**53
   True

That is, 56 is the only value for *N* that leaves *J* with exactly 53 bits. The
best possible value for *J* is then that quotient rounded:

.. doctest::

   >>> q, r = divmod(2**56, 10)
   >>> r
   6

Since the remainder is more than half of 10, the best approximation is obtained
by rounding up:

.. doctest::

   >>> q+1
   7205759403792794

Therefore the best possible approximation to 1/10 in 754 double precision is::

   7205759403792794 / 2 ** 56

Dividing both the numerator and denominator by two reduces the fraction to::

   3602879701896397 / 2 ** 55

Note that since we rounded up, this is actually a little bit larger than 1/10;
if we had not rounded up, the quotient would have been a little bit smaller than
1/10. But in no case can it be *exactly* 1/10!

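
If you want to verify the direction of that error, exact rational arithmetic
from the :mod:`fractions` module confirms that the stored fraction is slightly
larger than 1/10:

.. doctest::

   >>> from fractions import Fraction
   >>> Fraction(3602879701896397, 2 ** 55) > Fraction(1, 10)
   True
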
So the computer never "sees" 1/10: what it sees is the exact fraction given
above, the best 754 double approximation it can get:

.. doctest::

   >>> 0.1 * 2 ** 55
   3602879701896397.0

If we multiply that fraction by 10\*\*55, we can see the value out to
55 decimal digits:

.. doctest::

   >>> 3602879701896397 * 10 ** 55 // 2 ** 55
   1000000000000000055511151231257827021181583404541015625

meaning that the exact number stored in the computer is equal to
the decimal value 0.1000000000000000055511151231257827021181583404541015625.
Instead of displaying the full decimal value, many languages (including
older versions of Python) round the result to 17 significant digits:

.. doctest::

   >>> format(0.1, '.17f')
   '0.10000000000000001'

The :mod:`fractions` and :mod:`decimal` modules make these calculations
easy:

.. doctest::

   >>> from decimal import Decimal
   >>> from fractions import Fraction

   >>> Fraction.from_float(0.1)
   Fraction(3602879701896397, 36028797018963968)

   >>> (0.1).as_integer_ratio()
   (3602879701896397, 36028797018963968)

   >>> Decimal.from_float(0.1)
   Decimal('0.1000000000000000055511151231257827021181583404541015625')

   >>> format(Decimal.from_float(0.1), '.17')
   '0.10000000000000001'