Merge 3.5.3 release head with main 3.5 branch.

This commit is contained in:
Larry Hastings 2017-01-17 00:56:40 -08:00
commit 09e4ce5a95
52 changed files with 592 additions and 379 deletions

View File

@ -148,6 +148,7 @@ b4cbecbc0781e89a309d03b60a1f75f8499250e6 v3.4.3
737efcadf5a678b184e0fa431aae11276bf06648 v3.4.4
3631bb4a2490292ebf81d3e947ae36da145da564 v3.4.5rc1
619b61e505d0e2ccc8516b366e4ddd1971b46a6f v3.4.5
e199a272ccdac5a8c073d4690f60c13e0b6d86b0 v3.4.6rc1
5d4b6a57d5fd7564bf73f3db0e46fe5eeb00bcd8 v3.5.0a1
0337bd7ebcb6559d69679bc7025059ad1ce4f432 v3.5.0a2
82656e28b5e5c4ae48d8dd8b5f0d7968908a82b6 v3.5.0a3

View File

@ -168,7 +168,7 @@ can be combined with a binding flag.
Methods with these flags must be of type :c:type:`PyCFunctionWithKeywords`.
The function expects three parameters: *self*, *args*, and a dictionary of
all the keyword arguments. The flag is typically combined with
all the keyword arguments. The flag must be combined with
:const:`METH_VARARGS`, and the parameters are typically processed using
:c:func:`PyArg_ParseTupleAndKeywords`.

View File

@ -2165,8 +2165,8 @@ Speaking logging messages
-------------------------
There might be situations when it is desirable to have logging messages rendered
in an audible rather than a visible format. This is easy to do if you have text-
to-speech (TTS) functionality available in your system, even if it doesn't have
in an audible rather than a visible format. This is easy to do if you have
text-to-speech (TTS) functionality available in your system, even if it doesn't have
a Python binding. Most TTS systems have a command line program you can run, and
this can be invoked from a handler using :mod:`subprocess`. It's assumed here
that TTS command line programs won't expect to interact with users or take a
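For illustration, a minimal sketch of the handler described in this cookbook passage, assuming an ``espeak`` command is available (any TTS command-line tool could be substituted):

import logging
import subprocess

class TTSHandler(logging.Handler):
    """Speak the formatted record with an external TTS program."""
    def emit(self, record):
        msg = self.format(record)
        # 'espeak' is only an assumed example command; use whatever TTS
        # tool exists on your system.
        subprocess.call(['espeak', '-s150', msg],
                        stdout=subprocess.DEVNULL,
                        stderr=subprocess.DEVNULL)

logging.basicConfig(level=logging.INFO, handlers=[TTSHandler()])
logging.info('Hello, world')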

View File

@ -174,7 +174,7 @@ ArgumentParser objects
* conflict_handler_ - The strategy for resolving conflicting optionals
(usually unnecessary)
* add_help_ - Add a -h/--help option to the parser (default: ``True``)
* add_help_ - Add a ``-h/--help`` option to the parser (default: ``True``)
* allow_abbrev_ - Allows long options to be abbreviated if the
abbreviation is unambiguous. (default: ``True``)
@ -211,7 +211,7 @@ The help for this program will display ``myprogram.py`` as the program name
-h, --help show this help message and exit
--foo FOO foo help
$ cd ..
$ python subdir\myprogram.py --help
$ python subdir/myprogram.py --help
usage: myprogram.py [-h] [--foo FOO]
optional arguments:
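A small sketch of the options touched in this hunk: ``prog`` defaults to ``os.path.basename(sys.argv[0])`` but can be set explicitly, and ``add_help`` controls the automatic ``-h/--help`` option:

import argparse

parser = argparse.ArgumentParser(prog='myprogram.py', add_help=True)
parser.add_argument('--foo', help='foo help')
parser.print_help()
# usage: myprogram.py [-h] [--foo FOO]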

View File

@ -105,8 +105,8 @@ in :mod:`logging` itself) and defining handlers which are declared either in
:param disable_existing_loggers: If specified as ``False``, loggers which
exist when this call is made are left
enabled. The default is ``True`` because this
enables old behaviour in a backward-
compatible way. This behaviour is to
enables old behaviour in a
backward-compatible way. This behaviour is to
disable any existing loggers unless they or
their ancestors are explicitly named in the
logging configuration.
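A hedged sketch of the parameter documented here; ``logging.ini`` is a hypothetical config file name:

import logging
import logging.config

logger = logging.getLogger('myapp')  # created before configuration

# With the default disable_existing_loggers=True, this pre-existing logger
# would be disabled unless it (or an ancestor) is named in the file.
logging.config.fileConfig('logging.ini', disable_existing_loggers=False)

logger.info('still enabled after configuration')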

View File

@ -900,8 +900,8 @@ possible, while any potentially slow operations (such as sending an email via
.. class:: QueueHandler(queue)
Returns a new instance of the :class:`QueueHandler` class. The instance is
initialized with the queue to send messages to. The queue can be any queue-
like object; it's used as-is by the :meth:`enqueue` method, which needs
initialized with the queue to send messages to. The queue can be any
queue-like object; it's used as-is by the :meth:`enqueue` method, which needs
to know how to send messages to it.
@ -956,8 +956,8 @@ possible, while any potentially slow operations (such as sending an email via
Returns a new instance of the :class:`QueueListener` class. The instance is
initialized with the queue to send messages to and a list of handlers which
will handle entries placed on the queue. The queue can be any queue-
like object; it's passed as-is to the :meth:`dequeue` method, which needs
will handle entries placed on the queue. The queue can be any queue-like
object; it's passed as-is to the :meth:`dequeue` method, which needs
to know how to get messages from it. If ``respect_handler_level`` is ``True``,
a handler's level is respected (compared with the level for the message) when
deciding whether to pass messages to that handler; otherwise, the behaviour
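A minimal sketch of wiring the two classes documented in this hunk together:

import logging
import logging.handlers
import queue

q = queue.Queue()                                # any queue-like object works
handler = logging.handlers.QueueHandler(q)       # enqueue() puts records here
listener = logging.handlers.QueueListener(
    q, logging.StreamHandler(), respect_handler_level=True)

root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)

listener.start()          # dequeue() pulls records and passes them to handlers
root.info('logged via the queue')
listener.stop()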

View File

@ -32,8 +32,8 @@ sending a graphics file.
.. function:: encode(input, output, quotetabs, header=False)
Encode the contents of the *input* file and write the resulting quoted-
printable data to the *output* file. *input* and *output* must be
Encode the contents of the *input* file and write the resulting quoted-printable
data to the *output* file. *input* and *output* must be
:term:`binary file objects <file object>`. *quotetabs*, a flag which controls
whether to encode embedded spaces and tabs must be provided, and when true it
encodes such embedded whitespace, and when false it leaves them unencoded.
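A short sketch of the call documented above, using in-memory binary file objects:

import io
import quopri

src = io.BytesIO(b'caf\xc3\xa9 \tand spaces')
dst = io.BytesIO()

# input and output must be binary file objects; quotetabs=True also
# encodes embedded spaces and tabs.
quopri.encode(src, dst, quotetabs=True)
print(dst.getvalue())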

View File

@ -1288,8 +1288,8 @@ to sockets.
to transmit as opposed to sending the file until EOF is reached. File
position is updated on return or also in case of error in which case
:meth:`file.tell() <io.IOBase.tell>` can be used to figure out the number of
bytes which were sent. The socket must be of :const:`SOCK_STREAM` type. Non-
blocking sockets are not supported.
bytes which were sent. The socket must be of :const:`SOCK_STREAM` type.
Non-blocking sockets are not supported.
.. versionadded:: 3.5
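A hedged sketch of the method described here; ``sock`` is assumed to be a connected, blocking ``SOCK_STREAM`` socket (non-blocking sockets are not supported):

import socket

def send_file(sock, path, offset=0, count=None):
    with open(path, 'rb') as f:
        sent = sock.sendfile(f, offset=offset, count=count)
        # Even on error, f.tell() reflects how far the transfer got.
        return sent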

View File

@ -557,6 +557,10 @@ The module defines the following classes, functions and decorators:
As a shorthand for this type, :class:`bytes` can be used to
annotate arguments of any of the types mentioned above.
.. class:: Deque(deque, MutableSequence[T])
A generic version of :class:`collections.deque`.
.. class:: List(list, MutableSequence[T])
Generic version of :class:`list`.
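A small sketch of using the ``Deque`` and ``List`` generics added here for annotations (the runtime object is still a plain ``collections.deque``):

from collections import deque
from typing import Deque, List

def most_recent(events: List[str], n: int) -> Deque[str]:
    # typing.Deque only annotates; instantiate a real deque
    # (typing.Deque() itself raises TypeError).
    window = deque(maxlen=n)
    for event in events:
        window.append(event)
    return window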

View File

@ -1795,6 +1795,9 @@ sentinel
the same attribute will always return the same object. The objects
returned have a sensible repr so that test failure messages are readable.
The ``sentinel`` attributes don't preserve their identity when they are
:mod:`copied <copy>` or :mod:`pickled <pickle>`.
Sometimes when testing you need to test that a specific object is passed as an
argument to another method, or returned. It can be common to create named
sentinel objects to test this. :data:`sentinel` provides a convenient way of
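For example, a sentinel attribute always returns the same object, so identity checks are straightforward:

from unittest import mock

def lookup(cache, key, default):
    return cache.get(key, default)

# Each attribute access returns the same unique object with a readable repr.
missing = mock.sentinel.missing
assert mock.sentinel.missing is missing

result = lookup({}, 'absent', mock.sentinel.missing)
assert result is mock.sentinel.missing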

View File

@ -1637,11 +1637,11 @@ Loading and running tests
The method optionally resolves *name* relative to the given *module*.
.. versionchanged:: 3.5
If an :exc:`ImportError` or :exc:`AttributeError` occurs while traversing
*name* then a synthetic test that raises that error when run will be
returned. These errors are included in the errors accumulated by
self.errors.
.. versionchanged:: 3.5
If an :exc:`ImportError` or :exc:`AttributeError` occurs while traversing
*name* then a synthetic test that raises that error when run will be
returned. These errors are included in the errors accumulated by
self.errors.
.. method:: loadTestsFromNames(names, module=None)

View File

@ -111,7 +111,7 @@ random UUID.
.. attribute:: UUID.variant
The UUID variant, which determines the internal layout of the UUID. This will be
one of the integer constants :const:`RESERVED_NCS`, :const:`RFC_4122`,
one of the constants :const:`RESERVED_NCS`, :const:`RFC_4122`,
:const:`RESERVED_MICROSOFT`, or :const:`RESERVED_FUTURE`.
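For example, a UUID generated by :func:`uuid.uuid4` reports the RFC 4122 variant:

import uuid

u = uuid.uuid4()
assert u.variant == uuid.RFC_4122
print(u.variant)   # 'specified in RFC 4122'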

View File

@ -765,9 +765,9 @@ Custom classes
Special attributes: :attr:`~definition.__name__` is the class name; :attr:`__module__` is
the module name in which the class was defined; :attr:`~object.__dict__` is the
dictionary containing the class's namespace; :attr:`~class.__bases__` is a
tuple (possibly a singleton) containing the base classes, in the
order of their occurrence in the base class list; :attr:`__doc__` is the
class's documentation string, or ``None`` if undefined.
tuple containing the base classes, in the order of their occurrence in the
base class list; :attr:`__doc__` is the class's documentation string, or
``None`` if undefined.
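For example, these special attributes can be inspected directly:

class Base:
    pass

class Child(Base):
    """Example docstring."""

assert Child.__name__ == 'Child'
assert Child.__module__ == __name__         # module where the class was defined
assert Child.__bases__ == (Base,)           # base classes in declaration order
assert Child.__doc__ == 'Example docstring.'
assert '__doc__' in Child.__dict__          # part of the class namespace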
Class instances
.. index::

View File

@ -145,8 +145,8 @@ strings. Unicode uses 16-bit numbers to represent characters instead of the
8-bit number used by ASCII, meaning that 65,536 distinct characters can be
supported.
The final interface for Unicode support was arrived at through countless often-
stormy discussions on the python-dev mailing list, and mostly implemented by
The final interface for Unicode support was arrived at through countless
often-stormy discussions on the python-dev mailing list, and mostly implemented by
Marc-André Lemburg, based on a Unicode string type implementation by Fredrik
Lundh. A detailed explanation of the interface was written up as :pep:`100`,
"Python Unicode Integration". This article will simply cover the most
@ -885,8 +885,8 @@ interfaces for processing XML have become common: SAX2 (version 2 of the Simple
API for XML) provides an event-driven interface with some similarities to
:mod:`xmllib`, and the DOM (Document Object Model) provides a tree-based
interface, transforming an XML document into a tree of nodes that can be
traversed and modified. Python 2.0 includes a SAX2 interface and a stripped-
down DOM interface as part of the :mod:`xml` package. Here we will give a brief
traversed and modified. Python 2.0 includes a SAX2 interface and a stripped-down
DOM interface as part of the :mod:`xml` package. Here we will give a brief
overview of these new interfaces; consult the Python documentation or the source
code for complete details. The Python XML SIG is also working on improved
documentation.

View File

@ -159,8 +159,8 @@ precede any statement that will result in bytecodes being produced.
PEP 207: Rich Comparisons
=========================
In earlier versions, Python's support for implementing comparisons on user-
defined classes and extension types was quite simple. Classes could implement a
In earlier versions, Python's support for implementing comparisons on user-defined
classes and extension types was quite simple. Classes could implement a
:meth:`__cmp__` method that was given two instances of a class, and could only
return 0 if they were equal or +1 or -1 if they weren't; the method couldn't
raise an exception or return anything other than a Boolean value. Users of
@ -465,11 +465,11 @@ Windows being the primary examples; on these systems, it's impossible to
distinguish the filenames ``FILE.PY`` and ``file.py``, even though they do store
the file's name in its original case (they're case-preserving, too).
In Python 2.1, the :keyword:`import` statement will work to simulate case-
sensitivity on case-insensitive platforms. Python will now search for the first
In Python 2.1, the :keyword:`import` statement will work to simulate case-sensitivity
on case-insensitive platforms. Python will now search for the first
case-sensitive match by default, raising an :exc:`ImportError` if no such file
is found, so ``import file`` will not import a module named ``FILE.PY``. Case-
insensitive matching can be requested by setting the :envvar:`PYTHONCASEOK`
is found, so ``import file`` will not import a module named ``FILE.PY``.
Case-insensitive matching can be requested by setting the :envvar:`PYTHONCASEOK`
environment variable before starting the Python interpreter.
.. ======================================================================
@ -481,8 +481,8 @@ PEP 217: Interactive Display Hook
When using the Python interpreter interactively, the output of commands is
displayed using the built-in :func:`repr` function. In Python 2.1, the variable
:func:`sys.displayhook` can be set to a callable object which will be called
instead of :func:`repr`. For example, you can set it to a special pretty-
printing function::
instead of :func:`repr`. For example, you can set it to a special
pretty-printing function::
>>> # Create a recursive data structure
... L = [1,2,3]

View File

@ -962,8 +962,8 @@ New and Improved Modules
* The new :mod:`hmac` module implements the HMAC algorithm described by
:rfc:`2104`. (Contributed by Gerhard Häring.)
* Several functions that originally returned lengthy tuples now return pseudo-
sequences that still behave like tuples but also have mnemonic attributes such
* Several functions that originally returned lengthy tuples now return
pseudo-sequences that still behave like tuples but also have mnemonic attributes such
as :attr:`st_mtime` or :attr:`tm_year`. The enhanced functions include
:func:`stat`, :func:`fstat`, :func:`statvfs`, and :func:`fstatvfs` in the
:mod:`os` module, and :func:`localtime`, :func:`gmtime`, and :func:`strptime` in
@ -1141,8 +1141,8 @@ Some of the more notable changes are:
The most significant change is the ability to build Python as a framework,
enabled by supplying the :option:`!--enable-framework` option to the configure
script when compiling Python. According to Jack Jansen, "This installs a self-
contained Python installation plus the OS X framework "glue" into
script when compiling Python. According to Jack Jansen, "This installs a
self-contained Python installation plus the OS X framework "glue" into
:file:`/Library/Frameworks/Python.framework` (or another location of choice).
For now there is little immediate added benefit to this (actually, there is the
disadvantage that you have to change your PATH to be able to find Python), but

View File

@ -86,8 +86,8 @@ The union and intersection of sets can be computed with the :meth:`union` and
It's also possible to take the symmetric difference of two sets. This is the
set of all elements in the union that aren't in the intersection. Another way
of putting it is that the symmetric difference contains all elements that are in
exactly one set. Again, there's an alternative notation (``^``), and an in-
place version with the ungainly name :meth:`symmetric_difference_update`. ::
exactly one set. Again, there's an alternative notation (``^``), and an
in-place version with the ungainly name :meth:`symmetric_difference_update`. ::
>>> S1 = sets.Set([1,2,3,4])
>>> S2 = sets.Set([3,4,5,6])
@ -288,8 +288,8 @@ use characters outside of the usual alphanumerics.
PEP 273: Importing Modules from ZIP Archives
============================================
The new :mod:`zipimport` module adds support for importing modules from a ZIP-
format archive. You don't need to import the module explicitly; it will be
The new :mod:`zipimport` module adds support for importing modules from a
ZIP-format archive. You don't need to import the module explicitly; it will be
automatically imported if a ZIP archive's filename is added to ``sys.path``.
For example:
@ -375,8 +375,8 @@ PEP 278: Universal Newline Support
==================================
The three major operating systems used today are Microsoft Windows, Apple's
Macintosh OS, and the various Unix derivatives. A minor irritation of cross-
platform work is that these three platforms all use different characters to
Macintosh OS, and the various Unix derivatives. A minor irritation of
cross-platform work is that these three platforms all use different characters to
mark the ends of lines in text files. Unix uses the linefeed (ASCII character
10), MacOS uses the carriage return (ASCII character 13), and Windows uses a
two-character sequence of a carriage return plus a newline.

View File

@ -517,8 +517,8 @@ Sometimes you can see this inaccuracy when the number is printed::
>>> 1.1
1.1000000000000001
The inaccuracy isn't always visible when you print the number because the FP-to-
decimal-string conversion is provided by the C library, and most C libraries try
The inaccuracy isn't always visible when you print the number because the
FP-to-decimal-string conversion is provided by the C library, and most C libraries try
to produce sensible output. Even if it's not displayed, however, the inaccuracy
is still there and subsequent operations can magnify the error.
@ -595,8 +595,8 @@ exponent::
...
decimal.InvalidOperation: x ** (non-integer)
You can combine :class:`Decimal` instances with integers, but not with floating-
point numbers::
You can combine :class:`Decimal` instances with integers, but not with
floating-point numbers::
>>> a + 4
Decimal("39.72")
@ -684,8 +684,8 @@ includes a quick-start tutorial and a reference.
Raymond Hettinger, Aahz, and Tim Peters.
http://www.lahey.com/float.htm
The article uses Fortran code to illustrate many of the problems that floating-
point inaccuracy can cause.
The article uses Fortran code to illustrate many of the problems that
floating-point inaccuracy can cause.
http://speleotrove.com/decimal/
A description of a decimal-based representation. This representation is being
@ -741,8 +741,8 @@ functions in Python's implementation required that the numeric locale remain set
to the ``'C'`` locale. Often this was because the code was using the C
library's :c:func:`atof` function.
Not setting the numeric locale caused trouble for extensions that used third-
party C libraries, however, because they wouldn't have the correct locale set.
Not setting the numeric locale caused trouble for extensions that used third-party
C libraries, however, because they wouldn't have the correct locale set.
The motivating example was GTK+, whose user interface widgets weren't displaying
numbers in the current locale.
@ -918,8 +918,8 @@ Here are all of the changes that Python 2.4 makes to the core Python language.
(Contributed by Raymond Hettinger.)
* Encountering a failure while importing a module no longer leaves a partially-
initialized module object in ``sys.modules``. The incomplete module object left
* Encountering a failure while importing a module no longer leaves a partially-initialized
module object in ``sys.modules``. The incomplete module object left
behind would fool further imports of the same module into succeeding, leading to
confusing errors. (Fixed by Tim Peters.)
@ -1028,8 +1028,8 @@ complete list of changes, or look through the CVS logs for all the details.
previous ones left off. (Implemented by Walter Dörwald.)
* There is a new :mod:`collections` module for various specialized collection
datatypes. Currently it contains just one type, :class:`deque`, a double-
ended queue that supports efficiently adding and removing elements from either
datatypes. Currently it contains just one type, :class:`deque`, a double-ended
queue that supports efficiently adding and removing elements from either
end::
>>> from collections import deque
@ -1485,8 +1485,8 @@ Some of the changes to Python's build process and to the C API are:
intended as an aid to people developing the Python core. Providing
:option:`!--enable-profiling` to the :program:`configure` script will let you
profile the interpreter with :program:`gprof`, and providing the
:option:`!--with-tsc` switch enables profiling using the Pentium's Time-Stamp-
Counter register. Note that the :option:`!--with-tsc` switch is slightly
:option:`!--with-tsc` switch enables profiling using the Pentium's
Time-Stamp-Counter register. Note that the :option:`!--with-tsc` switch is slightly
misnamed, because the profiling feature also works on the PowerPC platform,
though that processor architecture doesn't call that register "the TSC
register". (Contributed by Jeremy Hylton.)
@ -1540,8 +1540,8 @@ code:
* The :mod:`tarfile` module now generates GNU-format tar files by default.
* Encountering a failure while importing a module no longer leaves a partially-
initialized module object in ``sys.modules``.
* Encountering a failure while importing a module no longer leaves a
partially-initialized module object in ``sys.modules``.
* :const:`None` is now a constant; code that binds a new value to the name
``None`` is now a syntax error.

View File

@ -157,8 +157,8 @@ Here's a small but realistic example::
server_log = functools.partial(log, subsystem='server')
server_log('Unable to open socket')
Here's another example, from a program that uses PyGTK. Here a context-
sensitive pop-up menu is being constructed dynamically. The callback provided
Here's another example, from a program that uses PyGTK. Here a context-sensitive
pop-up menu is being constructed dynamically. The callback provided
for the menu option is a partially applied version of the :meth:`open_item`
method, where the first argument has been provided. ::
@ -171,8 +171,8 @@ method, where the first argument has been provided. ::
popup_menu.append( ("Open", open_func, 1) )
Another function in the :mod:`functools` module is the
``update_wrapper(wrapper, wrapped)`` function that helps you write well-
behaved decorators. :func:`update_wrapper` copies the name, module, and
``update_wrapper(wrapper, wrapped)`` function that helps you write
well-behaved decorators. :func:`update_wrapper` copies the name, module, and
docstring attribute to a wrapper function so that tracebacks inside the wrapped
function are easier to understand. For example, you might write::
@ -297,8 +297,8 @@ can't protect against having your submodule's name being used for a new module
added in a future version of Python.
In Python 2.5, you can switch :keyword:`import`'s behaviour to absolute imports
using a ``from __future__ import absolute_import`` directive. This absolute-
import behaviour will become the default in a future version (probably Python
using a ``from __future__ import absolute_import`` directive. This absolute-import
behaviour will become the default in a future version (probably Python
2.7). Once absolute imports are the default, ``import string`` will always
find the standard library's version. It's suggested that users should begin
using absolute imports as much as possible, so it's preferable to begin writing
@ -602,8 +602,8 @@ be used with the ':keyword:`with`' statement. File objects are one example::
... more processing code ...
After this statement has executed, the file object in *f* will have been
automatically closed, even if the :keyword:`for` loop raised an exception part-
way through the block.
automatically closed, even if the :keyword:`for` loop raised an exception
part-way through the block.
.. note::
@ -1558,8 +1558,8 @@ complete list of changes, or look through the SVN logs for all the details.
You can also pack and unpack data to and from buffer objects directly using the
``pack_into(buffer, offset, v1, v2, ...)`` and ``unpack_from(buffer,
offset)`` methods. This lets you store data directly into an array or a memory-
mapped file.
offset)`` methods. This lets you store data directly into an array or a
memory-mapped file.
(:class:`Struct` objects were implemented by Bob Ippolito at the NeedForSpeed
sprint. Support for buffer objects was added by Martin Blais, also at the
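As a quick illustrative sketch (not part of the original text) of the ``Struct`` methods described above:

import struct

s = struct.Struct('<II')
buf = bytearray(16)

# Pack directly into a pre-allocated buffer at a given offset ...
s.pack_into(buf, 4, 1, 2)
# ... and unpack from it without creating an intermediate bytes object.
assert s.unpack_from(buf, 4) == (1, 2)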
@ -2281,8 +2281,8 @@ Acknowledgements
The author would like to thank the following people for offering suggestions,
corrections and assistance with various drafts of this article: Georg Brandl,
Nick Coghlan, Phillip J. Eby, Lars Gustäbel, Raymond Hettinger, Ralf W. Grosse-
Kunstleve, Kent Johnson, Iain Lowe, Martin von Löwis, Fredrik Lundh, Andrew
Nick Coghlan, Phillip J. Eby, Lars Gustäbel, Raymond Hettinger, Ralf W.
Grosse-Kunstleve, Kent Johnson, Iain Lowe, Martin von Löwis, Fredrik Lundh, Andrew
McNamara, Skip Montanaro, Gustavo Niemeyer, Paul Prescod, James Pryor, Mike
Rovner, Scott Weikart, Barry Warsaw, Thomas Wouters.

View File

@ -290,8 +290,8 @@ be used with the ':keyword:`with`' statement. File objects are one example::
... more processing code ...
After this statement has executed, the file object in *f* will have been
automatically closed, even if the :keyword:`for` loop raised an exception part-
way through the block.
automatically closed, even if the :keyword:`for` loop raised an exception
part-way through the block.
.. note::

View File

@ -102,6 +102,7 @@ PyAPI_FUNC(int) _PyDict_HasOnlyStringKeys(PyObject *mp);
Py_ssize_t _PyDict_KeysSize(PyDictKeysObject *keys);
Py_ssize_t _PyDict_SizeOf(PyDictObject *);
PyObject *_PyDict_Pop(PyDictObject *, PyObject *, PyObject *);
PyObject *_PyDict_Pop_KnownHash(PyDictObject *, PyObject *, Py_hash_t, PyObject *);
PyObject *_PyDict_FromKeys(PyObject *, PyObject *, PyObject *);
#define _PyDict_HasSplitTable(d) ((d)->ma_values != NULL)

View File

@ -19,8 +19,8 @@
#define PY_MAJOR_VERSION 3
#define PY_MINOR_VERSION 5
#define PY_MICRO_VERSION 3
#define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL
#define PY_RELEASE_SERIAL 0
#define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_GAMMA
#define PY_RELEASE_SERIAL 1
/* Version as a string */
#define PY_VERSION "3.5.3+"

View File

@ -327,6 +327,10 @@ class CDLL(object):
"""
_func_flags_ = _FUNCFLAG_CDECL
_func_restype_ = c_int
# default values for repr
_name = '<uninitialized>'
_handle = 0
_FuncPtr = None
def __init__(self, name, mode=DEFAULT_MODE, handle=None,
use_errno=False,

View File

@ -1416,7 +1416,6 @@ def getframeinfo(frame, context=1):
except OSError:
lines = index = None
else:
start = max(start, 0)
start = max(0, min(start, len(lines) - context))
lines = lines[start:start+context]
index = lineno - 1 - start

View File

@ -129,9 +129,14 @@ def getLevelName(level):
Otherwise, the string "Level %s" % level is returned.
"""
# See Issues #22386 and #27937 for why it's this way
return (_levelToName.get(level) or _nameToLevel.get(level) or
"Level %s" % level)
# See Issues #22386, #27937 and #29220 for why it's this way
result = _levelToName.get(level)
if result is not None:
return result
result = _nameToLevel.get(level)
if result is not None:
return result
return "Level %s" % level
def addLevelName(level, levelName):
"""

View File

@ -70,17 +70,28 @@ def run_python_until_end(*args, **env_vars):
elif not env_vars and not env_required:
# ignore Python environment variables
cmd_line.append('-E')
# Need to preserve the original environment, for in-place testing of
# shared library builds.
env = os.environ.copy()
# set TERM='' unless the TERM environment variable is passed explicitly
# see issues #11390 and #18300
if 'TERM' not in env_vars:
env['TERM'] = ''
# But a special flag that can be set to override -- in this case, the
# caller is responsible to pass the full environment.
if env_vars.pop('__cleanenv', None):
env = {}
if sys.platform == 'win32':
# Windows requires at least the SYSTEMROOT environment variable to
# start Python.
env['SYSTEMROOT'] = os.environ['SYSTEMROOT']
# Other interesting environment variables, not copied currently:
# COMSPEC, HOME, PATH, TEMP, TMPDIR, TMP.
else:
# Need to preserve the original environment, for in-place testing of
# shared library builds.
env = os.environ.copy()
# set TERM='' unless the TERM environment variable is passed explicitly
# see issues #11390 and #18300
if 'TERM' not in env_vars:
env['TERM'] = ''
env.update(env_vars)
cmd_line.extend(args)
proc = subprocess.Popen(cmd_line, stdin=subprocess.PIPE,

View File

@ -244,7 +244,7 @@ class TestCurses(unittest.TestCase):
# Functions only available on a few platforms
def test_colors_funcs(self):
if not curses.has_colors():
self.skip('requires colors support')
self.skipTest('requires colors support')
curses.start_color()
curses.init_pair(2, 1,1)
curses.color_content(1)
@ -267,7 +267,7 @@ class TestCurses(unittest.TestCase):
def test_getmouse(self):
(availmask, oldmask) = curses.mousemask(curses.BUTTON1_PRESSED)
if availmask == 0:
self.skip('mouse stuff not available')
self.skipTest('mouse stuff not available')
curses.mouseinterval(10)
# just verify these don't cause errors
curses.ungetmouse(0, 0, 0, 0, curses.BUTTON1_PRESSED)

View File

@ -7,6 +7,7 @@ import pickle
from random import choice
import sys
from test import support
import time
import unittest
from weakref import proxy
try:
@ -1364,6 +1365,20 @@ class TestLRU:
pause.reset()
self.assertEqual(f.cache_info(), (0, (i+1)*n, m*n, i+1))
@unittest.skipUnless(threading, 'This test requires threading.')
def test_lru_cache_threaded3(self):
@self.module.lru_cache(maxsize=2)
def f(x):
time.sleep(.01)
return 3 * x
def test(i, x):
with self.subTest(thread=i):
self.assertEqual(f(x), 3 * x, i)
threads = [threading.Thread(target=test, args=(i, v))
for i, v in enumerate([1, 2, 2, 3, 2])]
with support.start_threads(threads):
pass
def test_need_for_rlock(self):
# This will deadlock on an LRU cache that uses a regular lock

View File

@ -477,7 +477,7 @@ class NewIMAPTests(NewIMAPTestsMixin, unittest.TestCase):
@unittest.skipUnless(ssl, "SSL not available")
class NewIMAPSSLTests(NewIMAPTestsMixin, unittest.TestCase):
imap_class = imaplib.IMAP4_SSL
imap_class = IMAP4_SSL
server_class = SecureTCPServer
def test_ssl_raises(self):

View File

@ -308,6 +308,14 @@ class BuiltinLevelsTest(BaseTest):
self.assertEqual(logging.getLevelName('INFO'), logging.INFO)
self.assertEqual(logging.getLevelName(logging.INFO), 'INFO')
def test_regression_29220(self):
"""See issue #29220 for more information."""
logging.addLevelName(logging.INFO, '')
self.addCleanup(logging.addLevelName, logging.INFO, 'INFO')
self.assertEqual(logging.getLevelName(logging.INFO), '')
self.assertEqual(logging.getLevelName(logging.NOTSET), 'NOTSET')
self.assertEqual(logging.getLevelName('NOTSET'), logging.NOTSET)
class BasicFilterTest(BaseTest):
"""Test the bundled Filter class."""

View File

@ -59,9 +59,6 @@ class PowTest(unittest.TestCase):
def test_powint(self):
self.powtest(int)
def test_powlong(self):
self.powtest(int)
def test_powfloat(self):
self.powtest(float)

View File

@ -4719,14 +4719,10 @@ def isTipcAvailable():
return False
try:
f = open("/proc/modules")
except IOError as e:
except (FileNotFoundError, IsADirectoryError, PermissionError):
# It's ok if the file does not exist, is a directory or if we
# have not the permission to read it. In any other case it's a
# real error, so raise it again.
if e.errno in (errno.ENOENT, errno.EISDIR, errno.EACCES):
return False
else:
raise
# have not the permission to read it.
return False
with f:
for line in f:
if line.startswith("tipc "):

View File

@ -5,7 +5,8 @@ import subprocess
import shutil
from copy import copy
from test.support import (run_unittest, TESTFN, unlink, check_warnings,
from test.support import (run_unittest,
import_module, TESTFN, unlink, check_warnings,
captured_stdout, skip_unless_symlink, change_cwd)
import sysconfig
@ -387,7 +388,8 @@ class TestSysConfig(unittest.TestCase):
@unittest.skipUnless(sys.platform == 'linux', 'Linux-specific test')
def test_triplet_in_ext_suffix(self):
import ctypes, platform, re
ctypes = import_module('ctypes')
import platform, re
machine = platform.machine()
suffix = sysconfig.get_config_var('EXT_SUFFIX')
if re.match('(aarch64|arm|mips|ppc|powerpc|s390|sparc)', machine):

View File

@ -1572,6 +1572,9 @@ class CollectionsAbcTests(BaseTestCase):
def test_list(self):
self.assertIsSubclass(list, typing.List)
def test_deque(self):
self.assertIsSubclass(collections.deque, typing.Deque)
def test_set(self):
self.assertIsSubclass(set, typing.Set)
self.assertNotIsSubclass(frozenset, typing.Set)
@ -1642,6 +1645,14 @@ class CollectionsAbcTests(BaseTestCase):
self.assertIsSubclass(MyDefDict, collections.defaultdict)
self.assertNotIsSubclass(collections.defaultdict, MyDefDict)
def test_no_deque_instantiation(self):
with self.assertRaises(TypeError):
typing.Deque()
with self.assertRaises(TypeError):
typing.Deque[T]()
with self.assertRaises(TypeError):
typing.Deque[int]()
def test_no_set_instantiation(self):
with self.assertRaises(TypeError):
typing.Set()

View File

@ -464,6 +464,13 @@ class UnicodeTest(string_tests.CommonTest,
self.checkraises(TypeError, ' ', 'join', [1, 2, 3])
self.checkraises(TypeError, ' ', 'join', ['1', '2', 3])
@unittest.skipIf(sys.maxsize > 2**32,
'needs too much memory on a 64-bit platform')
def test_join_overflow(self):
size = int(sys.maxsize**0.5) + 1
seq = ('A' * size,) * size
self.assertRaises(OverflowError, ''.join, seq)
def test_replace(self):
string_tests.CommonTest.test_replace(self)

View File

@ -247,11 +247,12 @@ class ProxyTests(unittest.TestCase):
def test_proxy_bypass_environment_host_match(self):
bypass = urllib.request.proxy_bypass_environment
self.env.set('NO_PROXY',
'localhost, anotherdomain.com, newdomain.com:1234')
'localhost, anotherdomain.com, newdomain.com:1234, .d.o.t')
self.assertTrue(bypass('localhost'))
self.assertTrue(bypass('LocalHost')) # MixedCase
self.assertTrue(bypass('LOCALHOST')) # UPPERCASE
self.assertTrue(bypass('newdomain.com:1234'))
self.assertTrue(bypass('foo.d.o.t')) # issue 29142
self.assertTrue(bypass('anotherdomain.com:8888'))
self.assertTrue(bypass('www.newdomain.com:1234'))
self.assertFalse(bypass('prelocalhost'))
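A hedged sketch of the behaviour exercised by this test, using the same internal helper (``urllib.request.proxy_bypass_environment``): a ``no_proxy`` entry with a leading dot should match hosts under that suffix again:

import os
import urllib.request

os.environ['no_proxy'] = 'localhost, .d.o.t'
assert urllib.request.proxy_bypass_environment('foo.d.o.t')     # issue #29142
assert urllib.request.proxy_bypass_environment('localhost')
assert not urllib.request.proxy_bypass_environment('example.com')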

View File

@ -59,6 +59,7 @@ __all__ = [
'SupportsRound',
# Concrete collection types.
'Deque',
'Dict',
'DefaultDict',
'List',
@ -1771,6 +1772,15 @@ class List(list, MutableSequence[T], extra=list):
"use list() instead")
return _generic_new(list, cls, *args, **kwds)
class Deque(collections.deque, MutableSequence[T], extra=collections.deque):
__slots__ = ()
def __new__(cls, *args, **kwds):
if _geqv(cls, Deque):
raise TypeError("Type Deque cannot be instantiated; "
"use deque() instead")
return _generic_new(collections.deque, cls, *args, **kwds)
class Set(set, MutableSet[T], extra=set):

View File

@ -2450,6 +2450,7 @@ def proxy_bypass_environment(host, proxies=None):
no_proxy_list = [proxy.strip() for proxy in no_proxy.split(',')]
for name in no_proxy_list:
if name:
name = name.lstrip('.') # ignore leading dots
name = re.escape(name)
pattern = r'(.+\.)?%s$' % name
if (re.match(pattern, hostonly, re.I)

View File

@ -1226,7 +1226,7 @@ LIBSUBDIRS= tkinter tkinter/test tkinter/test/test_tkinter \
turtledemo \
multiprocessing multiprocessing/dummy \
unittest unittest/test unittest/test/testmock \
venv venv/scripts venv/scripts/posix \
venv venv/scripts venv/scripts/common venv/scripts/posix \
curses pydoc_data $(MACHDEPS)
libinstall: build_all $(srcdir)/Lib/$(PLATDIR) $(srcdir)/Modules/xxmodule.c
@for i in $(SCRIPTDIR) $(LIBDEST); \

View File

@ -13,6 +13,19 @@ Core and Builtins
Library
-------
- Issue #29011: Fix an important omission by adding Deque to the typing module.
- Issue #29219: Fixed infinite recursion in the repr of uninitialized
ctypes.CDLL instances.
- Issue #28969: Fixed race condition in C implementation of functools.lru_cache.
KeyError could be raised when cached function with full cache was
simultaneously called from different threads with the same uncached arguments.
- Issue #29142: In urllib.request, suffixes in no_proxy environment variable with
leading dots could match related hostnames again (e.g. .b.c matches a.b.c).
Patch by Milan Oberkirch.
What's New in Python 3.5.3?
===========================
@ -526,17 +539,17 @@ Library
- Issue #27972: Prohibit Tasks to await on themselves.
- Issue #26923: Fix asyncio.Gather to refuse being cancelled once all
- Issue #26923: Fix asyncio.Gather to refuse being cancelled once all
children are done.
Patch by Johannes Ebke.
- Issue #26796: Don't configure the number of workers for default
- Issue #26796: Don't configure the number of workers for default
threadpool executor.
Initial patch by Hans Lawrenz.
- Issue #28600: Optimize loop.call_soon().
- Issue #28613: Fix get_event_loop() return the current loop if
- Issue #28613: Fix get_event_loop() return the current loop if
called from coroutines/callbacks.
- Issue #28639: Fix inspect.isawaitable to always return bool
@ -551,7 +564,7 @@ Library
- Issue #24142: Reading a corrupt config file left the parser in an
invalid state. Original patch by Florian Höch.
- Issue #28990: Fix SSL hanging if connection is closed before handshake
- Issue #28990: Fix SSL hanging if connection is closed before handshake
completed.
(Patch by HoHo-Ho)

View File

@ -2169,7 +2169,7 @@ static PyTypeObject defdict_type = {
PyDoc_STRVAR(_count_elements_doc,
"_count_elements(mapping, iterable) -> None\n\
\n\
Count elements in the iterable, updating the mappping");
Count elements in the iterable, updating the mapping");
static PyObject *
_count_elements(PyObject *self, PyObject *args)

View File

@ -1119,12 +1119,12 @@ context_getattr(PyObject *self, PyObject *name)
PyObject *retval;
if (PyUnicode_Check(name)) {
if (_PyUnicode_EqualToASCIIString(name, "traps")) {
if (PyUnicode_CompareWithASCIIString(name, "traps") == 0) {
retval = ((PyDecContextObject *)self)->traps;
Py_INCREF(retval);
return retval;
}
if (_PyUnicode_EqualToASCIIString(name, "flags")) {
if (PyUnicode_CompareWithASCIIString(name, "flags") == 0) {
retval = ((PyDecContextObject *)self)->flags;
Py_INCREF(retval);
return retval;
@ -1144,10 +1144,10 @@ context_setattr(PyObject *self, PyObject *name, PyObject *value)
}
if (PyUnicode_Check(name)) {
if (_PyUnicode_EqualToASCIIString(name, "traps")) {
if (PyUnicode_CompareWithASCIIString(name, "traps") == 0) {
return context_settraps_dict(self, value);
}
if (_PyUnicode_EqualToASCIIString(name, "flags")) {
if (PyUnicode_CompareWithASCIIString(name, "flags") == 0) {
return context_setstatus_dict(self, value);
}
}
@ -2446,14 +2446,14 @@ dectuple_as_str(PyObject *dectuple)
tmp = PyTuple_GET_ITEM(dectuple, 2);
if (PyUnicode_Check(tmp)) {
/* special */
if (_PyUnicode_EqualToASCIIString(tmp, "F")) {
if (PyUnicode_CompareWithASCIIString(tmp, "F") == 0) {
strcat(sign_special, "Inf");
is_infinite = 1;
}
else if (_PyUnicode_EqualToASCIIString(tmp, "n")) {
else if (PyUnicode_CompareWithASCIIString(tmp, "n") == 0) {
strcat(sign_special, "NaN");
}
else if (_PyUnicode_EqualToASCIIString(tmp, "N")) {
else if (PyUnicode_CompareWithASCIIString(tmp, "N") == 0) {
strcat(sign_special, "sNaN");
}
else {

View File

@ -864,42 +864,56 @@ bounded_lru_cache_wrapper(lru_cache_object *self, PyObject *args, PyObject *kwds
}
if (self->full && self->root.next != &self->root) {
/* Use the oldest item to store the new key and result. */
PyObject *oldkey, *oldresult;
PyObject *oldkey, *oldresult, *popresult;
/* Extricate the oldest item. */
link = self->root.next;
lru_cache_extricate_link(link);
/* Remove it from the cache.
The cache dict holds one reference to the link,
and the linked list holds yet one reference to it. */
if (_PyDict_DelItem_KnownHash(self->cache, link->key,
link->hash) < 0) {
popresult = _PyDict_Pop_KnownHash((PyDictObject *)self->cache,
link->key, link->hash,
Py_None);
if (popresult == Py_None) {
/* Getting here means that this same key was added to the
cache while the lock was released. Since the link
update is already done, we need only return the
computed result and update the count of misses. */
Py_DECREF(popresult);
Py_DECREF(link);
Py_DECREF(key);
}
else if (popresult == NULL) {
lru_cache_append_link(self, link);
Py_DECREF(key);
Py_DECREF(result);
return NULL;
}
/* Keep a reference to the old key and old result to
prevent their ref counts from going to zero during the
update. That will prevent potentially arbitrary object
clean-up code (i.e. __del__) from running while we're
still adjusting the links. */
oldkey = link->key;
oldresult = link->result;
else {
Py_DECREF(popresult);
/* Keep a reference to the old key and old result to
prevent their ref counts from going to zero during the
update. That will prevent potentially arbitrary object
clean-up code (i.e. __del__) from running while we're
still adjusting the links. */
oldkey = link->key;
oldresult = link->result;
link->hash = hash;
link->key = key;
link->result = result;
if (_PyDict_SetItem_KnownHash(self->cache, key, (PyObject *)link,
hash) < 0) {
Py_DECREF(link);
link->hash = hash;
link->key = key;
link->result = result;
if (_PyDict_SetItem_KnownHash(self->cache, key, (PyObject *)link,
hash) < 0) {
Py_DECREF(link);
Py_DECREF(oldkey);
Py_DECREF(oldresult);
return NULL;
}
lru_cache_append_link(self, link);
Py_INCREF(result); /* for return */
Py_DECREF(oldkey);
Py_DECREF(oldresult);
return NULL;
}
lru_cache_append_link(self, link);
Py_INCREF(result); /* for return */
Py_DECREF(oldkey);
Py_DECREF(oldresult);
} else {
/* Put result in a new link at the front of the queue. */
link = (lru_list_elem *)PyObject_GC_New(lru_list_elem,

View File

@ -1012,7 +1012,7 @@ _io_TextIOWrapper___init___impl(textio *self, PyObject *buffer,
errors);
if (self->encoder == NULL)
goto error;
/* Get the normalized named of the codec */
/* Get the normalized name of the codec */
res = _PyObject_GetAttrId(codec_info, &PyId_name);
if (res == NULL) {
if (PyErr_ExceptionMatches(PyExc_AttributeError))

View File

@ -845,14 +845,16 @@ _parse_array_unicode(PyScannerObject *s, PyObject *pystr, Py_ssize_t idx, Py_ssi
int kind;
Py_ssize_t end_idx;
PyObject *val = NULL;
PyObject *rval = PyList_New(0);
PyObject *rval;
Py_ssize_t next_idx;
if (rval == NULL)
return NULL;
if (PyUnicode_READY(pystr) == -1)
return NULL;
rval = PyList_New(0);
if (rval == NULL)
return NULL;
str = PyUnicode_DATA(pystr);
kind = PyUnicode_KIND(pystr);
end_idx = PyUnicode_GET_LENGTH(pystr) - 1;
@ -1559,8 +1561,11 @@ encoder_listencode_obj(PyEncoderObject *s, _PyAccu *acc,
return -1;
}
if (Py_EnterRecursiveCall(" while encoding a JSON object"))
if (Py_EnterRecursiveCall(" while encoding a JSON object")) {
Py_DECREF(newobj);
Py_XDECREF(ident);
return -1;
}
rv = encoder_listencode_obj(s, acc, newobj, indent_level);
Py_LeaveRecursiveCall();
@ -1604,7 +1609,7 @@ encoder_listencode_dict(PyEncoderObject *s, _PyAccu *acc,
if (open_dict == NULL || close_dict == NULL || empty_dict == NULL)
return -1;
}
if (Py_SIZE(dct) == 0)
if (PyDict_Size(dct) == 0) /* Fast path */
return _PyAccu_Accumulate(acc, empty_dict);
if (s->markers != Py_None) {

View File

@ -1548,9 +1548,9 @@ memo_put(PicklerObject *self, PyObject *obj)
}
static PyObject *
get_dotted_path(PyObject *obj, PyObject *name) {
get_dotted_path(PyObject *obj, PyObject *name)
{
_Py_static_string(PyId_dot, ".");
_Py_static_string(PyId_locals, "<locals>");
PyObject *dotted_path;
Py_ssize_t i, n;
@ -1561,12 +1561,7 @@ get_dotted_path(PyObject *obj, PyObject *name) {
assert(n >= 1);
for (i = 0; i < n; i++) {
PyObject *subpath = PyList_GET_ITEM(dotted_path, i);
PyObject *result = PyUnicode_RichCompare(
subpath, _PyUnicode_FromId(&PyId_locals), Py_EQ);
int is_equal = (result == Py_True);
assert(PyBool_Check(result));
Py_DECREF(result);
if (is_equal) {
if (_PyUnicode_EqualToASCIIString(subpath, "<locals>")) {
if (obj == NULL)
PyErr_Format(PyExc_AttributeError,
"Can't pickle local object %R", name);
@ -3537,13 +3532,12 @@ save_reduce(PicklerObject *self, PyObject *args, PyObject *obj)
else if (PyUnicode_Check(name)) {
if (self->proto >= 4) {
_Py_IDENTIFIER(__newobj_ex__);
use_newobj_ex = PyUnicode_Compare(
name, _PyUnicode_FromId(&PyId___newobj_ex__)) == 0;
use_newobj_ex = _PyUnicode_EqualToASCIIId(
name, &PyId___newobj_ex__);
}
if (!use_newobj_ex) {
_Py_IDENTIFIER(__newobj__);
use_newobj = PyUnicode_Compare(
name, _PyUnicode_FromId(&PyId___newobj__)) == 0;
use_newobj = _PyUnicode_EqualToASCIIId(name, &PyId___newobj__);
}
}
Py_XDECREF(name);

View File

@ -259,7 +259,7 @@ sed -e 's/[ ]*#.*//' -e '/^[ ]*$/d' |
for mod in $MODS
do
EXTDECLS="${EXTDECLS}extern PyObject* PyInit_$mod(void);$NL"
INITBITS="${INITBITS} {\"$mod\", PyInit_$mod},$NL"
INITBITS="${INITBITS} {\"$mod\", PyInit_$mod},$NL"
done

View File

@ -1475,9 +1475,8 @@ _PyDict_Next(PyObject *op, Py_ssize_t *ppos, PyObject **pkey,
/* Internal version of dict.pop(). */
PyObject *
_PyDict_Pop(PyDictObject *mp, PyObject *key, PyObject *deflt)
_PyDict_Pop_KnownHash(PyDictObject *mp, PyObject *key, Py_hash_t hash, PyObject *deflt)
{
Py_hash_t hash;
PyObject *old_value, *old_key;
PyDictKeyEntry *ep;
PyObject **value_addr;
@ -1490,12 +1489,6 @@ _PyDict_Pop(PyDictObject *mp, PyObject *key, PyObject *deflt)
_PyErr_SetKeyError(key);
return NULL;
}
if (!PyUnicode_CheckExact(key) ||
(hash = ((PyASCIIObject *) key)->hash) == -1) {
hash = PyObject_Hash(key);
if (hash == -1)
return NULL;
}
ep = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr);
if (ep == NULL)
return NULL;
@ -1520,6 +1513,28 @@ _PyDict_Pop(PyDictObject *mp, PyObject *key, PyObject *deflt)
return old_value;
}
PyObject *
_PyDict_Pop(PyDictObject *mp, PyObject *key, PyObject *deflt)
{
Py_hash_t hash;
if (mp->ma_used == 0) {
if (deflt) {
Py_INCREF(deflt);
return deflt;
}
_PyErr_SetKeyError(key);
return NULL;
}
if (!PyUnicode_CheckExact(key) ||
(hash = ((PyASCIIObject *) key)->hash) == -1) {
hash = PyObject_Hash(key);
if (hash == -1)
return NULL;
}
return _PyDict_Pop_KnownHash(mp, key, hash, deflt);
}
/* Internal version of dict.from_keys(). It is subclass-friendly. */
PyObject *
_PyDict_FromKeys(PyObject *cls, PyObject *iterable, PyObject *value)

View File

@ -2237,7 +2237,7 @@ odictvalues_new(PyObject *od)
/* ----------------------------------------------
MutableMappping implementations
MutableMapping implementations
Mapping:

View File

@ -9752,7 +9752,7 @@ PyUnicode_Join(PyObject *separator, PyObject *seq)
use_memcpy = 1;
#endif
for (i = 0; i < seqlen; i++) {
const Py_ssize_t old_sz = sz;
size_t add_sz;
item = items[i];
if (!PyUnicode_Check(item)) {
PyErr_Format(PyExc_TypeError,
@ -9763,16 +9763,18 @@ PyUnicode_Join(PyObject *separator, PyObject *seq)
}
if (PyUnicode_READY(item) == -1)
goto onError;
sz += PyUnicode_GET_LENGTH(item);
add_sz = PyUnicode_GET_LENGTH(item);
item_maxchar = PyUnicode_MAX_CHAR_VALUE(item);
maxchar = Py_MAX(maxchar, item_maxchar);
if (i != 0)
sz += seplen;
if (sz < old_sz || sz > PY_SSIZE_T_MAX) {
if (i != 0) {
add_sz += seplen;
}
if (add_sz > (size_t)(PY_SSIZE_T_MAX - sz)) {
PyErr_SetString(PyExc_OverflowError,
"join() result is too long for a Python string");
goto onError;
}
sz += add_sz;
if (use_memcpy && last_obj != NULL) {
if (PyUnicode_KIND(last_obj) != PyUnicode_KIND(item))
use_memcpy = 0;
@ -10418,7 +10420,7 @@ replace(PyObject *self, PyObject *str1,
u = unicode_empty;
goto done;
}
if (new_size > (PY_SSIZE_T_MAX >> (rkind-1))) {
if (new_size > (PY_SSIZE_T_MAX / rkind)) {
PyErr_SetString(PyExc_OverflowError,
"replace string is too long");
goto error;

View File

@ -1,6 +1,9 @@
#include "Python.h"
#ifdef MS_WINDOWS
# include <windows.h>
/* All sample MSDN wincrypt programs include the header below. It is at least
* required with MinGW. */
# include <wincrypt.h>
#else
# include <fcntl.h>
# ifdef HAVE_SYS_STAT_H
@ -37,10 +40,9 @@ win32_urandom_init(int raise)
return 0;
error:
if (raise)
if (raise) {
PyErr_SetFromWindowsErr(0);
else
Py_FatalError("Failed to initialize Windows random API (CryptoGen)");
}
return -1;
}
@ -53,8 +55,9 @@ win32_urandom(unsigned char *buffer, Py_ssize_t size, int raise)
if (hCryptProv == 0)
{
if (win32_urandom_init(raise) == -1)
if (win32_urandom_init(raise) == -1) {
return -1;
}
}
while (size > 0)
@ -63,11 +66,9 @@ win32_urandom(unsigned char *buffer, Py_ssize_t size, int raise)
if (!CryptGenRandom(hCryptProv, (DWORD)chunk, buffer))
{
/* CryptGenRandom() failed */
if (raise)
if (raise) {
PyErr_SetFromWindowsErr(0);
else
Py_FatalError("Failed to initialized the randomized hash "
"secret using CryptoGen)");
}
return -1;
}
buffer += chunk;
@ -76,58 +77,23 @@ win32_urandom(unsigned char *buffer, Py_ssize_t size, int raise)
return 0;
}
/* Issue #25003: Don't use getentropy() on Solaris (available since
* Solaris 11.3), it is blocking whereas os.urandom() should not block. */
#elif defined(HAVE_GETENTROPY) && !defined(sun)
#define PY_GETENTROPY 1
/* Fill buffer with size pseudo-random bytes generated by getentropy().
Return 0 on success, or raise an exception and return -1 on error.
If fatal is nonzero, call Py_FatalError() instead of raising an exception
on error. */
static int
py_getentropy(unsigned char *buffer, Py_ssize_t size, int fatal)
{
while (size > 0) {
Py_ssize_t len = Py_MIN(size, 256);
int res;
if (!fatal) {
Py_BEGIN_ALLOW_THREADS
res = getentropy(buffer, len);
Py_END_ALLOW_THREADS
if (res < 0) {
PyErr_SetFromErrno(PyExc_OSError);
return -1;
}
}
else {
res = getentropy(buffer, len);
if (res < 0)
Py_FatalError("getentropy() failed");
}
buffer += len;
size -= len;
}
return 0;
}
#else
#else /* !MS_WINDOWS */
#if defined(HAVE_GETRANDOM) || defined(HAVE_GETRANDOM_SYSCALL)
#define PY_GETRANDOM 1
/* Call getrandom()
/* Call getrandom() to get random bytes:
- Return 1 on success
- Return 0 if getrandom() syscall is not available (failed with ENOSYS or
EPERM) or if getrandom(GRND_NONBLOCK) failed with EAGAIN (system urandom
not initialized yet) and raise=0.
- Return 0 if getrandom() is not available (failed with ENOSYS or EPERM),
or if getrandom(GRND_NONBLOCK) failed with EAGAIN (system urandom not
initialized yet).
- Raise an exception (if raise is non-zero) and return -1 on error:
getrandom() failed with EINTR and the Python signal handler raised an
exception, or getrandom() failed with a different error. */
if getrandom() failed with EINTR, raise is non-zero and the Python signal
handler raised an exception, or if getrandom() failed with a different
error.
getrandom() is retried if it failed with EINTR: interrupted by a signal. */
static int
py_getrandom(void *buffer, Py_ssize_t size, int raise)
{
@ -142,16 +108,19 @@ py_getrandom(void *buffer, Py_ssize_t size, int raise)
* see https://bugs.python.org/issue26839. To avoid this, use the
* GRND_NONBLOCK flag. */
const int flags = GRND_NONBLOCK;
char *dest;
long n;
if (!getrandom_works) {
return 0;
}
dest = buffer;
while (0 < size) {
#ifdef sun
/* Issue #26735: On Solaris, getrandom() is limited to returning up
to 1024 bytes */
to 1024 bytes. Call it multiple times if more bytes are
requested. */
n = Py_MIN(size, 1024);
#else
n = Py_MIN(size, LONG_MAX);
@ -161,34 +130,35 @@ py_getrandom(void *buffer, Py_ssize_t size, int raise)
#ifdef HAVE_GETRANDOM
if (raise) {
Py_BEGIN_ALLOW_THREADS
n = getrandom(buffer, n, flags);
n = getrandom(dest, n, flags);
Py_END_ALLOW_THREADS
}
else {
n = getrandom(buffer, n, flags);
n = getrandom(dest, n, flags);
}
#else
/* On Linux, use the syscall() function because the GNU libc doesn't
* expose the Linux getrandom() syscall yet. See:
* https://sourceware.org/bugzilla/show_bug.cgi?id=17252 */
expose the Linux getrandom() syscall yet. See:
https://sourceware.org/bugzilla/show_bug.cgi?id=17252 */
if (raise) {
Py_BEGIN_ALLOW_THREADS
n = syscall(SYS_getrandom, buffer, n, flags);
n = syscall(SYS_getrandom, dest, n, flags);
Py_END_ALLOW_THREADS
}
else {
n = syscall(SYS_getrandom, buffer, n, flags);
n = syscall(SYS_getrandom, dest, n, flags);
}
#endif
if (n < 0) {
/* ENOSYS: getrandom() syscall not supported by the kernel (but
* maybe supported by the host which built Python). EPERM:
* getrandom() syscall blocked by SECCOMP or something else. */
/* ENOSYS: the syscall is not supported by the kernel.
EPERM: the syscall is blocked by a security policy (ex: SECCOMP)
or something else. */
if (errno == ENOSYS || errno == EPERM) {
getrandom_works = 0;
return 0;
}
if (errno == EAGAIN) {
/* getrandom(GRND_NONBLOCK) fails with EAGAIN if the system
urandom is not initialized yet. In this case, fall back on
@ -202,32 +172,101 @@ py_getrandom(void *buffer, Py_ssize_t size, int raise)
}
if (errno == EINTR) {
if (PyErr_CheckSignals()) {
if (!raise) {
Py_FatalError("getrandom() interrupted by a signal");
if (raise) {
if (PyErr_CheckSignals()) {
return -1;
}
return -1;
}
/* retry getrandom() */
/* retry getrandom() if it was interrupted by a signal */
continue;
}
if (raise) {
PyErr_SetFromErrno(PyExc_OSError);
}
else {
Py_FatalError("getrandom() failed");
}
return -1;
}
buffer += n;
dest += n;
size -= n;
}
return 1;
}
#endif
#elif defined(HAVE_GETENTROPY)
#define PY_GETENTROPY 1
/* Fill buffer with size pseudo-random bytes generated by getentropy():
- Return 1 on success
- Return 0 if getentropy() syscall is not available (failed with ENOSYS or
EPERM).
- Raise an exception (if raise is non-zero) and return -1 on error:
if getentropy() failed with EINTR, raise is non-zero and the Python signal
handler raised an exception, or if getentropy() failed with a different
error.
getentropy() is retried if it failed with EINTR: interrupted by a signal. */
static int
py_getentropy(char *buffer, Py_ssize_t size, int raise)
{
/* Is getentropy() supported by the running kernel? Set to 0 if
getentropy() failed with ENOSYS or EPERM. */
static int getentropy_works = 1;
if (!getentropy_works) {
return 0;
}
while (size > 0) {
/* getentropy() is limited to returning up to 256 bytes. Call it
multiple times if more bytes are requested. */
Py_ssize_t len = Py_MIN(size, 256);
int res;
if (raise) {
Py_BEGIN_ALLOW_THREADS
res = getentropy(buffer, len);
Py_END_ALLOW_THREADS
}
else {
res = getentropy(buffer, len);
}
if (res < 0) {
/* ENOSYS: the syscall is not supported by the running kernel.
EPERM: the syscall is blocked by a security policy (ex: SECCOMP)
or something else. */
if (errno == ENOSYS || errno == EPERM) {
getentropy_works = 0;
return 0;
}
if (errno == EINTR) {
if (raise) {
if (PyErr_CheckSignals()) {
return -1;
}
}
/* retry getentropy() if it was interrupted by a signal */
continue;
}
if (raise) {
PyErr_SetFromErrno(PyExc_OSError);
}
return -1;
}
buffer += len;
size -= len;
}
return 1;
}
#endif /* defined(HAVE_GETENTROPY) && !defined(sun) */
static struct {
int fd;
@ -235,136 +274,123 @@ static struct {
ino_t st_ino;
} urandom_cache = { -1 };
/* Read random bytes from the /dev/urandom device:
/* Read 'size' random bytes from py_getrandom(). Fall back on reading from
/dev/urandom if getrandom() is not available.
- Return 0 on success
- Raise an exception (if raise is non-zero) and return -1 on error
Call Py_FatalError() on error. */
static void
dev_urandom_noraise(unsigned char *buffer, Py_ssize_t size)
{
int fd;
Py_ssize_t n;
Possible causes of errors:
assert (0 < size);
- open() failed with ENOENT, ENXIO, ENODEV, EACCES: the /dev/urandom device
was not found. For example, it was removed manually or not exposed in a
chroot or container.
- open() failed with a different error
- fstat() failed
- read() failed or returned 0
#ifdef PY_GETRANDOM
if (py_getrandom(buffer, size, 0) == 1) {
return;
}
/* getrandom() failed with ENOSYS or EPERM,
fall back on reading /dev/urandom */
#endif
read() is retried if it failed with EINTR: interrupted by a signal.
fd = _Py_open_noraise("/dev/urandom", O_RDONLY);
if (fd < 0) {
Py_FatalError("Failed to open /dev/urandom");
}
The file descriptor of the device is kept open between calls to avoid using
many file descriptors when run in parallel from multiple threads:
see the issue #18756.
while (0 < size)
{
do {
n = read(fd, buffer, (size_t)size);
} while (n < 0 && errno == EINTR);
st_dev and st_ino fields of the file descriptor (from fstat()) are cached to
check if the file descriptor was replaced by a different file (which is
likely a bug in the application): see the issue #21207.
if (n <= 0) {
/* read() failed or returned 0 bytes */
Py_FatalError("Failed to read bytes from /dev/urandom");
break;
}
buffer += n;
size -= n;
}
close(fd);
}
/* Read 'size' random bytes from py_getrandom(). Fall back on reading from
/dev/urandom if getrandom() is not available.
Return 0 on success. Raise an exception and return -1 on error. */
If the file descriptor was closed or replaced, open a new file descriptor
but don't close the old file descriptor: it probably points to something
important for some third-party code. */
static int
dev_urandom_python(char *buffer, Py_ssize_t size)
dev_urandom(char *buffer, Py_ssize_t size, int raise)
{
int fd;
Py_ssize_t n;
struct _Py_stat_struct st;
#ifdef PY_GETRANDOM
int res;
#endif
if (size <= 0)
return 0;
if (raise) {
struct _Py_stat_struct st;
#ifdef PY_GETRANDOM
res = py_getrandom(buffer, size, 1);
if (res < 0) {
return -1;
}
if (res == 1) {
return 0;
}
/* getrandom() failed with ENOSYS or EPERM,
fall back on reading /dev/urandom */
#endif
if (urandom_cache.fd >= 0) {
/* Does the fd point to the same thing as before? (issue #21207) */
if (_Py_fstat_noraise(urandom_cache.fd, &st)
|| st.st_dev != urandom_cache.st_dev
|| st.st_ino != urandom_cache.st_ino) {
/* Something changed: forget the cached fd (but don't close it,
since it probably points to something important for some
third-party code). */
urandom_cache.fd = -1;
if (urandom_cache.fd >= 0) {
/* Does the fd point to the same thing as before? (issue #21207) */
if (_Py_fstat_noraise(urandom_cache.fd, &st)
|| st.st_dev != urandom_cache.st_dev
|| st.st_ino != urandom_cache.st_ino) {
/* Something changed: forget the cached fd (but don't close it,
since it probably points to something important for some
third-party code). */
urandom_cache.fd = -1;
}
}
if (urandom_cache.fd >= 0)
fd = urandom_cache.fd;
else {
fd = _Py_open("/dev/urandom", O_RDONLY);
if (fd < 0) {
if (errno == ENOENT || errno == ENXIO ||
errno == ENODEV || errno == EACCES) {
PyErr_SetString(PyExc_NotImplementedError,
"/dev/urandom (or equivalent) not found");
}
/* otherwise, keep the OSError exception raised by _Py_open() */
return -1;
}
if (urandom_cache.fd >= 0) {
/* urandom_fd was initialized by another thread while we were
not holding the GIL, keep it. */
close(fd);
fd = urandom_cache.fd;
}
else {
if (_Py_fstat(fd, &st)) {
close(fd);
return -1;
}
else {
urandom_cache.fd = fd;
urandom_cache.st_dev = st.st_dev;
urandom_cache.st_ino = st.st_ino;
}
}
}
do {
n = _Py_read(fd, buffer, (size_t)size);
if (n == -1)
return -1;
if (n == 0) {
PyErr_Format(PyExc_RuntimeError,
"Failed to read %zi bytes from /dev/urandom",
size);
return -1;
}
buffer += n;
size -= n;
} while (0 < size);
}
if (urandom_cache.fd >= 0)
fd = urandom_cache.fd;
else {
fd = _Py_open("/dev/urandom", O_RDONLY);
fd = _Py_open_noraise("/dev/urandom", O_RDONLY);
if (fd < 0) {
if (errno == ENOENT || errno == ENXIO ||
errno == ENODEV || errno == EACCES)
PyErr_SetString(PyExc_NotImplementedError,
"/dev/urandom (or equivalent) not found");
/* otherwise, keep the OSError exception raised by _Py_open() */
return -1;
}
if (urandom_cache.fd >= 0) {
/* urandom_fd was initialized by another thread while we were
not holding the GIL, keep it. */
close(fd);
fd = urandom_cache.fd;
}
else {
if (_Py_fstat(fd, &st)) {
while (0 < size)
{
do {
n = read(fd, buffer, (size_t)size);
} while (n < 0 && errno == EINTR);
if (n <= 0) {
/* stop on error or if read(size) returned 0 */
close(fd);
return -1;
}
else {
urandom_cache.fd = fd;
urandom_cache.st_dev = st.st_dev;
urandom_cache.st_ino = st.st_ino;
}
buffer += n;
size -= n;
}
close(fd);
}
do {
n = _Py_read(fd, buffer, (size_t)size);
if (n == -1) {
return -1;
}
if (n == 0) {
PyErr_Format(PyExc_RuntimeError,
"Failed to read %zi bytes from /dev/urandom",
size);
return -1;
}
buffer += n;
size -= n;
} while (0 < size);
return 0;
}
@ -376,8 +402,8 @@ dev_urandom_close(void)
urandom_cache.fd = -1;
}
}
#endif /* !MS_WINDOWS */
#endif
/* Fill buffer with pseudo-random bytes generated by a linear congruent
generator (LCG):
@ -400,29 +426,98 @@ lcg_urandom(unsigned int x0, unsigned char *buffer, size_t size)
}
}
/* Read random bytes:
- Return 0 on success
- Raise an exception (if raise is non-zero) and return -1 on error
Used sources of entropy ordered by preference, preferred source first:
- CryptGenRandom() on Windows
- getrandom() function (ex: Linux and Solaris): call py_getrandom()
- getentropy() function (ex: OpenBSD): call py_getentropy()
- /dev/urandom device
Read from the /dev/urandom device if getrandom() or getentropy() function
is not available or does not work.
Prefer getrandom() over getentropy() because getrandom() supports blocking
and non-blocking mode and Python requires non-blocking RNG at startup to
initialize its hash secret: see the PEP 524.
Prefer getrandom() and getentropy() over reading directly /dev/urandom
because these functions don't need file descriptors and so avoid ENFILE or
EMFILE errors (too many open files): see the issue #18756.
Only use RNG running in the kernel. They are more secure because it is
harder to get the internal state of a RNG running in the kernel land than a
RNG running in the user land. The kernel has a direct access to the hardware
and has access to hardware RNG, they are used as entropy sources.
Note: the OpenSSL RAND_pseudo_bytes() function does not automatically reseed
its RNG on fork(), two child processes (with the same pid) generate the same
random numbers: see issue #18747. Kernel RNGs don't have this issue,
they have access to good quality entropy sources.
If raise is zero:
- Don't raise an exception on error
- Don't call the Python signal handler (don't call PyErr_CheckSignals()) if
a function fails with EINTR: retry directly the interrupted function
- Don't release the GIL to call functions.
*/
static int
pyurandom(void *buffer, Py_ssize_t size, int raise)
{
#if defined(PY_GETRANDOM) || defined(PY_GETENTROPY)
int res;
#endif
if (size < 0) {
if (raise) {
PyErr_Format(PyExc_ValueError,
"negative argument not allowed");
}
return -1;
}
if (size == 0) {
return 0;
}
#ifdef MS_WINDOWS
return win32_urandom((unsigned char *)buffer, size, raise);
#else
#if defined(PY_GETRANDOM) || defined(PY_GETENTROPY)
#ifdef PY_GETRANDOM
res = py_getrandom(buffer, size, raise);
#else
res = py_getentropy(buffer, size, raise);
#endif
if (res < 0) {
return -1;
}
if (res == 1) {
return 0;
}
/* getrandom() or getentropy() function is not available: failed with
ENOSYS, EPERM or EAGAIN. Fall back on reading from /dev/urandom. */
#endif
return dev_urandom(buffer, size, raise);
#endif
}
/* Fill buffer with size pseudo-random bytes from the operating system random
number generator (RNG). It is suitable for most cryptographic purposes
except long living private keys for asymmetric encryption.
Return 0 on success, raise an exception and return -1 on error. */
Return 0 on success. Raise an exception and return -1 on error. */
int
_PyOS_URandom(void *buffer, Py_ssize_t size)
{
if (size < 0) {
PyErr_Format(PyExc_ValueError,
"negative argument not allowed");
return -1;
}
if (size == 0)
return 0;
#ifdef MS_WINDOWS
return win32_urandom((unsigned char *)buffer, size, 1);
#elif defined(PY_GETENTROPY)
return py_getentropy(buffer, size, 0);
#else
return dev_urandom_python((char*)buffer, size);
#endif
return pyurandom(buffer, size, 1);
}
void
@ -463,13 +558,14 @@ _PyRandom_Init(void)
}
}
else {
#ifdef MS_WINDOWS
(void)win32_urandom(secret, secret_size, 0);
#elif defined(PY_GETENTROPY)
(void)py_getentropy(secret, secret_size, 1);
#else
dev_urandom_noraise(secret, secret_size);
#endif
int res;
/* _PyRandom_Init() is called very early in the Python initialization
and so exceptions cannot be used (use raise=0). */
res = pyurandom(secret, secret_size, 0);
if (res < 0) {
Py_FatalError("failed to get random numbers to initialize Python");
}
}
}
@ -481,8 +577,6 @@ _PyRandom_Fini(void)
CryptReleaseContext(hCryptProv, 0);
hCryptProv = 0;
}
#elif defined(PY_GETENTROPY)
/* nothing to clean */
#else
dev_urandom_close();
#endif