requested name doesn't exist in globals: clear the KeyError exception before
calling PyObject_GetItem(). Also fail if the raised exception is not a
KeyError.
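A minimal C sketch of this pattern (the helper name and the builtins fallback
are assumptions for illustration, not the actual CPython code): only a
KeyError from the first lookup is swallowed; any other exception propagates.

    #include <Python.h>

    /* Hypothetical helper: look up `name` in `globals`, falling back to
       `builtins` via the mapping protocol. */
    static PyObject *lookup_name(PyObject *globals, PyObject *builtins,
                                 PyObject *name)
    {
        PyObject *v = PyObject_GetItem(globals, name);
        if (v == NULL) {
            if (!PyErr_ExceptionMatches(PyExc_KeyError)) {
                return NULL;        /* non-KeyError: propagate */
            }
            PyErr_Clear();          /* clear the KeyError before retrying */
            v = PyObject_GetItem(builtins, name);   /* may raise again */
        }
        return v;
    }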
This changes the main documentation, docstrings, source code comments, and a
couple of error messages in the test suite. In some cases the word was removed
or edited in some other way to fix the grammar.
Issue #25274: sys.setrecursionlimit() now raises a RecursionError if the new
recursion limit is too low for the current recursion depth. Also modify the
"lower-water mark" formula to make it monotonic. This mark is used to decide
when the overflowed flag of the thread state is reset.
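A sketch of a monotonic lower-water mark formula consistent with the
description above (the macro name is illustrative, not necessarily the one
added by the commit): below a limit of 200 the mark scales with the limit;
above 200 it stays a fixed 50 frames under it, so raising the limit never
lowers the mark.

    /* Illustrative formula: monotonic in `limit`. */
    #define RECURSION_LOWER_WATER_MARK(limit) \
        (((limit) > 200) ? ((limit) - 50) : (3 * ((limit) >> 2)))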
function instead of the getentropy() function. The getentropy() function
blocks to generate very high-quality entropy; os.urandom() doesn't need such
high-quality entropy.
On the x86 OpenBSD 5.8 buildbot, the integer overflow check is ignored. Copy
the tv_sec variable into a _PyTime_t variable instead of "simply" casting it
to _PyTime_t, to fix the integer overflow check.
On Windows, the tv_sec field of the timeval structure has the C type long,
whereas it has the C type time_t on all other platforms. A C long has a size
of 32 bits (signed integer: 1 bit for the sign, 31 bits for the value), which
is not enough to store an Epoch timestamp after the year 2038.
Add the _PyTime_AsTimevalTime_t() function written for datetime.datetime.now():
convert a _PyTime_t timestamp to a (secs, us) tuple where the type of secs is
time_t. This makes it possible to support dates after the year 2038 on Windows.
Also enhance _PyTime_AsTimeval_impl() to detect overflow on the number of
seconds when rounding the number of microseconds.
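A minimal sketch of such a conversion (not the actual CPython implementation;
the helper name and the truncating rounding are assumptions): split a
nanosecond timestamp into a time_t number of seconds and a microsecond
remainder in [0; 999999].

    #include <stdint.h>
    #include <time.h>

    typedef int64_t pytime_t;          /* stand-in for the private _PyTime_t */

    static void as_timeval_time_t(pytime_t ns, time_t *secs, int *us)
    {
        pytime_t s = ns / 1000000000;
        pytime_t rem = ns % 1000000000;
        if (rem < 0) {                 /* C division truncates toward zero */
            s -= 1;
            rem += 1000000000;         /* normalize: 0 <= rem < 1e9 */
        }
        *secs = (time_t)s;             /* a 64-bit time_t has no 2038 limit */
        *us = (int)(rem / 1000);       /* truncate nanoseconds to microseconds */
    }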
The overflow test in test_FromSecondsObject() fails on the FreeBSD 10.0
buildbot, which uses clang. clang implements more aggressive optimizations
than GCC, which can give different results for undefined behaviour.
Check whether a multiplication will overflow, instead of checking whether a
multiplication has already overflowed, to avoid undefined behaviour.
Also add debug information if the overflow test fails.
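A short sketch of the pattern (names are illustrative): because signed
overflow is undefined behaviour in C, the bounds are tested before
multiplying rather than inspecting the product afterwards.

    #include <stdint.h>

    #define SEC_TO_NS 1000000000

    /* Return -1 if secs * SEC_TO_NS would overflow a 64-bit integer,
       otherwise store the product and return 0. */
    static int mul_sec_to_ns(int64_t secs, int64_t *result)
    {
        if (secs > INT64_MAX / SEC_TO_NS || secs < INT64_MIN / SEC_TO_NS) {
            return -1;                 /* would overflow: don't multiply */
        }
        *result = secs * SEC_TO_NS;
        return 0;
    }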
* Filter values which would overflow on conversion to the C long type
(for timeval.tv_sec).
* Also adjust the message of OverflowError on PyTime conversions
* test_time: add debug information if a timestamp conversion fails
Drop all hardcoded tests. Instead, reimplement each function in Python, usually
using decimal.Decimal for the rounding mode.
Add many more values to the dataset. Test various timestamp units from
picoseconds to seconds, as integers and floats.
Also enhance _PyTime_AsSecondsDouble().
datetime.datetime now rounds microseconds to nearest with ties going to the
nearest even integer (ROUND_HALF_EVEN), like round(float), instead of rounding
towards -Infinity (ROUND_FLOOR).
pytime API: replace _PyTime_ROUND_HALF_UP with _PyTime_ROUND_HALF_EVEN. Also
fix _PyTime_Divide() for negative numbers.
_PyTime_AsTimeval_impl() now reuses _PyTime_Divide() instead of reimplementing
rounding modes.
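A worked example of ties-to-even rounding, as a self-contained C sketch (an
illustration of the rounding mode, not the pytime code): round a nanosecond
value to microseconds, sending exact halves to the even neighbour.

    #include <stdio.h>
    #include <stdint.h>

    static int64_t ns_to_us_half_even(int64_t ns)
    {
        int64_t q = ns / 1000, r = ns % 1000;
        if (r < 0) {                   /* normalize truncated division */
            q -= 1;
            r += 1000;
        }
        if (r > 500 || (r == 500 && (q & 1))) {
            q += 1;                    /* above halfway, or a tie with odd q */
        }
        return q;
    }

    int main(void)
    {
        printf("%lld\n", (long long)ns_to_us_half_even(1500)); /* 2: tie, 1 odd */
        printf("%lld\n", (long long)ns_to_us_half_even(2500)); /* 2: tie, 2 even */
        return 0;
    }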
Issue #24891: Fix a race condition at Python startup if the file descriptor
of stdin (0), stdout (1) or stderr (2) is closed while Python is creating
sys.stdin, sys.stdout and sys.stderr objects. These attributes are now set
to None if the creation of the object failed, instead of raising an OSError
exception. Initial patch written by Marco Paolini.
Don't check at runtime anymore that the monotonic clock doesn't go backward.
Yes, it happens: it occurs a few times each month on a Debian buildbot slave
running in a VM.
The problem is that Python cannot do anything useful if a monotonic clock goes
backward. PEP 418 decided not to fix the system, but only to expose the clock
provided by the OS.
with ties going away from zero (ROUND_HALF_UP), as in Python 2 and in Python
older than 3.3, instead of rounding to nearest with ties going to the nearest
even integer (ROUND_HALF_EVEN).
See the latest version of the getrandom() manual page:
http://man7.org/linux/man-pages/man2/getrandom.2.html#NOTES
The behavior when a call to getrandom() that is blocked while reading from
/dev/urandom is interrupted by a signal handler depends on the
initialization state of the entropy buffer and on the request size, buflen.
If the entropy is not yet initialized, then the call will fail with the
EINTR error. If the entropy pool has been initialized and the request size
is large (buflen > 256), the call either succeeds, returning a partially
filled buffer, or fails with the error EINTR. If the entropy pool has been
initialized and the request size is small (buflen <= 256), then getrandom()
will not fail with EINTR. Instead, it will return all of the bytes that
have been requested.
Note: py_getrandom() calls getrandom() with flags=0.
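A hedged sketch of handling the EINTR case described above (assuming Linux
with the glibc getrandom() wrapper from <sys/random.h>): restart the call
when it is interrupted by a signal.

    #include <errno.h>
    #include <sys/random.h>

    static ssize_t getrandom_retry(void *buf, size_t buflen)
    {
        ssize_t n;
        do {
            n = getrandom(buf, buflen, 0);   /* flags=0, as py_getrandom() does */
        } while (n < 0 && errno == EINTR);   /* interrupted: retry */
        return n;                            /* may still be a short read */
    }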
Summary of changes:
1. Coroutines now have a distinct type at the C level, separate
from generators: PyCoro_Type, and a new typedef PyCoroObject.
PyCoroObject shares the initial segment of struct layout with
PyGenObject, making it possible to reuse the existing generator
machinery. The new type is exposed as 'types.CoroutineType'.
As a consequence of having a new type, the CO_GENERATOR flag is
no longer applied to coroutines. (A C-level sketch of telling
the two types apart follows this list.)
2. Having a separate type for coroutines made it possible to add
an __await__ method to the type. Although it is not used by the
interpreter (see details on that below), it makes coroutines
naturally (without using __instancecheck__) conform to
collections.abc.Coroutine and collections.abc.Awaitable ABCs.
[The __instancecheck__ is still used for generator-based
coroutines, as we don't want to add __await__ for generators.]
3. Add new opcode: GET_YIELD_FROM_ITER. The opcode is needed to
allow passing native coroutines to the YIELD_FROM opcode.
Before this change, the 'yield from o' expression was compiled to:
(o)
GET_ITER
LOAD_CONST
YIELD_FROM
Now, we use GET_YIELD_FROM_ITER instead of GET_ITER.
The reason for adding a new opcode is that GET_ITER is used
in some contexts (such as 'for .. in' loops) where passing
a coroutine object is invalid.
4. Add two new introspection functions to the inspect module:
getcoroutinestate(c) and getcoroutinelocals(c).
5. inspect.iscoroutine(o) is updated to test if 'o' is a native
coroutine object. Before this commit it used abc.Coroutine,
and it was requested to update inspect.isgenerator(o) to use
abc.Generator; it was decided, however, that inspect functions
should really be tailored for checking for native types.
6. The sys.set_coroutine_wrapper(w) API is updated to work only
with native coroutines. Since the types.coroutine decorator now
supports any type of callable, it would be confusing that it
does not work for all types of coroutines.
7. Exceptions logic in generators C implementation was updated
to raise clearer messages for coroutines:
Before: TypeError("generator raised StopIteration")
After: TypeError("coroutine raised StopIteration")
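As referenced in item 1, a minimal sketch showing how C code can now
discriminate native coroutines from plain generators (an illustrative helper
built on the public check macros):

    #include <Python.h>

    static const char *kind_of(PyObject *op)
    {
        if (PyCoro_CheckExact(op)) {
            return "native coroutine (PyCoro_Type)";
        }
        if (PyGen_CheckExact(op)) {
            return "generator (PyGen_Type)";
        }
        return "something else";
    }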
Known limitations of the current implementation:
- documentation changes are incomplete
- there's a reference leak I haven't tracked down yet
The leak is most visible by running:
./python -m test -R3:3 test_importlib
However, you can also see it by running:
./python -X showrefcount
Importing the array or _testmultiphase modules, and
then deleting them from both sys.modules and the local
namespace shows significant increases in the total
number of active references each cycle. By contrast,
with _testcapi (which continues to use single-phase
initialisation) the global refcounts stabilise after
a couple of cycles.
* adds missing INCREF in WITH_CLEANUP_START
* adds missing DECREF in WITH_CLEANUP_FINISH
* adds several new tests Yury created while investigating this
CID 1291697 (#1 of 1): Dereference before null check (REVERSE_INULL)
check_after_deref: Null-checking tb suggests that it may be null, but it has already been dereferenced on all paths leading to the check.
The concept of .pyo files no longer exists. Now .pyc files have an
optional `opt-` tag which specifies whether any extra optimizations beyond
the peepholer were applied.
Also add a new _PyTime_AsMicroseconds() function.
threading.TIMEOUT_MAX is now smaller: only 292 years instead of 292,271
years on 64-bit systems, for example. Sorry, your threads will hang a *little
bit* shorter. Call me if you want to ensure that your locks wait longer, I can
share some tricks with you.
* _PyTime_AsTimeval() now ensures that tv_usec is always positive
* _PyTime_AsTimespec() now ensures that tv_nsec is always positive
* _PyTime_AsTimeval() now returns an integer on overflow instead of raising an
exception
* Rename _PyTime_FromObject() to _PyTime_FromSecondsObject()
* Add _PyTime_AsNanosecondsObject() and _testcapi.pytime_fromsecondsobject()
* Add unit tests
In practice, _PyTime_t is a number of nanoseconds. Its C type is a 64-bit
signed integer. Its value is in the range [-2^63; 2^63-1]. In seconds,
the range is around [-292 years; +292 years]. In terms of Epoch timestamps
(since 1970-01-01), it can store a date between 1677-09-21 and 2262-04-11.
The API has a resolution of 1 nanosecond and uses integers. With a resolution
of 1 nanosecond, 64-bit IEEE 754 floating point numbers lose precision after
194 days. That's not the case with this API. The drawback is overflow for
values outside [-2^63; 2^63-1], but such values are unlikely for most Python
modules, except for the datetime module.
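A quick back-of-the-envelope check of the 292-year figure quoted above (a
standalone C snippet, not part of the API):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* 2^63-1 nanoseconds, converted to Julian years of 365.25 days */
        double years = (double)INT64_MAX / 1e9 / 86400.0 / 365.25;
        printf("+/- %.1f years\n", years);   /* prints about 292.3 */
        return 0;
    }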
New functions:
- _PyTime_GetMonotonicClock()
- _PyTime_FromObject()
- _PyTime_AsMilliseconds()
- _PyTime_AsTimeval()
This change uses these new functions in time.sleep() to avoid rounding issues.
The new API will be extended step by step, and the old API will be removed step
by step. Currently, some code is duplicated just to be able to move
incrementally, instead of pushing a large change at once.
Flushing sys.stdout and sys.stderr in Py_FatalError() can call Py_FatalError()
again. Add a reentrancy flag to detect this case and just abort on the second
call.
It should help to see exceptions when stderr is buffered: PyErr_Display() calls
sys.stderr.write(), it doesn't write into the stderr file descriptor directly.
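A minimal sketch of such a reentrancy guard (illustrative, assuming the fatal
path is effectively single-threaded; not the Py_FatalError() code):

    #include <stdio.h>
    #include <stdlib.h>

    static void fatal_error(const char *msg)
    {
        static int reentrant = 0;
        if (reentrant) {
            abort();            /* second entry: don't try to flush again */
        }
        reentrant = 1;
        /* ... flush the buffered streams, then report and abort ... */
        fprintf(stderr, "Fatal error: %s\n", msg);
        abort();
    }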
* Display the current Python stack if an exception was raised but the exception
has no traceback
* Disable faulthandler if an exception was raised (before it was only disabled
if no exception was raised)
* To display the current Python stack, call PyGILState_GetThisThreadState()
which works even if the GIL was released
I expected more users of _Py_wstat(), but in practice it's only used by
Modules/getpath.c. Move the function because it's not needed on Windows:
Windows uses PC/getpathp.c, which uses the Win32 API (e.g.
GetFileAttributesW()), not the POSIX API.
which returned an invalid result (a result with an exception set, or no result
without an exception set) in the exception message.
Also add a unit test to check that the exception contains the name of the
function.
Special case: the final _PyEval_EvalFrameEx() check doesn't mention the
function since it didn't execute a single function but a whole frame.
interrupted by a signal
Add a new _PyTime_AddDouble() function and remove the _PyTime_ADD_SECONDS()
macro. The _PyTime_ADD_SECONDS() macro only supported an integer number of
seconds, whereas _PyTime_AddDouble() has subsecond resolution.
EINTR error and special cases for Windows.
These functions now truncate the length to PY_SSIZE_T_MAX to get portable
and reliable behaviour. For example, the result of read() is undefined on
Linux if the count is greater than PY_SSIZE_T_MAX.
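A sketch of the clamp described above (an illustrative wrapper, not the
actual functions):

    #include <Python.h>            /* for PY_SSIZE_T_MAX */
    #include <unistd.h>

    static ssize_t read_clamped(int fd, void *buf, size_t count)
    {
        if (count > PY_SSIZE_T_MAX) {
            count = PY_SSIZE_T_MAX;    /* larger counts are non-portable */
        }
        return read(fd, buf, count);   /* caller must handle short reads */
    }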
available, a syscall introduced in Linux kernel 3.17. It is more reliable
and more secure, because it avoids the need for a file descriptor and waits
until the kernel has enough entropy.
* _Py_open() now raises exceptions on error. If open() fails, it raises an
OSError with the filename (see the sketch after this list).
* _Py_open() now releases the GIL while calling open()
* Add _Py_open_noraise() when _Py_open() cannot be used because the GIL is not
held
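A minimal sketch of the behaviour described in the first bullet (not the
actual _Py_open() implementation; the helper name is illustrative): release
the GIL around open(), save errno across reacquiring it, and raise OSError
with the filename on failure.

    #include <Python.h>
    #include <errno.h>
    #include <fcntl.h>

    static int open_with_exception(const char *filename, int flags)
    {
        int fd, saved_errno;
        Py_BEGIN_ALLOW_THREADS
        fd = open(filename, flags);
        saved_errno = errno;
        Py_END_ALLOW_THREADS
        if (fd < 0) {
            errno = saved_errno;       /* restore errno before raising */
            PyErr_SetFromErrnoWithFilename(PyExc_OSError, filename);
            return -1;
        }
        return fd;
    }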
raise a SystemError if a function returns a result and raises an exception.
The SystemError is chained to the previous exception; a minimal sketch of the
invariant follows.
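This sketch of the invariant uses a hypothetical helper, not the CPython
code, and omits the chaining that the real check performs: a C function must
either return a result or set an exception, never both and never neither.

    #include <Python.h>

    static PyObject *check_function_result(PyObject *result, const char *name)
    {
        if (result != NULL && PyErr_Occurred()) {
            /* result + exception: invalid; drop the result and report */
            Py_CLEAR(result);
            PyErr_Format(PyExc_SystemError,
                         "%s() returned a result with an exception set", name);
        }
        else if (result == NULL && !PyErr_Occurred()) {
            /* no result + no exception: also invalid */
            PyErr_Format(PyExc_SystemError,
                         "%s() returned NULL without setting an error", name);
        }
        return result;
    }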
Also refactor PyObject_Call() and PyCFunction_Call() to make them more
readable. Remove some checks which became useless (duplicate checks).
Change reviewed by Serhiy Storchaka.