The compiler now ignores constant statements and emits a SyntaxWarning.
Don't emit the warning for string statements because a triple-quoted string is
a common syntax for multiline comments.
Don't emit the warning for the Ellipsis either: 'def f(): ...' is legitimate
syntax for abstract functions.
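For example, a bare constant statement should now be flagged while a docstring-style
string and '...' should not. The snippet below is only a rough sanity check, not code
from this change; whether the warning is actually emitted depends on the interpreter
including this change:

    import warnings

    code = "x = 1\n1\nb'bytes'\n"   # bare constants used as statements
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        compile(code, "<test>", "exec")
    print([str(w.message) for w in caught])   # expect SyntaxWarning entries here

    # Neither of these should warn: a triple-quoted string statement and '...'.
    compile('"""multiline\ncomment"""\n', "<test>", "exec")
    compile("def f(): ...\n", "<test>", "exec")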
Changes:
* test_ast: ignore SyntaxWarning when compiling test statements. Modify
test_load_const() to use assignment expressions rather than constant
expressions.
* test_code: add more kinds of constant statements, ignore SyntaxWarning when
testing that the compiler removes constant statements.
* test_grammar: ignore SyntaxWarning on the statement "1"
obj2ast_constant() code is based on obj2ast_object() which has a special case
for Py_None. But in practice, we don't need to have a special case for
constants.
Issue noticed by Joseph Jevnik during a review.
Issue #26146: Add a new kind of AST node: ast.Constant. It can be used by
external AST optimizers, but the compiler does not emit such nodes directly.
An optimizer can replace the following AST nodes with ast.Constant:
* ast.NameConstant: None, False, True
* ast.Num: int, float, complex
* ast.Str: str
* ast.Bytes: bytes
* ast.Tuple if items are constants too: tuple
* frozenset
Update code to accept ast.Constant instead of ast.Num and/or ast.Str:
* compiler
* docstrings
* ast.literal_eval()
* Tools/parser/unparse.py
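As a sketch of what such an external optimizer can do (assuming Python 3.6+
where ast.Constant exists; the transformer below is illustrative, not code from
this change):

    import ast

    class Constantifier(ast.NodeTransformer):
        """Rewrite the legacy literal nodes into the equivalent ast.Constant."""

        def visit_Num(self, node):
            return ast.copy_location(ast.Constant(value=node.n), node)

        def visit_Str(self, node):
            return ast.copy_location(ast.Constant(value=node.s), node)

        def visit_Bytes(self, node):
            return ast.copy_location(ast.Constant(value=node.s), node)

        def visit_NameConstant(self, node):
            return ast.copy_location(ast.Constant(value=node.value), node)

    tree = ast.parse("x = 1\ns = 'text'\nflag = True")
    tree = ast.fix_missing_locations(Constantifier().visit(tree))
    code = compile(tree, "<optimized>", "exec")   # the compiler accepts ast.Constant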
with no known parent package.
Previously SystemError was raised if the parent package didn't exist
(e.g., __package__ was set to '').
Thanks to Florent Xicluna and Yongzhi Pan for reporting the issue.
In a previous change, __spec__.parent was prioritized over
__package__. That is a backwards-compatibility break, but we do
eventually want __spec__ to be the ground truth for module details. So
this change reverts the change in semantics and instead raises an
ImportWarning when __package__ != __spec__.parent to give people time
to adjust to using spec objects.
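A minimal sketch of the resolution order this implies (resolve_package is a
hypothetical helper name; the real logic lives in importlib's bootstrap code):

    import warnings

    def resolve_package(globals):
        package = globals.get('__package__')
        spec = globals.get('__spec__')
        if package is not None:
            if spec is not None and package != spec.parent:
                warnings.warn("__package__ != __spec__.parent "
                              "({!r} != {!r})".format(package, spec.parent),
                              ImportWarning, stacklevel=3)
            return package                 # __package__ still wins, with a warning
        if spec is not None:
            return spec.parent             # otherwise fall back to the spec
        # Last resort: derive the package from __name__ and __path__.
        package = globals['__name__']
        if '__path__' not in globals:
            package = package.rpartition('.')[0]
        return package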
Issue #26161: Use Py_uintptr_t instead of void* for atomic pointers in
pyatomic.h. Use atomic_uintptr_t when <stdatomic.h> is used.
Using void* causes compilation warnings depending on which implementation of
atomic types is used.
Issue #25843: When compiling code, don't merge constants if they are equal but
have different types. For example, "f1, f2 = lambda: 1, lambda: 1.0" is now
correctly compiled to two different functions: f1() returns 1 (int) and f2()
returns 1.0 (float), even if 1 and 1.0 are equal.
Add a new _PyCode_ConstantKey() private function.
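The fix can be observed from pure Python (a small check, not the test code itself):

    f1, f2 = lambda: 1, lambda: 1.0
    print(type(f1()), type(f2()))        # <class 'int'> <class 'float'>
    # Each code object now keeps its own constant instead of sharing a merged one.
    print(f1.__code__.co_consts, f2.__code__.co_consts)   # (1,) (1.0,)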
Issue #26107: The format of the co_lnotab attribute of code objects changes to
support negative line number deltas.
Changes:
* assemble_lnotab(): if line number delta is less than -128 or greater than
127, emit multiple (offset_delta, lineno_delta) in co_lnotab
* update functions decoding co_lnotab to use signed 8-bit integers
- dis.findlinestarts()
- PyCode_Addr2Line()
- _PyCode_CheckLineNumber()
- frame_setlineno()
* update lnotab_notes.txt
* increase importlib MAGIC_NUMBER to 3361
* document the change in What's New in Python 3.6
* cleanup also PyCode_Optimize() to use better variable names
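A pure-Python sketch of the new decoding rule, in the spirit of
dis.findlinestarts() (iter_line_starts is an illustrative name; it assumes an
interpreter that still exposes co_lnotab):

    def iter_line_starts(code):
        # co_lnotab stores pairs of (bytecode offset delta, line number delta).
        byte_increments = code.co_lnotab[0::2]
        line_increments = code.co_lnotab[1::2]
        lineno = code.co_firstlineno
        addr = 0
        lastlineno = None
        for byte_incr, line_incr in zip(byte_increments, line_increments):
            if byte_incr:
                if lineno != lastlineno:
                    yield addr, lineno
                    lastlineno = lineno
                addr += byte_incr
            if line_incr >= 0x80:
                # New in this change: the delta is a signed 8-bit value,
                # so the line number may decrease between offsets.
                line_incr -= 0x100
            lineno += line_incr
        if lineno != lastlineno:
            yield addr, lineno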
Issue #26154: Add a new private _PyThreadState_UncheckedGet() function which
gets the current thread state, but doesn't call Py_FatalError() if it is NULL.
Python 3.5.1 removed the _PyThreadState_Current symbol from the Python C API to
stop exposing complex and private atomic types. Atomic types depend on the
compiler and can even depend on compiler options. The new
_PyThreadState_UncheckedGet() function allows getting the variable value
without having to care about the exact implementation of atomic types.
Changes:
* Replace direct usage of the _PyThreadState_Current variable with a call to
_PyThreadState_UncheckedGet().
* In pystate.c, replace direct usage of the _PyThreadState_Current variable
with the PyThreadState_GET() macro for readability.
* Document also PyThreadState_Get() in pystate.h
not defined for a relative import.
This is the start of work to try and clean up import semantics to rely
more on a module's spec than on the myriad attributes that get set on
a module. Thanks to Rose Ames for the patch.
Don't fall back to PyDict_GetItemWithError() if the hash is unknown: compute
the hash instead. Add also comments to explain the optimization a little bit.
This avoids possible buffer overreads when int(), float(), compile(), exec()
and eval() are passed bytes-like objects. Similar code is removed from the
complex() constructor, where it was not reachable.
Patch by John Leitch, Serhiy Storchaka and Martin Panter.
requested name doesn't exist in globals: clear the KeyError exception before
calling PyObject_GetItem(). Fail also if the raised exception is not a
KeyError.
This changes the main documentation, doc strings, source code comments, and a
couple error messages in the test suite. In some cases the word was removed
or edited some other way to fix the grammar.
Issue #25274: sys.setrecursionlimit() now raises a RecursionError if the new
recursion limit is too low for the current recursion depth. Modify also the
"lower-water mark" formula to make it monotonic. This mark is used to decide
when the overflowed flag of the thread state is reset.
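For example (a small sketch; the exact minimum accepted limit and the error
message are implementation details):

    import sys

    def dive(n):
        if n:
            dive(n - 1)
        else:
            try:
                # About 200 frames deep, a limit of 50 is far below the
                # current recursion depth, so the new limit is rejected.
                sys.setrecursionlimit(50)
            except RecursionError as exc:
                print("rejected:", exc)

    dive(200)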
function instead of the getentropy() function. The getentropy() function
blocks to generate very good quality entropy; os.urandom() doesn't need such
high-quality entropy.
On the x86 OpenBSD 5.8 buildbot, the integer overflow check is ignored. Copy
the tv_sec variable into a Py_time_t variable instead of "simply" casting it to
Py_time_t, to fix the integer overflow check.
On Windows, the tv_sec field of the timeval structure has the type C long,
whereas it has the type C time_t on all other platforms. A C long has a size of
32 bits (signed integer: 1 bit for the sign, 31 bits for the value), which is
not enough to store an Epoch timestamp after the year 2038.
Add the _PyTime_AsTimevalTime_t() function written for datetime.datetime.now():
convert a _PyTime_t timestamp to a (secs, us) tuple where secs type is time_t.
This allows supporting dates after the year 2038 on Windows.
Enhance also _PyTime_AsTimeval_impl() to detect overflow on the number of
seconds when rounding the number of microseconds.
The overflow test in test_FromSecondsObject() fails on the FreeBSD 10.0
buildbot, which uses clang. clang implements more aggressive optimizations
which give different results than GCC on undefined behaviour.
Check whether a multiplication will overflow, instead of checking whether a
multiplication has overflowed, to avoid undefined behaviour.
Add also debug information if the test on overflow fails.
* Filter values which would overflow on conversion to the C long type
(for timeval.tv_sec).
* Adjust also the message of OverflowError on PyTime conversions
* test_time: add debug information if a timestamp conversion fails
Drop all hardcoded tests. Instead, reimplement each function in Python, usually
using decimal.Decimal for the rounding mode.
Add many more values to the dataset. Test various timestamp units from
picoseconds to seconds, as integers and floats.
Enhance also _PyTime_AsSecondsDouble().
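For instance, a pure-Python reference conversion built on decimal.Decimal might
look like this (a sketch with illustrative names, not the actual test code):

    from decimal import Decimal, ROUND_HALF_EVEN

    SEC_TO_NS = 10 ** 9

    def seconds_to_nanoseconds(secs, rounding=ROUND_HALF_EVEN):
        # Decimal(float) is exact, so the only rounding happens in the final
        # quantize() call, performed in the requested rounding mode.
        ns = Decimal(secs) * SEC_TO_NS
        return int(ns.quantize(Decimal(1), rounding=rounding))

    assert seconds_to_nanoseconds(1.5) == 1500000000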
datetime.datetime now rounds microseconds to nearest with ties going to the
nearest even integer (ROUND_HALF_EVEN), as round(float), instead of rounding
towards -Infinity (ROUND_FLOOR).
pytime API: replace _PyTime_ROUND_HALF_UP with _PyTime_ROUND_HALF_EVEN. Fix
also _PyTime_Divide() for negative numbers.
_PyTime_AsTimeval_impl() now reuses _PyTime_Divide() instead of reimplementing
rounding modes.
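The built-in round() follows the same half-to-even rule that microsecond
rounding now uses (a quick illustration, not code from this change):

    # Ties go to the nearest even integer: 0.5 and 2.5 round down, 1.5 rounds up.
    assert (round(0.5), round(1.5), round(2.5)) == (0, 2, 2)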
Issue #24891: Fix a race condition at Python startup if the file descriptor
of stdin (0), stdout (1) or stderr (2) is closed while Python is creating
sys.stdin, sys.stdout and sys.stderr objects. These attributes are now set
to None if the creation of the object failed, instead of raising an OSError
exception. Initial patch written by Marco Paolini.
Don't check anymore at runtime that the monotonic clock doesn't go backward.
Yes, it happens. It occurs sometimes each month on a Debian buildbot slave
running in a VM.
The problem is that Python cannot do anything useful if a monotonic clock goes
backward. It was decided in PEP 418 not to fix the system, but only to expose
the clock provided by the OS.
with ties going away from zero (ROUND_HALF_UP), as in Python 2 and in Python
versions older than 3.3, instead of rounding to nearest with ties going to the
nearest even integer (ROUND_HALF_EVEN).
See the latest version of getrandom() manual page:
http://man7.org/linux/man-pages/man2/getrandom.2.html#NOTES
The behavior when a call to getrandom() that is blocked while reading from
/dev/urandom is interrupted by a signal handler depends on the
initialization state of the entropy buffer and on the request size, buflen.
If the entropy is not yet initialized, then the call will fail with the
EINTR error. If the entropy pool has been initialized and the request size
is large (buflen > 256), the call either succeeds, returning a partially
filled buffer, or fails with the error EINTR. If the entropy pool has been
initialized and the request size is small (buflen <= 256), then getrandom()
will not fail with EINTR. Instead, it will return all of the bytes that
have been requested.
Note: py_getrandom() calls getrandom() with flags=0.
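In Python terms, the retry logic this behaviour calls for looks roughly like
the sketch below (it uses os.getrandom(), available on Linux with Python 3.6+;
it is not the C implementation):

    import os

    def read_random(n):
        # Keep reading until n bytes are collected: getrandom() may return
        # fewer bytes than requested for large requests, or be interrupted
        # by a signal before the entropy pool is initialized.
        buf = bytearray()
        while len(buf) < n:
            try:
                buf += os.getrandom(n - len(buf))
            except InterruptedError:
                continue
        return bytes(buf)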
Summary of changes:
1. Coroutines now have a distinct type at the C level, separate from
generators: PyCoro_Type, with a new typedef PyCoroObject.
PyCoroObject shares the initial segment of its struct layout with
PyGenObject, making it possible to reuse the existing generator
machinery. The new type is exposed as 'types.CoroutineType'.
As a consequence of having a new type, the CO_GENERATOR flag is
no longer applied to coroutines.
2. Having a separate type for coroutines made it possible to add
an __await__ method to the type. Although it is not used by the
interpreter (see details on that below), it makes coroutines
naturally (without using __instancecheck__) conform to
collections.abc.Coroutine and collections.abc.Awaitable ABCs.
[The __instancecheck__ is still used for generator-based
coroutines, as we don't want to add __await__ for generators.]
3. Add new opcode: GET_YIELD_FROM_ITER. The opcode is needed to
allow passing native coroutines to the YIELD_FROM opcode.
Before this change, the 'yield from o' expression was compiled to:
    (o)
    GET_ITER
    LOAD_CONST
    YIELD_FROM
Now, we use GET_YIELD_FROM_ITER instead of GET_ITER.
The reason for adding a new opcode is that GET_ITER is used
in some contexts (such as 'for .. in' loops) where passing
a coroutine object is invalid.
4. Add two new introspection functions to the inspect module:
getcoroutinestate(c) and getcoroutinelocals(c).
5. inspect.iscoroutine(o) is updated to test if 'o' is a native
coroutine object. Before this commit it used abc.Coroutine,
and it was requested to update inspect.isgenerator(o) to use
abc.Generator; it was decided, however, that inspect functions
should really be tailored for checking for native types.
6. The sys.set_coroutine_wrapper(w) API is updated to work only with
native coroutines. Since the types.coroutine decorator now supports
any type of callable, it would be confusing for it not to work for
all kinds of coroutines.
7. The exception logic in the generators C implementation was updated
to raise clearer messages for coroutines:
Before: TypeError("generator raised StopIteration")
After: TypeError("coroutine raised StopIteration")
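For example, the new introspection helpers behave roughly like this (a small
sketch; expected output shown in comments):

    import inspect

    async def greet(name):
        return "hello " + name

    c = greet("world")
    print(inspect.iscoroutine(c))         # True: a native coroutine object
    print(inspect.getcoroutinestate(c))   # CORO_CREATED
    try:
        c.send(None)                      # run the coroutine to completion
    except StopIteration as exc:
        print(exc.value)                  # hello world
    print(inspect.getcoroutinestate(c))   # CORO_CLOSED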
Known limitations of the current implementation:
- documentation changes are incomplete
- there's a reference leak I haven't tracked down yet
The leak is most visible by running:
./python -m test -R3:3 test_importlib
However, you can also see it by running:
./python -X showrefcount
Importing the array or _testmultiphase modules, and
then deleting them from both sys.modules and the local
namespace shows significant increases in the total
number of active references each cycle. By contrast,
with _testcapi (which continues to use single-phase
initialisation) the global refcounts stabilise after
a couple of cycles.
* adds missing INCREF in WITH_CLEANUP_START
* adds missing DECREF in WITH_CLEANUP_FINISH
* adds several new tests Yury created while investigating this
CID 1291697 (#1 of 1): Dereference before null check (REVERSE_INULL)
check_after_deref: Null-checking tb suggests that it may be null, but it has already been dereferenced on all paths leading to the check.
The concept of .pyo files no longer exists. Now .pyc files have an
optional `opt-` tag which specifies whether any extra optimizations beyond
the peephole optimizer were applied.
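The resulting cache file names can be inspected with
importlib.util.cache_from_source() (the interpreter tag in the output depends
on the Python version):

    import importlib.util

    # Without optimizations there is no opt- tag in the name.
    print(importlib.util.cache_from_source('spam.py'))
    # e.g. __pycache__/spam.cpython-36.pyc

    # With optimization level 2 (-OO), the opt- tag records the level.
    print(importlib.util.cache_from_source('spam.py', optimization=2))
    # e.g. __pycache__/spam.cpython-36.opt-2.pyc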