* Stores all location info in linetable to conform to PEP 626.
* Remove column table from code objects.
* Remove end-line table from code objects.
* Document new location table format
* Moves the bytecode to the end of the corresponding PyCodeObject, and quickens it in-place.
* Removes the almost-always-unused co_varnames, co_freevars, and co_cellvars member caches
* _PyOpcode_Deopt is a new mapping from all opcodes to their un-quickened forms.
* _PyOpcode_InlineCacheEntries is renamed to _PyOpcode_Caches
* _Py_IncrementCountAndMaybeQuicken is renamed to _PyCode_Warmup
* _Py_Quicken is renamed to _PyCode_Quicken
* _co_quickened is renamed to _co_code_adaptive (and is now a read-only memoryview).
* No longer emit unused nonzero opargs in the compiler.
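For illustration, the consolidated linetable backs the PEP 626 co_lines() API, which is observable from Python (a minimal sketch; exact offsets vary by version):
```
# Iterate the PEP 626 location info stored in the code object's linetable.
code = compile("x = 1\nif x:\n    y = 2\n", "<demo>", "exec")
for start, end, line in code.co_lines():
    # Each entry maps a bytecode range [start, end) to a source line
    # (line can be None for artificial instructions).
    print(start, end, line)
```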
Instead of manually enumerating the global strings in generate_global_objects.py, we derive the list from uses of _Py_ID() and _Py_STR() in the source files.
This is partly inspired by gh-31261.
https://bugs.python.org/issue46541
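A minimal sketch of what that extraction might look like, assuming a simple regex scan over the C sources (collect_global_strings is a hypothetical helper, not the actual script):
```
import re
from pathlib import Path

# Match names used as _Py_ID(name) or _Py_STR(name) in C source files.
_GLOBAL_STRING_RE = re.compile(r"_Py_(?:ID|STR)\((\w+)\)")

def collect_global_strings(root):
    """Hypothetical helper: gather all global string names under `root`."""
    names = set()
    for path in Path(root).rglob("*.c"):
        names.update(_GLOBAL_STRING_RE.findall(path.read_text(encoding="utf-8")))
    return sorted(names)
```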
We're no longer using _Py_IDENTIFIER() (or _Py_static_string()) in any core CPython code. It is still used in a number of non-builtin stdlib modules.
The replacement is: PyUnicodeObject (not pointer) fields under _PyRuntimeState, statically initialized as part of _PyRuntime. A new _Py_GET_GLOBAL_IDENTIFIER() macro facilitates lookup of the fields (along with _Py_GET_GLOBAL_STRING() for non-identifier strings).
https://bugs.python.org/issue46541#msg411799 explains the rationale for this change.
The core of the change is in:
* (new) Include/internal/pycore_global_strings.h - the declarations for the global strings, along with the macros
* Include/internal/pycore_runtime_init.h - added the static initializers for the global strings
* Include/internal/pycore_global_objects.h - where the struct in pycore_global_strings.h is hooked into _PyRuntimeState
* Tools/scripts/generate_global_objects.py - added generation of the global string declarations and static initializers
I've also added a --check flag to generate_global_objects.py (along with make check-global-objects) to check for unused global strings. That check is added to the PR CI config.
The remainder of this change updates the core code to use _Py_GET_GLOBAL_IDENTIFIER() instead of _Py_IDENTIFIER() and the related _Py*Id functions (likewise for _Py_GET_GLOBAL_STRING() instead of _Py_static_string()). This includes adding a few functions where there wasn't already an alternative to _Py*Id(), replacing the _Py_Identifier * parameter with PyObject *.
The following are not changed (yet):
* stop using _Py_IDENTIFIER() in the stdlib modules
* (maybe) get rid of _Py_IDENTIFIER(), etc. entirely -- this may not be doable, as at least one package on PyPI uses this (private) API
* (maybe) intern the strings during runtime init
https://bugs.python.org/issue46541
* Add PRECALL_FUNCTION opcode.
* Move 'call shape' variables into a struct.
* Replace CALL_NO_KW and CALL_KW with KW_NAMES and CALL instructions.
* Specialize for builtin methods using the METH_FASTCALL | METH_KEYWORDS protocol.
* Allow kwnames for specialized calls to builtin types.
* Specialize calls to tuple(arg) and str(arg).
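The new calling convention is visible from Python with dis; a sketch (opcode names are 3.11-specific and may differ in other versions):
```
import dis

# On 3.11 a keyword call lowers to KW_NAMES followed by CALL,
# instead of the old CALL_FUNCTION_KW instruction.
dis.dis(compile("f(a, b=1)", "<demo>", "eval"))
```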
* Add RETURN_GENERATOR and JUMP_NO_INTERRUPT opcodes.
* Trim the frame and generator objects by one word each.
* Minor refactor of frame.c
* Update test.test_sys to account for smaller frames.
* Treat generator functions as normal functions when evaluating and specializing.
* Do not PUSH/POP the traceback or type onto the stack as part of exc_info
* Remove exc_traceback and exc_type from _PyErr_StackItem
* Add a What's New entry, because this change breaks things like Cython
The line numbers reported for the actual calls to the decorators of
functions and classes were wrong (as opposed to the loading of the
decorators, where the line numbers were already correct).
Co-authored-by: Łukasz Langa <lukasz@langa.pl>
* Make internal APIs that take PyFrameConstructor take a PyFunctionObject instead.
* Add reference to function to frame, borrow references to builtins and globals.
* Add COPY_FREE_VARS instruction to allow specialization of calls to inner functions.
The new resizing system works like this:
```
$ cat t.py
a + a + a + b + c + a + a + a + b + c + a + a + a + b + c + a + a + a + b + c
[repeated 99 more times]
$ ./python t.py
RESIZE: prev len = 32, new len = 66
FINAL SIZE: 56
-----------------------------------------------------
RESIZE: prev len = 32, new len = 66
RESIZE: prev len = 66, new len = 134
RESIZE: prev len = 134, new len = 270
RESIZE: prev len = 270, new len = 542
RESIZE: prev len = 542, new len = 1086
RESIZE: prev len = 1086, new len = 2174
RESIZE: prev len = 2174, new len = 4350
RESIZE: prev len = 4350, new len = 8702
FINAL SIZE: 8004
```
So we now make considerably fewer `_PyBytes_Resize` calls.
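The pattern in the log (each new length is `2 * prev + 2`) is plain geometric growth; a minimal Python sketch of the strategy:
```
def grown_size(current, needed):
    # Double (plus a little) rather than growing by a fixed increment,
    # so the number of resize calls is logarithmic in the final size.
    new_len = current
    while new_len < needed:
        new_len = 2 * new_len + 2
    return new_len

assert grown_size(32, 100) == 134   # 32 -> 66 -> 134, matching the log above
```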
Automerge-Triggered-By: GH:isidentical
This PR is part of PEP 657 and augments the compiler to emit ending
line numbers as well as starting and ending columns from the AST
into compiled code objects. This allows bytecodes to be correlated
to the exact source code ranges that generated them.
This information is made available through the following public APIs:
* The `co_positions` method on code objects.
* The C API function `PyCode_Addr2Location`.
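For example, `co_positions` yields one `(start_line, end_line, start_col, end_col)` tuple per instruction (components may be None):
```
code = compile("x = a + b", "<demo>", "exec")
for start_line, end_line, start_col, end_col in code.co_positions():
    # Each instruction maps back to an exact source range.
    print(start_line, end_line, start_col, end_col)
```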
Co-authored-by: Batuhan Taskaya <isidentical@gmail.com>
Co-authored-by: Ammar Askar <ammar@ammaraskar.com>
Currently, if an arg value escapes (into the closure for an inner function) we end up allocating two indices in the fast locals even though only one gets used. Additionally, using the lower index would be better in some cases, such as with no-arg `super()`. To address this, we update the compiler to fix the offsets so each variable only gets one "fast local". As a consequence, now some cell offsets are interspersed with the locals (only when an arg escapes to an inner function).
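The scenario in question, sketched in Python:
```
def outer(x):
    # 'x' is an argument that also escapes into the closure below.
    # Previously it was allocated both a regular local slot and a
    # separate cell slot; now it gets a single fast-local offset.
    def inner():
        return x
    return inner
```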
https://bugs.python.org/issue43693
This was reverted in GH-26596 (commit 6d518bb) due to some bad memory accesses.
* Add the MAKE_CELL opcode. (gh-26396)
The memory accesses have been fixed.
https://bugs.python.org/issue43693
This moves logic out of the frame initialization code and into the compiler and eval loop. Doing so simplifies the runtime code and allows us to optimize it better.
https://bugs.python.org/issue43693
These were reverted in gh-26530 (commit 17c4edc) due to refleaks.
* 2c1e258 - Compute deref offsets in compiler (gh-25152)
* b2bf2bc - Add new internal code objects fields: co_fastlocalnames and co_fastlocalkinds. (gh-26388)
This change fixes the refleaks.
https://bugs.python.org/issue43693
* Revert "bpo-43693: Compute deref offsets in compiler (gh-25152)"
This reverts commit b2bf2bc1ec.
* Revert "bpo-43693: Add new internal code objects fields: co_fastlocalnames and co_fastlocalkinds. (gh-26388)"
This reverts commit 2c1e2583fd.
These two commits are breaking the refleak buildbots.
Merges locals and cells into a single array.
Saves a pointer in the interpreter and means that we no longer need the LOAD_CLOSURE opcode.
https://bugs.python.org/issue43693
A number of places in the code base (notably ceval.c and frameobject.c) rely on mapping variable names to indices in the frame "locals plus" array (AKA fast locals), and thus opargs. Currently the compiler indirectly encodes that information on the code object as the tuples co_varnames, co_cellvars, and co_freevars. At runtime the dependent code must calculate the proper mapping from those, which isn't ideal and impacts performance-sensitive sections. This is something we can easily address in the compiler instead.
This change addresses the situation by replacing internal use of co_varnames, etc. with a single combined tuple of names in locals-plus order, along with a minimal array mapping each to its kind (local vs. cell vs. free). These two new PyCodeObject fields, co_fastlocalnames and co_fastlocalkinds, are not exposed to Python code for now, but co_varnames, etc. are still available with the same values as before (though computed lazily).
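For example, the per-kind views can still be inspected from Python (output values are illustrative):
```
def f():
    x = 1            # 'x' is captured below, so it lives in a cell
    def g():
        return x     # 'x' is a free variable of g
    return g

# The legacy tuples remain available, now derived lazily from the
# combined fast-locals data:
print(f.__code__.co_varnames)    # plain locals, e.g. ('g',)
print(f.__code__.co_cellvars)    # e.g. ('x',)
print(f().__code__.co_freevars)  # e.g. ('x',)
```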
Aside from the (mild) performance impact, there are a number of other benefits:
* there's now a clear, direct relationship between locals-plus and variables
* code that relies on the locals-plus-to-name mapping is simpler
* marshaled code objects are smaller and serialize/de-serialize faster
Also note that we can take this approach further by expanding the possible values in co_fastlocalkinds to include specific argument types (e.g. positional-only, kwargs). Doing so would allow further speed-ups in _PyEval_MakeFrameVector(), which is where args get unpacked into the locals-plus array. It would also allow us to shrink marshaled code objects even further.
https://bugs.python.org/issue43693
"Zero cost" exception handling.
* Uses a lookup table to determine how to handle exceptions.
* Removes SETUP_FINALLY and POP_TOP block instructions, eliminating (most of) the runtime overhead of try statements.
* Reduces the size of the frame object by about 60%.
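The effect is easy to see with dis on 3.11+ (a sketch; output details vary by version):
```
import dis

def f():
    try:
        return g()          # 'g' is just a placeholder name here
    except ValueError:
        return None

# There is no SETUP_FINALLY any more; instead dis prints a separate
# "ExceptionTable:" section describing the handler ranges.
dis.dis(f)
```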
This fixes the following warning:
'initializing': conversion from 'Py_ssize_t' to 'int', possible loss of data [D:\a\cpython\cpython\PCbuild\pythoncore.vcxproj]
* Add a length parameter to PyLineTable_InitAddressRange and don't use sentinel values at the end of the table. This makes the line number table more robust.
* Update PyCodeAddressRange to match PEP 626.
To improve the user experience of understanding which part of the source an error message associated with a SyntaxError refers to, we can highlight the whole error range and not only place the caret at the first character. In this way:
>>> foo(x, z for z in range(10), t, w)
File "<stdin>", line 1
foo(x, z for z in range(10), t, w)
^
SyntaxError: Generator expression must be parenthesized
becomes
>>> foo(x, z for z in range(10), t, w)
File "<stdin>", line 1
foo(x, z for z in range(10), t, w)
^^^^^^^^^^^^^^^^^^^^
SyntaxError: Generator expression must be parenthesized
* Modify compiler to reduce stack consumption for large expressions.
* Add more tests for stack usage.
* Add NEWS item.
* Raise SystemError for truly excessive stack use.
* Move the check for sending None to a starting generator or coroutine into the bytecode.
* Document new bytecode and make it fail gracefully if mis-compiled.
* Use instruction offset, rather than bytecode offset. Streamlines interpreter dispatch a bit, and removes most EXTENDED_ARGs for jumps.
* Change some uses of PyCode_Addr2Line to PyFrame_GetLineNumber
Remove the compiler functions that use the "struct _mod" type, because
the public AST C API was removed:
* PyAST_Compile()
* PyAST_CompileEx()
* PyAST_CompileObject()
* PyFuture_FromAST()
* PyFuture_FromASTObject()
These functions were undocumented and excluded from the limited C API.
Rename functions:
* PyAST_CompileObject() => _PyAST_Compile()
* PyFuture_FromASTObject() => _PyFuture_FromAST()
Moreover, _PyFuture_FromAST() is no longer exported (replace
PyAPI_FUNC() with extern). _PyAST_Compile() remains exported for
test_peg_generator.
Also remove the compatibility functions:
* PyAST_Compile()
* PyAST_CompileEx()
* PyFuture_FromAST()
Rename Include/symtable.h to Include/internal/pycore_symtable.h,
don't export symbols anymore (replace PyAPI_FUNC and PyAPI_DATA with
extern) and rename functions:
* PyST_GetScope() to _PyST_GetScope()
* PySymtable_BuildObject() to _PySymtable_Build()
* PySymtable_Free() to _PySymtable_Free()
Remove PySymtable_Build(), Py_SymtableString() and
Py_SymtableStringObject() functions.
The Py_SymtableString() function was part of the stable ABI by
mistake, but it could not be used, since the symtable.h header file
was excluded from the limited C API.
The Python symtable module remains available and is unchanged.
Move the _PyAST_GetDocString() and _PyAST_ExprAsUnicode() functions to
the internal C API: from Include/ast.h to a new
Include/internal/pycore_ast.h header file. Don't export these
functions anymore: replace PyAPI_FUNC() with extern.
Also remove unused includes.
* bpo-43497: Emit SyntaxWarnings for assertions with tuple constants.
Add a test that shows that a tuple constant (a tuple, where all of its
members are also compile-time constants) produces a SyntaxWarning. Then
fix this failure.
* Make SyntaxWarnings also work when "optimized".
* Split tests for SyntaxWarning to SyntaxError conversion
SyntaxWarnings emitted by the compiler when configured to be errors are
actually raised as SyntaxError exceptions.
Move these tests into their own method and add a test to ensure they are
raised. Previously we only tested that they were not raised for a
"valid" assertion statement.
* Replace Py_FatalError() calls with regular SystemError exceptions.
* compiler_exit_scope() calls _PyErr_WriteUnraisableMsg() to log the
PySequence_DelItem() failure.
* compiler_unit_check() uses _PyMem_IsPtrFreed().
* compiler_make_closure(): remove "(reftype == FREE)" comment since
reftype can also be LOCAL or GLOBAL_EXPLICIT.
When Python is built in debug mode (with C assertions), calling a
type slot like sq_length (__len__() in Python) now fails with a fatal
error if the slot succeeded with an exception set, or failed with no
exception set. The error message contains the slot, the type name,
and the current exception (if an exception is set).
* Check the result of all slots using _Py_CheckSlotResult().
* No longer pass op_name to ternary_op() in release mode.
* Replace operator with dunder Python method name in error messages.
For example, replace "*" with "__mul__".
* Fix compiler_exit_scope() when an exception is set.
* Fix bytearray.extend() when an exception is set: don't call
bytearray_setslice() with an exception set.
* Mark bytecodes at end of try-except as artificial.
* Make sure that the CFG is consistent throughout optimization.
* Extend line-number propagation logic so that implicit returns after 'try-except' or 'with' have the correct line numbers.
* Update importlib
* Add test for frame.f_lineno with/without tracing.
* Make sure that frame.f_lineno is correct regardless of whether frame.f_trace is set.
* Update importlib
* Add NEWS
* Delete jump instructions that bypass empty blocks
* Add news entry
* Explicitly check for unconditional jump opcodes
Using the is_jump function results in the inclusion of instructions like
returns for which this optimization is not really valid. So, instead
explicitly check that the instruction is an unconditional jump.
* Handle conditional jumps, delete jumps gracefully
* Ensure b_nofallthrough and b_reachable are valid
* Add test for redundant jumps
* Regenerate importlib.h and edit Misc/ACKS
* Fix bad whitespace
Reduce the memory footprint and improve the performance of loading modules that have many function annotations.
>>> sys.getsizeof({"a":"int","b":"int","return":"int"})
232
>>> sys.getsizeof(("a","int","b","int","return","int"))
88
The tuple is converted into a dict on the fly the first time `func.__annotations__` is accessed.
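A minimal sketch of the observable behavior:
```
def add(a: "int", b: "int") -> "int":
    return a + b

# Stored compactly at function creation; materialized as a dict only
# on first access:
print(add.__annotations__)   # {'a': 'int', 'b': 'int', 'return': 'int'}
```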
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Co-authored-by: Inada Naoki <songofacandy@gmail.com>