* Adds EXIT_INTERPRETER instruction to exit _PyEval_EvalFrameDefault()
* Simplifies RETURN_VALUE, YIELD_VALUE and RETURN_GENERATOR instructions as they no longer need to check for entry frames.
The switch cases (really TARGET(opcode) macros) have been moved from ceval.c to generated_cases.c.h. That file is generated from instruction definitions in bytecodes.c (which impersonates a C file so the C code it contains can be edited without custom support in e.g. VS Code).
The code generator lives in Tools/cases_generator (it has a README.md explaining how it works). The DSL used to describe the instructions is a work in progress, described in https://github.com/faster-cpython/ideas/blob/main/3.12/interpreter_definition.md.
This is still a work in progress; an easy next step could be auto-generating super-instructions.
**IMPORTANT: Merge Conflicts**
If you get a merge conflict for instruction implementations in ceval.c, your best bet is to port your changes to bytecodes.c. That file looks almost the same as the original cases, except instead of `TARGET(NAME)` it uses `inst(NAME)`, and the trailing `DISPATCH()` call is omitted (the code generator adds it automatically).
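For illustration, here is roughly what an instruction looks like in each file (a sketch; see Tools/cases_generator/README.md for the authoritative syntax):

```c
/* In bytecodes.c -- note inst() instead of TARGET(), and no DISPATCH(): */
inst(POP_TOP) {
    PyObject *value = POP();
    Py_DECREF(value);
}

/* What the generator emits into generated_cases.c.h (approximately): */
TARGET(POP_TOP) {
    PyObject *value = POP();
    Py_DECREF(value);
    DISPATCH();
}
```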
This reduces confusion between jumps at the bytecode level
(e.g. JUMPTO(), JUMPBY(), and various JUMP_*() opcodes)
and jumps in the C code (which are 'goto' statements).
Change FOR_ITER so that it has the same stack effect whether or not it
branches. Performance is unchanged, as FOR_ITER (and its specialized
forms) jumps over the cleanup code.
Make the sys.setprofile() and sys.settrace() functions reentrant. They
can no longer fail with: RuntimeError("Cannot install a trace function
while another trace function is being installed").
Make the _PyEval_SetTrace() and _PyEval_SetProfile() functions reentrant,
rather than detecting and rejecting reentrant calls. Only delete the
reference to the function arguments once the new function is fully set,
when a reentrant call is safe. Also call _PySys_Audit() earlier.
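A minimal sketch of that ordering (the function name is illustrative, not the actual implementation):

```c
static int
set_profile_reentrant(PyThreadState *tstate, Py_tracefunc func, PyObject *arg)
{
    /* Audit first, before any state is mutated. */
    if (PySys_Audit("sys.setprofile", NULL) < 0) {
        return -1;
    }
    PyObject *old_arg = tstate->c_profileobj;  /* keep the old reference alive */
    Py_XINCREF(arg);
    tstate->c_profileobj = arg;
    tstate->c_profilefunc = func;
    /* The new function is now fully set, so a reentrant call is safe:
       only at this point is the old reference released. */
    Py_XDECREF(old_arg);
    return 0;
}
```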
The `}` marked with `/* End instructions */` is the end of the switch.
There is another pair of `{}` around the switch, which is vestigial
from ancient times when it was `for (;;) { switch (opcode) { ... } }`.
All `DISPATCH` macro calls should be inside that pair.
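Schematically (a simplified skeleton, not the actual ceval.c):

```c
{   /* vestigial outer braces from the old for (;;) loop */
    switch (opcode) {
        TARGET(NOP) {
            DISPATCH();
        }
        /* ... all other instructions ... */
    } /* End instructions */
    /* error and exit handling, still inside the outer pair */
}
```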
Remove the sys.getdxp() function and the Tools/scripts/analyze_dxp.py
script. DXP stands for "dynamic execution pairs". They were related
to the DYNAMIC_EXECUTION_PROFILE and DXPAIRS macros, which were
removed in Python 3.11. Python can now be built with "./configure
--enable-pystats" to gather statistics on Python opcodes.
* gh-93503: Add APIs to set profiling and tracing functions in all threads in the C API (see the usage sketch after this list)
* Use a separate API
* Fix NEWS entry
* Add locks around the loop
* Document ignoring exceptions
* Use the new APIs in the sys module
* Update docs
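A usage sketch of the new all-threads setters, PyEval_SetProfileAllThreads() and PyEval_SetTraceAllThreads() (the callback below is illustrative):

```c
#include <Python.h>

static int
my_trace(PyObject *obj, PyFrameObject *frame, int what, PyObject *arg)
{
    /* An ordinary Py_tracefunc; inspect `what` (PyTrace_CALL, etc.) as usual. */
    return 0;
}

void
install_trace_everywhere(void)
{
    /* Installs the trace function in all threads of the current
       interpreter, not just the calling thread. */
    PyEval_SetTraceAllThreads(my_trace, NULL);
}
```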
Move the following functions and type from frameobject.h to pyframe.h,
so that the standard <Python.h> header provides the frame getter
functions (see the sketch after this list):
* PyFrame_Check()
* PyFrame_GetBack()
* PyFrame_GetBuiltins()
* PyFrame_GetGenerator()
* PyFrame_GetGlobals()
* PyFrame_GetLasti()
* PyFrame_GetLocals()
* PyFrame_Type
Remove #include "frameobject.h" from many C files. It's no longer
needed.
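A minimal sketch using two of the getters with only <Python.h> included:

```c
#include <Python.h>   /* frameobject.h is no longer needed */

static void
print_caller_lasti(PyFrameObject *frame)
{
    PyFrameObject *back = PyFrame_GetBack(frame);  /* new reference, or NULL */
    if (back != NULL) {
        printf("caller lasti: %d\n", PyFrame_GetLasti(back));
        Py_DECREF(back);
    }
}
```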
This was added for bpo-40514 (gh-84694) to test out a per-interpreter GIL. However, it has since proven unnecessary to keep the experiment in the repo. (It can be done as a branch in a fork like normal.) So here we are removing:
* the configure option
* the macro
* the code enabled by the macro
Fix the __lltrace__ debug feature if the stdout encoding is not UTF-8.
If the stdout encoding is not UTF-8, the first call to
lltrace_resume_frame() indirectly sets lltrace to 0 when calling
unicode_check_encoding_errors(), which calls
encodings.search_function().
Currently, calling Py_EnterRecursiveCall() and
Py_LeaveRecursiveCall() may use either a function call or a static
inline function call, depending on whether the internal pycore_ceval.h
header file is included. Use a different name for the static inline
function to ensure that it is always used in Python internals, for
best performance. This is the same approach as PyThreadState_GET()
(function call) vs _PyThreadState_GET() (static inline function);
see the sketch after the list below.
* Rename _Py_EnterRecursiveCall() to _Py_EnterRecursiveCallTstate()
* Rename _Py_LeaveRecursiveCall() to _Py_LeaveRecursiveCallTstate()
* pycore_ceval.h: Rename Py_EnterRecursiveCall() to
_Py_EnterRecursiveCall() and Py_LeaveRecursiveCall() to
_Py_LeaveRecursiveCall()
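A sketch of the resulting naming pattern (the function bodies are illustrative):

```c
/* pycore_ceval.h: always a static inline function, so Python internals
   unconditionally get the fast path. */
static inline int
_Py_EnterRecursiveCallTstate(PyThreadState *tstate, const char *where)
{
    /* ... bump the recursion counter on tstate and check the limit ... */
    return 0;
}

/* Public C API: a regular exported function built on the inline one,
   mirroring PyThreadState_GET() vs _PyThreadState_GET(). */
int
Py_EnterRecursiveCall(const char *where)
{
    return _Py_EnterRecursiveCallTstate(PyThreadState_GET(), where);
}
```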
* Check the types of PRECALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS
* fix PRECALL_NO_KW_METHOD_DESCRIPTOR_NOARGS as well
* fix PRECALL_NO_KW_METHOD_DESCRIPTOR_O
* fix PRECALL_NO_KW_METHOD_DESCRIPTOR_FAST
Move the following API from Include/opcode.h (public C API) to a new
Include/internal/pycore_opcode.h header file (internal C API):
* EXTRA_CASES
* _PyOpcode_Caches
* _PyOpcode_Deopt
* _PyOpcode_Jump
* _PyOpcode_OpName
* _PyOpcode_RelativeJump
Py_DECREF, Py_XDECREF, Py_IS_TYPE, _Py_atomic_load_32bit_impl
and _Py_DECREF_SPECIALIZED are redefined as macros
that completely replace the inline functions of the same name.
Three of these came out in the top four of functions that (in MSVC)
somehow weren't inlined.
Co-authored-by: Victor Stinner <vstinner@python.org>
Co-authored-by: Dennis Sweeney <36520290+sweeneyde@users.noreply.github.com>
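A simplified before/after sketch for Py_DECREF (the real macro also handles reference-count debugging builds):

```c
/* Before: a static inline function that MSVC sometimes declined to inline. */
static inline void
Py_DECREF(PyObject *op)
{
    if (--op->ob_refcnt == 0) {
        _Py_Dealloc(op);
    }
}

/* After: a macro with the same semantics, always expanded in place. */
#define Py_DECREF(op) \
    do { \
        PyObject *_py_tmp = _PyObject_CAST(op); \
        if (--_py_tmp->ob_refcnt == 0) { \
            _Py_Dealloc(_py_tmp); \
        } \
    } while (0)
```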
Apparently a switch on an 8-bit quantity where all cases are
present generates a more efficient jump (doing only one indexed
memory load instead of two).
So we make opcode and use_tracing uint8_t, and generate a macro
full of extra `case NNN:` lines for all unused opcodes.
See https://github.com/faster-cpython/ideas/issues/321#issuecomment-1103263673
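Schematically (simplified; EXTRA_CASES is generated, and the error label is illustrative):

```c
uint8_t opcode = _Py_OPCODE(*next_instr);  /* 8-bit switch variable */
switch (opcode) {
    /* ... one case per defined opcode ... */
    EXTRA_CASES  /* expands to `case NNN:` for every unused opcode */
        /* reaching any unused opcode is an error */
        goto unknown_opcode;
}
```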
* Transform opcodes into opnames
* Print the whole stack at each opcode, and eliminate the prtrace output at each individual push/pop/stack adjustment
* Display info about the function at each resume_frame
Remove the Include/code.h header file. C extensions should only
include the main <Python.h> header file.
Python.h now includes Include/cpython/code.h directly instead.
* Moves the bytecode to the end of the corresponding PyCodeObject, and quickens it in-place (see the layout sketch after this list).
* Removes the almost-always-unused co_varnames, co_freevars, and co_cellvars member caches
* _PyOpcode_Deopt is a new mapping from all opcodes to their un-quickened forms.
* _PyOpcode_InlineCacheEntries is renamed to _PyOpcode_Caches
* _Py_IncrementCountAndMaybeQuicken is renamed to _PyCode_Warmup
* _Py_Quicken is renamed to _PyCode_Quicken
* _co_quickened is renamed to _co_code_adaptive (and is now a read-only memoryview).
* The compiler no longer emits unused nonzero opargs.
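A sketch of the resulting layout (field names from the list above; the declaration is simplified):

```c
struct PyCodeObject_sketch {
    PyObject_VAR_HEAD
    /* ... names, constants, flags, ... */
    /* The bytecode is stored inline at the end of the object and is
       quickened in place; _co_code_adaptive exposes it read-only. */
    char co_code_adaptive[1];
};
```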
Remove the private undocumented function
_PyEval_GetCoroutineOriginTrackingDepth() from the C API. Call the
public sys.get_coroutine_origin_tracking_depth() function instead.
Change the internal function
_PyEval_SetCoroutineOriginTrackingDepth():
* Remove the 'tstate' parameter;
* Add a return value and raise an exception if the depth is negative;
* No longer export the function: call the public
sys.set_coroutine_origin_tracking_depth() function instead.
Also make the function declarations in pycore_ceval.h uniform.
When an exception is created in a nested call to PyObject_GetAttr,
outer calls will override the context information of the
AttributeError that was already set in the most internal call.
This causes the suggestions we create to not work properly, since the
attribute name and object we end up using are the incorrect ones.
To avoid this, first check whether these attributes are already
set and bail out if that's the case.
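A hedged sketch of that guard (the name/obj fields of AttributeError are real; the surrounding code is illustrative):

```c
/* Inside the code that attaches suggestion context to an AttributeError: */
PyAttributeErrorObject *exc = (PyAttributeErrorObject *)value;
if (exc->name == NULL && exc->obj == NULL) {
    /* Only the most internal call sets the context; outer calls bail out. */
    exc->name = Py_NewRef(name);
    exc->obj = Py_NewRef(obj);
}
```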
Also rename struct _cframe to struct _PyCFrame.
Add a comment suggesting the use of public functions rather than
direct access to the private _PyCFrame structure.
We're no longer using _Py_IDENTIFIER() (or _Py_static_string()) in any core CPython code. It is still used in a number of non-builtin stdlib modules.
The replacement is: PyUnicodeObject (not pointer) fields under _PyRuntimeState, statically initialized as part of _PyRuntime. A new _Py_GET_GLOBAL_IDENTIFIER() macro facilitates lookup of the fields (along with _Py_GET_GLOBAL_STRING() for non-identifier strings).
https://bugs.python.org/issue46541#msg411799 explains the rationale for this change.
The core of the change is in:
* (new) Include/internal/pycore_global_strings.h - the declarations for the global strings, along with the macros
* Include/internal/pycore_runtime_init.h - added the static initializers for the global strings
* Include/internal/pycore_global_objects.h - where the struct in pycore_global_strings.h is hooked into _PyRuntimeState
* Tools/scripts/generate_global_objects.py - added generation of the global string declarations and static initializers
I've also added a --check flag to generate_global_objects.py (along with make check-global-objects) to check for unused global strings. That check is added to the PR CI config.
The remainder of this change updates the core code to use _Py_GET_GLOBAL_IDENTIFIER() instead of _Py_IDENTIFIER() and the related _Py*Id functions (likewise for _Py_GET_GLOBAL_STRING() instead of _Py_static_string()). This includes adding a few functions where there wasn't already an alternative to _Py*Id(), replacing the _Py_Identifier * parameter with PyObject *.
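A before/after sketch of the conversion (treat the exact expansion of _Py_GET_GLOBAL_IDENTIFIER() as approximate; it is defined via pycore_global_strings.h):

```c
/* Before: a lazily interned, per-file static identifier. */
_Py_IDENTIFIER(send);
PyObject *meth = _PyObject_GetAttrId(obj, &PyId_send);

/* After: a statically initialized string embedded in _PyRuntimeState. */
PyObject *name = _Py_GET_GLOBAL_IDENTIFIER(send);
PyObject *meth = PyObject_GetAttr(obj, name);
```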
The following are not changed (yet):
* stop using _Py_IDENTIFIER() in the stdlib modules
* (maybe) get rid of _Py_IDENTIFIER(), etc. entirely -- this may not be doable, as at least one package on PyPI is using this (private) API
* (maybe) intern the strings during runtime init
https://bugs.python.org/issue46541
* Add PRECALL_FUNCTION opcode.
* Move 'call shape' variables into a struct.
* Replace CALL_NO_KW and CALL_KW with KW_NAMES and CALL instructions.
* Specialize for builtin methods using the METH_FASTCALL | METH_KEYWORDS protocol.
* Allow kwnames for specialized calls to builtin types.
* Specialize calls to tuple(arg) and str(arg).
* Add RETURN_GENERATOR and JUMP_NO_INTERRUPT opcodes.
* Trim frame and generator by word each.
* Minor refactor of frame.c
* Update test.test_sys to account for smaller frames.
* Treat generator functions as normal functions when evaluating and specializing.
Previously, the main interpreter was allocated on the heap during runtime initialization. Here we instead embed it into _PyRuntimeState, which means it is statically allocated as part of the _PyRuntime global. The same goes for the initial thread state (of each interpreter, including the main one). Consequently there are fewer allocations during runtime/interpreter init, fewer possible failures, and better memory locality.
FYI, this also helps efforts to consolidate globals, which in turns helps work on subinterpreter isolation.
https://bugs.python.org/issue45953
* Do not PUSH/POP traceback or type to the stack as part of exc_info
* Remove exc_traceback and exc_type from _PyErr_StackItem (see the sketch after this list)
* Add to what's new, because this change breaks things like Cython
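The slimmed-down struct is roughly (sketch):

```c
typedef struct _err_stackitem {
    /* Only the exception value is stored; its type and traceback are
       derived from it on demand. */
    PyObject *exc_value;
    struct _err_stackitem *previous_item;
} _PyErr_StackItem;
```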
This parallels _PyRuntimeState.interpreters. Doing this helps make it more clear what part of PyInterpreterState relates to its threads.
https://bugs.python.org/issue46008
This falls into the category of keeping allocation and initialization separate. It also allows us to use _PyEval_InitState() safely in functions that return void.
https://bugs.python.org/issue46008
* Make generator, coroutine and async gen structs all the same size.
* Store the interpreter frame in the generator (and coroutine). Reduces the number of allocations needed for a generator from two to one.
* Split exit paths into exceptional and non-exceptional.
* Move exit tracing code to individual bytecodes.
* Wrap all trace entry and exit events in macros to make them clearer and easier to enhance.
* Move return sequence into RETURN_VALUE, YIELD_VALUE and YIELD_FROM. Distinguish between normal trace events and dtrace events.
* Make internal APIs that take PyFrameConstructor take a PyFunctionObject instead.
* Add a reference to the function to the frame; borrow references to builtins and globals.
* Add COPY_FREE_VARS instruction to allow specialization of calls to inner functions.