* Moves the bytecode to the end of the corresponding PyCodeObject, and quickens it in-place.
* Removes the almost-always-unused co_varnames, co_freevars, and co_cellvars member caches
* _PyOpcode_Deopt is a new mapping from all opcodes to their un-quickened forms (see the sketch after this list).
* _PyOpcode_InlineCacheEntries is renamed to _PyOpcode_Caches
* _Py_IncrementCountAndMaybeQuicken is renamed to _PyCode_Warmup
* _Py_Quicken is renamed to _PyCode_Quicken
* _co_quickened is renamed to _co_code_adaptive (and is now a read-only memoryview).
* Do not emit unused nonzero opargs anymore in the compiler.
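For illustration, a minimal hedged sketch of how the _PyOpcode_Deopt table might be consulted; the helper name base_opcode() is hypothetical, and the internal header providing _Py_CODEUNIT/_Py_OPCODE is an assumption:

#include "pycore_code.h"   /* _Py_CODEUNIT, _Py_OPCODE (internal C API; header location assumed) */

static int
base_opcode(_Py_CODEUNIT instruction)
{
    int opcode = _Py_OPCODE(instruction);
    /* _PyOpcode_Deopt maps specialized opcodes back to their base
       forms; it is the identity for unquickened opcodes. */
    return _PyOpcode_Deopt[opcode];
}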
Remove the private undocumented function
_PyEval_GetCoroutineOriginTrackingDepth() from the C API. Call the
public sys.get_coroutine_origin_tracking_depth() function instead.
Change the internal function
_PyEval_SetCoroutineOriginTrackingDepth():
* Remove the 'tstate' parameter;
* Add a return value and raise an exception if the depth is negative;
* No longer export the function: call the public
sys.set_coroutine_origin_tracking_depth() function instead.
Also make the function declarations in pycore_ceval.h consistent.
When an exception is created in a nested call to PyObject_GetAttr, any
external calls will override the context information of the
AttributeError that we have already placed in the innermost call.
This causes the suggestions we create to not work properly, as the
attribute name and object that we end up using are the incorrect ones.
To avoid this, we need to first check if these attributes are already
set and bail out if that's the case.
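A minimal hedged sketch of that guard, assuming the name/obj fields of PyAttributeErrorObject; the helper name is illustrative:

static void
set_attribute_error_context(PyAttributeErrorObject *exc,
                            PyObject *name, PyObject *obj)
{
    /* Bail out if a more internal call already set the context. */
    if (exc->name != NULL || exc->obj != NULL) {
        return;
    }
    Py_INCREF(name);
    exc->name = name;
    Py_INCREF(obj);
    exc->obj = obj;
}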
Also rename struct _cframe to struct _PyCFrame.
Add a comment suggesting the use of public functions rather than direct
use of the private _PyCFrame structure.
We're no longer using _Py_IDENTIFIER() (or _Py_static_string()) in any core CPython code. It is still used in a number of non-builtin stdlib modules.
The replacement is: PyUnicodeObject (not pointer) fields under _PyRuntimeState, statically initialized as part of _PyRuntime. A new _Py_GET_GLOBAL_IDENTIFIER() macro facilitates lookup of the fields (along with _Py_GET_GLOBAL_STRING() for non-identifier strings).
https://bugs.python.org/issue46541#msg411799 explains the rationale for this change.
The core of the change is in:
* (new) Include/internal/pycore_global_strings.h - the declarations for the global strings, along with the macros
* Include/internal/pycore_runtime_init.h - added the static initializers for the global strings
* Include/internal/pycore_global_objects.h - where the struct in pycore_global_strings.h is hooked into _PyRuntimeState
* Tools/scripts/generate_global_objects.py - added generation of the global string declarations and static initializers
I've also added a --check flag to generate_global_objects.py (along with make check-global-objects) to check for unused global strings. That check is added to the PR CI config.
The remainder of this change updates the core code to use _Py_GET_GLOBAL_IDENTIFIER() instead of _Py_IDENTIFIER() and the related _Py*Id functions (likewise for _Py_GET_GLOBAL_STRING() instead of _Py_static_string()). This includes adding a few functions where there wasn't already an alternative to _Py*Id(), replacing the _Py_Identifier * parameter with PyObject *.
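For illustration, a hedged before/after sketch of the conversion, assuming _Py_GET_GLOBAL_IDENTIFIER() yields the statically-initialized string object as described above:

/* Before: lazily-initialized static identifier, per file. */
_Py_IDENTIFIER(__name__);
PyObject *value = _PyObject_GetAttrId(obj, &PyId___name__);

/* After: statically-initialized global string under _PyRuntime. */
PyObject *name = _Py_GET_GLOBAL_IDENTIFIER(__name__);
PyObject *value = PyObject_GetAttr(obj, name);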
The following are not changed (yet):
* stop using _Py_IDENTIFIER() in the stdlib modules
* (maybe) get rid of _Py_IDENTIFIER(), etc. entirely -- this may not be doable as at least one package on PyPI is using this (private) API
* (maybe) intern the strings during runtime init
https://bugs.python.org/issue46541
* Add PRECALL_FUNCTION opcode.
* Move 'call shape' variables into a struct.
* Replace CALL_NO_KW and CALL_KW with KW_NAMES and CALL instructions.
* Specialize calls to builtin methods that use the METH_FASTCALL | METH_KEYWORDS protocol.
* Allow kwnames for specialized calls to builtin types.
* Specialize calls to tuple(arg) and str(arg).
* Add RETURN_GENERATOR and JUMP_NO_INTERRUPT opcodes.
* Trim frame and generator by word each.
* Minor refactor of frame.c
* Update test.test_sys to account for smaller frames.
* Treat generator functions as normal functions when evaluating and specializing.
Previously, the main interpreter was allocated on the heap during runtime initialization. Here we instead embed it into _PyRuntimeState, which means it is statically allocated as part of the _PyRuntime global. The same goes for the initial thread state (of each interpreter, including the main one). Consequently there are fewer allocations during runtime/interpreter init, fewer possible failures, and better memory locality.
FYI, this also helps efforts to consolidate globals, which in turn helps work on subinterpreter isolation.
https://bugs.python.org/issue45953
* Do not PUSH/POP traceback or type to the stack as part of exc_info
* Remove exc_traceback and exc_type from _PyErr_StackItem
* Add to what's new, because this change breaks things like Cython
This parallels _PyRuntimeState.interpreters. Doing this helps make it more clear what part of PyInterpreterState relates to its threads.
https://bugs.python.org/issue46008
This falls into the category of keeping allocation and initialization separate. It also allows us to use _PyEval_InitState() safely in functions that return void.
https://bugs.python.org/issue46008
* Make generator, coroutine and async gen structs all the same size.
* Store interpreter frame in generator (and coroutine). Reduces the number of allocations needed for a generator from two to one.
* Split exit paths into exceptional and non-exceptional.
* Move exit tracing code to individual bytecodes.
* Wrap all trace entry and exit events in macros to make them clearer and easier to enhance.
* Move return sequence into RETURN_VALUE, YIELD_VALUE and YIELD_FROM. Distinguish between normal trace events and dtrace events.
* Make internal APIs that take PyFrameConstructor take a PyFunctionObject instead.
* Add a reference to the function to the frame; borrow references to builtins and globals.
* Add COPY_FREE_VARS instruction to allow specialization of calls to inner functions.
* Avoid making C calls for most calls to Python functions.
* Change initialize_locals(steal=true) and _PyTuple_FromArraySteal to consume the argument references regardless of whether they succeed or fail.
Add PyThreadState_EnterTracing() and PyThreadState_LeaveTracing()
functions to the limited C API to suspend and resume tracing and
profiling.
Add a unit test on the PyThreadState C API to _testcapi.
Also add the internal _PyThreadState_DisableTracing() and
_PyThreadState_ResetTracing() functions.
* Never change types' cached keys. It could invalidate inline attribute objects.
* Lazily create object dictionaries.
* Update specialization of LOAD/STORE_ATTR.
* Don't update shared keys version for deletion of value.
* Update gdb support to handle instance values.
* Rename SPLIT_KEYS opcodes to INSTANCE_VALUE.
Redefining the PyThreadState_GET() macro in pycore_pystate.h is
useless since it doesn't affect files that don't include it. Either use
_PyThreadState_GET() directly, or don't use the pycore_pystate.h internal
C API. For example, the _testcapi extension doesn't use the internal C
API, but uses the public PyThreadState_Get() function instead.
Replace PyThreadState_Get() with _PyThreadState_GET(). The
_PyThreadState_GET() macro is more efficient than the PyThreadState_Get()
and PyThreadState_GET() function calls, which can fail with a fatal
Python error.
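For example (a sketch contrasting the two inside core code):

#include "pycore_pystate.h"   /* _PyThreadState_GET() */

static void
example(void)
{
    /* Public C API: a function call; aborts with a fatal Python error
       if no thread state is set. */
    PyThreadState *tstate1 = PyThreadState_Get();

    /* Internal C API: a direct inline read; no call overhead, never
       triggers a fatal error (may return NULL). */
    PyThreadState *tstate2 = _PyThreadState_GET();

    (void)tstate1;
    (void)tstate2;
}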
posixmodule.c and _ctypes extension now include <windows.h> before
pycore header files (like pycore_call.h).
_PyTraceback_Add() now uses _PyErr_Fetch()/_PyErr_Restore() instead
of PyErr_Fetch()/PyErr_Restore().
The _decimal and _xxsubinterpreters extensions are now built with the
Py_BUILD_CORE_MODULE macro defined to get access to the internal C
API.
* Move _PyObject_CallNoArgs() to pycore_call.h (internal C API).
* _ssl, _sqlite and _testcapi extensions now call the public
PyObject_CallNoArgs() function, rather than _PyObject_CallNoArgs().
* _lsprof extension is now built with Py_BUILD_CORE_MODULE macro
defined to get access to internal _PyObject_CallNoArgs().
Fix typo in the private _PyObject_CallNoArg() function name: rename
it to _PyObject_CallNoArgs() to be consistent with the public
function PyObject_CallNoArgs().
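For example, extensions can use the public function like this (minimal sketch):

static PyObject *
call_no_args(PyObject *callable)
{
    /* Equivalent to "callable()" in Python; returns a new reference
       or NULL with an exception set. */
    return PyObject_CallNoArgs(callable);
}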
Places the locals between the specials and the stack. This is the more "natural" layout for a C struct; it makes the code simpler and gives a slight speedup (~1%).
* Refactor dispatch logic to make flow of control clearer. Moves lltrace and dxprofile instrumentation into DISPATCH macro.
* Remove switch in interpreter loop when using computed gotos. There is no need for two nearly-duplicate dispatch tables.
* Generalize cache names for LOAD_ATTR to allow store and delete specializations.
* Factor out specialization of attribute dictionary access.
* Specialize STORE_ATTR.
* Convert "specials" array to InterpreterFrame struct, adding f_lasti, f_state and other non-debug FrameObject fields to it.
* Refactor calls, pushing the call to the interpreter upward toward _PyEval_Vector.
* Compute f_back when on thread stack, only filling in value when frame object outlives stack invocation.
* Move ownership of InterpreterFrame in generator from frame object to generator object.
* Do not create frame objects for Python calls.
* Do not create frame objects for generators.
A TypeError is now raised instead of an AttributeError in
"with" and "async with" statements for objects which do not
support the context manager or asynchronous context manager
protocols, respectively.
* Specialize obj.__class__ with LOAD_ATTR_SLOT
* Specialize instance attribute lookup with attribute on class, provided attribute on class is not an overriding descriptor.
* Add stat for how many times the unquickened instruction has executed.
Currently, if an arg value escapes (into the closure for an inner function) we end up allocating two indices in the fast locals even though only one gets used. Additionally, using the lower index would be better in some cases, such as with no-arg `super()`. To address this, we update the compiler to fix the offsets so each variable only gets one "fast local". As a consequence, now some cell offsets are interspersed with the locals (only when an arg escapes to an inner function).
https://bugs.python.org/issue43693
* Specialize LOAD_ATTR with LOAD_ATTR_SLOT and LOAD_ATTR_SPLIT_KEYS
* Move dict-common.h to internal/pycore_dict.h
* Add LOAD_ATTR_WITH_HINT specialized opcode.
* Quicken code in a function if it contains loops.
* Specialize LOAD_ATTR for module attributes.
* Add specialization stats
This was reverted in GH-26596 (commit 6d518bb) due to some bad memory accesses.
* Add the MAKE_CELL opcode. (gh-26396)
The memory accesses have been fixed.
https://bugs.python.org/issue43693
This moves logic out of the frame initialization code and into the compiler and eval loop. Doing so simplifies the runtime code and allows us to optimize it better.
https://bugs.python.org/issue43693
These were reverted in gh-26530 (commit 17c4edc) due to refleaks.
* 2c1e258 - Compute deref offsets in compiler (gh-25152)
* b2bf2bc - Add new internal code objects fields: co_fastlocalnames and co_fastlocalkinds. (gh-26388)
This change fixes the refleaks.
https://bugs.python.org/issue43693
* Add co_firstinstr field to code object.
* Implement barebones quickening.
* Use non-quickened bytecode when tracing.
* Add NEWS item
* Add new file to Windows build.
* Don't specialize instructions with EXTENDED_ARG.
* Revert "bpo-43693: Compute deref offsets in compiler (gh-25152)"
This reverts commit b2bf2bc1ec.
* Revert "bpo-43693: Add new internal code objects fields: co_fastlocalnames and co_fastlocalkinds. (gh-26388)"
This reverts commit 2c1e2583fd.
These two commits are breaking the refleak buildbots.
Merges locals and cells into a single array.
Saves a pointer in the interpreter and means that we don't need the LOAD_CLOSURE opcode any more
https://bugs.python.org/issue43693
A number of places in the code base (notably ceval.c and frameobject.c) rely on mapping variable names to indices in the frame "locals plus" array (AKA fast locals), and thus opargs. Currently the compiler indirectly encodes that information on the code object as the tuples co_varnames, co_cellvars, and co_freevars. At runtime the dependent code must calculate the proper mapping from those, which isn't ideal and impacts performance-sensitive sections. This is something we can easily address in the compiler instead.
This change addresses the situation by replacing internal use of co_varnames, etc. with a single combined tuple of names in locals-plus order, along with a minimal array mapping each to its kind (local vs. cell vs. free). These two new PyCodeObject fields, co_fastlocalnames and co_fastlocalkinds, are not exposed to Python code for now, but co_varnames, etc. are still available with the same values as before (though computed lazily).
Aside from the (mild) performance improvement, there are a number of other benefits:
* there's now a clear, direct relationship between locals-plus and variables
* code that relies on the locals-plus-to-name mapping is simpler
* marshaled code objects are smaller and serialize/de-serialize faster
Also note that we can take this approach further by expanding the possible values in co_fastlocalkinds to include specific argument types (e.g. positional-only, kwargs). Doing so would allow further speed-ups in _PyEval_MakeFrameVector(), which is where args get unpacked into the locals-plus array. It would also allow us to shrink marshaled code objects even further.
https://bugs.python.org/issue43693
* Move up the comment about fields used in hashing/comparison.
* Group the fields more clearly.
* Add co_ncellvars and co_nfreevars.
* Raise ValueError if nlocals != len(varnames), rather than aborting.
* Remove 'zombie' frames. We won't need them once we are allocating fixed-size frames.
* Add co_nlocalplus field to code object to avoid recomputing size of locals + frees + cells.
* Move locals, cells and freevars out of frame object into separate memory buffer.
* Use per-threadstate allocated memory chunks for local variables.
* Move globals and builtins from frame object to per-thread stack.
* Move the (slow) locals from the frame object to the per-thread stack.
* Move internal frame functions to internal header.
"Zero cost" exception handling.
* Uses a lookup table to determine how to handle exceptions.
* Removes SETUP_FINALLY and POP_TOP block instructions, eliminating (most of) the runtime overhead of try statements.
* Reduces the size of the frame object by about 60%.
* Add Py_TPFLAGS_SEQUENCE and Py_TPFLAGS_MAPPING, add to all relevant standard builtin classes.
* Set relevant flags on collections.abc.Sequence and Mapping.
* Use flags in MATCH_SEQUENCE and MATCH_MAPPING opcodes.
* Inherit Py_TPFLAGS_SEQUENCE and Py_TPFLAGS_MAPPING.
* Add NEWS
* Remove interpreter-state map_abc and seq_abc fields.
When printing a NameError raised by the interpreter, PyErr_Display
will offer suggestions of similar variable names in the function that the exception
was raised from:
>>> schwarzschild_black_hole = None
>>> schwarschild_black_hole
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'schwarschild_black_hole' is not defined. Did you mean: schwarzschild_black_hole?
* Remove redundant tracing_possible field from interpreter state.
* Move 'use_tracing' from tstate onto C stack, for fastest possible checking in dispatch logic.
* Add comments stressing the importance of stack discipline when dealing with CFrames.
* Add NEWS
Add the Py_Is(x, y) function to test if the 'x' object is the 'y'
object, the same as "x is y" in Python. Also add the Py_IsNone(),
Py_IsTrue() and Py_IsFalse() functions to test if an object is,
respectively, the None singleton, the True singleton or the False
singleton.
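For example (illustrative helper):

static const char *
describe(PyObject *x, PyObject *y)
{
    if (Py_Is(x, y)) {        /* same as "x is y" in Python */
        return "same object";
    }
    if (Py_IsNone(x)) {       /* same as "x is None" */
        return "None";
    }
    if (Py_IsTrue(x)) {       /* same as "x is True" */
        return "True";
    }
    if (Py_IsFalse(x)) {      /* same as "x is False" */
        return "False";
    }
    return "something else";
}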
* Do fetch and decode at end of opcode then jump directly to switch.
Should allow compilers that don't support computed-gotos, specifically MSVC,
to generate better code.
* Move the check for sending None to a starting generator or coroutine into the bytecode.
* Document new bytecode and make it fail gracefully if mis-compiled.
* Use instruction offset, rather than bytecode offset. Streamlines interpreter dispatch a bit, and removes most EXTENDED_ARGs for jumps.
* Change some uses of PyCode_Addr2Line to PyFrame_GetLineNumber
* Remove an assertion which required the CO_NEWLOCALS and CO_OPTIMIZED
code flags. It is ok to call this function on a code object without
these flags set.
* Fix reference counting on builtins: remove Py_DECREF(). Fix a
regression introduced in commit 46496f9d12.
Also add a comment documenting that _PyEval_BuiltinsFromGlobals()
returns a borrowed reference.
* No longer save/restore the current exception. The function is no
longer called with an exception raised.
* No longer clear the current exception on error: it's now up to the
caller.
The types.FunctionType constructor now inherits the current builtins
if the globals dictionary has no "__builtins__" key, rather than
using {"None": None} as builtins: same behavior as eval() and exec()
functions.
Defining a function with "def function(...): ..." in Python is not
affected; globals cannot be overridden with this syntax: it also
inherits the current builtins.
PyFrame_New(), PyEval_EvalCode(), PyEval_EvalCodeEx(),
PyFunction_New() and PyFunction_NewWithQualName() now inherit the
current builtins namespace if the globals dictionary has no
"__builtins__" key.
* Add _PyEval_GetBuiltins() function.
* _PyEval_BuiltinsFromGlobals() now uses _PyEval_GetBuiltins() if
builtins cannot be found in globals.
* Add tstate parameter to _PyEval_BuiltinsFromGlobals().
Pass the current interpreter (interp) rather than the current Python
thread state (tstate) to internal functions which only use the
interpreter.
Modified functions:
* _PyXXX_Fini() and _PyXXX_ClearFreeList() functions
* _PyEval_SignalAsyncExc(), make_pending_calls()
* _PySys_GetObject(), sys_set_object(), sys_set_object_id(), sys_set_object_str()
* should_audit(), set_flags_from_config(), make_flags()
* _PyAtExit_Call()
* init_stdio_encoding()
* etc.
Remove the private _PyErr_OCCURRED() macro: use the public
PyErr_Occurred() function instead.
CPython internals must use the internal _PyErr_Occurred(tstate)
function instead: it is the most efficient way to check if an
exception was raised.
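For example, inside CPython (sketch; the internal header is pycore_pyerrors.h):

#include "pycore_pyerrors.h"   /* _PyErr_Occurred() */

static int
exception_pending(PyThreadState *tstate)
{
    /* Reads tstate directly: no thread-state lookup, no function call. */
    return _PyErr_Occurred(tstate) != NULL;
}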
* Refactor _PyFrame_New_NoTrack() and PyFunction_NewWithQualName()
code.
* PyFrame_New() checks for _PyEval_BuiltinsFromGlobals() failure.
* Fix a ref leak in _PyEval_BuiltinsFromGlobals() error path.
* Complete PyFunction_GetModule() documentation: it returns a
borrowed reference and it can return NULL.
* Move _PyEval_BuiltinsFromGlobals() definition to the internal C
API.
* PyFunction_NewWithQualName() uses _Py_IDENTIFIER() API for the
"__name__" string to make it compatible with subinterpreters.
* Further refactoring of PyEval_EvalCode and friends. Break into make-frame, and eval-frame parts.
* Simplify function vector call using new _PyEval_Vector.
* Remove unused internal functions: _PyEval_EvalCodeWithName and _PyEval_EvalCode.
* Don't use legacy function PyEval_EvalCodeEx.
* Add test for frame.f_lineno with/without tracing.
* Make sure that frame.f_lineno is correct regardless of whether frame.f_trace is set.
* Update importlib
* Add NEWS
Reduce the memory footprint and improve the performance of loading modules that have many function annotations.
>>> sys.getsizeof({"a":"int","b":"int","return":"int"})
232
>>> sys.getsizeof(("a","int","b","int","return","int"))
88
The tuple is converted into a dict on the fly when `func.__annotations__` is first accessed.
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Co-authored-by: Inada Naoki <songofacandy@gmail.com>
On Windows, fix a regression in signal handling which prevented
interrupting a program using CTRL+C. The signal handler can be run in a
thread different from the Python thread, in which case the test
deciding if the thread can handle signals is wrong.
On Windows, _PyEval_SignalReceived() now always sets eval_breaker to
1 since it cannot test _Py_ThreadCanHandleSignals(), and
eval_frame_handle_pending() always calls
_Py_ThreadCanHandleSignals() to recompute eval_breaker.
These functions are considered unsafe because they suppress all internal
errors and can return a wrong result. PyDict_GetItemString() and
_PyDict_GetItemId() can also silence the current exception in rare cases.
Remove the no longer used _PyDict_GetItemId().
Add _PyDict_ContainsId() and rename _PyDict_Contains() to
_PyDict_Contains_KnownHash().
Remove the global _Py_CheckRecursionLimit variable: it has been
replaced by ceval.recursion_limit of the PyInterpreterState
structure.
There is no need to keep the variable for the stable ABI, since
Py_EnterRecursiveCall() and Py_LeaveRecursiveCall() were not usable
in Python 3.8 and older: these macros accessed PyThreadState members,
whereas the PyThreadState structure is opaque in the limited C API.
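For reference, a sketch of the public recursion-guard API, whose limit now lives in the interpreter state:

static PyObject *
guarded_repr(PyObject *obj)
{
    /* Py_EnterRecursiveCall() checks the recursion limit; on overflow
       it sets RecursionError and returns non-zero. */
    if (Py_EnterRecursiveCall(" while getting the repr of an object")) {
        return NULL;
    }
    PyObject *result = PyObject_Repr(obj);
    Py_LeaveRecursiveCall();
    return result;
}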
The new API makes it possible to efficiently send values into native
generators and coroutines, avoiding the use of StopIteration exceptions
to signal returns.
The ceval loop now uses this method instead of the old "private"
_PyGen_Send C API. This translates to a 1.6x performance improvement
for 'await' calls in micro-benchmarks.
Aside from CPython core improvements, this new API will also allow
Cython to generate more efficient code, benefiting high-performance
IO libraries like uvloop.
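A hedged sketch of driving a generator or coroutine through this protocol, assuming the public PyIter_Send() entry point and the PySendResult codes:

static PySendResult
send_one(PyObject *gen, PyObject *value, PyObject **result)
{
    PySendResult res = PyIter_Send(gen, value, result);
    switch (res) {
    case PYGEN_NEXT:     /* *result holds the yielded value */
        break;
    case PYGEN_RETURN:   /* *result holds the return value: no
                            StopIteration is raised */
        break;
    case PYGEN_ERROR:    /* an exception is set; *result is NULL */
        break;
    }
    return res;
}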
* Merge gen and frame state variables into one.
* Replace stack pointer with depth in PyFrameObject. Makes code easier to read and saves a word of memory.
If name is NULL, name is now set to co->co_name.
If qualname is NULL, qualname is now set to name.
qualname must not be NULL: it is used to build error messages.
Also clean up the code: declare variables where they are initialized.
Rename the "name" local variables to "varname" to avoid shadowing the
"name" parameter.
PyOS_AfterFork_Child() helper functions now return a PyStatus:
PyOS_AfterFork_Child() is now responsible for handling errors.
* Move _PySignal_AfterFork() to the internal C API
* Add #ifdef HAVE_FORK on _PyGILState_Reinit(), _PySignal_AfterFork()
and _PyInterpreterState_DeleteExceptMain().
In the experimental isolated subinterpreters build mode, the GIL is
now per-interpreter.
Move gil from _PyRuntimeState.ceval to PyInterpreterState.ceval.
new_interpreter() always gets the config from the main interpreter.
In the experimental isolated subinterpreters build mode,
_PyThreadState_GET() gets the autoTSSkey variable and
_PyThreadState_Swap() sets the autoTSSkey variable.
* Add _PyThreadState_GetTSS()
* _PyRuntimeState_GetThreadState() and _PyThreadState_GET()
return _PyThreadState_GetTSS()
* PyEval_SaveThread() sets the autoTSSkey variable to current Python
thread state rather than NULL.
* eval_frame_handle_pending() doesn't check that
_PyThreadState_Swap() result is NULL.
* _PyThreadState_Swap() gets the current Python thread state with
_PyThreadState_GetTSS() rather than
_PyRuntimeGILState_GetThreadState().
* PyGILState_Ensure() no longer checks _PyEval_ThreadsInitialized()
since it cannot access the current interpreter.
Move recursion_limit member from _PyRuntimeState.ceval to
PyInterpreterState.ceval.
* Py_SetRecursionLimit() now only sets _Py_CheckRecursionLimit
of ceval.c if the current Python thread is part of the main
interpreter.
* Inline _Py_MakeEndRecCheck() into _Py_LeaveRecursiveCall().
* Convert _Py_RecursionLimitLowerWaterMark() macro into a static
inline function.
If _PyCode_InitOpcache() fails in _PyEval_EvalFrameDefault(), use
"goto exit_eval_frame;" rather than "return NULL;" to exit the
function in a consistent state. For example, tstate->frame is now
reset properly.
Rename _PyInterpreterState_GET_UNSAFE() to _PyInterpreterState_GET()
for consistency with _PyThreadState_GET() and to have a shorter name
(it helps fit into 80 columns).
Also add "assert(tstate != NULL);" to the function.
Fix the signal handler: it now always uses the main interpreter,
rather than trying to get the current Python thread state.
The following functions now accept an interpreter, instead of a
Python thread state:
* _PyEval_SignalReceived()
* _Py_ThreadCanHandleSignals()
* _PyEval_AddPendingCall()
* COMPUTE_EVAL_BREAKER()
* SET_GIL_DROP_REQUEST(), RESET_GIL_DROP_REQUEST()
* SIGNAL_PENDING_CALLS(), UNSIGNAL_PENDING_CALLS()
* SIGNAL_PENDING_SIGNALS(), UNSIGNAL_PENDING_SIGNALS()
* SIGNAL_ASYNC_EXC(), UNSIGNAL_ASYNC_EXC()
Py_AddPendingCall() now uses the main interpreter if it fails to get
the current Python thread state.
Convert _PyThreadState_GET() and PyInterpreterState_GET_UNSAFE()
macros to static inline functions.
PyInterpreterState_New() is now responsible for creating pending calls,
and PyInterpreterState_Delete() now deletes pending calls.
* Rename _PyEval_InitThreads() to _PyEval_InitGIL() and rename
_PyEval_FiniThreads() to _PyEval_FiniGIL().
* _PyEval_InitState() and _PyEval_FiniState() now create and delete
pending calls. _PyEval_InitState() now returns -1 on memory
allocation failure.
* Add init_interp_create_gil() helper function: code shared by
Py_NewInterpreter() and Py_InitializeFromConfig().
* init_interp_create_gil() now also calls _PyEval_FiniGIL(),
_PyEval_InitGIL() and _PyGILState_Init() in subinterpreters, but
these functions now do nothing when called from a subinterpreter.
Add the _PyIndex_Check() function to the internal C API: a fast inlined
version of PyIndex_Check().
Add Include/internal/pycore_abstract.h header file.
Replace PyIndex_Check() with _PyIndex_Check() in C files of Objects
and Python subdirectories.
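For illustration (sketch):

#include "pycore_abstract.h"   /* _PyIndex_Check() */

static int
supports_index(PyObject *obj)
{
    /* Inlined check of the nb_index slot: avoids the function call
       made by the public PyIndex_Check(). */
    return _PyIndex_Check(obj);
}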
Remove the _PyRuntime.getframe hook and remove the _PyThreadState_GetFrame
macro which was an alias to _PyRuntime.getframe. They were only
exposed by the internal C API. Also remove the PyThreadFrameGetter type.
If a thread different than the main thread schedules a pending call
(Py_AddPendingCall()), the bytecode evaluation loop is no longer
interrupted at each bytecode instruction to check for pending calls
which cannot be executed. Only the main thread can execute pending
calls.
Previously, the bytecode evaluation loop was interrupted at each
instruction until the main thread executes pending calls.
* Add _Py_ThreadCanHandlePendingCalls() function.
* SIGNAL_PENDING_CALLS() now only sets eval_breaker to 1 if the
current thread can execute pending calls. Only the main thread can
execute pending calls.
COMPUTE_EVAL_BREAKER() now also checks if the Python thread state
belongs to the main interpreter. Don't break the evaluation loop if
there are pending signals but the Python thread state belongs to a
subinterpreter.
* Add _Py_IsMainThread() function.
* Add _Py_ThreadCanHandleSignals() function.
If a thread different than the main thread gets a signal, the
bytecode evaluation loop is no longer interrupted at each bytecode
instruction to check for pending signals which cannot be handled.
Only the main thread of the main interpreter can handle signals.
Previously, the bytecode evaluation loop was interrupted at each
instruction until the main thread handles signals.
Changes:
* COMPUTE_EVAL_BREAKER() and SIGNAL_PENDING_SIGNALS() no longer set
eval_breaker to 1 if the current thread cannot handle signals.
* take_gil() now always recomputes eval_breaker.
If Py_AddPendingCall() is called in a subinterpreter, the function is
now scheduled to be called from the subinterpreter, rather than being
called from the main interpreter.
Each subinterpreter now has its own list of scheduled calls.
* Move pending and eval_breaker fields from _PyRuntimeState.ceval
to PyInterpreterState.ceval.
* new_interpreter() now calls _PyEval_InitThreads() to create the
pending calls lock.
* Fix Py_AddPendingCall() for subinterpreters. It now calls
_PyThreadState_GET() which works in a subinterpreter if the
caller holds the GIL, and only falls back on
PyGILState_GetThisThreadState() if _PyThreadState_GET()
returns NULL.
bpo-37127, bpo-39984:
* trip_signal() and Py_AddPendingCall() now get the current Python
thread state using PyGILState_GetThisThreadState() rather than
_PyRuntimeState_GetThreadState() to be able to get it even if the
GIL is released.
* _PyEval_SignalReceived() now expects tstate rather than ceval.
* Remove the ceval parameter of _PyEval_AddPendingCall(): ceval is now
retrieved from the tstate parameter.
* _PyThreadState_DeleteCurrent() now takes tstate rather than
runtime.
* Add ensure_tstate_not_null() helper to pystate.c.
* Add _PyEval_ReleaseLock() function.
* _PyThreadState_DeleteCurrent() now calls
_PyEval_ReleaseLock(tstate) and frees PyThreadState memory after
this call, not before.
* PyGILState_Release(): rename "tcur" variable to "tstate".
* sys.settrace(), sys.setprofile() and _lsprof.Profiler.enable() now
properly report PySys_Audit() error if "sys.setprofile" or
"sys.settrace" audit event is denied.
* Add _PyEval_SetProfile() and _PyEval_SetTrace() functions: similar
to PyEval_SetProfile() and PyEval_SetTrace() but take a tstate
parameter and return -1 on error.
* Add _PyObject_FastCallTstate() function.
PyInterpreterState.eval_frame function now requires a tstate (Python
thread state) parameter.
Add private functions to the C API to get and set the frame
evaluation function (a sketch follows the list below):
* Add tstate parameter to _PyFrameEvalFunction function type.
* Add _PyInterpreterState_GetEvalFrameFunc() and
_PyInterpreterState_SetEvalFrameFunc() functions.
* Add tstate parameter to _PyEval_EvalFrameDefault().
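A hedged sketch of installing a custom frame evaluation function with the new signatures; my_eval_frame is illustrative and the header locations are assumptions:

#include "pycore_ceval.h"     /* _PyEval_EvalFrameDefault() (location assumed) */
#include "pycore_pystate.h"   /* _PyInterpreterState_SetEvalFrameFunc() (location assumed) */

static PyObject *
my_eval_frame(PyThreadState *tstate, PyFrameObject *frame, int throwflag)
{
    /* Custom bookkeeping could go here; then delegate. */
    return _PyEval_EvalFrameDefault(tstate, frame, throwflag);
}

static void
install_eval_frame(PyInterpreterState *interp)
{
    _PyInterpreterState_SetEvalFrameFunc(interp, my_eval_frame);
}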
PyGILState_Ensure() no longer calls PyEval_InitThreads() when a
new Python thread state is created. The GIL has been created by
Py_Initialize() since Python 3.7, so it is no longer necessary to call
PyEval_InitThreads() explicitly.
Add an assertion to ensure that the GIL is already created.
* Remove ceval parameter of take_gil(): get it from tstate.
* Move the exit_thread_if_finalizing() call inside take_gil(). Replace
exit_thread_if_finalizing() with tstate_must_exit(): the caller is
now responsible for calling PyThread_exit_thread().
* Move is_tstate_valid() assertion inside take_gil(). Remove
is_tstate_valid(): inline code into take_gil().
* Move gil_created() assertion inside take_gil().
* exit_thread_if_finalizing() now accesses the _PyRuntime variable
directly, rather than using tstate->interp->runtime, since tstate
can be a dangling pointer after Py_Finalize() has been called.
* exit_thread_if_finalizing() is now called *before* calling
take_gil(). _PyRuntime.finalizing is an atomic variable,
we don't need to hold the GIL to access it.
* Add an ensure_tstate_not_null() function to check that tstate is not
NULL at runtime. Check tstate earlier. take_gil() no longer
checks if tstate is NULL.
Cleanup:
* PyEval_RestoreThread() no longer saves/restores errno: it's already
done inside take_gil().
* PyEval_AcquireLock(), PyEval_AcquireThread(),
PyEval_RestoreThread() and _PyEval_EvalFrameDefault() now check if
tstate is valid with the new is_tstate_valid() function which uses
_PyMem_IsPtrFreed().
The Py_FatalError() function is replaced with a macro which
automatically logs the name of the current function, unless the
Py_LIMITED_API macro is defined.
Changes:
* Add _Py_FatalErrorFunc() function.
* Remove the function name from the message of Py_FatalError() calls
which included the function name.
* Update tests.
Convert _PyRuntimeState.finalizing field to an atomic variable:
* Rename it to _finalizing
* Change its type to _Py_atomic_address
* Add _PyRuntimeState_GetFinalizing() and _PyRuntimeState_SetFinalizing()
functions
* Remove _Py_CURRENTLY_FINALIZING() function: replace it with testing
directly _PyRuntimeState_GetFinalizing() value
Convert _PyRuntimeState_GetThreadState() to static inline function.
In the function `_PyEval_EvalFrameDefault`, the PREDICT and PREDICTED macros use the same identifier-creation scheme; sharing it between them reduces code repetition and ensures that the same identifier is generated.
gcc -Wcast-qual turns up a number of instances of casting away the constness of pointers. Some of these can be safely fixed by either:
Adding the const to the type cast, as in:
- return _PyUnicode_FromUCS1((unsigned char*)s, size);
+ return _PyUnicode_FromUCS1((const unsigned char*)s, size);
or removing the cast entirely, because it's not necessary (but probably was at one time), as in:
- PyDTrace_FUNCTION_ENTRY((char *)filename, (char *)funcname, lineno);
+ PyDTrace_FUNCTION_ENTRY(filename, funcname, lineno);
These changes do not change behavior, but they make it much easier to check for const-correctness errors.
The bulk of this patch was generated automatically with:
for name in \
PyObject_Vectorcall \
Py_TPFLAGS_HAVE_VECTORCALL \
PyObject_VectorcallMethod \
PyVectorcall_Function \
PyObject_CallOneArg \
PyObject_CallMethodNoArgs \
PyObject_CallMethodOneArg \
;
do
echo $name
git grep -lwz _$name | xargs -0 sed -i "s/\b_$name\b/$name/g"
done
old=_PyObject_FastCallDict
new=PyObject_VectorcallDict
git grep -lwz $old | xargs -0 sed -i "s/\b$old\b/$new/g"
and then cleaned up:
- Revert changes in docs & news
- Revert changes to backcompat defines in headers
- Nudge misaligned comments
In ceval.c, replace a few Py_FatalError() calls made when tstate is
NULL with assert(tstate != NULL).
PyEval_AcquireThread(), PyEval_ReleaseThread() and
PyEval_RestoreThread() must never be called with a NULL tstate.
* Add DICT_UPDATE and DICT_MERGE bytecodes. Use them for ** unpacking.
* Remove BUILD_MAP_UNPACK and BUILD_MAP_UNPACK_WITH_CALL, as they are now unused.
* Update magic number for ** unpacking opcodes.
* Update dis.rst to incorporate new bytecodes.
* Add blurb entry.
* Add three new bytecodes: LIST_TO_TUPLE, LIST_EXTEND, SET_UPDATE. Use them to implement star unpacking expressions.
* Remove four bytecodes BUILD_LIST_UNPACK, BUILD_TUPLE_UNPACK, BUILD_SET_UNPACK and BUILD_TUPLE_UNPACK_WITH_CALL opcodes as they are now unused.
* Update magic number and dis.rst for new bytecodes.
* Reorder the __aenter__ and __aexit__ checks for async with
* Add assertions for async with body being skipped
* Swap __aexit__ and __aenter__ loading in the documentation
Break up COMPARE_OP into four logically distinct opcodes:
* COMPARE_OP for rich comparisons
* IS_OP for 'is' and 'is not' tests
* CONTAINS_OP for 'in' and 'not in' tests
* JUMP_IF_NOT_EXC_MATCH for checking exceptions in 'try-except' statements.
When producing the bytecode of exception handlers with name binding (like `except Exception as e`), we need to produce a try-finally block to make sure that the name is deleted after the handler executes, to prevent cycles in the stack frame objects. The bytecode associated with this try-finally block has no source lines associated with it, which was causing problems when the tracing functionality ran over it.
Remove BEGIN_FINALLY, END_FINALLY, CALL_FINALLY and POP_FINALLY bytecodes. Implement finally blocks by code duplication.
Reimplement frame.lineno setter using line numbers rather than bytecode offsets.
Add a PyInterpreterState.runtime field: a reference to the _PyRuntime
global variable. This field exists so that functions don't have to be
passed runtime in addition to tstate. Get runtime from tstate:
tstate->interp->runtime.
Remove "_PyRuntimeState *runtime" parameter from functions already
taking a "PyThreadState *tstate" parameter.
_PyGC_Init() first parameter becomes "PyThreadState *tstate".