A TypeError is now raised instead of an AttributeError in
"with" and "async with" statements for objects which do not
support the context manager or asynchronous context manager
protocols, respectively.
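As a purely illustrative sketch (the class name below is invented), running the following against an interpreter with this change prints TypeError rather than AttributeError:
class NotAContextManager:
    pass
try:
    with NotAContextManager():
        pass
except TypeError as exc:
    # Before this change the missing __enter__ surfaced as an AttributeError.
    print(type(exc).__name__)  # TypeError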
* Specialize obj.__class__ with LOAD_ATTR_SLOT
* Specialize instance attribute lookup when the attribute is defined on the class, provided the class attribute is not an overriding descriptor.
* Add stat for how many times the unquickened instruction has executed.
Currently, if an arg value escapes (into the closure for an inner function) we end up allocating two indices in the fast locals even though only one gets used. Additionally, using the lower index would be better in some cases, such as with no-arg `super()`. To address this, we update the compiler to fix the offsets so each variable only gets one "fast local". As a consequence, now some cell offsets are interspersed with the locals (only when an arg escapes to an inner function).
https://bugs.python.org/issue43693
* Specialize LOAD_ATTR with LOAD_ATTR_SLOT and LOAD_ATTR_SPLIT_KEYS
* Move dict-common.h to internal/pycore_dict.h
* Add LOAD_ATTR_WITH_HINT specialized opcode.
* Quicken in function if loopy
* Specialize LOAD_ATTR for module attributes.
* Add specialization stats
This was reverted in GH-26596 (commit 6d518bb) due to some bad memory accesses.
* Add the MAKE_CELL opcode. (gh-26396)
The memory accesses have been fixed.
https://bugs.python.org/issue43693
This moves logic out of the frame initialization code and into the compiler and eval loop. Doing so simplifies the runtime code and allows us to optimize it better.
https://bugs.python.org/issue43693
These were reverted in gh-26530 (commit 17c4edc) due to refleaks.
* 2c1e258 - Compute deref offsets in compiler (gh-25152)
* b2bf2bc - Add new internal code objects fields: co_fastlocalnames and co_fastlocalkinds. (gh-26388)
This change fixes the refleaks.
https://bugs.python.org/issue43693
* Add co_firstinstr field to code object.
* Implement barebones quickening.
* Use non-quickened bytecode when tracing.
* Add NEWS item
* Add new file to Windows build.
* Don't specialize instructions with EXTENDED_ARG.
* Revert "bpo-43693: Compute deref offsets in compiler (gh-25152)"
This reverts commit b2bf2bc1ec.
* Revert "bpo-43693: Add new internal code objects fields: co_fastlocalnames and co_fastlocalkinds. (gh-26388)"
This reverts commit 2c1e2583fd.
These two commits are breaking the refleak buildbots.
Merges locals and cells into a single array.
Saves a pointer in the interpreter and means that we don't need the LOAD_CLOSURE opcode any more
https://bugs.python.org/issue43693
A number of places in the code base (notably ceval.c and frameobject.c) rely on mapping variable names to indices in the frame "locals plus" array (AKA fast locals), and thus opargs. Currently the compiler indirectly encodes that information on the code object as the tuples co_varnames, co_cellvars, and co_freevars. At runtime the dependent code must calculate the proper mapping from those, which isn't ideal and impacts performance-sensitive sections. This is something we can easily address in the compiler instead.
This change addresses the situation by replacing internal use of co_varnames, etc. with a single combined tuple of names in locals-plus order, along with a minimal array mapping each to its kind (local vs. cell vs. free). These two new PyCodeObject fields, co_fastlocalnames and co_fastlocalkinds, are not exposed to Python code for now, but co_varnames, etc. are still available with the same values as before (though computed lazily).
Aside from the (mild) performance impact, there are a number of other benefits:
* there's now a clear, direct relationship between locals-plus and variables
* code that relies on the locals-plus-to-name mapping is simpler
* marshaled code objects are smaller and serialize/de-serialize faster
Also note that we can take this approach further by expanding the possible values in co_fastlocalkinds to include specific argument types (e.g. positional-only, kwargs). Doing so would allow further speed-ups in _PyEval_MakeFrameVector(), which is where args get unpacked into the locals-plus array. It would also allow us to shrink marshaled code objects even further.
https://bugs.python.org/issue43693
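A small illustrative snippet (names invented here) showing that the derived tuples still look the same from Python code even though they are now computed lazily from the combined locals-plus data:
def outer(a):
    b = a + 1          # plain fast local
    def inner():
        return b       # 'b' escapes into a closure, so it is stored in a cell
    return inner
code = outer.__code__
print(code.co_varnames)               # ('a', 'inner')
print(code.co_cellvars)               # ('b',)
print(outer(1).__code__.co_freevars)  # ('b',)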
* Move up the comment about fields used in hashing/comparison.
* Group the fields more clearly.
* Add co_ncellvars and co_nfreevars.
* Raise ValueError if nlocals != len(varnames), rather than aborting.
* Remove 'zombie' frames. We won't need them once we are allocating fixed-size frames.
* Add co_nlocalplus field to code object to avoid recomputing size of locals + frees + cells.
* Move locals, cells and freevars out of frame object into separate memory buffer.
* Use per-threadstate allocated memory chunks for local variables.
* Move globals and builtins from frame object to per-thread stack.
* Move (slow) locals from frame object to per-thread stack.
* Move internal frame functions to internal header.
"Zero cost" exception handling.
* Uses a lookup table to determine how to handle exceptions.
* Removes SETUP_FINALLY and POP_TOP block instructions, eliminating (most of) the runtime overhead of try statements.
* Reduces the size of the frame object by about 60%.
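A rough way to observe the effect from Python code (disassembly details vary between versions, so treat this only as a sketch): with the lookup-table approach, no SETUP_FINALLY instruction is emitted for a plain try/except.
import dis
def f():
    try:
        return 1
    except ValueError:
        return 2
opnames = {instr.opname for instr in dis.get_instructions(f)}
print("SETUP_FINALLY" in opnames)  # False on interpreters with this change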
* Add Py_TPFLAGS_SEQUENCE and Py_TPFLAGS_MAPPING, add to all relevant standard builtin classes.
* Set relevant flags on collections.abc.Sequence and Mapping.
* Use flags in MATCH_SEQUENCE and MATCH_MAPPING opcodes.
* Inherit Py_TPFLAGS_SEQUENCE and Py_TPFLAGS_MAPPING.
* Add NEWS
* Remove interpreter-state map_abc and seq_abc fields.
When printing a NameError raised by the interpreter, PyErr_Display
will offer suggestions of similar variable names in the function that the exception
was raised from:
>>> schwarzschild_black_hole = None
>>> schwarschild_black_hole
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'schwarschild_black_hole' is not defined. Did you mean: schwarzschild_black_hole?
* Remove redundant tracing_possible field from interpreter state.
* Move 'use_tracing' from tstate onto C stack, for fastest possible checking in dispatch logic.
* Add comments stressing the importance of stack discipline when dealing with CFrames.
* Add NEWS
Add the Py_Is(x, y) function to test if the 'x' object is the 'y'
object, the same as "x is y" in Python. Add also the Py_IsNone(),
Py_IsTrue(), Py_IsFalse() functions to test if an object is,
respectively, the None singleton, the True singleton or the False
singleton.
* Do fetch and decode at end of opcode then jump directly to switch.
Should allow compilers that don't support computed-gotos, specifically MSVC,
to generate better code.
* Move the check for sending None to a just-started generator or coroutine into the bytecode.
* Document new bytecode and make it fail gracefully if mis-compiled.
* Use instruction offset, rather than bytecode offset. Streamlines interpreter dispatch a bit, and removes most EXTENDED_ARGs for jumps.
* Change some uses of PyCode_Addr2Line to PyFrame_GetLineNumber
* Remove an assertion which required CO_NEWLOCALS and CO_OPTIMIZED
code flags. It is ok to call this function on a code object without
these flags set.
* Fix reference counting on builtins: remove Py_DECREF().
Fix regression introduced in the
commit 46496f9d12.
Add also a comment to document that _PyEval_BuiltinsFromGlobals()
returns a borrowed reference.
* No longer save/restore the current exception. It is no longer used
with an exception raised.
* No longer clear the current exception on error: it's now up to the
caller.
The types.FunctionType constructor now inherits the current builtins
if the globals dictionary has no "__builtins__" key, rather than
using {"None": None} as builtins: same behavior as eval() and exec()
functions.
Defining a function with "def function(...): ..." in Python is not
affected, globals cannot be overridden with this syntax: it also
inherits the current builtins.
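A minimal sketch of the Python-visible side of this change (names invented for illustration); on an interpreter with this behavior, len() resolves through the inherited builtins even though the globals dict has no "__builtins__" key:
import types
code = compile("result = len('abc')", "<example>", "exec")
globs = {}  # deliberately no "__builtins__" key
func = types.FunctionType(code, globs)
func()
print(globs["result"])  # 3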
PyFrame_New(), PyEval_EvalCode(), PyEval_EvalCodeEx(),
PyFunction_New() and PyFunction_NewWithQualName() now inherit the
current builtins namespace if the globals dictionary has no
"__builtins__" key.
* Add _PyEval_GetBuiltins() function.
* _PyEval_BuiltinsFromGlobals() now uses _PyEval_GetBuiltins() if
builtins cannot be found in globals.
* Add tstate parameter to _PyEval_BuiltinsFromGlobals().
Pass the current interpreter (interp) rather than the current Python
thread state (tstate) to internal functions which only use the
interpreter.
Modified functions:
* _PyXXX_Fini() and _PyXXX_ClearFreeList() functions
* _PyEval_SignalAsyncExc(), make_pending_calls()
* _PySys_GetObject(), sys_set_object(), sys_set_object_id(), sys_set_object_str()
* should_audit(), set_flags_from_config(), make_flags()
* _PyAtExit_Call()
* init_stdio_encoding()
* etc.
Remove the private _PyErr_OCCURRED() macro: use the public
PyErr_Occurred() function instead.
CPython internals must use the internal _PyErr_Occurred(tstate)
function instead: it is the most efficient way to check if an
exception was raised.
* Refactor _PyFrame_New_NoTrack() and PyFunction_NewWithQualName()
code.
* PyFrame_New() checks for _PyEval_BuiltinsFromGlobals() failure.
* Fix a ref leak in _PyEval_BuiltinsFromGlobals() error path.
* Complete PyFunction_GetModule() documentation: it returns a
borrowed reference and it can return NULL.
* Move _PyEval_BuiltinsFromGlobals() definition to the internal C
API.
* PyFunction_NewWithQualName() uses _Py_IDENTIFIER() API for the
"__name__" string to make it compatible with subinterpreters.
* Further refactoring of PyEval_EvalCode and friends. Break into make-frame, and eval-frame parts.
* Simplify function vector call using new _PyEval_Vector.
* Remove unused internal functions: _PyEval_EvalCodeWithName and _PyEval_EvalCode.
* Don't use legacy function PyEval_EvalCodeEx.
* Add test for frame.f_lineno with/without tracing.
* Make sure that frame.f_lineno is correct regardless of whether frame.f_trace is set.
* Update importlib
* Add NEWS
Reduce memory footprint and improve performance of loading modules having many func annotations.
>>> sys.getsizeof({"a":"int","b":"int","return":"int"})
232
>>> sys.getsizeof(("a","int","b","int","return","int"))
88
The tuple is converted into dict on the fly when `func.__annotations__` is accessed first.
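For illustration, the change is invisible at the Python level: accessing __annotations__ still gives a dict.
def f(a: "int", b: "int") -> "int":
    return a + b
print(f.__annotations__)  # {'a': 'int', 'b': 'int', 'return': 'int'}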
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Co-authored-by: Inada Naoki <songofacandy@gmail.com>
On Windows, fix a regression in signal handling which prevented
interrupting a program using CTRL+C. The signal handler can be run in a
thread different than the Python thread, in which case the test
deciding if the thread can handle signals is wrong.
On Windows, _PyEval_SignalReceived() now always sets eval_breaker to
1 since it cannot test _Py_ThreadCanHandleSignals(), and
eval_frame_handle_pending() always calls
_Py_ThreadCanHandleSignals() to recompute eval_breaker.
These functions are considered not safe because they suppress all internal errors
and can return a wrong result. PyDict_GetItemString and _PyDict_GetItemId can
also silence the current exception in rare cases.
Remove no longer used _PyDict_GetItemId.
Add _PyDict_ContainsId and rename _PyDict_Contains into
_PyDict_Contains_KnownHash.
Remove the global _Py_CheckRecursionLimit variable: it has been
replaced by ceval.recursion_limit of the PyInterpreterState
structure.
There is no need to keep the variable for the stable ABI, since
Py_EnterRecursiveCall() and Py_LeaveRecursiveCall() were not usable
in Python 3.8 and older: these macros accessed PyThreadState members,
whereas the PyThreadState structure is opaque in the limited C API.
The new API makes it possible to efficiently send values into native
generators and coroutines, avoiding the use of StopIteration exceptions
to signal returns.
ceval loop now uses this method instead of the old "private"
_PyGen_Send C API. This translates to 1.6x increased performance
of 'await' calls in micro-benchmarks.
Aside from CPython core improvements, this new API will also allow
Cython to generate more efficient code, benefiting high-performance
IO libraries like uvloop.
* Merge gen and frame state variables into one.
* Replace stack pointer with depth in PyFrameObject. Makes code easier to read and saves a word of memory.
If name is NULL, name is now set to co->co_name.
If qualname is NULL, qualname is now set to name.
qualname must not be NULL: it is used to build error messages.
Also clean up the code: declare variables where they are initialized.
Rename "name" local variables to "varname" to avoid shadowing the "name"
parameter.
PyOS_AfterFork_Child() helper functions now return a PyStatus:
PyOS_AfterFork_Child() is now responsible for handling errors.
* Move _PySignal_AfterFork() to the internal C API
* Add #ifdef HAVE_FORK on _PyGILState_Reinit(), _PySignal_AfterFork()
and _PyInterpreterState_DeleteExceptMain().
In the experimental isolated subinterpreters build mode, the GIL is
now per-interpreter.
Move gil from _PyRuntimeState.ceval to PyInterpreterState.ceval.
new_interpreter() always gets the config from the main interpreter.
In the experimental isolated subinterpreters build mode,
_PyThreadState_GET() gets the autoTSSkey variable and
_PyThreadState_Swap() sets the autoTSSkey variable.
* Add _PyThreadState_GetTSS()
* _PyRuntimeState_GetThreadState() and _PyThreadState_GET()
return _PyThreadState_GetTSS()
* PyEval_SaveThread() sets the autoTSSkey variable to current Python
thread state rather than NULL.
* eval_frame_handle_pending() doesn't check that
_PyThreadState_Swap() result is NULL.
* _PyThreadState_Swap() gets the current Python thread state with
_PyThreadState_GetTSS() rather than
_PyRuntimeGILState_GetThreadState().
* PyGILState_Ensure() no longer checks _PyEval_ThreadsInitialized()
since it cannot access the current interpreter.
Move recursion_limit member from _PyRuntimeState.ceval to
PyInterpreterState.ceval.
* Py_SetRecursionLimit() now only sets _Py_CheckRecursionLimit
of ceval.c if the current Python thread is part of the main
interpreter.
* Inline _Py_MakeEndRecCheck() into _Py_LeaveRecursiveCall().
* Convert _Py_RecursionLimitLowerWaterMark() macro into a static
inline function.
If _PyCode_InitOpcache() fails in _PyEval_EvalFrameDefault(), use
"goto exit_eval_frame;" rather than "return NULL;" to exit the
function in a consistent state. For example, tstate->frame is now
reset properly.
Rename _PyInterpreterState_GET_UNSAFE() to _PyInterpreterState_GET()
for consistency with _PyThreadState_GET() and to have a shorter name
(which helps fit into 80 columns).
Add also "assert(tstate != NULL);" to the function.
Fix the signal handler: it now always uses the main interpreter,
rather than trying to get the current Python thread state.
The following functions now accept an interpreter, instead of a
Python thread state:
* _PyEval_SignalReceived()
* _Py_ThreadCanHandleSignals()
* _PyEval_AddPendingCall()
* COMPUTE_EVAL_BREAKER()
* SET_GIL_DROP_REQUEST(), RESET_GIL_DROP_REQUEST()
* SIGNAL_PENDING_CALLS(), UNSIGNAL_PENDING_CALLS()
* SIGNAL_PENDING_SIGNALS(), UNSIGNAL_PENDING_SIGNALS()
* SIGNAL_ASYNC_EXC(), UNSIGNAL_ASYNC_EXC()
Py_AddPendingCall() now uses the main interpreter if it fails to get the
current Python thread state.
Convert _PyThreadState_GET() and PyInterpreterState_GET_UNSAFE()
macros to static inline functions.
PyInterpreterState_New() is now responsible for creating pending calls, and
PyInterpreterState_Delete() now deletes pending calls.
* Rename _PyEval_InitThreads() to _PyEval_InitGIL() and rename
_PyEval_InitGIL() to _PyEval_FiniGIL().
* _PyEval_InitState() and PyEval_FiniState() now create and delete
pending calls. _PyEval_InitState() now returns -1 on memory
allocation failure.
* Add init_interp_create_gil() helper function: code shared by
Py_NewInterpreter() and Py_InitializeFromConfig().
* init_interp_create_gil() now also calls _PyEval_FiniGIL(),
_PyEval_InitGIL() and _PyGILState_Init() in subinterpreters, but
these functions now do nothing when called from a subinterpreter.
Add _PyIndex_Check() function to the internal C API: fast inlined
version of PyIndex_Check().
Add Include/internal/pycore_abstract.h header file.
Replace PyIndex_Check() with _PyIndex_Check() in C files of Objects
and Python subdirectories.
Remove _PyRuntime.getframe hook and remove _PyThreadState_GetFrame
macro which was an alias to _PyRuntime.getframe. They were only
exposed by the internal C API. Remove also PyThreadFrameGetter type.
If a thread different than the main thread schedules a pending call
(Py_AddPendingCall()), the bytecode evaluation loop is no longer
interrupted at each bytecode instruction to check for pending calls
which cannot be executed. Only the main thread can execute pending
calls.
Previously, the bytecode evaluation loop was interrupted at each
instruction until the main thread executes pending calls.
* Add _Py_ThreadCanHandlePendingCalls() function.
* SIGNAL_PENDING_CALLS() now only sets eval_breaker to 1 if the
current thread can execute pending calls. Only the main thread can
execute pending calls.
COMPUTE_EVAL_BREAKER() now also checks if the Python thread state
belongs to the main interpreter. Don't break the evaluation loop if
there are pending signals but the Python thread state belongs to a
subinterpreter.
* Add _Py_IsMainThread() function.
* Add _Py_ThreadCanHandleSignals() function.
If a thread different than the main thread gets a signal, the
bytecode evaluation loop is no longer interrupted at each bytecode
instruction to check for pending signals which cannot be handled.
Only the main thread of the main interpreter can handle signals.
Previously, the bytecode evaluation loop was interrupted at each
instruction until the main thread handles signals.
Changes:
* COMPUTE_EVAL_BREAKER() and SIGNAL_PENDING_SIGNALS() no longer set
eval_breaker to 1 if the current thread cannot handle signals.
* take_gil() now always recomputes eval_breaker.
If Py_AddPendingCall() is called in a subinterpreter, the function is
now scheduled to be called from the subinterpreter, rather than being
called from the main interpreter.
Each subinterpreter now has its own list of scheduled calls.
* Move pending and eval_breaker fields from _PyRuntimeState.ceval
to PyInterpreterState.ceval.
* new_interpreter() now calls _PyEval_InitThreads() to create
pending calls lock.
* Fix Py_AddPendingCall() for subinterpreters. It now calls
_PyThreadState_GET() which works in a subinterpreter if the
caller holds the GIL, and only falls back on
PyGILState_GetThisThreadState() if _PyThreadState_GET()
returns NULL.
bpo-37127, bpo-39984:
* trip_signal() and Py_AddPendingCall() now get the current Python
thread state using PyGILState_GetThisThreadState() rather than
_PyRuntimeState_GetThreadState() to be able to get it even if the
GIL is released.
* _PyEval_SignalReceived() now expects tstate rather than ceval.
* Remove ceval parameter of _PyEval_AddPendingCall(): ceval is now
retrieved from the tstate parameter.
* _PyThreadState_DeleteCurrent() now takes tstate rather than
runtime.
* Add ensure_tstate_not_null() helper to pystate.c.
* Add _PyEval_ReleaseLock() function.
* _PyThreadState_DeleteCurrent() now calls
_PyEval_ReleaseLock(tstate) and frees PyThreadState memory after
this call, not before.
* PyGILState_Release(): rename "tcur" variable to "tstate".
* sys.settrace(), sys.setprofile() and _lsprof.Profiler.enable() now
properly report PySys_Audit() error if "sys.setprofile" or
"sys.settrace" audit event is denied.
* Add _PyEval_SetProfile() and _PyEval_SetTrace() function: similar
to PyEval_SetProfile() and PyEval_SetTrace() but take a tstate
parameter and return -1 on error.
* Add _PyObject_FastCallTstate() function.
PyInterpreterState.eval_frame function now requires a tstate (Python
thread state) parameter.
Add private functions to the C API to get and set the frame
evaluation function:
* Add tstate parameter to _PyFrameEvalFunction function type.
* Add _PyInterpreterState_GetEvalFrameFunc() and
_PyInterpreterState_SetEvalFrameFunc() functions.
* Add tstate parameter to _PyEval_EvalFrameDefault().
PyGILState_Ensure() doesn't call PyEval_InitThreads() anymore when a
new Python thread state is created. The GIL has been created by
Py_Initialize() since Python 3.7, so there is no need to call
PyEval_InitThreads() explicitly.
Add an assertion to ensure that the GIL is already created.
* Remove ceval parameter of take_gil(): get it from tstate.
* Move exit_thread_if_finalizing() call inside take_gil(). Replace
exit_thread_if_finalizing() with tstate_must_exit(): the caller is
now responsible for calling PyThread_exit_thread().
* Move is_tstate_valid() assertion inside take_gil(). Remove
is_tstate_valid(): inline code into take_gil().
* Move gil_created() assertion inside take_gil().
* exit_thread_if_finalizing() now accesses the _PyRuntime variable
directly, rather than using tstate->interp->runtime, since tstate
can be a dangling pointer after Py_Finalize() has been called.
* exit_thread_if_finalizing() is now called *before* calling
take_gil(). _PyRuntime.finalizing is an atomic variable,
we don't need to hold the GIL to access it.
* Add ensure_tstate_not_null() function to check that tstate is not
NULL at runtime. Check tstate earlier. take_gil() no longer
checks if tstate is NULL.
Cleanup:
* PyEval_RestoreThread() no longer saves/restores errno: it's already
done inside take_gil().
* PyEval_AcquireLock(), PyEval_AcquireThread(),
PyEval_RestoreThread() and _PyEval_EvalFrameDefault() now check if
tstate is valid with the new is_tstate_valid() function which uses
_PyMem_IsPtrFreed().
The Py_FatalError() function is replaced with a macro which
automatically logs the name of the current function, unless the
Py_LIMITED_API macro is defined.
Changes:
* Add _Py_FatalErrorFunc() function.
* Remove the function name from the message of Py_FatalError() calls
which included the function name.
* Update tests.
Convert _PyRuntimeState.finalizing field to an atomic variable:
* Rename it to _finalizing
* Change its type to _Py_atomic_address
* Add _PyRuntimeState_GetFinalizing() and _PyRuntimeState_SetFinalizing()
functions
* Remove _Py_CURRENTLY_FINALIZING() function: replace it with testing
directly _PyRuntimeState_GetFinalizing() value
Convert _PyRuntimeState_GetThreadState() to static inline function.
In the function `_PyEval_EvalFrameDefault`, the PREDICT and PREDICTED macros use the same identifier-creation scheme, which can therefore be shared between them, reducing code repetition and ensuring that the same identifier is generated.
gcc -Wcast-qual turns up a number of instances of casting away constness of pointers. Some of these can be safely modified, by either:
Adding the const to the type cast, as in:
- return _PyUnicode_FromUCS1((unsigned char*)s, size);
+ return _PyUnicode_FromUCS1((const unsigned char*)s, size);
or removing the cast entirely, because it's not necessary (but probably was at one time), as in:
- PyDTrace_FUNCTION_ENTRY((char *)filename, (char *)funcname, lineno);
+ PyDTrace_FUNCTION_ENTRY(filename, funcname, lineno);
These changes will not change the generated code, but they will make it much easier to check for const-correctness errors.
The bulk of this patch was generated automatically with:
for name in \
PyObject_Vectorcall \
Py_TPFLAGS_HAVE_VECTORCALL \
PyObject_VectorcallMethod \
PyVectorcall_Function \
PyObject_CallOneArg \
PyObject_CallMethodNoArgs \
PyObject_CallMethodOneArg \
;
do
echo $name
git grep -lwz _$name | xargs -0 sed -i "s/\b_$name\b/$name/g"
done
old=_PyObject_FastCallDict
new=PyObject_VectorcallDict
git grep -lwz $old | xargs -0 sed -i "s/\b$old\b/$new/g"
and then cleaned up:
- Revert changes in docs & news
- Revert changes to backcompat defines in headers
- Nudge misaligned comments
Replace a few Py_FatalError() calls if tstate is NULL with
assert(tstate != NULL) in ceval.c.
PyEval_AcquireThread(), PyEval_ReleaseThread() and
PyEval_RestoreThread() must never be called with a NULL tstate.
* Add DICT_UPDATE and DICT_MERGE bytecodes. Use them for ** unpacking.
* Remove BUILD_MAP_UNPACK and BUILD_MAP_UNPACK_WITH_CALL, as they are now unused.
* Update magic number for ** unpacking opcodes.
* Update dis.rst to incorporate new bytecodes.
* Add blurb entry.
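An illustrative check from Python (exact disassembly differs between versions, so this is only a sketch of the new opcodes in action):
import dis
def merge(a, b):
    return {**a, **b}
ops = [instr.opname for instr in dis.get_instructions(merge)]
print("DICT_UPDATE" in ops)       # True with the new opcodes
print("BUILD_MAP_UNPACK" in ops)  # False: the old opcode is gone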
* Add three new bytecodes: LIST_TO_TUPLE, LIST_EXTEND, SET_UPDATE. Use them to implement star unpacking expressions.
* Remove four bytecodes BUILD_LIST_UNPACK, BUILD_TUPLE_UNPACK, BUILD_SET_UNPACK and BUILD_TUPLE_UNPACK_WITH_CALL opcodes as they are now unused.
* Update magic number and dis.rst for new bytecodes.
* Reorder the __aenter__ and __aexit__ checks for async with
* Add assertions for async with body being skipped
* Swap __aexit__ and __aenter__ loading in the documentation
Break up COMPARE_OP into four logically distinct opcodes:
* COMPARE_OP for rich comparisons
* IS_OP for 'is' and 'is not' tests
* CONTAINS_OP for 'in' and 'not in' tests
* JUMP_IF_NOT_EXC_MATCH for checking exceptions in 'try-except' statements.
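A quick, version-dependent sketch showing the split in practice:
import dis
def checks(x, seq):
    return (x is None, x in seq, x not in seq)
ops = {instr.opname for instr in dis.get_instructions(checks)}
print(sorted(ops & {"IS_OP", "CONTAINS_OP", "COMPARE_OP"}))
# ['CONTAINS_OP', 'IS_OP'] on interpreters with this change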
When producing the bytecode of exception handlers with name binding (like `except Exception as e`) we need to produce a try-finally block to make sure that the name is deleted after the handler is executed, to prevent cycles in the stack frame objects. The bytecode associated with this try-finally block has no source lines associated with it, which was causing problems when the tracing functionality ran over it.
Remove BEGIN_FINALLY, END_FINALLY, CALL_FINALLY and POP_FINALLY bytecodes. Implement finally blocks by code duplication.
Reimplement frame.lineno setter using line numbers rather than bytecode offsets.
Add PyInterpreterState.runtime field: reference to the _PyRuntime
global variable. This field exists to not have to pass runtime in
addition to tstate to a function. Get runtime from tstate:
tstate->interp->runtime.
Remove "_PyRuntimeState *runtime" parameter from functions already
taking a "PyThreadState *tstate" parameter.
_PyGC_Init() first parameter becomes "PyThreadState *tstate".
Additional note: the `method_check_args` function in `Objects/descrobject.c` is written in such a way that it applies to all kinds of descriptors. In particular, a future re-implementation of `wrapper_descriptor` could use that code.
CC @vstinner @encukou
https://bugs.python.org/issue37645
Automerge-Triggered-By: @encukou
* Add _Py_EnterRecursiveCall() and _Py_LeaveRecursiveCall() which
require a tstate argument.
* Pass tstate to _Py_MakeRecCheck() and _Py_CheckRecursiveCall().
* Convert Py_EnterRecursiveCall() and Py_LeaveRecursiveCall() macros
to static inline functions.
_PyThreadState_GET() is the most efficient way to get the tstate, and
so using it with _Py_EnterRecursiveCall() and
_Py_LeaveRecursiveCall() should be a little bit more efficient than
using Py_EnterRecursiveCall() and Py_LeaveRecursiveCall() which use
the "slower" PyThreadState_GET().
Provide Py_EnterRecursiveCall() and Py_LeaveRecursiveCall() as
regular functions for the limited API. Previously, they were defined
as macros, but these macros didn't work with the limited API, which
cannot access the PyThreadState.recursion_depth field.
Remove _Py_CheckRecursionLimit from the stable ABI.
Add Include/cpython/ceval.h header file.
bpo-37151: remove special case for PyCFunction from PyObject_Call
Also, make the undocumented function PyCFunction_Call an alias
of PyObject_Call and deprecate it.
The fact that keyword names are strings is now part of the vectorcall and `METH_FASTCALL` protocols. The biggest concrete change is that `_PyStack_UnpackDict` now checks that and raises `TypeError` if not.
CC @markshannon @vstinner
https://bugs.python.org/issue37540
* Replace global var Py_VerboseFlag with interp->config.verbose.
* Add _PyErr_NoMemory(tstate) function.
* Add tstate parameter to _PyEval_SetCoroutineOriginTrackingDepth()
and move the function to the internal API.
* Replace _PySys_InitMain(runtime, interp)
with _PySys_InitMain(runtime, tstate).
It has been documented as deprecated and to be removed in 3.8.
From a comment on another thread (which I can't find): leave get_coro_wrapper() for now, but always return `None`.
https://bugs.python.org/issue36933
Remove the PyEval_ReInitThreads() function from the Python C API.
It should not be called explicitly: use PyOS_AfterFork_Child()
instead.
Rename PyEval_ReInitThreads() to _PyEval_ReInitThreads() and add a
'runtime' parameter.
Add "struct _ceval_runtime_state *ceval = &_PyRuntime.ceval;" local
variables to functions to better highlight the dependency on the
global variable _PyRuntime and to point directly to _PyRuntime.ceval
field rather than on the larger _PyRuntime.
Changes:
* Add _PyRuntimeState_GetThreadState(runtime) macro.
* Add _PyEval_AddPendingCall(ceval, ...) and
_PyThreadState_Swap(gilstate, ...) functions.
* _PyThreadState_GET() macro now calls
_PyRuntimeState_GetThreadState() using &_PyRuntime.
* Add 'ceval' parameter to COMPUTE_EVAL_BREAKER(),
SIGNAL_PENDING_SIGNALS(), _PyEval_SignalAsyncExc(),
_PyEval_SignalReceived() and _PyEval_FiniThreads() macros and
functions.
* Add 'tstate' parameter to call_function(), do_call_core() and
do_raise().
* Add 'runtime' parameter to _Py_CURRENTLY_FINALIZING(),
_Py_FinishPendingCalls() and _PyThreadState_DeleteExcept()
macros and functions.
* Declare 'runtime', 'tstate', 'ceval' and 'eval_breaker' variables
as constant.
If a "=" is specified a the end of an f-string expression, the f-string will evaluate to the text of the expression, followed by '=', followed by the repr of the value of the expression.
This commit contains the implementation of PEP570: Python positional-only parameters.
* Update Grammar/Grammar with new typedarglist and varargslist
* Regenerate grammar files
* Update and regenerate AST related files
* Update code object
* Update marshal.c
* Update compiler and symtable
* Regenerate importlib files
* Update callable objects
* Implement positional-only args logic in ceval.c
* Regenerate frozen data
* Update standard library to account for positional-only args
* Add test file for positional-only args
* Update other test files to account for positional-only args
* Add News entry
* Update inspect module and related tests
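A brief sketch of the new syntax (function and parameter names invented for illustration):
def divide(numerator, denominator, /, *, round_result=False):
    result = numerator / denominator
    return round(result) if round_result else result
print(divide(7, 2))                     # 3.5, arguments passed positionally
print(divide(7, 2, round_result=True))  # 4
try:
    divide(numerator=7, denominator=2)
except TypeError as exc:
    print(exc)  # parameters before '/' reject keyword arguments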
* Add _PyEval_FiniThreads2(). _PyEval_FiniThreads() now only clears
the pending lock, whereas _PyEval_FiniThreads2() destroys the GIL.
* pymain_free() now calls _PyEval_FiniThreads2().
* Py_FinalizeEx() now calls _PyEval_FiniThreads().
PyEval_AcquireLock() and PyEval_AcquireThread() now
terminate the current thread if called while the interpreter is
finalizing, making them consistent with PyEval_RestoreThread(),
Py_END_ALLOW_THREADS, and PyGILState_Ensure().
Change PyAPI_FUNC(type), PyAPI_DATA(type) and PyMODINIT_FUNC macros
of pyport.h when Py_BUILD_CORE_MODULE is defined.
The Py_BUILD_CORE_MODULE define must now be used to build a C
extension as a dynamic library accessing Python internals: export the
PyInit_xxx() function in DLL exports on Windows.
Changes:
* Py_BUILD_CORE_BUILTIN and Py_BUILD_CORE_MODULE now imply
Py_BUILD_CORE directly in pyport.h.
* ceval.c compilation now fails with an error if Py_BUILD_CORE is not
defined, just to ensure that Python is built with the correct
defines.
* setup.py now compiles _pickle.c with Py_BUILD_CORE_MODULE define.
* setup.py compiles _json.c with Py_BUILD_CORE_MODULE define, rather
than Py_BUILD_CORE_BUILTIN define
* PCbuild/pythoncore.vcxproj: Add Py_BUILD_CORE_BUILTIN define.
This is effectively an un-revert of #11617 and #12024 (reverted in #12159). Portions of those were merged in other PRs (with lower risk) and this represents the remainder. Note that I found 3 different bugs in the original PRs and have fixed them here.
The function has no return value.
Fix the following warning on Windows:
python\ceval.c(180): warning C4098: 'PyEval_InitThreads':
'void' function returning a value
* Revert "bpo-36097: Use only public C-API in the_xxsubinterpreters module (adding as necessary). (#12003)"
This reverts commit bcfa450f21.
* Revert "bpo-33608: Simplify ceval's DISPATCH by hoisting eval_breaker ahead of time. (gh-12062)"
This reverts commit bda918bf65.
* Revert "bpo-33608: Use _Py_AddPendingCall() in _PyCrossInterpreterData_Release(). (gh-12024)"
This reverts commit b05b711a2c.
* Revert "bpo-33608: Factor out a private, per-interpreter _Py_AddPendingCall(). (GH-11617)"
This reverts commit ef4ac967e2.
Ensure that the main interpreter is active (in the main thread) for signal-handling operations. This is increasingly relevant as people use subinterpreters more.
https://bugs.python.org/issue35724
This change separates the signal handling trigger in the eval loop from the "pending calls" machinery. There is no semantic change and the difference in performance is insignificant.
The change makes both components less confusing. It also eliminates the risk of changes to the pending calls affecting signal handling. This is particularly relevant for some upcoming pending calls changes I have in the works.
In _localemodule.c and selectmodule.c, remove dead code that would
cause double decrefs if run.
In addition, replace PyList_SetItem() with PyList_SET_ITEM() in cases
where a new list is populated and there is no possibility of an error.
In addition, check if the list changed size in the loop in array_array_fromlist().
* _PyTuple_ITEMS() gives access to the tuple->ob_item field and casts the
first argument to PyTupleObject*. This internal macro is only usable if
Py_BUILD_CORE is defined.
* Replace &PyTuple_GET_ITEM(ob, 0) with _PyTuple_ITEMS(ob).
* Replace PyTuple_GET_ITEM(op, 1) with &_PyTuple_ITEMS(ob)[1].
If Py_BUILD_CORE is defined, the PyThreadState_GET() macro access
_PyRuntime which comes from the internal pycore_state.h header.
Public headers must not require internal headers.
Move PyThreadState_GET() and _PyInterpreterState_GET_UNSAFE() from
Include/pystate.h to Include/internal/pycore_state.h, and rename
PyThreadState_GET() to _PyThreadState_GET() there.
The PyThreadState_GET() macro of pystate.h is now redefined when
pycore_state.h is included, to use the fast _PyThreadState_GET().
Changes:
* Add _PyThreadState_GET() macro
* Replace "PyThreadState_GET()->interp" with
_PyInterpreterState_GET_UNSAFE()
* Replace PyThreadState_GET() with _PyThreadState_GET() in internal C
files (compiled with Py_BUILD_CORE defined), but keep
PyThreadState_GET() in the public header files.
* _testcapimodule.c: replace PyThreadState_GET() with
PyThreadState_Get(); the module is not compiled with Py_BUILD_CORE
defined.
* pycore_state.h now requires Py_BUILD_CORE to be defined.
* Remove _PyThreadState_Current
* Replace GET_TSTATE() with PyThreadState_GET()
* Replace GET_INTERP_STATE() with _PyInterpreterState_GET_UNSAFE()
* Replace direct access to _PyThreadState_Current with
PyThreadState_GET()
* Replace _PyThreadState_Current with
_PyRuntime.gilstate.tstate_current
* Rename SET_TSTATE() to _PyThreadState_SET(), name more
consistent with _PyThreadState_GET()
* Update outdated comments
`list.append([], None)` was profiled but `list.append([], None, **{})` was not profiled.
Enable profiling for the latter case.
https://bugs.python.org/issue34125
This will prevent emitting a resource warning when the execution was
interrupted by Ctrl-C between calling open() and entering a 'with' block
in "with open()".
* Added new opcode END_ASYNC_FOR.
* Setting global StopAsyncIteration no longer breaks "async for" loops.
* Jumping into an "async for" loop is now disabled.
* Jumping out of an "async for" loop no longer corrupts the stack.
* Simplify the compiler.
* Add coro.cr_origin and sys.set_coroutine_origin_tracking_depth
* Use coroutine origin information in the unawaited coroutine warning
* Stop using set_coroutine_wrapper in asyncio debug mode
* In BaseEventLoop.set_debug, enable debugging in the correct thread
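A hedged sketch of the new origin-tracking hooks (the coroutine below is just a placeholder):
import sys
async def task():
    return None
sys.set_coroutine_origin_tracking_depth(2)
try:
    coro = task()
    # cr_origin is None when tracking is disabled, otherwise a tuple of
    # (filename, line number, function name) frames for the creation site.
    print(coro.cr_origin)
    coro.close()  # close explicitly to avoid a "never awaited" warning
finally:
    sys.set_coroutine_origin_tracking_depth(0)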
This kludge is from 1992. Any C99 compiler is going to be able to handle the
ceval dispatch switch.
Anyway, we have much bigger switches than the ceval dispatch one around. (See,
e.g., Objects/unicodetype_db.h.)
The concrete PyDict_* API is used to interact with PyInterpreterState.modules in a number of places. This isn't compatible with all dict subclasses, nor with other Mapping implementations. This patch switches the concrete API usage to the corresponding abstract API calls.
We also add a PyImport_GetModule() function (and some other helpers) to reduce a bunch of code duplication.
* Add Py_UNREACHABLE() as an alias to abort().
* Use Py_UNREACHABLE() instead of assert(0)
* Convert more unreachable code to use Py_UNREACHABLE()
* Document Py_UNREACHABLE() and a few other macros.
PR #1638, for bpo-28411, causes problems in some (very) edge cases. Until that gets sorted out, we're reverting the merge. PR #3506, a fix on top of #1638, is also getting reverted.
* group the (stateful) runtime globals into various topical structs
* consolidate the topical structs under a single top-level _PyRuntimeState struct
* add a check-c-globals.py script that helps identify runtime globals
Other globals are excluded (see globals.txt and check-c-globals.py).
f_trace_lines: enable/disable line trace events
f_trace_opcodes: enable/disable opcode trace events
These are intended primarily for testing of the interpreter
itself, as they make it much easier to emulate signals
arriving at unfortunate times.
* group the (stateful) runtime globals into various topical structs
* consolidate the topical structs under a single top-level _PyRuntimeState struct
* add a check-c-globals.py script that helps identify runtime globals
Other globals are excluded (see globals.txt and check-c-globals.py).
* Improve signal delivery
Avoid using Py_AddPendingCall from signal handler, to avoid calling signal-unsafe functions.
* Remove unused function
* Improve comments
* Add stress test
* Adapt for --without-threads
* Add second stress test
* Add NEWS blurb
* Address comments @haypo
If we have a chain of generators/coroutines that are 'yield from'ing
each other, then resuming the stack works like:
- call send() on the outermost generator
- this enters _PyEval_EvalFrameDefault, which re-executes the
YIELD_FROM opcode
- which calls send() on the next generator
- which enters _PyEval_EvalFrameDefault, which re-executes the
YIELD_FROM opcode
- ...etc.
However, every time we enter _PyEval_EvalFrameDefault, the first thing
we do is to check for pending signals, and if there are any then we
run the signal handler. And if it raises an exception, then we
immediately propagate that exception *instead* of starting to execute
bytecode. This means that e.g. a SIGINT at the wrong moment can "break
the chain" – it can be raised in the middle of our yield from chain,
with the bottom part of the stack abandoned for the garbage collector.
The fix is pretty simple: there's already a special case in
_PyEval_EvalFrameEx where it skips running signal handlers if the next
opcode is SETUP_FINALLY. (I don't see how this accomplishes anything
useful, but that's another story.) If we extend this check to also
skip running signal handlers when the next opcode is YIELD_FROM, then
that closes the hole – now the exception can only be raised at the
innermost stack frame.
This shouldn't have any performance implications, because the opcode
check happens inside the "slow path" after we've already determined
that there's a pending signal or something similar for us to process;
the vast majority of the time this isn't true and the new check
doesn't run at all.
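A toy Python version of the chain being described (purely illustrative):
def inner():
    value = yield "suspended in inner"
    return value * 2
def outer():
    result = yield from inner()  # re-executes YIELD_FROM on every resume
    yield result
gen = outer()
print(next(gen))     # 'suspended in inner': both frames now sit at YIELD_FROM
print(gen.send(21))  # 42: the send travels through outer into inner and back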
* bpo-6532: Make the thread id an unsigned integer.
From C API side the type of results of PyThread_start_new_thread() and
PyThread_get_thread_ident(), the id parameter of
PyThreadState_SetAsyncExc(), and the thread_id field of PyThreadState
changed from "long" to "unsigned long".
* Restore a check in thread_get_ident().
When LOAD_METHOD is used for calling a C method, PyMethodDescrObject
has been passed to profilefunc since 5566bbb.
But lsprof traces only PyCFunctionObject. Additionally, there can be
some third party extension which assumes the passed arg is a
PyCFunctionObject without calling PyCFunction_Check().
So make PyCFunctionObject from PyMethodDescrObject when
tstate->c_profilefunc is set.
When you use `'%s' % SubClassOfStr()`, where `SubClassOfStr.__rmod__` exists, the reverse operation is ignored as normally such string formatting operations use the `PyUnicode_Format()` fast path. This patch tests for subclasses of `str` first and picks the slow path in that case.
Patch by Martijn Pieters.
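A minimal sketch of the behavior being fixed (class name invented):
class Loud(str):
    def __rmod__(self, template):
        return template.replace("%s", self.upper())
# With the fix, the subclass's __rmod__ is honored instead of the
# PyUnicode_Format() fast path formatting the value itself.
print("hello %s" % Loud("world"))  # hello WORLD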
* Move all functions to call objects in a new Objects/call.c file.
* Rename fast_function() to _PyFunction_FastCallKeywords().
* Copy null_error() from Objects/abstract.c
* Inline type_error() in call.c to not have to copy it, it was only
called once.
* Export _PyEval_EvalCodeWithName() since it is now called
from call.c.
Issue #29227: Inline call_function() into _PyEval_EvalFrameDefault() using
Py_LOCAL_INLINE to reduce the stack consumption.
It reduces the stack consumption, bytes per call, before => after:
test_python_call: 1152 => 1040 (-112 B)
test_python_getitem: 1008 => 976 (-32 B)
test_python_iterator: 1232 => 1120 (-112 B)
=> total: 3392 => 3136 (- 256 B)
Special thanks to INADA Naoki for pushing the patch through
the last mile, Serhiy Storchaka for reviewing the code, and to
Victor Stinner for suggesting the idea (originally implemented
in the PyPy project).
The PEP 523 modified PyEval_EvalFrameEx(): it's now an indirection to
interp->eval_frame().
Inline the call in performance critical code. Leave PyEval_EvalFrame()
unchanged, this function is only kept for backward compatibility.
Issue #28838: Rename parameters of the "calls" functions of the Python C API.
* Rename 'callable_object' and 'func' to 'callable': any Python callable object
is accepted, not only Python functions
* Rename 'method' and 'nameid' to 'name' (method name)
* Rename 'o' to 'obj'
* Move, fix and update documentation of PyObject_CallXXX() functions
in abstract.h
* Update also the documentation of the C API (update parameter names)
Issue #28858: The change b9c9691c72c5 introduced a regression. It seems like
_PyObject_CallArg1() uses more stack memory than
PyObject_CallFunctionObjArgs().
* PyObject_CallFunctionObjArgs(func, NULL) => _PyObject_CallNoArg(func)
* PyObject_CallFunctionObjArgs(func, arg, NULL) => _PyObject_CallArg1(func, arg)
PyObject_CallFunctionObjArgs() allocates 40 bytes on the C stack and requires
extra work to "parse" C arguments to build a C array of PyObject*.
_PyObject_CallNoArg() and _PyObject_CallArg1() are simpler and don't allocate
memory on the C stack.
This change is part of the fastcall project. The change on listsort() is
related to the issue #23507.
Issue #28799:
* Remove the PyEval_GetCallStats() function.
* Deprecate the untested and undocumented sys.callstats() function.
* Remove the CALL_PROFILE special build
Use the sys.setprofile() function, cProfile or profile module to profile
function calls.
Issue #28782: Fix a bug in the implementation of ``yield from`` when checking
if the next instruction is YIELD_FROM. Regression introduced by WORDCODE
(issue #26647).
Reviewed by Serhiy Storchaka and Yury Selivanov.
When Python is not compiled with PGO, the performance of Python on call_simple
and call_method microbenchmarks depend highly on the code placement. In the
worst case, the performance slowdown can be up to 70%.
The GCC __attribute__((hot)) attribute helps to keep hot code close to reduce
the risk of such major slowdown. This attribute is ignored when Python is
compiled with PGO.
The following functions are considered as hot according to statistics collected
by perf record/perf report:
* _PyEval_EvalFrameDefault()
* call_function()
* _PyFunction_FastCall()
* PyFrame_New()
* frame_dealloc()
* PyErr_Occurred()
* BUILD_TUPLE_UNPACK and BUILD_MAP_UNPACK_WITH_CALL are no longer generated
for a single tuple or dict.
* Restored more informative error messages for incorrect var-positional and
var-keyword arguments.
* Removed code duplications in _PyEval_EvalCodeWithName().
* Removed redundant runtime checks and parameters in _PyStack_AsDict().
* Added a workaround and enabled previously disabled test in test_traceback.
* Removed dead code from the dis module.
Tested on macOS 10.11 dtrace, Ubuntu 16.04 SystemTap, and libbcc.
Largely based on an initial patch by Jesús Cea Avión, with some
influence from Dave Malcolm's SystemTap patch and Nikhil Benesch's
unification patch.
Things deliberately left out for simplicity:
- ustack helpers, I have no way of testing them at this point since
they are Solaris-specific
- PyFrameObject * in function__entry/function__return, this is
SystemTap-specific
- SPARC support
- dynamic tracing
- sys module dtrace facility introspection
All of those might be added later.
Issue #27830: Add _PyObject_FastCallKeywords(): avoid the creation of a
temporary dictionary for keyword arguments.
Other changes:
* Cleanup call_function() and fast_function() (ex: rename nk to nkwargs)
* Remove now useless do_call(), replaced with _PyObject_FastCallKeywords()
Issue #27213: Rework CALL_FUNCTION* opcodes to produce shorter and more
efficient bytecode:
* CALL_FUNCTION now only accepts positional arguments
* CALL_FUNCTION_KW accepts positional arguments and keyword arguments, but keys
of keyword arguments are packed into a constant tuple.
* CALL_FUNCTION_EX is the most generic, it expects a tuple and a dict for
positional and keyword arguments.
CALL_FUNCTION_VAR and CALL_FUNCTION_VAR_KW opcodes have been removed.
2 tests of test_traceback are currently broken: skip them; issue #28050 was
created to track the problem.
Patch by Demur Rumed, design by Serhiy Storchaka, reviewed by Serhiy Storchaka
and Victor Stinner.
Issue #27830: Similar to _PyObject_FastCallDict(), but keyword arguments are
also passed in the same C array as positional arguments, rather than being
passed as a Python dict.
Issue #27809: PyEval_CallObjectWithKeywords() doesn't temporarily increment the
reference counter of the args tuple (positional arguments). The caller already
holds a strong reference to it.
Issue #27128. When a Python function is called with no arguments, but all
parameters have a default value: use default values as arguments for the fast
path.
Issue #27128: Modify PyEval_CallObjectWithKeywords() to use
_PyObject_FastCall() when args==NULL and kw==NULL. It avoids the creation of a
temporary empty tuple for positional arguments.
Issue #27128: Add _PyObject_FastCall(), a new calling convention avoiding a
temporary tuple to pass positional parameters in most cases, but create a
temporary tuple if needed (ex: for the tp_call slot).
The API is prepared to support keyword parameters, but the full implementation
will come later (_PyFunction_FastCall() doesn't support keyword parameters
yet).
Add also:
* _PyStack_AsTuple() helper function: convert a "stack" of parameters to
a tuple.
* _PyCFunction_FastCall(): fast call implementation for C functions
* _PyFunction_FastCall(): fast call implementation for Python functions
Issue #27558: Fix a SystemError in the implementation of "raise" statement.
In a brand new thread, raise a RuntimeError since there is no active
exception to reraise.
Patch written by Xiang Zhang.
Issue #27128, #18295: replace int type with Py_ssize_t for index variables used
for positional arguments. It should help to avoid integer overflow and help to
emit better machine code for "i++" (no trap needed for overflow).
Also make the total_args variable constant.
* Add comments
* Add empty lines for readability
* PEP 7 style for if block
* Remove useless assert(globals != NULL); (globals is tested a few lines
before)
Don't fall back to PyDict_GetItemWithError() if the hash is unknown: compute the
hash instead. Add also comments to explain the optimization a little bit.
If the requested name doesn't exist in globals, clear the KeyError exception
before calling PyObject_GetItem(). Fail also if the raised exception is not a
KeyError.
Summary of changes:
1. Coroutines now have a distinct type at the C level, separate from
generators: PyCoro_Type, and a new typedef PyCoroObject.
PyCoroObject shares the initial segment of struct layout with
PyGenObject, making it possible to reuse existing generators
machinery. The new type is exposed as 'types.CoroutineType'.
As a consequence of having a new type, CO_GENERATOR flag is
no longer applied to coroutines.
2. Having a separate type for coroutines made it possible to add
an __await__ method to the type. Although it is not used by the
interpreter (see details on that below), it makes coroutines
naturally (without using __instancecheck__) conform to
collections.abc.Coroutine and collections.abc.Awaitable ABCs.
[The __instancecheck__ is still used for generator-based
coroutines, as we don't want to add __await__ for generators.]
3. Add new opcode: GET_YIELD_FROM_ITER. The opcode is needed to
allow passing native coroutines to the YIELD_FROM opcode.
Before this change, 'yield from o' expression was compiled to:
(o)
GET_ITER
LOAD_CONST
YIELD_FROM
Now, we use GET_YIELD_FROM_ITER instead of GET_ITER.
The reason for adding a new opcode is that GET_ITER is used
in some contexts (such as 'for .. in' loops) where passing
a coroutine object is invalid.
4. Add two new introspection functions to the inspect module:
getcoroutinestate(c) and getcoroutinelocals(c).
5. inspect.iscoroutine(o) is updated to test if 'o' is a native
coroutine object. Before this commit it used abc.Coroutine,
and it was requested to update inspect.isgenerator(o) to use
abc.Generator; it was decided, however, that inspect functions
should really be tailored for checking for native types.
6. sys.set_coroutine_wrapper(w) API is updated to work with only
native coroutines. Since types.coroutine decorator supports
any type of callables now, it would be confusing that it does
not work for all types of coroutines.
7. Exceptions logic in generators C implementation was updated
to raise clearer messages for coroutines:
Before: TypeError("generator raised StopIteration")
After: TypeError("coroutine raised StopIteration")
* adds missing INCREF in WITH_CLEANUP_START
* adds missing DECREF in WITH_CLEANUP_FINISH
* adds several new tests Yury created while investigating this
Mention the name of the function which returned an invalid result (result+error
or no result without error) in the exception message.
Add also a unit test to check that the exception contains the name of the
function.
Special case: the final _PyEval_EvalFrameEx() check doesn't mention the
function since it didn't execute a single function but a whole frame.
Raise a SystemError if a function returns a result and raises an exception.
The SystemError is chained to the previous exception.
Refactor also PyObject_Call() and PyCFunction_Call() to make them more readable.
Remove some checks which became useless (duplicate checks).
Change reviewed by Serhiy Storchaka.
At entry, save or swap the exception state even if PyEval_EvalFrameEx() is
called with throwflag=0. At exit, the exception state is now always restored or
swapped, not only if why is WHY_YIELD or WHY_RETURN. Patch co-written with
Antoine Pitrou.
name, and use it in the representation of a generator (``repr(gen)``). The
default name of the generator (``__name__`` attribute) is now taken from the
function instead of the code. Use ``gen.gi_code.co_name`` to get the name of
the code.