This fixes both the traceback.py module and the C code for formatting syntax errors (in Python/pythonrun.c). They now both consistently do the following:
- Suppress caret if it points left of text
- Allow caret pointing just past end of line
- If caret points past end of line, clip to *just* past end of line
The syntax error formatting code in traceback.py was mostly rewritten; small, subtle changes were applied to the C code in pythonrun.c.
There's still a difference when the text contains embedded newlines. Neither handles these very well, and I don't think the case occurs in practice.
Automerge-Triggered-By: @gvanrossum
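For illustration, a minimal sketch of the clipping rule described above (hypothetical helper, not the actual traceback.py/pythonrun.c code):
```
#include <stddef.h>
#include <string.h>

/* Return the clipped 1-based caret column, or 0 to suppress the caret. */
static size_t
clip_caret_offset(const char *text, size_t offset)
{
    size_t len = strlen(text);
    if (offset < 1) {
        return 0;            /* caret points left of the text: suppress it */
    }
    if (offset > len + 1) {
        offset = len + 1;    /* clip to *just* past the end of the line */
    }
    return offset;           /* offset == len + 1 (just past the end) is allowed */
}
```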
_Py_hashtable_get_entry_ptr() avoids comparing the entry hash: it
compares keys directly.
Move _Py_hashtable_get_entry_ptr() just after
_Py_hashtable_get_entry_generic().
_Py_hashtable_t values become regular "void *" pointers.
* Add _Py_hashtable_entry_t.data member
* Remove _Py_hashtable_t.data_size member
* Remove _Py_hashtable_t.get_func member. It is no longer needed
to specialize _Py_hashtable_get() for a specific value size, since
all entries now have the same size (void*).
* Remove the following macros:
* _Py_HASHTABLE_GET()
* _Py_HASHTABLE_SET()
* _Py_HASHTABLE_SET_NODATA()
* _Py_HASHTABLE_POP()
* Rename _Py_hashtable_pop() to _Py_hashtable_steal()
* _Py_hashtable_foreach() callback now gets key and value rather than
entry.
* Remove _Py_hashtable_value_destroy_func type. value_destroy_func
callback now only has a single parameter: data (void*).
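For illustration, a minimal sketch of the reworked hash table API (internal C API, requires Py_BUILD_CORE; exact signatures assumed from the description above):
```
#include <Python.h>
#include <assert.h>
#include "pycore_hashtable.h"    /* internal header, Py_BUILD_CORE only */

static void
hashtable_example(void)
{
    /* Keys and values are now plain "void *" pointers. */
    _Py_hashtable_t *table = _Py_hashtable_new(_Py_hashtable_hash_ptr,
                                               _Py_hashtable_compare_direct);
    if (table == NULL) {
        return;
    }
    static int value = 42;
    if (_Py_hashtable_set(table, &value, &value) == 0) {
        int *found = (int *)_Py_hashtable_get(table, &value);
        assert(found == &value);
    }
    _Py_hashtable_destroy(table);
}
```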
Rewrite _tracemalloc to store "trace_t*" rather than "trace_t"
directly in traces hash tables. Traces are now allocated on the heap,
outside the hash table.
Add tracemalloc_copy_traces() and tracemalloc_copy_domains() helper
functions.
Remove the _Py_hashtable_copy() function since there is no API to copy
a key or a value.
Also remove the _Py_hashtable_delete() function, which was commented
out.
Rewrite _Py_hashtable_t type to always store the key as
a "const void *" pointer. Add an explicit "key" member to
_Py_hashtable_entry_t.
Remove _Py_hashtable_t.key_size member.
Hash and compare functions drop their hash table parameter, and their
'key' parameter type becomes "const void *".
Add a new _Py_HashPointerRaw() function which skips replacing -1
with -2, to micro-optimize hash tables that use pointer keys with the
_Py_hashtable_hash_ptr() hash function.
Optimize _Py_hashtable_get() and _Py_hashtable_get_entry() for
pointer keys:
* key_size == sizeof(void*)
* hash_func == _Py_hashtable_hash_ptr
* compare_func == _Py_hashtable_compare_direct
Changes:
* Add get_func and get_entry_func members to _Py_hashtable_t
* Convert the _Py_hashtable_get() and _Py_hashtable_get_entry()
functions to static inline functions.
* Add specialized "get" and "get entry" functions for pointer keys.
_Py_hashtable_new() now uses PyMem_Malloc/PyMem_Free allocator by
default, rather than PyMem_RawMalloc/PyMem_RawFree.
PyMem_Malloc is faster than PyMem_RawMalloc for memory blocks smaller
than or equal to 512 bytes.
* Move Modules/hashtable.h to Include/internal/pycore_hashtable.h
* Move Modules/hashtable.c to Python/hashtable.c
* Python is now linked to hashtable.c. _tracemalloc is no longer
linked to hashtable.c. Previously, marshal.c got hashtable.c via
_tracemalloc.c which is built as a builtin module.
In the experimental isolated subinterpreters build mode, the GIL is
now per-interpreter.
Move gil from _PyRuntimeState.ceval to PyInterpreterState.ceval.
new_interpreter() always gets the config from the main interpreter.
In the experimental isolated subinterpreters build mode,
_PyThreadState_GET() gets the autoTSSkey variable and
_PyThreadState_Swap() sets the autoTSSkey variable.
* Add _PyThreadState_GetTSS()
* _PyRuntimeState_GetThreadState() and _PyThreadState_GET()
return _PyThreadState_GetTSS()
* PyEval_SaveThread() sets the autoTSSkey variable to current Python
thread state rather than NULL.
* eval_frame_handle_pending() doesn't check that
_PyThreadState_Swap() result is NULL.
* _PyThreadState_Swap() gets the current Python thread state with
_PyThreadState_GetTSS() rather than
_PyRuntimeGILState_GetThreadState().
* PyGILState_Ensure() no longer checks _PyEval_ThreadsInitialized()
since it cannot access the current interpreter.
_PyErr_ChainExceptions() now ensures that the first parameter is an
exception type, as done by _PyErr_SetObject().
* The following functions now check PyExceptionInstance_Check() in an
assertion using a new _PyBaseExceptionObject_cast() helper
function:
* PyException_GetTraceback(), PyException_SetTraceback()
* PyException_GetCause(), PyException_SetCause()
* PyException_GetContext(), PyException_SetContext()
* PyExceptionClass_Name() now checks PyExceptionClass_Check() with an
assertion.
* Remove XXX comment and add gi_exc_state variable to _gen_throw().
* Remove comment from test_generators
Move recursion_limit member from _PyRuntimeState.ceval to
PyInterpreterState.ceval.
* Py_SetRecursionLimit() now only sets _Py_CheckRecursionLimit
of ceval.c if the current Python thread is part of the main
interpreter.
* Inline _Py_MakeEndRecCheck() into _Py_LeaveRecursiveCall().
* Convert _Py_RecursionLimitLowerWaterMark() macro into a static
inline function.
Add --with-experimental-isolated-subinterpreters build option to
configure: better isolate subinterpreters, experimental build mode.
When used, force the usage of the libc malloc() memory allocator,
since pymalloc relies on the unique global interpreter lock (GIL).
Due to backwards compatibility concerns, commit 41d5b94af4 has to be
reverted: keywords immediately followed by a string without whitespace
between them (as in `bg="#d00" if clear else"#fca"`) would otherwise
fail to parse.
I can add another commit with the new test case I wrote to verify that the warning was being printed before my change, stopped printing after my change, and that the function does not return null after my change.
Automerge-Triggered-By: @brettcannon
Otherwise we leave a dangling pointer to free'd memory. If we
then initialize a new interpreter in the same process and call
PyImport_ExtendInittab, we will (likely) crash when calling
PyMem_RawRealloc(inittab_copy, ...) since the pointer address
is bogus.
Automerge-Triggered-By: @brettcannon
An isolated subinterpreter cannot spawn threads, spawn a child
process or call os.fork().
* Add private _Py_NewInterpreter(isolated_subinterpreter) function.
* Add isolated=True keyword-only parameter to
_xxsubinterpreters.create().
* Allow again os.fork() in "non-isolated" subinterpreters.
This implements full support for # type: <type> comments, # type: ignore <stuff> comments, and the func_type parsing mode for ast.parse() and compile().
Closes https://github.com/we-like-parsers/cpython/issues/95.
(For now, you need to use the master branch of mypy, since another issue unique to 3.9 had to be fixed there, and there's no mypy release yet.)
The only thing missing is `feature_version=N`, which is being tracked in https://github.com/we-like-parsers/cpython/issues/124.
New PyFrame_GetBack() function: get the next outer frame of a frame.
Replace frame->f_back with PyFrame_GetBack(frame) in most code, except
in frameobject.c, ceval.c and genobject.c.
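For illustration, a hedged sketch of walking the frame stack through the new accessor instead of touching frame->f_back directly (header location and the assumption that PyFrame_GetBack() returns a new reference are mine, not stated above):
```
#include <Python.h>
#include "frameobject.h"    /* PyFrame_GetBack() -- header location assumed */

static int
count_outer_frames(PyFrameObject *frame)
{
    int depth = 0;
    Py_XINCREF(frame);
    while (frame != NULL) {
        depth++;
        PyFrameObject *back = PyFrame_GetBack(frame);  /* next outer frame */
        Py_DECREF(frame);          /* assumes the returned reference is strong */
        frame = back;              /* NULL once the outermost frame is reached */
    }
    return depth;
}
```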
Remove the following functions from the C API:
* PyAsyncGen_ClearFreeLists()
* PyContext_ClearFreeList()
* PyDict_ClearFreeList()
* PyFloat_ClearFreeList()
* PyFrame_ClearFreeList()
* PyList_ClearFreeList()
* PySet_ClearFreeList()
* PyTuple_ClearFreeList()
Make these functions private, move them to the internal C API and
change their return type to void.
Call PyGC_Collect() explicitly to free all free lists.
Note: PySet_ClearFreeList() did nothing.
PyFrame_GetCode(frame): return a borrowed reference to the frame
code.
Replace frame->f_code with PyFrame_GetCode(frame) in most code,
except in frameobject.c, genobject.c and ceval.c.
Also add PyFrame_GetLineNumber() to the limited C API.
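A hedged sketch of the accessors named above; following the note above, the reference returned by PyFrame_GetCode() is treated as borrowed here (if it is a strong reference in your build, add a Py_DECREF):
```
#include <Python.h>

static void
report_location(PyFrameObject *frame)
{
    PyCodeObject *code = PyFrame_GetCode(frame);   /* borrowed, per the note above */
    int lineno = PyFrame_GetLineNumber(frame);
    const char *filename = PyUnicode_AsUTF8(code->co_filename);
    if (filename != NULL) {
        printf("%s:%d\n", filename, lineno);
    }
    else {
        PyErr_Clear();
    }
}
```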
This commit also allows passing flags to the new parser in all interfaces and fixes a bug in the parser generator that was causing rules with actions to be inlined, making them disappear.
If _PyCode_InitOpcache() fails in _PyEval_EvalFrameDefault(), use
"goto exit_eval_frame;" rather than "return NULL;" to exit the
function in a consistent state. For example, tstate->frame is now
reset properly.
This is invoked by mypy, using ast.parse(source, "<func_type>", "func_type"). Since the new grammar doesn't yet support the func_type_input start symbol we must use the old compiler in this case to prevent a crash.
https://bugs.python.org/issue40334
* Rename PyConfig.use_peg to _use_peg_parser
* Document PyConfig._use_peg_parser and mark it as deprecated
* Mark -X oldparser option and PYTHONOLDPARSER env var as deprecated
in the documentation.
* Add use_old_parser() and skip_if_new_parser() to test.support
* Remove sys.flags.use_peg: use_old_parser() uses
_testinternalcapi.get_configs() instead.
* Enhance test_embed tests
* subprocess._args_from_interpreter_flags() copies -X oldparser
The constant values of future flags in the __future__ module
are updated in order to prevent collision with compiler flags.
Previously PyCF_ALLOW_TOP_LEVEL_AWAIT was clashing
with CO_FUTURE_DIVISION.
* Replace PY_INT64_T with int64_t
* Replace PY_UINT32_T with uint32_t
* Replace PY_UINT64_T with uint64_t
sha3module.c no longer checks if PY_UINT64_T is defined since it's
always defined and uint64_t is always available on platforms
supported by Python.
Avoid a temporary buffer to create a bytes string: use
PyBytes_FromStringAndSize() to directly allocate a bytes object.
Also clean up the code: PEP 7 formatting, move variable definitions
closer to where they are used. Fix the assertion checking the "j" index.
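For illustration, a minimal sketch of the pattern (hypothetical helper): allocate the bytes object up front and write into it, instead of filling a temporary buffer and copying it afterwards:
```
#include <Python.h>

static PyObject *
make_filled_bytes(Py_ssize_t size, char fill)
{
    /* Passing NULL leaves the contents uninitialized; fill them in place. */
    PyObject *bytes = PyBytes_FromStringAndSize(NULL, size);
    if (bytes == NULL) {
        return NULL;
    }
    memset(PyBytes_AS_STRING(bytes), fill, (size_t)size);
    return bytes;
}
```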
Rename _PyInterpreterState_GET_UNSAFE() to _PyInterpreterState_GET()
for consistency with _PyThreadState_GET() and to have a shorter name
(helps to fit into 80 columns).
Also add "assert(tstate != NULL);" to the function.
Don't access PyInterpreterState.config member directly anymore, but
use new functions:
* _PyInterpreterState_GetConfig()
* _PyInterpreterState_SetConfig()
* _Py_GetConfig()
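For illustration, a hedged sketch of reading the per-interpreter configuration through the new accessor instead of via PyInterpreterState.config (internal C API; the header location and const PyConfig* return type are assumptions):
```
#include <Python.h>
#include "pycore_interp.h"    /* _PyInterpreterState_GetConfig() -- assumed header */

static int
interp_is_verbose(PyInterpreterState *interp)
{
    const PyConfig *config = _PyInterpreterState_GetConfig(interp);
    return config->verbose;
}
```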
Fix the signal handler: it now always uses the main interpreter,
rather than trying to get the current Python thread state.
The following functions now accept an interpreter, instead of a
Python thread state:
* _PyEval_SignalReceived()
* _Py_ThreadCanHandleSignals()
* _PyEval_AddPendingCall()
* COMPUTE_EVAL_BREAKER()
* SET_GIL_DROP_REQUEST(), RESET_GIL_DROP_REQUEST()
* SIGNAL_PENDING_CALLS(), UNSIGNAL_PENDING_CALLS()
* SIGNAL_PENDING_SIGNALS(), UNSIGNAL_PENDING_SIGNALS()
* SIGNAL_ASYNC_EXC(), UNSIGNAL_ASYNC_EXC()
Py_AddPendingCall() now uses the main interpreter if it fails to get
the current Python thread state.
Convert _PyThreadState_GET() and PyInterpreterState_GET_UNSAFE()
macros to static inline functions.
PyInterpreterState_New() is now responsible for creating pending calls;
PyInterpreterState_Delete() now deletes pending calls.
* Rename _PyEval_InitThreads() to _PyEval_InitGIL() and rename
_PyEval_FiniThreads() to _PyEval_FiniGIL().
* _PyEval_InitState() and _PyEval_FiniState() now create and delete
pending calls. _PyEval_InitState() now returns -1 on memory
allocation failure.
* Add init_interp_create_gil() helper function: code shared by
Py_NewInterpreter() and Py_InitializeFromConfig().
* init_interp_create_gil() now also calls _PyEval_FiniGIL(),
_PyEval_InitGIL() and _PyGILState_Init() in subinterpreters, but
these functions now do nothing when called from a subinterpreter.
Add _PyIndex_Check() function to the internal C API: fast inlined
version of PyIndex_Check().
Add Include/internal/pycore_abstract.h header file.
Replace PyIndex_Check() with _PyIndex_Check() in C files of Objects
and Python subdirectories.
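For illustration, a hedged sketch of what such a fast inlined check can look like (hypothetical name; the real implementation lives in pycore_abstract.h and may differ):
```
#include <Python.h>

/* Does the object implement __index__? (no function call into abstract.c) */
static inline int
fast_index_check(PyObject *obj)
{
    PyNumberMethods *as_number = Py_TYPE(obj)->tp_as_number;
    return (as_number != NULL && as_number->nb_index != NULL);
}
```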
Add a private _at_fork_reinit() method to _thread.Lock,
_thread.RLock, threading.RLock and threading.Condition classes:
reinitialize the lock after fork in the child process; reset the lock
to the unlocked state.
Also rename the private _reset_internal_locks() method of
threading.Event to _at_fork_reinit().
* Add _PyThread_at_fork_reinit() private function. It is excluded
from the limited C API.
* threading.Thread._reset_internal_locks() now calls
_at_fork_reinit() on self._tstate_lock rather than creating a new
Python lock object.
The _PyErr_WarnUnawaitedCoroutine() fallback now also sets the
coroutine object as the source of the warning, as done by the Python
implementation warnings._warn_unawaited_coroutine().
Moreover, don't truncate the coroutine name: Python supports
arbitrary string lengths when formatting the message.
Add _PySys_Audit() function to the internal C API: similar to
PySys_Audit(), but requires a mandatory tstate parameter.
Clean up sys_audit_tstate() code: remove the code path for a NULL
tstate, since the function exits at entry if tstate is NULL. Also
remove the code path for a NULL tstate->interp: should_audit() now
ensures that it is not NULL (even if tstate->interp cannot be NULL in
practice).
PySys_AddAuditHook() now checks if tstate is not NULL to decide if
tstate can be used or not, and tstate is set to NULL if the runtime
is not initialized yet.
Use _PySys_Audit() in sysmodule.c.
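For illustration, a sketch of raising an audit event from C with the public PySys_Audit(); the private _PySys_Audit() described above additionally takes an explicit tstate as its first argument (the event name here is hypothetical):
```
#include <Python.h>

static int
audit_open(const char *path)
{
    /* The format string builds the event arguments, Py_BuildValue-style. */
    if (PySys_Audit("myextension.open", "s", path) < 0) {
        return -1;    /* an audit hook rejected the event; an exception is set */
    }
    return 0;
}
```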
Remove two unused imports: _thread and _weakref. Avoid creating a new
winreg builtin module if it's already available in sys.modules.
The winreg module is now stored as "winreg" rather than "_winreg".
PyThreadState.frame is a borrowed reference, not a strong reference:
PyThreadState_Clear() must not call Py_CLEAR(tstate->frame).
Remove test_threading.test_warnings_at_exit(): we cannot guarantee
that the Python thread state of daemon threads is cleared in a
reliable way during Python shutdown.
* Re-add removed classes Suite, slice, Param, AugLoad and AugStore.
* Add docstrings for dummy classes.
* Add docstrings for attribute aliases.
* Set __module__ to "ast" instead of "_ast".
* bpo-22490: Remove "__PYVENV_LAUNCHER__" from the shell environment on macOS
This changeset removes the environment variable "__PYVENV_LAUNCHER__"
during interpreter launch as it is only needed to communicate between
the stub executable in framework installs and the actual interpreter.
Leaving the environment variable present may lead to misbehaviour when
launching other scripts.
* Actually commit the changes for issue 22490...
* Correct typo
Co-Authored-By: Nicola Soranzo <nicola.soranzo@gmail.com>
* Run make patchcheck
Co-authored-by: Jason R. Coombs <jaraco@jaraco.com>
Co-authored-by: Nicola Soranzo <nicola.soranzo@gmail.com>
Remove the _PyRuntime.getframe hook and remove the
_PyThreadState_GetFrame macro which was an alias to
_PyRuntime.getframe. They were only exposed by the internal C API.
Also remove the PyThreadFrameGetter type.
If a thread different than the main thread schedules a pending call
(Py_AddPendingCall()), the bytecode evaluation loop is no longer
interrupted at each bytecode instruction to check for pending calls
which cannot be executed. Only the main thread can execute pending
calls.
Previously, the bytecode evaluation loop was interrupted at each
instruction until the main thread executes pending calls.
* Add _Py_ThreadCanHandlePendingCalls() function.
* SIGNAL_PENDING_CALLS() now only sets eval_breaker to 1 if the
current thread can execute pending calls. Only the main thread can
execute pending calls.
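For illustration, a sketch of scheduling a callback with the public Py_AddPendingCall(); per the notes above, only the main thread will actually run it (the callback itself is hypothetical):
```
#include <Python.h>

static int
my_pending_callback(void *arg)
{
    /* Runs later in the main thread, with the GIL held. */
    printf("pending call ran with arg %p\n", arg);
    return 0;    /* return -1 to report an exception */
}

static void
schedule_callback(void)
{
    /* May be called from any thread, even one not holding the GIL. */
    if (Py_AddPendingCall(my_pending_callback, NULL) < 0) {
        /* The queue is full: the call was not scheduled. */
    }
}
```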
COMPUTE_EVAL_BREAKER() now also checks if the Python thread state
belongs to the main interpreter. Don't break the evaluation loop if
there are pending signals but the Python thread state belongs to a
subinterpreter.
* Add _Py_IsMainThread() function.
* Add _Py_ThreadCanHandleSignals() function.
If a thread different than the main thread gets a signal, the
bytecode evaluation loop is no longer interrupted at each bytecode
instruction to check for pending signals which cannot be handled.
Only the main thread of the main interpreter can handle signals.
Previously, the bytecode evaluation loop was interrupted at each
instruction until the main thread handles signals.
Changes:
* COMPUTE_EVAL_BREAKER() and SIGNAL_PENDING_SIGNALS() no longer set
eval_breaker to 1 if the current thread cannot handle signals.
* take_gil() now always recomputes eval_breaker.
If Py_AddPendingCall() is called in a subinterpreter, the function is
now scheduled to be called from the subinterpreter, rather than being
called from the main interpreter.
Each subinterpreter now has its own list of scheduled calls.
* Move pending and eval_breaker fields from _PyRuntimeState.ceval
to PyInterpreterState.ceval.
* new_interpreter() now calls _PyEval_InitThreads() to create the
pending calls lock.
* Fix Py_AddPendingCall() for subinterpreters. It now calls
_PyThreadState_GET() which works in a subinterpreter if the
caller holds the GIL, and only falls back on
PyGILState_GetThisThreadState() if _PyThreadState_GET()
returns NULL.
Do not apply AST-based optimizations if 'from __future__ import annotations' is used, in order to
prevent information loss in the final version of the annotations.
bpo-37127, bpo-39984:
* trip_signal() and Py_AddPendingCall() now get the current Python
thread state using PyGILState_GetThisThreadState() rather than
_PyRuntimeState_GetThreadState() to be able to get it even if the
GIL is released.
* _PyEval_SignalReceived() now expects tstate rather than ceval.
* Remove the ceval parameter of _PyEval_AddPendingCall(): ceval is now
obtained from the tstate parameter.
* _PyThreadState_DeleteCurrent() now takes tstate rather than
runtime.
* Add ensure_tstate_not_null() helper to pystate.c.
* Add _PyEval_ReleaseLock() function.
* _PyThreadState_DeleteCurrent() now calls
_PyEval_ReleaseLock(tstate) and frees PyThreadState memory after
this call, not before.
* PyGILState_Release(): rename "tcur" variable to "tstate".
* Rename _PyInterpreterState_Get() to PyInterpreterState_Get() and
move it to the limited C API.
* Add _PyInterpreterState_Get() alias to PyInterpreterState_Get() for
backward compatibility with Python 3.8.
Replace the _PyInterpreterState_Get() function call with the
_PyInterpreterState_GET_UNSAFE() macro, which is more efficient but
doesn't check if tstate or interp is NULL.
_Py_GetConfigsAsDict() now uses _PyThreadState_GET().
* sys.settrace(), sys.setprofile() and _lsprof.Profiler.enable() now
properly report PySys_Audit() error if "sys.setprofile" or
"sys.settrace" audit event is denied.
* Add _PyEval_SetProfile() and _PyEval_SetTrace() functions: similar
to PyEval_SetProfile() and PyEval_SetTrace() but take a tstate
parameter and return -1 on error.
* Add _PyObject_FastCallTstate() function.
PyInterpreterState.eval_frame function now requires a tstate (Python
thread state) parameter.
Add private functions to the C API to get and set the frame
evaluation function:
* Add tstate parameter to _PyFrameEvalFunction function type.
* Add _PyInterpreterState_GetEvalFrameFunc() and
_PyInterpreterState_SetEvalFrameFunc() functions.
* Add tstate parameter to _PyEval_EvalFrameDefault().
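For illustration, a hedged sketch of installing a custom frame evaluation function with the accessors named above; note the extra tstate parameter (signatures assumed from the description, private C API):
```
#include <Python.h>

static PyObject *
my_eval_frame(PyThreadState *tstate, PyFrameObject *frame, int throwflag)
{
    /* ... custom logic could go here, then delegate to the default evaluator. */
    return _PyEval_EvalFrameDefault(tstate, frame, throwflag);
}

static void
install_eval_frame(PyInterpreterState *interp)
{
    _PyInterpreterState_SetEvalFrameFunc(interp, my_eval_frame);
}
```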
The 32-bit (49-day) TickCount relied on in EnterNonRecursiveMutex can overflow
in the gap between the 'target' time and the 'now' time WaitForSingleObjectEx
returns, causing the loop to think it needs to wait another 49 days. This is
most likely to happen when the machine is hibernated during
WaitForSingleObjectEx.
This makes acquiring a lock/event/etc from the _thread or threading module
appear to never timeout.
Replace it with GetTickCount64 - this is OK now that Python no longer
supports XP, which lacks it, and GetTickCount64 is already in use for
time.monotonic().
Co-authored-by: And Clover <and.clover@bromium.com>
* Remove the slice type.
* Make Slice a kind of the expr type instead of the slice type.
* Replace ExtSlice(slices) with Tuple(slices, Load()).
* Replace Index(value) with the value itself.
All non-terminal nodes in AST for expressions are now of the expr type.
Add --with-platlibdir option to the configure script: name of the
platform-specific library directory, stored in the new sys.platlibdir
attribute. It is used to build the path of platform-specific dynamic
libraries and the path of the standard library.
It is equal to "lib" on most platforms. On Fedora and SuSE, it is
equal to "lib64" on 64-bit systems.
Co-Authored-By: Jan Matějek <jmatejek@suse.com>
Co-Authored-By: Matěj Cepl <mcepl@cepl.eu>
Co-Authored-By: Charalampos Stratakis <cstratak@redhat.com>
PyGILState_Ensure() no longer calls PyEval_InitThreads() when a
new Python thread state is created. The GIL has been created by
Py_Initialize() since Python 3.7, so it's no longer necessary to call
PyEval_InitThreads() explicitly.
Add an assertion to ensure that the GIL is already created.
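For illustration, the usual PyGILState pattern from a non-Python thread; since Python 3.7 the GIL already exists after Py_Initialize(), so no explicit PyEval_InitThreads() call is needed first:
```
#include <Python.h>

static void
call_from_foreign_thread(PyObject *callable)
{
    PyGILState_STATE gstate = PyGILState_Ensure();   /* acquires the GIL */
    PyObject *result = PyObject_CallNoArgs(callable);
    if (result == NULL) {
        PyErr_Print();
    }
    Py_XDECREF(result);
    PyGILState_Release(gstate);
}
```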
Clear the frames of daemon threads earlier during Python shutdown to
call object destructors, so that "unclosed file" resource warnings are
now emitted for daemon threads in a more reliable way.
Cleanup _PyThreadState_DeleteExcept() code: rename "garbage" to
"list".
* Remove ceval parameter of take_gil(): get it from tstate.
* Move the exit_thread_if_finalizing() call inside take_gil(). Replace
exit_thread_if_finalizing() with tstate_must_exit(): the caller is
now responsible for calling PyThread_exit_thread().
* Move is_tstate_valid() assertion inside take_gil(). Remove
is_tstate_valid(): inline code into take_gil().
* Move gil_created() assertion inside take_gil().
* exit_thread_if_finalizing() now accesses the _PyRuntime variable
directly, rather than using tstate->interp->runtime, since tstate
can be a dangling pointer after Py_Finalize() has been called.
* exit_thread_if_finalizing() is now called *before* calling
take_gil(). _PyRuntime.finalizing is an atomic variable, so
we don't need to hold the GIL to access it.
* Add ensure_tstate_not_null() function to check that tstate is not
NULL at runtime. Check tstate earlier. take_gil() no longer
checks if tstate is NULL.
Cleanup:
* PyEval_RestoreThread() no longer saves/restores errno: it's already
done inside take_gil().
* PyEval_AcquireLock(), PyEval_AcquireThread(),
PyEval_RestoreThread() and _PyEval_EvalFrameDefault() now check if
tstate is valid with the new is_tstate_valid() function which uses
_PyMem_IsPtrFreed().
The Py_FatalError() function is replaced with a macro which
automatically logs the name of the current function, unless the
Py_LIMITED_API macro is defined.
Changes:
* Add _Py_FatalErrorFunc() function.
* Remove the function name from the message of Py_FatalError() calls
which included the function name.
* Update tests.
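For illustration, a hedged sketch of the macro shape described above (not the exact CPython definition; the parameter order of the helper is assumed):
```
/* The helper receives the caller's function name via __func__. */
void _Py_FatalErrorFunc(const char *func, const char *msg);

#define Py_FatalError(message) _Py_FatalErrorFunc(__func__, message)
```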
Convert _PyRuntimeState.finalizing field to an atomic variable:
* Rename it to _finalizing
* Change its type to _Py_atomic_address
* Add _PyRuntimeState_GetFinalizing() and _PyRuntimeState_SetFinalizing()
functions
* Remove the _Py_CURRENTLY_FINALIZING() function: replace it by
testing the _PyRuntimeState_GetFinalizing() value directly
Convert _PyRuntimeState_GetThreadState() to static inline function.
The AST "Suite" node is no longer used and it can be removed from the ASDL definition and related structures (compiler, visitors, ...).
Co-Authored-By: Victor Stinner <vstinner@python.org>
Co-authored-by: Brett Cannon <54418+brettcannon@users.noreply.github.com>
Co-authored-by: Pablo Galindo <Pablogsal@gmail.com>
* Add _PyWarnings_InitState() which only initializes the _warnings
module state (tstate->interp->warnings) without creating a module
object
* Py_InitializeFromConfig() now calls _PyWarnings_InitState() instead
of _PyWarnings_Init()
* Rename also private functions of _warnings.c to avoid confusion
between the public C API and the private C API.
In the `_PyEval_EvalFrameDefault` function, the PREDICT and PREDICTED macros use the same identifier-creation scheme; it can be shared between them, reducing code repetition and ensuring that the same identifier is generated.
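For illustration, a schematic sketch of sharing a single identifier-creation macro between PREDICT and PREDICTED (simplified; the real PREDICT also checks the next opcode before jumping):
```
#define PREDICT_ID(op)   PRED_##op

#define PREDICTED(op)    PREDICT_ID(op):
#define PREDICT(op)      goto PREDICT_ID(op)
```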
* Hard reset + cherry picking the changes.
* 📜🤖 Added by blurb_it.
* Added @vstinner News
* Update Misc/NEWS.d/next/Library/2020-02-11-13-01-38.bpo-38691.oND8Sk.rst
Co-Authored-By: Victor Stinner <vstinner@python.org>
* Hard reset to master
* Hard reset to master + latest changes
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Victor Stinner <vstinner@python.org>
Move the dtoa.h header file to the internal C API as pycore_dtoa.h:
it only contains private functions (prefixed by "_Py").
The math and cmath modules must now be compiled with the
Py_BUILD_CORE macro defined.
gcc -Wcast-qual turns up a number of instances of casting away constness of pointers. Some of these can be safely modified, by either:
Adding the const to the type cast, as in:
- return _PyUnicode_FromUCS1((unsigned char*)s, size);
+ return _PyUnicode_FromUCS1((const unsigned char*)s, size);
or removing the cast entirely, because it's not necessary (but probably was at one time), as in:
- PyDTrace_FUNCTION_ENTRY((char *)filename, (char *)funcname, lineno);
+ PyDTrace_FUNCTION_ENTRY(filename, funcname, lineno);
These changes do not change behavior, but they will make it much easier to check for const-related errors.
The bulk of this patch was generated automatically with:
for name in \
PyObject_Vectorcall \
Py_TPFLAGS_HAVE_VECTORCALL \
PyObject_VectorcallMethod \
PyVectorcall_Function \
PyObject_CallOneArg \
PyObject_CallMethodNoArgs \
PyObject_CallMethodOneArg \
;
do
echo $name
git grep -lwz _$name | xargs -0 sed -i "s/\b_$name\b/$name/g"
done
old=_PyObject_FastCallDict
new=PyObject_VectorcallDict
git grep -lwz $old | xargs -0 sed -i "s/\b$old\b/$new/g"
and then cleaned up:
- Revert changes in docs & news
- Revert changes to backcompat defines in headers
- Nudge misaligned comments
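For illustration, a short sketch using the now-public calling conventions listed above (the "close" method call is a hypothetical example):
```
#include <Python.h>

static PyObject *
demo_calls(PyObject *callable, PyObject *obj, PyObject *arg)
{
    /* callable(arg) -- formerly _PyObject_CallOneArg() */
    PyObject *res = PyObject_CallOneArg(callable, arg);
    if (res == NULL) {
        return NULL;
    }
    Py_DECREF(res);

    /* obj.close() -- formerly _PyObject_CallMethodNoArgs() */
    PyObject *name = PyUnicode_FromString("close");
    if (name == NULL) {
        return NULL;
    }
    res = PyObject_CallMethodNoArgs(obj, name);
    Py_DECREF(name);
    return res;
}
```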
Currently, during runtime destruction, `_PyImport_Cleanup` is clearing the interpreter state before clearing out the modules themselves. This leads to a segfault on modules that rely on the module state to clear themselves up.
For example, let's take the small snippet added in the issue by @DinoV :
```
import _struct
class C:
def __init__(self):
self.pack = _struct.pack
def __del__(self):
self.pack('I', -42)
_struct.x = C()
```
The module `_struct` uses the module state to run `pack`. Therefore, the module state has to be alive until after the module has been cleared out to successfully run `C.__del__`. This happens at line 606, when `_PyImport_Cleanup` calls `_PyModule_Clear`. In fact, the loop that calls `_PyModule_Clear` has in its comments:
> Now, if there are any modules left alive, clear their globals to minimize potential leaks. All C extension modules actually end up here, since they are kept alive in the interpreter state.
That means that we can't clear the module state (which is used by C Extensions) before we run that loop.
Moving `_PyInterpreterState_ClearModules` until after it, fixes the segfault in the code snippet.
Finally, this updates a test in `io` to correctly assert the error that it now throws (since it now finds the io module state). The test that uses this is: `test_create_at_shutdown_without_encoding`. The fact that this test now works is proof that the module state stays alive even when `__del__` is called at module destruction time, so I didn't add a new test for this.
https://bugs.python.org/issue38076
Move the following functions from the public C API to the internal C
API:
* _PyDebug_PrintTotalRefs(),
* _Py_PrintReferenceAddresses()
* _Py_PrintReferences()
PyThreadState.on_delete is a callback used to notify Python when a
thread completes. _thread._set_sentinel() function creates a lock
which is released when the thread completes. It sets on_delete
callback to the internal release_sentinel() function. This lock is
known as Thread._tstate_lock in the threading module.
The release_sentinel() function uses the Python C API. The problem is
that on_delete is called late in the Python finalization, when the C
API is no longer fully working.
The PyThreadState_Clear() function now calls the
PyThreadState.on_delete callback. Previously, that happened in
PyThreadState_Delete().
The release_sentinel() function is now called when the C API is still
fully working.
Replace a few Py_FatalError() calls if tstate is NULL with
assert(tstate != NULL) in ceval.c.
PyEval_AcquireThread(), PyEval_ReleaseThread() and
PyEval_RestoreThread() must never be called with a NULL tstate.
* Add DICT_UPDATE and DICT_MERGE bytecodes. Use them for ** unpacking.
* Remove BUILD_MAP_UNPACK and BUILD_MAP_UNPACK_WITH_CALL, as they are now unused.
* Update magic number for ** unpacking opcodes.
* Update dis.rst to incorporate new bytecodes.
* Add blurb entry.
* Add three new bytecodes: LIST_TO_TUPLE, LIST_EXTEND, SET_UPDATE. Use them to implement star unpacking expressions.
* Remove four bytecodes BUILD_LIST_UNPACK, BUILD_TUPLE_UNPACK, BUILD_SET_UNPACK and BUILD_TUPLE_UNPACK_WITH_CALL opcodes as they are now unused.
* Update magic number and dis.rst for new bytecodes.
* bpo-39336: Allow setattr to fail on modules which aren't assignable
When attaching a child module to a package, if the object in sys.modules raises an AttributeError (e.g. because it is immutable), it causes the whole import to fail. This now allows immutable packages to exist: an ImportWarning is reported and the AttributeError exception is ignored.
Python-ast.h contains a macro named Yield that conflicts with the Yield macro
in Windows system headers. While Python-ast.h has an "undef Yield" directive
to prevent this, it means that Python-ast.h must be included before Windows
header files or we run into a re-declaration warning. In commit c96be811fa
an include for pycore_pystate.h was added which indirectly includes Windows
header files. In this commit we re-order the includes to fix this warning.
* Reorder the __aenter__ and __aexit__ checks for async with
* Add assertions for async with body being skipped
* Swap __aexit__ and __aenter__ loading in the documentation
Break up COMPARE_OP into four logically distinct opcodes:
* COMPARE_OP for rich comparisons
* IS_OP for 'is' and 'is not' tests
* CONTAINS_OP for 'in' and 'not in' tests
* JUMP_IF_NOT_EXC_MATCH for checking exceptions in 'try-except' statements.
This adds a new function named _PyErr_GetExcInfo() that is a variation of the
original PyErr_GetExcInfo() taking a PyThreadState as its first argument.
That function allows retrieving the exception information of any Python
thread -- not only the current one.
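For illustration, a hedged sketch of how such a helper might be used (the prototype is assumed from the description above; reference semantics are assumed to match the public PyErr_GetExcInfo(), which returns new references):
```
#include <Python.h>

/* Assumed prototype: the tstate-taking variation described above. */
void _PyErr_GetExcInfo(PyThreadState *tstate,
                       PyObject **p_type, PyObject **p_value,
                       PyObject **p_traceback);

static int
thread_has_active_exception(PyThreadState *tstate)
{
    PyObject *type, *value, *traceback;
    _PyErr_GetExcInfo(tstate, &type, &value, &traceback);
    int active = (type != NULL && type != Py_None);
    Py_XDECREF(type);
    Py_XDECREF(value);
    Py_XDECREF(traceback);
    return active;
}
```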
The fix changes copy_location() to require an extra node from which to extract the end location, and fixes all 5 call sites.
https://bugs.python.org/issue39235
When producing the bytecode of exception handlers with name binding (like `except Exception as e`) we need to produce a try-finally block to make sure that the name is deleted after the handler is executed to prevent cycles in the stack frame objects. The bytecode associated with this try-finally block does not have source lines associated and it was causing problems when the tracing functionality was running over it.
All keywords should first be checked for pointer identity. Only
after that failed for all keywords (unlikely) should unicode
equality be used.
The original code would call unicode equality on any non-matching
keyword argument. Meaning calling it often e.g. when a function
has many kwargs but only the last one is provided.
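For illustration, a schematic sketch of the lookup order described above (hypothetical helper; CPython's actual code uses its own string-equality primitive):
```
#include <Python.h>

/* Find kwname among the expected (interned) keyword names. */
static Py_ssize_t
find_keyword(PyObject *kwname, PyObject *const *expected, Py_ssize_t n)
{
    /* Fast path: pointer identity, which almost always hits because
       keyword names are interned. */
    for (Py_ssize_t i = 0; i < n; i++) {
        if (expected[i] == kwname) {
            return i;
        }
    }
    /* Slow path, only reached if no pointer matched: unicode equality. */
    for (Py_ssize_t i = 0; i < n; i++) {
        if (PyUnicode_Compare(expected[i], kwname) == 0) {
            return i;
        }
    }
    return -1;    /* unknown keyword */
}
```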
Each Python subinterpreter now has its own "small integer
singletons": numbers in the [-5; 256] range.
It is no longer possible to change the number of small integers at
build time by overriding the NSMALLNEGINTS and NSMALLPOSINTS macros:
the macros should now be modified manually in the pycore_pystate.h
header file.
For now, continue to share _PyLong_Zero and _PyLong_One singletons
between all subinterpreters.
When parsing an "elif" node, lineno and col_offset of the node now point to the "elif" keyword and not to its condition, making it consistent with the "if" node.
https://bugs.python.org/issue39031
Automerge-Triggered-By: @pablogsal
In Python 3.9.0a1, sys.argv[0] was made an absolute path if a filename
was specified on the command line. Revert this change, since most
users expect sys.argv to be unmodified.
new_interpreter() now calls _PySys_Create() to create a new sys
module isolated from the main interpreter. It now calls
_PySys_InitCore() and _PyImport_FixupBuiltin().
init_interp_main() now calls _PySys_InitMain().
new_interpreter() now calls _PyBuiltin_Init() to create the builtins
module and calls _PyImport_FixupBuiltin(), rather than using
_PyImport_FindBuiltin(tstate, "builtins").
pycore_init_builtins() is now responsible for initializing
interp->builtins_copy: inline _PyImport_Init() and remove this
function.
If _PyImport_FixupExtensionObject() is called from a subinterpreter,
leave extensions unchanged and don't copy the module dictionary
into def->m_base.m_copy.