Fix data races in the method cache in free-threaded builds
These are technically data races, but I think they're benign (to
the extent that's actually possible). We update cache entries
non-atomically but read them atomically from another thread, and as far
as I can see there's nothing that establishes a happens-before
relationship between the reads and writes.
Fix mimalloc allocator for huge memory allocation (around
8,589,934,592 GiB) on s390x.
Abort allocation early in mimalloc if the number of slices doesn't
fit into uint32_t, to prevent an integer overflow (the 64-bit
size_t was cast to uint32_t).
We want code objects to use deferred reference counting in the
free-threaded build. This requires them to be tracked by the GC, so we
set `Py_TPFLAGS_HAVE_GC` in the free-threaded build, but not the default
build.
Guido pointed out to me that some details about the per-interpreter state for the builtin types aren't especially clear. I'm addressing that by:
* adding a comment explaining that state
* adding some asserts to point out the relationship between each index and the interp/global runtime state
This change gives a significant speedup, as the METH_FASTCALL calling
convention is now used. The following bytes and bytearray methods are adapted:
- count()
- find()
- index()
- rfind()
- rindex()
Co-authored-by: Inada Naoki <songofacandy@gmail.com>
Fall back to tp_call() for cases when arguments are passed by name.
Co-authored-by: Donghee Na <donghee.na@python.org>
Co-authored-by: Victor Stinner <vstinner@python.org>
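For context, a rough sketch of what the METH_FASTCALL convention looks like at the C level; the module and method names here are invented, and the real implementations are generated by Argument Clinic:

```c
#include <Python.h>

/* METH_FASTCALL passes arguments as a C array plus a count, avoiding the
   temporary argument tuple that METH_VARARGS has to build. */
static PyObject *
example_find(PyObject *self, PyObject *const *args, Py_ssize_t nargs)
{
    if (nargs < 1 || nargs > 3) {
        PyErr_SetString(PyExc_TypeError, "find() takes 1 to 3 arguments");
        return NULL;
    }
    /* ... perform the search using args[0] .. args[nargs-1] ... */
    return PyLong_FromSsize_t(-1);
}

static PyMethodDef example_methods[] = {
    {"find", (PyCFunction)(void (*)(void))example_find, METH_FASTCALL,
     "find sketch"},
    {NULL, NULL, 0, NULL},
};
```

Since METH_FASTCALL alone doesn't accept keyword arguments, calls that pass arguments by name take the tp_call() fallback mentioned above.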
This keeps track of the per-thread total reference count operations in
PyThreadState in the free-threaded builds. The count is merged into the
interpreter's total when the thread exits.
Most mutable data is protected by a striped lock that is keyed on the
referenced object's address. The weakref's hash is protected using the
weakref's per-object lock.
Note that this only affects free-threaded builds. Apart from some minor
refactoring, the added code is all either gated by `ifdef`s or is a no-op
(e.g. `Py_BEGIN_CRITICAL_SECTION`).
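A minimal sketch of the lock-striping idea, assuming a fixed table of PyMutex stripes; the table size and hashing here are illustrative, not what CPython actually uses:

```c
#include <Python.h>
#include <stdint.h>

#define NUM_STRIPES 64

/* A zero-initialized PyMutex is unlocked. */
static PyMutex stripe_locks[NUM_STRIPES];

static inline PyMutex *
striped_lock_for(void *referenced_object)
{
    uintptr_t addr = (uintptr_t)referenced_object;
    /* Discard alignment bits before reducing to a stripe index, so nearby
       objects don't all map to the same stripe. */
    return &stripe_locks[(addr >> 4) % NUM_STRIPES];
}
```

Striping keeps the lock overhead constant no matter how many weakrefs exist, at the cost of occasional contention between unrelated objects that map to the same stripe.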
I had meant to switch everything to InterpreterError when I added it a while back. At the time I missed a few key spots.
As part of this, I've added print-the-exception to _PyXI_InitTypes() and fixed an error case in `_PyStaticType_InitBuiltin()`.
This change gives a significant speedup, as the METH_FASTCALL calling
convention is now used. The following methods are adapted:
- str.count
- str.find
- str.index
- str.rfind
- str.rindex
Use the fully qualified type name in repr() of weakref.ref and
weakref.proxy types.
Fix a crash in proxy_repr() when the reference is dead.
Also add test_ref_repr() and test_proxy_repr().
Add a special case for `list.extend(dict)` and `list(dict)` so that those
patterns behave atomically with respect to modifications to the list or
dictionary.
This is required by multiprocessing, which assumes that
`list(_finalizer_registry)` is atomic.
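A sketch of the shape of such a special case, using the two-object critical section macros; the helper name is invented:

```c
#include <Python.h>

/* Lock both containers for the duration of the copy so neither can be
   mutated concurrently. In default (with GIL) builds these macros are
   no-ops. */
static int
extend_list_from_dict_keys(PyObject *list, PyObject *dict)
{
    int result = 0;
    Py_BEGIN_CRITICAL_SECTION2(list, dict);
    /* ... append every key of dict to list while both locks are held ... */
    Py_END_CRITICAL_SECTION2();
    return result;
}
```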
Read the MRO in a thread-unsafe way in `PyType_IsSubtype` to avoid locking. Fixing this is tracked in #117306.
The motivation for this change is in support of making weakrefs thread-safe in free-threaded builds:
`WeakValueDictionary` uses a special dictionary function, `_PyDict_DelItemIf`
to remove dead weakrefs from the dictionary. `_PyDict_DelItemIf` removes a key
if a user supplied predicate evaluates to true for the value associated with
the key. Crucially for the `WeakValueDictionary` use case, the predicate
evaluation + deletion sequence is atomic, provided that the predicate doesn’t
suspend. The predicate used by `WeakValueDictionary` includes a subtype check,
which we must ensure doesn't suspend in free-threaded builds.
Rewrote binarysort() for clarity.
Also changed the signature to be more coherent (it was mixing sortslice with raw pointers).
No change in method or functionality. However, I left some experiments in, disabled for now
via `#if` tricks. Since this code was first written, some kinds of comparisons have gotten
enormously faster (like for lists of floats), which changes the tradeoffs.
For example, plain insertion sort's simpler innermost loop and highly predictable branches
leave it very competitive (even beating, by a bit) binary insertion when comparisons are
very cheap, despite that it can do many more compares. And it wins big on runs that
are already sorted (moving the next one in takes only 1 compare then).
So I left code for a plain insertion sort, to make future experimenting easier.
Also made the maximum value of minrun a `#define` (`MAX_MINRUN`) to make
experimenting with that easier too.
And another bit of `#if`-disabled code rewrites binary insertion's innermost loop to
remove its unpredictable branch. Surprisingly, this doesn't really seem to help
overall. I'm unclear on why not. It certainly adds more instructions, but they're very
simple, and it's hard to believe they cost as much as a branch miss.
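For reference, the `#if`-disabled plain insertion sort amounts to roughly this (a generic sketch over a pointer array; the real code works on CPython's sortslice and compares via ISLT):

```c
/* Plain insertion sort: a simpler innermost loop than binary insertion,
   with highly predictable branches. On an already-sorted run, inserting
   the next element costs exactly one compare. */
static void
insertion_sort(PyObject **a, Py_ssize_t n,
               int (*lt)(PyObject *, PyObject *))
{
    for (Py_ssize_t i = 1; i < n; i++) {
        PyObject *pivot = a[i];
        Py_ssize_t j = i;
        while (j > 0 && lt(pivot, a[j - 1])) {
            a[j] = a[j - 1];  /* shift larger elements right */
            j--;
        }
        a[j] = pivot;
    }
}
```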
Changes to the function version cache:
- In addition to the function object, also store the code object,
and allow the latter to be retrieved even if the function has been evicted.
- Stop assigning new function versions after a critical attribute (e.g. `__code__`)
has been modified; the version is permanently reset to zero in this case.
- Changes to `__annotations__` are no longer considered critical. (This fixes gh-109998.)
Changes to the Tier 2 optimization machinery:
- If we cannot map a function version to a function, but it is still mapped to a code object,
we continue projecting the trace.
The operand of the `_PUSH_FRAME` and `_POP_FRAME` opcodes can be either NULL,
a function object, or a code object with the lowest bit set.
This allows us to trace through code that calls an ephemeral function,
i.e., a function that may not be alive when we are constructing the executor,
e.g. a generator expression or certain nested functions.
We will lose globals removal inside such functions,
but we can still do other peephole operations
(and even possibly [call inlining](https://github.com/python/cpython/pull/116290),
if we decide to do it), which only need the code object.
As before, if we cannot retrieve the code object from the cache, we stop projecting.
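Decoding that operand is a one-bit tag check; a sketch (the helper name is invented):

```c
#include <Python.h>
#include <stdint.h>

/* operand is NULL, a PyFunctionObject *, or a PyCodeObject * with the
   lowest bit set. Return the code object, or NULL if nothing is known. */
static PyCodeObject *
operand_to_code(uintptr_t operand)
{
    if (operand == 0) {
        return NULL;
    }
    if (operand & 1) {
        /* Low bit set: a tagged code object for an ephemeral function. */
        return (PyCodeObject *)(operand & ~(uintptr_t)1);
    }
    /* Otherwise the function itself is cached; its code is reachable. */
    return (PyCodeObject *)PyFunction_GET_CODE((PyObject *)operand);
}
```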
I added it quite a while ago as a strategy for managing interpreter lifetimes relative to the PEP 554 (now 734) implementation. Relatively recently I refactored that implementation to no longer rely on InterpreterID objects. Thus now I'm removing it.
Add Py_GetConstant() and Py_GetConstantBorrowed() functions.
In the limited C API version 3.13, getting Py_None, Py_False,
Py_True, Py_Ellipsis and Py_NotImplemented singletons is now
implemented as function calls at the stable ABI level to hide
implementation details. Getting these constants still returns borrowed
references.
Add _testlimitedcapi/object.c and test_capi/test_object.py to test
Py_GetConstant() and Py_GetConstantBorrowed() functions.
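Example use; the Py_CONSTANT_* identifiers are the constant IDs added alongside these functions:

```c
#include <Python.h>

static PyObject *
get_none_and_true(void)
{
    /* Py_GetConstant() returns a strong reference ... */
    PyObject *none = Py_GetConstant(Py_CONSTANT_NONE);
    if (none == NULL) {
        return NULL;
    }
    Py_DECREF(none);

    /* ... while Py_GetConstantBorrowed() returns a borrowed one. */
    return Py_GetConstantBorrowed(Py_CONSTANT_TRUE);
}
```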
Mostly we unify the two different implementations of the conversion code (from PyObject * to int64_t). We also drop the PyArg_ParseTuple()-style converter function, as well as rename and move PyInterpreterID_LookUp().
Starting in Python 3.12, we prevented calling fork() and starting new threads
during interpreter finalization (shutdown). This has led to a number of
regressions and flaky tests. We should not prevent starting new threads
(or `fork()`) until all non-daemon threads exit and finalization starts in
earnest.
This changes the checks to use `_PyInterpreterState_GetFinalizing(interp)`,
which is set immediately before terminating non-daemon threads.
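The shape of the new check, roughly; this is a sketch assuming the internal headers (Py_BUILD_CORE) are available, using the PythonFinalizationError exception also added in this series, and the message text is illustrative:

```c
static int
check_not_finalizing(PyInterpreterState *interp)
{
    /* Non-NULL only once finalization has begun in earnest, i.e. right
       before non-daemon threads are terminated. */
    if (_PyInterpreterState_GetFinalizing(interp) != NULL) {
        PyErr_SetString(PyExc_PythonFinalizationError,
                        "can't start new thread at interpreter shutdown");
        return -1;
    }
    return 0;
}
```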
Somehow we ended up with two separate counter variables tracking "the next function version".
Most likely this was a historical accident where an old branch was updated incorrectly.
This PR merges the two counters into a single one: `interp->func_state.next_version`.
Since 3.12, allocating a GC object cannot immediately trigger GC. This
allows us to simplify the logic for creating the canonical callback-less
weakref.
* GH-116554: Relax list.sort()'s notion of "descending" run
Rewrote `count_run()` so that sub-runs of equal elements no longer end a descending run. Both ascending and descending runs can have arbitrarily many sub-runs of arbitrarily many equal elements now. This is tricky, because we only use ``<`` comparisons, so checking for equality doesn't come "for free". Surprisingly, it turned out there's a very cheap (one comparison) way to determine whether an ascending run consisted of all-equal elements. That sealed the deal.
In addition, after a descending run is reversed in-place, we now go on to see whether it can be extended by an ascending run that just happens to be adjacent. This succeeds in finding at least one additional element to append about half the time, and so appears to more than repay its cost (the savings come from getting to skip a binary search, when a short run is artificially forced to length MINRUN later, for each new element `count_run()` can add to the initial run).
While these have been in the back of my mind for years, a question on StackOverflow pushed me to action:
https://stackoverflow.com/questions/78108792/
They were wondering why it took about 4x longer to sort a list like:
[999_999, 999_999, ..., 2, 2, 1, 1, 0, 0]
than "similar" lists. Of course that runs very much faster after this patch.
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Pieter Eendebak <pieter.eendebak@gmail.com>
This makes nearly all the operations on set thread-safe in the free-threaded build, with the exception of `_PySet_NextEntry` and `setiter_iternext`.
Co-authored-by: Sam Gross <colesbury@gmail.com>
Co-authored-by: Erlend E. Aasland <erlend.aasland@protonmail.com>
This implements the delayed reuse of mimalloc pages that contain Python
objects in the free-threaded build.
Allocations of the same size class are grouped in data structures called
pages. These are different from operating system pages. For thread-safety, we
want to ensure that memory used to store PyObjects remains valid as long as
there may be concurrent lock-free readers; we want to delay using it for
other size classes, in other heaps, or returning it to the operating system.
When a mimalloc page becomes empty, instead of immediately freeing it, we tag
it with a QSBR goal and insert it into a per-thread state linked list of
pages to be freed. When mimalloc needs a fresh page, we process the queue and
free any still-empty pages that are now deemed safe to free. Pages
waiting to be freed are still available for allocations of the same size
class, and allocating from a page prevents it from being freed. There is
additional logic to handle abandoned pages when threads exit.
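In very rough sketch form (all names here are invented; the real code lives in mimalloc and CPython's QSBR implementation):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-ins for the real mimalloc/QSBR machinery. */
typedef struct Page Page;
extern bool page_is_empty(Page *page);
extern void free_page_now(Page *page);
extern uint64_t qsbr_new_goal(void);           /* start a grace period */
extern bool qsbr_goal_reached(uint64_t goal);  /* all readers quiesced? */

typedef struct DelayedPage {
    struct DelayedPage *next;
    Page *page;
    uint64_t goal;
} DelayedPage;

static DelayedPage *delayed_pages;  /* per-thread state in the real code */

/* When a page becomes empty: tag it with a QSBR goal and enqueue it
   instead of freeing it immediately. */
static void
retire_page(DelayedPage *entry, Page *page)
{
    entry->page = page;
    entry->goal = qsbr_new_goal();
    entry->next = delayed_pages;
    delayed_pages = entry;
}

/* When a fresh page is needed: sweep the queue and free pages that are
   still empty and whose grace period has passed. Pages that were reused
   for allocations in the meantime simply stay alive. */
static void
process_delayed_pages(void)
{
    DelayedPage **pp = &delayed_pages;
    while (*pp != NULL) {
        DelayedPage *e = *pp;
        if (page_is_empty(e->page) && qsbr_goal_reached(e->goal)) {
            *pp = e->next;
            free_page_now(e->page);
        }
        else {
            pp = &e->next;
        }
    }
}
```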
This sets `MI_DEBUG` to `2` in debug builds to enable `mi_assert_internal()`
calls. Expensive internal assertions are not enabled.
This also disables an assertion in free-threaded builds that would be
triggered by the free-threaded GC because we traverse heaps that are not
owned by the current thread.
The previous code had two bugs. First, the debug offset in the mimalloc
heap includes the two pymalloc debug words, but the pointer passed to
fill_mem_debug does not include them. Second, the current object heap is
the correct source for allocations, but not for deallocations.
This adds `_PyMem_FreeDelayed()` and supporting functions. The
`_PyMem_FreeDelayed()` function frees memory with the same allocator as
`PyMem_Free()`, but after some delay to ensure that concurrent lock-free
readers have finished.
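The intended usage pattern, sketched with invented names (`Table`, `TableRef`, `replace_table`) and assuming the internal atomics helpers:

```c
/* Invented example types. */
typedef struct Table Table;
typedef struct { Table *table; } TableRef;

/* Publish a replacement table, then free the old one only after any
   concurrent lock-free readers are guaranteed to be done with it. */
static void
replace_table(TableRef *ref, Table *new_table)
{
    Table *old = _Py_atomic_exchange_ptr(&ref->table, new_table);
    _PyMem_FreeDelayed(old);
}
```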
This avoids filling the memory occupied by ob_tid, ob_ref_local, and
ob_ref_shared with debug bytes (e.g., 0xDD) in mimalloc in the
free-threaded build.
The GC keeps track of the number of allocations (less deallocations)
since the last GC. This buffers the count in thread-local state and uses
atomic operations to modify the per-interpreter count. The thread-local
buffering avoids contention on shared state.
A consequence is that the GC scheduling is not as precise, so
"test_sneaky_frame_object" is skipped because it requires that the GC be
run exactly after allocating a frame object.
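A minimal sketch of the buffering, with invented names and an illustrative threshold; deltas accumulate thread-locally and are flushed to the shared counter in batches:

```c
#include <Python.h>

#define LOCAL_GC_BUFFER_LIMIT 100  /* illustrative threshold */

static Py_ssize_t shared_gc_count;  /* per-interpreter in the real code */

typedef struct {
    Py_ssize_t pending;  /* allocations minus deallocations, unflushed */
} ThreadGCState;

static void
note_gc_alloc(ThreadGCState *t, Py_ssize_t delta)
{
    t->pending += delta;
    /* Most updates stay thread-local; only occasionally do we touch the
       shared counter, which avoids cache-line contention. */
    if (t->pending > LOCAL_GC_BUFFER_LIMIT ||
        t->pending < -LOCAL_GC_BUFFER_LIMIT)
    {
        _Py_atomic_add_ssize(&shared_gc_count, t->pending);
        t->pending = 0;
    }
}
```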
Makes _PyType_Lookup thread-safe, including:
- thread safety of the underlying cache
- thread-safe mutation of the MRO and type members
Also, _PyType_GetMRO and _PyType_GetBases currently return borrowed references, which aren't safe.
Add PythonFinalizationError exception. This exception, derived from
RuntimeError, is raised when an operation is blocked during Python
finalization.
The following functions now raise PythonFinalizationError, instead of
RuntimeError:
* _thread.start_new_thread()
* subprocess.Popen
* os.fork()
* os.fork1()
* os.forkpty()
Moreover, the _winapi.Overlapped finalizer now logs an unraisable
PythonFinalizationError, instead of an unraisable RuntimeError.
We add _winapi.BatchedWaitForMultipleObjects to wait for larger numbers of handles.
This is an internal module, hence undocumented, and should be used with caution.
Check the docstring for info before using BatchedWaitForMultipleObjects.
Biased reference counting maintains two refcount fields in each object:
`ob_ref_local` and `ob_ref_shared`. The true refcount is the sum of these two
fields. In some cases, when refcounting operations are split across threads,
the ob_ref_shared field can be negative (although the total refcount must be
at least zero). In this case, the thread that decremented the refcount
requests that the owning thread give up ownership and merge the refcount
fields.
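A simplified sketch of how the logical refcount is derived in free-threaded builds; in the real layout the low bits of `ob_ref_shared` carry state flags (hence the shift), so treat this as illustrative, not authoritative:

```c
/* Only meaningful in Py_GIL_DISABLED builds, where PyObject carries
   ob_ref_local and ob_ref_shared fields. */
static Py_ssize_t
logical_refcount(PyObject *op)
{
    uint32_t local = _Py_atomic_load_uint32_relaxed(&op->ob_ref_local);
    Py_ssize_t shared = _Py_atomic_load_ssize_relaxed(&op->ob_ref_shared);
    /* shared may be negative when other threads' decrefs outran their
       increfs; the combined value is still non-negative. */
    return (Py_ssize_t)local + (shared >> _Py_REF_SHARED_SHIFT);
}
```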
Fixes a few issues related to refleak tracking in the free-threaded build:
- Count blocks in abandoned segments
- Call `_mi_page_free_collect` earlier during heap traversal in order to get an accurate count of blocks in use.
- Add missing refcount tracking in `_Py_DecRefSharedDebug` and `_Py_ExplicitMergeRefcount`.
- Pause threads in `get_num_global_allocated_blocks` to ensure that traversing the mimalloc heaps is safe.
This changes a number of internal usages of `PyDict_SetDefault` to use `PyDict_SetDefaultRef`.
Co-authored-by: Erlend E. Aasland <erlend.aasland@protonmail.com>
Starts adding thread safety to dict objects:
- Use @critical_section for APIs which are exposed via Argument Clinic and don't directly correlate with a public C API which needs to acquire the lock.
- Use a _lock_held suffix to keep changes to complicated functions simple, just wrapping them with a critical section.
- Acquire and release the lock in an existing function where it won't be overly disruptive to the existing logic.
The `PyDict_SetDefaultRef` function is similar to `PyDict_SetDefault`,
but returns a strong reference through the optional `**result` pointer
instead of a borrowed reference.
Co-authored-by: Petr Viktorin <encukou@gmail.com>
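Example usage; the return value distinguishes the three outcomes:

```c
static int
get_or_insert(PyObject *dict, PyObject *key, PyObject *default_value)
{
    PyObject *value;  /* receives a strong reference */
    int r = PyDict_SetDefaultRef(dict, key, default_value, &value);
    if (r < 0) {
        return -1;  /* error; value is NULL */
    }
    /* r == 0: key was missing and default_value was inserted;
       r == 1: key was already present. Either way we own value. */
    Py_DECREF(value);
    return 0;
}
```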
The new `PyList_GetItemRef` is similar to `PyList_GetItem`, but returns
a strong reference instead of a borrowed reference. Additionally, if the
passed "list" object is not a list, the function sets a `TypeError`
instead of calling `PyErr_BadInternalCall()`.
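For example:

```c
static PyObject *
first_item(PyObject *list)
{
    /* Strong reference: safe even if another thread mutates the list
       right after the call. Sets TypeError for non-lists and IndexError
       for out-of-range indices. */
    return PyList_GetItemRef(list, 0);
}
```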
* gh-112529: Remove PyGC_Head from object pre-header in free-threaded build
This avoids allocating space for PyGC_Head in the free-threaded build.
The GC implementation for free-threaded CPython does not use the
PyGC_Head structure.
* The trashcan mechanism uses the `ob_tid` field instead of `_gc_prev`
in the free-threaded build.
* The GDB libpython.py file now determines the offset of the managed
dict field based on whether the running process is a free-threaded
build. Those are identified by the `ob_ref_local` field in PyObject.
* Fixes `_PySys_GetSizeOf()`, which incorrectly included the
  size of `PyGC_Head` in the size of static `PyTypeObject`.
PyObject_GetBuffer() now raises a SystemError if called with
PyBUF_READ or PyBUF_WRITE as flags. These flags should
only be used with the PyMemoryView_* C API.
For interpreters that share state with the main interpreter, this points
to the same static memory structure. For interpreters with their own
obmalloc state, it is heap allocated. Add free_obmalloc_arenas() which
will free the obmalloc arenas and radix tree structures for interpreters
with their own obmalloc state.
Co-authored-by: Eric Snow <ericsnowcurrently@gmail.com>
This adds support for visiting abandoned pages in mimalloc and improves
the performance of the page visiting code. Abandoned pages contain
memory blocks from threads that have exited. At some point, they may be
later reclaimed by other threads. We still need to visit those pages in
the free-threaded GC because they contain live objects.
This also reduces the overhead of visiting mimalloc pages:
* Special-case full pages, empty pages, and pages containing only a
  single block.
* Fix free_map to use one bit instead of one byte per block.
* Use fast integer division by a constant algorithm when computing
block offset from block size and index.
* gh-112529: Use GC heaps for GC allocations in free-threaded builds
The free-threaded build's garbage collector implementation will need to
find GC objects by traversing mimalloc heaps. This hooks up the
allocation calls with the correct heaps by using a thread-local
"current_obj_heap" variable.
* Refactor out setting heap based on type
* gh-112529: Track if debug allocator is used as underlying allocator
The GC implementation for free-threaded builds will need to accurately
detect if the debug allocator is used because it affects the offset of
the Python object from the beginning of the memory allocation. The
current implementation of `_PyMem_DebugEnabled` only considers if the
debug allocator is the outer-most allocator; it doesn't handle the case
of "hooks" like tracemalloc being used on top of the debug allocator.
This change enables more accurate detection of the debug allocator by
tracking when debug hooks are enabled.
* Simplify _PyMem_DebugEnabled
This fixes `_PyInterpreterState_GetAllocatedBlocks()` and
`_Py_GetGlobalAllocatedBlocks()` in the free-threaded builds. The
gh-113263 change that introduced multiple mimalloc heaps per-thread
broke the logic for counting the number of allocated blocks. For subtle
reasons, this led to reported reference count leaks in the refleaks
buildbots.
`PyComplex_RealAsDouble()`/`PyComplex_ImagAsDouble()` now try to convert
an object to a `complex` instance using its `__complex__()` method
before falling back to the `__float__()` method.
PyComplex_ImagAsDouble() also will not silently return 0.0 for
non-complex types anymore. Instead we try to call PyFloat_AsDouble()
and return 0.0 only if this call is successful.
gh-113750: Fix object resurrection on free-threaded builds
This avoids the undesired re-initializing of fields like `ob_gc_bits`,
`ob_mutex`, and `ob_tid` when an object is resurrected due to its
finalizer being called.
This change has no effect on the default (with GIL) build.
* gh-112532: Tag mimalloc heaps and pages
Mimalloc pages are data structures that contain contiguous allocations
of the same block size. Note that they are distinct from operating
system pages. Mimalloc pages are contained in segments.
When a thread exits, it abandons any segments and contained pages that
have live allocations. These segments and pages may be later reclaimed
by another thread. To support GC and certain thread-safety guarantees in
free-threaded builds, we want pages to only be reclaimed by the
corresponding heap in the claimant thread. For example, we want pages
containing GC objects to only be claimed by GC heaps.
This allows heaps and pages to be tagged with an integer tag that is
used to ensure that abandoned pages are only claimed by heaps with the
same tag. Heaps can be initialized with a tag (0-15); any page allocated
by that heap copies the corresponding tag.
* Fix conversion warning
* gh-112532: Isolate abandoned segments by interpreter
Mimalloc segments are data structures that contain memory allocations along
with metadata. Each segment is "owned" by a thread. When a thread exits,
it abandons its segments to a global pool to be later reclaimed by other
threads. This changes the pool to be per-interpreter instead of process-wide.
This will be important for when we use mimalloc to find GC objects in the
`--disable-gil` builds. We want heaps to only store Python objects from a
single interpreter. Absent this change, the abandoning and reclaiming process
could break this isolation.
* Add missing '&_mi_abandoned_default' to 'tld_empty'
Fix undefined behavior warnings (UBSan -fsanitize=function), for example:
Python/generated_cases.c.h:3315:13: runtime error: call to function mappingproxy_dealloc through pointer to incorrect function type 'void (*)(struct _object *)'
descrobject.c:1160: note: mappingproxy_dealloc defined here
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior Python/generated_cases.c.h:3315:13 in
Fix undefined behavior warnings (UBSan -fsanitize=function), for example:
Objects/object.c:674:11: runtime error: call to function list_repr through pointer to incorrect function type 'struct _object *(*)(struct _object *)'
listobject.c:382: note: list_repr defined here
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior Objects/object.c:674:11 in
* gh-112532: Use separate mimalloc heaps for GC objects
In `--disable-gil` builds, we now use four separate heaps in
anticipation of using mimalloc to find GC objects when the GIL is
disabled. To support this, we also make a few changes to mimalloc:
* `mi_heap_t` and `mi_tld_t` initialization is split from allocation.
This allows us to have a `mi_tld_t` per-`PyThreadState`, which is
important to keep interpreter isolation, since the same OS thread may
run in multiple interpreters (using different PyThreadStates.)
* Heap abandoning (mi_heap_collect_ex) can now be called from a
different thread than the one that created the heap. This is necessary
because we may clear and delete the containing PyThreadStates from a
different thread during finalization and after fork().
* Use enum instead of defines and guard mimalloc includes.
* The enum typedef will be convenient for future PRs that use the type.
* Guarding the mimalloc includes allows us to unconditionally include
pycore_mimalloc.h from other header files that rely on things like
`struct _mimalloc_thread_state`.
* Only define _mimalloc_thread_state in Py_GIL_DISABLED builds
gh-112027: Don't print mimalloc warning after mmap
This changes the warning to a "verbose"-level message in prim.c. The
address passed to mmap is only a hint -- it's normal for mmap() to
sometimes not respect the hint and return a different address.
* Make memory_clear() compatible with inquiry
* Make memory_traverse() compatible with traverseproc
* Make memory_dealloc() compatible with destructor
* Make memory_repr() compatible with reprfunc
* Make memory_hash() compatible with hashfunc
* Make memoryiter_next() compatible with iternextfunc
* Make memoryiter_traverse() compatible with traverseproc
* Make memoryiter_dealloc() compatible with destructor
* Make several functions compatible with getter
* Make a few functions compatible with getter
* Make memory_item() compatible with ssizeargfunc
* Make memory_subscript() compatible with binaryfunc
* Make memory_length() compatible with lenfunc
* Make memory_ass_sub() compatible with objobjargproc
* Make memory_releasebuf() compatible with releasebufferproc
* Make memory_getbuf() compatible with getbufferproc
* Make mbuf_clear() compatible with inquiry
* Make mbuf_traverse() compatible with traverseproc
* Make mbuf_dealloc() compatible with destructor
This replaces some usages of PyThread_type_lock with PyMutex, which does not require memory allocation to initialize.
This simplifies some of the runtime initialization and is also one step towards avoiding changing the default raw memory allocator during initialize/finalization, which can be non-thread-safe in some circumstances.
Previously, arbitrary errors could be cleared while formatting error
messages for ImportError or AttributeError for modules. Now all
unexpected errors are reported.
This avoids:
python3.13 Tools/unicode/makeunicodedata.py
python3.13: can't open file '.../build/debug/Tools/unicode/makeunicodedata.py': [Errno 2] No such file or directory
make: *** [Makefile:1498: regen-unicodedata] Error 2
Re-run `make regen-unicodedata` to update the script path in generated files.
_PyDict_Pop_KnownHash(): remove the default value; the return type
becomes int.
Co-authored-by: Stefan Behnel <stefan_ml@behnel.de>
Co-authored-by: Antoine Pitrou <pitrou@free.fr>
* Split list_extend() into two sub-functions: list_extend_fast() and
list_extend_iter().
* list_inplace_concat() no longer has to call Py_DECREF() on the
list_extend() result, since list_extend() now returns an int.
Critical sections are helpers to replace the global interpreter lock
with finer grained locking. They provide similar guarantees to the GIL
and avoid the deadlock risk that plain locking involves. Critical
sections are implicitly ended whenever the GIL would be released. They
are resumed when the GIL would be acquired. Nested critical sections
behave as if the sections were interleaved.
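A usage sketch with an invented helper; in the default build the macros reduce to no-ops under the GIL:

```c
extern int do_mutation_locked(PyObject *obj);  /* hypothetical helper */

static int
mutate_object(PyObject *obj)
{
    int result;
    Py_BEGIN_CRITICAL_SECTION(obj);
    /* If anything in here would release the GIL (e.g. by calling back
       into Python), the critical section is implicitly suspended and
       later resumed, mirroring the GIL's behavior. */
    result = do_mutation_locked(obj);
    Py_END_CRITICAL_SECTION();
    return result;
}
```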
* Revert "gh-111089: Use PyUnicode_AsUTF8() in Argument Clinic (#111585)"
This reverts commit d9b606b3d0.
* Revert "gh-111089: Use PyUnicode_AsUTF8() in getargs.c (#111620)"
This reverts commit cde1071b2a.
* Revert "gh-111089: PyUnicode_AsUTF8() now raises on embedded NUL (#111091)"
This reverts commit d731579bfb.
* Revert "gh-111089: Add PyUnicode_AsUTF8() to the limited C API (#111121)"
This reverts commit d8f32be5b6.
* Revert "gh-111089: Use PyUnicode_AsUTF8() in sqlite3 (#111122)"
This reverts commit 37e4e20eaa.
gh-106168: Update the size only after setting the item, to avoid temporary inconsistencies.
Also remove the "what's new" sentence regarding the size setting since tuples cannot grow after allocation.
Replace most of calls of _PyErr_WriteUnraisableMsg() and some
calls of PyErr_WriteUnraisable(NULL) with PyErr_FormatUnraisable().
Co-authored-by: Victor Stinner <vstinner@python.org>
Replace PyUnicode_AsUTF8AndSize() with PyUnicode_AsUTF8() to remove
the explicit check for embedded null characters.
The change avoids having to explicitly include <string.h> to get the
strlen() function when using a recent version of the limited C API.
This is partly to clear this stuff out of pystate.c, but also in preparation for moving some code out of _xxsubinterpretersmodule.c. This change also moves this stuff to the internal API (new: Include/internal/pycore_crossinterp.h). @vstinner did this previously and I undid it. Now I'm re-doing it. :/
Don't declare _PyMem_MimallocEnabled() if WITH_PYMALLOC macro is not
defined (./configure --without-pymalloc).
Fix also a typo in _PyInterpreterState_FinalizeAllocatedBlocks().
* Add mimalloc v2.12
Modified src/alloc.c to remove include of alloc-override.c and not
compile new handler.
Did not include the following files:
- include/mimalloc-new-delete.h
- include/mimalloc-override.h
- src/alloc-override-osx.c
- src/alloc-override.c
- src/static.c
- src/region.c
mimalloc is thread-safe and shares a single heap across all runtimes,
so finalization and counting globally allocated blocks across all
runtimes work differently.
* mimalloc: minimal changes for use in Python:
- remove debug spam for freeing large allocations
- use the same bytes (0xDD) for freed allocations in CPython and mimalloc;
  this is important for the test_capi debug memory tests
* Don't export mimalloc symbol in libpython.
* Enable mimalloc as Python allocator option.
* Add mimalloc MIT license.
* Log mimalloc in Lib/test/pythoninfo.py.
* Document new mimalloc support.
* Use macro defs for exports as done in:
https://github.com/python/cpython/pull/31164/
Co-authored-by: Sam Gross <colesbury@gmail.com>
Co-authored-by: Christian Heimes <christian@python.org>
Co-authored-by: Victor Stinner <vstinner@python.org>
Fixes #109894
* set `interp.static_objects.last_resort_memory_error.args` to an empty tuple to avoid a crash on `PyErr_Display()` call
* allow `_PyExc_InitGlobalObjects()` to be called on subinterpreter init
---------
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
* PyUnicode_AsUTF8() now raises an exception if the string contains
embedded null characters.
* Update related C API tests (test_capi.test_unicode).
* type_new_set_doc() uses PyUnicode_AsUTF8AndSize() to silently
truncate doc containing null bytes.
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Move the following private functions and structures to
pycore_modsupport.h internal C API:
* _PyArg_BadArgument()
* _PyArg_CheckPositional()
* _PyArg_NoKeywords()
* _PyArg_NoPositional()
* _PyArg_ParseStack()
* _PyArg_ParseStackAndKeywords()
* _PyArg_Parser structure
* _PyArg_UnpackKeywords()
* _PyArg_UnpackKeywordsWithVararg()
* _PyArg_UnpackStack()
* _Py_ANY_VARARGS()
Changes:
* Python/getargs.h now includes pycore_modsupport.h to export
functions.
* clinic.py now adds pycore_modsupport.h when one of these functions
is used.
* Add pycore_modsupport.h includes when a C extension uses one of
these functions.
* Define Py_BUILD_CORE_MODULE in C extensions which now include
directly or indirectly (via code generated by Argument Clinic)
pycore_modsupport.h:
* _csv
* _curses_panel
* _dbm
* _gdbm
* _multiprocessing.posixshmem
* _sqlite.row
* _statistics
* grp
* resource
* syslog
* _testcapi: bad_get() no longer uses METH_FASTCALL calling
convention but METH_VARARGS. Replace _PyArg_UnpackStack() with
PyArg_ParseTuple().
* _testcapi: add PYTESTCAPI_NEED_INTERNAL_API macro which is defined
by _testcapi sub-modules which need the internal C API
(pycore_modsupport.h): exceptions.c, float.c, vectorcall.c,
watchers.c.
* Remove Include/cpython/modsupport.h header file.
Include/modsupport.h no longer includes the removed header file.
* Fix mypy clinic.py