Move private debug _PyObject functions to the internal C API
(pycore_object.h):
* _PyDebugAllocatorStats()
* _PyObject_CheckConsistency()
* _PyObject_DebugTypeStats()
* _PyObject_IsFreed()
No longer export most of these functions; the exception is
_PyObject_IsFreed().
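As a rough sketch of what this looks like in pycore_object.h (the exact
signatures here are assumptions, not copied from the header), only
_PyObject_IsFreed() keeps the PyAPI_FUNC export so that shared-library test
modules can still link against it:

```c
/* Sketch of the internal declarations (illustrative signatures). */
#include <stdio.h>      /* FILE */

/* No longer exported: usable from core interpreter code only. */
extern void _PyDebugAllocatorStats(FILE *out, const char *block_name,
                                   int num_blocks, size_t sizeof_block);
extern int _PyObject_CheckConsistency(PyObject *op, int check_content);
extern void _PyObject_DebugTypeStats(FILE *out);

/* Still exported, so test extension modules can call it. */
PyAPI_FUNC(int) _PyObject_IsFreed(PyObject *op);
```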
Move the test functions that use _PyObject_IsFreed() from _testcapi to
_testinternalcapi. The check_pyobject_is_freed() test no longer catches
_testcapi.error: the tested function cannot raise _testcapi.error.
The risk of a race with this state is relatively low, but we play it safe anyway. However, we avoid taking the lock in performance-sensitive cases where the risk of a race is negligible.
This is strictly about moving the "obmalloc" runtime state from
`_PyRuntimeState` to `PyInterpreterState`. Doing so improves isolation
between interpreters, specifically most of the memory (incl. objects)
allocated for each interpreter's use. This is important for a
per-interpreter GIL, but such isolation is valuable even without it.
FWIW, a per-interpreter obmalloc is the proverbial
canary-in-the-coalmine when it comes to the isolation of objects between
interpreters. Any object that leaks (unintentionally) to another
interpreter is highly likely to cause a crash (on debug builds at
least). That's a useful thing to know, relative to interpreter
isolation.
Add `MS_WINDOWS_DESKTOP`, `MS_WINDOWS_APPS`, `MS_WINDOWS_SYSTEM` and `MS_WINDOWS_GAMES` preprocessor definitions to allow switching off functionality missing from particular API partitions ("partitions" are used in Windows to identify overlapping subsets of APIs).
CPython only officially supports `MS_WINDOWS_DESKTOP` and `MS_WINDOWS_SYSTEM` (APPS is included by normal desktop builds, but APPS without DESKTOP is not covered). Other configurations are a convenience for people building their own runtimes.
`MS_WINDOWS_GAMES` is for the Xbox subset of the Windows API, which is also available on client OS, but is restricted compared to `MS_WINDOWS_DESKTOP`. These restrictions may change over time, as they relate to the build headers rather than the OS support, and so we assume that Xbox builds will use the latest available version of the GDK.
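For illustration only (the guarded helper below is hypothetical, not a CPython function), code that depends on a desktop-only API can be compiled out on the narrower partitions:

```c
/* Hypothetical helper guarded on the new partition macros (from pyconfig.h). */
#if defined(MS_WINDOWS_DESKTOP) || defined(MS_WINDOWS_SYSTEM)
static int
my_supports_console(void)
{
    return 1;   /* full console/desktop APIs are available here */
}
#else
static int
my_supports_console(void)
{
    return 0;   /* e.g. MS_WINDOWS_GAMES without DESKTOP lacks them */
}
#endif
```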
The global allocators were stored in 3 static global variables: _PyMem_Raw, _PyMem, and _PyObject. State for the "small block" allocator was stored in another 13. That makes a total of 16 global variables. We are moving all 16 to the _PyRuntimeState struct as part of the work for gh-81057. (If PEP 684 is accepted then we will follow up by moving them all to PyInterpreterState.)
https://github.com/python/cpython/issues/81057
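A heavily simplified sketch of the consolidation; the struct and field names below are illustrative stand-ins, not the identifiers actually used in the runtime headers:

```c
#include <Python.h>

/* Before: ~16 file-level statics in Objects/obmalloc.c, for example */
static PyMemAllocatorEx raw_allocator_example;   /* "raw" domain */
static PyMemAllocatorEx mem_allocator_example;   /* "mem" domain */
static PyMemAllocatorEx obj_allocator_example;   /* "object" domain */
/* ...plus 13 statics holding the small-block ("pymalloc") state... */

/* After: the same state grouped into a struct embedded in the runtime. */
struct example_pymem_state {
    PyMemAllocatorEx raw;
    PyMemAllocatorEx mem;
    PyMemAllocatorEx obj;
    /* small-block allocator state: arenas, usedpools, counters, ... */
};

struct example_runtime_state {     /* stand-in for _PyRuntimeState */
    struct example_pymem_state allocators;
};
```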
Move _Py_GetAllocatedBlocks() and _PyObject_DebugMallocStats()
declarations to pycore_pymem.h. These functions are related to memory
allocators, not to the PyObject structure.
MAP_BOT_LENGTH was incorrectly used to compute MAP_TOP_MASK instead of
MAP_TOP_LENGTH. On 64-bit machines, the error causes the tree to hold
46 bits of virtual addresses, rather than the intended 48 bits.
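Schematically, the fix is a one-constant change (the surrounding macro layout in obmalloc.c may differ):

```c
/* Buggy: the top-level mask was built from the bottom level's length,
 * losing two bits of the 48-bit virtual address space. */
/* #define MAP_TOP_MASK  (MAP_BOT_LENGTH - 1) */

/* Fixed: build the mask from the top level's own length. */
#define MAP_TOP_MASK  (MAP_TOP_LENGTH - 1)
```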
* Remove 'zombie' frames. We won't need them once we are allocating fixed-size frames.
* Add co_nlocalsplus field to code object to avoid recomputing the size of locals + frees + cells (see the sketch after this list).
* Move locals, cells and freevars out of frame object into separate memory buffer.
* Use per-threadstate allocated memory chunks for local variables.
* Move globals and builtins from frame object to per-thread stack.
* Move the (slow) locals from the frame object to the per-thread stack.
* Move internal frame functions to internal header.
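The cached quantity from the second bullet is just the sum of the three counts; a trivial sketch (the function name and parameters here are illustrative):

```c
/* "localsplus" = plain locals + cell variables + free variables.
 * co_nlocalsplus stores this once so frame setup doesn't recompute it. */
static int
compute_nlocalsplus(int nlocals, int ncellvars, int nfreevars)
{
    return nlocals + ncellvars + nfreevars;
}
```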
When printing stats, move radix tree info to its own section.
Restore the invariant that the breakdown of bytes in arenas exactly accounts for the total number of arena bytes allocated.
Add an assert so that invariant doesn't break again.
The radix tree approach is a relatively simple and memory sanitary
alternative to the old (slightly) unsanitary address_in_range().
To disable the radix tree map, set a preprocessor flag as follows:
-DWITH_PYMALLOC_RADIX_TREE=0.
Co-authored-by: Tim Peters <tim.peters@gmail.com>
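Inside obmalloc.c the flag then selects which address_in_range() implementation is compiled; a sketch of the guard, not the exact code:

```c
/* The radix tree is the default; build with -DWITH_PYMALLOC_RADIX_TREE=0
 * to fall back to the old scheme. */
#ifndef WITH_PYMALLOC_RADIX_TREE
#  define WITH_PYMALLOC_RADIX_TREE 1
#endif

#if WITH_PYMALLOC_RADIX_TREE
/* radix-tree based address_in_range(): never reads uninitialized memory */
#else
/* legacy address_in_range(): may read memory pymalloc doesn't own */
#endif
```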
PEP 353, written in 2005, introduced PY_FORMAT_SIZE_T. Python no
longer supports macOS 10.4 and Visual Studio 2010, and requires more
recent macOS and Visual Studio versions. In 2020, with Python 3.10, it
is now safe to use "%zu" directly to format size_t and "%zi" to
format Py_ssize_t.
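For example (the message text below is made up), a printf-family call that used the macro can now spell the length modifier directly:

```c
#include <stdio.h>
#include <Python.h>

static void
report_usage(size_t nbytes, Py_ssize_t nitems)
{
    /* Before:
     *   fprintf(stderr, "%" PY_FORMAT_SIZE_T "u bytes in %"
     *           PY_FORMAT_SIZE_T "d items\n", nbytes, nitems);
     */
    fprintf(stderr, "%zu bytes in %zi items\n", nbytes, nitems);
}
```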
The Py_FatalError() function is replaced with a macro which
automatically logs the name of the current function, unless the
Py_LIMITED_API macro is defined.
Changes:
* Add _Py_FatalErrorFunc() function.
* Remove the function name from the message of Py_FatalError() calls
which included the function name.
* Update tests.
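The macro forwards the compiler-provided function name, roughly like this (the exact header text may differ slightly):

```c
/* Outside the limited API, Py_FatalError() logs the calling function. */
#ifndef Py_LIMITED_API
#  define Py_FatalError(message) _Py_FatalErrorFunc(__func__, (message))
#endif
```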
bpo-3605, bpo-38733: Optimize _PyErr_Occurred(): remove "tstate ==
NULL" test.
Py_FatalError() no longer calls PyErr_Occurred() if called without
holding the GIL, so PyErr_Occurred() no longer has to support the
tstate==NULL case.
_Py_CheckFunctionResult(): use _PyErr_Occurred() directly to avoid an
explicit "!= NULL" test.
pymalloc_alloc() now returns the pointer directly, and returns NULL on
memory allocation error.
allocate_from_new_pool() already uses NULL as the marker for
"allocation failed".
PyObject_Malloc() and PyObject_Free() partially inline pymalloc_alloc
and pymalloc_free.
But when PGO is not used, the compiler doesn't know which parts of
pymalloc_alloc and pymalloc_free are hot.
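The resulting fast path looks roughly like this (a simplified sketch of code internal to obmalloc.c; the wrapper name is illustrative and error bookkeeping is omitted):

```c
/* pymalloc_alloc() returns the block pointer, or NULL when pymalloc cannot
 * serve the request; the caller inlines this check and falls back to the
 * raw allocator on NULL. */
static void *
my_object_malloc(void *ctx, size_t nbytes)
{
    void *ptr = pymalloc_alloc(ctx, nbytes);   /* hot, partially inlined */
    if (ptr != NULL) {
        return ptr;
    }
    return PyMem_RawMalloc(nbytes);            /* cold fallback path */
}
```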
Keeping an account of allocated blocks slows down _PyObject_Malloc()
and _PyObject_Free() by a measurable amount. Instead, have
_Py_GetAllocatedBlocks() iterate over the arenas to sum up the
allocated blocks for pymalloc.
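Conceptually, the replacement walks every allocated arena and sums its pools' block counts; a simplified sketch (the arena bookkeeping names below are placeholders for the real obmalloc.c structures):

```c
Py_ssize_t
my_get_allocated_blocks(void)
{
    Py_ssize_t n = 0;
    for (unsigned int i = 0; i < maxarenas; i++) {
        if (arenas[i].address == 0) {
            continue;                        /* unused arena slot */
        }
        /* Sum the number of currently allocated blocks in each pool. */
        n += count_blocks_in_arena(&arenas[i]);
    }
    return n;
}
```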
GH-14039: allow (no more than) one wholly empty arena on the usable_arenas list.
This prevents thrashing in some easily-provoked simple cases that could end up creating and destroying an arena on each loop iteration in client code. Intuitively, if the only arena on the list becomes empty, it makes scant sense to give it back to the system unless we know we'll never need another free pool again before another arena frees a pool. If the latter obtains, then - yes - this will "waste" an arena.
This adds a vector of "search fingers" so that usable_arenas can be kept in sorted order (by number of free pools) via constant-time operations instead of linear search.
This should reduce worst-case time for reclaiming a great many objects from O(A**2) to O(A), where A is the number of arenas. See bpo-37029.
* Add PyMemAllocatorName enum (see the sketch after this list)
* _PyPreConfig.allocator type becomes PyMemAllocatorName, instead of
char*
* Remove _PyPreConfig_Clear()
* Add _PyMem_GetAllocatorName()
* Rename _PyMem_GetAllocatorsName() to
_PyMem_GetCurrentAllocatorName()
* Remove _PyPreConfig_SetAllocator(): just call
  _PyMem_SetupAllocators() directly; we don't have to reallocate the
  configuration with the new allocator anymore!
* _PyPreConfig_Write() parameter becomes const, as it should be in
the first place!
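The enum looks roughly like this (the exact member list as of that change is an assumption on my part):

```c
typedef enum {
    PYMEM_ALLOCATOR_NOT_SET = 0,
    PYMEM_ALLOCATOR_DEFAULT = 1,
    PYMEM_ALLOCATOR_DEBUG = 2,
    PYMEM_ALLOCATOR_MALLOC = 3,
    PYMEM_ALLOCATOR_MALLOC_DEBUG = 4,
#ifdef WITH_PYMALLOC
    PYMEM_ALLOCATOR_PYMALLOC = 5,
    PYMEM_ALLOCATOR_PYMALLOC_DEBUG = 6,
#endif
} PyMemAllocatorName;
```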
Omit serialno field from debug hooks on Python memory allocators to
reduce the memory footprint by 5%.
When a fatal memory error is logged, enable tracemalloc to get the
traceback of where the memory block was allocated, in order to decide
where to put a breakpoint.
Compile Python with PYMEM_DEBUG_SERIALNO defined to get back the
field.
Modify the CLEANBYTE, DEADBYTE and FORBIDDENBYTE constants: use 0xCD,
0xDD and 0xFD, rather than 0xCB, 0xBB and 0xFB, to match the byte
patterns used by the Windows CRT debug malloc() and free().
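Schematically (the comments are mine), the new values line up with the MSVC debug-heap fill bytes:

```c
#define CLEANBYTE      0xCD   /* clean (never written) memory; was 0xCB */
#define DEADBYTE       0xDD   /* dead (newly freed) memory;    was 0xBB */
#define FORBIDDENBYTE  0xFD   /* guard bytes around a block;   was 0xFB */
```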
Replace the _PyMem_IsFreed() function with a _PyMem_IsPtrFreed() inline
function. The function is now far more efficient: it became a simple
integer comparison rather than a short loop. It also detects
uninitialized bytes and "forbidden bytes" filled by the debug hooks
on memory allocators.
Add unit tests on _PyObject_IsFreed().
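The idea behind the new inline check, sketched with an illustrative name and assuming 64-bit pointers: a pointer read from freed or uninitialized memory is either NULL or made up entirely of one of the debug fill bytes, so a handful of integer comparisons replace the old byte-by-byte loop.

```c
#include <stdint.h>

static inline int
my_is_ptr_freed(const void *ptr)
{
    uintptr_t value = (uintptr_t)ptr;
    return (value == 0
            || value == (uintptr_t)0xCDCDCDCDCDCDCDCDULL   /* CLEANBYTE */
            || value == (uintptr_t)0xDDDDDDDDDDDDDDDDULL   /* DEADBYTE */
            || value == (uintptr_t)0xFDFDFDFDFDFDFDFDULL); /* FORBIDDENBYTE */
}
```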
* _PyPreConfig_Write() now reallocates the pre-configuration with the
new memory allocator.
* It is no longer necessary to force the "default raw memory allocator"
to clear the pre-configuration and core configuration. Simplify the
code.
* _PyPreConfig_Write() now does nothing if called after
Py_Initialize(): it no longer checks whether the allocator is the same.
* Remove _PyMem_GetDebugAllocatorsName(): dev mode sets the allocator
to "debug" again.
The development mode now uses the effective name of the debug memory
allocator ("pymalloc_debug" or "malloc_debug"). So the name doesn't
change after setting the memory allocator.
This function may access memory which is mapped but is considered
free by the libc allocator. It behaves this way by design, so we
need to suppress sanitizer reports.
GCC doesn't support MSan, so only TSan is disabled for it.
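One way the suppression can be spelled per compiler (a sketch of the attribute plumbing, not the exact macros used in obmalloc.c):

```c
/* Clang understands no_sanitize("thread") and no_sanitize("memory");
 * GCC has no MemorySanitizer, so only its thread-sanitizer attribute
 * is applied there. */
#if defined(__clang__)
#  define MY_NO_SANITIZE __attribute__((no_sanitize("thread", "memory")))
#elif defined(__GNUC__)
#  define MY_NO_SANITIZE __attribute__((no_sanitize_thread))
#else
#  define MY_NO_SANITIZE
#endif

/* Applied to the function that may read memory libc considers free. */
MY_NO_SANITIZE static int
example_address_in_range(const void *p)
{
    (void)p;
    /* ...reads possibly-unowned but still-mapped memory by design... */
    return 0;
}
```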