PyTime_t no longer uses an arbitrary unit; it is always a number of
nanoseconds (a 64-bit signed integer).
* Rename _PyTime_FromNanosecondsObject() to _PyTime_FromLong().
* Rename _PyTime_AsNanosecondsObject() to _PyTime_AsLong().
* Remove pytime_from_nanoseconds().
* Remove pytime_as_nanoseconds().
* Remove _PyTime_FromNanoseconds().
Remove references to the old names _PyTime_MIN
and _PyTime_MAX, now that PyTime_MIN and
PyTime_MAX are public.
Also replace _PyTime_MIN with PyTime_MIN.
This adds `_PyMem_FreeDelayed()` and supporting functions. The
`_PyMem_FreeDelayed()` function frees memory with the same allocator as
`PyMem_Free()`, but after some delay to ensure that concurrent lock-free
readers have finished.
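A minimal sketch of the intended pattern (the surrounding structure and
field names are hypothetical):
```
/* Readers may traverse self->items without holding a lock. */
PyObject **old = self->items;     /* hypothetical lock-free-read array */
self->items = new_items;          /* publish the replacement first */
_PyMem_FreeDelayed(old);          /* freed once concurrent readers finish */
```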
The <pycore_time.h> include is no longer needed to get the PyTime_t type
in internal header files; that type is now provided by the <Python.h>
include. Add <pycore_time.h> includes to C files instead.
This avoids filling the memory occupied by ob_tid, ob_ref_local, and
ob_ref_shared with debug bytes (e.g., 0xDD) in mimalloc in the
free-threaded build.
This change adds an `eval_breaker` field to `PyThreadState`. The primary
motivation is for performance in free-threaded builds: with thread-local eval
breakers, we can stop a specific thread (e.g., for an async exception) without
interrupting other threads.
The source of truth for the global instrumentation version is stored in the
`instrumentation_version` field in PyInterpreterState. Threads usually read the
version from their local `eval_breaker`, where it continues to be colocated
with the eval breaker bits.
This adds a safe memory reclamation scheme based on FreeBSD's "GUS" and
quiescent state based reclamation (QSBR). The API provides a mechanism
for callers to detect when it is safe to free memory that may be
concurrently accessed by readers.
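A conceptual sketch of the QSBR pattern (function names are modeled on
the internal API; treat the exact spellings and the deferred-free queue
as assumptions):
```
/* Writer retires memory under a new epoch ("goal"). */
uint64_t goal = _Py_qsbr_advance(shared);   /* bump the global write epoch */
defer_free(ptr, goal);                      /* hypothetical deferred-free queue */

/* Each reader periodically reports a safe point. */
_Py_qsbr_quiescent_state(qsbr);

/* Free only once every reader has passed the goal epoch. */
if (_Py_qsbr_poll(qsbr, goal)) {
    PyMem_Free(ptr);
}
```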
The GC keeps track of the number of allocations (minus deallocations)
since the last GC. This change buffers the count in thread-local state
and uses atomic operations to modify the per-interpreter count. The
thread-local buffering avoids contention on shared state.
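A conceptual sketch of the buffering (the names and the flush threshold
are hypothetical):
```
#define LOCAL_GC_BUFFER 10                 /* illustrative flush threshold */

static void
note_allocation(ThreadState *ts, InterpState *interp)
{
    if (++ts->gc_alloc_buffer >= LOCAL_GC_BUFFER) {
        /* one atomic add instead of one per allocation */
        _Py_atomic_add_ssize(&interp->gc_alloc_count, ts->gc_alloc_buffer);
        ts->gc_alloc_buffer = 0;
    }
}
```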
A consequence is that the GC scheduling is not as precise, so
"test_sneaky_frame_object" is skipped because it requires that the GC be
run exactly after allocating a frame object.
Makes _PyType_Lookup thread safe, including:
* thread safety of the underlying cache;
* thread-safe mutation of the MRO and type members.
Also, _PyType_GetMRO and _PyType_GetBases currently return borrowed references, which aren't safe.
This adds `Py_XBEGIN_CRITICAL_SECTION` and
`Py_XEND_CRITICAL_SECTION`, which accept a possibly NULL object as an
argument. If the argument is NULL, then nothing is locked or unlocked.
Otherwise, they behave like `Py_BEGIN/END_CRITICAL_SECTION`.
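For example (a sketch; `ob` is any possibly-NULL object pointer):
```
Py_XBEGIN_CRITICAL_SECTION(ob);
if (ob != NULL) {
    /* mutate ob under its per-object lock */
}
Py_XEND_CRITICAL_SECTION();
```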
Use critical sections to make deque methods that operate on mutable
state thread-safe when the GIL is disabled. This is mostly accomplished
by using the @critical_section Argument Clinic directive, though there
are a few places where this was not possible and critical sections had
to be manually acquired/released.
The free-threaded GC uses mimalloc's segment thread IDs to restore
the overwritten `ob_tid` thread IDs in PyObjects. For that reason, it's
important that PyObjects and mimalloc use the same identifiers.
These are intended to be used in places where atomics are required in
free-threaded builds but not in the default build. We don't want to
introduce the potential performance overhead of an atomic operation in the
default build.
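A sketch of the wrapper pattern (the macro name here mirrors the style of
these helpers but is an assumption):
```
#ifdef Py_GIL_DISABLED
#  define FT_ATOMIC_STORE_SSIZE_RELAXED(dst, value) \
       _Py_atomic_store_ssize_relaxed(&(dst), (value))
#else
#  define FT_ATOMIC_STORE_SSIZE_RELAXED(dst, value) (dst) = (value)
#endif
```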
For the most part, these changes make it substantially easier to backport subinterpreter-related code to 3.12, especially the related modules (e.g. _xxsubinterpreters). The main motivation is to support releasing a PyPI package with the 3.13 capabilities compiled for 3.12.
A lot of the changes here involve either hiding details behind macros/functions or splitting up some files.
We add _winapi.BatchedWaitForMultipleObjects to wait for larger numbers of handles.
_winapi is an internal module, hence undocumented, and should be used with
caution; check the docstring for info before using BatchedWaitForMultipleObjects.
Biased reference counting maintains two refcount fields in each object:
`ob_ref_local` and `ob_ref_shared`. The true refcount is the sum of these two
fields. In some cases, when refcounting operations are split across threads,
the ob_ref_shared field can be negative (although the total refcount must be
at least zero). In this case, the thread that decremented the refcount
requests that the owning thread give up ownership and merge the refcount
fields.
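Illustratively, the true count is the sum of the two fields (this sketch
ignores the flag bits that CPython packs into ob_ref_shared):
```
static Py_ssize_t
true_refcount(PyObject *op)
{
    /* ob_ref_local is owned by the creating thread; ob_ref_shared may be
       modified (and driven negative) by other threads */
    return (Py_ssize_t)op->ob_ref_local + op->ob_ref_shared;
}
```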
Starts adding thread safety to dict objects:
* Use @critical_section for APIs which are exposed via Argument Clinic and don't directly correlate with a public C API which needs to acquire the lock.
* Use a _lock_held suffix to keep changes to complicated functions simple, just wrapping them with a critical section.
* Acquire and release the lock in an existing function where it won't be overly disruptive to the existing logic.
This marks dead ThreadHandles as non-joinable earlier in
`PyOS_AfterFork_Child()` before we execute any Python code. The handles
are stored in a global linked list in `_PyRuntimeState` because `fork()`
affects the entire process.
Add optional 'filter' parameter to iterdump() that allows a "LIKE"
pattern for filtering database objects to dump.
Co-authored-by: Erlend E. Aasland <erlend@python.org>
* gh-112529: Remove PyGC_Head from object pre-header in free-threaded build
This avoids allocating space for PyGC_Head in the free-threaded build.
The GC implementation for free-threaded CPython does not use the
PyGC_Head structure.
* The trashcan mechanism uses the `ob_tid` field instead of `_gc_prev`
in the free-threaded build.
* The GDB libpython.py file now determines the offset of the managed
dict field based on whether the running process is a free-threaded
build. Those are identified by the `ob_ref_local` field in PyObject.
* Fixes `_PySys_GetSizeOf()`, which incorrectly included the size of
  `PyGC_Head` in the size of static `PyTypeObject`.
Add an option (--enable-experimental-jit for configure-based builds
or --experimental-jit for PCbuild-based ones) to build an
*experimental* just-in-time compiler, based on copy-and-patch (https://fredrikbk.com/publications/copy-and-patch.pdf).
See Tools/jit/README.md for more information on how to install the required build-time tooling.
For interpreters that share state with the main interpreter, this points
to the same static memory structure. For interpreters with their own
obmalloc state, it is heap allocated. Add free_obmalloc_arenas() which
will free the obmalloc arenas and radix tree structures for interpreters
with their own obmalloc state.
Co-authored-by: Eric Snow <ericsnowcurrently@gmail.com>
* gh-112529: Implement GC for free-threaded builds
This implements a mark and sweep GC for the free-threaded builds of
CPython. The implementation relies on mimalloc to find GC tracked
objects (i.e., "containers").
* Bring in a subset of biased reference counting:
https://github.com/colesbury/nogil/commit/b6b12a9a94e
The NoGIL branch has functions for attempting an incref on an object which may or may not be in flight. This just brings those functions over so that they are usable from the dict implementation to get items w/o holding a lock.
There's a handful of small, simple modifications:
* Add inline to the force-inline functions to avoid a warning, and switch from _Py_ALWAYS_INLINE to Py_ALWAYS_INLINE, as that's available.
* Remove _Py_REF_LOCAL_SHIFT, as it doesn't exist yet (and is currently 0 in the 3.12 nogil branch anyway).
* ob_ref_shared is currently Py_ssize_t and not uint32_t, so use that.
* _PY_LIKELY doesn't exist, so drop it.
* _Py_ThreadLocal becomes _Py_IsOwnedByCurrentThread.
* Add '_PyInterpreterState_GET()' to _Py_IncRefTotal calls.
Co-Authored-By: Sam Gross <colesbury@gmail.com>
The `--disable-gil` builds occasionally need to pause all but one thread. Some
examples include:
* Cyclic garbage collection, where this is often called a "stop the world event"
* Before calling `fork()`, to ensure a consistent state for internal data structures
* During interpreter shutdown, to ensure that daemon threads aren't accessing Python objects
This adds the following functions to implement global and per-interpreter pauses:
* `_PyEval_StopTheWorldAll()` and `_PyEval_StartTheWorldAll()` (for the global runtime)
* `_PyEval_StopTheWorld()` and `_PyEval_StartTheWorld()` (per-interpreter)
(The function names may change.)
These functions are no-ops outside of the `--disable-gil` build.
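A sketch of the per-interpreter pair (as noted above, the names may
change):
```
_PyEval_StopTheWorld(interp);
/* all other threads attached to interp are paused here, e.g. so that
   fork() sees consistent internal data structures */
_PyEval_StartTheWorld(interp);
```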
This adds support for visiting abandoned pages in mimalloc and improves
the performance of the page visiting code. Abandoned pages contain
memory blocks from threads that have exited. At some point, they may be
later reclaimed by other threads. We still need to visit those pages in
the free-threaded GC because they contain live objects.
This also reduces the overhead of visiting mimalloc pages:
* Special-case full pages, empty pages, and pages containing only a
  single block.
* Fix free_map to use one bit instead of one byte per block.
* Use a fast integer-division-by-a-constant algorithm when computing the
  block offset from the block size and index.
* gh-112529: Use GC heaps for GC allocations in free-threaded builds
The free-threaded build's garbage collector implementation will need to
find GC objects by traversing mimalloc heaps. This hooks up the
allocation calls with the correct heaps by using a thread-local
"current_obj_heap" variable.
* Refactor out setting heap based on type
* gh-112529: Track if debug allocator is used as underlying allocator
The GC implementation for free-threaded builds will need to accurately
detect if the debug allocator is used because it affects the offset of
the Python object from the beginning of the memory allocation. The
current implementation of `_PyMem_DebugEnabled` only considers if the
debug allocator is the outer-most allocator; it doesn't handle the case
of "hooks" like tracemalloc being used on top of the debug allocator.
This change enables more accurate detection of the debug allocator by
tracking when debug hooks are enabled.
* Simplify _PyMem_DebugEnabled
This splits part of Modules/gcmodule.c into Python/gc.c, which now
contains the core garbage collection implementation. The Python
module remains in Modules/gcmodule.c.
* gh-112532: Tag mimalloc heaps and pages
Mimalloc pages are data structures that contain contiguous allocations
of the same block size. Note that they are distinct from operating
system pages. Mimalloc pages are contained in segments.
When a thread exits, it abandons any segments and contained pages that
have live allocations. These segments and pages may be later reclaimed
by another thread. To support GC and certain thread-safety guarantees in
free-threaded builds, we want pages to only be reclaimed by the
corresponding heap in the claimant thread. For example, we want pages
containing GC objects to only be claimed by GC heaps.
This allows heaps and pages to be tagged with an integer tag that is
used to ensure that abandoned pages are only claimed by heaps with the
same tag. Heaps can be initialized with a tag (0-15); any page allocated
by that heap copies the corresponding tag.
* Fix conversion warning
* gh-112532: Isolate abandoned segments by interpreter
Mimalloc segments are data structures that contain memory allocations along
with metadata. Each segment is "owned" by a thread. When a thread exits,
it abandons its segments to a global pool to be later reclaimed by other
threads. This changes the pool to be per-interpreter instead of process-wide.
This will be important for when we use mimalloc to find GC objects in the
`--disable-gil` builds. We want heaps to only store Python objects from a
single interpreter. Absent this change, the abandoning and reclaiming process
could break this isolation.
* Add missing '&_mi_abandoned_default' to 'tld_empty'
* gh-112532: Use separate mimalloc heaps for GC objects
In `--disable-gil` builds, we now use four separate heaps in
anticipation of using mimalloc to find GC objects when the GIL is
disabled. To support this, we also make a few changes to mimalloc:
* `mi_heap_t` and `mi_tld_t` initialization is split from allocation.
This allows us to have a `mi_tld_t` per-`PyThreadState`, which is
important to keep interpreter isolation, since the same OS thread may
run in multiple interpreters (using different PyThreadStates.)
* Heap abandoning (mi_heap_collect_ex) can now be called from a
different thread than the one that created the heap. This is necessary
because we may clear and delete the containing PyThreadStates from a
different thread during finalization and after fork().
* Use enum instead of defines and guard mimalloc includes.
* The enum typedef will be convenient for future PRs that use the type.
* Guarding the mimalloc includes allows us to unconditionally include
pycore_mimalloc.h from other header files that rely on things like
`struct _mimalloc_thread_state`.
* Only define _mimalloc_thread_state in Py_GIL_DISABLED builds
We need the TracebackException of uncaught exceptions for a single purpose: the error display. Thus we only need to pass the formatted error display between interpreters. Passing a pickled TracebackException is overkill.
The `PyThreadState_Clear()` function must only be called with the GIL
held and must be called from the same interpreter as the passed in
thread state. Otherwise, any Python objects on the thread state may be
destroyed using the wrong interpreter, leading to memory corruption.
This is also important for `Py_GIL_DISABLED` builds because free lists
will be associated with PyThreadStates and cleared in
`PyThreadState_Clear()`.
This fixes two places that called `PyThreadState_Clear()` from the wrong
interpreter and adds an assertion to `PyThreadState_Clear()`.
When an exception is uncaught in Interpreter.exec_sync(), it helps to show that exception's error display if uncaught in the calling interpreter. We do so here by generating a TracebackException in the subinterpreter and passing it between interpreters using pickle.
This replaces some usages of PyThread_type_lock with PyMutex, which does not require memory allocation to initialize.
This simplifies some of the runtime initialization and is also one step towards avoiding changing the default raw memory allocator during initialize/finalization, which can be non-thread-safe in some circumstances.
Every PyThreadState instance is now actually a _PyThreadStateImpl.
It is safe to cast from `PyThreadState*` to `_PyThreadStateImpl*` and back.
The _PyThreadStateImpl will contain fields that we do not want to expose
in the public C API.
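A sketch of the documented casts (assuming the public PyThreadState is
embedded as the first member, called `base` here):
```
_PyThreadStateImpl *impl = (_PyThreadStateImpl *)tstate;  /* downcast */
PyThreadState *ts = &impl->base;                          /* and back */
```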
This updates `dtoa.c` to avoid using the Bigint free-list in --disable-gil builds and
to pre-compute the needed powers of 5 during interpreter initialization.
* gh-111962: Make dtoa thread-safe in `--disable-gil` builds.
This avoids using the Bigint free-list in `--disable-gil` builds
and pre-computes the needed powers of 5 during interpreter initialization.
* Fix size of cached powers of 5 array.
We need the powers of 5 up to 5**512 because we only jump straight to
underflow when the exponent is less than -512 (or larger than 308).
* Rename Py_NOGIL to Py_GIL_DISABLED
* Changes from review
* Fix assertion placement
* Implement _Py_HashPointerRaw() as a static inline function.
* Add Py_HashPointer() tests to test_capi.test_hash.
* Keep _Py_HashPointer() function as an alias to Py_HashPointer().
Use a fraction internally in the _PyTime API to reduce the risk of
integer overflow: simplify the fraction using the greatest common
divisor (GCD). The fraction API is used by the time functions
perf_counter(), monotonic(), and process_time().
For example, QueryPerformanceFrequency() usually returns 10 MHz on
Windows 10 and newer. The fraction SEC_TO_NS / frequency =
1_000_000_000 / 10_000_000 can be simplified to 100 / 1.
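A self-contained sketch of the simplification in the example above:
```
#include <stdint.h>

/* Euclid's algorithm, used to simplify the conversion fraction. */
static int64_t gcd64(int64_t a, int64_t b) {
    while (b != 0) { int64_t t = a % b; a = b; b = t; }
    return a;
}

static void simplify(int64_t *numer, int64_t *denom) {
    int64_t g = gcd64(*numer, *denom);  /* gcd(1000000000, 10000000) == 10000000 */
    *numer /= g;                        /* -> 100 */
    *denom /= g;                        /* -> 1   */
}
```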
* Add _PyTimeFraction type.
* Add functions:
* _PyTimeFraction_Set()
* _PyTimeFraction_Mul()
* _PyTimeFraction_Resolution()
* No longer check "numer * denom <= _PyTime_MAX" in
_PyTimeFraction_Set(). _PyTimeFraction_Mul() uses _PyTime_Mul()
which handles integer overflow.
* Move _PyRuntimeState.time to _posixstate.ticks_per_second and
time_module_state.ticks_per_second.
* Add time_module_state.clocks_per_second.
* Rename _PyTime_GetClockWithInfo() to py_clock().
* Rename _PyTime_GetProcessTimeWithInfo() to py_process_time().
* Add process_time_times() helper function, called by
py_process_time().
* os.times() is now always built: no longer rely on HAVE_TIMES.
Add support for TLS-PSK (pre-shared key) to the ssl module.
---------
Co-authored-by: Oleg Iarygin <oleg@arhadthedev.net>
Co-authored-by: Gregory P. Smith <greg@krypto.org>
This makes the Tier 2 interpreter a little faster.
I measured about 3%,
though I hesitate to claim an exact number.
This starts by doubling the trace size limit (to 512),
making it more likely that loops fit in a trace.
The rest of the approach is to only load
`oparg` and `operand` in cases that use them.
The code generator knows when these are used.
For `oparg`, it will conditionally emit
```
oparg = CURRENT_OPARG();
```
at the top of the case block.
(The `oparg` variable may be referenced multiple times
by the instruction's code block, so it must be in a variable.)
For `operand`, it will use `CURRENT_OPERAND()` directly
instead of referencing the `operand` variable,
which no longer exists.
(There is only one place where this will be used.)
This uses the new mechanism whereby certain uops
are replaced by others during translation,
using the `_PyUop_Replacements` table.
We further special-case the `_FOR_ITER_TIER_TWO` uop
to update the deoptimization target to point
just past the corresponding `END_FOR` opcode.
Two tiny code cleanups are also part of this PR.
- Double max trace size to 256
- Add a dependency on executor_cases.c.h for ceval.o
- Mark `_SPECIALIZE_UNPACK_SEQUENCE` as `TIER_ONE_ONLY`
- Add debug output back showing the optimized trace
- Bunch of cleanups to Tools/cases_generator/
* Replace jumps with deopts in tier 2
* Fewer special cases of uop names
* Add target field to uop IR
* Remove more redundant SET_IP and _CHECK_VALIDITY micro-ops
* Extend whitelist of non-escaping API functions.
_PyDict_Pop_KnownHash(): remove the default value; the return type
becomes int.
Co-authored-by: Stefan Behnel <stefan_ml@behnel.de>
Co-authored-by: Antoine Pitrou <pitrou@free.fr>
Critical sections are helpers to replace the global interpreter lock
with finer grained locking. They provide similar guarantees to the GIL
and avoid the deadlock risk that plain locking involves. Critical
sections are implicitly ended whenever the GIL would be released. They
are resumed when the GIL would be acquired. Nested critical sections
behave as if the sections were interleaved.
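For example (a sketch; the macros are internal-only):
```
Py_BEGIN_CRITICAL_SECTION(obj);
/* obj's lock is held; if code here would release the GIL (or block on
   another critical section), this section is implicitly suspended and
   later resumed, so atomicity must not be assumed across such points */
Py_END_CRITICAL_SECTION();
```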
I added _Py_excinfo to the internal API (and added its functions in Python/errors.c) in gh-111530 (9322ce9). Since then I've had a nagging sense that I should have added the type and functions in its own PR. While I do plan on using _Py_excinfo outside crossinterp.c very soon (see gh-111572/gh-111573), I'd still feel more comfortable if the _Py_excinfo stuff went in as its own PR. Hence, here we are.
(FWIW, I may combine that with gh-111572, which I may, in turn, combine with gh-111573. We'll see.)
Joining a thread now ensures the underlying OS thread has exited. This is required for safer fork() in multi-threaded processes.
---------
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
gh-106168: Update the size only after setting the item, to avoid temporary inconsistencies.
Also remove the "what's new" sentence regarding the size setting since tuples cannot grow after allocation.
This moves several general internal APIs out of _xxsubinterpretersmodule.c and into the new Python/crossinterp.c (and the corresponding internal headers).
Specifically:
* _Py_excinfo, etc.: the initial implementation for non-object exception snapshots (in pycore_pyerrors.h and Python/errors.c)
* _PyXI_exception_info, etc.: helpers for passing an exception between interpreters (wraps _Py_excinfo)
* _PyXI_namespace, etc.: helpers for copying a dict of attrs between interpreters
* _PyXI_Enter(), _PyXI_Exit(): functions that abstract out the transitions between one interpreter and a second that will do some work temporarily
Again, these were all abstracted out of _xxsubinterpretersmodule.c as generalizations. I plan on proposing these as public API at some point.
This is partly to clear this stuff out of pystate.c, but also in preparation for moving some code out of _xxsubinterpretersmodule.c. This change also moves this stuff to the internal API (new: Include/internal/pycore_crossinterp.h). @vstinner did this previously and I undid it. Now I'm re-doing it. :/
mi_atomic_load_explicit() casts the 'p' argument to drop the 'const'
qualifier on the Windows arm64 platform. Fix the compiler warning:
'function': different 'const' qualifiers
(compiling source file ..\Objects\mimalloc\options.c)
* Add mimalloc v2.12
Modified src/alloc.c to remove include of alloc-override.c and not
compile new handler.
Did not include the following files:
- include/mimalloc-new-delete.h
- include/mimalloc-override.h
- src/alloc-override-osx.c
- src/alloc-override.c
- src/static.c
- src/region.c
mimalloc is thread safe and shares a single heap across all runtimes;
therefore, finalization and getting globally allocated blocks across all
runtimes are handled differently.
* mimalloc: minimal changes for use in Python:
- remove debug spam for freeing large allocations
- use same bytes (0xDD) for freed allocations in CPython and mimalloc
This is important for the test_capi debug memory tests
* Don't export mimalloc symbol in libpython.
* Enable mimalloc as Python allocator option.
* Add mimalloc MIT license.
* Log mimalloc in Lib/test/pythoninfo.py.
* Document new mimalloc support.
* Use macro defs for exports as done in:
https://github.com/python/cpython/pull/31164/
Co-authored-by: Sam Gross <colesbury@gmail.com>
Co-authored-by: Christian Heimes <christian@python.org>
Co-authored-by: Victor Stinner <vstinner@python.org>
* gh-106320: Re-add _PyLong_FromByteArray(), _PyLong_AsByteArray() and _PyLong_GCD() to the public header files since they are used by third-party packages and there is no efficient replacement.
See https://github.com/python/cpython/issues/111140
See https://github.com/python/cpython/issues/111139
* gh-111262: Re-add _PyDict_Pop() to have a C-API until a new public one is designed.
Fixes #109894
* set `interp.static_objects.last_resort_memory_error.args` to an empty tuple to avoid a crash on `PyErr_Display()` call
* allow `_PyExc_InitGlobalObjects()` to be called on subinterpreter init
---------
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
There were a few things I did in gh-110565 that need to be fixed. I also forgot to add tests in that PR.
(Note that this PR exposes a refleak introduced by gh-110246. I'll take care of that separately.)
Move the following private functions and structures to
pycore_modsupport.h internal C API:
* _PyArg_BadArgument()
* _PyArg_CheckPositional()
* _PyArg_NoKeywords()
* _PyArg_NoPositional()
* _PyArg_ParseStack()
* _PyArg_ParseStackAndKeywords()
* _PyArg_Parser structure
* _PyArg_UnpackKeywords()
* _PyArg_UnpackKeywordsWithVararg()
* _PyArg_UnpackStack()
* _Py_ANY_VARARGS()
Changes:
* Python/getargs.h now includes pycore_modsupport.h to export
functions.
* clinic.py now adds pycore_modsupport.h when one of these functions
is used.
* Add pycore_modsupport.h includes when a C extension uses one of
these functions.
* Define Py_BUILD_CORE_MODULE in C extensions which now include
directly or indirectly (via code generated by Argument Clinic)
pycore_modsupport.h:
* _csv
* _curses_panel
* _dbm
* _gdbm
* _multiprocessing.posixshmem
* _sqlite.row
* _statistics
* grp
* resource
* syslog
* _testcapi: bad_get() no longer uses METH_FASTCALL calling
convention but METH_VARARGS. Replace _PyArg_UnpackStack() with
PyArg_ParseTuple().
* _testcapi: add PYTESTCAPI_NEED_INTERNAL_API macro which is defined
by _testcapi sub-modules which need the internal C API
(pycore_modsupport.h): exceptions.c, float.c, vectorcall.c,
watchers.c.
* Remove Include/cpython/modsupport.h header file.
Include/modsupport.h no longer includes the removed header file.
* Fix mypy clinic.py
Add wrapper for timerfd_create, timerfd_settime, and timerfd_gettime to os module.
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Co-authored-by: Adam Turner <9087854+AA-Turner@users.noreply.github.com>
Co-authored-by: Erlend E. Aasland <erlend.aasland@protonmail.com>
Co-authored-by: Victor Stinner <vstinner@python.org>
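For reference, a minimal C sketch of the underlying Linux API that these
wrappers expose:
```
#include <stdint.h>
#include <sys/timerfd.h>
#include <unistd.h>

static void demo_timerfd(void)
{
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);
    struct itimerspec spec = {
        .it_value    = { .tv_sec = 1 },   /* first expiry after 1 s */
        .it_interval = { .tv_sec = 1 },   /* then every 1 s */
    };
    timerfd_settime(fd, 0, &spec, NULL);

    uint64_t expirations;
    read(fd, &expirations, sizeof(expirations));  /* blocks until expiry */
    close(fd);
}
```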
This adds a new field 'state' to PyThreadState that can take on one of three values: _Py_THREAD_ATTACHED, _Py_THREAD_DETACHED, or _Py_THREAD_GC. The "attached" and "detached" states correspond closely to acquiring and releasing the GIL. The "gc" state is currently unused, but will be used to implement stop-the-world GC for --disable-gil builds in the near future.
We do the following:
* add a per-interpreter XID registry (PyInterpreterState.xidregistry)
* put heap types there (keep static types in _PyRuntimeState.xidregistry)
* clear the registries during interpreter/runtime finalization
* avoid duplicate entries in the registry (when _PyCrossInterpreterData_RegisterClass() is called more than once for a type)
* use Py_TYPE() instead of PyObject_Type() in _PyCrossInterpreterData_Lookup()
The per-interpreter registry helps preserve isolation between interpreters. This is important when heap types are registered, which is something we haven't been doing yet but I will likely do soon.
Add PyThreadState_GetUnchecked() function: similar to
PyThreadState_Get(), but don't issue a fatal error if it is NULL. The
caller is responsible for checking if the result is NULL. Previously,
this function was private and known as _PyThreadState_UncheckedGet().
In a few places we switch to another interpreter without knowing if it has a thread state associated with the current thread. For the main interpreter there wasn't much of a problem, but for subinterpreters we were *mostly* okay re-using the tstate created with the interpreter (located via PyInterpreterState_ThreadHead()). There was a good chance that tstate wasn't actually in use by another thread.
However, there are no guarantees of that. Furthermore, re-using an already used tstate is currently fragile. To address this, now we create a new thread state in each of those places and use it.
One consequence of this change is that PyInterpreterState_ThreadHead() may now return NULL (though that won't happen for the main interpreter).
The existence of background threads running on a subinterpreter was preventing interpreters from getting properly destroyed, as well as impacting the ability to run the interpreter again. It also affected how we wait for non-daemon threads to finish.
We add PyInterpreterState.threads.main, with some internal C-API functions.
* Remove unused <locale.h> includes.
* Remove unused <fcntl.h> include in traceback.h.
* Remove redundant <assert.h> and <stddef.h> includes. They are already
included by "Python.h".
* Remove <object.h> include in faulthandler.c. Python.h already includes it.
* Add missing <stdbool.h> in pycore_pythread.h if HAVE_PTHREAD_STUBS
is defined.
* Also fix warnings in pthread_stubs.h: don't redefine macros if they
  are already defined, like the __NEED_pthread_t macro.
* pycore_pythread.h is now the central place to make sure that
_POSIX_THREADS and _POSIX_SEMAPHORES macros are defined if
available.
* Make sure that pycore_pythread.h is included when _POSIX_THREADS
and _POSIX_SEMAPHORES macros are tested.
* PY_TIMEOUT_MAX is now defined as a constant, since its value
depends on _POSIX_THREADS, instead of being defined as a macro.
* Prevent integer overflow in the preprocessor when computing
PY_TIMEOUT_MAX_VALUE on Windows:
replace "0xFFFFFFFELL * 1000 < LLONG_MAX"
with "0xFFFFFFFELL < LLONG_MAX / 1000".
* Document the change and give hints how to fix affected code.
* Add an exception for PY_TIMEOUT_MAX name to smelly.py
* Add PY_TIMEOUT_MAX to the stable ABI
These are the most popular specializations of `LOAD_ATTR` and `STORE_ATTR`
that weren't already viable uops:
* Split LOAD_ATTR_METHOD_WITH_VALUES
* Split LOAD_ATTR_METHOD_NO_DICT
* Split LOAD_ATTR_SLOT
* Split STORE_ATTR_SLOT
* Split STORE_ATTR_INSTANCE_VALUE
Also:
* Add `-v` flag to code generator which prints a list of non-viable uops
(easter-egg: it can print execution counts -- see source)
* Double _Py_UOP_MAX_TRACE_LENGTH to 128
I had dropped one of the DEOPT_IF() calls! :-(
PyMutex is a one byte lock with fast, inlineable lock and unlock functions for the common uncontended case. The design is based on WebKit's WTF::Lock.
PyMutex is built using the _PyParkingLot APIs, which provides a cross-platform futex-like API (based on WebKit's WTF::ParkingLot). This internal API will be used for building other synchronization primitives used to implement PEP 703, such as one-time initialization and events.
This also includes tests and a mini benchmark in Tools/lockbench/lockbench.py to compare with the existing PyThread_type_lock.
Uncontended acquisition + release:
* Linux (x86-64): PyMutex: 11 ns, PyThread_type_lock: 44 ns
* macOS (arm64): PyMutex: 13 ns, PyThread_type_lock: 18 ns
* Windows (x86-64): PyMutex: 13 ns, PyThread_type_lock: 38 ns
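A minimal usage sketch (internal-only API; treat the header name and the
PyMutex_Lock()/PyMutex_Unlock() entry points as assumptions):
```
#include "pycore_lock.h"

static PyMutex mutex;        /* zero-initialized: no setup call required */

static void
with_lock(void)
{
    PyMutex_Lock(&mutex);
    /* ... the uncontended path stays inline and fast ... */
    PyMutex_Unlock(&mutex);
}
```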
PR Overview:
The primary purpose of this PR is to implement PyMutex, but there are a number of support pieces (described below).
* PyMutex: A 1-byte lock that doesn't require memory allocation to initialize and is generally faster than the existing PyThread_type_lock. The API is internal only for now.
* _PyParking_Lot: A futex-like API based on the API of the same name in WebKit. Used to implement PyMutex.
* _PyRawMutex: A word sized lock used to implement _PyParking_Lot.
* PyEvent: A one time event. This was used a bunch in the "nogil" fork and is useful for testing the PyMutex implementation, so I've included it as part of the PR.
* pycore_llist.h: Defines common operations on doubly-linked list. Not strictly necessary (could do the list operations manually), but they come up frequently in the "nogil" fork. ( Similar to https://man.freebsd.org/cgi/man.cgi?queue)
---------
Co-authored-by: Eric Snow <ericsnowcurrently@gmail.com>
There is a WIP proposal to enable WebAssembly stack switching, which has been
implemented in v8:
https://github.com/WebAssembly/js-promise-integration
It is not possible to switch stacks that contain JS frames, so the Emscripten JS
trampolines that allow calling functions with the wrong number of arguments
don't work in this case. However, the js-promise-integration proposal requires
the [type reflection for Wasm/JS API](https://github.com/WebAssembly/js-types)
proposal, which allows us to actually count the number of arguments a function
expects.
For better compatibility with stack switching, this PR checks if type reflection
is available; if so, we use a switch block to decide the appropriate
signature. If type reflection is unavailable, we fall back to the current EMJS
trampoline.
We cache the function argument counts, since performance was negatively
affected when they weren't cached.
Co-authored-by: T. Wouters <thomas@python.org>
Co-authored-by: Brett Cannon <brett@python.org>
This makes the internal representation in the code generator simpler: there's a list of ops, and a list of macros, and there's no special-casing needed for ops that aren't macros. (There's now special-casing for ops that are also macros, but that's simpler.)
* Rename SAVE_IP to _SET_IP
* Rename EXIT_TRACE to _EXIT_TRACE
* Rename SAVE_CURRENT_IP to _SAVE_CURRENT_IP
* Rename INSERT to _INSERT (This is for Ken Jin's abstract interpreter)
* Rename IS_NONE to _IS_NONE
* Rename JUMP_TO_TOP to _JUMP_TO_TOP
This adds a 16-bit inline cache entry to the conditional branch instructions POP_JUMP_IF_{FALSE,TRUE,NONE,NOT_NONE} and their instrumented variants, which is used to keep track of the branch direction.
Each time we encounter these instructions we shift the cache entry left by one and set the bottom bit to whether we jumped.
Then when it's time to translate such a branch to Tier 2 uops, we use the bit count from the cache entry to decided whether to continue translating the "didn't jump" branch or the "jumped" branch.
The counter is initialized to a pattern of alternating ones and zeros to avoid bias.
The .pyc file magic number is updated. There's a new test, some fixes for existing tests, and a few miscellaneous cleanups.
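An illustrative model of the cache update and the translation-time
decision (not the exact CPython code):
```
#include <stdint.h>

/* Shift the 16-bit history left and record whether we jumped. */
static uint16_t
record_branch(uint16_t cache, int jumped)
{
    return (uint16_t)((cache << 1) | (jumped != 0));
}

static int
popcount16(uint16_t x)
{
    int n = 0;
    while (x) { n += x & 1; x >>= 1; }
    return n;
}

/* The cache starts as alternating ones and zeros to avoid bias; a
   majority of set bits means the branch usually jumps. */
static int
usually_jumps(uint16_t cache)
{
    return popcount16(cache) >= 8;
}
```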
Fix a _thread.start_new_thread() race condition. If a thread is created
during Python finalization, the newly spawned thread now exits
immediately instead of trying to access freed memory and crashing.
thread_run() calls PyEval_AcquireThread(), which checks if the thread
must exit. The problem was that tstate was dereferenced earlier in
_PyThreadState_Bind(), which led to a crash most of the time.
Move _PyThreadState_CheckConsistency() from thread_run() to
_PyThreadState_Bind().
thread_run() of _threadmodule.c now calls
_PyThreadState_CheckConsistency() to check if tstate is a dangling
pointer when Python is built in debug mode.
Rename ceval_gil.c is_tstate_valid() to
_PyThreadState_CheckConsistency() to reuse it in _threadmodule.c.
Statistics gathering is now off by default. Use the "-X pystats"
command line option or set the new PYTHONSTATS environment variable
to 1 to turn statistics gathering on at Python startup.
Statistics are no longer dumped at exit if statistics gathering was
off or statistics have been cleared.
Changes:
* Add PYTHONSTATS environment variable.
* sys._stats_dump() now returns False if statistics are not dumped
because they are all equal to zero.
* Add PyConfig._pystats member.
* Add tests on sys functions and on setting PyConfig._pystats to 1.
* Add Include/cpython/pystats.h and Include/internal/pycore_pystats.h
header files.
* Rename '_py_stats' variable to '_Py_stats'.
* Exclude Include/cpython/pystats.h from the Py_LIMITED_API.
* Move pystats.h include from object.h to Python.h.
* Add _Py_StatsOn() and _Py_StatsOff() functions. Remove
'_py_stats_struct' variable from the API: make it static in
specialize.c.
* Document API in Include/pystats.h and Include/cpython/pystats.h.
* Complete pystats documentation in Doc/using/configure.rst.
* Don't write "all zeros" stats: if _stats_off() and _stats_clear()
or _stats_dump() were called.
* _PyEval_Fini() now always calls _Py_PrintSpecializationStats(), which
  does nothing if stats are all zeros.
Co-authored-by: Michael Droettboom <mdboom@gmail.com>
Move the private _PyErr_WriteUnraisableMsg() function to the
internal C API (pycore_pyerrors.h).
Move write_unraisable_exc() from _testcapi to _testinternalcapi.
* Move <ctype.h>, <limits.h> and <stdarg.h> standard includes to
Python.h.
* Move "pystats.h" include from object.h to Python.h.
* Remove redundant "pymem.h" include in objimpl.h and "pyport.h"
include in pymem.h; Python.h already includes them earlier.
* Remove redundant <wchar.h> include in unicodeobject.h; Python.h
already includes it.
* Move _SGI_MP_SOURCE define from Python.h to pyport.h.
* pycore_condvar.h includes explicitly <unistd.h> for the
_POSIX_THREADS macro.
pycore_create_interpreter() now returns a status, rather than
calling Py_FatalError().
* PyInterpreterState_New() now calls Py_ExitStatusException() instead
of calling Py_FatalError() directly.
* Replace Py_FatalError() with PyStatus in init_interpreter() and
_PyObject_InitState().
* _PyErr_SetFromPyStatus() now raises RuntimeError, instead of
  ValueError. It can now call PyErr_NoMemory() to raise MemoryError
  if it detects a _PyStatus_NO_MEMORY() error message.
Move the private _PyLong_Sign() and _PyLong_NumBits() functions
to the internal C API (pycore_long.h).
Modules/_testcapi/long.c now uses the internal C API.
This adds a new header that provides atomic operations on common data
types. The intention is that this will be exposed through Python.h,
although that is not the case yet. The only immediate use is in
the test file.
Co-authored-by: Sam Gross <colesbury@gmail.com>
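A short sketch using two of the provided operations (treat the exact
spellings as assumptions):
```
static int counter;

static void
bump(void)
{
    _Py_atomic_add_int(&counter, 1);       /* atomic read-modify-write */
}

static int
read_counter(void)
{
    return _Py_atomic_load_int(&counter);  /* atomic load */
}
```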
Python built with "configure --with-trace-refs" (tracing references)
is now ABI compatible with Python release build and debug build.
Moreover, it now also supports the Limited API.
Change Py_TRACE_REFS build:
* Remove _PyObject_EXTRA_INIT macro.
* The PyObject structure no longer has two extra members (_ob_prev
and _ob_next).
* Use a hash table (_Py_hashtable_t) to trace references (all
objects): PyInterpreterState.object_state.refchain.
* Py_TRACE_REFS build is now ABI compatible with release build and
debug build.
* Limited C API extensions can now be built with Py_TRACE_REFS:
xxlimited, xxlimited_35, _testclinic_limited.
* No longer rename PyModule_Create2() and PyModule_FromDefAndSpec2()
functions to PyModule_Create2TraceRefs() and
PyModule_FromDefAndSpec2TraceRefs().
* _Py_PrintReferenceAddresses() is now called before
finalize_interp_delete() which deletes the refchain hash table.
* test_tracemalloc find_trace() now also filters by size to ignore
the memory allocated by _PyRefchain_Trace().
Test changes for Py_TRACE_REFS:
* Add test.support.Py_TRACE_REFS constant.
* Add test_sys.test_getobjects() to test sys.getobjects() function.
* test_exceptions skips test_recursion_normalizing_with_no_memory()
and test_memory_error_in_PyErr_PrintEx() if Python is built with
Py_TRACE_REFS.
* test_repl skips test_no_memory().
* test_capi skips test_set_nomemory().
Remove _PyErr_ChainExceptions(), _PyErr_ChainExceptions1() and
_PyErr_SetFromPyStatus() functions from the public C API.
* Move the private _PyErr_ChainExceptions() and
_PyErr_ChainExceptions1() function to the internal C API
(pycore_pyerrors.h).
* Move the private _PyErr_SetFromPyStatus() to the internal C API
(pycore_initconfig.h).
* No longer export the _PyErr_ChainExceptions() function.
* Move run_in_subinterp_with_config() from _testcapi to
_testinternalcapi.
There is no need to export the _Py_ForgetReference() function of the
Py_TRACE_REFS build. It's not used by shared extensions. Also enhance
its comment.
Move PyUnstable_ExecutableKinds and associated macros from the
internal C API to the public C API.
Rename constants: replace "PY_" prefix with "PyUnstable_" prefix.
Also remove NOP instructions.
The "stubs" are not optimized in this fashion (their SAVE_IP should always be preserved since it's where to jump next, and they don't contain NOPs by their nature).
Move the following private API to the internal C API (pycore_long.h):
* _PyLong_Copy()
* _PyLong_FromDigits()
* _PyLong_New()
No longer export most of these functions.
Remove the private _PyGILState_GetInterpreterStateUnsafe() function
from the public C API: move it to the internal C API (pycore_pystate.h).
No longer export the function.
Remove the private _Py_UniversalNewlineFgetsWithSize() function from
the public C API: move it to the internal C API (pycore_fileutils.h).
No longer export the function.
Remove these private functions from the public C API:
* _PyRun_AnyFileObject()
* _PyRun_InteractiveLoopObject()
* _PyRun_SimpleFileObject()
* _Py_SourceAsString()
Move them to the internal C API: add a new pycore_pythonrun.h header
file. No longer export these functions.
Remove the private _Py_Identifier type and related private functions
from the public C API:
* _PyObject_GetAttrId()
* _PyObject_LookupSpecialId()
* _PyObject_SetAttrId()
* _PyType_LookupId()
* _Py_IDENTIFIER()
* _Py_static_string()
* _Py_static_string_init()
Move them to the internal C API: add a new pycore_identifier.h header
file. No longer export these functions.
Move these private functions to the internal C API
(pycore_abstract.h):
* _Py_convert_optional_to_ssize_t()
* _PyNumber_Index()
Argument Clinic now emits #include "pycore_abstract.h" when these
functions are used.
The parser of the c-analyzer tool now uses a list of files which use
the limited C API, rather than a list of files using the internal C
API.
Move the private _PyLong converter functions to the internal C API
* _PyLong_FileDescriptor_Converter(): moved to pycore_fileutils.h
* _PyLong_Size_t_Converter(): moved to pycore_long.h
Argument Clinic now emits includes for pycore_fileutils.h and
pycore_long.h when these functions are used.
Move these private functions to the internal C API (pycore_long.h):
* _PyLong_UnsignedInt_Converter()
* _PyLong_UnsignedLongLong_Converter()
* _PyLong_UnsignedLong_Converter()
* _PyLong_UnsignedShort_Converter()
Argument Clinic now emits #include "pycore_long.h" when these
functions are used.
Instead of using `GO_TO_INSTRUCTION(CALL_PY_EXACT_ARGS)` we just add the macro elements of the latter to the macro for the former. This requires lengthening the uops array in struct opcode_macro_expansion. (It also required changes to stacking.py that were merged already.)
Move private functions to the internal C API (pycore_sysmodule.h):
* _PySys_GetAttr()
* _PySys_GetSizeOf()
No longer export most of these functions.
Also fix a typo in Include/cpython/optimizer.h: add a missing space.
Move private functions to the internal C API (pycore_dict.h):
* _PyDictView_Intersect()
* _PyDictView_New()
* _PyDict_ContainsId()
* _PyDict_DelItemId()
* _PyDict_DelItem_KnownHash()
* _PyDict_GetItemIdWithError()
* _PyDict_GetItem_KnownHash()
* _PyDict_HasSplitTable()
* _PyDict_NewPresized()
* _PyDict_Next()
* _PyDict_Pop()
* _PyDict_SetItemId()
* _PyDict_SetItem_KnownHash()
* _PyDict_SizeOf()
No longer export most of these functions.
Move also the _PyDictViewObject structure to the internal C API.
Move dict_getitem_knownhash() function from _testcapi to the
_testinternalcapi extension. Update test_capi.test_dict for this
change.