Add Py_GetConstant() and Py_GetConstantBorrowed() functions.
In the limited C API version 3.13, getting Py_None, Py_False,
Py_True, Py_Ellipsis and Py_NotImplemented singletons is now
implemented as function calls at the stable ABI level to hide
implementation details. Getting these constants still returns borrowed
references.
Add _testlimitedcapi/object.c and test_capi/test_object.py to test the
Py_GetConstant() and Py_GetConstantBorrowed() functions.
Mostly, this unifies the two different implementations of the conversion code (from PyObject * to int64_t). We also drop the PyArg_ParseTuple()-style converter function, and rename and move PyInterpreterID_LookUp().
This changes the free-threaded build to perform a stop-the-world pause
before deleting other thread states when forking and during shutdown.
This fixes some crashes when using multiprocessing and during shutdown
when running with `PYTHON_GIL=0`.
This also changes `PyOS_BeforeFork` to acquire the runtime lock
(i.e., `HEAD_LOCK(&_PyRuntime)`) before forking to ensure that data
protected by the runtime lock (and not just the GIL or stop-the-world)
is in a consistent state before forking.
These writes to `pending->calls_to_do` need to be atomic, because other threads
can read (atomically) from `calls_to_do` without holding `pending->mutex`.
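A minimal sketch of the pattern, using C11 atomics in place of CPython's internal `_Py_atomic_*` helpers (`pending_calls_t` is illustrative, not the real struct):

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_int calls_to_do;   /* written under a mutex, read without it */
} pending_calls_t;

/* Writer: holds the mutex in CPython, but the store must still be
   atomic because readers never take that mutex. */
static void
signal_pending(pending_calls_t *pending)
{
    atomic_store_explicit(&pending->calls_to_do, 1, memory_order_release);
}

/* Reader: lock-free fast path checked on every eval-loop iteration. */
static bool
have_pending(pending_calls_t *pending)
{
    return atomic_load_explicit(&pending->calls_to_do,
                                memory_order_acquire) != 0;
}
```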
* document the equivalent command-line option for every environment variable
* document the equivalent environment variable for every command-line option
* reduce variable and option descriptions to a minimum
* remove the trailing period from single-sentence descriptions
Co-authored-by: Éric <merwok@netwok.org>
Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>
Keep Tools/build/deepfreeze.py around (we may repurpose it for deepfreezing non-code objects),
and keep basic "clean" targets that remove the output of former deep-freeze activities,
to keep the build directories of current devs clean.
Somehow we ended up with two separate counter variables tracking "the next function version".
Most likely this was a historical accident where an old branch was updated incorrectly.
This PR merges the two counters into a single one: `interp->func_state.next_version`.
There is a race between when `Thread._tstate_lock` is released[^1] in `Thread._wait_for_tstate_lock()`
and when `Thread._stop()` asserts[^2] that it is unlocked. Consider the following execution
involving threads A, B, and C:
1. A starts.
2. B joins A, blocking on its `_tstate_lock`.
3. C joins A, blocking on its `_tstate_lock`.
4. A finishes and releases its `_tstate_lock`.
5. B acquires A's `_tstate_lock` in `_wait_for_tstate_lock()`, releases it, but is swapped
out before calling `_stop()`.
6. C is scheduled, acquires A's `_tstate_lock` in `_wait_for_tstate_lock()` but is swapped
out before releasing it.
7. B is scheduled, calls `_stop()`, which asserts that A's `_tstate_lock` is not held.
However, C holds it, so the assertion fails.
The race can be reproduced[^3] by inserting sleeps at the appropriate points in
the threading code. To do so, run `repro_join_race.py` from the linked repo.
There are two main parts to this PR:
1. `_tstate_lock` is replaced with an event that is attached to `PyThreadState`.
The event is set by the runtime prior to the thread being cleared (in the same
place that `_tstate_lock` was released). `Thread.join()` blocks waiting for the
event to be set (see the sketch after this list).
2. `_PyInterpreterState_WaitForThreads()` provides the ability to wait for all
non-daemon threads to exit. To do so, an `is_daemon` predicate was added to
`PyThreadState`. This field is set each time a thread is created. `threading._shutdown()`
now calls into `_PyInterpreterState_WaitForThreads()` instead of waiting on
`_tstate_lock`s.
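A minimal pthreads sketch of the one-shot event from point 1 (names and primitives are illustrative; CPython uses its own internal event type). Unlike a lock, setting the event is idempotent, and any number of joiners can wait on it without racing each other:

```c
#include <pthread.h>
#include <stdbool.h>

typedef struct {
    pthread_mutex_t mu;
    pthread_cond_t cv;
    bool set;
} event_t;

#define EVENT_INIT { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, false }

/* Set by the runtime just before the thread state is cleared. */
static void
event_set(event_t *ev)
{
    pthread_mutex_lock(&ev->mu);
    ev->set = true;
    pthread_cond_broadcast(&ev->cv);   /* wake every joiner, not just one */
    pthread_mutex_unlock(&ev->mu);
}

/* Thread.join() blocks here; waiting never "consumes" the event. */
static void
event_wait(event_t *ev)
{
    pthread_mutex_lock(&ev->mu);
    while (!ev->set) {
        pthread_cond_wait(&ev->cv, &ev->mu);
    }
    pthread_mutex_unlock(&ev->mu);
}
```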
[^1]: 441affc9e7/Lib/threading.py (L1201)
[^2]: 441affc9e7/Lib/threading.py (L1115)
[^3]: 8194653279
---------
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Antoine Pitrou <antoine@python.org>
On Windows, time.monotonic() now uses the QueryPerformanceCounter()
clock to get a resolution better than 1 us, instead of the
GetTickCount64() clock, which has a resolution of 15.6 ms.
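A minimal sketch of the new clock source using the Win32 APIs directly (CPython's actual conversion code is more careful about overflow):

```c
#include <windows.h>

/* QPC ticks divided by the (boot-time constant) performance frequency
   give sub-microsecond resolution, where GetTickCount64() only
   advances every ~15.6 ms. */
static double
monotonic_qpc(void)
{
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);   /* ticks per second */
    QueryPerformanceCounter(&now);
    return (double)now.QuadPart / (double)freq.QuadPart;
}
```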
There are now at least two bytecodes that may attempt to optimize:
JUMP_BACKWARD and, more recently, COLD_EXIT.
Only JUMP_BACKWARD was counting the attempt in the stats.
This moves that counter into uop_optimize() itself, so the attempt is
counted no matter where the optimizer is called from.
This isn't strictly necessary because the implementation of `gc_should_collect`
already checks `gcstate->enabled` in the free-threaded build, but it seems
like a good idea until the common pieces of gc.c and gc_free_threading.c are
refactored out.
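Roughly, the existing check being duplicated looks like this (a sketch; the real free-threaded GC logic has more heuristics):

```c
#include <stdbool.h>

typedef struct {
    int enabled;    /* toggled by gc.enable() / gc.disable() */
    /* ... thresholds, counts, ... */
} GCState;

static bool
gc_should_collect(GCState *gcstate)
{
    if (!gcstate->enabled) {
        return false;   /* the check this change also adds earlier */
    }
    /* ... threshold-based decision ... */
    return true;
}
```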
This moves `current_fast_clear()` up so that the current thread state is
`NULL` while running `tstate_delete_common()`.
This doesn't fix any bugs, but it makes us more consistent about the
invariant that `_PyThreadState_GET() != NULL` means the thread is "attached".
_PyTime_FromSecondsDouble() now returns 0 on success, and sets an
exception and returns -1 on error.
Fix os.timerfd_settime(): properly report exceptions on
_PyTime_FromSecondsDouble() failure.
No longer export _PyTime_FromSecondsDouble().
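A sketch of the convention the function now follows (the real signature and rounding logic are internal; the names below are illustrative):

```c
#include <Python.h>
#include <math.h>

/* Hypothetical stand-in: return 0 on success; set an exception and
   return -1 on error, so callers like os.timerfd_settime() can
   propagate the failure instead of silently ignoring it. */
static int
seconds_double_to_ns(double seconds, long long *ns)
{
    if (isnan(seconds)) {
        PyErr_SetString(PyExc_ValueError, "invalid value (not a number)");
        return -1;
    }
    *ns = (long long)(seconds * 1e9);   /* real code rounds, checks overflow */
    return 0;
}
```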
In free-threaded builds, running with `PYTHON_GIL=0` will now disable the
GIL. Follow-up issues track work to re-enable the GIL when loading an
incompatible extension, and to disable the GIL by default.
In order to support re-enabling the GIL at runtime, all GIL-related data
structures are initialized as usual, and disabling the GIL simply sets a flag
that causes `take_gil()` and `drop_gil()` to return early.
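A rough sketch of that early-return flag (struct and field names are illustrative, not CPython's exact internals):

```c
struct gil_state {
    int enabled;   /* cleared when PYTHON_GIL=0 disables the GIL */
    /* ... mutex, condition variable, holder, etc., still initialized
       as usual so the GIL can be re-enabled later ... */
};

static void
take_gil_sketch(struct gil_state *gil)
{
    if (!gil->enabled) {
        return;   /* GIL disabled: acquisition becomes a no-op */
    }
    /* ... normal blocking acquisition path ... */
}
```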
In general, when `_PyThreadState_GET()` is non-NULL, the current
thread is "attached", but there is a small window during
`PyThreadState_DeleteCurrent()` where that's not true:
`tstate_delete_common()` is called while the thread is detached, but
before `current_fast_clear()` has run.
Co-authored-by: Eric Snow <ericsnowcurrently@gmail.com>
If a thread blocks while waiting on the `shared->mutex` lock, the array
of QSBR states may be reallocated. The `tstate->qsbr` value read before the
lock is acquired may not be the same as the value after the lock is acquired.
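The fix pattern, roughly (the types are stand-ins for the internal QSBR structures):

```c
#include <pthread.h>

struct qsbr_state  { long seq; };
struct qsbr_shared { pthread_mutex_t mutex; };
struct tstate      { struct qsbr_state *qsbr; };

static long
read_qsbr_seq(struct tstate *tstate, struct qsbr_shared *shared)
{
    /* Don't cache tstate->qsbr before locking: blocking on the mutex
       gives other threads a chance to reallocate the state array,
       leaving a pre-lock pointer dangling. */
    pthread_mutex_lock(&shared->mutex);
    struct qsbr_state *qsbr = tstate->qsbr;   /* re-read under the lock */
    long seq = qsbr->seq;
    pthread_mutex_unlock(&shared->mutex);
    return seq;
}
```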