Instead of surprise crashes and memory corruption, we now hang threads that attempt to re-enter the Python interpreter after Python runtime finalization has started. These are typically daemon threads (our long-standing misfeature) but could also be threads spawned by extension modules that then try to call into Python. This also marks the public `PyThread_exit_thread` C API as deprecated, since there is no plausible safe way to exit a thread on any supported platform in the face of things like C++ code with finalizers anywhere on a thread's stack. Doing this was the least bad option.
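As a rough illustration (not code from the change itself), the sketch below shows the kind of program affected: a daemon thread keeps calling into the interpreter while the main thread exits. With this change, the daemon thread hangs at its next attempt to re-enter the interpreter once finalization has started, instead of crashing the process.

```python
import threading
import time

def nag():
    while True:
        # Each iteration re-enters the interpreter (executing bytecode,
        # reacquiring the GIL after the sleep); that re-entry is what now
        # blocks forever once runtime finalization has begun.
        time.sleep(0.01)

threading.Thread(target=nag, daemon=True).start()
# The main thread returns immediately; finalization begins while `nag`
# is still running.
```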
Co-authored-by: Gregory P. Smith <greg@krypto.org>
This is unsupported. Note that `skip_unless_reliable_fork()` checks for the same
conditions as the decorators that were removed, in addition to checking for TSAN.
Free-threaded builds can intermittently tickle a longstanding bug (24 years!)
in the implementation of `threading.Condition`, leading to flakiness in the
test suite. Fixing the underlying issue will require more discussion, and will
likely apply to most of the concurrency primitives in the `threading` module
that are written in Python. See gh-118433 for more details.
The free-threaded build does not currently support the combination of
single-phase init modules and non-isolated subinterpreters. Ensure that
`check_multi_interp_extensions` is always `True` for subinterpreters in
the free-threaded build so that importing these modules raises an
`ImportError`.
- re-enable test_fcntl_64_bit on Linux aarch64, but disable it on all
Android ABIs
- use support.setswitchinterval in all relevant tests
- skip test_fma_zero_result on Android x86_64
- accept EACCES when calling os.get_terminal_size on Android
Starting in Python 3.12, we prevented calling fork() and starting new threads
during interpreter finalization (shutdown). This has led to a number of
regressions and flaky tests. We should not prevent starting new threads
(or `fork()`) until all non-daemon threads exit and finalization starts in
earnest.
This changes the checks to use `_PyInterpreterState_GetFinalizing(interp)`,
which is set immediately before terminating non-daemon threads.
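As a rough sketch of the intended semantics (illustration only, not part of the change): a non-daemon thread that outlives the main thread can still start new threads, because finalization does not begin until all non-daemon threads have exited.

```python
import threading

def helper():
    print("helper ran")

def worker():
    # Runs after the main thread has returned but before finalization
    # starts in earnest, so starting another thread here keeps working.
    t = threading.Thread(target=helper)
    t.start()
    t.join()

threading.Thread(target=worker).start()
# The main thread returns here; `worker` is non-daemon, so the interpreter
# waits for it (and thus for `helper`) before finalizing.
```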
There is a race between when `Thread._tstate_lock` is released[^1] in `Thread._wait_for_tstate_lock()`
and when `Thread._stop()` asserts[^2] that it is unlocked. Consider the following execution
involving threads A, B, and C:
1. A starts.
2. B joins A, blocking on its `_tstate_lock`.
3. C joins A, blocking on its `_tstate_lock`.
4. A finishes and releases its `_tstate_lock`.
5. B acquires A's `_tstate_lock` in `_wait_for_tstate_lock()`, releases it, but is swapped
out before calling `_stop()`.
6. C is scheduled, acquires A's `_tstate_lock` in `_wait_for_tstate_lock()` but is swapped
out before releasing it.
7. B is scheduled, calls `_stop()`, which asserts that A's `_tstate_lock` is not held.
However, C holds it, so the assertion fails.
The race can be reproduced[^3] by inserting sleeps at the appropriate points in
the threading code. To do so, run the `repro_join_race.py` script from the linked repo.
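For reference, here is a heavily trimmed, illustrative rendition of the old join path where the interleaving above happens (names follow `Lib/threading.py`, bodies simplified):

```python
def _wait_for_tstate_lock(self, block=True, timeout=-1):
    lock = self._tstate_lock
    if lock is None:
        self._stop()                       # already released by another joiner
    elif lock.acquire(block, timeout):     # steps 5 (B) and 6 (C)
        lock.release()                     # B releases; C may re-acquire now
        self._stop()                       # step 7: asserts the lock is free,
                                           # but C can still be holding it
```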
There are two main parts to this PR:
1. `_tstate_lock` is replaced with an event that is attached to `PyThreadState`.
The event is set by the runtime prior to the thread being cleared (in the same
place that `_tstate_lock` was released). `Thread.join()` blocks waiting for the
event to be set.
2. `_PyInterpreterState_WaitForThreads()` provides the ability to wait for all
non-daemon threads to exit. To do so, an `is_daemon` predicate was added to
`PyThreadState`. This field is set each time a thread is created. `threading._shutdown()`
now calls into `_PyInterpreterState_WaitForThreads()` instead of waiting on
`_tstate_lock`s.
[^1]: 441affc9e7/Lib/threading.py (L1201)
[^2]: 441affc9e7/Lib/threading.py (L1115)
[^3]: 8194653279
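A conceptual, Python-level analog of part 1 above (the real change lives in C on `PyThreadState`; the names here are illustrative only):

```python
import threading

class _JoinHandle:
    """Stand-in for the event now attached to PyThreadState."""

    def __init__(self):
        self._done = threading.Event()

    def runtime_about_to_clear_tstate(self):
        # Called at the point where _tstate_lock used to be released.
        self._done.set()

    def join(self, timeout=None):
        # Thread.join() just waits for the event; there is no lock to
        # acquire and re-release, so the release/assert race described
        # above cannot happen.
        return self._done.wait(timeout)
```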
---------
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Antoine Pitrou <antoine@python.org>
This brings the code under test.support.interpreters, and the corresponding extension modules, in line with recent updates to PEP 734.
(Note: PEP 734 has not been accepted at this time. However, we are using an internal copy of the implementation in the test suite to exercise the existing subinterpreters feature.)
`threading.Lock` is now the underlying lock class itself and is directly constructible,
rather than the old factory function. This allows type annotations to refer to it, which
previously had no non-ugly way of being expressed.
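A small usage sketch (not from the patch): annotations and `isinstance()` checks can now name the lock type directly, instead of going through workarounds such as `_thread.LockType`.

```python
import threading

class Counter:
    lock: threading.Lock   # now a real class, so this annotation is natural

    def __init__(self) -> None:
        self.lock = threading.Lock()
        self.value = 0

    def increment(self) -> None:
        with self.lock:
            self.value += 1

# isinstance() also works now that Lock is the class rather than a factory.
assert isinstance(Counter().lock, threading.Lock)
```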
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Gregory P. Smith <greg@krypto.org>
Always set a _MainThread as the main thread after os.fork() is called from
a thread that was not started by the threading module.
Previously, a new _MainThread was set as the main thread after fork only if
threading.current_thread() had not been called for the foreign thread before
the fork. Now, if it was called before the fork, the implicitly created
_DummyThread is turned into a _MainThread after the fork.
In particular, this fixes an incompatibility between _DummyThread and the
threading shutdown logic, which relies on the main thread having a
tstate_lock.
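A hedged sketch of the scenario (the foreign thread is simulated with the low-level `_thread` module, and synchronization is deliberately crude):

```python
import os
import threading
import time
import _thread

def foreign():
    # current_thread() from a thread unknown to `threading` registers an
    # implicit _DummyThread for it.
    threading.current_thread()
    pid = os.fork()
    if pid == 0:
        # With this change, the forking thread becomes the _MainThread in
        # the child, so threading's shutdown logic works as usual.
        print(threading.current_thread(), threading.main_thread())
        os._exit(0)
    os.waitpid(pid, 0)

_thread.start_new_thread(foreign, ())
time.sleep(1)  # crude; a real test would synchronize explicitly
```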
Co-authored-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
gh-112982 broke test_threading on one of the s390 buildbots (Fedora Clang Installed). Apparently ImportError is raised (rather than ModuleNotFoundError) for the name part of "from" imports. This fixes that by catching ImportError in test_threading.py.
Joining a thread now ensures the underlying OS thread has exited. This is required for safer fork() in multi-threaded processes.
---------
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
The existence of background threads running on a subinterpreter was preventing interpreters from getting properly destroyed, as well as impacting the ability to run the interpreter again. It also affected how we wait for non-daemon threads to finish.
We add PyInterpreterState.threads.main, with some internal C-API functions.
Disallow thread creation and fork at interpreter finalization.
In the following functions, check whether the interpreter is finalizing and raise `RuntimeError` with an appropriate message (a short sketch of the resulting behavior follows the list):
* `_thread.start_new_thread` and thus `threading`
* `posix.fork`
* `posix.fork1`
* `posix.forkpty`
* `_posixsubprocess.fork_exec` when a `preexec_fn=` is supplied.
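A rough sketch of the kind of code these checks catch (assumed scenario; the exact message may differ): a finalizer running during interpreter shutdown that tries to start a thread now gets a `RuntimeError` instead of undefined behavior.

```python
import threading

class Spawner:
    def __del__(self):
        try:
            threading.Thread(target=print, args=("too late",)).start()
        except RuntimeError:
            # e.g. "can't create new thread at interpreter shutdown"
            pass

# This instance is destroyed while the interpreter is finalizing.
keep_alive = Spawner()
```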
---------
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Gregory P. Smith <greg@krypto.org>
Having a separate lock means Thread.join() doesn't need to wait for the thread to be cleaned up first. It can wait for the thread's Python target to finish running. This gives us some flexibility in how we clean up threads.
(This is a minor cleanup as part of a fix for gh-104341.)
We also add PyInterpreterState.ceval.own_gil to record if the interpreter actually has its own GIL.
Note that for now we don't actually respect own_gil; all interpreters still share the one GIL. However, PyInterpreterState.ceval.own_gil does reflect PyInterpreterConfig.own_gil. That lie is a temporary one that we will fix when the GIL really becomes per-interpreter.
This is strictly about moving the "obmalloc" runtime state from
`_PyRuntimeState` to `PyInterpreterState`. Doing so improves isolation
between interpreters, specifically most of the memory (incl. objects)
allocated for each interpreter's use. This is important for a
per-interpreter GIL, but such isolation is valuable even without it.
FWIW, a per-interpreter obmalloc is the proverbial
canary-in-the-coalmine when it comes to the isolation of objects between
interpreters. Any object that leaks (unintentionally) to another
interpreter is highly likely to cause a crash (on debug builds at
least). That's a useful thing to know, relative to interpreter
isolation.
Enforcing (optionally) the restriction set by PEP 489 makes sense. Furthermore, this sets the stage for a potential restriction related to a per-interpreter GIL.
This change includes the following:
* add tests for extension module subinterpreter compatibility
* add _PyInterpreterConfig.check_multi_interp_extensions
* add Py_RTFLAGS_MULTI_INTERP_EXTENSIONS
* add _PyImport_CheckSubinterpIncompatibleExtensionAllowed()
* fail iff the module does not implement multi-phase init and the current interpreter is configured to check
https://github.com/python/cpython/issues/98627
This is not a comprehensive warning, just a best-effort one. There are cases on some
platforms where threads exist that this code cannot detect. macOS (when API permissions
allow) and Linux (with a readable /proc procfs present) are the currently supported
cases where the warning should show up reliably.
We are starting with a DeprecationWarning for now; it is less disruptive than something
like RuntimeWarning and is most likely to be seen only in people's CI tests, which is a
good place to start with this messaging.
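A hedged sketch of the situation that now triggers the warning (POSIX only, and only where thread detection works as described above):

```python
import os
import threading
import time

# Leave another thread running so the fork() happens in a multi-threaded
# process.
threading.Thread(target=time.sleep, args=(5,), daemon=True).start()

pid = os.fork()   # expected to emit a DeprecationWarning where detection works
if pid == 0:
    os._exit(0)
os.waitpid(pid, 0)
```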
Previously, the optional restrictions on subinterpreters were: disallow fork, subprocess, and threads. By default, we were disallowing all three for "isolated" interpreters. We always allowed all three for the main interpreter and those created through the legacy `Py_NewInterpreter()` API.
Those settings were a bit conservative, so here we've adjusted the optional restrictions to: fork, exec, threads, and daemon threads. The default for "isolated" interpreters disables fork, exec, and daemon threads; regular threads are allowed by default. We continue to always allow everything for the main interpreter and the legacy API.
In the code, we add `_PyInterpreterConfig.allow_exec` and `_PyInterpreterConfig.allow_daemon_threads`. We also add `Py_RTFLAGS_DAEMON_THREADS` and `Py_RTFLAGS_EXEC`.
* gh-93503: Add APIs to set profiling and tracing functions in all threads in the C-API
* Use a separate API
* Fix NEWS entry
* Add locks around the loop
* Document ignoring exceptions
* Use the new APIs in the sys module
* Update docs
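At the Python level, the counterparts are `threading.settrace_all_threads()` and `threading.setprofile_all_threads()` (3.12+); a brief usage sketch:

```python
import threading

def tracer(frame, event, arg):
    # Called for 'call', 'line', 'return', ... events in every thread.
    return tracer

# Install the trace function in all currently running Python threads as
# well as in threads started afterwards via the threading module.
threading.settrace_all_threads(tracer)
```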
Join the thread so that threads running in the background do not leak into the
next test.
Fix the following warning on the "AMD64 FreeBSD Shared 3.11"
buildbot:
test_args_argument (test.test_threading.ThreadTests.test_args_argument) ...
Warning -- threading_cleanup() failed to cleanup 1 threads (count: 1, dangling: 2)
Warning -- Dangling thread: <_MainThread(MainThread, started 35026161664)>
Warning -- Dangling thread: <Thread(Thread-134 (<lambda>), started 35314998016)>
ok
For threads, and for multiprocessing, it's always been the case that ``args=list`` works fine when passed to ``Process()`` or ``Thread()``, and such code is common in the wild. But, according to the docs, only a tuple can be used. This brings the docs into sync with reality.
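A trivial sketch of what has always worked at runtime:

```python
import threading

t1 = threading.Thread(target=print, args=("hello", "tuple"))  # documented form
t2 = threading.Thread(target=print, args=["hello", "list"])   # also fine in practice
t1.start(); t2.start()
t1.join(); t2.join()
```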
Doc changes by Charlie Zhao.
Co-authored-by: Tim Peters <tim.peters@gmail.com>
Fix the threading._shutdown() function when the threading module was first
imported from a thread other than the main thread: no longer log an error at
Python exit.
Doing this provides significant performance gains for runtime startup (~15% with all the imported modules frozen). We don't yet freeze all the imported modules because there are a few hiccups in the build systems we need to sort out first. (See bpo-45186 and bpo-45188.)
Note that in PR GH-28320 we added a command-line flag (-X frozen_modules=[on|off]) that allows users to opt out of (or into) using frozen modules. The default is still "off" but we will change it to "on" as soon as we can do it in a way that does not cause contributors pain.
https://bugs.python.org/issue45020
The threading debug mode (PYTHONTHREADDEBUG environment variable) is
deprecated in Python 3.10 and will be removed in Python 3.12. This
feature requires a debug build of Python.