Having a separate lock means Thread.join() doesn't need to wait for the thread to be cleaned up first. It can wait for the thread's Python target to finish running. This gives us some flexibility in how we clean up threads.
(This is a minor cleanup as part of a fix for gh-104341.)
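As a rough illustration of the pattern (not the actual CPython internals; the Worker class and names below are invented for this sketch), a per-thread lock that is released as soon as the Python target returns lets join() wait only for the target rather than for the remaining cleanup:

    import threading

    class Worker:
        """Illustrative only: a lock held while the target runs and
        released the moment it returns, so join() does not have to wait
        for any later cleanup of the thread itself."""

        def __init__(self, target):
            self._target = target
            self._done = threading.Lock()
            self._done.acquire()                 # held until the target finishes
            self._thread = threading.Thread(target=self._run)

        def _run(self):
            try:
                self._target()
            finally:
                self._done.release()             # target finished; joiners may proceed

        def start(self):
            self._thread.start()

        def join(self):
            self._done.acquire()                 # waits only for the target
            self._done.release()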
This is a best-effort warning, not a comprehensive one. On some platforms there are cases where threads exist that this code cannot detect. macOS (when API permissions allow it) and Linux with a readable /proc filesystem are the currently supported cases where the warning should show up reliably.
We start with a DeprecationWarning for now: it is less disruptive than something like RuntimeWarning, and it is most likely to be seen only in people's CI tests, which is a good place to start with this messaging.
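A minimal way to observe the warning on a platform where detection works (Unix-only sketch; whether it fires depends on the platform support described above):

    import os
    import threading
    import time
    import warnings

    t = threading.Thread(target=time.sleep, args=(1,))
    t.start()

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        pid = os.fork()            # another thread is still running here
        if pid == 0:
            os._exit(0)            # child: exit immediately
        os.waitpid(pid, 0)

    # On macOS (with permissions) or Linux with a readable /proc, a
    # DeprecationWarning about forking a multi-threaded process is expected.
    print([str(w.message) for w in caught])
    t.join()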
Previously, the optional restrictions on subinterpreters were: disallow fork, subprocess, and threads. By default, we were disallowing all three for "isolated" interpreters. We always allowed all three for the main interpreter and those created through the legacy `Py_NewInterpreter()` API.
Those settings were a bit conservative, so here we've adjusted the optional restrictions to: fork, exec, threads, and daemon threads. The default for "isolated" interpreters disables fork, exec, and daemon threads. Regular threads are allowed by default. We continue to always allow everything for the main interpreter and the legacy API.
In the code, we add `_PyInterpreterConfig.allow_exec` and `_PyInterpreterConfig.allow_daemon_threads`. We also add `Py_RTFLAGS_DAEMON_THREADS` and `Py_RTFLAGS_EXEC`.
* gh-93503: Add APIs to set profiling and tracing functions in all threads in the C-API
* Use a separate API
* Fix NEWS entry
* Add locks around the loop
* Document ignoring exceptions
* Use the new APIs in the sys module
* Update docs
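For the Python-level side, a sketch assuming the threading.settrace_all_threads() wrapper that accompanies these C APIs (treat the exact Python-level name as an assumption of this sketch):

    import threading
    import time

    def tracer(frame, event, arg):
        # Exceptions raised here are ignored rather than propagated into
        # the traced thread (see "Document ignoring exceptions" above).
        if event == "call":
            print(f"{threading.current_thread().name}: call {frame.f_code.co_name}")
        return None

    def step():
        pass

    def worker():
        time.sleep(0.2)            # give the main thread time to install the hook
        for _ in range(3):
            step()                 # these calls are traced

    t = threading.Thread(target=worker, name="worker")
    t.start()

    # Unlike threading.settrace(), this also applies to threads that are
    # already running, not only to threads started afterwards.
    threading.settrace_all_threads(tracer)

    t.join()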
If Condition.notify() was interrupted just after it released a waiter lock,
but before removing it from the queue, subsequent calls to notify() failed
with "RuntimeError: cannot release un-acquired lock".
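A simplified paraphrase of the failure mode (not the actual CPython source; MiniCondition below exists only for this sketch):

    from collections import deque

    class MiniCondition:
        """Illustration of the pre-fix race, not real CPython code."""

        def __init__(self):
            self._waiters = deque()   # one acquired lock per waiting thread

        def notify(self, n=1):
            for waiter in list(self._waiters)[:n]:
                waiter.release()
                # If a signal handler raises right here, `waiter` stays in
                # self._waiters even though it was already released; the next
                # notify() then releases it again and fails with
                # "RuntimeError: cannot release un-acquired lock".
                self._waiters.remove(waiter)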
For threads, and for multiprocessing, it's always been the case that ``args=list`` works fine when passed to ``Process()`` or ``Thread()``, and such code is common in the wild. But, according to the docs, only a tuple can be used. This brings the docs into sync with reality.
Doc changes by Charlie Zhao.
Co-authored-by: Tim Peters <tim.peters@gmail.com>
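For example, the following has always worked even though the documentation only mentioned a tuple (the same applies to multiprocessing.Process):

    import threading

    def greet(who, punctuation):
        print(f"hello, {who}{punctuation}")

    # A list of positional arguments, not a tuple:
    t = threading.Thread(target=greet, args=["world", "!"])
    t.start()
    t.join()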
Removed an extra comma in the comment that describes the state of a `Barrier`, as it was confusing and broke the flow while reading.
Co-authored-by: Priyank <5903604+cpriyank@users.noreply.github.com>
Fix the threading._shutdown() function when the threading module was
first imported from a thread other than the main thread: no longer
log an error at Python exit.
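A sketch of the scenario (assuming threading has not been imported earlier in the process):

    import _thread
    import time

    def importer():
        import threading           # first import happens off the main thread
        time.sleep(0.1)

    _thread.start_new_thread(importer, ())
    time.sleep(0.5)
    # Before the fix, interpreter exit could log an error from
    # threading._shutdown(); with the fix, exit stays silent.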
Fix a race condition in the Thread.join() method of the threading
module. If the function is interrupted by a signal and the signal
handler raises an exception, make sure that the thread remains in a
consistent state to prevent a deadlock.
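A Unix-only sketch of the situation: the signal handler raises while join() is waiting, and with the fix a second join() still works instead of deadlocking:

    import signal
    import threading
    import time

    t = threading.Thread(target=time.sleep, args=(2,))
    t.start()

    def on_alarm(signum, frame):
        raise KeyboardInterrupt

    signal.signal(signal.SIGALRM, on_alarm)
    signal.alarm(1)

    try:
        t.join()                   # interrupted after ~1 second
    except KeyboardInterrupt:
        pass
    finally:
        signal.alarm(0)

    t.join()                       # the thread is still in a consistent state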
The _bootstrap_inner() method of threading.Thread now reuses its
_delete() method rather than accessing the _active dictionary directly.
This became possible once _active_limbo_lock became reentrant. Moreover,
it no longer ignores any exception raised while deleting the thread from
the _active dictionary.
When a Thread is not joined after it has stopped, its lock may remain in the _shutdown_locks set until interpreter shutdown. If many threads are created this way, the _shutdown_locks set could therefore grow endlessly. To avoid such a situation, purge expired locks each time a new one is added or removed.
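A sketch of the purge idea (illustrative helper name, not the exact private code in threading.py): a stopped thread has released its _tstate_lock, so any lock in the set that is no longer held can be dropped:

    def _purge_expired(shutdown_locks):
        # Locks that are no longer held belong to threads that already
        # stopped without being joined; drop them to keep the set bounded.
        shutdown_locks.difference_update(
            [lock for lock in shutdown_locks if not lock.locked()]
        )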
The snake_case names have existed since Python 2.6, so there is
no reason to keep the old camelCase names around. One similar
method, threading.Thread.isAlive, was already removed in
Python 3.9 (bpo-37804).
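For reference, these are the camelCase spellings involved and their snake_case counterparts (a summary, not an exhaustive audit of the change):

    import threading

    # camelCase spelling                 ->  snake_case replacement
    # threading.currentThread()          ->  threading.current_thread()
    # threading.activeCount()            ->  threading.active_count()
    # Thread.getName()/setName("x")      ->  Thread.name (attribute)
    # Thread.isDaemon()/setDaemon(True)  ->  Thread.daemon (attribute)
    # Condition.notifyAll()              ->  Condition.notify_all()
    # Event.isSet()                      ->  Event.is_set()

    t = threading.Thread(target=print, args=("hi",), name="worker")
    print(threading.current_thread().name, threading.active_count())
    t.start()
    t.join()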
Fix the threading.Thread class at fork: do nothing if the thread is
already stopped (e.g. fork called at Python exit). Previously, an
error was logged in the child process.
Add a private _at_fork_reinit() method to _thread.Lock,
_thread.RLock, threading.RLock and threading.Condition classes:
reinitialize the lock after fork in the child process; reset the lock
to the unlocked state.
Also rename the private _reset_internal_locks() method of
threading.Event to _at_fork_reinit().
* Add _PyThread_at_fork_reinit() private function. It is excluded
from the limited C API.
* threading.Thread._reset_internal_locks() now calls
_at_fork_reinit() on self._tstate_lock rather than creating a new
Python lock object.
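The same idea at the application level, using the public os.register_at_fork() hook (a conceptual sketch only; the classes above do this internally via the new private method):

    import os
    import threading

    state_lock = threading.Lock()

    def _reinit_locks_in_child():
        global state_lock
        # In the child, the lock may have been held by a thread that no
        # longer exists after fork; recreate it in the unlocked state.
        state_lock = threading.Lock()

    os.register_at_fork(after_in_child=_reinit_locks_in_child)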
Remove daemon threads from :mod:`concurrent.futures` by adding
an internal `threading._register_atexit()`, which calls registered functions
prior to joining all non-daemon threads. This allows for compatibility
with subinterpreters, which don't support daemon threads.
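A sketch of the call pattern (threading._register_atexit() is private, so this is illustrative rather than supported usage):

    import threading

    def _drain_work_queue():
        # Runs at interpreter exit, before threading joins the remaining
        # non-daemon threads, which is what concurrent.futures relies on
        # to wake up and finish its workers.
        print("flushing pending work")

    threading._register_atexit(_drain_work_queue)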
If fork was not called by a thread spawned by threading.Thread,
threading._after_fork() now creates a _MainThread instance for
_main_thread, instead of a _DummyThread instance.
* Use the 'p' format unit instead of manually calling PyObject_IsTrue().
* Pass boolean values instead of 0/1 integers to functions that expect a boolean.
* Convert some arguments to boolean only once.
In a subinterpreter, spawning a daemon thread now raises an
exception. Daemon threads were never supported in subinterpreters.
Previously, the subinterpreter finalization crashed with a Python
fatal error if a daemon thread was still running.
* Add _thread._is_main_interpreter()
* threading.Thread.start() now raises RuntimeError if the thread is a
daemon thread and the method is called from a subinterpreter.
* The _thread module now uses Argument Clinic for the new function.
* Use textwrap.dedent() in test_threading.SubinterpThreadingTests
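A sketch of what callers see (the check itself lives in Thread.start(); _thread._is_main_interpreter() is the new private helper named above):

    import _thread
    import threading

    def start_worker():
        t = threading.Thread(target=print, args=("working...",), daemon=True)
        try:
            t.start()
        except RuntimeError:
            # Raised when this code runs in a subinterpreter: daemon
            # threads are not supported there.
            print("daemon threads are not available in this interpreter")
        else:
            t.join()

    print("main interpreter:", _thread._is_main_interpreter())
    start_worker()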
Fix a race condition at Python shutdown when waiting for threads.
Wait until the Python thread states of all non-daemon threads are
deleted (join all non-daemon threads), rather than just waiting until
the Python threads complete.
* Add threading._shutdown_locks: set of Thread._tstate_lock locks
of non-daemon threads used by _shutdown() to wait until all Python
thread states get deleted. See Thread._set_tstate_lock().
* Also add threading._shutdown_locks_lock to protect access to
threading._shutdown_locks.
* Add test_finalization_shutdown() test.
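An illustrative sketch of the wait (not the exact _shutdown() code): each non-daemon thread holds its _tstate_lock until its C-level thread state is deleted, so acquiring every registered lock only finishes once all of those states are gone:

    def wait_for_thread_states(shutdown_locks, shutdown_locks_lock):
        while True:
            with shutdown_locks_lock:          # threading._shutdown_locks_lock
                if not shutdown_locks:
                    return
                lock = shutdown_locks.pop()
            lock.acquire()                     # blocks until the thread state is deleted
            lock.release()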
Add a new threading.excepthook() function which handles uncaught
Thread.run() exceptions. It can be overridden to control how uncaught
exceptions are handled.
threading.ExceptHookArgs is not documented on purpose: it should not
be used directly.
* Add threading.excepthook() and threading.ExceptHookArgs.
* Add _PyErr_Display(): similar to PyErr_Display(), but accepts a
  'file' parameter.
* Add _thread._excepthook(): C implementation of the exception hook
calling _PyErr_Display().
* Add _thread._ExceptHookArgs: structseq type.
* Add threading._invoke_excepthook_wrapper() which handles the gory
details to ensure that everything remains alive during Python
shutdown.
* Add unit tests.
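For example, overriding the hook to route uncaught thread exceptions through custom logging:

    import threading

    def log_uncaught(args):
        # args carries exc_type, exc_value, exc_traceback and thread.
        print(f"uncaught {args.exc_type.__name__} in {args.thread.name}: "
              f"{args.exc_value}")

    threading.excepthook = log_uncaught

    def boom():
        raise ValueError("oops")

    t = threading.Thread(target=boom, name="worker")
    t.start()
    t.join()
    # -> uncaught ValueError in worker: oops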