This is an implementation of InterpreterPoolExecutor that builds on ThreadPoolExecutor.
(Note that this is not tied to PEP 734, which is strictly about adding a new stdlib module.)
Possible future improvements:
* support passing a script for the initializer or to submit()
* support passing (most) arbitrary functions without pickling
* support passing closures
* optionally exec functions against __main__ instead of their original module
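For context, a minimal usage sketch. It assumes the executor is exposed as `concurrent.futures.InterpreterPoolExecutor` and that, per the current limitations above, submitted callables are picklable top-level functions:

```python
from concurrent.futures import InterpreterPoolExecutor

def square(n):
    # Runs in a separate subinterpreter, each with its own GIL.
    return n * n

if __name__ == "__main__":
    # max_workers behaves as it does for ThreadPoolExecutor, which this
    # executor builds on.
    with InterpreterPoolExecutor(max_workers=4) as executor:
        print(list(executor.map(square, range(8))))  # [0, 1, 4, 9, ...]
```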
There was a deadlock when `ProcessPoolExecutor` shut down at the same
time that a queueing thread was handling an error while processing a task.
Don't use `_shutdown_lock` to protect the `_ThreadWakeup` pipes -- use
an internal lock instead. This fixes the ordering deadlock where the
`ExecutorManagerThread` holds the `_shutdown_lock` and joins the
queueing thread, while the queueing thread is attempting to acquire the
`_shutdown_lock` in order to close the `_ThreadWakeup`.
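A rough sketch of the shape of the fix, with simplified names (the real `_ThreadWakeup` lives in `Lib/concurrent/futures/process.py`); the point is that the wakeup object guards its pipe with its own lock, so closing it never requires `_shutdown_lock`:

```python
import os
import threading

class ThreadWakeupSketch:
    """Illustrative only: a wakeup pipe guarded by a private lock so that
    wakeup()/close() never contend for the executor's _shutdown_lock."""

    def __init__(self):
        self._lock = threading.Lock()  # internal lock, not _shutdown_lock
        self._read_fd, self._write_fd = os.pipe()
        self._closed = False

    def wakeup(self):
        with self._lock:
            if not self._closed:
                os.write(self._write_fd, b"x")

    def close(self):
        # Safe to call while another thread holds _shutdown_lock and is
        # joining us, which breaks the ordering cycle described above.
        with self._lock:
            if not self._closed:
                self._closed = True
                os.close(self._write_fd)
                os.close(self._read_fd)
```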
The ProcessPoolForkserver combined with resource_tracker starts a thread
after forking, which is not supported by TSan.
Also skip test_multiprocessing_fork for the same reason.
Check `my_object_collected.wait()` in a loop to give the main thread a
chance to merge the reference count fields. Additionally, call
`my_object_collected.set()` in a background thread to avoid deadlocking
when the destructor is called asynchronously via the eval breaker
within the body of `my_object_collected.wait()`.
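A hedged sketch of the resulting test pattern; the object and event names are placeholders rather than the test's actual code:

```python
import threading

my_object_collected = threading.Event()

class MyObject:
    def __del__(self):
        # Set the event from a background thread so that a destructor run
        # asynchronously via the eval breaker inside wait() cannot deadlock.
        threading.Thread(target=my_object_collected.set).start()

def wait_until_collected(timeout=30.0):
    # Wait in short slices instead of one long wait(), giving the main
    # thread a chance to merge the per-object reference count fields.
    remaining = timeout
    while remaining > 0:
        if my_object_collected.wait(0.1):
            return True
        remaining -= 0.1
    return False
```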
- re-enable test_fcntl_64_bit on Linux aarch64, but disable it on all
Android ABIs
- use support.setswitchinterval in all relevant tests
- skip test_fma_zero_result on Android x86_64
- accept EACCES when calling os.get_terminal_size on Android
The tests are not reliable with the GIL disabled. In theory, they can
fail with the GIL enabled too, but the failures are much more likely
with the GIL disabled.
Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>
Co-authored-by: Malcolm Smith <smith@chaquo.com>
Co-authored-by: Ned Deily <nad@python.org>
There is a race between when `Thread._tstate_lock` is released[^1] in `Thread._wait_for_tstate_lock()`
and when `Thread._stop()` asserts[^2] that it is unlocked. Consider the following execution
involving threads A, B, and C:
1. A starts.
2. B joins A, blocking on its `_tstate_lock`.
3. C joins A, blocking on its `_tstate_lock`.
4. A finishes and releases its `_tstate_lock`.
5. B acquires A's `_tstate_lock` in `_wait_for_tstate_lock()`, releases it, but is swapped
out before calling `_stop()`.
6. C is scheduled, acquires A's `_tstate_lock` in `_wait_for_tstate_lock()` but is swapped
out before releasing it.
7. B is scheduled, calls `_stop()`, which asserts that A's `_tstate_lock` is not held.
However, C holds it, so the assertion fails.
The race can be reproduced[^3] by inserting sleeps at the appropriate points in
the threading code. To do so, run `repro_join_race.py` from the linked repo.
There are two main parts to this PR:
1. `_tstate_lock` is replaced with an event that is attached to `PyThreadState`.
The event is set by the runtime prior to the thread being cleared (in the same
place that `_tstate_lock` was released). `Thread.join()` blocks waiting for the
event to be set (a simplified sketch follows this list).
2. `_PyInterpreterState_WaitForThreads()` provides the ability to wait for all
non-daemon threads to exit. To do so, an `is_daemon` predicate was added to
`PyThreadState`. This field is set each time a thread is created. `threading._shutdown()`
now calls into `_PyInterpreterState_WaitForThreads()` instead of waiting on
`_tstate_lock`s.
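A simplified sketch of the part 1 idea; in reality the event lives on `PyThreadState` and is set by the runtime, not by pure-Python code:

```python
import threading

class ThreadSketch:
    """Illustrative only: join() waits on an event that the runtime sets
    just before the thread state is cleared, instead of acquiring and
    releasing a _tstate_lock (the handoff that made the race possible)."""

    def __init__(self):
        self._done = threading.Event()  # stands in for the PyThreadState event

    def _on_thread_exit(self):
        # Called in the same place _tstate_lock used to be released.
        self._done.set()

    def join(self, timeout=None):
        # wait() never takes ownership of anything and set() is idempotent,
        # so concurrent joiners can no longer trip over each other.
        self._done.wait(timeout)
```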
[^1]: 441affc9e7/Lib/threading.py (L1201)
[^2]: 441affc9e7/Lib/threading.py (L1115)
[^3]: 8194653279
---------
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Antoine Pitrou <antoine@python.org>
This builds on https://github.com/python/cpython/pull/106807, which adds
a return code to ResourceTracker, to make future debugging easier.
Testing this “in situ” proved difficult, since the global ResourceTracker is
involved in test infrastructure. So, the tests here create a new instance and
feed it fake data.
---------
Co-authored-by: Yonatan Bitton <yonatan.bitton@perception-point.io>
Co-authored-by: Yonatan Bitton <bityob@gmail.com>
Co-authored-by: Antoine Pitrou <antoine@python.org>
Biased reference counting maintains two refcount fields in each object:
`ob_ref_local` and `ob_ref_shared`. The true refcount is the sum of these two
fields. In some cases, when refcounting operations are split across threads,
the ob_ref_shared field can be negative (although the total refcount must be
at least zero). In this case, the thread that decremented the refcount
requests that the owning thread give up ownership and merge the refcount
fields.
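A toy pure-Python model of that invariant, for illustration only (the real fields live on the C object header):

```python
from dataclasses import dataclass

@dataclass
class BiasedRefcountSketch:
    """Toy model: the owning thread updates ob_ref_local without atomics,
    other threads update ob_ref_shared; the true refcount is their sum."""
    ob_ref_local: int = 1    # owning thread's contribution
    ob_ref_shared: int = 0   # every other thread's contribution

    def true_refcount(self):
        return self.ob_ref_local + self.ob_ref_shared

    def decref_from_non_owner(self):
        self.ob_ref_shared -= 1
        if self.ob_ref_shared < 0:
            # The total is still >= 0, but the shared field went negative:
            # ask the owning thread to give up ownership and merge.
            self.merge_on_owner()

    def merge_on_owner(self):
        # The owner folds its local count into the shared field, after
        # which the object is no longer owned by any one thread.
        self.ob_ref_shared += self.ob_ref_local
        self.ob_ref_local = 0
```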
Joining a thread now ensures the underlying OS thread has exited. This is required for safer fork() in multi-threaded processes.
---------
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
The test had an instability issue due to the ordering of the dummy
queue operation and the real wakeup pipe operations. Both primitives
are thread-safe on their own, but the two updates are not performed
atomically as a single operation and may interleave arbitrarily. With
the old order of operations this can lead
to an incorrect state where the dummy queue is full but the wakeup
pipe is empty. By swapping the order in clear() I think this can no
longer happen in any possible operation interleaving (famous last
words).
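For illustration, a hedged model of the interleaving being described; the primitives and the exact order are hypothetical stand-ins, not the test's real code:

```python
import os
import queue

dummy = queue.Queue(maxsize=1)   # one-slot "dummy queue"
read_fd, write_fd = os.pipe()    # real wakeup pipe
os.set_blocking(read_fd, False)

def wakeup():
    os.write(write_fd, b"x")     # step 1: pipe becomes non-empty
    dummy.put_nowait(object())   # step 2: dummy queue becomes full

def clear_old_order():
    # Old order: drain the pipe, then the dummy queue.  If this runs in
    # between another thread's step 1 and step 2, we end up with the dummy
    # queue full while the wakeup pipe is empty -- the inconsistent state
    # the test tripped over.  The fix swaps the two drains in clear().
    try:
        os.read(read_fd, 1)
    except BlockingIOError:
        pass
    try:
        dummy.get_nowait()
    except queue.Empty:
        pass
```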
concurrent.futures: The *executor manager thread* now catches
exceptions when adding an item to the *call queue*. During Python
finalization, creating a new thread can now raise RuntimeError. Catch
the exception and call terminate_broken() in this case.
Add test_python_finalization_error() to test_concurrent_futures.
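The shape of the new handling, as a hedged sketch rather than the actual code in `Lib/concurrent/futures/process.py`:

```python
def feed_call_queue(call_queue, work_item, terminate_broken):
    """Illustrative pattern only: putting an item on the call queue may
    lazily start the queue's feeder thread, and during interpreter
    finalization creating a thread raises RuntimeError."""
    try:
        call_queue.put(work_item)
    except RuntimeError as exc:
        # The executor can no longer make progress; mark it broken and
        # clean up instead of hanging or crashing.
        terminate_broken(exc)
```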
concurrent.futures._ExecutorManagerThread changes:
* terminate_broken() no longer calls shutdown_workers() since the
call queue no longer works (the read and write ends of the queue
pipe are closed).
* terminate_broken() now terminates child processes instead of only
waiting for them to complete.
* _ExecutorManagerThread.terminate_broken() now holds shutdown_lock
to prevent race conditions with ProcessPoolExecutor.submit().
multiprocessing.Queue changes:
* Add _terminate_broken() method.
* _start_thread() sets _thread to None on exception to prevent
leaking "dangling threads" even if the thread was not started
yet.
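A sketch of the _start_thread() change in isolation (simplified; the real method lives on multiprocessing.Queue):

```python
import threading

class QueueSketch:
    """Illustrative only: do not keep a reference to a Thread object that
    was never successfully started."""

    def _start_thread(self):
        self._thread = threading.Thread(target=self._feed, daemon=True)
        try:
            self._thread.start()
        except BaseException:
            # start() can fail (e.g. RuntimeError during interpreter
            # shutdown); drop the reference so later cleanup does not try
            # to join a "dangling" thread that never ran.
            self._thread = None
            raise

    def _feed(self):
        pass
```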
Fix test_timeout() of test_concurrent_futures.test_wait. Remove the
future which may or may not complete depending on whether it takes
longer than the timeout. Keep the second future which does not
complete before wait(). Also make the test faster: 0.5 seconds instead
of 6 seconds, and remove the @support.requires_resource('walltime')
decorator.
test_error_at_task_unpickle() and
test_error_during_result_unpickle_in_result_handler() now restore
sys.stderr which is overridden by _raise_error_ignore_stderr().
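The restore follows the usual save/replace/restore pattern; a minimal sketch (the helper name here is illustrative, not the test's):

```python
import contextlib
import io
import sys

@contextlib.contextmanager
def suppressed_stderr():
    # Save the current sys.stderr, replace it for the duration of the
    # block, and always restore the original so later tests see it.
    old_stderr = sys.stderr
    sys.stderr = io.StringIO()
    try:
        yield
    finally:
        sys.stderr = old_stderr
```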
This fixes issue #105829: https://github.com/python/cpython/issues/105829
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Antoine Pitrou <antoine@python.org>
Co-authored-by: Chris Withers <chris@withers.org>
Co-authored-by: Thomas Moreau <thomas.moreau.2010@gmail.com>
Fix a race condition in _ExecutorManagerThread.terminate_broken():
ignore the InvalidStateError raised by future.set_exception(). It can
happen if the future was cancelled before set_exception() is called.
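A minimal sketch of the pattern (the helper name is illustrative):

```python
from concurrent.futures import InvalidStateError

def set_exception_ignoring_cancelled(future, exc):
    # The future may already have been cancelled by the time the error is
    # reported; in that case set_exception() raises InvalidStateError and
    # the call can simply be ignored.
    try:
        future.set_exception(exc)
    except InvalidStateError:
        pass
```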
Moreover, test_crash_big_data() now waits explicitly until the
executor completes.