Make error message for index() methods consistent
Remove the repr of the searched value (which can be arbitrarily large)
from ValueError messages for list.index(), range.index(), deque.index(),
deque.remove() and ShareableList.index(). Make the error messages
consistent with error messages for other index() and remove()
methods.
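A minimal sketch of the motivation, using an arbitrary large missing value: with the old messages, the exception text embedded the repr of the searched value and so grew with it.

```
big = "x" * 10_000_000
try:
    [].index(big)
except ValueError as exc:
    # the message previously contained repr(big) and scaled with its size;
    # without the repr it stays short
    print(len(str(exc)))
```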
Tools such as ruff can ignore "imported but unused" warnings if a
line ends with "# noqa: F401". This avoids the temptation to remove
an import that is in fact used (for example, for re-export or for its side effects).
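For example (an illustrative re-export of a stdlib name), the trailing comment keeps ruff or flake8 from flagging the line:

```
# kept only so "path" is re-exported from this module
from os import path as path  # noqa: F401
```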
This builds on https://github.com/python/cpython/pull/106807, which adds
a return code to ResourceTracker, to make future debugging easier.
Testing this “in situ” proved difficult, since the global ResourceTracker is
involved in test infrastructure. So, the tests here create a new instance and
feed it fake data.
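A rough sketch of that approach (illustrative only, not the actual test code; ResourceTracker is a private class):

```
from multiprocessing import resource_tracker

# a private tracker instance, separate from the global one used by the test suite
tracker = resource_tracker.ResourceTracker()
tracker.ensure_running()
tracker.register("/fake-resource", "semaphore")    # fake data, never actually created
tracker.unregister("/fake-resource", "semaphore")
```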
---------
Co-authored-by: Yonatan Bitton <yonatan.bitton@perception-point.io>
Co-authored-by: Yonatan Bitton <bityob@gmail.com>
Co-authored-by: Antoine Pitrou <antoine@python.org>
We add _winapi.BatchedWaitForMultipleObjects to wait for larger numbers of handles.
This is an internal module, hence undocumented, and should be used with caution.
Check the docstring for info before using BatchedWaitForMultipleObjects.
`_multiprocessing` is only used under the `if _winapi:` block, so this moves the import into the `_winapi` ImportError-handling try/except for equivalent treatment.
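A minimal sketch of the resulting arrangement (illustrative, not the actual multiprocessing source):

```
try:
    import _winapi
    import _multiprocessing  # only needed when _winapi is available
except ImportError:
    _winapi = None
    _multiprocessing = None
```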
Increase the backlog for `multiprocessing.connection.Listener` objects created
by `multiprocessing.managers` and `multiprocessing.resource_sharer` to
significantly reduce the risk of getting a connection refused error when creating
a `multiprocessing.connection.Connection` to them.
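Illustrative use of the backlog parameter on a Listener (address and backlog value chosen arbitrarily here):

```
from multiprocessing.connection import Listener

# a larger backlog lowers the chance that a connecting Client() is refused
# while the server is busy accepting other connections
with Listener(("127.0.0.1", 0), backlog=16) as listener:
    print(listener.address)
```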
On Windows, Process.terminate() no longer sets the returncode
attribute, so that Process.wait() always calls WaitForSingleObject().
Previously, the process was sometimes still running after
TerminateProcess() even though GetExitCodeProcess() no longer reported STILL_ACTIVE.
Add a track parameter to shared memory so that resource tracking via the
side-launched resource tracker process can be disabled on platforms that use it (POSIX).
This lets the shared_memory API be used by people who do not want automated cleanup
at process exit because they share the memory with processes that do not participate
in Python's resource tracking.
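A short sketch of the new keyword on POSIX (size and contents are arbitrary); with track=False, unlinking is the caller's responsibility:

```
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(create=True, size=1024, track=False)
try:
    shm.buf[:5] = b"hello"
finally:
    shm.close()
    shm.unlink()  # explicit cleanup, since nothing is tracking this segment
```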
Co-authored-by: Łukasz Langa <lukasz@langa.pl>
Co-authored-by: Guido van Rossum <gvanrossum@gmail.com>
Co-authored-by: Antoine Pitrou <pitrou@free.fr>
Co-authored-by: Gregory P. Smith <greg@krypto.org>
Make `multiprocessing.managers.{DictProxy,ListProxy}` generic so they can be used in type annotations, for example `ListProxy[str]`.
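Example of the annotation use this enables (the proxies themselves are normally obtained from a running SyncManager, not constructed directly):

```
from multiprocessing.managers import DictProxy, ListProxy

def summarize(names: ListProxy[str], ages: DictProxy[str, int]) -> int:
    return len(names) + len(ages)
```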
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Gregory P. Smith <greg@krypto.org>
concurrent.futures: The *executor manager thread* now catches
exceptions when adding an item to the *call queue*. During Python
finalization, creating a new thread can now raise RuntimeError. Catch
the exception and call terminate_broken() in this case.
Add test_python_finalization_error() to test_concurrent_futures.
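A self-contained sketch of that pattern (function names are illustrative, not the actual _ExecutorManagerThread code):

```
import threading

def start_worker(target, on_broken):
    thread = threading.Thread(target=target)
    try:
        # during interpreter finalization, Thread.start() raises RuntimeError
        thread.start()
    except RuntimeError:
        on_broken()   # e.g. fall back to terminate_broken()
        return None
    return thread
```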
concurrent.futures._ExecutorManagerThread changes:
* terminate_broken() no longer calls shutdown_workers() since the
call queue is no longer working (the read and write ends of
the queue pipe are closed).
* terminate_broken() now terminates child processes instead of only
waiting for them to complete.
* _ExecutorManagerThread.terminate_broken() now holds shutdown_lock
to prevent race conditions with ProcessPoolExecutor.submit().
multiprocessing.Queue changes:
* Add _terminate_broken() method.
* _start_thread() sets _thread to None on exception to prevent
leaking "dangling threads" even if the thread was not started
yet.
On Windows, multiprocessing Popen.terminate() now catches
PermissionError and gets the process exit code. If the process is
still running, the PermissionError is re-raised. Otherwise, the
process terminated as expected: its exit code is stored.
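An illustrative sketch of that logic (not the real Popen code; the Win32 calls are passed in as callables to keep the sketch self-contained):

```
STILL_ACTIVE = 259  # GetExitCodeProcess() result for a process that is still running

def terminate(handle, terminate_process, get_exit_code):
    try:
        terminate_process(handle)           # stand-in for TerminateProcess()
    except PermissionError:
        exit_code = get_exit_code(handle)   # stand-in for GetExitCodeProcess()
        if exit_code == STILL_ACTIVE:
            raise                           # really still running: re-raise
        return exit_code                    # already exited: keep its exit code
    return None
```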
Follow-up of gh-107219.
* Only close the connection writer on Windows.
* Also use existing constant _winapi.ERROR_OPERATION_ABORTED instead of
WSA_OPERATION_ABORTED.
Fix a race condition in concurrent.futures. When a process in the
process pool was terminated abruptly (while the future was running or
pending), close the connection write end. If the call queue is
blocked on sending bytes to a worker process, closing the connection
write end interrupts the send, so the queue can be closed.
Changes:
* _ExecutorManagerThread.terminate_broken() now closes
call_queue._writer.
* multiprocessing PipeConnection.close() now interrupts
WaitForMultipleObjects() in _send_bytes() by cancelling the
overlapped operation.
gh-107275 introduced a regression where a SemLock would fail when passed along to nested child processes, because the `is_fork_ctx` attribute was left missing after the first deserialization.
---------
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Antoine Pitrou <pitrou@free.fr>
Ensure multiprocessing SemLock is valid for spawn Process before serializing it.
Creating a multiprocessing SemLock with a fork context and then trying to pass it to a spawn-created Process would segfault if not detected early.
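An illustrative reproduction of the problematic pattern (POSIX only, since it mixes the fork and spawn contexts); the point of the change is that the mismatch is now detected before serialization instead of crashing the child:

```
import multiprocessing as mp

if __name__ == "__main__":
    fork_ctx = mp.get_context("fork")
    spawn_ctx = mp.get_context("spawn")

    lock = fork_ctx.Lock()   # SemLock created with the fork context
    proc = spawn_ctx.Process(target=print, args=(lock,))
    proc.start()             # mixing contexts like this is now caught early
    proc.join()
```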
---------
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Antoine Pitrou <pitrou@free.fr>
Adding the `rtype_cache` to the `warnings.warn` message improves the
previous, somewhat vague message from
```
/Users/username/cpython/Lib/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 6 leaked semaphore objects to clean up at shutdown
```
to
```
/Users/username/cpython/Lib/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 6 leaked semaphore objects to clean up at shutdown: {'/mp-yor5cvj8', '/mp-10jx8eqr', '/mp-eobsx9tt', '/mp-0lml23vl', '/mp-9dgtsa_m', '/mp-frntyv4s'}
```
---------
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Prevent `multiprocessing.spawn` from failing to *import* in environments
where `sys.executable` is `None`. This regressed in 3.11 with the addition
of support for path-like objects in multiprocessing.
Adds a test decorator to `_test_multiprocessing.py` that limits tests to running only as part of test_multiprocessing_spawn, so we can start to avoid re-running tests that do not depend on global state in all three start-method modes when there is no need.
Fix a race condition in the internal `multiprocessing.process` cleanup
logic that could manifest as an unintended `AttributeError` when calling
`BaseProcess.close()`.
---------
Co-authored-by: Oleg Iarygin <oleg@arhadthedev.net>
Co-authored-by: Gregory P. Smith <greg@krypto.org>
bpo-17258: `multiprocessing` now supports stronger HMAC algorithms for inter-process connection authentication rather than only HMAC-MD5.
Signed-off-by: Christian Heimes <christian@python.org>
gpshead: I reworked this to be more robust while keeping the idea.
The protocol modification idea remains, but we now take advantage of the
message length as an indicator of legacy vs modern protocol version. No
more regular expression usage. We now default to HMAC-SHA256, but do so
in a way that will be compatible when communicating with older clients
or older servers. No protocol transition period is needed.
More integration tests are needed to verify that these claims remain true. I'm
unaware of anyone depending on multiprocessing connections between
different Python versions.
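A minimal sketch of HMAC challenge-response authentication in the spirit of multiprocessing.connection (not the actual CPython implementation, which additionally uses the message length to distinguish legacy from modern peers as described above):

```
import hmac
import os

def make_challenge() -> bytes:
    return os.urandom(32)

def answer_challenge(authkey: bytes, challenge: bytes, digest: str = "sha256") -> bytes:
    return hmac.new(authkey, challenge, digest).digest()

def verify_answer(authkey: bytes, challenge: bytes, response: bytes, digest: str = "sha256") -> bool:
    expected = hmac.new(authkey, challenge, digest).digest()
    return hmac.compare_digest(expected, response)
```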
---------
Signed-off-by: Christian Heimes <christian@python.org>
Co-authored-by: Gregory P. Smith [Google] <greg@krypto.org>
This starts the deprecation process. Users who don't specify their own start
method and use the default on platforms where it is 'fork' will see a
DeprecationWarning upon multiprocessing.Pool() construction, upon
multiprocessing.Process.start(), or upon concurrent.futures.ProcessPoolExecutor use.
See the related issue and documentation within this change for details.
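Example of opting in to an explicit start method, which avoids relying on the platform default and hence avoids the warning described above:

```
import multiprocessing as mp

if __name__ == "__main__":
    ctx = mp.get_context("spawn")   # or "forkserver"/"fork", chosen explicitly
    with ctx.Pool(2) as pool:
        print(pool.map(abs, [-1, -2, -3]))
```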
Describe the multiprocessing connection protocol.
It isn't a good protocol, but it is what it is. This way we can more
easily reason about making changes to it in a backwards compatible way.
Linux abstract sockets are insecure as they lack any form of filesystem
permissions so their use allows anyone on the system to inject code into
the process.
This removes the default preference for abstract sockets in
multiprocessing introduced in Python 3.9+ via
https://github.com/python/cpython/pull/18866 while fixing
https://github.com/python/cpython/issues/84031.
Explicit use of an abstract socket by a user now generates a
RuntimeWarning. If we choose to keep this warning, it should be
backported to the 3.7 and 3.8 branches.
SharedMemory.unlink() uses the unregister() function from resource_tracker. Previously it was imported inside the method, but that import can fail if the method is called during interpreter shutdown, for example when unlink() is invoked from a __del__() method.
Moving the import to the top of the file means that the unregister() function is available during interpreter shutdown.
The register() call in SharedMemory.__init__() can also use this imported resource_tracker.
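A sketch of the pattern (illustrative class, not the real shared_memory.py): a module-level import stays bound during interpreter shutdown, so cleanup code reached from __del__() can still use it.

```
from multiprocessing import resource_tracker   # imported at the top of the file

class _TrackedSegment:
    def __init__(self, name):
        self._name = name
        resource_tracker.register(self._name, "shared_memory")

    def unlink(self):
        # no late "from ... import unregister" here; such an import can fail
        # during interpreter shutdown
        resource_tracker.unregister(self._name, "shared_memory")
```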
One more thing that can help prevent people from using `preexec_fn`.
Also adds conditional skips to two tests exposing ASAN flakiness on the Ubuntu 20.04 Address Sanitizer GitHub CI system. When that build is run on more modern systems the "problem" does not show up. It seems to be ASAN-implementation related.
Co-authored-by: Zackery Spytz <zspytz@gmail.com>
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Just in case there is ever an issue with _posixsubprocess's use of
vfork(), due to the complexity of using it properly and the potential
directions that Linux platforms (where it defaults to on) could take,
this adds a failsafe so that users can disable its use entirely by
setting a global flag.
No known reason to disable it exists. But it'd be a shame to encounter
one and not be able to use CPython without patching and rebuilding it.
See the linked issue for some discussion on reasoning.
Also documents the existing way to disable posix_spawn.
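The failsafe described here, alongside the existing posix_spawn() switch (see the subprocess documentation for the exact flag names in your Python version):

```
import subprocess

subprocess._USE_VFORK = False        # failsafe: disable vfork() use entirely
subprocess._USE_POSIX_SPAWN = False  # existing way to disable posix_spawn()
```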