This makes the asyncio REPL (`python -m asyncio`) more usable
and similar to the regular REPL.
This exposes register_readline() as a top-level function in site.py,
but it's intentionally undocumented.
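A minimal sketch of reusing the hook from a custom interactive loop; since
`site.register_readline()` is intentionally undocumented, treat the call as an
implementation detail that may change:

```python
import site

# register_readline() wires up readline history and tab completion the same
# way the regular REPL does; it is exposed but intentionally undocumented,
# so this is best-effort and may change without notice.
try:
    site.register_readline()
except AttributeError:
    pass  # older Python: the helper is not exposed at module level
```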
Co-authored-by: Carol Willing <carolcode@willingconsulting.com>
Co-authored-by: Itamar Oren <itamarost@gmail.com>
Nothing else in Python generally logs the contents of variables, so this
can be very unexpected for developers and could leak sensitive
information into terminals and log files.
In some cases we might cause a StreamWriter to stay alive even when the
application has dropped all references to it. This prevents us from
doing automatic cleanup and from complaining that the StreamWriter wasn't
properly closed.
Fortunately, the extra reference was never actually used for anything so
we can just drop it.
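The practical upshot for application code is to close writers explicitly
rather than relying on garbage collection; a minimal sketch (the host and port
are placeholders):

```python
import asyncio

async def fetch_head():
    reader, writer = await asyncio.open_connection("example.com", 80)
    try:
        writer.write(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        await writer.drain()
        await reader.read()
    finally:
        # Close explicitly; a writer that is merely dropped can now be
        # garbage-collected and reported as not properly closed.
        writer.close()
        await writer.wait_closed()

asyncio.run(fetch_head())
```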
If another exception was raised while exiting an expired
asyncio.timeout() block, insert a TimeoutError into the exception
context just above the CancelledError.
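A hedged illustration of the resulting chain; the exact messages are not
guaranteed, only the idea that the TimeoutError now sits in the context
between the raised exception and the CancelledError:

```python
import asyncio

async def main():
    try:
        async with asyncio.timeout(0.01):
            try:
                await asyncio.sleep(1)
            finally:
                # Raised while exiting the already-expired timeout block.
                raise ValueError("cleanup failed")
    except ValueError as exc:
        # Expected chain (illustrative): ValueError -> TimeoutError -> CancelledError
        print(type(exc.__context__))
        print(type(exc.__context__.__context__))

asyncio.run(main())
```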
When a `StopIteration` is raised into an `asyncio.Future`, it can cause
a thread to hang. This commit addresses this by not raising an exception
and instead silently wrapping the `StopIteration` in a `RuntimeError`,
which the caller can recover from `fut.exception().__cause__`.
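A hedged sketch of what the caller sees; the assumption here is that the
wrapping happens when the StopIteration is set on the future (e.g. via
`set_exception()`):

```python
import asyncio

async def main():
    fut = asyncio.get_running_loop().create_future()
    # Setting a StopIteration no longer raises or hangs a waiting thread;
    # it is wrapped in a RuntimeError instead (assumption: this is the
    # path the fix takes, per the description above).
    fut.set_exception(StopIteration("done"))
    exc = fut.exception()
    print(type(exc))            # RuntimeError
    print(type(exc.__cause__))  # StopIteration

asyncio.run(main())
```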
If the spawned process is setuid, we may not be able to send
signals to it, in which case our .kill() call will raise
PermissionError.
Ignore that in order to avoid .close() raising an exception. Hopefully
the process will exit as a result of receiving EOF on its stdin.
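A sketch of the pattern being described (not the exact stdlib code; the helper
name is made up):

```python
def terminate_child(proc):
    # Illustrative shape of the close() behaviour described above.
    try:
        proc.kill()
    except ProcessLookupError:
        pass  # already gone
    except PermissionError:
        # A setuid child may reject our signal; rely on it exiting
        # after it sees EOF on its stdin.
        pass
```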
When the transport is SSL-wrapped, `_SSLProtocolTransport._force_close(exc)` is called, just as `_SelectorTransport._force_close(exc)` or `_ProactorBasePipeTransport._force_close(exc)` would be called in the unwrapped scenario. The difference is that here the exception needs to be passed through the `SSLProtocol._abort()` method, which did not accept an exception object.
This commit ensures that this path works, mirroring how the uvloop implementation of SSLProto (on which the current implementation is based) passes the exception along.
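A hypothetical sketch of the signature change only; the method names come from
the text above, and the body is illustrative rather than the actual
`Lib/asyncio/sslproto.py` code:

```python
# Illustrative only; not the actual sslproto.py implementation.
class SSLProtocol:
    def _abort(self, exc):  # previously took no exception argument
        if self._transport is not None:
            # The exception now flows through to the underlying transport.
            self._transport._force_close(exc)
```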
This affects task creation through either `asyncio.create_task()` or `TaskGroup.create_task()` -- the redundant call to `task.set_name()` is skipped. We still call `set_name()` when a task factory is involved, because the task factory call signature (unfortunately) doesn't take a `name` argument.
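For illustration, the observable behaviour is unchanged: the name still ends
up on the task whether or not a factory is installed (the factory below is a
stand-in, not anything from the stdlib):

```python
import asyncio

async def work():
    await asyncio.sleep(0)

async def main():
    # Default path: the name is handed straight to Task(), so the extra
    # task.set_name() call is skipped.
    t1 = asyncio.create_task(work(), name="worker-1")
    print(t1.get_name())  # worker-1

    # With a task factory, set_name() is still called afterwards because
    # the factory call signature takes no `name` argument.
    loop = asyncio.get_running_loop()
    loop.set_task_factory(
        # **kwargs keeps this working on versions that forward extra arguments.
        lambda loop, coro, **kwargs: asyncio.Task(coro, loop=loop, **kwargs)
    )
    t2 = asyncio.create_task(work(), name="worker-2")
    print(t2.get_name())  # worker-2
    await asyncio.gather(t1, t2)

asyncio.run(main())
```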
* Try to fix asyncio.Server.wait_closed() again
I identified the condition that `wait_closed()` is intended
to wait for: the server is closed *and* there are no more
active connections.
When this condition first becomes true, `_wakeup()` is called
(either from `close()` or from `_detach()`) and it sets `_waiters`
to `None`. So we just check for `self._waiters is None`; if it's
not `None`, we know we have to wait, and do so.
A problem was that the new test introduced in 3.12 explicitly
tested that `wait_closed()` returns immediately when the server
is *not* closed but there are currently no active connections.
This was a mistake (probably a misunderstanding of the intended
semantics). I've fixed the test, and added a separate test that
checks exactly for this scenario.
I also fixed an oddity where in `_wakeup()` the result of the
waiter was set to the waiter itself. This result is not used
anywhere and I changed this to `None`, to avoid a GC cycle.
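A usage sketch of the intended semantics (trivial handler, port 0 picks a
free port):

```python
import asyncio

async def handler(reader, writer):
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handler, "127.0.0.1", 0)
    server.close()
    # Returns only once the server is closed *and* there are no more
    # active connections; here there were none, so it returns promptly.
    await server.wait_closed()

asyncio.run(main())
```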
* Update Lib/asyncio/base_events.py
---------
Co-authored-by: Carol Willing <carolcode@willingconsulting.com>
- `ThreadedChildWatcher.close()` is now *officially* a no-op; `_join_threads()` never did anything.
- Threads created by that class are now named `asyncio-waitpid-NNN`.
- `test.test_asyncio.utils.TestCase.close_loop()` now waits for the child watcher's threads, but not forever; if a thread hangs, it raises `RuntimeError`.
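A minimal sketch of the waiting behaviour described in the last bullet (the
helper name and timeout are hypothetical):

```python
import threading
import time

def join_with_deadline(threads, timeout=30.0):
    # Hypothetical helper: wait for every thread, but never forever.
    deadline = time.monotonic() + timeout
    for thread in threads:
        thread.join(max(deadline - time.monotonic(), 0))
        if thread.is_alive():
            raise RuntimeError(f"thread {thread.name!r} is hanging")
```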
The asyncio.TaskGroup and asyncio.Timeout classes now raise a proper
RuntimeError if they are used improperly:
* When they are used without entering the context manager.
* When they are used after finishing.
* When the context manager is entered more than once (simultaneously or
sequentially).
* If there is no current task when entering the context manager.
They now remain in a consistent state after an exception is thrown,
so subsequent operations can be performed correctly (if they are allowed).
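A hedged illustration of the first two misuse cases with asyncio.TaskGroup
(the error messages shown are indicative, not exact):

```python
import asyncio

async def main():
    tg = asyncio.TaskGroup()

    # 1. Used without entering the context manager.
    coro = asyncio.sleep(0)
    try:
        tg.create_task(coro)
    except RuntimeError as exc:
        print(exc)    # e.g. "... has not been entered"
        coro.close()  # avoid an "un-awaited coroutine" warning

    # 2. Used after finishing (re-entering a completed group).
    async with tg:
        pass
    try:
        async with tg:
            pass
    except RuntimeError as exc:
        print(exc)    # e.g. "... has been already entered"

asyncio.run(main())
```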
Co-authored-by: James Hilton-Balfe <gobot1234yt@gmail.com>
Replace the hardcoded 500 ms sleep with synchronization using a pipe.
Also fix Process._feed_stdin(): catch BrokenPipeError on
stdin.write(input) too, not only on stdin.drain().
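A sketch of the pattern described for `_feed_stdin()` (illustrative shape of
the fix, not the stdlib source):

```python
async def _feed_stdin(self, input):
    try:
        self.stdin.write(input)   # BrokenPipeError can surface here too...
        await self.stdin.drain()  # ...not only on drain()
    except (BrokenPipeError, ConnectionResetError):
        pass  # the child already went away; nothing to feed
    finally:
        self.stdin.close()
```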
The SubprocessProtocol.process_exited() method can be called before the
pipe_data_received() and pipe_connection_lost() methods. Document this
and adapt the test accordingly.
Revert commit 282edd7b2a.
_child_watcher_callback() calls _process_exited() immediately: don't
add an additional delay with call_soon(). The reverted change didn't
make _process_exited() more deterministic: it can still be called
before pipe_connection_lost(), for example.
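A hedged sketch of a protocol that tolerates this ordering; the class name is
made up and the two-pipe count assumes only stdout and stderr are piped:

```python
import asyncio

class ExitTrackingProtocol(asyncio.SubprocessProtocol):
    """Illustrative protocol that copes with process_exited() arriving
    before pipe_connection_lost() (assumes stdout and stderr pipes)."""

    def __init__(self, done):
        self.done = done          # an asyncio.Future to resolve when finished
        self.pipes_left = 2
        self.exited = False

    def pipe_data_received(self, fd, data):
        pass  # data may still arrive around process_exited()

    def pipe_connection_lost(self, fd, exc):
        self.pipes_left -= 1
        self._maybe_finish()

    def process_exited(self):
        self.exited = True
        self._maybe_finish()

    def _maybe_finish(self):
        # Finish only when the process has exited *and* both pipes closed,
        # regardless of the order the callbacks were delivered in.
        if self.exited and self.pipes_left == 0 and not self.done.done():
            self.done.set_result(None)
```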
Co-authored-by: Davide Rizzo <sorcio@gmail.com>