This prevents external cancellations of a task group's parent task from
being dropped when an internal cancellation happens at the same time.
Also strengthen the semantics of uncancel() to clear self._must_cancel
when the cancellation count reaches zero.
Co-Authored-By: Tin Tvrtković <tinchester@gmail.com>
Co-Authored-By: Arthur Tacca
* as_completed returns an object that is both an iterator and an async iterator (see the sketch after this list)
* Existing tests adjusted to test both the old and new style
* New test to ensure iterator can be resumed
* New test to ensure async iterator yields any passed-in Futures as-is
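A brief usage sketch (not part of the change itself), assuming a Python
version where as_completed supports asynchronous iteration; the work()
coroutine is a made-up example:

```python
import asyncio

async def work(i: int) -> int:
    await asyncio.sleep(0.01 * i)
    return i

async def main() -> None:
    # Old style: plain iteration yields awaitables that resolve to the
    # result of whichever input finishes next.
    for next_done in asyncio.as_completed([work(3), work(1), work(2)]):
        print(await next_done)

    # New style: async iteration yields the completed tasks/futures
    # themselves, so passed-in Future/Task objects come back as-is.
    tasks = [asyncio.ensure_future(work(i)) for i in (3, 1, 2)]
    async for task in asyncio.as_completed(tasks):
        print(task.result())

asyncio.run(main())
```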
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Co-authored-by: Guido van Rossum <gvanrossum@gmail.com>
The default task name is "Task-<counter>" (if no name is passed in during Task creation).
This is initialized in `Task.__init__` (C impl) using string formatting, which can be quite slow.
Actually using the task name in real-world code is not very common, so this is wasted initialization work.
Let's defer this string formatting to the first time the name is read (in `get_name` impl),
so we don't need to pay the string formatting cost if the task name is never read.
We don't change the order in which tasks are assigned numbers (if they are) --
the number is set on task creation, as a PyLong instead of a formatted string.
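A purely illustrative Python sketch of the idea (the real change lives in
the C Task implementation): remember only the counter value at creation
time and format the "Task-<counter>" string lazily on first read.

```python
import itertools

_task_name_counter = itertools.count(1).__next__

class _LazyNamedTaskSketch:
    """Illustration only, not the real asyncio.Task."""

    def __init__(self, name=None):
        if name is None:
            # Cheap: store the integer, skip the f-string for now.
            self._name = _task_name_counter()
        else:
            self._name = str(name)

    def get_name(self):
        # Pay the formatting cost only on first read, then cache it.
        if isinstance(self._name, int):
            self._name = f"Task-{self._name}"
        return self._name
```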
Co-authored-by: Łukasz Langa <lukasz@langa.pl>
asyncio.get_event_loop() now always returns either the running event loop or
the result of the get_event_loop_policy().get_event_loop() call. The latter
should now raise a RuntimeError if no current event loop was set,
instead of creating and setting a new event loop.
This also affects a number of asyncio functions and constructors which
call get_event_loop() implicitly: ensure_future(), shield(), gather(),
etc.
DeprecationWarning is no longer emitted if there is no running event loop but
the current event loop was set.
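A rough behaviour sketch under these semantics (the exact release in which
RuntimeError is raised may differ):

```python
import asyncio

async def main():
    # With a running loop, get_event_loop() simply returns it.
    assert asyncio.get_event_loop() is asyncio.get_running_loop()

asyncio.run(main())

# Outside a running loop, once the current loop has been explicitly unset,
# the policy no longer creates a loop implicitly and raises instead.
asyncio.set_event_loop(None)
try:
    asyncio.get_event_loop()
except RuntimeError as exc:
    print("no current event loop:", exc)
```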
Co-authored-by: Łukasz Langa <lukasz@langa.pl>
The inspect version was not working with unittest.mock.AsyncMock.
The fix introduces special-casing of AsyncMock in
`inspect.iscoroutinefunction` equivalent to the one
performed in `asyncio.iscoroutinefunction`.
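A small illustration of the intended equivalence, assuming a Python version
that includes the fix:

```python
import asyncio
import inspect
from unittest.mock import AsyncMock

mock = AsyncMock()

# With the special-casing in place, both helpers agree that an AsyncMock
# behaves like a coroutine function.
print(asyncio.iscoroutinefunction(mock))   # True
print(inspect.iscoroutinefunction(mock))   # True (was False before the fix)
```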
Co-authored-by: Łukasz Langa <lukasz@langa.pl>
After a long deliberation we ended up feeling that the message argument for Future.cancel(), added in 3.9, was a bad idea, so we're deprecating it in 3.11 and plan to remove it in 3.13.
Do the same in the _asyncio C accelerator module,
and adjust one test that the change caused to fail.
For more background, see the discussion starting here:
https://github.com/python/cpython/pull/31394#issuecomment-1053545331
(Basically, @asvetlov proposed to return False from cancel()
when there is already a pending cancellation, and I went along,
even though it wasn't necessary for the task group implementation,
and @agronholm has come up with a counterexample that fails
because of this change. So now I'm changing it back to the old
semantics (but still bumping the counter) until we can have a
proper discussion about this.)
asyncio/taskgroups.py is an adaptation of taskgroup.py from EdgeDb, with the following key changes:
- Allow creating new tasks as long as the last task hasn't finished
- Raise [Base]ExceptionGroup (directly) rather than TaskGroupError deriving from MultiError
- Instead of monkey-patching the parent task's cancel() method,
add a new public API to Task
The Task class has a new internal flag, `_cancel_requested`, which is set when `.cancel()` is called successfully. The `.cancelling()` method returns the value of this flag. Further `.cancel()` calls while this flag is set return False. To reset this flag, call `.uncancel()`.
Thus, a Task that catches and ignores `CancelledError` should call `.uncancel()` if it wants to be cancellable again; until it does so, it is deemed to be busy with uninterruptible cleanup.
This new Task API helps solve the problem where TaskGroup needs to distinguish whether the parent task is being cancelled "from the outside" or "from the inside".
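A small consumer-side sketch of the API (assuming Python 3.11+); worker()
is a made-up example:

```python
import asyncio

async def worker():
    try:
        await asyncio.sleep(10)
    except asyncio.CancelledError:
        task = asyncio.current_task()
        assert task.cancelling() == 1   # one unacknowledged cancel request
        # Decide to swallow the cancellation; acknowledge it so the task
        # is considered cancellable again afterwards.
        task.uncancel()
        assert task.cancelling() == 0
    return "survived"

async def main():
    task = asyncio.create_task(worker())
    await asyncio.sleep(0)              # let the worker reach its sleep
    task.cancel()
    print(await task)                   # "survived"

asyncio.run(main())
```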
Co-authored-by: Yury Selivanov <yury@edgedb.com>
Co-authored-by: Andrew Svetlov <andrew.svetlov@gmail.com>
Remove the @asyncio.coroutine decorator,
which enabled legacy generator-based coroutines to be compatible with
async/await code, and remove asyncio.coroutines.CoroWrapper, which was
used for wrapping legacy coroutine objects in debug mode.
The decorator has been deprecated
since Python 3.8, and the removal was initially scheduled for Python 3.10.
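For reference, a minimal migration sketch from the removed decorator to a
native coroutine:

```python
import asyncio

# Legacy style, no longer supported:
#
#     @asyncio.coroutine
#     def fetch():
#         yield from asyncio.sleep(1)
#         return "data"

# Native replacement:
async def fetch():
    await asyncio.sleep(1)
    return "data"

print(asyncio.run(fetch()))
```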
asyncio.get_event_loop() now emits a deprecation warning when it creates a new event loop.
In future releases it will become an alias of asyncio.get_running_loop().
# Improve asyncio.wait function
The original code creates the futures set twice.
We can create this set once, up front, avoiding the second creation.
This new behaviour [breaks the aiokafka library](https://github.com/aio-libs/aiokafka/pull/672), because aiokafka passes an iterator to that function, so the second iteration comes up empty.
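A generic sketch of the pitfall (not the asyncio.wait source): consuming the
same iterator twice leaves the second pass empty, which is why the futures
set should be built only once, up front.

```python
def consume_twice(fs):
    first = set(fs)    # exhausts the iterator if fs is one
    second = set(fs)   # now empty for an iterator input
    return first, second

first, second = consume_twice(iter(range(3)))
print(first)    # {0, 1, 2}
print(second)   # set()

first, second = consume_twice({0, 1, 2})
print(second)   # {0, 1, 2}: a real collection can be iterated again
```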
Automerge-Triggered-By: GH:1st1
Currently, if `asyncio.wait_for()` itself is cancelled, it will always
raise `CancelledError` regardless of whether the underlying task is still
running. This is similar to the race with the timeout, which is already
handled.
When I was fixing bpo-32751 back in GH-7216 I missed the case when
*timeout* is zero or negative. This takes care of that.
Props to @aaliddell for noticing the inconsistency.
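A short behaviour sketch for the zero-timeout case, assuming a release that
includes this fix:

```python
import asyncio

async def main():
    # An already-finished awaitable still yields its result, even with a
    # zero timeout.
    done = asyncio.create_task(asyncio.sleep(0, result="done"))
    await asyncio.sleep(0.01)
    print(await asyncio.wait_for(done, timeout=0))       # done

    # A pending awaitable is cancelled and TimeoutError is raised right away.
    try:
        await asyncio.wait_for(asyncio.sleep(10), timeout=0)
    except asyncio.TimeoutError:
        print("timed out immediately")

asyncio.run(main())
```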
This updates _PyErr_ChainStackItem() to use _PyErr_SetObject()
instead of _PyErr_ChainExceptions(). This prevents a hang in
certain circumstances because _PyErr_SetObject() performs checks
to prevent cycles in the exception context chain while
_PyErr_ChainExceptions() doesn't.
When an asyncio.Task is cancelled, the exception traceback now
starts with where the task was first interrupted. Previously,
the traceback only had "depth one."
Currently, if the asyncio.wait_for() timeout expires, it cancels the
inner future and then always raises TimeoutError. If that
future is a task, it can handle the cancellation manually,
and that process can lead to some other exception. The current
implementation silently loses those exceptions.
To resolve this, wait_for() now checks whether the cancellation was
successful or not. If it ended with an exception, wait_for()
re-raises it.
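A sketch of the scenario, assuming a Python release with this fix;
stubborn() and CleanupError are made-up names:

```python
import asyncio

class CleanupError(Exception):
    pass

async def stubborn():
    try:
        await asyncio.sleep(10)
    except asyncio.CancelledError:
        # The inner task handles the cancellation itself and fails doing so.
        raise CleanupError("failed during cancellation")

async def main():
    try:
        await asyncio.wait_for(stubborn(), timeout=0.01)
    except CleanupError as exc:
        # With the fix, wait_for() re-raises the inner exception instead of
        # silently replacing it with TimeoutError.
        print("got:", exc)
    except asyncio.TimeoutError:
        print("pre-fix behaviour: TimeoutError, inner exception lost")

asyncio.run(main())
```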
Co-authored-by: Roman Skurikhin <roman.skurikhin@cruxlab.com>
The previous commits on bpo-29587 got exception chaining working
with gen.throw() in the `yield` case. This patch also gets the
`yield from` case working.
As a consequence, implicit exception chaining now also works in
the asyncio scenario of awaiting on a task when an exception is
already active.
Tests are included for both the asyncio case and the pure
generator-only case.
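A minimal sketch of the asyncio case, assuming a Python version that
includes this change:

```python
import asyncio

async def fail():
    await asyncio.sleep(0.01)
    raise ValueError("problem in the task")

async def main():
    task = asyncio.create_task(fail())
    try:
        raise KeyError("already active")
    except KeyError:
        # We await the task while an exception is being handled; when the
        # task's ValueError is thrown back into this frame, it is implicitly
        # chained to the active KeyError.
        try:
            await task
        except ValueError as exc:
            print(type(exc.__context__).__name__)   # KeyError

asyncio.run(main())
```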