Replace asyncio.set_event_loop() with TestCase.set_event_loop() of
test_asyncio.utils: this method calls TestCase.close_loop() which
waits until the executor completes, to avoid leaking dangling
threads.
Inherit from test_asyncio.utils.TestCase rather than
unittest.TestCase.
Since Python 3.8, async functions patched with mock.patch return an `AsyncMock`. `_accept_connection2` is an async function, and `create_task` was also mocked in its tests. Don't mock `create_task`, so that real tasks are created from the coroutine returned by the `AsyncMock` and run to completion.
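For illustration, a minimal sketch (not the actual test code) of what the 3.8 behaviour means in practice:

```
import asyncio
from unittest import mock

async def handler():
    return "real result"

async def main():
    # Since 3.8, mock.patch() replaces an async function with an AsyncMock;
    # calling it returns a coroutine that a real task can run to completion.
    with mock.patch(f"{__name__}.handler") as patched:
        assert isinstance(patched, mock.AsyncMock)
        task = asyncio.get_running_loop().create_task(handler())
        await task  # completes normally, no dangling coroutine

asyncio.run(main())
```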
https://bugs.python.org/issue37015
This will address the common mistake many asyncio users make:
an "except Exception" clause breaking Task cancellation.
In addition to this change, we stop inheriting asyncio.TimeoutError
and asyncio.InvalidStateError from their concurrent.futures.*
counterparts. There's no point for these exceptions to share the
inheritance chain.
In 3.9 we'll focus on implementing supervisors and cancel scopes,
which should allow better handling of all exceptions, including
SystemExit and KeyboardInterrupt.
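A minimal illustration of the mistake (not code from this change):

import asyncio

async def worker():
    try:
        await asyncio.sleep(3600)
    except Exception:
        # On Python < 3.8 this also swallowed CancelledError,
        # silently breaking the cancellation below.
        pass

async def main():
    task = asyncio.create_task(worker())
    await asyncio.sleep(0)
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        print("cancelled as expected: CancelledError is now a BaseException")

asyncio.run(main())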
This PR proposes a solution to [bpo-35545](https://bugs.python.org/issue35545) by adding optional `flowinfo` and `scopeid` fields to `asyncio.base_events._ipaddr_info`, carrying the full address information into `_ipaddr_info` instead of discarding the IPv6-specific parts.
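For context, an illustrative snippet (not part of this change) showing the two IPv6-only members of a resolved sockaddr:

```
import socket

infos = socket.getaddrinfo("::1", 80, socket.AF_INET6, socket.SOCK_STREAM)
host, port, flowinfo, scopeid = infos[0][4]
# IPv6 sockaddrs are 4-tuples; flowinfo and scopeid are the members
# that _ipaddr_info previously discarded.
```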
Changelog entry & regression tests to come.
https://bugs.python.org/issue35545
When the future returned by shield() is cancelled, its completion callback on the
inner future is not removed. This makes the callback list of the inner future
grow each time a shield is created and cancelled.
This change unregisters the callback from the inner future when the outer
future is cancelled.
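A minimal sketch (not code from this change) of the pattern that used to leak callbacks:

import asyncio

async def main():
    inner = asyncio.get_running_loop().create_future()
    for _ in range(1000):
        outer = asyncio.shield(inner)
        outer.cancel()
        await asyncio.sleep(0)
    # Without the fix, each iteration left a stale done-callback registered
    # on the inner future; now it is removed when the outer future is cancelled.
    inner.cancel()

asyncio.run(main())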
https://bugs.python.org/issue35125
*Moved from python/asyncio#493.*
This PR fixes issue python/asyncio#480, as explained in [this comment](https://github.com/python/asyncio/issues/480#issuecomment-278703828).
The `_SelectorDatagramTransport.sendto` method has to be modified ~~so `_sock.sendto` is used in all cases (because it is tricky to reliably tell if the socket is connected or not). Could that be an issue for connected sockets?~~ *EDIT* ... so `_sock.send` is used only if `_sock` is connected.
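Roughly, the intended dispatch looks like this sketch (simplified; `self._connected` is a stand-in, not the real attribute name):

```
# Inside _SelectorDatagramTransport.sendto(), simplified:
if self._connected:
    self._sock.send(data)          # connected UDP socket: no address needed
else:
    self._sock.sendto(data, addr)  # unconnected: an explicit address is required
```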
It also protects `socket.getsockname` against `OSError` in `_SelectorTransport`. This might happen on Windows if the socket is not connected (e.g. for UDP broadcasting).
https://bugs.python.org/issue31922
As in the title, expose the C `raise()` function as `raise_signal()` in the `signal` module. Also drop the existing `raise_signal()` from the `_testcapi` module and replace all its usages with the new function.
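A short usage sketch (POSIX-only, and assuming the new function ends up as `signal.raise_signal()`):

```
import signal

received = []
signal.signal(signal.SIGUSR1, lambda signum, frame: received.append(signum))

signal.raise_signal(signal.SIGUSR1)  # calls C raise() in the current thread
assert received == [signal.SIGUSR1]  # the Python handler has run by now
```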
https://bugs.python.org/issue35568
test_asyncio/test_sendfile.py now resets the event loop policy using
tearDownModule() as done in other tests, to prevent a warning when
running tests on Windows.
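The pattern is essentially the following (a sketch of the shape used elsewhere in test_asyncio):

import asyncio

def tearDownModule():
    # Reset to the default policy so the policy picked by this module's
    # tests does not leak into later test modules.
    asyncio.set_event_loop_policy(None)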
There is a race condition in SelectorEventLoopUnixSockSendfileTests that causes the prepare() method to return a non-connected server protocol, making the cleanup() method skip the correct handling of the transport. This commit makes prepare() always return a connected server protocol that can always be cleaned up correctly.
There is a race condition regarding signal delivery in test_signal_handling_args of
test_asyncio.test_events.KqueueEventLoopTests: the signal can be received at any moment outside the time window provided in the test. The fix is to instead wait for the signal to be received, with a bigger timeout.
Replace time.time() with time.monotonic() in tests to measure time
delta.
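For example:

import time

t0 = time.monotonic()
time.sleep(0.1)                 # the operation being timed
dt = time.monotonic() - t0      # not affected by system clock adjustments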
test_zipfile64: display progress every minute (60 secs) rather than
every 5 minutes (5*60 seconds).
Modify the asyncio tests to use the certificates from the test directory
instead of their own set; they are the same, and each update had to be
applied to both.
The call to `_untrack_reader` is performed too soon, causing the protocol
to forget about the reader before `connection_lost` can run and feed the
EOF to the reader. See bpo-35065.
Some FreeBSD buildbots fail to run this test because the EOF is not received by the server if the data size is not big enough. This behaviour only appears if the client is using TLS 1.3.
inspect.isfunction() handles both inspect.isfunction(func) and
inspect.isfunction(partial(func, arg)) correctly, but some other functions in the
inspect module (iscoroutinefunction, isgeneratorfunction and isasyncgenfunction)
lack this functionality. This commit adds a new check to the mentioned functions
in the inspect module so they work correctly with arbitrarily nested partial
functions.
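For example, after this change:

import functools
import inspect

async def fetch(url, timeout):
    ...

bound = functools.partial(fetch, timeout=10)
nested = functools.partial(bound, "https://example.com")

assert inspect.iscoroutinefunction(fetch)
assert inspect.iscoroutinefunction(bound)    # partial is unwrapped
assert inspect.iscoroutinefunction(nested)   # nesting is followed too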
The C implementation of asyncio.Task currently fails to perform the
cancellation cleanup correctly in the following scenario.
async def task1():
    async def task2():
        await task3  # task3 is never cancelled

    asyncio.current_task().cancel()
    await asyncio.create_task(task2())
The actual error is a hardcoded call to `future_cancel()` instead of
calling the `cancel()` method of a future-like object.
Thanks to Vladimir Matveev for noticing the code discrepancy and to
Yury Selivanov for coming up with a pathological scenario.
Waiting is perfectly normal for any asyncio program; logging its duration just adds
noise to the logs without providing any useful information.
https://bugs.python.org/issue34849
* Emit the warning in asyncio.sleep when the loop argument is used (see the sketch after this list)
* Emit the warning in asyncio.wait and asyncio.wait_for when the loop argument is used
* Better code formatting
* Add news file
* Change calls to get_event_loop() into calls to get_running_loop()
* Make the News message clearer
* Improve the comments in test_tasks
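A minimal sketch of the deprecation behaviour (the exact warning text is not shown here):

import asyncio
import warnings

async def main():
    loop = asyncio.get_running_loop()
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        await asyncio.sleep(0, loop=loop)   # explicit loop argument
    assert any(issubclass(w.category, DeprecationWarning) for w in caught)

asyncio.run(main())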
Store a weak reference to the stream reader to break the strong reference cycle.
Breaking the strong reference between reader and protocol makes it possible to detect and close the socket if the stream is deleted (garbage collected).
Update all test certs and keys to use future-proof crypto settings:
* 3072 bit RSA keys
* SHA-256 signature
Signed-off-by: Christian Heimes <christian@python.org>
Various asyncio internals expect that the default executor is a
`ThreadPoolExecutor`, so deprecate passing anything else to
`loop.set_default_executor()`.
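A sketch of what is now deprecated (illustrative, not from the patch):

import asyncio
import concurrent.futures

loop = asyncio.new_event_loop()
loop.set_default_executor(concurrent.futures.ThreadPoolExecutor())   # still fine
loop.set_default_executor(concurrent.futures.ProcessPoolExecutor())  # DeprecationWarning
loop.close()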
The cancellation of an overlapped WSARecv() has a race condition
which causes data loss because of the current implementation of
proactor in asyncio.
No longer cancel overlapped WSARecv() in _ProactorReadPipeTransport
to work around the race condition.
Remove the optimized recv_into() implementation to get simple
implementation of pause_reading() using the single _pending_data
attribute.
Move _feed_data_to_buffered_proto() to protocols.py.
Remove set_protocol() method which became useless.
Fix "<CoroWrapper ...> was never yielded from" warning in
PyTask_PyFuture_Tests.test_error_in_call_soon() of
test_asyncio.test_tasks.
Manually close the coroutine on error.
* Fix AttributeError (not all SSL exceptions have 'errno' attribute)
* Increase default handshake timeout from 10 to 60 seconds
* Make sure start_tls can be cancelled correctly
* Make sure any error in SSLProtocol gets propagated (instead of just being logged)
Currently, asyncio.wait_for(fut), upon reaching the timeout deadline,
cancels the future and returns immediately. This is problematic when
*fut* is a Task, because it will be left running for an arbitrary
amount of time. This behavior is itself surprising and may lead to
related bugs such as the one described in bpo-33638:
condition = asyncio.Condition()

async with condition:
    await asyncio.wait_for(condition.wait(), timeout=0.5)
Currently, instead of raising a TimeoutError, the above code will fail
with `RuntimeError: cannot wait on un-acquired lock`, because
`__aexit__` is reached _before_ `condition.wait()` finishes its
cancellation and re-acquires the condition lock.
To resolve this, make `wait_for` await the task's cancellation.
The tradeoff here is that the `timeout` promise may be broken if the
task decides to handle its cancellation in a slow way. This represents
a behavior change and should probably not be backported to 3.6 and
earlier.
Use transport.set_write_buffer_limits() in sendfile tests of
test_asyncio to make sure that the protocol is paused after sending
4 KiB. Previously,
test_sendfile_fallback_close_peer_in_the_middle_of_receiving() failed
on FreeBSD if the DATA was smaller than the default limit of 64 KiB.
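Roughly, the adjustment amounts to something like this sketch (the protocol name is illustrative):

import asyncio

class SendfileTestProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        # Pause writing after ~4 KiB instead of the 64 KiB default
        # high-water mark, so small DATA still exercises pause/resume.
        transport.set_write_buffer_limits(high=4 * 1024)
        self.transport = transport

    def pause_writing(self):
        self.paused = True

    def resume_writing(self):
        self.paused = False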
In this commit:
* Support BufferedProtocol in set_protocol() and start_tls() (see the sketch after this list)
* Fix proactor to cancel readers reliably
* Update tests to be compatible with OpenSSL 1.1.1
* Clarify BufferedProtocol docs
* Bump TLS tests timeouts to 60 seconds; eliminate possible race from start_serving
* Rewrite test_start_tls_server_1
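For reference, a minimal BufferedProtocol sketch (illustrative only, not code from this change):

import asyncio

class EchoBuffered(asyncio.BufferedProtocol):
    def connection_made(self, transport):
        self._transport = transport
        self._buffer = bytearray(65536)

    def get_buffer(self, sizehint):
        # The transport reads directly into this buffer, avoiding a copy.
        return self._buffer

    def buffer_updated(self, nbytes):
        self._transport.write(bytes(self._buffer[:nbytes]))

    def eof_received(self):
        return False  # let the transport close itself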
bpo-32622, bpo-33353: On macOS, sock.connect() changes the
SO_SNDBUF value. Only set the SO_SNDBUF and SO_RCVBUF buffer sizes
once a socket is connected or bound, not before.
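A sketch of the required ordering (UDP is used here only so the snippet runs without a peer):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.connect(("127.0.0.1", 9))   # connect first: on macOS, connect() overwrites SO_SNDBUF
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1024)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1024)
sock.close()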
bpo-32622, bpo-33353: sendfile() tests of test_asyncio use socket
buffers of 1 kB "to test on relative small data sets". Send only
160 KiB rather than 10 MB to make the test much faster.
Also shrink SendfileBase.DATA from 1600 KiB to 160 KiB.
On Linux, 3 test_sock_sendfile_mix_with_regular_send() runs now take
less than 1 second, instead of 18 seconds.
On FreeBSD, the 3 tests didn't hang but took 3 minutes. Now
the 3 tests pass in less than 1 second.
TLS 1.3 behaves slightly differently than TLS 1.2. Session tickets and TLS
client cert auth are now handled after the initial handshake. Tests now
either send/recv data to trigger session tickets and client certs, or they
ignore ConnectionResetError / BrokenPipeError on the server side to
handle clients that force-close the socket fd.
To test TLS 1.3, OpenSSL 1.1.1-pre7-dev (git master + OpenSSL PR
https://github.com/openssl/openssl/pull/6340) is required.
Signed-off-by: Christian Heimes <christian@python.org>
* bpo-33263 Fix FD leak in _SelectorSocketTransport. (GH-6450)
Under particular circumstances _SelectorSocketTransport can try to add a reader
even when the transport is already being closed. This can lead to an FD leak and
an invalid state of subsequent connections. Fix _SelectorSocketTransport to add
the reader only if the transport is still active.
Fix typo from commit 6370f345e1
Signed-off-by: Christian Heimes <christian@python.org>
https://bugs.python.org/issue32262