*Moved from python/asyncio#493.*
This PR fixes issue python/asyncio#480, as explained in [this comment](https://github.com/python/asyncio/issues/480#issuecomment-278703828).
The `_SelectorDatagramTransport.sendto` method has to be modified ~~so `_sock.sendto` is used in all cases (because it is tricky to reliably tell if the socket is connected or not). Could that be an issue for connected sockets?~~ *EDIT* ... so `_sock.send` is used only if `_sock` is connected.
It also protects `socket.getsockname` against `OSError` in `_SelectorTransport`. This might happen on Windows if the socket is not connected (e.g. for UDP broadcasting).
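A rough sketch of the two changes described above (illustrative, not the exact patch; attribute names follow the existing transport conventions):

```
# Connected datagram transports store the peer in self._address, so send()
# is only used when that attribute is set (assumption based on the
# description above, not the exact patch).
def sendto(self, data, addr=None):
    if self._address is not None:
        self._sock.send(data)          # connected UDP socket
    else:
        self._sock.sendto(data, addr)  # unconnected, e.g. UDP broadcasting

# In _SelectorTransport, getsockname() is guarded because it can raise
# OSError on Windows for unconnected sockets:
try:
    self._extra['sockname'] = sock.getsockname()
except OSError:
    self._extra['sockname'] = None
```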
https://bugs.python.org/issue31922
As in the title, expose the C `raise()` function as `raise_function` in the `signal` module. Also drop the existing `raise_signal` from the `_testcapi` module and replace all usages with the new function.
https://bugs.python.org/issue35568
test_asyncio/test_sendfile.py now resets the event loop policy using
tearDownModule() as done in other tests, to prevent a warning when
running tests on Windows.
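A minimal sketch of that cleanup, assuming the same pattern used by the other test_asyncio modules:

```
import asyncio

def tearDownModule():
    # Reset the policy so the event loop choice made for these tests does
    # not leak into test modules that run afterwards (avoids the warning
    # seen on Windows).
    asyncio.set_event_loop_policy(None)
```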
There is a race condition in SelectorEventLoopUnixSockSendfileTests that causes the prepare() method to return a non-connected server protocol, making the cleanup() method skip the correct handling of the transport. This commit makes prepare() always return a connected server protocol that can always be cleaned up correctly.
There is a race condition regarding signal delivery in test_signal_handling_args of
test_asyncio.test_events.KqueueEventLoopTests: the signal can be received at any moment outside the time window provided in the test. The fix is to instead wait for the signal to be received, with a bigger timeout.
Replace time.time() with time.monotonic() in tests to measure time
delta.
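For illustration, measuring a delta with the monotonic clock:

```
import time

t0 = time.monotonic()
...  # code being timed
dt = time.monotonic() - t0  # unaffected by system clock adjustments
```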
test_zipfile64: display progress every minute (60 secs) rather than
every 5 minutes (5*60 seconds).
Modify asyncio tests to utilize the certificates from the test directory
instead of its own set, as they are the same and with each update they had
to be updated as well.
The call to `_untrack_reader` is performed too soon, causing the protocol
to forget about the reader before `connection_lost` can run and feed the
EOF to the reader. See bpo-35065.
Some FreeBSD buildbots fail to run this test because the EOF was not being received by the server if the size is not big enough. This behaviour only appears when the client is using TLS 1.3.
inspect.isfunction() processes both inspect.isfunction(func) and
inspect.isfunction(partial(func, arg)) correctly but some other functions in the
inspect module (iscoroutinefunction, isgeneratorfunction and isasyncgenfunction)
lack this functionality. This commit adds a new check in the mentioned functions
in the inspect module so they can work correctly with arbitrarily nested partial
functions.
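A hedged sketch of the kind of unwrapping these checks now perform (the helper name here is illustrative, not the exact implementation):

```
import functools
import inspect

def _unwrap_partial(func):
    # Peel arbitrarily nested functools.partial wrappers.
    while isinstance(func, functools.partial):
        func = func.func
    return func

async def coro(a, b):
    return a + b

nested = functools.partial(functools.partial(coro, 1), 2)
# On versions with this fix, iscoroutinefunction() sees through the nesting:
print(inspect.iscoroutinefunction(nested))  # True
```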
The C implementation of asyncio.Task currently fails to perform the
cancellation cleanup correctly in the following scenario.
async def task1():
    async def task2():
        await task3  # task3 is never cancelled

    asyncio.current_task().cancel()
    await asyncio.create_task(task2())
The actual error is a hardcoded call to `future_cancel()` instead of
calling the `cancel()` method of a future-like object.
Thanks to Vladimir Matveev for noticing the code discrepancy and to
Yury Selivanov for coming up with a pathological scenario.
Waiting is pretty normal for any asyncio program; logging its time just adds
noise to the logs without providing any useful information.
https://bugs.python.org/issue34849
* Insert the warning in asyncio.sleep when the loop argument is used (see the sketch after this list)
* Insert the warning in asyncio.wait and asyncio.wait_for when the loop argument is used
* Better formatting of the code
* Add news file
* Change calls to get_event_loop() into calls to get_running_loop()
* Make the message in News clearer
* Improve the comments in test_tasks
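A hedged sketch of the warning being inserted (the exact wording may differ):

```
import warnings

async def sleep(delay, result=None, *, loop=None):
    if loop is not None:
        warnings.warn("The loop argument is deprecated and scheduled for "
                      "removal.",
                      DeprecationWarning, stacklevel=2)
    ...
```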
Store a weak reference to the stream reader to break strong references.
It breaks the strong reference loop between reader and protocol and allows detecting and closing the socket if the stream is deleted (garbage collected).
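A minimal sketch of the idea, assuming the protocol keeps the weak reference in an attribute like the one below:

```
import asyncio
import weakref

class StreamReaderProtocol(asyncio.Protocol):
    def __init__(self, stream_reader):
        # Only a weak reference: the protocol alone can no longer keep the
        # reader (and therefore the socket) alive once the stream is dropped.
        self._stream_reader_wr = weakref.ref(stream_reader)

    @property
    def _stream_reader(self):
        # Returns None once the reader has been garbage collected.
        return self._stream_reader_wr()
```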
Update all test certs and keys to use future proof crypto settings:
* 3072 bit RSA keys
* SHA-256 signature
Signed-off-by: Christian Heimes <christian@python.org>
Various asyncio internals expect that the default executor is a
`ThreadPoolExecutor`, so deprecate passing anything else to
`loop.set_default_executor()`.
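Roughly the check being added (a sketch; the warning text is approximate):

```
import concurrent.futures
import warnings

def set_default_executor(self, executor):
    if not isinstance(executor, concurrent.futures.ThreadPoolExecutor):
        warnings.warn("Using the default executor that is not an instance "
                      "of ThreadPoolExecutor is deprecated.",
                      DeprecationWarning, stacklevel=2)
    self._default_executor = executor
```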
The cancellation of an overlapped WSARecv() has a race condition
which causes data loss because of the current implementation of
proactor in asyncio.
No longer cancel overlapped WSARecv() in _ProactorReadPipeTransport
to work around the race condition.
Remove the optimized recv_into() implementation to get simple
implementation of pause_reading() using the single _pending_data
attribute.
Move _feed_data_to_buffered_proto() to protocols.py.
Remove set_protocol() method which became useless.
Fix "<CoroWrapper ...> was never yielded from" warning in
PyTask_PyFuture_Tests.test_error_in_call_soon() of
test_asyncio.test_tasks.
Close manually the coroutine on error.
* Fix AttributeError (not all SSL exceptions have 'errno' attribute)
* Increase default handshake timeout from 10 to 60 seconds
* Make sure start_tls can be cancelled correctly
* Make sure any error in SSLProtocol gets propagated (instead of just being logged)
Currently, asyncio.wait_for(fut), upon reaching the timeout deadline,
cancels the future and returns immediately. This is problematic for
when *fut* is a Task, because it will be left running for an arbitrary
amount of time. This behavior is itself surprising and may lead to
related bugs such as the one described in bpo-33638:
condition = asyncio.Condition()
async with condition:
    await asyncio.wait_for(condition.wait(), timeout=0.5)
Currently, instead of raising a TimeoutError, the above code will fail
with `RuntimeError: cannot wait on un-acquired lock`, because
`__aexit__` is reached _before_ `condition.wait()` finishes its
cancellation and re-acquires the condition lock.
To resolve this, make `wait_for` await the task cancellation.
The tradeoff here is that the `timeout` promise may be broken if the
task decides to handle its cancellation in a slow way. This represents
a behavior change and should probably not be back-patched to 3.6 and
earlier.
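A minimal sketch of the idea (the helper name is illustrative, not the exact patch):

```
import asyncio

async def _cancel_and_wait(fut):
    """Cancel *fut* and wait until the cancellation actually completes."""
    fut.cancel()
    try:
        await fut  # give the task time to run its cleanup code
    except asyncio.CancelledError:
        pass       # the expected outcome of the cancellation
```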
Use transport.set_write_buffer_limits() in sendfile tests of
test_asyncio to make sure that the protocol is paused after sending
4 KiB. Previously,
test_sendfile_fallback_close_peer_in_the_middle_of_receiving() failed
on FreeBSD if the DATA was smaller than the default limit of 64 KiB.
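For illustration, a protocol that lowers the high-water mark (the class name is made up for the example):

```
import asyncio

class SmallBufferProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        # Pause write-flow notifications after only 4 KiB instead of the
        # default 64 KiB high-water mark.
        transport.set_write_buffer_limits(high=4096)
        self.transport = transport
```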
In this commit:
* Support BufferedProtocol in set_protocol() and start_tls()
* Fix proactor to cancel readers reliably
* Update tests to be compatible with OpenSSL 1.1.1
* Clarify BufferedProtocol docs
* Bump TLS tests timeouts to 60 seconds; eliminate possible race from start_serving
* Rewrite test_start_tls_server_1
bpo-32622, bpo-33353: On macOS, sock.connect() changes the
SO_SNDBUF value. Only set SO_SNDBUF and SO_RCVBUF buffer sizes
once a socket is connected or bound, not before.
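For illustration, a plain-socket sketch of the ordering (not the asyncio code itself):

```
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
# Set the buffer sizes only after bind()/connect(): on macOS, connect()
# changes SO_SNDBUF, so an earlier value could be silently overwritten.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1024)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1024)
sock.close()
```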
bpo-32622, bpo-33353: sendfile() tests of test_asyncio use socket
buffers of 1 kB "to test on relative small data sets". Send only
160 KiB rather than 10 MB to make the test much faster.
Shrink also SendfileBase.DATA from 1600 KiB to 160 KiB.
On Linux, 3 test_sock_sendfile_mix_with_regular_send() runs now take
less than 1 second, instead of 18 seconds.
On FreeBSD, the 3 tests didn't hang, but took 3 minutes. Now
the 3 tests pass in less than 1 second.
TLS 1.3 behaves slightly differently than TLS 1.2. Session tickets and TLS
client cert auth are now handled after the initial handshake. Tests now
either send/recv data to trigger session tickets and client certs, or
ignore ConnectionResetError / BrokenPipeError on the server side to
handle clients that force-close the socket fd.
To test TLS 1.3, OpenSSL 1.1.1-pre7-dev (git master + OpenSSL PR
https://github.com/openssl/openssl/pull/6340) is required.
Signed-off-by: Christian Heimes <christian@python.org>
* bpo-33263 Fix FD leak in _SelectorSocketTransport. (GH-6450)
Under particular circumstances _SelectorSocketTransport can try to add a reader
even if the transport is already being closed. This can lead to an FD leak and
invalid state of the following connections. Fixed _SelectorSocketTransport
to add the reader only if the transport is still active.
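A hedged sketch of the guard (method placement and names are illustrative):

```
def _add_reader(self, fd, callback, *args):
    if self._closing:
        return  # the transport is shutting down: don't register a new reader
    self._loop._add_reader(fd, callback, *args)
```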
Fix typo from commit 6370f345e1
Signed-off-by: Christian Heimes <christian@python.org>
https://bugs.python.org/issue32262
The proactor event loop has a race condition when reading with
pausing/resuming. `resume_reading()` unconditionally schedules the read
function to read from the current future. If `resume_reading()` was
called before the previously scheduled done callback fires, this results
in two attempts to get the data from the most recent read and an
assertion failure. This commit tracks whether or not `resume_reading`
needs to reschedule the callback to restart the loop, preventing a
second attempt to read the data.
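A hedged sketch of the bookkeeping (attribute names are illustrative, not the exact patch):

```
def pause_reading(self):
    self._paused = True

def resume_reading(self):
    self._paused = False
    if self._reschedule_on_resume:
        # The read completed while paused and its done callback already ran;
        # restart the read loop exactly once instead of scheduling it twice.
        self._loop.call_soon(self._loop_reading, self._read_fut)
        self._reschedule_on_resume = False
```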
test_asyncio hangs indefinitely on macOS 10.13.2+ on `read_pty_output()`
using the KqueueSelector. Closing `proto.transport` (as is done in
`write_pty_output()`) seems to fix it.
(cherry picked from commit 12f74d8608)
Co-authored-by: Nathan Henrie <n8henrie@users.noreply.github.com>
Also, re-enable test_read_pty_output on macOS.
* bpo-32947: OpenSSL 1.1.1-pre1 / TLS 1.3 fixes
Misc fixes and workarounds for compatibility with OpenSSL 1.1.1-pre1 and
TLS 1.3 support. With OpenSSL 1.1.1, Python negotiates TLS 1.3 by
default. Some test cases only apply to TLS 1.2. Other tests currently
fail because the threaded or async test servers stop after failure.
I'm going to address these issues when OpenSSL 1.1.1 reaches beta.
OpenSSL 1.1.1 has added a new option OP_ENABLE_MIDDLEBOX_COMPAT for TLS
1.3. The feature is enabled by default for maximum compatibility with
broken middleboxes. Users should be able to disable the hack, and CPython's test suite needs
it to verify default options.
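For illustration, disabling the option from Python (the constant is only exposed when built against OpenSSL 1.1.1+):

```
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
if hasattr(ssl, "OP_ENABLE_MIDDLEBOX_COMPAT"):
    # Opt out of the middlebox compatibility hack, e.g. to test defaults.
    ctx.options &= ~ssl.OP_ENABLE_MIDDLEBOX_COMPAT
```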
Signed-off-by: Christian Heimes <christian@python.org>
To mitigate the situation when the buildbot is under load
and is unable to send/receive data fast enough:
* reduce the size of the payload
* set a generous timeout for socket ops
bpo-31399: Let OpenSSL verify hostname and IP
The ssl module now uses OpenSSL's X509_VERIFY_PARAM_set1_host() and
X509_VERIFY_PARAM_set1_ip() API to verify hostname and IP addresses.
* Remove match_hostname calls
* Check for libssl with set1_host, libssl must provide X509_VERIFY_PARAM_set1_host()
* Add documentation for OpenSSL 1.0.2 requirement
* Don't support OpenSSL special mode with a leading dot, e.g. ".example.org" matches "www.example.org". It's not standards-conformant.
* Add hostname_checks_common_name
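A quick usage sketch of the new attribute (requires OpenSSL 1.0.2+):

```
import ssl

ctx = ssl.create_default_context()
# With common-name matching disabled, only subjectAltName entries are
# considered during hostname verification.
ctx.hostname_checks_common_name = False
```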
Signed-off-by: Christian Heimes <christian@python.org>
* bpo-32662: Implement Server.start_serving() and Server.serve_forever()
New methods:
* Server.start_serving(),
* Server.serve_forever(), and
* Server.is_serving().
Add 'start_serving' keyword parameter to loop.create_server() and
loop.create_unix_server().
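A quick usage sketch of the new API (here via asyncio.start_server, which accepts the same keyword):

```
import asyncio

async def handle(reader, writer):
    writer.close()

async def main():
    # start_serving=False defers accepting connections until requested.
    server = await asyncio.start_server(handle, "127.0.0.1", 8888,
                                        start_serving=False)
    assert not server.is_serving()
    await server.start_serving()
    assert server.is_serving()
    server.close()
    await server.wait_closed()

asyncio.run(main())
```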
Specifically, it's not possible to subclass Task/Future classes
and override the following methods:
* Future._schedule_callbacks
* Task._step
* Task._wakeup
* Add coro.cr_origin and sys.set_coroutine_origin_tracking_depth
* Use coroutine origin information in the unawaited coroutine warning
* Stop using set_coroutine_wrapper in asyncio debug mode
* In BaseEventLoop.set_debug, enable debugging in the correct thread
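A small usage sketch of the new introspection hooks:

```
import sys

# Keep up to 10 frames of "where was this coroutine created?" information.
sys.set_coroutine_origin_tracking_depth(10)

async def coro():
    pass

c = coro()
print(c.cr_origin)  # tuple of (filename, lineno, function name) frames, or None
c.close()           # close explicitly to avoid a "never awaited" warning
```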
Add test certs and test for ECDSA cert and EC/RSA dual mode.
I'm also adding certs for IDNA 2003/2008 tests and simplify some test
data handling.
Signed-off-by: Christian Heimes <christian@python.org>
* Make ssl_handshake_timeout None by default.
* Raise ValueError if ssl_handshake_timeout is used without ssl.
* Raise ValueError if ssl_handshake_timeout is not positive.
asyncio.get_event_loop() and, subsequently, asyncio._get_running_loop()
are among the most frequently executed functions in asyncio. They also
can't be sped up by third-party event loops like uvloop.
When implemented in C they become 4x faster.
* Convert asyncio/tasks.py to async/await
* Convert asyncio/queues.py to async/await
* Convert asyncio/test_utils.py to async/await
* Convert asyncio/base_subprocess.py to async/await
* Convert asyncio/subprocess.py to async/await
* Convert asyncio/streams.py to async/await
* Fix comments
* Convert asyncio/locks.py to async/await
* Convert asyncio.sleep to async def
* Add a comment
* Add missing news
* Convert stubs from AbstractEventLoop to async functions
* Convert subprocess_shell/subprocess_exec
* Convert connect_read_pipe/connect_write_pipe to async/await syntax
* Convert create_datagram_endpoint
* Convert create_unix_server/create_unix_connection
* Get rid of old style coroutines in unix_events.py
* Convert selector_events.py to async/await
* Convert wait_closed and create_connection
* Drop redundant line
* Convert base_events.py
* Code cleanup
* Drop redundant comments
* Fix indentation
* Add explicit tests for compatibility between old and new coroutines
* Convert windows event loop to use async/await
* Fix double awaiting of async function
* Convert asyncio/locks.py
* Improve docstring
* Convert tests to async/await
* Convert more tests
* Convert more tests
* Convert more tests
* Convert tests
* Improve test
* Remove asyncio.selectors and asyncio._overlapped symbols from the
namespace of the asyncio module
* Replace "from asyncio import selectors" with "import selectors"
* Replace "from asyncio import _overlapped" with "import _overlapped"
asyncio.selectors was added to support Python 3.3, which doesn't have
selectors in its standard library, and Python 3.4 in the same code
base. Same rationale for asyncio._overlapped. Python 3.3 reached its
end of life, and asyncio is no longer maintained as a third-party
module on PyPI.
The test.support.skip_unless_bind_unix_socket() decorator is used to skip
asyncio tests that fail because the platform lacks a functional bind()
function for unix domain sockets (as is the case for non-root users on
recent Android versions, which now run SELinux in enforcing mode).
Call doCleanups() to close the loop after calling
executor.shutdown(wait=True): see TestCase.set_event_loop() of
asyncio.test_utils.
Replace also gc.collect() with support.gc_collect().
* Explicitly call shutdown(wait=True) on executors to wait until all
threads complete to prevent side effects between tests.
* Fix test_loop_self_reading_exception(): don't mock loop.close().
Previously, the original close() method was called rather than the
mock, because of how set_event_loop() registered loop.close().
* bpo-31034: Reliable signal handler for test_asyncio
Don't rely on the current SIGHUP signal handler, make sure that it's
set to the "default" signal handler: SIG_DFL.
* Add comments
* bpo-30280: asyncio now cleans up threads
asyncio base TestCase now uses threading_setup() and
threading_cleanup() of test.support to cleanup threads.
* asyncio: Fix TestBaseSelectorEventLoop cleanup
bpo-30280: TestBaseSelectorEventLoop of
test.test_asyncio.test_selector_events now correctly closes the event
loop: cleanup its executor to not leak threads.
Don't override the close() method of the event loop, only override
the _close_self_pipe() method.
When there is no `await` or `yield (from)` left before the return in a coroutine,
cancel() was ignored.
example:
async def coro():
    asyncio.Task.current_task().cancel()
    return 42
...
res = await coro() # should raise CancelledError
Some built-in coroutine-like objects might not have __name__ or
__qualname__. Good examples are the 'asend', 'aclose' and 'athrow'
coroutine methods of asynchronous generators.
This implementation provides an additional 10-20% speed boost for
asyncio programs.
The patch also fixes _asynciomodule.c to use Arguments Clinic, and
makes '_schedule_callbacks' an overridable method (as it was in 3.5).
Issue #26777: Fix random failing of the test on the "AMD64 FreeBSD 9.x 3.5"
buildbot:
File ".../Lib/test/test_asyncio/test_tasks.py", line 2398, in go
self.assertTrue(0.09 < dt < 0.11, dt)
AssertionError: False is not true : 0.11902812402695417
Replace "< 0.11" with "< 0.15".
getaddrinfo takes an exclusive lock on some platforms, causing clients to queue
up waiting for the lock if many names are being resolved concurrently. Users
may want to handle name resolution in their own code, for the sake of caching,
using an alternate resolver, or to measure DNS duration separately from
connection duration. Skip getaddrinfo if the "host" passed into
create_connection is already resolved.
See https://github.com/python/asyncio/pull/302 for details.
Patch by A. Jesse Jiryu Davis.
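A hedged sketch of the pre-resolution check (the helper name and return shape are illustrative; the real code lives in base_events.py):

```
import ipaddress
import socket

def _numeric_addr_info(host, port):
    """Return an addrinfo-style tuple when *host* is already a numeric IP
    address, or None when getaddrinfo() is still required."""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return None  # a hostname: getaddrinfo() is still needed
    family = socket.AF_INET if ip.version == 4 else socket.AF_INET6
    return (family, socket.SOCK_STREAM, socket.IPPROTO_TCP, "", (host, port))

print(_numeric_addr_info("127.0.0.1", 80))    # already resolved, no DNS call
print(_numeric_addr_info("example.com", 80))  # None, fall back to getaddrinfo()
```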
This info is required on Python 3.5 and newer to get specific information on
the SSL object, like getting the binary peer certificate (instead of getting
it as text).
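A small usage sketch (the protocol class here is made up for the example):

```
import asyncio

class PeerCertProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        ssl_object = transport.get_extra_info("ssl_object")
        if ssl_object is not None:
            # Binary (DER) peer certificate; the text form cannot provide it.
            self.peer_cert = ssl_object.getpeercert(binary_form=True)
```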
Issue #24763: Fix resource warnings when asyncio.BaseSubprocessTransport
constructor fails, if subprocess.Popen raises an exception for example.
Patch written by Martin Richard, test written by me.
* Fix ResourceWarning warnings in test_streams
* Return True from StreamReader.eof_received() to fix
http://bugs.python.org/issue24539 (but still needs a unittest).
Add StreamReader.__repr__() for easy debugging.
* remove unused imports
* Issue #234: Drop JoinableQueue on Python 3.5+
Merge JoinableQueue with Queue. To more closely match the standard Queue,
asyncio.Queue has "join" and "task_done". JoinableQueue is deleted.
Docstring for Queue.join shouldn't mention threads.
Restore JoinableQueue as a deprecated alias for Queue. To more closely match
the standard Queue, asyncio.Queue has "join" and "task_done". JoinableQueue
remains as a deprecated alias for Queue to avoid needlessly breaking too much
code that depended on it.
Patch written by A. Jesse Jiryu Davis <jesse@mongodb.com>.
* _check_resolved_address() is implemented with getaddrinfo() which is slow
* If available, use socket.inet_pton() instead of socket.getaddrinfo(), because
it is much faster
Microbenchmark (timeit) on Fedora 21 (Python 3.4, Linux 3.17, glibc 2.20) to
validate the IPv4 address "127.0.0.1" or the IPv6 address "::1":
* getaddrinfo() 10.4 usec per loop
* inet_pton(): 0.285 usec per loop
On glibc older than 2.14, getaddrinfo() always requests the list of all local
IP addresses to the kernel (using a NETLINK socket). getaddrinfo() has other
known issues, it's better to avoid it when it is possible.
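A hedged sketch of the faster check (the helper name is illustrative):

```
import socket

def _is_numeric(host, family):
    # Fast validation with inet_pton() where available: no getaddrinfo() call,
    # no NETLINK round-trip on old glibc.
    try:
        socket.inet_pton(family, host)
        return True
    except OSError:
        return False

assert _is_numeric("127.0.0.1", socket.AF_INET)
assert _is_numeric("::1", socket.AF_INET6)
assert not _is_numeric("not-an-ip", socket.AF_INET)
```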
transport was closed. The check broke a Tulip example and this limitation is
arbitrary. Checking if _proc is None should be enough.
Also enhance close(): do nothing when it is called a second time.
Issue #23347: send_signal(), kill() and terminate() methods of
BaseSubprocessTransport now check if the transport was closed and if the
process exited.
Issue #23347: Refactor creation of subprocess transports. Changes on
BaseSubprocessTransport:
* Add a wait() method to wait until the child process exits
* The constructor now accepts an optional waiter parameter. The _post_init()
coroutine must not be called explicitly anymore. It makes subprocess
transports closer to other transports, and it gives more freedom if we want
later to change completely how subprocess transports are created.
* close() now kills the process instead of kindly terminating it: the child
process may ignore SIGTERM and continue to run. Call explicitly terminate()
and wait() if you want to kindly terminate the child process.
* close() now logs a warning in debug mode if the process is still running and
needs to be killed
* _make_subprocess_transport() is now fully asynchronous again: if the creation
of the transport failed, wait asynchronously for the process exit. Before, the
wait was synchronous. This change requires close() to *kill*, and not
terminate, the child process.
* Remove the _kill_wait() method, replaced with a more aggressive close()
method. It fixes _make_subprocess_transport() on error.
BaseSubprocessTransport.close() calls the close() method of pipe transports,
whereas _kill_wait() closed directly pipes of the subprocess.Popen object
without unregistering file descriptors from the selector (which caused severe
bugs).
These changes simplify the code of subprocess.py.
* Cleanup gather(): use cancelled() method instead of using private Future
attribute
* Fix _UnixReadPipeTransport and _UnixWritePipeTransport. Only start reading
when connection_made() has been called.
* Issue #23333: Fix BaseSelectorEventLoop._accept_connection(). Close the
transport on error. In debug mode, log errors using call_exception_handler()
* _SelectorTransport constructor: extra parameter is now optional
* Fix _SelectorDatagramTransport constructor. Only start reading after
connection_made() has been called.
* Fix _SelectorSslTransport.close(). Don't call protocol.connection_lost() if
protocol.connection_made() was not called yet, e.g. if the SSL handshake failed
or is still in progress. The close() method can be called if the creation of the
connection is cancelled, by a timeout for example.
* Remove unused SSLProtocol._closing attribute
* test_sslproto: skip test if ssl module is missing
* Python issue #23208: Don't use the traceback of the current handle if we
already know the traceback of the source. The handle may be more relevant,
but having 3 tracebacks (handle, source, exception) becomes more difficult to
read. The handle may be preferred later but it requires more work to make
this choice.
Use a coroutine with asyncio.sleep() instead of call_later() to ensure that the
scheduled call is cancelled.
Also add a unit test cancelling connect_pipe().
* Handle CancelledError correctly: just exit
* On error, log the exception and exit
Don't try to close the event loop, it is probably running and so it cannot be
closed.
Override the connect_read_pipe() method of the loop to immediately mock the
pause_reading() and resume_reading() methods.
The test failed randomly on FreeBSD 9 buildbot and on Windows using trollius.
* Use test_utils.run_briefly() to execute pending calls to really close
transports
* sslproto: also mock _SSLPipe.shutdown(), it's needed to close the transport
* pipe test: the test doesn't explicitly close the PipeHandle, so ignore
the warning instead
* test_popen: use the context manager ("with p:") to explicitly close pipes