asyncio.ProactorEventLoop now catches and logs send errors when the
self-pipe is full: BaseProactorEventLoop._write_to_self() now catches
and logs OSError exceptions, as BaseSelectorEventLoop._write_to_self()
already does.
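A minimal sketch of that guard, following what the selector loop does
(class and attribute names are illustrative):

```python
import logging

logger = logging.getLogger("asyncio")

class _ProactorLoopSketch:
    def _write_to_self(self):
        # Wake the event loop up through the self-pipe socket.  If the
        # pipe is full, send() raises OSError; log it (in debug mode)
        # instead of letting the exception propagate to the caller.
        csock = self._csock
        if csock is None:
            return
        try:
            csock.send(b'\0')
        except OSError:
            if self._debug:
                logger.debug("Fail to write a null byte into the "
                             "self-pipe socket", exc_info=True)
```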
* asyncio: __del__() keeps a reference to warnings.warn
The __del__() methods of asyncio classes now keep a strong reference
to the warnings.warn() function so that the ResourceWarning can be
displayed in more cases. This ensures the function remains available
if instances are destroyed late during Python shutdown (while module
symbols are being cleared).
* Rename warn parameter to _warn
The "_warn" name is a hint that it is not the regular warnings.warn()
function.
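The pattern looks roughly like this (the class and attribute names below
are illustrative, not the exact asyncio ones): binding warnings.warn as a
default argument value captures a strong reference when the method is
defined, so it stays reachable during interpreter shutdown.

```python
import warnings

class _SomeTransport:
    def __init__(self):
        self._closed = False

    def __del__(self, _warn=warnings.warn):
        # The default argument holds a strong reference to warnings.warn,
        # so the warning can still be emitted even if the warnings module
        # globals have already been cleared during shutdown.
        if not self._closed:
            _warn(f"unclosed transport {self!r}", ResourceWarning,
                  source=self)
```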
bpo-32622, bpo-35682: Fix asyncio.ProactorEventLoop.sendfile(): don't
attempt to set the result of an internal future if it's already done.
Fix asyncio _ProactorBasePipeTransport._force_close(): don't set the
result of _empty_waiter if it's already done.
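The fix boils down to a done() check before completing the future; a
minimal sketch (helper name and variables are illustrative):

```python
import asyncio

def _set_result_if_pending(fut: asyncio.Future, result) -> None:
    # Setting a result on a future that is already done (for example,
    # cancelled) raises InvalidStateError, so guard the call.
    if not fut.done():
        fut.set_result(result)
```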
Cancelling an overlapped WSARecv() has a race condition that causes
data loss with the current proactor implementation in asyncio.
No longer cancel overlapped WSARecv() in _ProactorReadPipeTransport
to work around the race condition.
Remove the optimized recv_into() implementation to get a simple
implementation of pause_reading() using the single _pending_data
attribute.
Move _feed_data_to_bufferred_proto() to protocols.py.
Remove the set_protocol() method, which became useless.
In this commit:
* Support BufferedProtocol in set_protocol() and start_tls() (see the BufferedProtocol sketch after this list)
* Fix proactor to cancel readers reliably
* Update tests to be compatible with OpenSSL 1.1.1
* Clarify BufferedProtocol docs
* Bump TLS test timeouts to 60 seconds; eliminate a possible race in start_serving
* Rewrite test_start_tls_server_1
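For reference, a minimal BufferedProtocol sketch (names illustrative):
the transport reads received bytes directly into a buffer supplied by
the protocol, which is the mode that set_protocol() and start_tls() now
have to accommodate.

```python
import asyncio

class EchoBufferedProtocol(asyncio.BufferedProtocol):
    def __init__(self):
        self._buffer = bytearray(65536)
        self.transport = None

    def connection_made(self, transport):
        self.transport = transport

    def get_buffer(self, sizehint):
        # The transport reads directly into this buffer (no extra copy).
        return self._buffer

    def buffer_updated(self, nbytes):
        # The first `nbytes` bytes of the buffer now contain new data.
        self.transport.write(bytes(self._buffer[:nbytes]))

    def eof_received(self):
        return False
```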
The proactor event loop has a race condition when reading with
pausing/resuming. `resume_reading()` unconditionally schedules the read
function to read from the current future. If `resume_reading()` was
called before the previously scheduled done callback fires, this results
in two attempts to get the data from the most recent read and an
assertion failure. This commit tracks whether or not `resume_reading`
needs to reschedule the callback to restart the loop, preventing a
second attempt to read the data.
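A simplified sketch of that flag-based fix, with illustrative attribute
names rather than the exact ones used by _ProactorReadPipeTransport:

```python
class _ReadTransportSketch:
    def __init__(self, loop):
        self._loop = loop
        self._paused = False
        # Set when a read completes while reading is paused.
        self._reschedule_on_resume = False

    def pause_reading(self):
        self._paused = True

    def resume_reading(self):
        if not self._paused:
            return
        self._paused = False
        if self._reschedule_on_resume:
            # Only reschedule if the completion callback already ran while
            # paused; otherwise the still-pending callback will restart the
            # loop itself, and scheduling here would read the same data twice.
            self._reschedule_on_resume = False
            self._loop.call_soon(self._loop_reading)

    def _loop_reading(self, fut=None):
        if self._paused:
            self._reschedule_on_resume = True
            return
        # ... consume fut's data and issue the next overlapped read ...
```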
* Make ssl_handshake_timeout None by default.
* Raise ValueError if ssl_handshake_timeout is used without ssl.
* Raise ValueError if ssl_handshake_timeout is not positive.
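A usage sketch of the resulting API (hostname, port and timeout values
are illustrative):

```python
import asyncio
import ssl

async def connect():
    ctx = ssl.create_default_context()
    loop = asyncio.get_running_loop()
    # Valid: ssl is given and the timeout is positive (None = default).
    await loop.create_connection(
        asyncio.Protocol, "example.com", 443,
        ssl=ctx, ssl_handshake_timeout=10.0)

    # Both of these now raise ValueError:
    #   loop.create_connection(asyncio.Protocol, "example.com", 80,
    #                          ssl_handshake_timeout=10.0)    # no ssl
    #   loop.create_connection(asyncio.Protocol, "example.com", 443,
    #                          ssl=ctx, ssl_handshake_timeout=0)  # not positive
```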
Issue #26604:
* Add a new optional source parameter to _warnings.warn() and warnings.warn()
* Modify asyncore, asyncio and _pyio modules to set the source parameter when
logging a ResourceWarning warning
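A sketch of the new parameter in use (the Resource class is
illustrative): passing source ties the ResourceWarning to the leaked
object, so tracemalloc (if enabled) can show where it was allocated.

```python
import warnings

class Resource:
    def __init__(self):
        self.closed = False

    def __del__(self):
        if not self.closed:
            warnings.warn(f"unclosed {self!r}", ResourceWarning,
                          source=self)

r = Resource()
del r   # with ResourceWarning enabled (e.g. -W always), a warning is emitted
```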
getaddrinfo takes an exclusive lock on some platforms, causing clients to queue
up waiting for the lock if many names are being resolved concurrently. Users
may want to handle name resolution in their own code, for the sake of caching,
using an alternate resolver, or to measure DNS duration separately from
connection duration. Skip getaddrinfo if the "host" passed into
create_connection is already resolved.
See https://github.com/python/asyncio/pull/302 for details.
Patch by A. Jesse Jiryu Davis.
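A sketch of the caller-side pattern this enables (function name is
hypothetical): resolve the name yourself, then hand a literal IP to
create_connection so that its own getaddrinfo() call is skipped.

```python
import asyncio
import socket

async def connect_with_own_resolution(host, port):
    loop = asyncio.get_running_loop()
    # Resolve in our own code: this is where caching or a custom resolver
    # would plug in, and where DNS time can be measured separately.
    infos = await loop.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    ip, resolved_port = infos[0][4][0], infos[0][4][1]
    # `ip` is a literal address, so create_connection's fast path applies
    # and no second getaddrinfo() lookup happens.
    return await loop.create_connection(asyncio.Protocol, ip, resolved_port)
```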
* _check_resolved_address() is implemented with getaddrinfo() which is slow
* If available, use socket.inet_pton() instead of socket.getaddrinfo(), because
it is much faster
Microbenchmark (timeit) on Fedora 21 (Python 3.4, Linux 3.17, glibc 2.20) to
validate the IPv4 address "127.0.0.1" or the IPv6 address "::1":
* getaddrinfo() 10.4 usec per loop
* inet_pton(): 0.285 usec per loop
On glibc older than 2.14, getaddrinfo() always requests the list of all local
IP addresses from the kernel (using a NETLINK socket). getaddrinfo() has other
known issues; it's better to avoid it when possible.
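A minimal sketch of the fast check (the helper name is hypothetical, and
it assumes socket.inet_pton() is available on the platform):

```python
import socket

def _is_ip_literal(host):
    # inet_pton() only parses the string: unlike getaddrinfo() it takes
    # no locks and sends no NETLINK requests, hence the large speedup.
    for family in (socket.AF_INET, socket.AF_INET6):
        try:
            socket.inet_pton(family, host)
        except OSError:
            continue
        return True
    return False
```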
* Also rephrase the comment explaining why the waiter is not awakened immediately.
* SSLProtocol.eof_received() doesn't instantiate the ConnectionResetError
exception directly; that is done by Future.set_exception(). The exception is
not used if the waiter was cancelled or if there is no waiter.
* Handle CancelledError correctly: just exit
* On error, log the exception and exit
Don't try to close the event loop: it is probably running and so cannot be
closed.
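A sketch of the handling pattern described above (names illustrative):

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

async def runner(work):
    try:
        await work
    except asyncio.CancelledError:
        return            # cancellation is expected: just exit
    except Exception:
        logger.exception("unexpected error")
        return            # don't close the event loop: it is still running
```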
* PipeHandle now uses None instead of -1 for a closed handle
* Sort imports in windows_utils.
* Fix test_events on Python older than 3.5. Skip SSL tests on the
ProactorEventLoop if ssl.MemoryBIO is missing
* Fix BaseEventLoop._create_connection_transport(): close the transport if
creating it fails, i.e. if waiting for the waiter raises an exception.
* _ProactorBasePipeTransport now sets _sock to None when the transport is
closed.
* Fix BaseSubprocessTransport.close(). Ignore pipes for which the protocol is
not set yet (still equal to None).
* TestLoop.close() now calls the close() method of the parent class
(BaseEventLoop).
* Cleanup BaseSelectorEventLoop: create the protocol on a separate line for
readability and to ease debugging.
* Fix BaseSubprocessTransport._kill_wait(). Set the _returncode attribute, so
close() doesn't try to terminate the process.
* Tests: explicitly close event loops and transports
* UNIX pipe transports: add closed/closing in repr(). Add "closed" or "closing"
state in the __repr__() method of _UnixReadPipeTransport and
_UnixWritePipeTransport classes.
The new SSL implementation is based on the new ssl.MemoryBIO which is only
available on Python 3.5. On Python 3.4 and older, the legacy SSL implementation
(using SSL_write, SSL_read, etc.) is used. The proactor event loop only
supports the new implementation.
The new asyncio.sslproto module adds the _SSLPipe, SSLProtocol and
_SSLProtocolTransport classes. _SSLPipe makes it possible to "wrap" or
"unwrap" a socket (switch between cleartext and SSL/TLS).
Patch written by Antoine Pitrou. sslproto.py is based on gruvi/ssl.py of the
gruvi project written by Geert Jansen.
This change adds SSL support to ProactorEventLoop on Python 3.5 and newer!
It also becomes possible to implement STARTTLS: switching a cleartext socket
to SSL.
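The mechanism underneath is ssl.MemoryBIO plus SSLContext.wrap_bio(); a
minimal sketch of driving a handshake through memory BIOs (the hostname
is illustrative):

```python
import ssl

ctx = ssl.create_default_context()
incoming = ssl.MemoryBIO()   # ciphertext received from the network
outgoing = ssl.MemoryBIO()   # ciphertext to be sent to the network
sslobj = ctx.wrap_bio(incoming, outgoing, server_hostname="example.com")

def handshake_step(data_from_network=b""):
    # Feed bytes read from the socket into the incoming BIO, try to make
    # handshake progress, and return whatever must be written back out.
    if data_from_network:
        incoming.write(data_from_network)
    try:
        sslobj.do_handshake()
    except (ssl.SSLWantReadError, ssl.SSLWantWriteError):
        pass                        # need more network I/O first
    return outgoing.read()          # handshake bytes to send, possibly b""
```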
Close the IocpProactor before closing the event loop. IocpProactor.close() can
call loop.call_soon(), which is forbidden when the event loop is closed.