Issue #26604:
* Add a new optional source parameter to _warnings.warn() and warnings.warn()
* Modify the asyncore, asyncio and _pyio modules to set the source parameter
when logging a ResourceWarning (see the sketch below)
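A minimal sketch of how the new parameter can be used; the helper function is
hypothetical, only warnings.warn() and its source parameter come from this
change:

    import warnings

    def warn_unclosed(resource):
        # Hypothetical helper: attach the leaked object via the new "source"
        # parameter so tracemalloc can show where it was allocated.
        warnings.warn("unclosed resource %r" % (resource,),
                      ResourceWarning, stacklevel=2, source=resource)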
getaddrinfo takes an exclusive lock on some platforms, causing clients to queue
up waiting for the lock if many names are being resolved concurrently. Users
may want to handle name resolution in their own code, for the sake of caching,
using an alternate resolver, or measuring DNS duration separately from
connection duration. Skip getaddrinfo if the "host" passed into
create_connection is already resolved.
See https://github.com/python/asyncio/pull/302 for details.
Patch by A. Jesse Jiryu Davis.
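A sketch of the intended usage, assuming the caller resolves the name itself;
the helper below is only an illustration, not part of asyncio:

    import asyncio
    import socket

    async def connect_with_own_resolution(loop, protocol_factory, host, port):
        # Resolve the name ourselves (e.g. to cache it or to time DNS
        # separately from the connection).
        infos = await loop.getaddrinfo(host, port, type=socket.SOCK_STREAM)
        family, _type, _proto, _name, sockaddr = infos[0]
        # Passing an already-resolved IP address lets create_connection()
        # skip its own getaddrinfo() call.
        return await loop.create_connection(protocol_factory, sockaddr[0],
                                            port, family=family)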
The previous approach of installing the coroutine wrapper in loop.set_debug()
and uninstalling it in loop.close() was very fragile. Most asyncio tests
do not call loop.close() at all. Since the coroutine wrapper is a global
setting, we have to make sure that it is only set while the loop is
running, and is automatically unset when the loop stops running.
Issue #24017.
* _check_resolved_address() is implemented with getaddrinfo(), which is slow
* If available, use socket.inet_pton() instead of socket.getaddrinfo(), because
it is much faster
Microbenchmark (timeit) on Fedora 21 (Python 3.4, Linux 3.17, glibc 2.20) to
validate the IPv4 address "127.0.0.1" or the IPv6 address "::1":
* getaddrinfo(): 10.4 usec per loop
* inet_pton(): 0.285 usec per loop
On glibc older than 2.14, getaddrinfo() always requests the list of all local
IP addresses from the kernel (using a NETLINK socket). getaddrinfo() has other
known issues; it's better to avoid it when possible (see the sketch below).
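A sketch of the faster check described above; the function name is ours:

    import socket

    def is_ip_literal(host, family=socket.AF_INET):
        # inet_pton() only parses the string, while getaddrinfo() may take a
        # lock and query the resolver; inet_pton() is not available on every
        # platform, hence the "if available" above.
        try:
            socket.inet_pton(family, host)
        except (OSError, ValueError):
            return False
        return True

    # is_ip_literal("127.0.0.1")             -> True
    # is_ip_literal("::1", socket.AF_INET6)  -> True
    # is_ip_literal("python.org")            -> False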
* Remove unused SSLProtocol._closing attribute
* test_sslproto: skip test if ssl module is missing
* Python issue #23208: Don't use the traceback of the current handle if we
already know the traceback of the source. The handle may be more relevant,
but having 3 tracebacks (handle, source, exception) makes the output more
difficult to read. The handle may be preferred later, but that requires more
work to make this choice.
In debug mode, BaseEventLoop._run_once() now sets the
BaseEventLoop._current_handle attribute to the handle currently being executed.
In release mode, or when no handle is being executed, the attribute is None.
BaseEventLoop.default_exception_handler() displays the traceback of the current
handle if available.
* PipeHandle now uses None instead of -1 for a closed handle
* Sort imports in windows_utils.
* Fix test_events on Python older than 3.5. Skip SSL tests on the
ProactorEventLoop if ssl.MemoryBIO is missing
* Fix BaseEventLoop._create_connection_transport(). Close the transport if
waiting for the transport creation (the waiter) raises an exception.
* _ProactorBasePipeTransport now sets _sock to None when the transport is
closed.
* Fix BaseSubprocessTransport.close(). Ignore pipes for which the protocol is
not set yet (still equal to None).
* TestLoop.close() now calls the close() method of the parent class
(BaseEventLoop).
* Cleanup BaseSelectorEventLoop: create the protocol on a separate line for
readability and to ease debugging.
* Fix BaseSubprocessTransport._kill_wait(). Set the _returncode attribute, so
close() doesn't try to terminate the process.
* Tests: explicitly close event loops and transports
* UNIX pipe transports: add the "closed" or "closing" state to the
__repr__() method of the _UnixReadPipeTransport and _UnixWritePipeTransport
classes.
asyncio.BaseEventLoop now uses the identifier of the current thread to ensure
that its methods are called from the thread running the event loop.
Before, the get_event_loop() method was used to check the thread, and no
exception was raised when the thread had no event loop. Now the methods always
raise an exception in debug mode when called from the wrong thread. This should
help to notice misuse of the API.
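A small example of the intended usage under the new check, assuming the loop
runs in the main thread:

    import asyncio
    import threading

    def from_other_thread(loop):
        # In debug mode, calling loop.call_soon() from this thread now raises
        # RuntimeError; call_soon_threadsafe() is the correct API here.
        loop.call_soon_threadsafe(print, "scheduled from another thread")

    loop = asyncio.get_event_loop()
    loop.set_debug(True)
    threading.Timer(0.1, from_other_thread, args=(loop,)).start()
    loop.call_later(0.5, loop.stop)
    loop.run_forever()
    loop.close()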
Patch written by Torsten Landschoff.
create_task(), call_at(), call_soon(), call_soon_threadsafe() and
run_in_executor() now raise an error if the event loop is closed.
functions:
* add_signal_handler()
* call_at()
* call_later()
* call_soon()
* call_soon_threadsafe()
* run_in_executor()
Also fix the error message of add_signal_handler() (use the correct name of
the function).
* PipeServer.close() now cancels the "accept pipe" future which cancels the
overlapped operation.
* Fix _SelectorTransport.__repr__() if the transport was closed
* Fix debug log in BaseEventLoop.create_connection(): get the socket object
from the transport, because the SSL transport closes the old socket and
creates a new SSL socket object. Also remove the _SelectorSslTransport._rawsock
attribute: it contained the closed socket (not very useful) and was unused.
* Issue #22063: socket operations (sock_recv, sock_sendall, sock_connect,
sock_accept) of the proactor event loop don't raise an exception in debug
mode if the socket is in blocking mode. Overlapped operations also work on
blocking sockets.
* Fix unit tests in debug mode: mock a non-blocking socket for socket
operations, which now raise an exception if the socket is blocking (see the
sketch below).
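A sketch of the expected usage with the selector event loop, where the
low-level sock_*() methods want a non-blocking socket; the address and payload
are placeholders:

    import asyncio
    import socket

    async def echo_once(loop, addr):
        sock = socket.socket()
        # In debug mode the event loop now complains if the socket is still
        # blocking, so switch it to non-blocking mode first.
        sock.setblocking(False)
        await loop.sock_connect(sock, addr)
        await loop.sock_sendall(sock, b"ping\n")
        data = await loop.sock_recv(sock, 1024)
        sock.close()
        return data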
* The _fatal_error() method of _UnixReadPipeTransport and _UnixWritePipeTransport
now logs all exceptions in debug mode
* Don't log expected errors in unit tests
* Tulip issue 200: _WaitHandleFuture._unregister_wait() now catches and logs
exceptions.
* Tulip issue 200: Log errors in debug mode instead of simply ignoring them.
* Tulip issue #184: Log subprocess events in debug mode
- Log stdin, stdout and stderr transports and protocols
- Log process identifier (pid)
- Log connection of pipes
- Log process exit
- Log Process.communicate() tasks: feed stdin, read stdout and stderr
- Add __repr__() method to many classes related to subprocesses
* Add BaseSubprocessTransport._pid attribute. Store the pid so it is still
accessible after the process has exited; this is more convenient for debugging.
* create_connection(): add the socket in the "connected to" debug log
* Clean up some docstrings and comments. Remove unused unimplemented
_read_from_self().
* Tulip issue #183: log socket events in debug mode
- Log most important socket events: socket connected, new client, connection
reset or closed by peer (EOF), etc.
- Log time elapsed in DNS resolution (getaddrinfo)
- Log pause/resume reading
- Log time of SSL handshake
- Log SSL handshake errors
- Add a __repr__() method to many classes
* Fix ProactorEventLoop() in debug mode. ProactorEventLoop._make_self_pipe()
no longer calls call_soon() directly, because call_soon() checks the current
loop and that check fails while the event loop is still being built.
* Cleanup the _ProactorReadPipeTransport constructor. No need to set the
_read_fut attribute to None again; it is already done in the base class.
- loop, waiters and active_count attributes are now private
- attach(), detach() and wakeup() methods are now private
The sockets attribute remains public.
* Tulip issue #182: Improve logs of BaseEventLoop._run_once()
- Don't log non-blocking poll
- Only log polling with a timeout if it gets events or if it timed out after
more than 1 second.
* Fix some pyflakes warnings: remove unused imports
- repr(Task) and repr(CoroWrapper) now also include where these objects were
created. If the coroutine is not a generator (does not use "yield from"), use
the location of the function, not the location of the coro() wrapper.
- Fix create_task(): truncate the traceback to hide the call to create_task().
- Tulip issue 185: Add a create_task() method to event loops. The create_task()
method can be overridden in a custom event loop to implement its own task
class; for example, the greenio and Pulsar projects use their own task class.
The create_task() method is now preferred over creating tasks directly with
the Task class (see the sketch below).
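A minimal sketch of such an override; the class names are hypothetical:

    import asyncio

    class TracingTask(asyncio.Task):
        """Stand-in for a project-specific task class."""

    class TracingEventLoop(asyncio.SelectorEventLoop):
        def create_task(self, coro):
            # Returning our own task class here is the whole point of making
            # create_task() an event loop method.
            return TracingTask(coro, loop=self)

    # Callers should now write loop.create_task(coro()) instead of
    # asyncio.Task(coro(), loop=loop).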
- tests: fix a warning
- fix typo in the name of a test function
- Update AbstractEventLoop: add new event loop methods; also update the unit test
- Sort imports
- Simplify/optimize iscoroutine(). Inline inspect.isgenerator(obj): replace it
with isinstance(obj, types.GeneratorType)
- CoroWrapper: check at runtime if Python has the yield-from bug #21209. If
Python has the bug, check if CoroWrapper.send() was called by yield-from to
decide if parameters must be unpacked or not.
- Fix "Task was destroyed but it is pending!" warning in
test_task_source_traceback()
In debug mode, save the traceback where Handle objects are created. Pass the
traceback to call_exception_handler() in the 'source_traceback' key.
The traceback is truncated to hide internal calls in asyncio and to show only
the traceback from user code.
Add tests for the new source_traceback, and a test for the 'Future/Task
exception was never retrieved' log.
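A sketch of how the key can be consumed from a custom exception handler; the
handler itself is ours, not part of asyncio:

    import asyncio
    import traceback

    def handler(loop, context):
        source_tb = context.get('source_traceback')
        if source_tb:
            # Frames come from traceback.extract_stack(), already truncated
            # to hide asyncio internals.
            print("object created at (most recent call last):")
            print(''.join(traceback.format_list(source_tb)))
        loop.default_exception_handler(context)

    loop = asyncio.get_event_loop()
    loop.set_debug(True)
    loop.set_exception_handler(handler)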
exception if the current loop is not None.
Guido van Rossum wrote:
"The behavior that you can set the loop to None (and keep track of it
explicitly) is part of the spec, and this should still be supported even in
debug mode. The behavior that we raise an error if you are caught having
multiple active loops per thread is just a debugging heuristic, and it
shouldn't break code that follows the spec."
Add BaseEventLoop._closed attribute and use it to check if the event loop was
closed or not, instead of checking different attributes in each subclass of
BaseEventLoop.
The run_forever() and run_until_complete() methods now raise a
RuntimeError('Event loop is closed') exception if the event loop was closed.
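For example, using a freshly closed loop:

    import asyncio

    loop = asyncio.new_event_loop()
    loop.close()
    try:
        loop.run_forever()
    except RuntimeError as exc:
        print(exc)   # Event loop is closed
    try:
        loop.call_soon(print, "never runs")
    except RuntimeError as exc:
        print(exc)   # call_soon() performs the same check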
BaseProactorEventLoop.close() now also cancels "accept futures".
Fix ResourceWarning: the create_connection(), create_datagram_endpoint() and
create_unix_server() methods of the event loop now close the newly created
socket on error.
loop in debug mode. Raise a RuntimeError if the event loop of the current
thread is different. The check should help to debug thread-safety issues.
Patch written by David Foster.
Also add a PYTHONASYNCIODEBUG environment variable to enable coroutine
debugging from Python startup, to be able to debug coroutines defined directly
in the asyncio module.
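The two ways to enable debug mode, sketched; the script name is a placeholder:

    # From code, for a loop created explicitly:
    import asyncio
    import logging

    logging.basicConfig(level=logging.DEBUG)   # show asyncio debug logs
    loop = asyncio.get_event_loop()
    loop.set_debug(True)

    # Or from the environment, so that coroutines created during import are
    # also wrapped:
    #     PYTHONASYNCIODEBUG=1 python myscript.py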