* Make PyTraceMalloc_Track() and PyTraceMalloc_Untrack() functions
public (remove the "_" prefix)
* Remove the _PyTraceMalloc_domain_t type: use unsigned int
directly.
* Document the functions.
Note: they are already tested in test_tracemalloc.
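A minimal sketch of the renamed API, here poked from Python via ctypes purely
for illustration (in C code you would call the functions directly); the domain
value and buffer size are arbitrary:

    import ctypes
    import tracemalloc

    api = ctypes.pythonapi
    # int PyTraceMalloc_Track(unsigned int domain, uintptr_t ptr, size_t size)
    api.PyTraceMalloc_Track.argtypes = (ctypes.c_uint, ctypes.c_void_p, ctypes.c_size_t)
    api.PyTraceMalloc_Track.restype = ctypes.c_int
    # int PyTraceMalloc_Untrack(unsigned int domain, uintptr_t ptr)
    api.PyTraceMalloc_Untrack.argtypes = (ctypes.c_uint, ctypes.c_void_p)
    api.PyTraceMalloc_Untrack.restype = ctypes.c_int

    DOMAIN = 1000                     # plain unsigned int, no special typedef
    tracemalloc.start()
    buf = ctypes.create_string_buffer(4096)
    addr = ctypes.addressof(buf)
    print(api.PyTraceMalloc_Track(DOMAIN, addr, 4096))   # 0 on success
    print(api.PyTraceMalloc_Untrack(DOMAIN, addr))       # 0 on success
    tracemalloc.stop()
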
- removes PY_WARN_ON_C_LOCALE build time flag
- locale coercion and compatibility warnings are now always compiled
in, but are off by default
- adds a PYTHONCOERCECLOCALE=warn runtime option to aid in
debugging potentially locale-related compatibility problems
Due to not-yet-resolved test failures on *BSD systems (including
Mac OS X), this also temporarily disables UTF-8 as a locale coercion
target, and skips testing the interpreter's behavior in the POSIX locale.
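A quick way to see the new mode in action (a sketch; whether a warning is
actually emitted depends on the platform's locale support):

    import os
    import subprocess
    import sys

    # Run a child interpreter in the C locale with PYTHONCOERCECLOCALE=warn so
    # that any locale coercion or compatibility warning is printed on stderr.
    env = dict(os.environ, LC_ALL="C", PYTHONCOERCECLOCALE="warn")
    proc = subprocess.run(
        [sys.executable, "-c", "import sys; print(sys.stdout.encoding)"],
        env=env, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        universal_newlines=True,
    )
    print(proc.stderr, end="")   # coercion warning, if one was triggered
    print(proc.stdout, end="")
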
When os.spawnv() fails while handling arguments, correctly free
argvlist: pass lastarg+1 rather than lastarg to free_string_array()
so that the first item is also freed.
* bpo-29591: Upgrade Modules/expat to libexpat 2.2
* bpo-29591: Restore Python changes on expat
* bpo-29591: Remove expat config of unsupported platforms
Remove the configuration (Modules/expat/*config.h) of unsupported
platforms:
* Amiga
* MacOS Classic on PPC32
* Open Watcom
* bpo-29591: Remove useless XML_HAS_SET_HASH_SALT
The XML_HAS_SET_HASH_SALT define in Modules/expat/expat.h has been
useless since our local expat copy was upgraded to expat 2.1 (it is now
expat 2.2.0).
When os.spawnve() fails while handling arguments, correctly free
argvlist: pass lastarg+1 rather than lastarg to free_string_array()
so that the first item is also freed.
The functions '_PyArg_ParseStack()' and
'_PyArg_UnpackStack()' were failing (with the error
"XXX() takes Y argument (Z given)") before
the function '_PyArg_NoStackKeywords()' was called.
Thus, the latter never got to raise its more meaningful
error: "XXX() takes no keyword arguments".
Fix a reference leak in _io._WindowsConsoleIO: PyUnicode_FSDecoder()
always initializes decodedname when it succeeds, and it does not clear
the input decodedname object.
Error messages shown when passing keyword arguments to some builtins
that don't support keyword arguments contained doubled parentheses: "()()".
The regression was introduced by bpo-30534.
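For illustration only (the exact set of affected builtins and the wording
vary), the class of message involved looks like the one below; after the fix
it contains a single "()" pair:

    try:
        len(obj=[1, 2, 3])    # a builtin that takes no keyword arguments
    except TypeError as exc:
        print(exc)            # e.g. "len() takes no keyword arguments"
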
If a server_hostname= value that fails IDNA decoding was passed to SSLContext.wrap_socket or SSLContext.wrap_bio, then the SSLContext object had a spurious Py_DECREF called on it, eventually leading to segfaults.
* bpo-30537: use PyNumber in itertools instead of PyLong
* bpo-30537: revert changes except to islice_new
* bpo-30537: test itertools.islice and add entry to Misc/NEWS
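With the PyNumber-based conversion, islice() arguments only need to support
__index__ rather than being exact ints (on interpreters that include this
change); a small sketch, where Index is just an illustrative stand-in for an
integer-like type from a third-party library:

    import itertools

    class Index:
        """Integer-like object that implements only __index__()."""
        def __init__(self, value):
            self.value = value
        def __index__(self):
            return self.value

    # start/stop/step no longer need to be real int objects.
    print(list(itertools.islice(range(10), Index(2), Index(8), Index(2))))   # [2, 4, 6]
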
* Simplify X.509 extension handling code
The previous implementation had grown organically over time, as OpenSSL's API evolved.
* Delete even more code
* bpo-30557: faulthandler now correctly filters and displays exception codes on Windows
* Adds test for non-fatal exceptions.
* Adds bpo number to comment.
* bpo-30544: _io._WindowsConsoleIO.write raises the wrong error when WriteConsoleW fails
Rather than saving the Python object and calling PyObject_IsTrue()
every time the boolean argument is used, call it only once and
save the C boolean value.
* bpo-16500: Allow registering at-fork handlers
* Address Serhiy's comments
* Add doc for new C API
* Add doc for new Python-facing function
* Add NEWS entry + doc nit
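A short sketch of the new Python-facing function (fork() is POSIX-only, so
this runs only on platforms that provide os.fork()):

    import os

    # Handlers are invoked around fork(): "before" in the parent just prior to
    # forking, the other two immediately after fork() in the parent and child.
    os.register_at_fork(
        before=lambda: print("about to fork"),
        after_in_parent=lambda: print("parent continues"),
        after_in_child=lambda: print("child starts"),
    )

    pid = os.fork()
    if pid == 0:
        os._exit(0)
    os.waitpid(pid, 0)
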
Drop the handshake_done and peer_cert members from the PySSLSocket struct. The
peer certificate can be acquired from the *SSL object directly.
SSL_get_peer_certificate() does not trigger any network activity.
Instead of manually tracking the handshake state, simply use
SSL_is_init_finished().
In combination these changes fix auto-handshake for non-blocking
MemoryBIO connections.
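For context, the non-blocking MemoryBIO pattern affected by this looks roughly
like the sketch below (the hostname is illustrative):

    import ssl

    ctx = ssl.create_default_context()
    incoming = ssl.MemoryBIO()
    outgoing = ssl.MemoryBIO()
    sslobj = ctx.wrap_bio(incoming, outgoing, server_hostname="example.org")

    try:
        sslobj.do_handshake()        # non-blocking: no peer data has arrived yet
    except ssl.SSLWantReadError:
        # The ClientHello now sits in `outgoing`; the application's transport
        # must ship it to the peer and feed the reply into `incoming`.
        print(len(outgoing.read()), "handshake bytes waiting to be sent")
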
Signed-off-by: Christian Heimes <christian@python.org>
PEP 432 specifies a number of large changes to interpreter startup code, including exposing a cleaner C-API. The major changes depend on a number of smaller changes. This patch includes all those smaller changes.
If we have a chain of generators/coroutines that are 'yield from'ing
each other, then resuming the stack works like:
- call send() on the outermost generator
- this enters _PyEval_EvalFrameDefault, which re-executes the
YIELD_FROM opcode
- which calls send() on the next generator
- which enters _PyEval_EvalFrameDefault, which re-executes the
YIELD_FROM opcode
- ...etc.
However, every time we enter _PyEval_EvalFrameDefault, the first thing
we do is to check for pending signals, and if there are any then we
run the signal handler. And if it raises an exception, then we
immediately propagate that exception *instead* of starting to execute
bytecode. This means that e.g. a SIGINT at the wrong moment can "break
the chain" – it can be raised in the middle of our yield from chain,
with the bottom part of the stack abandoned for the garbage collector.
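Concretely, the kind of chain in question looks like this (illustrative
sketch; a KeyboardInterrupt delivered while outer() or middle() is being
re-entered used to surface there instead of inside inner()):

    def inner():
        yield 1
        yield 2

    def middle():
        yield from inner()

    def outer():
        yield from middle()

    g = outer()
    # Each next() re-enters outer() and middle() via their YIELD_FROM opcodes
    # before finally resuming inner().
    print(next(g))
    print(next(g))
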
The fix is pretty simple: there's already a special case in
_PyEval_EvalFrameEx where it skips running signal handlers if the next
opcode is SETUP_FINALLY. (I don't see how this accomplishes anything
useful, but that's another story.) If we extend this check to also
skip running signal handlers when the next opcode is YIELD_FROM, then
that closes the hole – now the exception can only be raised at the
innermost stack frame.
This shouldn't have any performance implications, because the opcode
check happens inside the "slow path" after we've already determined
that there's a pending signal or something similar for us to process;
the vast majority of the time this isn't true and the new check
doesn't run at all.
Before, it was possible to get the following sequence of
events (especially on Windows, where the C-level signal handler for
SIGINT is run in a separate thread):
- SIGINT arrives
- trip_signal is called
- trip_signal writes to the wakeup fd
- the main thread wakes up from select()-or-equivalent
- the main thread checks for pending signals, but doesn't see any
- the main thread drains the wakeup fd
- the main thread goes back to sleep
- trip_signal sets is_tripped=1 and calls Py_AddPendingCall to notify
the main thread that it should run the Python-level signal handler
- the main thread doesn't notice because it's asleep
This has been causing repeated failures in the Trio test suite:
https://github.com/python-trio/trio/issues/119
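The wakeup-fd pattern involved looks roughly like this sketch (the socketpair
and timeout are illustrative; press Ctrl-C while it sits in select() to
exercise it interactively):

    import select
    import signal
    import socket

    # An event loop registers one end of a socket pair as the signal wakeup fd,
    # sleeps in select(), and drains the fd whenever it wakes up.
    r, w = socket.socketpair()
    r.setblocking(False)
    w.setblocking(False)
    signal.set_wakeup_fd(w.fileno())
    signal.signal(signal.SIGINT, lambda signum, frame: print("handler ran"))

    ready, _, _ = select.select([r], [], [], 5.0)
    if ready:
        r.recv(4096)   # drain the wakeup byte written by trip_signal
    # With the old ordering, the byte could land before is_tripped was set, so
    # a loop could wake here, see no pending handler work, and go back to sleep
    # without the Python-level handler having run.
    signal.set_wakeup_fd(-1)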