The current test_child_terminated_in_stopped_state() functional test
creates a child process which calls ptrace(PTRACE_TRACEME, 0, 0) and
then crashes (SIGSEGV). The problem is that calling os.waitpid() in the
parent process is not enough to clean up the child: the child process
remains alive, and so the unit test leaks a child process in a
strange state. Cleaning up the child process would require non-trivial,
possibly platform-specific code.
Remove the functional test and replace it with a unit test which
mocks os.waitpid() using a new _testcapi.W_STOPCODE() function to
test the WIFSTOPPED() path.
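A rough sketch (not the actual test code) of what the mocked path looks
like; the w_stopcode() helper below just mirrors the POSIX status layout
that _testcapi.W_STOPCODE() exposes, and the pid 12345 is made up:

    import os
    import unittest.mock

    def w_stopcode(sig):
        # Build a wait status for which os.WIFSTOPPED() is true and
        # os.WSTOPSIG() returns sig (same bit layout as POSIX W_STOPCODE()).
        return (sig << 8) | 0x7f

    status = w_stopcode(7)  # pretend the child stopped on signal 7
    with unittest.mock.patch("os.waitpid", return_value=(12345, status)):
        pid, sts = os.waitpid(12345, 0)
        assert os.WIFSTOPPED(sts)
        assert os.WSTOPSIG(sts) == 7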
(cherry picked from commit 7b7c6dcfff)
If we have a chain of generators/coroutines that are 'yield from'ing
each other, then resuming the stack works like:
- call send() on the outermost generator
- this enters _PyEval_EvalFrameDefault, which re-executes the
YIELD_FROM opcode
- which calls send() on the next generator
- which enters _PyEval_EvalFrameDefault, which re-executes the
YIELD_FROM opcode
- ...etc.
However, every time we enter _PyEval_EvalFrameDefault, the first thing
we do is to check for pending signals, and if there are any then we
run the signal handler. And if it raises an exception, then we
immediately propagate that exception *instead* of starting to execute
bytecode. This means that e.g. a SIGINT at the wrong moment can "break
the chain" – it can be raised in the middle of our yield from chain,
with the bottom part of the stack abandoned for the garbage collector.
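A minimal sketch of such a chain (the generator names here are
illustrative, not taken from any real code):

    def inner():
        value = yield "ready"        # innermost frame: after the fix, the only
        return value                 # place a signal's exception can surface

    def middle():
        return (yield from inner())  # resuming re-executes YIELD_FROM here

    def outer():
        return (yield from middle()) # ...and here

    gen = outer()
    next(gen)                        # runs down to the innermost yield
    try:
        # send() re-enters outer(), then middle(), then inner(); before the
        # fix, a SIGINT delivered while re-entering middle() could raise
        # KeyboardInterrupt there, abandoning inner() to the garbage collector.
        gen.send(42)
    except StopIteration as exc:
        print(exc.value)             # 42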
The fix is pretty simple: there's already a special case in
_PyEval_EvalFrameEx where it skips running signal handlers if the next
opcode is SETUP_FINALLY. (I don't see how this accomplishes anything
useful, but that's another story.) If we extend this check to also
skip running signal handlers when the next opcode is YIELD_FROM, then
that closes the hole – now the exception can only be raised at the
innermost stack frame.
This shouldn't have any performance implications, because the opcode
check happens inside the "slow path" after we've already determined
that there's a pending signal or something similar for us to process;
the vast majority of the time this isn't true and the new check
doesn't run at all.
(cherry picked from commit ab4413a7e9)
Issue #26058: Add a new private version to the builtin dict type, incremented
at each dictionary creation and at each dictionary change.
Implementation of PEP 509.
Issue #26530:
* Add C functions _PyTraceMalloc_Track() and _PyTraceMalloc_Untrack() to track
memory blocks using the tracemalloc module.
* Add _PyTraceMalloc_GetTraceback() to get the traceback of an object.
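For reference, the Python-level counterpart of that last helper is
tracemalloc.get_object_traceback(); a quick sketch of it (the pairing is my
reading of the entry, not stated in it):

    import tracemalloc

    tracemalloc.start()
    data = [bytes(1000) for _ in range(10)]
    tb = tracemalloc.get_object_traceback(data[0])
    print(tb)   # where data[0] was allocated, or None if it was not traced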
Issue #26563: Debug hooks on Python memory allocators now raise a fatal error
if functions of the PyMem_Malloc() family are called without holding the GIL.
Issue #26516:
* Add PYTHONMALLOC environment variable to set the Python memory
allocators and/or install debug hooks.
* PyMem_SetupDebugHooks() can now also be used on Python compiled in release
mode.
* The PYTHONMALLOCSTATS environment variable can now also be used on Python
compiled in release mode. It now has no effect if set to an empty string.
* In debug mode, debug hooks are now also installed on Python memory allocators
when Python is configured without pymalloc.
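A small launcher sketch (not part of the change) showing the environment
variables in use; the child command here is arbitrary:

    import os
    import subprocess
    import sys

    env = dict(os.environ,
               PYTHONMALLOC="debug",    # debug hooks on the default allocators
               PYTHONMALLOCSTATS="1")   # print pymalloc stats when the child exits
    subprocess.run([sys.executable, "-c", "print('hello')"], env=env)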
Issue #25274: sys.setrecursionlimit() now raises a RecursionError if the new
recursion limit is too low for the current recursion depth. Also modify
the "low-water mark" formula to make it monotonic. This mark is used to
decide when the overflowed flag of the thread state is reset.
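A hedged illustration of the new guard (the depth of 100 and the limit of 10
are arbitrary):

    import sys

    def dig(depth):
        if depth == 0:
            try:
                # The current recursion depth is far above 10 here, so the
                # call is refused instead of leaving the interpreter unable
                # to make any further calls.
                sys.setrecursionlimit(10)
            except RecursionError as exc:
                print("refused:", exc)
            return
        dig(depth - 1)

    dig(100)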
datetime.datetime now rounds microseconds to the nearest value with ties going
to the nearest even integer (ROUND_HALF_EVEN), like round(float), instead of
rounding towards -Infinity (ROUND_FLOOR).
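A small probe (not from the patch) around a half-microsecond tie, where the
change in rounding mode is observable:

    from datetime import datetime, timezone

    for ts in (0.0000004, 0.0000005, 0.0000006):
        dt = datetime.fromtimestamp(ts, tz=timezone.utc)
        # Previously the sub-microsecond part was truncated towards -Infinity;
        # now it is rounded to the nearest microsecond, ties to even,
        # matching round(float).
        print(ts, dt.microsecond)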
pytime API: replace _PyTime_ROUND_HALF_UP with _PyTime_ROUND_HALF_EVEN. Also
fix _PyTime_Divide() for negative numbers.
_PyTime_AsTimeval_impl() now reuses _PyTime_Divide() instead of reimplementing
rounding modes.
Known limitations of the current implementation:
- documentation changes are incomplete
- there's a reference leak I haven't tracked down yet
The leak is most visible by running:
./python -m test -R3:3 test_importlib
However, you can also see it by running:
./python -X showrefcount
Importing the array or _testmultiphase modules, and
then deleting them from both sys.modules and the local
namespace shows significant increases in the total
number of active references each cycle. By contrast,
with _testcapi (which continues to use single-phase
initialisation) the global refcounts stabilise after
a couple of cycles.
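A rough script version of that manual check (sys.gettotalrefcount() only
exists on debug builds; on a release build, run the same loop interactively
under -X showrefcount and watch the printed totals instead):

    import importlib
    import sys

    for _ in range(5):
        mod = importlib.import_module("array")  # array uses multi-phase init
        del sys.modules["array"]
        del mod
        if hasattr(sys, "gettotalrefcount"):
            # Keeps climbing while the leak is present; with _testcapi
            # (single-phase init) the totals level off after a few cycles.
            print(sys.gettotalrefcount())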