_testcapimodule.c must not include pycore_pathconfig.h, since it's an
internal header file.
Changes:
* Add _PyCoreConfig_AsDict() function to coreconfig.c.
* Remove pycore_pathconfig.h include from _testcapimodule.c.
* pycore_pathconfig.h now requires Py_BUILD_CORE to be defined.
* _testcapimodule.c compilation now fails if it's built with
Py_BUILD_CORE defined.
* Add pycore_lifecycle.h and pycore_pathconfig.h headers to
Include/internal/
* Move Py_BUILD_CORE specific code from coreconfig.h and
pylifecycle.h to pycore_pathconfig.h and pycore_lifecycle.h
* Move _Py_wstrlist_XXX() definitions and _PyPathConfig code
from pycore_state.h to pycore_pathconfig.h
* Move "Init" and "Fini" function definitions from pylifecycle.c to
pycore_lifecycle.h.
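The Py_BUILD_CORE requirement mentioned above is what keeps files like
_testcapimodule.c from pulling in the internal headers by accident. A minimal
sketch of the kind of guard involved (the exact wording of the #error message
is illustrative, not the verbatim CPython text):

    #ifndef Py_INTERNAL_PATHCONFIG_H
    #define Py_INTERNAL_PATHCONFIG_H
    #ifdef __cplusplus
    extern "C" {
    #endif

    #ifndef Py_BUILD_CORE
    #  error "this header requires Py_BUILD_CORE to be defined"
    #endif

    /* ... internal _PyPathConfig and _Py_wstrlist_XXX() declarations ... */

    #ifdef __cplusplus
    }
    #endif
    #endif   /* !Py_INTERNAL_PATHCONFIG_H */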
If Py_BUILD_CORE is defined, the PyThreadState_GET() macro accesses
_PyRuntime, which comes from the internal pycore_state.h header.
Public headers must not require internal headers.
Move PyThreadState_GET() and _PyInterpreterState_GET_UNSAFE() from
Include/pystate.h to Include/internal/pycore_state.h, and rename
PyThreadState_GET() to _PyThreadState_GET() there.
The PyThreadState_GET() macro of pystate.h is now redefined when
pycore_state.h is included, to use the fast _PyThreadState_GET().
Changes:
* Add _PyThreadState_GET() macro
* Replace "PyThreadState_GET()->interp" with
_PyInterpreterState_GET_UNSAFE()
* Replace PyThreadState_GET() with _PyThreadState_GET() in internal C
files (compiled with Py_BUILD_CORE defined), but keep
PyThreadState_GET() in the public header files.
* _testcapimodule.c: replace PyThreadState_GET() with
PyThreadState_Get(); the module is not compiled with Py_BUILD_CORE
defined.
* pycore_state.h now requires Py_BUILD_CORE to be defined.
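A sketch of what the internal header ends up providing; the exact definition
depends on the internal runtime state layout (_Py_atomic_load_relaxed() and
_PyRuntime.gilstate.tstate_current are taken from the internals of that era
and may differ):

    /* Include/internal/pycore_state.h (sketch) */
    #ifndef Py_BUILD_CORE
    #  error "this header requires Py_BUILD_CORE to be defined"
    #endif

    /* Fast path: read the current thread state directly from _PyRuntime,
       without the NULL check done by the public PyThreadState_Get(). */
    #define _PyThreadState_GET() \
        ((PyThreadState*)_Py_atomic_load_relaxed(&_PyRuntime.gilstate.tstate_current))

    /* When the internal header is included, the public macro is redefined
       to the fast variant. */
    #undef PyThreadState_GET
    #define PyThreadState_GET() _PyThreadState_GET()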
* Add Py_STATIC_INLINE() macro to declare a "static inline" function.
If the compiler supports it, try to always inline the function even if no
optimization level was specified.
* Modify pydtrace.h to use Py_STATIC_INLINE() when WITH_DTRACE is
not defined.
* Add a unit test on Py_DECREF() to make sure that
_Py_NegativeRefcount() reports the correct filename.
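A minimal sketch of what such a Py_STATIC_INLINE() macro can look like,
assuming GCC/Clang's always_inline attribute and MSVC's __forceinline (the
exact conditions used by CPython may differ):

    #if defined(__GNUC__) || defined(__clang__)
    #  define Py_STATIC_INLINE(TYPE) __attribute__((always_inline)) static inline TYPE
    #elif defined(_MSC_VER)
    #  define Py_STATIC_INLINE(TYPE) static __forceinline TYPE
    #else
    #  define Py_STATIC_INLINE(TYPE) static inline TYPE
    #endif

    /* Usage: inlined even in a build without optimizations (e.g. -O0). */
    Py_STATIC_INLINE(int)
    add_one(int x)
    {
        return x + 1;
    }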
* Revert "bpo-34589: Add -X coerce_c_locale command line option (GH-9378)"
This reverts commit dbdee0073c.
* Revert "bpo-34589: C locale coercion off by default (GH-9073)"
This reverts commit 7a0791b699.
* Revert "bpo-34589: Make _PyCoreConfig.coerce_c_locale private (GH-9371)"
This reverts commit 188ebfa475.
_PyCoreConfig:
* Rename coerce_c_locale to _coerce_c_locale
* Rename coerce_c_locale_warn to _coerce_c_locale_warn
These fields are now private (name prefixed by "_").
* Add _testcapi.get_coreconfig() to get the _PyCoreConfig of the
interpreter
* test.pythoninfo now gets the core configuration using
_testcapi.get_coreconfig()
Add support for the "surrogatepass" error handler in
PyUnicode_DecodeFSDefault() and PyUnicode_EncodeFSDefault()
for the UTF-8 encoding.
Changes:
* _Py_DecodeUTF8Ex() and _Py_EncodeUTF8Ex() now support the
surrogatepass error handler (_Py_ERROR_SURROGATEPASS).
* _Py_DecodeLocaleEx() and _Py_EncodeLocaleEx() now use
the _Py_error_handler enum instead of "int surrogateescape" to pass
the error handler. These functions now return -3 if the error
handler is unknown.
* Add unit tests on _Py_DecodeLocaleEx() and _Py_EncodeLocaleEx()
in test_codecs.
* Rename get_error_handler() to _Py_GetErrorHandler() and expose it
as a private function.
* _freeze_importlib no longer needs the config.filesystem_errors="strict"
workaround.
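For context, a hedged sketch of the enum-based error handler selection
described above; the member list is abridged and the helper name is made up
for the example (the real function is the _Py_GetErrorHandler() mentioned
above, whose exact shape may differ):

    #include <string.h>

    typedef enum {
        _Py_ERROR_UNKNOWN = 0,
        _Py_ERROR_STRICT,
        _Py_ERROR_SURROGATEESCAPE,
        _Py_ERROR_SURROGATEPASS,
        _Py_ERROR_OTHER
        /* ... other handlers elided ... */
    } _Py_error_handler;

    /* Map an error handler name to the enum; callers treat _Py_ERROR_OTHER
       as "unknown handler" and report the -3 error mentioned above. */
    static _Py_error_handler
    get_error_handler_example(const char *errors)
    {
        if (errors == NULL || strcmp(errors, "strict") == 0) {
            return _Py_ERROR_STRICT;
        }
        if (strcmp(errors, "surrogateescape") == 0) {
            return _Py_ERROR_SURROGATEESCAPE;
        }
        if (strcmp(errors, "surrogatepass") == 0) {
            return _Py_ERROR_SURROGATEPASS;
        }
        return _Py_ERROR_OTHER;
    }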
METH_NOARGS functions take only a single argument, but they are cast to
PyCFunction, which takes two arguments. This triggers an invalid function
cast warning in GCC 8 due to the argument mismatch. Fix this by adding a
dummy unused second argument.
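The resulting pattern looks roughly like this (module and function names are
made up for the example; Py_UNUSED() is CPython's existing macro for marking
an unused parameter):

    #include <Python.h>

    /* METH_NOARGS: the second parameter is always NULL, but declaring it
       makes the function match the PyCFunction signature exactly. */
    static PyObject *
    example_noargs(PyObject *module, PyObject *Py_UNUSED(ignored))
    {
        Py_RETURN_NONE;
    }

    static PyMethodDef example_methods[] = {
        {"noargs", example_noargs, METH_NOARGS, "Take no arguments."},
        {NULL, NULL, 0, NULL}
    };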
* Add timezone to datetime C API
* Add documentation for timezone C API macros
* Add dedicated tests for datetime type check macros
* Remove superfluous C API test
* Drop support for TimeZoneType in datetime C API
* Expose UTC singleton to the datetime C API
* Update datetime C-API documentation to include links
* Add reference count information for timezone constructors
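A hedged sketch of how the new UTC singleton can be used from C, assuming the
datetime C API has already been loaded with PyDateTime_IMPORT (error handling
trimmed; the function name is made up for the example):

    #include <Python.h>
    #include <datetime.h>     /* PyDateTime_IMPORT, PyDateTime_TimeZone_UTC */

    /* Return datetime.datetime.now(tz=datetime.timezone.utc). */
    static PyObject *
    utc_aware_now(void)
    {
        PyObject *utc = PyDateTime_TimeZone_UTC;   /* borrowed reference */
        return PyObject_CallMethod((PyObject *)PyDateTimeAPI->DateTimeType,
                                   "now", "O", utc);
    }

Fixed-offset zones can similarly be created with the new PyTimeZone_FromOffset()
and PyTimeZone_FromOffsetAndName() constructors.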
* Fix _PyMem_SetupAllocators("debug"): always restore allocators to
the defaults, rather than only calling _PyMem_SetupDebugHooks().
* Add _PyMem_SetDefaultAllocator() helper to set the "default"
allocator (see the usage sketch after this list).
* Add _PyMem_GetAllocatorsName(): get the name of the allocators
* main() now uses debug hooks on memory allocators if Py_DEBUG is
defined, rather than calling malloc() directly
* Document default memory allocators in C API documentation
* _Py_InitializeCore() now fails with a fatal user error if
PYTHONMALLOC value is an unknown memory allocator, instead of
failing with a fatal internal error.
* Add new tests on the PYTHONMALLOC environment variable
* Add support.with_pymalloc()
* Add the _testcapi.WITH_PYMALLOC constant and expose it as
support.with_pymalloc().
* sysconfig.get_config_var('WITH_PYMALLOC') doesn't work on Windows, so
replace it with support.with_pymalloc().
* pythoninfo: add _testcapi collector for pymem
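As a usage sketch for the _PyMem_SetDefaultAllocator() helper mentioned
above; its signature is assumed here from the description, while
PyMemAllocatorEx and PyMem_SetAllocator() are the existing public allocator
API:

    #include <Python.h>

    static void
    with_default_raw_allocator(void (*callback)(void))
    {
        PyMemAllocatorEx old_alloc;

        /* Temporarily force the "default" raw allocator (signature of the
           private helper assumed from the description above). */
        _PyMem_SetDefaultAllocator(PYMEM_DOMAIN_RAW, &old_alloc);

        /* Allocations here bypass debug hooks / custom allocators. */
        callback();

        /* Restore whatever allocator was installed before. */
        PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &old_alloc);
    }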
Add new time functions:
* time.clock_gettime_ns()
* time.clock_settime_ns()
* time.monotonic_ns()
* time.perf_counter_ns()
* time.process_time_ns()
* time.time_ns()
Add new _PyTime functions:
* _PyTime_FromTimespec()
* _PyTime_FromNanosecondsObject()
* _PyTime_FromTimeval()
Other changes:
* Also add os.times() tests to test_os.
* pytime_fromtimespec() and pytime_fromtimeval() now return
_PyTime_MAX or _PyTime_MIN on overflow, rather than having undefined
behaviour (see the sketch after this list)
* _PyTime_FromNanoseconds() parameter type changes from long long to
_PyTime_t
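A self-contained illustration of the "saturate instead of undefined
behaviour" rule referenced in the list above; the types and constants are
local to the example, not CPython's:

    #include <stdint.h>

    typedef int64_t ex_time_t;                 /* stand-in for _PyTime_t */
    #define EX_TIME_MAX INT64_MAX
    #define EX_TIME_MIN INT64_MIN
    #define EX_SEC_TO_NS 1000000000            /* nanoseconds per second */

    /* Convert seconds to nanoseconds, clamping on overflow instead of
       letting the multiplication overflow (undefined behaviour). */
    static ex_time_t
    seconds_to_ns_clamped(ex_time_t seconds)
    {
        if (seconds > EX_TIME_MAX / EX_SEC_TO_NS) {
            return EX_TIME_MAX;
        }
        if (seconds < EX_TIME_MIN / EX_SEC_TO_NS) {
            return EX_TIME_MIN;
        }
        return seconds * EX_SEC_TO_NS;
    }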
Cleanup pymalloc:
* Rename _PyObject_Alloc() to pymalloc_alloc()
* Rename _PyObject_FreeImpl() to pymalloc_free()
* Rename _PyObject_Realloc() to pymalloc_realloc()
* pymalloc_alloc() and pymalloc_realloc() no longer fall back to the raw
allocator; the caller is now responsible for that
* Add "success" and "failed" labels to pymalloc_alloc() and
pymalloc_free()
* pymalloc_alloc() and pymalloc_free() no longer update
num_allocated_blocks: this is now done by the caller
* _PyObject_Calloc() is now responsible for filling the memory block
allocated by pymalloc with zeros
* Simplify pymalloc_alloc() prototype
* _PyObject_Realloc() now calls _PyObject_Malloc() rather than calling
pymalloc_alloc() directly
_PyMem_DebugRawAlloc() and _PyMem_DebugRawRealloc():
* document the layout of a memory block
* don't increase the serial number if the allocation failed
* check for integer overflow before computing the total size (see the
sketch below)
* add a 'data' variable to make the code easier to follow
test_setallocators() of _testcapimodule.c now also tests the context.
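The integer overflow check called out in the list above amounts to rejecting
requests whose padded size would wrap around. A self-contained sketch of the
idea (the padding size and limit are illustrative, not the exact debug-hook
layout):

    #include <stddef.h>
    #include <stdint.h>

    #define GUARD_BYTES (4 * sizeof(size_t))   /* illustrative padding size */

    /* Compute the total size of a debug block: user request plus guard
       bytes. Fail (return -1) instead of letting the addition wrap. */
    static int
    debug_total_size(size_t nbytes, size_t *total)
    {
        if (nbytes > SIZE_MAX - GUARD_BYTES) {
            return -1;
        }
        *total = nbytes + GUARD_BYTES;
        return 0;
    }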
See PEP 539 for details.
Highlights of changes:
- Add Thread Specific Storage (TSS) API
- Document the Thread Local Storage (TLS) API as deprecated
- Update code that used TLS API to use TSS API
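For reference, typical use of the new TSS API (public names per PEP 539;
Py_tss_NEEDS_INIT statically initialises the key, and PyThread_tss_create()
is a no-op on a key that is already created):

    #include <Python.h>

    static Py_tss_t tss_key = Py_tss_NEEDS_INIT;

    /* Store a per-thread pointer. */
    static int
    store_thread_value(void *value)
    {
        if (PyThread_tss_create(&tss_key) != 0) {
            return -1;
        }
        return PyThread_tss_set(&tss_key, value);
    }

    /* Read the pointer back (NULL if nothing was stored in this thread). */
    static void *
    load_thread_value(void)
    {
        if (!PyThread_tss_is_created(&tss_key)) {
            return NULL;
        }
        return PyThread_tss_get(&tss_key);
    }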
The current test_child_terminated_in_stopped_state() test creates a
child process which calls ptrace(PTRACE_TRACEME, 0, 0) and then crashes
(SIGSEGV). The problem is that calling os.waitpid() in the parent
process is not enough to clean up the child: the child process remains
alive, and so the unit test leaks a child process in a strange state.
Closing the child process requires non-trivial, possibly
platform-specific code.
Remove the functional test and replace it with a unit test which mocks
os.waitpid() using a new _testcapi.W_STOPCODE() function to test the
WIFSTOPPED() path.
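A sketch of what such a _testcapi helper can look like (the C function name
and the fallback encoding of the status word are assumptions; platforms that
provide W_STOPCODE() in <sys/wait.h> can use it directly):

    #include <Python.h>
    #include <sys/wait.h>

    /* Build a waitpid() status for which WIFSTOPPED() is true and
       WSTOPSIG() returns the given signal number. */
    static PyObject *
    py_w_stopcode(PyObject *self, PyObject *args)
    {
        int sig;
        if (!PyArg_ParseTuple(args, "i", &sig)) {
            return NULL;
        }
    #ifdef W_STOPCODE
        return PyLong_FromLong(W_STOPCODE(sig));
    #else
        return PyLong_FromLong((sig << 8) | 0x7f);   /* common encoding */
    #endif
    }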
* Make PyTraceMalloc_Track() and PyTraceMalloc_Untrack() functions
public (remove the "_" prefix)
* Remove the _PyTraceMalloc_domain_t type: use unsigned int directly.
* Document methods
Note: methods are already tested in test_tracemalloc.
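Typical use of the now-public functions from an extension that allocates
memory outside of the Python allocators (the domain value and function names
are arbitrary for the example):

    #include <Python.h>
    #include <stdint.h>

    #define MY_DOMAIN 0x1000   /* arbitrary tracemalloc domain */

    /* Let tracemalloc account for a block Python's allocators never see. */
    static void
    track_external_buffer(void *ptr, size_t size)
    {
        (void)PyTraceMalloc_Track(MY_DOMAIN, (uintptr_t)ptr, size);
    }

    static void
    untrack_external_buffer(void *ptr)
    {
        (void)PyTraceMalloc_Untrack(MY_DOMAIN, (uintptr_t)ptr);
    }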
If we have a chain of generators/coroutines that are 'yield from'ing
each other, then resuming the stack works like:
- call send() on the outermost generator
- this enters _PyEval_EvalFrameDefault, which re-executes the
YIELD_FROM opcode
- which calls send() on the next generator
- which enters _PyEval_EvalFrameDefault, which re-executes the
YIELD_FROM opcode
- ...etc.
However, every time we enter _PyEval_EvalFrameDefault, the first thing
we do is to check for pending signals, and if there are any then we
run the signal handler. And if it raises an exception, then we
immediately propagate that exception *instead* of starting to execute
bytecode. This means that e.g. a SIGINT at the wrong moment can "break
the chain" – it can be raised in the middle of our yield from chain,
with the bottom part of the stack abandoned for the garbage collector.
The fix is pretty simple: there's already a special case in
_PyEval_EvalFrameEx where it skips running signal handlers if the next
opcode is SETUP_FINALLY. (I don't see how this accomplishes anything
useful, but that's another story.) If we extend this check to also
skip running signal handlers when the next opcode is YIELD_FROM, then
that closes the hole – now the exception can only be raised at the
innermost stack frame.
This shouldn't have any performance implications, because the opcode
check happens inside the "slow path" after we've already determined
that there's a pending signal or something similar for us to process;
the vast majority of the time this isn't true and the new check
doesn't run at all.
Issue #26058: Add a new private version to the builtin dict type, incremented
at each dictionary creation and at each dictionary change.
Implementation of PEP 509.
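A sketch of the kind of C-level cache invalidation the version tag enables
(the ma_version_tag field name is per PEP 509; this is a private CPython
detail, and the cache structure here is made up to illustrate the idea):

    #include <Python.h>
    #include <stdint.h>

    typedef struct {
        uint64_t cached_version;   /* ma_version_tag seen when caching */
        PyObject *cached_value;    /* strong reference to the cached result */
    } guard_cache;

    /* The cached value is still valid only if the dict has not been
       modified since the tag was recorded: PEP 509 guarantees the tag
       changes on every mutation. */
    static int
    cache_is_valid(PyObject *dict, const guard_cache *cache)
    {
        return ((PyDictObject *)dict)->ma_version_tag == cache->cached_version;
    }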
Issue #26530:
* Add C functions _PyTraceMalloc_Track() and _PyTraceMalloc_Untrack() to track
memory blocks using the tracemalloc module.
* Add _PyTraceMalloc_GetTraceback() to get the traceback of an object.
Issue #26563: Debug hooks on Python memory allocators now raise a fatal error
if functions of the PyMem_Malloc() family are called without holding the GIL.
Issue #26516:
* Add PYTHONMALLOC environment variable to set the Python memory
allocators and/or install debug hooks.
* PyMem_SetupDebugHooks() can now also be used on Python compiled in release
mode.
* The PYTHONMALLOCSTATS environment variable can now also be used on Python
compiled in release mode. It now has no effect if set to an empty string.
* In debug mode, debug hooks are now also installed on Python memory allocators
when Python is configured without pymalloc.
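A minimal sketch of the programmatic equivalent of PYTHONMALLOC=debug, now
possible in release builds as well (shown here before Py_Initialize(),
matching when the startup code applies PYTHONMALLOC):

    #include <Python.h>

    int
    main(void)
    {
        /* Wrap the current allocators with the debug hooks (detect buffer
           under/overflows and use of uninitialised or freed memory). */
        PyMem_SetupDebugHooks();

        Py_Initialize();
        PyRun_SimpleString("print('allocator debug hooks installed')");
        Py_Finalize();
        return 0;
    }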
Issue #25274: sys.setrecursionlimit() now raises a RecursionError if the new
recursion limit is too low for the current recursion depth. Also modify the
"low-water mark" formula to make it monotonic. This mark is used to decide
when the overflowed flag of the thread state is reset.
datetime.datetime now rounds microseconds to the nearest value, with ties
going to the nearest even integer (ROUND_HALF_EVEN), as done by round(float),
instead of rounding towards -Infinity (ROUND_FLOOR).
pytime API: replace _PyTime_ROUND_HALF_UP with _PyTime_ROUND_HALF_EVEN. Also
fix _PyTime_Divide() for negative numbers.
_PyTime_AsTimeval_impl() now reuses _PyTime_Divide() instead of reimplementing
rounding modes.
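A self-contained illustration of the ROUND_HALF_EVEN behaviour for
non-negative values (the real _PyTime_Divide() also handles negative numbers
and the other rounding modes):

    #include <stdint.h>

    /* Divide t by k, rounding halfway cases to the nearest even quotient,
       like round(float). Assumes t >= 0 and 0 < k <= INT64_MAX / 2. */
    static int64_t
    divide_half_even(int64_t t, int64_t k)
    {
        int64_t q = t / k;
        int64_t r = t % k;
        if (2 * r > k || (2 * r == k && (q % 2) == 1)) {
            q++;   /* remainder above half, or exactly half with odd quotient */
        }
        return q;
    }

    /* divide_half_even(1500000, 1000000) == 2  (1.5 rounds up to even 2)
       divide_half_even(2500000, 1000000) == 2  (2.5 rounds down to even 2) */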
Known limitations of the current implementation:
- documentation changes are incomplete
- there's a reference leak I haven't tracked down yet
The leak is most visible by running:
./python -m test -R3:3 test_importlib
However, you can also see it by running:
./python -X showrefcount
Importing the array or _testmultiphase modules, and
then deleting them from both sys.modules and the local
namespace shows significant increases in the total
number of active references each cycle. By contrast,
with _testcapi (which continues to use single-phase
initialisation) the global refcounts stabilise after
a couple of cycles.
* _PyTime_AsTimeval() now ensures that tv_usec is always positive
* _PyTime_AsTimespec() now ensures that tv_nsec is always positive
* _PyTime_AsTimeval() now returns an integer on overflow instead of raising an
exception
* Rename _PyTime_FromObject() to _PyTime_FromSecondsObject()
* Add _PyTime_AsNanosecondsObject() and _testcapi.pytime_fromsecondsobject()
* Add unit tests
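The "tv_usec is always positive" point above can be illustrated with a small
self-contained normalisation step (the struct and function are local to the
example, not CPython's code):

    #include <stdint.h>

    typedef struct {
        int64_t tv_sec;
        int32_t tv_usec;     /* always in [0; 999999] after normalisation */
    } ex_timeval;

    static ex_timeval
    timeval_from_us(int64_t us)
    {
        ex_timeval tv;
        tv.tv_sec = us / 1000000;
        tv.tv_usec = (int32_t)(us % 1000000);
        if (tv.tv_usec < 0) {
            /* e.g. -1.5 s becomes tv_sec=-2, tv_usec=500000 */
            tv.tv_usec += 1000000;
            tv.tv_sec -= 1;
        }
        return tv;
    }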
which returned an invalid result (result+error or no result without error) in
the exception message.
Also add a unit test to check that the exception contains the name of
the function.
Special case: the final _PyEval_EvalFrameEx() check doesn't mention the
function since it didn't execute a single function but a whole frame.
- Use _testcapi.raise_signal() in test_signal
- also close os.pipe() file descriptors in some test_signal tests where they
were not closed properly
- Remove faulthandler._sigill() and faulthandler._sigbus(): reuse
_testcapi.raise_signal() in test_faulthandler
PyObject_Calloc(), _PyObject_GC_Calloc(). bytes(int) and bytearray(int) now
use ``calloc()`` instead of ``malloc()`` for large objects, which is faster
and uses less memory (until the bytearray buffer is filled with data).
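Illustrative use of the calloc-style object allocator mentioned above: the
returned block is already zero-filled, so large zero-initialised buffers
avoid an explicit memset() (the helper name is made up for the example):

    #include <Python.h>

    /* Allocate an n-byte, zero-initialised scratch buffer. */
    static void *
    new_zeroed_buffer(size_t n)
    {
        void *buf = PyObject_Calloc(n, 1);   /* calloc()-style: zeroed */
        if (buf == NULL) {
            PyErr_NoMemory();
            return NULL;
        }
        return buf;    /* release with PyObject_Free(buf) */
    }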
The new syntax is highly human readable while still preventing false
positives. The syntax also extends Python syntax to denote "self" and
positional-only parameters, allowing inspect.Signature objects to be
totally accurate for all supported builtins in Python 3.4.
annotate text signatures in docstrings, resulting in fewer false
positives. "self" parameters are also explicitly marked, allowing
inspect.Signature() to authoritatively detect (and skip) said parameters.
Issue #20326: Argument Clinic now generates separate checksums for the
input and output sections of the block, allowing external tools to verify
that the input has not changed (and thus the output is not out-of-date).
PyMethodDescr_Type, _PyMethodWrapper_Type, and PyWrapperDescr_Type)
have been modified to provide introspection information for builtins.
Also: many additional Lib, test suite, and Argument Clinic fixes.
* You may now specify an expression as the default value for a
parameter! Example: "sys.maxsize - 1". This support is
intentionally quite limited; you may only use values that
can be represented as static C values.
* Removed "doc_default", simplified support for "c_default"
and "py_default". (I'm not sure we still even need
"py_default", but I'm leaving it in for now in case a
use presents itself.)
* Parameter lines support a trailing '\\' as a line
continuation character, allowing you to break up long lines.
* The argument parsing code generated when supporting optional
groups now uses PyTuple_GET_SIZE instead of PyTuple_GetSize,
leading to a 850% speedup in parsing. (Just kidding, this
is an unmeasurable difference.)
* A bugfix for the recent regression where the generated
prototype from pydoc for builtins would be littered with
unreadable "=<object ...>" default values for parameters
that had no default value.
* Converted some asserts into proper failure messages.
* Many doc improvements and fixes.
the function did nothing if the key already existed (i.e. if the current
value was a non-NULL pointer).
_testcapi.run_in_subinterp() now correctly sets the new Python thread state of
the current thread when a subinterpreter is created.
Fix a crash when a generator is created in a C thread that is destroyed while
the generator is still used. The issue was that a generator contains a frame,
and the frame kept a reference to the Python state of the destroyed C thread.
The crash occurs when a trace function is set up.