Fix invalid function cast warnings with gcc 8
for method conventions other than METH_NOARGS, METH_O and
METH_VARARGS, excluding Argument Clinic generated code.
* Fix _PyCoreConfig_SetGlobalConfig(): also set Py_FrozenFlag
* Fix _PyCoreConfig_AsDict(): also export xoptions
* Add _Py_GetGlobalVariablesAsDict() and _testcapi.get_global_config()
* test.pythoninfo: also dump global configuration variables
* _testembed now serializes global, core and main configurations
using JSON to reuse _Py_GetGlobalVariablesAsDict(),
_PyCoreConfig_AsDict() and _PyMainInterpreterConfig_AsDict(),
rather than duplicating code.
* test_embed.InitConfigTests now tests many more configuration
variables
* Fix _PyMainInterpreterConfig_Copy():
copy 'install_signal_handlers' attribute
* Add _PyMainInterpreterConfig_AsDict()
* Add unit tests on the main interpreter configuration
to test_embed.InitConfigTests
* test.pythoninfo: also log main_config
* _PyTuple_ITEMS() gives access to the tuple->ob_item field and casts the
first argument to PyTupleObject*. This internal macro is only usable if
Py_BUILD_CORE is defined (see the sketch below).
* Replace &PyTuple_GET_ITEM(ob, 0) with _PyTuple_ITEMS(ob).
* Replace &PyTuple_GET_ITEM(ob, 1) with &_PyTuple_ITEMS(ob)[1].
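A minimal sketch of what the macro does (the real definition lives in the
internal headers and requires Py_BUILD_CORE):

    /* Internal macro: cast to PyTupleObject* and expose ob_item directly. */
    #define _PyTuple_ITEMS(op) (((PyTupleObject *)(op))->ob_item)

    /* Replacements described above:
       &PyTuple_GET_ITEM(ob, 0)  ->  _PyTuple_ITEMS(ob)
       &PyTuple_GET_ITEM(ob, 1)  ->  &_PyTuple_ITEMS(ob)[1]  */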
_testcapimodule.c must not include pycore_pathconfig.h, since it's an
internal header file.
Changes:
* Add _PyCoreConfig_AsDict() function to coreconfig.c.
* Remove the pycore_pathconfig.h include from _testcapimodule.c.
* pycore_pathconfig.h now requires Py_BUILD_CORE to be defined (see the
guard sketch after this list).
* _testcapimodule.c compilation now fails if it's built with
Py_BUILD_CORE defined.
* Add pycore_lifecycle.h and pycore_pathconfig.h headers to
Include/internal/
* Move Py_BUILD_CORE specific code from coreconfig.h and
pylifecycle.h to pycore_pathconfig.h and pycore_lifecycle.h
* Move _Py_wstrlist_XXX() definitions and _PyPathConfig code
from pycore_state.h to pycore_pathconfig.h
* Move "Init" and "Fini" function definitions from pylifecycle.c to
pycore_lifecycle.h.
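A minimal sketch of the guard used by such internal headers to enforce this
(the error message wording is illustrative):

    #ifndef Py_BUILD_CORE
    #  error "this header requires Py_BUILD_CORE to be defined"
    #endif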
If Py_BUILD_CORE is defined, the PyThreadState_GET() macro accesses
_PyRuntime, which comes from the internal pycore_state.h header.
Public headers must not require internal headers.
Move PyThreadState_GET() and _PyInterpreterState_GET_UNSAFE() from
Include/pystate.h to Include/internal/pycore_state.h, and rename
PyThreadState_GET() to _PyThreadState_GET() there.
The PyThreadState_GET() macro of pystate.h is now redefined when
pycore_state.h is included, to use the fast _PyThreadState_GET().
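A rough sketch of the pattern, assuming the _PyRuntime-based thread state
storage described above (not necessarily the exact CPython code):

    /* Include/internal/pycore_state.h -- requires Py_BUILD_CORE */
    #define _PyThreadState_GET() \
        ((PyThreadState*)_Py_atomic_load_relaxed(&_PyRuntime.gilstate.tstate_current))

    /* Redefine the public macro so core files get the fast path transparently. */
    #undef PyThreadState_GET
    #define PyThreadState_GET() _PyThreadState_GET()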
Changes:
* Add _PyThreadState_GET() macro
* Replace "PyThreadState_GET()->interp" with
_PyInterpreterState_GET_UNSAFE()
* Replace PyThreadState_GET() with _PyThreadState_GET() in internal C
files (compiled with Py_BUILD_CORE defined), but keep
PyThreadState_GET() in the public header files.
* _testcapimodule.c: replace PyThreadState_GET() with
PyThreadState_Get(); the module is not compiled with Py_BUILD_CORE
defined.
* pycore_state.h now requires Py_BUILD_CORE to be defined.
* Add Py_STATIC_INLINE() macro to declare a "static inline" function.
If the compiler supports it, try to always inline the function, even if no
optimization level was specified (a hypothetical sketch follows this list).
* Modify pydtrace.h to use Py_STATIC_INLINE() when WITH_DTRACE is
not defined.
* Add a unit test on Py_DECREF() to make sure that
_Py_NegativeRefcount() reports the correct filename.
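A hypothetical sketch of the Py_STATIC_INLINE() macro mentioned above,
assuming GCC/Clang function attributes (other compilers would fall back to a
plain "static inline"):

    #if defined(__GNUC__) || defined(__clang__)
    #  define Py_STATIC_INLINE(TYPE) static inline TYPE __attribute__((always_inline))
    #else
    #  define Py_STATIC_INLINE(TYPE) static inline TYPE
    #endif

    /* pydtrace.h-style no-op stub when WITH_DTRACE is not defined: */
    Py_STATIC_INLINE(void) PyDTrace_GC_START(int arg) { (void)arg; }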
* Revert "bpo-34589: Add -X coerce_c_locale command line option (GH-9378)"
This reverts commit dbdee0073c.
* Revert "bpo-34589: C locale coercion off by default (GH-9073)"
This reverts commit 7a0791b699.
* Revert "bpo-34589: Make _PyCoreConfig.coerce_c_locale private (GH-9371)"
This reverts commit 188ebfa475.
_PyCoreConfig:
* Rename coerce_c_locale to _coerce_c_locale
* Rename coerce_c_locale_warn to _coerce_c_locale_warn
These fields are now private (name prefixed by "_").
* Add _testcapi.get_coreconfig() to get the _PyCoreConfig of the
interpreter
* test.pythoninfo now gets the core configuration using
_testcapi.get_coreconfig()
Add support for the "surrogatepass" error handler in
PyUnicode_DecodeFSDefault() and PyUnicode_EncodeFSDefault()
for the UTF-8 encoding.
Changes:
* _Py_DecodeUTF8Ex() and _Py_EncodeUTF8Ex() now support the
surrogatepass error handler (_Py_ERROR_SURROGATEPASS).
* _Py_DecodeLocaleEx() and _Py_EncodeLocaleEx() now use
the _Py_error_handler enum instead of "int surrogateescape" to pass
the error handler. These functions now return -3 if the error
handler is unknown.
* Add unit tests on _Py_DecodeLocaleEx() and _Py_EncodeLocaleEx()
in test_codecs.
* Rename get_error_handler() to _Py_GetErrorHandler() and expose it
as a private function.
* _freeze_importlib doesn't need config.filesystem_errors="strict"
workaround anymore.
METH_NOARGS functions need only a single argument, but they are cast to
PyCFunction, which takes two arguments. This triggers an invalid function
cast warning in gcc 8 due to the argument mismatch.
Fix this by adding a dummy unused argument.
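A minimal sketch of the fix (module and function names are hypothetical):
the METH_NOARGS implementation takes a second, always-NULL argument so that
its type matches PyCFunction and gcc 8 no longer warns about the cast.

    static PyObject *
    mymod_noargs(PyObject *self, PyObject *Py_UNUSED(ignored))
    {
        Py_RETURN_NONE;
    }

    static PyMethodDef mymod_methods[] = {
        {"noargs", (PyCFunction)mymod_noargs, METH_NOARGS, "takes no arguments"},
        {NULL, NULL, 0, NULL}
    };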
* Add timezone to datetime C API (a usage sketch follows this list)
* Add documentation for timezone C API macros
* Add dedicated tests for datetime type check macros
* Remove superfluous C API test
* Drop support for TimeZoneType in datetime C API
* Expose UTC singleton to the datetime C API
* Update datetime C-API documentation to include links
* Add reference count information for timezone constructors
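A short usage sketch of the new constructors (the function name is
hypothetical; PyDateTime_IMPORT is normally done once, e.g. in a module init
function):

    #include <Python.h>
    #include <datetime.h>

    static PyObject *
    make_fixed_offset_tz(void)
    {
        if (PyDateTimeAPI == NULL) {
            PyDateTime_IMPORT;                           /* load the datetime C API */
            if (PyDateTimeAPI == NULL) {
                return NULL;
            }
        }
        PyObject *delta = PyDelta_FromDSU(0, 3600, 0);   /* offset of +1 hour */
        if (delta == NULL) {
            return NULL;
        }
        PyObject *tz = PyTimeZone_FromOffset(delta);     /* new constructor */
        Py_DECREF(delta);
        return tz;   /* the UTC singleton is exposed as PyDateTime_TimeZone_UTC */
    }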
* Fix _PyMem_SetupAllocators("debug"): always restore allocators to
the defaults, rather than only calling _PyMem_SetupDebugHooks().
* Add _PyMem_SetDefaultAllocator() helper to set the "default"
allocator (see the sketch after this list).
* Add _PyMem_GetAllocatorsName(): get the name of the allocators
* main() now uses debug hooks on memory allocators if Py_DEBUG is
defined, rather than calling malloc() directly
* Document default memory allocators in C API documentation
* _Py_InitializeCore() now fails with a fatal user error if
PYTHONMALLOC value is an unknown memory allocator, instead of
failing with a fatal internal error.
* Add new tests on the PYTHONMALLOC environment variable
* Add support.with_pymalloc()
* Add the _testcapi.WITH_PYMALLOC constant and expose it as
support.with_pymalloc().
* sysconfig.get_config_var('WITH_PYMALLOC') doesn't work on Windows, so
replace it with support.with_pymalloc().
* pythoninfo: add _testcapi collector for pymem
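A minimal sketch of how the _PyMem_SetDefaultAllocator() helper is meant to
be used (the restore step uses the public PyMem_SetAllocator() API):

    PyMemAllocatorEx old_alloc;
    _PyMem_SetDefaultAllocator(PYMEM_DOMAIN_RAW, &old_alloc);
    /* allocations here go through the known-good "default" raw allocator */
    char *p = PyMem_RawMalloc(100);
    PyMem_RawFree(p);
    PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &old_alloc);   /* restore */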
Add new time functions:
* time.clock_gettime_ns()
* time.clock_settime_ns()
* time.monotonic_ns()
* time.perf_counter_ns()
* time.process_time_ns()
* time.time_ns()
Add new _PyTime functions (a conversion sketch follows below):
* _PyTime_FromTimespec()
* _PyTime_FromNanosecondsObject()
* _PyTime_FromTimeval()
Other changes:
* Also add os.times() tests to test_os.
* pytime_fromtimeval() now returns _PyTime_MAX or _PyTime_MIN on
overflow, rather than relying on undefined behaviour
* _PyTime_FromNanoseconds() parameter type changes from long long to
_PyTime_t
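A sketch of using the new conversion helper, assuming the (result pointer,
input pointer) convention of the other _PyTime_From* functions:

    #include <time.h>

    struct timespec ts;
    _PyTime_t t;
    if (clock_gettime(CLOCK_REALTIME, &ts) == 0) {
        if (_PyTime_FromTimespec(&t, &ts) < 0) {
            /* conversion failed (e.g. overflow): an error is reported */
        }
    }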
Cleanup pymalloc:
* Rename _PyObject_Alloc() to pymalloc_alloc()
* Rename _PyObject_FreeImpl() to pymalloc_free()
* Rename _PyObject_Realloc() to pymalloc_realloc()
* pymalloc_alloc() and pymalloc_realloc() no longer fall back to the raw
allocator; the caller must now do that (see the sketch after this list)
* Add "success" and "failed" labels to pymalloc_alloc() and
pymalloc_free()
* pymalloc_alloc() and pymalloc_free() no longer update
num_allocated_blocks: it is now done by the caller
* _PyObject_Calloc() is now responsible for filling the memory block
allocated by pymalloc with zeros
* Simplify pymalloc_alloc() prototype
* _PyObject_Realloc() now calls _PyObject_Malloc() rather than
calling pymalloc_alloc() directly
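A simplified sketch of the caller-side pattern described above (not the
exact CPython code): pymalloc_alloc() no longer falls back to the raw
allocator, so its caller performs the fallback and updates
num_allocated_blocks.

    static void *
    _PyObject_Malloc(void *ctx, size_t nbytes)
    {
        void *ptr = pymalloc_alloc(ctx, nbytes);
        if (ptr == NULL) {
            /* large request, or pymalloc could not serve it: fall back to
               the raw allocator */
            ptr = PyMem_RawMalloc(nbytes);
        }
        if (ptr != NULL) {
            num_allocated_blocks++;   /* counting is now done by the caller */
        }
        return ptr;
    }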
_PyMem_DebugRawAlloc() and _PyMem_DebugRawRealloc():
* document the layout of a memory block
* don't increase the serial number if the allocation failed
* check for integer overflow before computing the total size (see the
sketch below)
* add a 'data' variable to make the code easier to follow
test_setallocators() of _testcapimodule.c now also tests the context.
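A sketch of the overflow check mentioned above, assuming a hypothetical
DEBUG_OVERHEAD constant for the extra debug header and trailer bytes: the
total size is only computed once it is known not to overflow.

    if (nbytes > (size_t)PY_SSIZE_T_MAX - DEBUG_OVERHEAD) {
        return NULL;   /* report an allocation failure instead of overflowing */
    }
    size_t total = nbytes + DEBUG_OVERHEAD;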
See PEP 539 for details.
Highlights of changes:
- Add Thread Specific Storage (TSS) API
- Document the Thread Local Storage (TLS) API as deprecated
- Update code that used the TLS API to use the TSS API (see the sketch
below)
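A short sketch of the new TSS API (the key type and function names follow
PEP 539; the surrounding code is illustrative):

    static Py_tss_t my_key = Py_tss_NEEDS_INIT;

    int
    store_value(void *value)
    {
        /* PyThread_tss_create() is a no-op on an already-created key */
        if (PyThread_tss_create(&my_key) != 0) {
            return -1;
        }
        return PyThread_tss_set(&my_key, value);
    }

    void *
    load_value(void)
    {
        return PyThread_tss_get(&my_key);
    }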
The current test_child_terminated_in_stopped_state() test creates a child
process which calls ptrace(PTRACE_TRACEME, 0, 0) and then crashes
(SIGSEGV). The problem is that calling os.waitpid() in the parent process
is not enough to close the child process: it remains alive, and so the
unit test leaks a child process in a strange state. Closing the child
process requires non-trivial, possibly platform-specific code.
Remove the functional test and replace it with a unit test which
mocks os.waitpid() using a new _testcapi.W_STOPCODE() function to
test the WIFSTOPPED() path.
* Make PyTraceMalloc_Track() and PyTraceMalloc_Untrack() functions
public (remove the "_" prefix)
* Remove the _PyTraceMalloc_domain_t type: use unsigned int directly
(see the sketch below).
* Document methods
Note: methods are already tested in test_tracemalloc.
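A sketch of tracking a block allocated outside of the Python allocators
with the now-public functions (the domain is a plain unsigned int chosen by
the caller; 0 is used here only for illustration):

    void *block = malloc(4096);
    if (PyTraceMalloc_Track(0, (uintptr_t)block, 4096) < 0) {
        /* tracemalloc is not tracing, or an internal error occurred */
    }
    /* ... use the block ... */
    PyTraceMalloc_Untrack(0, (uintptr_t)block);
    free(block);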
If we have a chain of generators/coroutines that are 'yield from'ing
each other, then resuming the stack works like:
- call send() on the outermost generator
- this enters _PyEval_EvalFrameDefault, which re-executes the
YIELD_FROM opcode
- which calls send() on the next generator
- which enters _PyEval_EvalFrameDefault, which re-executes the
YIELD_FROM opcode
- ...etc.
However, every time we enter _PyEval_EvalFrameDefault, the first thing
we do is to check for pending signals, and if there are any then we
run the signal handler. And if it raises an exception, then we
immediately propagate that exception *instead* of starting to execute
bytecode. This means that e.g. a SIGINT at the wrong moment can "break
the chain" – it can be raised in the middle of our yield from chain,
with the bottom part of the stack abandoned for the garbage collector.
The fix is pretty simple: there's already a special case in
_PyEval_EvalFrameEx where it skips running signal handlers if the next
opcode is SETUP_FINALLY. (I don't see how this accomplishes anything
useful, but that's another story.) If we extend this check to also
skip running signal handlers when the next opcode is YIELD_FROM, then
that closes the hole – now the exception can only be raised at the
innermost stack frame.
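A simplified sketch of the check in the eval loop's slow path (not the exact
CPython code): before running pending calls and signal handlers, peek at the
next opcode and defer if it is SETUP_FINALLY or YIELD_FROM.

    if (_Py_OPCODE(*next_instr) == SETUP_FINALLY ||
        _Py_OPCODE(*next_instr) == YIELD_FROM)
    {
        /* defer, so a signal's exception can only be raised in the
           innermost frame */
        goto fast_next_opcode;
    }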
This shouldn't have any performance implications, because the opcode
check happens inside the "slow path" after we've already determined
that there's a pending signal or something similar for us to process;
the vast majority of the time this isn't true and the new check
doesn't run at all.