Each interpreter now has its own context free list:
* Move context free list into PyInterpreterState.
* Add _Py_context_state structure.
* Add tstate parameter to _PyContext_ClearFreeList()
and _PyContext_Fini().
* Pass tstate to clear_freelists().
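The resulting pattern looks roughly like this (a sketch only; the exact
field and member names in pycore_interp.h may differ):
```
/* Sketch: per-interpreter context free list instead of a C static. */
struct _Py_context_state {
    PyContext *freelist;   /* chain of free PyContext objects */
    int numfree;
};

/* Free-list functions now take the thread state and reach the cache
   through the interpreter, so each interpreter clears only its own list. */
void
_PyContext_ClearFreeList(PyThreadState *tstate)
{
    struct _Py_context_state *state = &tstate->interp->context;  /* field name illustrative */
    /* ... walk state->freelist and free each entry ... */
    state->numfree = 0;
}
```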
Each interpreter now has its own asynchronous generator free lists:
* Move async gen free lists into PyInterpreterState.
* Move _PyAsyncGen_MAXFREELIST define to pycore_interp.h
* Add _Py_async_gen_state structure.
* Add tstate parameter to _PyAsyncGen_ClearFreeLists()
and _PyAsyncGen_Fini().
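The async generator case groups two caches (genobject.c keeps free lists
for the wrapped-value and asend helper objects); a rough sketch, with
field names as assumptions rather than the actual definition:
```
/* Sketch: the two async generator caches in one per-interpreter struct. */
struct _Py_async_gen_state {
    /* cache of _PyAsyncGenWrappedValue objects */
    struct _PyAsyncGenWrappedValue *value_freelist[_PyAsyncGen_MAXFREELIST];
    int value_numfreelist;
    /* cache of PyAsyncGenASend objects */
    struct PyAsyncGenASend *asend_freelist[_PyAsyncGen_MAXFREELIST];
    int asend_numfreelist;
};
```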
Each interpreter now has its own list free list:
* Move list numfree and free_list into PyInterpreterState.
* Add _Py_list_state structure.
* Add tstate parameter to _PyList_ClearFreeList()
and _PyList_Fini().
* Remove "#ifdef EXPERIMENTAL_ISOLATED_SUBINTERPRETERS".
* _PyGC_Fini() clears the gcstate->garbage list, which can end up stored
  in the list free list. Call _PyGC_Fini() before _PyList_Fini() to
  prevent leaking this list, as sketched below.
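A simplified sketch of that ordering constraint (the surrounding
finalization function name is illustrative):
```
/* Sketch of the finalization order described above. */
static void
finalize_interp_types(PyThreadState *tstate)   /* illustrative name */
{
    /* _PyGC_Fini() clears gcstate->garbage, a Python list; deallocating it
       may push that list object onto the per-interpreter list free list. */
    _PyGC_Fini(tstate);

    /* Free the list free list only afterwards, so that entry is not leaked. */
    _PyList_Fini(tstate);
}
```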
Each interpreter now has its own frame free list:
* Move frame free list into PyInterpreterState.
* Add _Py_frame_state structure.
* Add tstate parameter to _PyFrame_ClearFreeList()
and _PyFrame_Fini().
* Remove "#if PyFrame_MAXFREELIST > 0".
* Remove "#ifdef EXPERIMENTAL_ISOLATED_SUBINTERPRETERS".
Each interpreter now has its own float free list:
* Move float numfree and free_list into PyInterpreterState.
* Add _Py_float_state structure.
* Add tstate parameter to _PyFloat_ClearFreeList()
and _PyFloat_Fini().
Each interpreter now has its own tuple free lists:
* Move tuple numfree and free_list arrays into PyInterpreterState.
* Define PyTuple_MAXSAVESIZE and PyTuple_MAXFREELIST macros in
pycore_interp.h.
* Add _Py_tuple_state structure. Pass it explicitly to tuple_alloc().
* Add tstate parameter to _PyTuple_ClearFreeList().
* Each interpreter now has its own empty tuple singleton.
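Unlike the other caches, the tuple state keeps one free list per tuple
size; a sketch under the assumption that index 0 holds the empty tuple
singleton (exact field names may differ):
```
/* Sketch: per-interpreter tuple free lists, indexed by tuple size. */
struct _Py_tuple_state {
    /* free_list[0] is assumed to hold the empty tuple singleton;
       free_list[1..PyTuple_MAXSAVESIZE-1] chain freed small tuples,
       each chain capped at PyTuple_MAXFREELIST entries. */
    PyTupleObject *free_list[PyTuple_MAXSAVESIZE];
    int numfree[PyTuple_MAXSAVESIZE];
};
```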
my_fgets() now calls _PyOS_InterruptOccurred(tstate) to check for
pending signals, rather than calling PyOS_InterruptOccurred().
my_fgets() is called with the GIL released, whereas
PyOS_InterruptOccurred() must be called with the GIL held.
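A simplified sketch of the resulting pattern; the helper name is
illustrative, and _PyOS_InterruptOccurred() is internal C API that takes
the thread state explicitly, which is what allows calling it without
holding the GIL:
```
/* Sketch: poll for pending signals while reading with the GIL released. */
static int
read_line(FILE *fp, char *buf, int len, PyThreadState *tstate)  /* illustrative */
{
    while (fgets(buf, len, fp) == NULL) {
        if (_PyOS_InterruptOccurred(tstate)) {
            /* A signal is pending: let the caller raise KeyboardInterrupt. */
            return -1;
        }
        if (feof(fp)) {
            return 0;   /* EOF */
        }
        clearerr(fp);   /* e.g. EINTR with no pending signal: retry */
    }
    return 1;           /* got a line */
}
```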
test_repl: use text=True and avoid SuppressCrashReport in
test_multiline_string_parsing().
Fix my_fgets() on Windows: fgets(fp) crashes if fileno(fp) is closed.
PyOS_AfterFork_Child() helper functions now return a PyStatus:
PyOS_AfterFork_Child() is now responsible for handling errors.
* Move _PySignal_AfterFork() to the internal C API
* Add #ifdef HAVE_FORK on _PyGILState_Reinit(), _PySignal_AfterFork()
and _PyInterpreterState_DeleteExceptMain().
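A sketch of the new error-handling shape, using only the public PyStatus
helpers; reinit_foo() is a stand-in for helpers such as
_PyGILState_Reinit(), not an actual function:
```
/* Sketch: after-fork helpers report failures through PyStatus. */
static PyStatus
reinit_foo(PyThreadState *tstate)          /* stand-in for a real helper */
{
    if (0 /* reinitialization failed */) {
        return PyStatus_Error("foo: reinitialization failed");
    }
    return PyStatus_Ok();
}

void
PyOS_AfterFork_Child(void)
{
    PyThreadState *tstate = PyThreadState_Get();

    PyStatus status = reinit_foo(tstate);
    if (PyStatus_Exception(status)) {
        /* The caller now handles errors instead of ignoring them. */
        Py_ExitStatusException(status);
    }
    /* ... run the remaining after-fork helpers the same way ... */
}
```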
Fix :mod:`ssl` code to be compatible with OpenSSL 1.1.x builds that use
``no-deprecated`` and ``--api=1.1.0``.
Note: Tests assume full OpenSSL API and fail with limited API.
Signed-off-by: Christian Heimes <christian@python.org>
Co-authored-by: Mark Wright <gienah@gentoo.org>
Recent changes to _datetimemodule broke compilation on mingw; see the comments in this change for details.
FWIW, @corona10: this issue is why `PyType_FromModuleAndSpec` & friends take the `bases` argument at run time.
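The point is that the bases can be supplied as a runtime object rather
than baked into a static initializer, which matters on toolchains such
as mingw where the address of a type imported from another DLL is
typically not usable as a compile-time constant. A hedged sketch; the
spec, module and base below are all placeholders:
```
/* Sketch: build the bases tuple at run time and pass it to
   PyType_FromModuleAndSpec(); all names here are placeholders. */
static PyType_Slot MyType_slots[] = {
    {0, NULL},
};

static PyType_Spec MyType_spec = {
    "example.MyType",               /* name */
    sizeof(PyObject),               /* basicsize */
    0,                              /* itemsize */
    Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,
    MyType_slots,
};

static PyObject *
make_type(PyObject *module, PyObject *base)
{
    /* No cross-DLL address has to appear in a static initializer:
       the base type is packed into a tuple at module init time. */
    PyObject *bases = PyTuple_Pack(1, base);
    if (bases == NULL) {
        return NULL;
    }
    PyObject *type = PyType_FromModuleAndSpec(module, &MyType_spec, bases);
    Py_DECREF(bases);
    return type;
}
```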
On macOS, socket.getaddrinfo() no longer uses an internal lock to
prevent race conditions when calling getaddrinfo(). getaddrinfo() has
been thread-safe since macOS 10.5, and Python 3.9 requires macOS 10.6
or newer.
The lock was also used on FreeBSD older than 5.3, OpenBSD older than
201311 and NetBSD older than 4.
PyNumber_Index() now returns an instance of exact type int; previously,
the result could have been an instance of a subclass of int.
Also revert bpo-26202 and make the start, stop and step attributes of
the range object have exact type int.
Add a private function _PyNumber_Index() which preserves the old
behavior of PyNumber_Index(); for performance, it is used in conversion
functions such as PyLong_AsLong().
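A minimal illustration of the new guarantee (obj is assumed to be any
object implementing __index__):
```
/* Minimal check: PyNumber_Index() now returns an object of exact type
   int, even when obj is an instance of an int subclass. */
static int
check_exact_index(PyObject *obj)
{
    PyObject *index = PyNumber_Index(obj);
    if (index == NULL) {
        return -1;                    /* __index__ failed */
    }
    assert(PyLong_CheckExact(index)); /* exact int, never a subclass */
    Py_DECREF(index);
    return 0;
}
```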
If ctypes fails to convert the result of a callback or if a ctypes
callback function raises an exception, sys.unraisablehook is now
called with an exception set. Previously, the error was logged to
stderr by PyErr_Print().
```
D:\a\cpython\cpython\Modules\_zoneinfo.c(903,52): warning C4267: '=': conversion from 'size_t' to 'unsigned int', possible loss of data [D:\a\cpython\cpython\PCbuild\_zoneinfo.vcxproj]
D:\a\cpython\cpython\Modules\_zoneinfo.c(904,44): warning C4267: '=': conversion from 'size_t' to 'unsigned int', possible loss of data [D:\a\cpython\cpython\PCbuild\_zoneinfo.vcxproj]
D:\a\cpython\cpython\Modules\_zoneinfo.c(1772,31): warning C4244: '=': conversion from 'ssize_t' to 'uint8_t', possible loss of data [D:\a\cpython\cpython\PCbuild\_zoneinfo.vcxproj]
```
hashlib.compare_digest uses OpenSSL's CRYPTO_memcmp() function
when OpenSSL is available.
Note: The _operator module is a builtin module. I don't want to add a
libcrypto dependency to libpython. Therefore I duplicated the wrapper
function and added a copy to _hashopenssl.c.
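A sketch of the comparison core, assuming the wrapper only has to deal
with two raw byte buffers (the real wrapper in _hashopenssl.c also
extracts buffers from Python objects and handles the error cases):
```
#include <openssl/crypto.h>

/* Sketch: constant-time comparison built on OpenSSL's CRYPTO_memcmp(),
   which runs in time independent of the buffer contents. */
static int
digests_equal(const unsigned char *a, size_t len_a,
              const unsigned char *b, size_t len_b)
{
    if (len_a != len_b) {
        return 0;   /* lengths differ (the length itself is not secret) */
    }
    return CRYPTO_memcmp(a, b, len_a) == 0;
}
```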
ctypes now raises an ArgumentError when a callback
is invoked with more than 1024 arguments.
The ctypes module allocates arguments on the stack in
ctypes_callproc() using alloca(), which is problematic
when large numbers of arguments are passed. Instead
of a stack overflow, this commit raises an ArgumentError
if more than 1024 parameters are passed.
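A sketch of the guard, placed before the alloca() call; only the 1024
limit comes from the change itself, the macro and function names are
illustrative:
```
#define MAX_ARGCOUNT 1024   /* limit described above; name illustrative */

/* Sketch: reject oversized argument lists before reserving stack space
   with alloca(). "ArgumentError" stands for ctypes.ArgumentError,
   however the module obtains that exception object. */
static int
check_argcount(Py_ssize_t argcount, PyObject *ArgumentError)
{
    if (argcount > MAX_ARGCOUNT) {
        PyErr_Format(ArgumentError,
                     "too many arguments (%zd), maximum is %d",
                     argcount, MAX_ARGCOUNT);
        return -1;
    }
    return 0;   /* safe to alloca() an array of this size */
}
```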
Convert Py_REFCNT() and Py_SIZE() macros to static inline functions.
They can no longer be used as an l-value: use Py_SET_REFCNT() and
Py_SET_SIZE() to set an object's reference count and size.
Replace &Py_SIZE(self) with &((PyVarObject*)self)->ob_size
in arraymodule.c.
This change is backward incompatible on purpose, to prepare the C API
for an opaque PyObject structure.
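A minimal before/after illustration (op is assumed to be a variable-size
object owned by the caller):
```
/* Sketch: Py_REFCNT(op) = 1; and Py_SIZE(op) = 0; no longer compile,
   because Py_REFCNT() and Py_SIZE() now return a value rather than an
   assignable location. Use the setters instead. */
static void
reset_object(PyVarObject *op)
{
    Py_SET_REFCNT((PyObject *)op, 1);
    Py_SET_SIZE(op, 0);
}
```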
The scripts in `Tools/peg_generator/scripts` mostly assume that
`ast.parse` and `compile` use the old parser, since this was the
state of things while we were developing them. They need to be
updated to always use the correct parser. `_peg_parser` is being
extended to support both parsing and compiling with both parsers.
This updates _PyErr_ChainStackItem() to use _PyErr_SetObject()
instead of _PyErr_ChainExceptions(). This prevents a hang in
certain circumstances because _PyErr_SetObject() performs checks
to prevent cycles in the exception context chain while
_PyErr_ChainExceptions() doesn't.
The reset_peak function sets the peak memory size to the current size,
effectively resetting that metric. This allows recording the peak of
specific sections of code, ignoring other code that may have had a
higher peak (since the most recent `tracemalloc.start()` or
`tracemalloc.clear_traces()` call).