Remove the global _Py_CheckRecursionLimit variable: it has been
replaced by ceval.recursion_limit of the PyInterpreterState
structure.
There is no need to keep the variable for the stable ABI, since
Py_EnterRecursiveCall() and Py_LeaveRecursiveCall() were not usable
in Python 3.8 and older: these macros accessed PyThreadState members,
whereas the PyThreadState structure is opaque in the limited C API.
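For reference, a minimal sketch of how these calls are used (with the
limited C API this only works in Python 3.9 and newer); recursive_helper()
and do_work() are hypothetical names:
```
static PyObject *
recursive_helper(PyObject *arg)
{
    if (Py_EnterRecursiveCall(" in recursive_helper()")) {
        return NULL;  /* RecursionError already set */
    }
    PyObject *result = do_work(arg);  /* hypothetical recursive step */
    Py_LeaveRecursiveCall();
    return result;
}
```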
The new API makes it possible to efficiently send values into native
generators and coroutines, avoiding the use of StopIteration exceptions
to signal returns.
The ceval loop now uses this method instead of the old "private"
_PyGen_Send C API. This translates to a 1.6x performance increase for
'await' calls in micro-benchmarks.
Aside from CPython core improvements, this new API will also allow
Cython to generate more efficient code, benefiting high-performance
IO libraries like uvloop.
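A minimal sketch of the send protocol, written against PyIter_Send(), the
public spelling of this API in Python 3.10; `gen` is assumed to be an
existing generator or coroutine object:
```
PyObject *value = NULL;
switch (PyIter_Send(gen, Py_None, &value)) {
case PYGEN_RETURN:
    /* Completed: `value` holds the return value, no StopIteration raised. */
    break;
case PYGEN_NEXT:
    /* Yielded: `value` holds the yielded object. */
    break;
case PYGEN_ERROR:
    /* An exception is set; `value` is NULL. */
    break;
}
Py_XDECREF(value);
```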
* Add a new capability to the PEG parser to type variable assignments. For instance:
```
| a[asdl_stmt_seq*]=';'.small_stmt+ [';'] NEWLINE { a }
```
* Add new sequence types from the asdl definition (automatically generated)
* Make `asdl_seq` type a generic aliasing pointer type.
* Create a new `asdl_generic_seq` for the generic case using `void*`.
* The old `asdl_seq_GET`/`asdl_seq_SET` macros are now typed (see the sketch after this list).
* New `asdl_seq_GET_UNTYPED`/`asdl_seq_SET_UNTYPED` macros for dealing with generic sequences.
* Change all possible `asdl_seq` uses to the specific typed versions everywhere.
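A short, illustrative sketch of the typed access (internal AST API),
assuming `module` is a mod_ty of kind Module_kind:
```
asdl_stmt_seq *body = module->v.Module.body;  /* typed sequence of stmt_ty */
Py_ssize_t n = asdl_seq_LEN(body);
stmt_ty first = (n > 0) ? asdl_seq_GET(body, 0) : NULL;  /* stmt_ty, not void* */
```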
Partially revert commit ac46eb4ad6662cf6d771b20d8963658b2186c48c:
"bpo-38113: Update the Python-ast.c generator to PEP384 (gh-15957)".
Using a module state per module instance is causing subtle practical
problems.
For example, the Mercurial project replaces the __import__() function
to implement lazy imports, whereas Python expects that "import _ast"
always returns a fully initialized _ast module.
Add _PyAST_Fini() to clear the state at exit.
The _ast module has no state (set _astmodule.m_size to 0). Remove
astmodule_traverse(), astmodule_clear() and astmodule_free()
functions.
* Merge gen and frame state variables into one.
* Replace stack pointer with depth in PyFrameObject. Makes code easier to read and saves a word of memory.
Add sys.orig_argv attribute: the list of the original command line
arguments passed to the Python executable.
Also rename PyConfig._orig_argv to PyConfig.orig_argv and document
it.
Always create the empty bytes string singleton.
Optimize PyBytes_FromStringAndSize(str, 0): it no longer has to check
whether the empty string singleton has been created; it is always
available.
Add functions:
* _PyBytes_Init()
* bytes_get_empty(), bytes_new_empty()
* bytes_create_empty_string_singleton()
* unicode_create_empty_string_singleton()
_Py_unicode_state: rename the "empty" structure member to "empty_string".
Py_InitializeFromConfig() now always creates the empty tuple
singleton as soon as possible.
Optimize PyTuple_New(0): it no longer has to check whether the empty
tuple singleton has been created; it is always available.
* Add tuple_create_empty_tuple_singleton() function.
* Add tuple_get_empty() function.
* Remove state parameter of tuple_alloc().
Each interpreter now has its own Unicode latin1 singletons.
Remove "ifdef EXPERIMENTAL_ISOLATED_SUBINTERPRETERS"
and "ifdef LATIN1_SINGLETONS": always enable latin1 singletons.
Optimize unicode_result_ready(): only attempt to get a latin1
singleton for PyUnicode_1BYTE_KIND.
Each interpreter now has its own MemoryError free list: it is no
longer shared by all interpreters.
Add _Py_exc_state structure and PyInterpreterState.exc_state member.
Also move errnomap into _Py_exc_state.
* Revert "bpo-40521: Make the empty frozenset per interpreter (GH-21068)"
This reverts commit 261cfedf76.
* bpo-40521: Empty frozensets are no longer singletons
* Complete the removal of the frozenset singleton
Each interpreter now has its own empty bytes string and single byte
character singletons.
Replace STRINGLIB_EMPTY macro with STRINGLIB_GET_EMPTY() macro.
Each interpreter now has its own dict free list (a simplified sketch of the pattern follows this list):
* Move dict free lists into PyInterpreterState.
* Move PyDict_MAXFREELIST define to pycore_interp.h
* Add _Py_dict_state structure.
* Add tstate parameter to _PyDict_ClearFreeList() and _PyDict_Fini().
* In debug mode, ensure that the dict free lists are not used after
_PyDict_Fini() is called.
* Remove "#ifdef EXPERIMENTAL_ISOLATED_SUBINTERPRETERS".
* Rename _PyObject_GC_TRACK_impl() to _PyObject_GC_TRACK()
* Rename _PyObject_GC_UNTRACK_impl() to _PyObject_GC_UNTRACK()
* Omit filename and lineno parameters if NDEBUG is defined.
The PyObject_INIT() and PyObject_INIT_VAR() macros become aliases to,
respectively, PyObject_Init() and PyObject_InitVar() functions.
Rename _PyObject_INIT() and _PyObject_INIT_VAR() static inline
functions to, respectively, _PyObject_Init() and _PyObject_InitVar(),
and move them to pycore_object.h. Remove their return value:
their return type becomes void.
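A brief sketch of the function form; MyObject and MyObject_Type are
hypothetical:
```
static PyObject *
make_myobject(void)
{
    MyObject *op = PyObject_Malloc(sizeof(MyObject));
    if (op == NULL) {
        return PyErr_NoMemory();
    }
    PyObject_Init((PyObject *)op, &MyObject_Type);  /* formerly PyObject_INIT() */
    return (PyObject *)op;
}
```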
The _datetime module is now built with the Py_BUILD_CORE_MODULE macro
defined.
Remove an outdated comment on _Py_tracemalloc_config.
On Windows, #include "pyerrors.h" no longer defines "snprintf" and
"vsnprintf" macros.
PyOS_snprintf() and PyOS_vsnprintf() should be used to get portable
behavior.
Replace snprintf() calls with PyOS_snprintf() and replace vsnprintf()
calls with PyOS_vsnprintf().
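For example, a small sketch of the portable spelling; `path` and `lineno`
are placeholders:
```
char buf[256];
PyOS_snprintf(buf, sizeof(buf), "syntax error at %s:%d", path, lineno);
```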
In GH-2866, _Py_Bit_Length() was added to pymath.h for lack of a better
location. GH-20518 added a more appropriate header file for bit utilities. It
also shows how to properly use intrinsics. This allows reconsidering bpo-29782.
* Move the function to the new header.
* Change the return type to match __builtin_clzl() and review usage.
* Use intrinsics where available.
* Pick a fallback implementation suitable for inlining (sketched below).
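A sketch of that approach, assuming a GCC/clang-style __builtin_clzl();
the helper name is illustrative, not CPython's exact code:
```
static inline int
bit_length_ulong(unsigned long x)
{
#if defined(__GNUC__) || defined(__clang__)
    /* __builtin_clzl(0) is undefined, so handle zero explicitly. */
    return x == 0 ? 0 : (int)(sizeof(unsigned long) * 8) - __builtin_clzl(x);
#else
    /* Portable fallback, small enough to inline. */
    int len = 0;
    while (x != 0) {
        len++;
        x >>= 1;
    }
    return len;
#endif
}
```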
This commit removes the old parser, the deprecated parser module, the old parser compatibility flags and environment variables, and all associated support code and documentation.
PEP 353, written in 2005, introduced PY_FORMAT_SIZE_T. Python no
longer supports macOS 10.4 and Visual Studio 2010, but requires more
recent macOS and Visual Studio versions. In 2020, with Python 3.10, it
is now safe to use "%zu" directly to format size_t and "%zi" to
format Py_ssize_t.
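For instance, a sketch of the now-portable spelling; `buf` is a
placeholder buffer:
```
size_t nbytes = 1024;
Py_ssize_t index = -1;
PyOS_snprintf(buf, sizeof(buf), "read %zu bytes at index %zi", nbytes, index);
```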
Explicitly export the Py_GetArgcArgv() function in the C API and
document it (a usage sketch follows the list below). Previously, it was
exported implicitly, which no longer works since Python is built with
-fvisibility=hidden.
* Add PyConfig._orig_argv member.
* Py_InitializeFromConfig() no longer calls _PyConfig_Write() twice.
* PyConfig_Read() no longer initializes Py_GetArgcArgv(): it is now
_PyConfig_Write()'s responsibility.
* _PyConfig_Write() result type becomes PyStatus instead of void.
* Write a unit test on Py_GetArgcArgv().
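A minimal usage sketch of the exported function:
```
int argc;
wchar_t **argv;
Py_GetArgcArgv(&argc, &argv);  /* borrowed: points at the original command line */
```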
* Rename pycore_byteswap.h to pycore_bitutils.h.
* Move popcount_digit() to pycore_bitutils.h as _Py_popcount32().
* _Py_popcount32() uses the GCC and clang builtin function if available (the pattern is sketched after this list).
* Add unit tests for _Py_popcount32().
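A sketch of that builtin-plus-fallback pattern; the function name is
illustrative and the fallback is a standard parallel bit count, not
necessarily CPython's exact code:
```
#include <stdint.h>

static inline int
popcount32_sketch(uint32_t x)
{
#if defined(__GNUC__) || defined(__clang__)
    return __builtin_popcount(x);
#else
    /* Parallel bit-count fallback. */
    x = x - ((x >> 1) & 0x55555555U);
    x = (x & 0x33333333U) + ((x >> 2) & 0x33333333U);
    x = (x + (x >> 4)) & 0x0F0F0F0FU;
    return (int)((x * 0x01010101U) >> 24);
#endif
}
```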
Each interpreter now has its own context free list:
* Move context free list into PyInterpreterState.
* Add _Py_context_state structure.
* Add tstate parameter to _PyContext_ClearFreeList()
and _PyContext_Fini().
* Pass tstate to clear_freelists().
Each interpreter now has its own asynchronous generator free lists:
* Move async gen free lists into PyInterpreterState.
* Move _PyAsyncGen_MAXFREELIST define to pycore_interp.h
* Add _Py_async_gen_state structure.
* Add tstate parameter to _PyAsyncGen_ClearFreeLists()
and _PyAsyncGen_Fini().
Each interpreter now has its own list free list:
* Move list numfree and free_list into PyInterpreterState.
* Add _Py_list_state structure.
* Add tstate parameter to _PyList_ClearFreeList()
and _PyList_Fini().
* Remove "#ifdef EXPERIMENTAL_ISOLATED_SUBINTERPRETERS".
* _PyGC_Fini() clears gcstate->garbage list which can be stored in
the list free list. Call _PyGC_Fini() before _PyList_Fini() to
prevent leaking this list.
Each interpreter now has its own frame free list:
* Move frame free list into PyInterpreterState.
* Add _Py_frame_state structure.
* Add tstate parameter to _PyFrame_ClearFreeList()
and _PyFrame_Fini().
* Remove "#if PyFrame_MAXFREELIST > 0".
* Remove "#ifdef EXPERIMENTAL_ISOLATED_SUBINTERPRETERS".
Each interpreter now has its own float free list:
* Move float numfree and free_list into PyInterpreterState.
* Add _Py_float_state structure.
* Add tstate parameter to _PyFloat_ClearFreeList()
and _PyFloat_Fini().
Each interpreter now has its own tuple free lists:
* Move tuple numfree and free_list arrays into PyInterpreterState.
* Define PyTuple_MAXSAVESIZE and PyTuple_MAXFREELIST macros in
pycore_interp.h.
* Add _Py_tuple_state structure. Pass it explicitly to tuple_alloc().
* Add tstate parameter to _PyTuple_ClearFreeList()
* Each interpreter now has its own empty tuple singleton.
my_fgets() now calls _PyOS_InterruptOccurred(tstate) to check for
pending signals, rather than calling PyOS_InterruptOccurred().
my_fgets() is called with the GIL released, whereas
PyOS_InterruptOccurred() must be called with the GIL held.
test_repl: use text=True and avoid SuppressCrashReport in
test_multiline_string_parsing().
Fix my_fgets() on Windows: fgets(fp) does crash if fileno(fp) is closed.
PyOS_AfterFork_Child() helper functions now return a PyStatus:
PyOS_AfterFork_Child() is now responsible for handling errors.
* Move _PySignal_AfterFork() to the internal C API
* Add #ifdef HAVE_FORK on _PyGILState_Reinit(), _PySignal_AfterFork()
and _PyInterpreterState_DeleteExceptMain().
Previously, the result could have been an instance of a subclass of int.
Also revert bpo-26202 and make the start, stop and step attributes of
the range object have exact type int.
Add a private function _PyNumber_Index() which preserves the old
behavior of PyNumber_Index(); for performance, it is used in conversion
functions like PyLong_AsLong().
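A sketch of what callers can now rely on; `obj` is a placeholder:
```
PyObject *idx = PyNumber_Index(obj);
if (idx == NULL) {
    return NULL;                 /* TypeError already set */
}
assert(PyLong_CheckExact(idx));  /* the result is always an exact int now */
Py_ssize_t i = PyLong_AsSsize_t(idx);
Py_DECREF(idx);
if (i == -1 && PyErr_Occurred()) {
    return NULL;                 /* overflow */
}
```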
Convert Py_REFCNT() and Py_SIZE() macros to static inline functions.
They can no longer be used as l-values: use Py_SET_REFCNT() and
Py_SET_SIZE() to set an object's reference count and size.
Replace &Py_SIZE(self) with &((PyVarObject*)self)->ob_size
in arraymodule.c.
This change is backward incompatible on purpose, to prepare the C API
for an opaque PyObject structure.
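A sketch of the required change in extension code; `obj` and `newsize`
are placeholders:
```
/* Before (no longer compiles): Py_SIZE(obj) = newsize; */
Py_SET_SIZE((PyVarObject *)obj, newsize);

/* Before (no longer compiles): Py_REFCNT(obj)++; shown here only to
   illustrate the setter; normal code keeps using Py_INCREF(). */
Py_SET_REFCNT(obj, Py_REFCNT(obj) + 1);
```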
This updates _PyErr_ChainStackItem() to use _PyErr_SetObject()
instead of _PyErr_ChainExceptions(). This prevents a hang in
certain circumstances because _PyErr_SetObject() performs checks
to prevent cycles in the exception context chain while
_PyErr_ChainExceptions() doesn't.
When an asyncio.Task is cancelled, the exception traceback now
starts with where the task was first interrupted. Previously,
the traceback only had "depth one."
Move PyInterpreterState.fs_codec into a new
PyInterpreterState.unicode structure.
Give a name to the fs_codec structure and use this structure in
unicodeobject.c.
_Py_hashtable_t values become regular "void *" pointers (an illustrative sketch of the resulting API follows this list).
* Add _Py_hashtable_entry_t.data member
* Remove _Py_hashtable_t.data_size member
* Remove _Py_hashtable_t.get_func member. It is no longer needed
to specialize _Py_hashtable_get() for a specific value size, since
all entries now have the same size (void*).
* Remove the following macros:
* _Py_HASHTABLE_GET()
* _Py_HASHTABLE_SET()
* _Py_HASHTABLE_SET_NODATA()
* _Py_HASHTABLE_POP()
* Rename _Py_hashtable_pop() to _Py_hashtable_steal()
* _Py_hashtable_foreach() callback now gets key and value rather than
entry.
* Remove _Py_hashtable_value_destroy_func type. value_destroy_func
callback now only has a single parameter: data (void*).
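An illustrative sketch of the resulting internal API; the exact
prototypes live in the internal pycore_hashtable.h header, and
`key_ptr`/`value_ptr` are placeholders:
```
_Py_hashtable_t *ht = _Py_hashtable_new(_Py_hashtable_hash_ptr,
                                        _Py_hashtable_compare_direct);
_Py_hashtable_set(ht, key_ptr, value_ptr);     /* the value is a plain void* */
void *value = _Py_hashtable_get(ht, key_ptr);  /* NULL if the key is absent */
_Py_hashtable_destroy(ht);
```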
Rewrite _tracemalloc to store "trace_t*" rather than directly
"trace_t" in the traces hash tables. Traces are now allocated on the
heap, outside the hash table.
Add tracemalloc_copy_traces() and tracemalloc_copy_domains() helper
functions.
Remove _Py_hashtable_copy() function since there is no API to copy a
key or a value.
Also remove the _Py_hashtable_delete() function, which was commented out.
Rewrite _Py_hashtable_t type to always store the key as
a "const void *" pointer. Add an explicit "key" member to
_Py_hashtable_entry_t.
Remove _Py_hashtable_t.key_size member.
hash and compare functions drop their hash table parameter, and their
'key' parameter type becomes "const void *".
Rewrite how the _tracemalloc module stores traces of other domains.
Rather than storing the domain inside the key, it now uses a new hash
table with the domain as the key, and the data is a per-domain traces
hash table.
* Add tracemalloc_domain hash table.
* Remove _Py_tracemalloc_config.use_domain.
* Remove pointer_t and related functions.
Add a new _Py_HashPointerRaw() function which avoids replacing -1
with -2, to micro-optimize hash tables that use pointer keys with the
_Py_hashtable_hash_ptr() hash function.
Optimize _Py_hashtable_get() and _Py_hashtable_get_entry() for
pointer keys:
* key_size == sizeof(void*)
* hash_func == _Py_hashtable_hash_ptr
* compare_func == _Py_hashtable_compare_direct
Changes:
* Add get_func and get_entry_func members to _Py_hashtable_t
* Convert the _Py_hashtable_get() and _Py_hashtable_get_entry()
functions to static inline functions.
* Add specialized get and get_entry functions for pointer keys.
* Move Modules/hashtable.h to Include/internal/pycore_hashtable.h
* Move Modules/hashtable.c to Python/hashtable.c
* Python is now linked to hashtable.c. _tracemalloc is no longer
linked to hashtable.c. Previously, marshal.c got hashtable.c via
_tracemalloc.c which is built as a builtin module.
Declare _PyErr_GetTopmostException() with PyAPI_FUNC() to properly
export the function in the C API. The function keeps its private
("_Py") prefix.
Co-Authored-By: Julien Danjou <julien@danjou.info>