When multiplying lists and tuples by `n`, bump each element's refcount by `n` in a single step instead of incrementing it `n` times.
This saves `n-1` increments per element and allows a leaner, faster copying loop.
Code by sweeneyde (Dennis Sweeney).
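A rough plain-C sketch of the idea (illustrative only, not the actual CPython code; the helper name is hypothetical): adjust each element's refcount by n once up front, then copy bare pointers in the inner loop.

    #include <Python.h>

    /* Repeat `in_size` items `n` times into `dest` (sketch). */
    static void
    repeat_items(PyObject **dest, PyObject *const *src,
                 Py_ssize_t in_size, Py_ssize_t n)
    {
        for (Py_ssize_t i = 0; i < in_size; i++) {
            /* One refcount adjustment of n, instead of n Py_INCREF calls. */
            Py_SET_REFCNT(src[i], Py_REFCNT(src[i]) + n);
        }
        PyObject **p = dest;
        for (Py_ssize_t rep = 0; rep < n; rep++) {
            for (Py_ssize_t i = 0; i < in_size; i++) {
                *p++ = src[i];      /* plain pointer copy, no refcount traffic */
            }
        }
    }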
This change consists strictly of renames and code moves. It helps in the following ways:
* ensures type-related init functions focus strictly on one of the three aspects (state, objects, types)
* passes in PyInterpreterState * to all those functions, simplifying work on moving types/objects/state to the interpreter
* consistent naming conventions make what's going on clearer
* keeping API related to a type in the corresponding header file makes it more obvious where to look for it
https://bugs.python.org/issue46008
Keep track of whether unsafe_tuple_compare() calls are resolved by the very
first tuple elements, and adjust strategy accordingly. This can significantly
cut the number of calls made to the full-blown PyObject_RichCompareBool(),
especially when duplicates are rare.
Co-authored-by: Łukasz Langa <lukasz@langa.pl>
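A simplified sketch of the bookkeeping only (illustrative, not CPython's unsafe_tuple_compare(), and without its type-specialized fast paths): the caller can count how often resolved_by_first comes back true and favour the cheaper strategy when duplicates are rare.

    #include <Python.h>

    /* "u < v" for two non-empty tuples; records whether the first
     * elements alone settled the answer. */
    static int
    tuple_lt(PyObject *u, PyObject *v, int *resolved_by_first)
    {
        PyObject *a = PyTuple_GET_ITEM(u, 0);
        PyObject *b = PyTuple_GET_ITEM(v, 0);
        int eq = PyObject_RichCompareBool(a, b, Py_EQ);
        if (eq < 0) {
            return -1;                      /* comparison raised */
        }
        if (!eq) {
            *resolved_by_first = 1;         /* the first elements decided it */
            return PyObject_RichCompareBool(a, b, Py_LT);
        }
        *resolved_by_first = 0;             /* duplicates: scan the rest */
        Py_ssize_t nu = PyTuple_GET_SIZE(u), nv = PyTuple_GET_SIZE(v);
        for (Py_ssize_t i = 1; i < nu && i < nv; i++) {
            a = PyTuple_GET_ITEM(u, i);
            b = PyTuple_GET_ITEM(v, i);
            eq = PyObject_RichCompareBool(a, b, Py_EQ);
            if (eq < 0) {
                return -1;
            }
            if (!eq) {
                return PyObject_RichCompareBool(a, b, Py_LT);
            }
        }
        return nu < nv;                     /* shared prefix: shorter sorts first */
    }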
Freelists for object structs can now be disabled. A new ``configure``
option ``--without-freelists`` can be used to disable all freelists
except the empty tuple singleton. Internal Py*_MAXFREELIST macros can now
be defined as 0 without causing compiler warnings or segfaults.
Signed-off-by: Christian Heimes <christian@python.org>
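An illustrative sketch (with a hypothetical MY_MAXFREELIST macro standing in for the Py*_MAXFREELIST macros) of the guard pattern that lets a free-list size of 0 compile cleanly:

    /* All free-list code disappears when the macro is 0, so defining it
     * as 0 (or configuring --without-freelists) cannot warn or crash. */
    #ifndef MY_MAXFREELIST
    #  define MY_MAXFREELIST 80
    #endif

    struct cache {
    #if MY_MAXFREELIST > 0
        void *free_list[MY_MAXFREELIST];
        int numfree;
    #else
        char _unused;               /* keep the struct non-empty */
    #endif
    };

    static void *
    cache_pop(struct cache *c)
    {
    #if MY_MAXFREELIST > 0
        if (c->numfree > 0) {
            return c->free_list[--c->numfree];
        }
    #endif
        (void)c;
        return 0;                   /* fall back to a real allocation */
    }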
For list.sort(), replace our ad hoc merge ordering strategy with the principled, elegant,
and provably near-optimal one from Munro and Wild's "powersort".
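A self-contained sketch of the "node power" rule powersort uses to decide when adjacent runs get merged, following the Munro & Wild paper (this is not the exact CPython code): two adjacent runs are ranked by the first binary digit at which their scaled midpoints differ.

    #include <assert.h>
    #include <stdio.h>

    /* Run 1 starts at index s1 with length n1; run 2 follows with
     * length n2; n is the total number of elements being sorted. */
    static int
    node_power(long s1, long n1, long n2, long n)
    {
        assert(s1 >= 0 && n1 > 0 && n2 > 0 && s1 + n1 + n2 <= n);
        /* Twice the midpoints, so everything stays integral:
         * a/2 = s1 + n1/2 and b/2 = s1 + n1 + n2/2. */
        long a = 2 * s1 + n1;
        long b = a + n1 + n2;
        int power = 0;
        for (;;) {
            ++power;
            if (a >= n) {           /* both midpoint fractions have a 1 bit here */
                a -= n;
                b -= n;
            }
            else if (b >= n) {      /* first bit where they differ */
                break;
            }
            a <<= 1;
            b <<= 1;
        }
        return power;
    }

    int main(void)
    {
        /* Adjacent runs of lengths 100 and 40 at the start of a
         * 1000-element list. */
        printf("power = %d\n", node_power(0, 100, 40, 1000));
        return 0;
    }

Boundaries with a higher power sit deeper in the implied merge tree, so those merges are performed first; that ordering is what yields the provably near-optimal merge cost.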
* Add Py_TPFLAGS_SEQUENCE and Py_TPFLAGS_MAPPING, add to all relevant standard builtin classes.
* Set relevant flags on collections.abc.Sequence and Mapping.
* Use flags in MATCH_SEQUENCE and MATCH_MAPPING opcodes.
* Inherit Py_TPFLAGS_SEQUENCE and Py_TPFLAGS_MAPPING.
* Add NEWS
* Remove interpreter-state map_abc and seq_abc fields.
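A small embedding sketch showing how the flags are queried (assuming a CPython new enough to define them):

    #include <Python.h>

    int main(void)
    {
        Py_Initialize();
        PyObject *lst = PyList_New(0);
        PyObject *dct = PyDict_New();
        /* MATCH_SEQUENCE / MATCH_MAPPING reduce to a flag test like this
         * instead of an isinstance() check against the ABCs. */
        printf("list is a sequence: %d\n",
               PyType_HasFeature(Py_TYPE(lst), Py_TPFLAGS_SEQUENCE));
        printf("dict is a mapping:  %d\n",
               PyType_HasFeature(Py_TYPE(dct), Py_TPFLAGS_MAPPING));
        Py_DECREF(lst);
        Py_DECREF(dct);
        Py_Finalize();
        return 0;
    }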
Pass the current interpreter (interp) rather than the current Python
thread state (tstate) to internal functions which only use the
interpreter.
Modified functions:
* _PyXXX_Fini() and _PyXXX_ClearFreeList() functions
* _PyEval_SignalAsyncExc(), make_pending_calls()
* _PySys_GetObject(), sys_set_object(), sys_set_object_id(), sys_set_object_str()
* should_audit(), set_flags_from_config(), make_flags()
* _PyAtExit_Call()
* init_stdio_encoding()
* etc.
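A sketch of the pattern, with hypothetical function names: functions that only need interpreter-wide data now take the interpreter directly instead of reaching it through a thread state.

    #include <Python.h>

    /* Before: the thread state was only used to get at the interpreter. */
    static void
    clear_caches_old(PyThreadState *tstate)
    {
        PyInterpreterState *interp = PyThreadState_GetInterpreter(tstate);
        /* ... operate on interp ... */
        (void)interp;
    }

    /* After: the caller passes the interpreter it already holds. */
    static void
    clear_caches_new(PyInterpreterState *interp)
    {
        /* ... operate on interp directly ... */
        (void)interp;
    }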
No longer use deprecated aliases to functions:
* Replace PyMem_MALLOC() with PyMem_Malloc()
* Replace PyMem_REALLOC() with PyMem_Realloc()
* Replace PyMem_FREE() with PyMem_Free()
* Replace PyMem_Del() with PyMem_Free()
* Replace PyMem_DEL() with PyMem_Free()
Also modify the PyMem_DEL() macro to call PyMem_Free() directly.
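For reference, a minimal sketch of the non-deprecated spellings in use:

    #include <Python.h>
    #include <string.h>

    int main(void)
    {
        Py_Initialize();
        char *buf = PyMem_Malloc(16);                /* not PyMem_MALLOC() */
        if (buf != NULL) {
            strcpy(buf, "hello");
            char *bigger = PyMem_Realloc(buf, 32);   /* not PyMem_REALLOC() */
            if (bigger != NULL) {
                buf = bigger;
            }
            PyMem_Free(buf);       /* replaces PyMem_FREE/DEL/Del as well */
        }
        Py_Finalize();
        return 0;
    }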
In debug mode, ensure that free lists are no longer used after being
finalized. Set numfree to -1 in finalization functions
(e.g. _PyList_Fini()), and then check that numfree is not equal to -1
before using a free list (e.g. list_dealloc()).
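In code, the guard is essentially this pattern (illustrative, with a hypothetical module-level counter):

    #include <assert.h>

    static int numfree = 0;          /* stand-in for a type's free-list counter */

    static void
    free_list_fini(void)
    {
        /* ... release any cached objects ... */
        numfree = -1;                /* poison: the free list is finalized */
    }

    static int
    free_list_usable(void)
    {
        assert(numfree != -1);       /* catches use after _PyList_Fini() & co. */
        return numfree > 0;
    }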
Each interpreter now has its own list free list:
* Move list numfree and free_list into PyInterpreterState.
* Add _Py_list_state structure.
* Add tstate parameter to _PyList_ClearFreeList()
and _PyList_Fini().
* Remove "#ifdef EXPERIMENTAL_ISOLATED_SUBINTERPRETERS".
* _PyGC_Fini() clears gcstate->garbage list which can be stored in
the list free list. Call _PyGC_Fini() before _PyList_Fini() to
prevent leaking this list.
When Python is built with experimental isolated interpreters, disable
the list free list.
Temporary workaround until this cache is made per-interpreter.
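A sketch of the moved state (field names follow this entry; the array size is illustrative) together with a ClearFreeList-style drain over it:

    #include <Python.h>

    /* One of these is embedded in each PyInterpreterState, so every
     * interpreter gets an independent cache instead of sharing a global. */
    struct _Py_list_state {
        PyObject *free_list[80];     /* list objects parked by list_dealloc() */
        int numfree;                 /* -1 once finalized, in debug builds */
    };

    /* Drain one interpreter's cache, in the spirit of _PyList_ClearFreeList(). */
    static void
    clear_list_free_list(struct _Py_list_state *state)
    {
        while (state->numfree > 0) {
            PyObject *op = state->free_list[--state->numfree];
            PyObject_GC_Del(op);     /* actually give the memory back */
        }
    }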
Remove the following functions from the C API:
* PyAsyncGen_ClearFreeLists()
* PyContext_ClearFreeList()
* PyDict_ClearFreeList()
* PyFloat_ClearFreeList()
* PyFrame_ClearFreeList()
* PyList_ClearFreeList()
* PySet_ClearFreeList()
* PyTuple_ClearFreeList()
Make these functions private, move them to the internal C API and
change their return type to void.
Call PyGC_Collect() explicitly to free all free lists.
Note: PySet_ClearFreeList() did nothing.
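Embedders that relied on the removed functions can trigger the same cleanup through the collector; a minimal sketch:

    #include <Python.h>

    int main(void)
    {
        Py_Initialize();
        /* A full collection also clears the internal free lists, which is
         * the replacement for the removed Py*_ClearFreeList() calls. */
        Py_ssize_t collected = PyGC_Collect();
        printf("collected %zd objects\n", collected);
        Py_Finalize();
        return 0;
    }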
Add _PyIndex_Check() function to the internal C API: a fast inlined
version of PyIndex_Check().
Add Include/internal/pycore_abstract.h header file.
Replace PyIndex_Check() with _PyIndex_Check() in C files of Objects
and Python subdirectories.
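The inlined check boils down to a slot test; a sketch mirroring what PyIndex_Check() verifies (the helper name is illustrative):

    #include <Python.h>

    /* True when the object's type implements __index__ (fills nb_index). */
    static inline int
    index_check_inline(PyObject *obj)
    {
        PyNumberMethods *as_number = Py_TYPE(obj)->tp_as_number;
        return as_number != NULL && as_number->nb_index != NULL;
    }

    int main(void)
    {
        Py_Initialize();
        PyObject *i = PyLong_FromLong(3);
        PyObject *f = PyFloat_FromDouble(3.0);
        printf("int:   %d\n", index_check_inline(i));   /* 1 */
        printf("float: %d\n", index_check_inline(f));   /* 0 */
        Py_DECREF(i);
        Py_DECREF(f);
        Py_Finalize();
        return 0;
    }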
This implements things like `list[int]`,
which returns an object of type `types.GenericAlias`.
This object mostly acts as a proxy for `list`,
but has attributes `__origin__` and `__args__`
that allow recovering the parts (with values `list` and `(int,)` respectively).
There is also an approximate notion of type variables;
e.g. `list[T]` has a `__parameters__` attribute equal to `(T,)`.
Type variables are objects of type `typing.TypeVar`.
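From C, an equivalent object can be built with Py_GenericAlias(), the C-level constructor added alongside types.GenericAlias; a minimal embedding sketch:

    #include <Python.h>

    int main(void)
    {
        Py_Initialize();
        /* Equivalent of list[int]. */
        PyObject *alias = Py_GenericAlias((PyObject *)&PyList_Type,
                                          (PyObject *)&PyLong_Type);
        if (alias != NULL) {
            PyObject *origin = PyObject_GetAttrString(alias, "__origin__");
            PyObject *args = PyObject_GetAttrString(alias, "__args__");
            PyObject_Print(origin, stdout, 0);   /* <class 'list'> */
            printf("\n");
            PyObject_Print(args, stdout, 0);     /* (<class 'int'>,) */
            printf("\n");
            Py_XDECREF(origin);
            Py_XDECREF(args);
            Py_DECREF(alias);
        }
        Py_Finalize();
        return 0;
    }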
Speed up calls to list() by using the PEP 590 vectorcall
calling convention. Patch by Mark Shannon.
Co-authored-by: Mark Shannon <mark@hotpy.org>
Co-authored-by: Dong-hee Na <donghee.na92@gmail.com>
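A C-side sketch of calling list through the PEP 590 convention, where arguments travel in a plain C array with no temporary tuple:

    #include <Python.h>

    int main(void)
    {
        Py_Initialize();
        PyObject *iterable = PyObject_CallFunction((PyObject *)&PyRange_Type,
                                                   "i", 5);    /* range(5) */
        if (iterable != NULL) {
            PyObject *args[] = {iterable};
            PyObject *result = PyObject_Vectorcall((PyObject *)&PyList_Type,
                                                   args, 1, NULL);
            if (result != NULL) {
                PyObject_Print(result, stdout, 0);   /* [0, 1, 2, 3, 4] */
                printf("\n");
                Py_DECREF(result);
            }
            Py_DECREF(iterable);
        }
        Py_Finalize();
        return 0;
    }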
The bulk of this patch was generated automatically with:
for name in \
    PyObject_Vectorcall \
    Py_TPFLAGS_HAVE_VECTORCALL \
    PyObject_VectorcallMethod \
    PyVectorcall_Function \
    PyObject_CallOneArg \
    PyObject_CallMethodNoArgs \
    PyObject_CallMethodOneArg \
;
do
    echo $name
    git grep -lwz _$name | xargs -0 sed -i "s/\b_$name\b/$name/g"
done
old=_PyObject_FastCallDict
new=PyObject_VectorcallDict
git grep -lwz $old | xargs -0 sed -i "s/\b$old\b/$new/g"
and then cleaned up:
- Revert changes in docs & news
- Revert changes to backcompat defines in headers
- Nudge misaligned comments
…nctions with asserts
The actual overflow can never happen because of the following:
* The size of a list can't be greater than PY_SSIZE_T_MAX / sizeof(PyObject*).
* The size of a pointer on all supported platforms is at least 4 bytes.
* ofs is positive and less than the list size at the beginning of each iteration.
https://bugs.python.org/issue35091
Add new trashcan macros to deal with a double deallocation that could occur when the `tp_dealloc` of a subclass calls the `tp_dealloc` of a base class and that base class uses the trashcan mechanism.
Patch by Jeroen Demeyer.
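A sketch of the new macros in a tp_dealloc, using a hypothetical Node type. The second macro argument is what prevents the double deallocation described above: the trashcan only engages when the passed function is the dealloc the object's type actually owns.

    #include <Python.h>

    /* Hypothetical container type, only to show the macro usage. */
    typedef struct {
        PyObject_HEAD
        PyObject *next;              /* may form an arbitrarily long chain */
    } Node;

    static int
    Node_traverse(PyObject *op, visitproc visit, void *arg)
    {
        Py_VISIT(((Node *)op)->next);
        return 0;
    }

    static int
    Node_clear(PyObject *op)
    {
        Py_CLEAR(((Node *)op)->next);
        return 0;
    }

    static void
    Node_dealloc(PyObject *op)
    {
        PyObject_GC_UnTrack(op);
        /* Passing Node_dealloc means a subclass whose tp_dealloc calls
         * this function does not enter the trashcan a second time. */
        Py_TRASHCAN_BEGIN(op, Node_dealloc)
        Py_CLEAR(((Node *)op)->next);    /* may recurse into another dealloc */
        Py_TYPE(op)->tp_free(op);
        Py_TRASHCAN_END
    }

    static PyTypeObject Node_Type = {
        PyVarObject_HEAD_INIT(NULL, 0)
        .tp_name = "example.Node",
        .tp_basicsize = sizeof(Node),
        .tp_dealloc = Node_dealloc,
        .tp_traverse = Node_traverse,
        .tp_clear = Node_clear,
        .tp_free = PyObject_GC_Del,
        .tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,
        .tp_new = PyType_GenericNew,
    };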