Add a special case for `list.extend(dict)` and `list(dict)` so that those
patterns behave atomically with respect to modifications to the list or
dictionary.
This is required by multiprocessing, which assumes that
`list(_finalizer_registry)` is atomic.
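A minimal sketch of the idea (not the actual CPython fast path), assuming the argument is already known to be an exact dict: `PyDict_Next()` never calls back into Python code, so nothing can mutate either object while the new list is being filled.

```c
#include <assert.h>
#include <Python.h>

/* Sketch only: copy an exact dict's keys into a new list in one
 * uninterruptible pass. Illustrative names, not the real implementation. */
static PyObject *
list_from_dict_keys_sketch(PyObject *dict)
{
    assert(PyDict_CheckExact(dict));
    Py_ssize_t n = PyDict_Size(dict);
    PyObject *list = PyList_New(n);
    if (list == NULL) {
        return NULL;
    }
    PyObject *key, *value;
    Py_ssize_t pos = 0, i = 0;
    while (i < n && PyDict_Next(dict, &pos, &key, &value)) {
        PyList_SET_ITEM(list, i, Py_NewRef(key));  /* SET_ITEM steals the new ref */
        i++;
    }
    return list;
}
```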
Rewrote binarysort() for clarity.
Also changed the signature to be more coherent (it was mixing sortslice with raw pointers).
No change in method or functionality. However, I left some experiments in, disabled for now
via `#if` tricks. Since this code was first written, some kinds of comparisons have gotten
enormously faster (like for lists of floats), which changes the tradeoffs.
For example, plain insertion sort's simpler innermost loop and highly predictable branches
leave it very competitive (even beating binary insertion by a bit) when comparisons are
very cheap, even though it can do many more compares. And it wins big on runs that
are already sorted (moving the next element in takes only 1 compare then).
So I left code for a plain insertion sort, to make future experimenting easier.
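For reference, a minimal sketch of what "plain insertion sort using only `<` comparisons" means here (illustrative code, not the disabled experimental code itself):

```c
#include <Python.h>

/* Plain insertion sort over a slice of object pointers. On an already
 * sorted run each new element costs exactly one compare, which is where
 * it beats binary insertion. Returns -1 if a comparison raises. */
static int
plain_insertion_sort_sketch(PyObject **a, Py_ssize_t n)
{
    for (Py_ssize_t i = 1; i < n; i++) {
        PyObject *pivot = a[i];
        Py_ssize_t j = i;
        while (j > 0) {
            int lt = PyObject_RichCompareBool(pivot, a[j - 1], Py_LT);
            if (lt < 0) {
                return -1;          /* comparison raised an exception */
            }
            if (!lt) {
                break;              /* pivot >= a[j-1]: this is its spot */
            }
            a[j] = a[j - 1];        /* shift the larger element right */
            j--;
        }
        a[j] = pivot;
    }
    return 0;
}
```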
Also made the maximum value of minrun a `#define` (`MAX_MINRUN`) to make
experimenting with that easier too.
And another bit of `#if`-disabled code rewrites binary insertion's innermost loop to
remove its unpredictable branch. Surprisingly, this doesn't really seem to help
overall. I'm unclear on why not. It certainly adds more instructions, but they're very
simple, and it's hard to believe they cost as much as a branch miss.
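The kind of rewrite meant by "remove its unpredictable branch" looks roughly like this (a sketch under my own naming, not the `#if`-disabled code): the 0/1 comparison result selects the new search bounds instead of being branched on.

```c
#include <Python.h>

/* Branch-light binary search for the insertion point of `pivot` in the
 * sorted prefix a[0..n). The only data-dependent branch left is the
 * error check; the bound updates typically compile to conditional moves. */
static Py_ssize_t
bisect_right_sketch(PyObject **a, Py_ssize_t n, PyObject *pivot)
{
    Py_ssize_t lo = 0, hi = n;
    while (lo < hi) {
        Py_ssize_t mid = lo + ((hi - lo) >> 1);
        int lt = PyObject_RichCompareBool(pivot, a[mid], Py_LT);
        if (lt < 0) {
            return -1;              /* comparison raised an exception */
        }
        lo = lt ? lo : mid + 1;     /* lt is 0 or 1: pick bounds arithmetically */
        hi = lt ? mid : hi;
    }
    return lo;                      /* first index where pivot < a[index] */
}
```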
* GH-116554: Relax list.sort()'s notion of "descending" run
Rewrote `count_run()` so that sub-runs of equal elements no longer end a descending run. Both ascending and descending runs can have arbitrarily many sub-runs of arbitrarily many equal elements now. This is tricky, because we only use `<` comparisons, so checking for equality doesn't come "for free". Surprisingly, it turned out there's a very cheap (one comparison) way to determine whether an ascending run consisted of all-equal elements. That sealed the deal.
In addition, after a descending run is reversed in-place, we now go on to see whether it can be extended by an ascending run that just happens to be adjacent. This succeeds in finding at least one additional element to append about half the time, and so appears to more than repay its cost (the savings come from getting to skip a binary search, when a short run is artificially forced to length MINRUN later, for each new element `count_run()` can add to the initial run).
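The "one comparison" observation can be illustrated like this (my own sketch, not the actual `count_run()` code): an ascending run only ever established `a[i] <= a[i+1]`, so the whole run is all-equal exactly when the first element is not strictly less than the last.

```c
#include <Python.h>

/* Returns 1 if the weakly ascending run a[0..n) is all-equal, 0 if not,
 * -1 if the comparison raises. Costs a single "<" comparison. */
static int
ascending_run_all_equal_sketch(PyObject **a, Py_ssize_t n)
{
    if (n < 2) {
        return 1;
    }
    int lt = PyObject_RichCompareBool(a[0], a[n - 1], Py_LT);
    if (lt < 0) {
        return -1;
    }
    return !lt;     /* a[0] <= ... <= a[n-1] and not (a[0] < a[n-1])  =>  all equal */
}
```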
While these ideas have been in the back of my mind for years, a question on StackOverflow pushed me to act on them:
https://stackoverflow.com/questions/78108792/
They were wondering why it took about 4x longer to sort a list like:
[999_999, 999_999, ..., 2, 2, 1, 1, 0, 0]
than "similar" lists. Of course that runs very much faster after this patch.
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Pieter Eendebak <pieter.eendebak@gmail.com>
The new `PyList_GetItemRef` is similar to `PyList_GetItem`, but returns
a strong reference instead of a borrowed reference. Additionally, if the
passed "list" object is not a list, the function sets a `TypeError`
instead of calling `PyErr_BadInternalCall()`.
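A minimal usage sketch (hypothetical caller): the strong reference stays valid even if the list is mutated afterwards, unlike the borrowed reference returned by `PyList_GetItem`.

```c
#include <Python.h>

/* Print the first element of a list, holding our own reference to it. */
static int
print_first_item(PyObject *list)
{
    PyObject *item = PyList_GetItemRef(list, 0);   /* new reference, or NULL */
    if (item == NULL) {
        return -1;      /* TypeError if not a list, IndexError if empty */
    }
    int rc = PyObject_Print(item, stdout, 0);
    Py_DECREF(item);    /* we own the reference, so release it */
    return rc;
}
```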
Fix undefined behavior warnings (UBSan -fsanitize=function), for example:
Objects/object.c:674:11: runtime error: call to function list_repr through pointer to incorrect function type 'struct _object *(*)(struct _object *)'
listobject.c:382: note: list_repr defined here
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior Objects/object.c:674:11 in
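The usual shape of this kind of fix (hypothetical extension type, not the exact CPython diff) is to give the slot function the exact signature the slot expects and cast the argument inside, rather than casting the function pointer:

```c
#include <Python.h>

typedef struct {
    PyObject_HEAD
    Py_ssize_t count;
} MyObject;

/* Matches reprfunc (PyObject *(*)(PyObject *)) exactly, so calling it
 * through tp_repr is well defined; the cast happens on the argument. */
static PyObject *
my_repr(PyObject *self)
{
    MyObject *obj = (MyObject *)self;
    return PyUnicode_FromFormat("<My object with %zd items>", obj->count);
}

/* In the type definition: .tp_repr = my_repr  -- no function-pointer cast. */
```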
* Split list_extend() into two sub-functions: list_extend_fast() and
list_extend_iter().
* list_inplace_concat() no longer has to call Py_DECREF() on the
list_extend() result, since list_extend() now returns an int.
gh-106168: Update the size only after setting the item, to avoid temporary inconsistencies.
Also remove the "what's new" sentence regarding the size setting since tuples cannot grow after allocation.
Move private _PyEval functions to the internal C API
(pycore_ceval.h):
* _PyEval_GetBuiltin()
* _PyEval_GetBuiltinId()
* _PyEval_GetSwitchInterval()
* _PyEval_MakePendingCalls()
* _PyEval_SetProfile()
* _PyEval_SetSwitchInterval()
* _PyEval_SetTrace()
No longer export most of these functions.
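For context, a hypothetical consumer of the moved declarations: internal headers are only usable with `Py_BUILD_CORE` defined, which core extension modules opt into roughly like this.

```c
#ifndef Py_BUILD_CORE_BUILTIN
#  define Py_BUILD_CORE_MODULE 1
#endif
#include "Python.h"
#include "pycore_ceval.h"           // _PyEval_SetSwitchInterval(), _PyEval_SetProfile(), ...

static void
set_switch_interval_5ms(void)
{
    _PyEval_SetSwitchInterval(5000);    /* interval is in microseconds */
}
```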
Move private debug _PyObject functions to the internal C API
(pycore_object.h):
* _PyDebugAllocatorStats()
* _PyObject_CheckConsistency()
* _PyObject_DebugTypeStats()
* _PyObject_IsFreed()
No longer export most of these functions, except for
_PyObject_IsFreed().
Move test functions using _PyObject_IsFreed() from _testcapi to
_testinternalcapi. The check_pyobject_is_freed() test no longer catches
_testcapi.error: the tested function cannot raise _testcapi.error.
Remove the following functions from the C API and move them to the internal
C API, adding a new pycore_modsupport.h internal header file:
* PyModule_CreateInitialized()
* _PyArg_NoKwnames()
* _Py_VaBuildStack()
No longer export these functions.
PyTuple_SET_ITEM() and PyList_SET_ITEM() now check the index argument
with an assertion if Python is built in debug mode or is built with
assertions.
* list_extend() and _PyList_AppendTakeRef() now set the list size
before calling PyList_SET_ITEM().
* PyStructSequence_GetItem() and PyStructSequence_SetItem() now check
  the index argument: it must be less than REAL_SIZE(op).
* PyStructSequence_GET_ITEM() and PyStructSequence_SET_ITEM() are now
aliases to PyStructSequence_GetItem() and
PyStructSequence_SetItem().
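The shape of the new check looks roughly like this (illustrative inline function, not the exact header text): the index is validated with assertions before the raw store, so out-of-bounds writes are caught in debug/assert builds at no cost in release builds.

```c
#include <assert.h>
#include <Python.h>

/* Sketch of a bounds-checked PyList_SET_ITEM-style store. */
static inline void
list_set_item_sketch(PyObject *op, Py_ssize_t index, PyObject *value)
{
    assert(PyList_Check(op));
    PyListObject *list = (PyListObject *)op;
    assert(0 <= index);
    assert(index < Py_SIZE(list));
    list->ob_item[index] = value;       /* steals the reference to value */
}
```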
* Eliminate all remaining uses of Py_SIZE and Py_SET_SIZE on PyLongObject, adding asserts.
* Change layout of size/sign bits in longobject to support future addition of immortal ints and tagged medium ints.
* Add functions to hide some internals of long object, and for setting sign and digit count.
* Replace uses of IS_MEDIUM_VALUE macro with _PyLong_IsCompact().
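The general pattern being applied (a toy illustration with made-up names, not CPython's actual PyLongObject layout or API): keep the sign and digit count behind small accessor/setter functions so the bit layout can change later, e.g. for immortal or tagged medium ints, without touching every call site.

```c
#include <stddef.h>

typedef struct {
    size_t tag;                 /* sign in the low bits, digit count above */
    unsigned int digits[1];
} toy_int;

#define TOY_SIGN_MASK   3u      /* 0: positive, 1: zero, 2: negative */
#define TOY_COUNT_SHIFT 2

static inline size_t toy_digit_count(const toy_int *v) { return v->tag >> TOY_COUNT_SHIFT; }
static inline int    toy_is_negative(const toy_int *v) { return (v->tag & TOY_SIGN_MASK) == 2; }

static inline void
toy_set_sign_and_count(toy_int *v, unsigned int sign, size_t count)
{
    v->tag = (count << TOY_COUNT_SHIFT) | sign;
}
```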
When executing the BUILD_LIST opcode, steal the references from the stack,
in a manner similar to the BUILD_TUPLE opcode. Implement this by offloading
the logic to a new private API, _PyList_FromArraySteal(), that works similarly
to _PyTuple_FromArraySteal().
This way, instead of performing multiple stack pointer adjustments while the
list is being initialized, the stack is adjusted only once and a fast memory
copy operation is performed in one fell swoop.
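A sketch of the helper's contract (illustrative body, not the real one): build a list of known size from a C array whose references are stolen, so the elements land in the list with one memcpy instead of being appended one at a time.

```c
#include <string.h>
#include <Python.h>

static PyObject *
list_from_array_steal_sketch(PyObject *const *src, Py_ssize_t n)
{
    PyObject *list = PyList_New(n);
    if (list == NULL) {
        /* The "steal" contract still requires consuming the references. */
        for (Py_ssize_t i = 0; i < n; i++) {
            Py_DECREF(src[i]);
        }
        return NULL;
    }
    if (n > 0) {
        PyListObject *op = (PyListObject *)list;
        memcpy(op->ob_item, src, (size_t)n * sizeof(PyObject *));  /* refs stolen */
    }
    return list;
}
```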
Builtin and extension module functions and methods that expect boolean values for parameters now accept any Python object rather than just a bool or an int. This is more consistent with how native Python code itself behaves.
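In C terms the relaxed behavior corresponds to running the argument through `PyObject_IsTrue()` rather than demanding an exact bool/int (hypothetical module function for illustration):

```c
#include <Python.h>

/* Accepts any object: [] and "" count as false, a non-empty dict as true. */
static PyObject *
set_flag(PyObject *Py_UNUSED(module), PyObject *arg)
{
    int flag = PyObject_IsTrue(arg);    /* -1 if truth testing raises */
    if (flag < 0) {
        return NULL;
    }
    return PyBool_FromLong(flag);
}
```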
__sizeof__() methods implemented with _PyObject_SIZE() now use an unsigned
type (size_t) to compute the size, rather than a signed type (Py_ssize_t).
Explicitly cast signed (Py_ssize_t) values to the unsigned type (size_t).
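A sketch of the resulting pattern (hypothetical __sizeof__ for a list-like object; `_PyObject_SIZE()` from the commit boils down to `tp_basicsize`): do the arithmetic in `size_t`, casting each signed piece explicitly, and convert once at the end.

```c
#include <Python.h>

static PyObject *
listlike_sizeof_sketch(PyObject *self, PyObject *Py_UNUSED(ignored))
{
    PyListObject *op = (PyListObject *)self;
    size_t res = (size_t)Py_TYPE(self)->tp_basicsize;      /* _PyObject_SIZE() */
    res += (size_t)op->allocated * sizeof(PyObject *);     /* explicit cast to unsigned */
    return PyLong_FromSize_t(res);
}
```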