* Rename pycore_byteswap.h to pycore_bitutils.h.
* Move popcount_digit() to pycore_bitutils.h as _Py_popcount32().
* _Py_popcount32() uses the GCC and clang builtin function if available (see the sketch after this list).
* Add unit tests for _Py_popcount32().
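A minimal sketch of such a helper, assuming only that the GCC/clang builtin is preferred when present (names are illustrative, not the actual CPython code):

#include <stdint.h>

/* Illustrative 32-bit population count: compiler builtin when available,
   portable SWAR fallback otherwise. */
static inline int
demo_popcount32(uint32_t x)
{
#if defined(__GNUC__) || defined(__clang__)
    return __builtin_popcount(x);
#else
    x = x - ((x >> 1) & 0x55555555U);
    x = (x & 0x33333333U) + ((x >> 2) & 0x33333333U);
    x = (x + (x >> 4)) & 0x0F0F0F0FU;
    return (int)((x * 0x01010101U) >> 24);
#endif
}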
In debug mode, ensure that free lists are no longer used after being
finalized. Set numfree to -1 in the finalization functions
(e.g. _PyList_Fini()), and then check that numfree is not equal to -1
before using a free list (e.g. list_dealloc()), as sketched below.
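A hedged sketch of that pattern; the demo_* names and the free-list bound are placeholders, not the real CPython identifiers:

#include <Python.h>

#define DEMO_MAXFREELIST 80
static PyObject *demo_free_list[DEMO_MAXFREELIST];
static int demo_numfree = 0;

static void
demo_fini(void)
{
    while (demo_numfree > 0) {
        PyObject_GC_Del(demo_free_list[--demo_numfree]);
    }
#ifdef Py_DEBUG
    demo_numfree = -1;   /* poison: the free list must not be used anymore */
#endif
}

static void
demo_to_freelist(PyObject *op)
{
#ifdef Py_DEBUG
    /* demo_fini() must not run before the last object is released */
    assert(demo_numfree != -1);
#endif
    if (demo_numfree < DEMO_MAXFREELIST) {
        demo_free_list[demo_numfree++] = op;
    }
    else {
        PyObject_GC_Del(op);
    }
}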
Each interpreter now has its own asynchronous generator free lists:
* Move async gen free lists into PyInterpreterState.
* Move _PyAsyncGen_MAXFREELIST define to pycore_interp.h
* Add _Py_async_gen_state structure.
* Add tstate parameter to _PyAsyncGen_ClearFreeLists
and _PyAsyncGen_Fini().
Each interpreter now has its own list free list (a sketch of the pattern follows this list):
* Move list numfree and free_list into PyInterpreterState.
* Add _Py_list_state structure.
* Add tstate parameter to _PyList_ClearFreeList()
and _PyList_Fini().
* Remove "#ifdef EXPERIMENTAL_ISOLATED_SUBINTERPRETERS".
* _PyGC_Fini() clears gcstate->garbage list which can be stored in
the list free list. Call _PyGC_Fini() before _PyList_Fini() to
prevent leaking this list.
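A hedged sketch of the per-interpreter free-list pattern; the exact _Py_list_state layout is only approximated and the demo_* names are placeholders:

#include <Python.h>

#define DEMO_LIST_MAXFREELIST 80

struct demo_list_state {             /* stands in for _Py_list_state */
    PyListObject *free_list[DEMO_LIST_MAXFREELIST];
    int numfree;
};

/* With the state stored in PyInterpreterState, the allocator reaches it
   through the thread state instead of a C static variable. */
static PyListObject *
demo_list_from_freelist(struct demo_list_state *state)
{
    if (state->numfree > 0) {
        return state->free_list[--state->numfree];
    }
    return NULL;   /* caller falls back to a fresh allocation */
}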
Each interpreter now has its own frame free list:
* Move frame free list into PyInterpreterState.
* Add _Py_frame_state structure.
* Add tstate parameter to _PyFrame_ClearFreeList()
and _PyFrame_Fini().
* Remove "#if PyFrame_MAXFREELIST > 0".
* Remove "#ifdef EXPERIMENTAL_ISOLATED_SUBINTERPRETERS".
Each interpreter now has its own float free list:
* Move float numfree and free_list into PyInterpreterState.
* Add _Py_float_state structure.
* Add tstate parameter to _PyFloat_ClearFreeList()
and _PyFloat_Fini().
Each interpreter now has its own tuple free lists:
* Move tuple numfree and free_list arrays into PyInterpreterState.
* Define PyTuple_MAXSAVESIZE and PyTuple_MAXFREELIST macros in
pycore_interp.h.
* Add _Py_tuple_state structure. Pass it explicitly to tuple_alloc().
* Add tstate parameter to _PyTuple_ClearFreeList()
* Each interpreter now has its own empty tuple singleton.
* bpo-29882: Add an efficient popcount method for integers
* Update 'sign bit' and versionadded in docs
* Add entry to whatsnew document
* Doc: use positive example, mention population count
* Minor cleanups of the core code
* Move popcount_digit closer to where it's used
* Use z instead of self after conversion
* Add 'absolute value' and 'population count' to docstring
* Fix clinic error about missing summary line
* Ensure popcount_digit is portable with 64-bit ints
Co-authored-by: Mark Dickinson <dickinsm@gmail.com>
Previously, the result could have been an instance of a subclass of int.
Also revert bpo-26202 and make the attributes start, stop and step of
the range object have exact type int.
Add a private function _PyNumber_Index() which preserves the old behavior
of PyNumber_Index(); for performance, it is used in conversion functions
like PyLong_AsLong().
Convert Py_REFCNT() and Py_SIZE() macros to static inline functions.
They can no longer be used as l-values: use Py_SET_REFCNT() and
Py_SET_SIZE() to set an object's reference count and size.
Replace &Py_SIZE(self) with &((PyVarObject*)self)->ob_size
in arraymodule.c.
This change is backward incompatible on purpose, to prepare the C API
for an opaque PyObject structure.
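A migration sketch; the demo_* function is illustrative:

#include <Python.h>

/* Before: Py_REFCNT(op) = refcnt; Py_SIZE(var) = size;  -- no longer compiles */
static void
demo_update(PyObject *op, PyVarObject *var, Py_ssize_t refcnt, Py_ssize_t size)
{
    Py_SET_REFCNT(op, refcnt);
    Py_SET_SIZE(var, size);
}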
* Fix outdated __int__ and nb_int references in comments
* Also update C-API documentation
* Add back missing 'method' word
* Remove .. deprecated notices
This updates _PyErr_ChainStackItem() to use _PyErr_SetObject()
instead of _PyErr_ChainExceptions(). This prevents a hang in
certain circumstances because _PyErr_SetObject() performs checks
to prevent cycles in the exception context chain while
_PyErr_ChainExceptions() doesn't.
When an asyncio.Task is cancelled, the exception traceback now
starts with where the task was first interrupted. Previously,
the traceback only had "depth one."
Move PyInterpreterState.fs_codec into a new
PyInterpreterState.unicode structure.
Give a name to the fs_codec structure and use this structure in
unicodeobject.c.
The previous commits on bpo-29587 got exception chaining working
with gen.throw() in the `yield` case. This patch also gets the
`yield from` case working.
As a consequence, implicit exception chaining now also works in
the asyncio scenario of awaiting on a task when an exception is
already active.
Tests are included for both the asyncio case and the pure
generator-only case.
Avoid unnecessary overhead in _PyDict_GetItemIdWithError() by calling
_PyDict_GetItem_KnownHash() instead of the more generic PyDict_GetItemWithError(),
since we already know the hash of interned strings.
* bpo-37986: Improve performance of PyLong_FromDouble()
* Use strict bound check for safety and symmetry
* Remove possibly outdated performance claims
Co-authored-by: Mark Dickinson <dickinsm@gmail.com>
Module C state is now accessible from C-defined heap type methods (PEP 573).
Patch by Marcel Plch and Petr Viktorin.
Co-authored-by: Marcel Plch <mplch@redhat.com>
Co-authored-by: Victor Stinner <vstinner@python.org>
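A rough sketch of what this enables, assuming a heap type created with PyType_FromModuleAndSpec(); the state layout and demo_* names are illustrative:

#include <Python.h>

typedef struct {
    int counter;                      /* illustrative module state */
} demo_state;

/* A METH_METHOD | METH_FASTCALL | METH_KEYWORDS method receives its
   defining class, so the module state is reachable even from a heap type. */
static PyObject *
demo_bump(PyObject *self, PyTypeObject *defining_class,
          PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames)
{
    demo_state *state = PyType_GetModuleState(defining_class);
    if (state == NULL) {
        return NULL;
    }
    state->counter++;
    return PyLong_FromLong(state->counter);
}

static PyMethodDef demo_methods[] = {
    {"bump", (PyCFunction)(void (*)(void))demo_bump,
     METH_METHOD | METH_FASTCALL | METH_KEYWORDS,
     "Increment and return the module-level counter."},
    {NULL, NULL, 0, NULL},
};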
When Python is built with experimental isolated interpreters, disable
the list free list.
Temporary workaround until this cache is made per-interpreter.
When Python is built with experimental isolated interpreters, disable
the type method cache.
Temporary workaround until the cache is made per-interpreter.
When Python is built with experimental isolated interpreters, disable
the tuple, dict and frame free lists.
Temporary workaround until these caches are made per-interpreter.
Add frame_alloc() and frame_get_builtins() subfunctions to simplify
_PyFrame_New_NoTrack().
When Python is built in the experimental isolated subinterpreters
mode, disable Unicode singletons and Unicode interned strings since
they are shared by all interpreters.
Temporary workaround until these caches are made per-interpreter.
_PyErr_ChainExceptions() now ensures that the first parameter is an
exception type, as done by _PyErr_SetObject().
* The following functions now check PyExceptionInstance_Check() in an
assertion using a new _PyBaseExceptionObject_cast() helper
function:
* PyException_GetTraceback(), PyException_SetTraceback()
* PyException_GetCause(), PyException_SetCause()
* PyException_GetContext(), PyException_SetContext()
* PyExceptionClass_Name() now checks PyExceptionClass_Check() with an
assertion.
* Remove XXX comment and add gi_exc_state variable to _gen_throw().
* Remove comment from test_generators
This is a follow-up to GH-19823 that removes the check that the
exception value isn't NULL, prior to calling _PyErr_ChainExceptions().
This enables implicit exception chaining for gen.throw() in more
circumstances.
The commit also adds a test that a particular code snippet involving
gen.throw() doesn't crash. The test shows why the new
`gi_exc_state.exc_type != Py_None` check that was added is necessary.
Without the new check, the code snippet (as well as a number of other
tests) crashes on certain platforms (e.g. Fedora but not Mac).
Before this commit, if an exception was active inside a generator
when calling gen.throw(), that exception was lost (i.e. there was
no implicit exception chaining). This commit fixes that by
setting exc.__context__ when calling gen.throw(exc).
Before this commit, if an exception was active inside a generator
when calling gen.throw(), then that exception was lost (i.e. there
was no implicit exception chaining). This commit fixes that.
New PyFrame_GetBack() function: get a frame's next outer frame.
Replace frame->f_back with PyFrame_GetBack(frame) in most code but
frameobject.c, ceval.c and genobject.c.
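A usage sketch, relying on PyFrame_GetBack() returning a strong reference (or NULL for the outermost frame); the demo_* name is illustrative:

#include <Python.h>

/* Count how many frames are on the stack starting from `frame`. */
static Py_ssize_t
demo_stack_depth(PyFrameObject *frame)
{
    Py_ssize_t depth = 0;
    Py_XINCREF(frame);
    while (frame != NULL) {
        PyFrameObject *back = PyFrame_GetBack(frame);   /* strong reference */
        Py_DECREF(frame);
        frame = back;
        depth++;
    }
    return depth;
}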
Remove the following functions from the C API:
* PyAsyncGen_ClearFreeLists()
* PyContext_ClearFreeList()
* PyDict_ClearFreeList()
* PyFloat_ClearFreeList()
* PyFrame_ClearFreeList()
* PyList_ClearFreeList()
* PySet_ClearFreeList()
* PyTuple_ClearFreeList()
Make these functions private, move them to the internal C API and
change their return type to void.
Call PyGC_Collect() explicitly to free all free lists (see the example below).
Note: PySet_ClearFreeList() did nothing.
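A minimal example of the replacement for embedders that used those calls to trim memory:

#include <Python.h>

static void
demo_release_caches(void)
{
    /* Previously: PyTuple_ClearFreeList(); PyDict_ClearFreeList(); ...
       A full collection now also clears the internal free lists. */
    PyGC_Collect();
}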
PyFrame_GetCode(frame): return a borrowed reference to the frame
code.
Replace frame->f_code with PyFrame_GetCode(frame) in most code,
except in frameobject.c, genobject.c and ceval.c.
Also add PyFrame_GetLineNumber() to the limited C API.
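A usage sketch; as noted above, the code object is a borrowed reference at this point, and the demo_* name is illustrative:

#include <Python.h>
#include <stdio.h>

static void
demo_report_location(PyFrameObject *frame)
{
    PyCodeObject *code = PyFrame_GetCode(frame);        /* borrowed reference */
    int lineno = PyFrame_GetLineNumber(frame);
    const char *filename = PyUnicode_AsUTF8(code->co_filename);
    if (filename == NULL) {
        PyErr_Clear();
        filename = "<unknown>";
    }
    fprintf(stderr, "%s:%d\n", filename, lineno);
}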
Add a new separate pyframe.h header file for the PyFrame public C
API: it is included by Python.h.
Add PyFrame_GetLineNumber() to the limited C API.
Replace "struct _frame" with "PyFrameObject" in header files.
PyFrameObject is now defined as struct _frame by pyframe.h which is
included early enough in Python.h.
Added str.removeprefix and str.removesuffix methods and corresponding
bytes, bytearray, and collections.UserString methods to remove affixes
from a string if present. See PEP 616 for a full description.
* Replace PY_INT64_T with int64_t
* Replace PY_UINT32_T with uint32_t
* Replace PY_UINT64_T with uint64_t
sha3module.c no longer checks if PY_UINT64_T is defined since it's
always defined and uint64_t is always available on platforms
supported by Python.
Add a new internal pycore_byteswap.h header file with the following
functions:
* _Py_bswap16()
* _Py_bswap32()
* _Py_bswap64()
Use these functions in the _ctypes, sha256 and sha512 modules, and
also in the UTF-32 encoder.
sha256, sha512 and _ctypes modules are now built with the internal
C API.
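A sketch of a byte-swap helper in this spirit; the guard macros around the builtin are an assumption:

#include <stdint.h>

static inline uint32_t
demo_bswap32(uint32_t word)
{
#if defined(__GNUC__) || defined(__clang__)
    return __builtin_bswap32(word);
#else
    return ((word & 0x000000FFu) << 24)
         | ((word & 0x0000FF00u) << 8)
         | ((word & 0x00FF0000u) >> 8)
         | ((word & 0xFF000000u) >> 24);
#endif
}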
Rename _PyInterpreterState_GET_UNSAFE() to _PyInterpreterState_GET()
for consistency with _PyThreadState_GET() and to have a shorter name
(it helps to fit into 80 columns).
Add also "assert(tstate != NULL);" to the function.
Don't access PyInterpreterState.config member directly anymore, but
use new functions:
* _PyInterpreterState_GetConfig()
* _PyInterpreterState_SetConfig()
* _Py_GetConfig()
* Keep the constness of pointer objects.
Also move an auto variable that became const into its innermost necessary scope.
* Move the definition.
Co-authored-by: Benjamin Peterson <benjamin@python.org>
Always declare PyIndex_Check() as an opaque function to hide
implementation details: remove the PyIndex_Check() macro. The macro
directly accessed the PyTypeObject.tp_as_number member.
Add the _PyIndex_Check() function to the internal C API: a fast inlined
version of PyIndex_Check().
Add Include/internal/pycore_abstract.h header file.
Replace PyIndex_Check() with _PyIndex_Check() in C files of Objects
and Python subdirectories.
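The inlined check is essentially a NULL test on the nb_index slot; a sketch of the equivalent logic:

#include <Python.h>

static inline int
demo_index_check(PyObject *obj)
{
    PyNumberMethods *nb = Py_TYPE(obj)->tp_as_number;
    return (nb != NULL && nb->nb_index != NULL);
}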
PyType_HasFeature() now always calls PyType_GetFlags() to hide
implementation details. Previously, it directly accessed the
PyTypeObject.tp_flags member when the limited C API was not used.
Add fast inlined version _PyType_HasFeature() and _PyType_IS_GC()
for object.c and typeobject.c.
The PyObject_NEW() macro becomes an alias to the PyObject_New()
macro, and the PyObject_NEW_VAR() macro becomes an alias to the
PyObject_NewVar() macro, to hide implementation details. They no
longer directly access the PyTypeObject.tp_basicsize member.
Exclude _PyObject_SIZE() and _PyObject_VAR_SIZE() macros from
the limited C API.
Replace PyObject_NEW() with PyObject_New() and replace
PyObject_NEW_VAR() with PyObject_NewVar().
This implements things like `list[int]`,
which returns an object of type `types.GenericAlias`.
This object mostly acts as a proxy for `list`,
but has attributes `__origin__` and `__args__`
that allow recovering the parts (with values `list` and `(int,)`).
There is also an approximate notion of type variables;
e.g. `list[T]` has a `__parameters__` attribute equal to `(T,)`.
Type variables are objects of type `typing.TypeVar`.
str.encode() and str.decode() no longer check the encoding and errors
in development mode or in debug mode during Python finalization. The
codecs machinery can no longer work on very late calls to
str.encode() and str.decode().
This change should help to call _PyObject_Dump() to debug during late
Python finalization.
Convert the PyObject_GET_WEAKREFS_LISTPTR() macro to a function to
hide implementation details: the macro directly accessed the
PyTypeObject.tp_weaklistoffset member.
Add _PyObject_GET_WEAKREFS_LISTPTR() static inline function to the
internal C API.
Speed up calls to list() by using the PEP 590 vectorcall
calling convention. Patch by Mark Shannon.
Co-authored-by: Mark Shannon <mark@hotpy.org>
Co-authored-by: Dong-hee Na <donghee.na92@gmail.com>
Extension modules: m_traverse, m_clear and m_free functions of
PyModuleDef are no longer called if the module state was requested
but is not allocated yet. This is the case immediately after the
module is created and before the module is executed (Py_mod_exec
function). More precisely, these functions are not called if m_size is
greater than 0 and the module state (as returned by
PyModule_GetState()) is NULL.
Extension modules without module state (m_size <= 0) are not affected.
Co-Authored-By: Petr Viktorin <encukou@gmail.com>
Replace the _PyInterpreterState_Get() function call with the
_PyInterpreterState_GET_UNSAFE() macro, which is more efficient but
doesn't check whether tstate or interp is NULL.
_Py_GetConfigsAsDict() now uses _PyThreadState_GET().
The Py_TRASHCAN_BEGIN_CONDITION and Py_TRASHCAN_END macros no longer
access PyThreadState attributes, but call new private
_PyTrash_begin() and _PyTrash_end() functions which hide
implementation details.
Move the static inline function flavor of Py_EnterRecursiveCall() and
Py_LeaveRecursiveCall() to the internal C API: they access
PyThreadState attributes. The limited C API provides regular
functions which hide implementation details.
We make `|=` raise TypeError, since it would be surprising if `C.__dict__ |= {'x': 0}` silently did nothing, while `C.__dict__.update({'x': 0})` is an error.
The Py_FatalError() function is replaced with a macro that
automatically logs the name of the current function, unless the
Py_LIMITED_API macro is defined (see the sketch below).
Changes:
* Add _Py_FatalErrorFunc() function.
* Remove the function name from the message of Py_FatalError() calls
which included the function name.
* Update tests.
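Roughly, the non-limited-API spelling forwards the current function name via __func__; a sketch:

/* Sketch, mirroring the described behavior: outside the limited C API the
   macro captures __func__ so the log names the calling function. */
#ifndef Py_LIMITED_API
#  define Py_FatalError(message) _Py_FatalErrorFunc(__func__, message)
#endif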
This continues the `range()` part of #13930. The complete pull request is stalled on discussions around dicts, but `range()` should not be controversial. (And I plan to open PRs for other parts if this is merged.)
On top of Mark's change, I unified `range_new` and `range_vectorcall`, which had a lot of duplicate code.
https://bugs.python.org/issue37207
The constants `RESTRICTED` and `PY_WRITE_RESTRICTED` no longer have a meaning in Python 3. Therefore, CPython should not use them.
CC @matrixise
https://bugs.python.org/issue36347
The fix for [bpo-39386](https://bugs.python.org/issue39386) attempted to make it so you couldn't reuse an
agen.aclose() coroutine object. It accidentally also prevented you
from calling aclose() at all on an async generator that was already
closed or exhausted. This commit fixes it so we're only blocking the
actually illegal cases, while allowing the legal cases.
The new tests failed before this patch. Also confirmed that this fixes
the test failures we were seeing in Trio with Python dev builds:
https://github.com/python-trio/trio/pull/1396
https://bugs.python.org/issue39606
Move the dtoa.h header file to the internal C API as pycore_dtoa.h:
it only contains private functions (prefixed by "_Py").
The math and cmath modules must now be compiled with the
Py_BUILD_CORE macro defined.
Move the bytes_methods.h header file to the internal C API as
pycore_bytes_methods.h: it only contains private symbols (prefixed by
"_Py"), except of the PyDoc_STRVAR_shared() macro.
gcc -Wcast-qual turns up a number of instances of casting away constness of pointers. Some of these can be safely modified, by either:
Adding the const to the type cast, as in:
- return _PyUnicode_FromUCS1((unsigned char*)s, size);
+ return _PyUnicode_FromUCS1((const unsigned char*)s, size);
or, Removing the cast entirely, because it's not necessary (but probably was at one time), as in:
- PyDTrace_FUNCTION_ENTRY((char *)filename, (char *)funcname, lineno);
+ PyDTrace_FUNCTION_ENTRY(filename, funcname, lineno);
These changes do not change behavior, but they make it much easier to check for const-correctness errors.
The bulk of this patch was generated automatically with:
for name in \
PyObject_Vectorcall \
Py_TPFLAGS_HAVE_VECTORCALL \
PyObject_VectorcallMethod \
PyVectorcall_Function \
PyObject_CallOneArg \
PyObject_CallMethodNoArgs \
PyObject_CallMethodOneArg \
;
do
echo $name
git grep -lwz _$name | xargs -0 sed -i "s/\b_$name\b/$name/g"
done
old=_PyObject_FastCallDict
new=PyObject_VectorcallDict
git grep -lwz $old | xargs -0 sed -i "s/\b$old\b/$new/g"
and then cleaned up:
- Revert changes to in docs & news
- Revert changes to backcompat defines in headers
- Nudge misaligned comments
Add a fast-path for UTF-8 encoding in PyUnicode_EncodeFSDefault()
and PyUnicode_DecodeFSDefaultAndSize().
Add _PyUnicode_FiniEncodings() helper function for _PyUnicode_Fini().
In the limited C API, PyObject_INIT() and PyObject_INIT_VAR() are now
defined as aliases to PyObject_Init() and PyObject_InitVar() to make
their implementation opaque. This avoids leaking implementation details
in the limited C API.
Exclude the following functions from the limited C API, move them
from object.h to cpython/object.h:
* _Py_NewReference()
* _Py_ForgetReference()
* _PyTraceMalloc_NewReference()
* _Py_GetRefTotal()
The macro is defined after Py_DECREF() and so is no longer used by
Py_DECREF().
Moving _Py_Dealloc() macro back from cpython/object.h to object.h
would require moving a lot of definitions as well: PyTypeObject and
many related types used by PyTypeObject.
Keep _Py_Dealloc() as an opaque function call to avoid leaking
implementation details in the limited C API (object.h): remove
_Py_Dealloc() macro from cpython/object.h.
_Py_NewReference() becomes a regular opaque function, rather than a
static inline function in the C API (object.h), to better hide
implementation details.
Move _Py_tracemalloc_config from public pymem.h to internal
pycore_pymem.h header.
Make _Py_AddToAllObjects() private.
Restore PyObject_IsInstance() comment explaining why only tuples of
types are accepted, but not general sequences. The comment was written by
Guido van Rossum in commit 03290ecbf1,
which implements isinstance(x, (A, B, ...)). The comment was lost in
a PyObject_IsInstance() optimization:
commit ec569b7947.
Also clean up the code: recursive_isinstance() is no longer recursive,
so rename it to object_isinstance(), whereas object_isinstance() is
recursive and so rename it to object_recursive_isinstance().
* Remove _Py_INC_REFTOTAL and _Py_DEC_REFTOTAL macros: modify
directly _Py_RefTotal.
* _Py_ForgetReference() is no longer defined if the Py_TRACE_REFS
macro is not defined.
* Remove _Py_NewReference() implementation from object.c:
unify the two implementations in object.h inline function.
* Fix Py_TRACE_REFS build: _Py_INC_TPALLOCS() macro has been removed.
Replace Py_FatalError() calls with _PyErr_WriteUnraisableMsg(),
_PyObject_ASSERT_FAILED_MSG() or Py_UNREACHABLE()
in unicode_dealloc() and unicode_release_interned().
Rename init_slotdefs() to _PyTypes_InitSlotDefs() and add a return
value of type PyStatus. The function is now called exactly once from
_PyTypes_Init(). Replace calls to init_slotdefs() with an assertion
checking that slotdefs is initialized.
Replace Py_FatalError() with _PyObject_ASSERT_FAILED_MSG() in
object.c and typeobject.c to also dump the involved Python object on
a fatal error. It should ease debug when such fatal error occurs.
If the double linked list is inconsistent, _Py_ForgetReference() no
longer dumps previous and next objects in the fatal error, it now
only dumps the current object. It ensures that the error message
is displayed even if dumping the object does crash Python.
Enhance _Py_ForgetReference() error messages;
_PyObject_ASSERT_FAILED_MSG() logs the "_Py_ForgetReference" function
name.
Improve multi-threaded performance by dropping the GIL in the fast path
of bytes.join. To avoid increasing overhead for small joins, it is only
done if the output size exceeds a threshold.
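A schematic of the approach only; the threshold, names and signature below are illustrative, not the actual implementation:

#include <Python.h>
#include <string.h>

#define DEMO_DROP_GIL_THRESHOLD (1024 * 1024)   /* illustrative, not the real value */

/* Schematic only: the caller holds the GIL, all sizes were measured and the
   destination buffer was allocated beforehand, so the raw copies can run
   without the GIL when the output is large enough to amortize the cost. */
static void
demo_copy_parts(char *dest, char *const *parts, const Py_ssize_t *sizes,
                Py_ssize_t n, Py_ssize_t total_size)
{
    if (total_size >= DEMO_DROP_GIL_THRESHOLD) {
        Py_BEGIN_ALLOW_THREADS
        for (Py_ssize_t i = 0; i < n; i++) {
            memcpy(dest, parts[i], (size_t)sizes[i]);
            dest += sizes[i];
        }
        Py_END_ALLOW_THREADS
    }
    else {
        for (Py_ssize_t i = 0; i < n; i++) {
            memcpy(dest, parts[i], (size_t)sizes[i]);
            dest += sizes[i];
        }
    }
}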
intern_strings() now raises a SystemError, rather than calling
Py_FatalError().
intern_string_constants() now reports exceptions to the caller,
rather than silently ignoring exceptions.
If the export count is negative, _memory_release() now raises a
SystemError and returns -1, rather than calling Py_FatalError()
which aborts the process.
If PyModule_Create2() is called when the Python import machinery is
not initialized, it now raises a SystemError and returns NULL,
instead of calling Py_FatalError() which aborts the process.
The caller must be prepared to handle NULL anyway.
Replace int with intptr_t to fix the warning:
objects\frameobject.c(341): warning C4244: 'initializing':
conversion from '__int64' to 'int', possible loss of data
The public API symbols being removed are:
* _PyBytes_InsertThousandsGroupingLocale
* _PyBytes_InsertThousandsGrouping
* _Py_InitializeFromArgs
* _Py_InitializeFromWideArgs
* _PyFloat_Repr
* _PyFloat_Digits
* _PyFloat_DigitsInit
* PyFrame_ExtendStack
* _PyAIterWrapper_Type
* PyNullImporter_Type
* PyCmpWrapper_Type
* PySortWrapper_Type
* PyNoArgsFunction
Each Python subinterpreter now has its own "small integer
singletons": the numbers in the [-5; 256] range.
It is no longer possible to change the number of small integers at
build time by overriding the NSMALLNEGINTS and NSMALLPOSINTS macros:
the macros must now be modified manually in the pycore_pystate.h
header file.
For now, continue to share _PyLong_Zero and _PyLong_One singletons
between all subinterpreters.
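A hedged sketch of the per-interpreter lookup; the array parameter stands in for the interpreter-state field, and the bounds mirror the [-5; 256] range:

#include <Python.h>
#include <assert.h>

#define DEMO_NSMALLNEGINTS 5      /* -5 .. -1 */
#define DEMO_NSMALLPOSINTS 257    /*  0 .. 256 */

/* The singleton array now lives in the interpreter state rather than in a
   process-wide static; the parameter below stands in for that field. */
static PyObject *
demo_get_small_int(PyObject *const small_ints[], long value)
{
    assert(-DEMO_NSMALLNEGINTS <= value && value < DEMO_NSMALLPOSINTS);
    PyObject *v = small_ints[value + DEMO_NSMALLNEGINTS];
    Py_INCREF(v);
    return v;
}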
Remove BEGIN_FINALLY, END_FINALLY, CALL_FINALLY and POP_FINALLY bytecodes. Implement finally blocks by code duplication.
Reimplement frame.lineno setter using line numbers rather than bytecode offsets.
Remove PyMethod_ClearFreeList() and PyCFunction_ClearFreeList()
functions: the free lists of bound method objects have been removed.
Remove also _PyMethod_Fini() and _PyCFunction_Fini() functions.
The PyFPE_START_PROTECT() and PyFPE_END_PROTECT() macros are empty:
they have been doing nothing for the last year (since commit
735ae8d139), so stop using them.
Ignore `GeneratorExit` exceptions when throwing an exception into the `aclose` coroutine of an asynchronous generator.
https://bugs.python.org/issue35409
* Add _PyObject_VectorcallTstate() function: similar to
_PyObject_Vectorcall(), but with tstate parameter
* Add tstate parameter to _PyObject_MakeTpCall()
bpo-3605, bpo-38733: Optimize _PyErr_Occurred(): remove "tstate ==
NULL" test.
Py_FatalError() no longer calls PyErr_Occurred() if called without
holding the GIL. So PyErr_Occurred() no longer has to support
tstate==NULL case.
_Py_CheckFunctionResult(): use _PyErr_Occurred() directly to avoid
an explicit "!= NULL" test.
Additional note: the `method_check_args` function in `Objects/descrobject.c` is written in such a way that it applies to all kinds of descriptors. In particular, a future re-implementation of `wrapper_descriptor` could use that code.
CC @vstinner @encukou
https://bugs.python.org/issue37645
Automerge-Triggered-By: @encukou
* Add _Py_EnterRecursiveCall() and _Py_LeaveRecursiveCall() which
require a tstate argument.
* Pass tstate to _Py_MakeRecCheck() and _Py_CheckRecursiveCall().
* Convert Py_EnterRecursiveCall() and Py_LeaveRecursiveCall() macros
to static inline functions.
_PyThreadState_GET() is the most efficient way to get the tstate, and
so using it with _Py_EnterRecursiveCall() and
_Py_LeaveRecursiveCall() should be a little bit more efficient than
using Py_EnterRecursiveCall() and Py_LeaveRecursiveCall() which use
the "slower" PyThreadState_GET().
The implementation of weakref.proxy's methods calls back into the Python
API using a borrowed reference to the weakly referenced object
(acquired via PyWeakref_GET_OBJECT). This API call may delete the last
reference to the object (either directly or via GC), leaving a dangling
pointer, which can be subsequently dereferenced.
To fix this, claim temporary ownership of the referenced object when
calling the appropriate method. Some methods do not currently need to
access the borrowed referent, but to protect against future changes to
these functions, ownership is claimed in all potentially affected
methods.
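A sketch of the fix pattern, using attribute access as the example method; the demo_* names are illustrative:

#include <Python.h>

/* Take a temporary strong reference to the referent before calling back
   into the Python API, so the callback cannot leave a dangling pointer. */
static PyObject *
demo_proxy_getattr(PyObject *proxy, PyObject *name)
{
    PyObject *obj = PyWeakref_GET_OBJECT(proxy);   /* borrowed */
    if (obj == Py_None) {
        PyErr_SetString(PyExc_ReferenceError,
                        "weakly-referenced object no longer exists");
        return NULL;
    }
    Py_INCREF(obj);                                /* temporary ownership */
    PyObject *result = PyObject_GetAttr(obj, name);
    Py_DECREF(obj);
    return result;
}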
It is similar to the more general code in the gc module, but
here we know the name of the module.
https://bugs.python.org/issue33714
Automerge-Triggered-By: @encukou
Some objects like Py_None are not initialized with conventional means
that prepare the circular linked list pointers, leaving them unlinked
from the rest of the objects. For those objects, NULL pointers do
not mean that they are freed, so we need to skip the check in those
cases.
bpo-36389, bpo-38376: The _PyObject_CheckConsistency() function is
now also available in release mode. For example, it can be used to
debug a crash in the visit_decref() function of the GC.
Modify the following functions to also work in release mode:
* _PyDict_CheckConsistency()
* _PyObject_CheckConsistency()
* _PyType_CheckConsistency()
* _PyUnicode_CheckConsistency()
Other changes:
* _PyMem_IsPtrFreed(ptr) now also returns 1 if ptr is NULL
(equal to 0).
* _PyBytesWriter_CheckConsistency() now returns 1 and is only used
with assert().
* Reorder _PyObject_Dump() to write safe fields first, and only
attempt to render repr() at the end.
bpo-37802, bpo-38321: Fix the following warnings:
longobject.c(420): warning C4244: 'function': conversion from
'unsigned __int64' to 'sdigit', possible loss of data
longobject.c(428): warning C4267: 'function': conversion from
'size_t' to 'sdigit', possible loss of data
Document that lnotab can contain invalid bytecode offsets (because of
terrible reasons that are difficult to fix). Make dis.findlinestarts()
ignore invalid offsets in lnotab. All other uses of lnotab in CPython
(various reimplementations of addr2line or line2addr in Python, C and gdb)
already ignore this, because they take an address to look for, instead.
Add tests for the result of dis.findlinestarts() on wacky constructs in
test_peepholer.py, because it's the easiest place to add them.
Even when the helper has not been started yet.
This behavior follows that of conventional generators.
There is no reason for `async_generator_athrow` to handle `gen.throw()` differently.
https://bugs.python.org/issue38013
In ArgumentClinic, value "NULL" should now be used only for unrepresentable default values
(like in the optional third parameter of getattr). "None" should be used if None is accepted
as argument and passing None has the same effect as not passing the argument at all.
* Fix a crash in comparing with float (and maybe other crashes).
* They are now never equal to strings or non-integer numbers.
* Comparison with a large number no longer raises OverflowError.
* Arbitrary exceptions are no longer silenced in constructors and comparisons.
* The TypeError raised in the constructor now contains the name of the type.
* Accept only ChannelID and int-like objects in channel functions.
* Accept only InterpreterId, int-like objects and str in the InterpreterId constructor.
* Accept int-like objects, not just int, in interpreter-related functions.
All call sites pass NULL for `recode_encoding`, so this path is
completely untested. That's been true since before Python 3.0.
It adds significant complexity to this logic, so it's best to
take it out.
All call sites now have a literal NULL, and that's been true since
commit 768921cf3 eliminated a conditional (`foo ? bar : NULL`) at
the call site in Python/ast.c where we're parsing a bytes literal.
But even before then, that condition `foo` had been a constant
since unadorned string literals started meaning Unicode, in commit
572dbf8f1 aka v3.0a1~1035 .
The `unicode` parameter is already unused, so mark it as unused too.
The code that acted on it was also taken out before Python 3.0, in
commit 8d30cc014 aka v3.0a1~1031 .
The function (PyBytes_DecodeEscape) is exposed in the API, but it's
never been documented.
bpo-37151: remove special case for PyCFunction from PyObject_Call
Also, make the undocumented function PyCFunction_Call an alias
of PyObject_Call and deprecate it.
The instance destructor for a type is responsible for preparing
an instance for deallocation by decrementing the reference counts
of its referents.
If an instance belongs to a heap type, the type object of an instance
has its reference count decremented while for static types, which
are permanently allocated, the type object is unaffected by the
instance destructor.
Previously, the default instance destructor searched the class
hierarchy for an inherited instance destructor and, if present,
would invoke it.
Then, if the instance type is a heap type, it would decrement the
reference count of that heap type. However, this could result in the
premature destruction of a type because the inherited instance
destructor should have already decremented the reference count
of the type object.
This change avoids the premature destruction of the type object
by suppressing the decrement of its reference count when an
inherited, non-default instance destructor has been invoked.
Finally, an assertion on the Py_SIZE of a type was deleted. Heap
types have a non-zero size, making this an incorrect assertion.
https://github.com/python/cpython/pull/15323
This is the sort of `goto` that requires the reader to stare hard at
the code to unpick what it's doing.
On doing so, the answer is... not very much!
* It jumps from the bottom of the loop to almost the top; the effect
is to bypass the loop condition `s < end` and also the
`if`-condition `*s != '\\'`, acting as if both are true.
* We've just decremented `s`, after incrementing it in the `switch`
condition. So it has the same value as when `s == end` failed.
Before that was another increment... and before that we had
`s < end`. So `s < end` true, then increment, then `s == end`
false... that means `s < end` is still true.
* Also this means `s` points to the same character as it did for the
`switch` condition. And there was a `case '\\'`, which we didn't
hit -- so `*s != '\\'` is also true.
* That means this has no effect on the behavior! The most it might do
is an optimization -- we get to skip those two checks, because (as
just proven above) we know they're true.
* But gosh, this is the *invalid escape sequence* path. This does not
seem like the kind of code path that calls for extreme optimization
tricks.
So, take the `goto` and the label out.
Perhaps the compiler will notice the exact same facts we showed above,
and generate identical code. Or perhaps it won't! That'll be OK.
But then, crucially, if some future edit to this loop causes the
reasoning above to *stop* holding true... the compiler will adjust
this jump accordingly. One of us fallible humans might not.
* Use the 'p' format unit instead of manually calling PyObject_IsTrue() (example below).
* Pass boolean values instead of 0/1 integers to functions that expect a boolean.
* Convert some arguments to boolean only once.
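A small example of the 'p' format unit; the demo_* function is illustrative:

#include <Python.h>

/* The 'p' format unit applies PyObject_IsTrue() and stores the result in
   a C int, so the caller no longer converts truthiness by hand. */
static PyObject *
demo_set_flag(PyObject *self, PyObject *args)
{
    int enabled = 0;
    if (!PyArg_ParseTuple(args, "p:set_flag", &enabled)) {
        return NULL;
    }
    return PyBool_FromLong(enabled);
}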
- drop TargetScopeError in favour of raising SyntaxError directly
as per the updated PEP 572
- comprehension iteration variables are explicitly local, but
named expression targets in comprehensions are nonlocal or
global. Raise SyntaxError as specified in PEP 572
- named expression targets in the outermost iterable of a
comprehension have an ambiguous target scope. Avoid resolving
that question now by raising SyntaxError. PEP 572
originally required this only for cases where the bound name
conflicts with the iteration variable in the comprehension,
but CPython can't easily restrict the exception to that case
(as it doesn't know the target variable names when visiting
the outermost iterator expression)
pymalloc_alloc() now returns the pointer directly, and returns NULL
on a memory allocation error.
allocate_from_new_pool() already uses NULL as a marker for "allocation
failed".
The fact that keyword names are strings is now part of the vectorcall and `METH_FASTCALL` protocols. The biggest concrete change is that `_PyStack_UnpackDict` now checks that and raises `TypeError` if not.
CC @markshannon @vstinner
https://bugs.python.org/issue37540
Base PR for other PRs that want to play with `type.__call__` such as #13930 and #14589.
The author is really @markshannon; I just made the PR.
https://bugs.python.org/issue37207
Automerge-Triggered-By: @encukou
PyObject_Malloc() and PyObject_Free() now partially inline
pymalloc_alloc and pymalloc_free.
When PGO is not used, the compiler doesn't know where the hot path is
in pymalloc_alloc and pymalloc_free.
Keeping an account of allocated blocks slows down _PyObject_Malloc()
and _PyObject_Free() by a measurable amount. Have
_Py_GetAllocatedBlocks() iterate over the arenas to sum up the
allocated blocks for pymalloc.
In development mode and in debug build, encoding and errors arguments
are now checked on string encoding and decoding operations. Examples:
open(), str.encode() and bytes.decode().
By default, for best performance, the errors argument is only
checked at the first encoding/decoding error, and the encoding
argument is sometimes ignored for empty strings.
* The UTF-8 incremental decoder now fails fast if it encounters
a sequence that can't be handled by the error handler.
* The UTF-16 incremental decoder with the surrogatepass error
handler now decodes a lone low surrogate with final=False.
Add a new public PyObject_CallNoArgs() function to the C API: call a
callable Python object without any arguments.
It is the most efficient way to call a callback without any argument.
On x86-64, for example, PyObject_CallFunctionObjArgs(func, NULL)
allocates 960 bytes on the stack per call, whereas
PyObject_CallNoArgs(func) only allocates 624 bytes per call.
It is excluded from stable ABI 3.8.
Replace private _PyObject_CallNoArg() with public
PyObject_CallNoArgs() in C extensions: _asyncio, _datetime,
_elementtree, _pickle, _tkinter and readline.
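A usage sketch:

#include <Python.h>

static PyObject *
demo_invoke(PyObject *callback)
{
    /* Equivalent to PyObject_CallFunctionObjArgs(callback, NULL),
       but cheaper: no varargs machinery, no argument packing. */
    return PyObject_CallNoArgs(callback);
}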
GH-14039: allow (no more than) one wholly empty arena on the usable_arenas list.
This prevents thrashing in some easily-provoked simple cases that could end up creating and destroying an arena on each loop iteration in client code. Intuitively, if the only arena on the list becomes empty, it makes scant sense to give it back to the system unless we know we'll never need another free pool again before another arena frees a pool. If the latter obtains, then - yes - this will "waste" an arena.
When inheriting a heap subclass from a vectorcall class that sets
`.tp_call=PyVectorcall_Call` (as recommended in PEP 590), the subclass does
not inherit `_Py_TPFLAGS_HAVE_VECTORCALL`, and thus `PyVectorcall_Call` does
not work for it.
This attempts to solve the issue by:
* always inheriting `tp_vectorcall_offset` unless `tp_call` is overridden
in the subclass
* inheriting _Py_TPFLAGS_HAVE_VECTORCALL for static types, unless `tp_call`
is overridden
* making `PyVectorcall_Call` ignore `_Py_TPFLAGS_HAVE_VECTORCALL`
This means it'll be ever more important to only call `PyVectorcall_Call`
on classes that support vectorcall. In `PyVectorcall_Call`'s intended role
as `tp_call` filler, that's not a problem.
This adds a vector of "search fingers" so that usable_arenas can be kept in sorted order (by number of free pools) via constant-time operations instead of linear search.
This should reduce worst-case time for reclaiming a great many objects from O(A**2) to O(A), where A is the number of arenas. See bpo-37029.