* Do not PUSH/POP traceback or type to the stack as part of exc_info
* Remove exc_traceback and exc_type from _PyErr_StackItem
* Add to what's new, because this change breaks things like Cython
The array of small PyLong objects has been statically declared. Here I also statically initialize them. Consequently they are no longer initialized dynamically during runtime init.
I've also moved them under a new sub-struct in _PyRuntimeState, in preparation for static allocation and initialization of other global objects.
https://bugs.python.org/issue45953
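As a quick illustration of the cache this commit relocates (not of the static initialization itself): the preallocated small ints are shared singletons, which is observable from Python. Identity of equal ints is a CPython implementation detail, not a language guarantee.
```python
# Equal ints inside the cached range (roughly -5..256 in CPython) are the
# same object because they come from the preallocated small-int array.
a = int("256")
b = int("256")
print(a is b)   # True: both names refer to the cached 256 object

c = int("257")
d = int("257")
print(c is d)   # typically False: 257 is outside the small-int cache
```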
This change consists strictly of renames and moving code around. It helps in the following ways:
* ensures type-related init functions focus strictly on one of the three aspects (state, objects, types)
* passes in PyInterpreterState * to all those functions, simplifying work on moving types/objects/state to the interpreter
* consistent naming conventions help make what's going on more clear
* keeping API related to a type in the corresponding header file makes it more obvious where to look for it
https://bugs.python.org/issue46008
* Make generator, coroutine and async gen structs all the same size.
* Store interpreter frame in generator (and coroutine). Reduces the number of allocations needed for a generator from two to one.
Rename PyConfig.no_debug_ranges to PyConfig.code_debug_ranges and
invert the value.
Document -X no_debug_ranges and PYTHONNODEBUGRANGES env var in
PyConfig.code_debug_ranges documentation.
* Make internal APIs that take PyFrameConstructor take a PyFunctionObject instead.
* Add reference to function to frame, borrow references to builtins and globals.
* Add COPY_FREE_VARS instruction to allow specialization of calls to inner functions.
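A quick way to observe the new opcode, assuming a build that includes this change, is to disassemble a closure; the inner function's prologue copies its free variables into the frame:
```python
import dis

def outer():
    x = 1
    def inner():
        return x   # x is a free variable of inner
    return inner

# On a build with this change, the disassembly of inner includes
# COPY_FREE_VARS in its prologue, which loads the closure cells.
dis.dis(outer())
```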
Keep track of whether unsafe_tuple_compare() calls are resolved by the very
first tuple elements, and adjust strategy accordingly. This can significantly
cut the number of calls made to the full-blown PyObject_RichCompareBool(),
especially when duplicates are rare.
Co-authored-by: Łukasz Langa <lukasz@langa.pl>
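A rough sketch of the workload this targets: sorting tuples whose first elements almost never collide, so most comparisons are decided by element 0 without falling back to full comparisons of later elements.
```python
import random

# First elements are floats that essentially never compare equal, so nearly
# every tuple comparison is resolved by the first element alone.
data = [(random.random(), i, "payload") for i in range(100_000)]
data.sort()
print(data[0][0] <= data[-1][0])   # True: ordered by the first element
```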
MAP_BOT_LENGTH was incorrectly used to compute MAP_TOP_MASK instead of
MAP_TOP_LENGTH. On 64-bit machines, the error causes the tree to hold
46 bits of virtual address, rather than the intended 48 bits.
Freelists for object structs can now be disabled. A new ``configure``
option ``--without-freelists`` can be used to disable all freelists
except the empty tuple singleton. Internal Py*_MAXFREELIST macros can now
be defined as 0 without causing compiler warnings and segfaults.
Signed-off-by: Christian Heimes <christian@python.org>
Move Include/longobject.h non-limited API to a new
Include/cpython/longobject.h header file.
Move the following definitions to the internal C API:
* _PyLong_DigitValue
* _PyLong_FormatAdvancedWriter()
* _PyLong_FormatWriter()
* Avoid making C calls for most calls to Python functions.
* Change initialize_locals(steal=true) and _PyTuple_FromArraySteal to consume the argument references regardless of whether they succeed or fail.
Rename Include/namespaceobject.h to
Include/internal/pycore_namespace.h.
The _testmultiphase extension is now built with the
Py_BUILD_CORE_MODULE macro defined to access _PyNamespace_Type.
object.c: remove unused "pycore_context.h" include.
Move classobject.h, context.h, genobject.h and longintrepr.h header
files from Include/ to Include/cpython/.
Remove redundant "#ifndef Py_LIMITED_API" in context.h.
Remove explicit #include "longintrepr.h" in C files. It's not needed,
Python.h already includes it.
* Move _PyObject_VectorcallTstate() and _PyObject_FastCallTstate() to
pycore_call.h (internal C API).
* Convert PyObject_CallOneArg(), PyObject_Vectorcall(),
_PyObject_FastCall() and PyVectorcall_Function() static inline
functions to regular functions.
* Add _PyVectorcall_FunctionInline() static inline function.
* PyObject_Vectorcall(), _PyObject_FastCall(), and
PyObject_CallOneArg() now call _PyThreadState_GET() rather
than PyThreadState_Get().
They now support splitting escape sequences between input chunks.
Add the third parameter "final" in codecs.raw_unicode_escape_decode().
It is True by default to match the former behavior.
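A minimal sketch of the chunked case, assuming the new trailing "final" parameter: with final=False, an incomplete escape at the end of the chunk is left unconsumed instead of raising.
```python
import codecs

# The chunk ends in the middle of a \uXXXX escape.
text, consumed = codecs.raw_unicode_escape_decode(b"abc\\u00", None, False)
print(repr(text), consumed)   # expected: 'abc' and 3; the partial escape waits for the next chunk
```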
They now support splitting escape sequences between input chunks.
Add the third parameter "final" in codecs.unicode_escape_decode().
It is True by default to match the former behavior.
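The same pattern applies here; a small sketch assuming the new third parameter:
```python
import codecs

# The chunk ends in the middle of a \xNN escape; with final=False the partial
# escape is not an error and is simply not consumed yet.
text, consumed = codecs.unicode_escape_decode(b"abc\\x4", None, False)
print(repr(text), consumed)       # expected: 'abc' and 3

# With final=True (the default) a truncated escape is an error:
# codecs.unicode_escape_decode(b"abc\\x4")  # would raise UnicodeDecodeError
```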
Move Include/pystrhex.h to Include/internal/pycore_strhex.h.
The header file only contains private functions.
The following C extensions are now built with Py_BUILD_CORE_MODULE
macro defined to get access to the internal C API:
* _blake2
* _hashopenssl
* _md5
* _sha1
* _sha3
* _ssl
* binascii
* Never change types' cached keys. It could invalidate inline attribute objects.
* Lazily create object dictionaries.
* Update specialization of LOAD/STORE_ATTR.
* Don't update shared keys version for deletion of value.
* Update gdb support to handle instance values.
* Rename SPLIT_KEYS opcodes to INSTANCE_VALUE.
Redefining the PyThreadState_GET() macro in pycore_pystate.h is
useless since it doesn't affect files not including it. Either use
_PyThreadState_GET() directly, or don't use the pycore_pystate.h internal
C API. For example, the _testcapi extension doesn't use the internal C
API; it uses the public PyThreadState_Get() function instead.
Replace PyThreadState_Get() with _PyThreadState_GET(). The
_PyThreadState_GET() macro is more efficient than the PyThreadState_Get()
and PyThreadState_GET() function calls, which can fail with a fatal
Python error.
posixmodule.c and _ctypes extension now include <windows.h> before
pycore header files (like pycore_call.h).
_PyTraceback_Add() now uses _PyErr_Fetch()/_PyErr_Restore() instead
of PyErr_Fetch()/PyErr_Restore().
The _decimal and _xxsubinterpreters extensions are now built with the
Py_BUILD_CORE_MODULE macro defined to get access to the internal C
API.
* Move _PyObject_CallNoArgs() to pycore_call.h (internal C API).
* _ssl, _sqlite and _testcapi extensions now call the public
PyObject_CallNoArgs() function, rather than _PyObject_CallNoArgs().
* _lsprof extension is now built with Py_BUILD_CORE_MODULE macro
defined to get access to internal _PyObject_CallNoArgs().
Fix typo in the private _PyObject_CallNoArg() function name: rename
it to _PyObject_CallNoArgs() to be consistent with the public
function PyObject_CallNoArgs().
Add _PyVectorcall_Call() helper function.
Add "assert(PyCallable_Check(callable));" to PyVectorcall_Call(), a
check similar to the one in PyVectorcall_Function().
Remove the following math macros using the errno variable:
* Py_ADJUST_ERANGE1()
* Py_ADJUST_ERANGE2()
* Py_OVERFLOWED()
* Py_SET_ERANGE_IF_OVERFLOW()
* Py_SET_ERRNO_ON_MATH_ERROR()
Create pycore_pymath.h internal header file.
Rename Py_ADJUST_ERANGE1() and Py_ADJUST_ERANGE2() to
_Py_ADJUST_ERANGE1() and _Py_ADJUST_ERANGE2(), and convert these
macros to static inline functions.
Move the following macros to pycore_pymath.h:
* _Py_IntegralTypeSigned()
* _Py_IntegralTypeMax()
* _Py_IntegralTypeMin()
* _Py_InIntegralTypeRange()
Detect refcount bugs in C extensions when the empty Unicode string
singleton is destroyed by mistake.
* Move forward declarations to the top of unicodeobject.c.
* Simplify unicode_is_singleton().
* Constructors of subclasses of some builtin classes (e.g. tuple, list,
frozenset) no longer accept arbitrary keyword arguments.
* A subclass of set can now define a __new__() method with additional
keyword parameters without also overriding __init__().
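A minimal sketch of the pattern the second bullet enables (the class and attribute names here are just for illustration):
```python
# A set subclass whose __new__ takes an extra keyword-only argument no longer
# has to override __init__ just to keep the base __init__ from rejecting it.
class TaggedSet(set):
    def __new__(cls, iterable=(), *, tag=None):
        self = super().__new__(cls)
        self.update(iterable)   # fill the set in __new__
        self.tag = tag
        return self

s = TaggedSet({1, 2, 3}, tag="demo")
print(sorted(s), s.tag)   # [1, 2, 3] demo
```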
The deallocator function of the BaseException type now uses the
trashcan mechanism to prevent stack overflow. For example, when a
RecursionError instance is raised, it can be linked to another
RecursionError through the __context__ attribute or the __traceback__
attribute, and then a chain of exceptions is created. When the chain
is destroyed, nested deallocator function calls can crash with a
stack overflow if the chain is too long compared to the available
stack memory.
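A sketch of the scenario described above; before this change, dropping the last reference to a long enough chain could overflow the C stack during deallocation (the chain length needed depends on the platform's stack size).
```python
# Build a long chain of exceptions linked through __context__, then drop it.
# Deallocation now goes through the trashcan mechanism instead of recursing
# once per link on the C stack.
exc = None
for _ in range(200_000):
    e = RecursionError()
    e.__context__ = exc
    exc = e
del exc   # previously this teardown could crash with a stack overflow
print("chain deallocated without crashing")
```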
Fix PyAiter_Check to only check for the presence of `__anext__` (not
`__aiter__`). Rename `PyAiter_Check()` to `PyAIter_Check()`,
`PyObject_GetAiter()` -> `PyObject_GetAIter()`.
Co-authored-by: Pablo Galindo Salgado <Pablogsal@gmail.com>
For list.sort(), replace our ad hoc merge ordering strategy with the principled, elegant,
and provably near-optimal one from Munro and Wild's "powersort".
Place the locals between the specials and the stack. This is the more "natural" layout for a C struct; it makes the code simpler and gives a slight speedup (~1%).
While the comment said 'We don't bother resizing localspluskinds',
not resizing it would cause .replace() to crash when that situation arose.
(Also types.CodeType(), but testing that is tedious, and this tests all
code paths.)
* Generalize cache names for LOAD_ATTR to allow store and delete specializations.
* Factor out specialization of attribute dictionary access.
* Specialize STORE_ATTR.
* Unify the C and Python implementations of OrderedDict.popitem().
The C implementation no longer calls the ``__getitem__`` and ``__delitem__``
methods of OrderedDict subclasses.
* Change the popitem() and pop() methods of collections.OrderedDict
For consistency with dict, both implementations (pure Python and C)
of these methods in OrderedDict no longer call the __getitem__ and
__delitem__ methods of OrderedDict subclasses.
Previously only the Python implementation of popitem() did not
call them.
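A quick way to observe the new behavior (the subclass here is just for illustration):
```python
from collections import OrderedDict

class LoggingOD(OrderedDict):
    def __getitem__(self, key):
        print("getitem", key)
        return super().__getitem__(key)

    def __delitem__(self, key):
        print("delitem", key)
        super().__delitem__(key)

d = LoggingOD(a=1, b=2, c=3)
d.popitem()    # no "getitem"/"delitem" output: the overrides are bypassed now
d.pop("a")     # likewise
print(d)
```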
* Convert "specials" array to InterpreterFrame struct, adding f_lasti, f_state and other non-debug FrameObject fields to it.
* Refactor calls, pushing the call to the interpreter upward toward _PyEval_Vector.
* Compute f_back when on thread stack, only filling in value when frame object outlives stack invocation.
* Move ownership of InterpreterFrame in generator from frame object to generator object.
* Do not create frame objects for Python calls.
* Do not create frame objects for generators.
* Remove code that checks Py_TPFLAGS_HAVE_VERSION_TAG
The field is always present in the type struct, as explained
in the added comment.
* Remove Py_TPFLAGS_HAVE_AM_SEND
The flag is not needed, and since it was added in 3.10 it can be removed now.
The hash of a union type no longer depends on the order of arguments:
hash(int | str) == hash(str | int)
Co-authored-by: Jack DeVries <58614260+jdevries3133@users.noreply.github.com>
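A small illustration of the consequence: since union equality already ignores argument order, order-independent hashing makes equal unions collapse to one key in hash-based containers.
```python
# Union objects compare equal regardless of argument order; with this fix
# their hashes agree as well, so they behave as a single key in sets/dicts.
assert (int | str) == (str | int)
assert hash(int | str) == hash(str | int)
print(len({int | str, str | int}))   # 1
```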
The non-GC-type branch of subtype_dealloc uses the type of an object after it has been freed, in the same unsafe way that GH-26274 fixes. (I believe the old news entry covers this change well enough.)
https://bugs.python.org/issue44184
Patch by Erik Welch.
bpo-19072 (#8405) allows `classmethod` to wrap other descriptors, but this does
not work when the wrapped descriptor mimics classmethod. The current PR fixes
this.
In Python 3.8 and before, one could create a callable descriptor such that this
works as expected (see Lib/test/test_decorators.py for examples):
```python
class A:
    @myclassmethod
    def f1(cls):
        return cls

    @classmethod
    @myclassmethod
    def f2(cls):
        return cls
```
In Python 3.8 and before, `A.f2()` returned `A`. Currently in Python 3.9, it
returns `type(A)`. This PR makes `A.f2()` return `A` again.
As of #8405, classmethod calls `obj.__get__(type)` if `obj` has `__get__`.
This allows one to chain `@classmethod` and `@property` together. When
using classmethod-like descriptors, it's the second argument to `__get__`--the
owner or the type--that is important, but this argument is currently missing.
Since it is None, the "owner" argument is assumed to be the type of the first
argument, which, in this case, is wrong (we want `A`, not `type(A)`).
This PR updates classmethod to call `obj.__get__(type, type)` if `obj` has
`__get__`.
Co-authored-by: Erik Welch <erik.n.welch@gmail.com>
* Fix issubclass() for None.
E.g. issubclass(type(None), int | None) now returns True.
* Fix issubclass() for virtual subclasses.
E.g. issubclass(dict, int | collections.abc.Mapping) now returns True.
* Fix a crash in isinstance() if the check for one of the items raises an exception.
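The first two fixes are directly checkable from Python:
```python
import collections.abc

assert issubclass(type(None), int | None)               # None inside a union
assert issubclass(dict, int | collections.abc.Mapping)  # virtual subclass via an ABC
print("union issubclass checks pass")
```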
Heap types with the Py_TPFLAGS_IMMUTABLETYPE flag can now inherit the
PEP 590 vectorcall protocol. Previously, this was only possible for static types.
Co-authored-by: Victor Stinner <vstinner@python.org>
This PR is part of PEP 657 and augments the compiler to emit ending
line numbers as well as starting and ending columns from the AST
into compiled code objects. This allows bytecodes to be correlated
to the exact source code ranges that generated them.
This information is made available through the following public APIs:
* The `co_positions` method on code objects.
* The C API function `PyCode_Addr2Location`.
Co-authored-by: Batuhan Taskaya <isidentical@gmail.com>
Co-authored-by: Ammar Askar <ammar@ammaraskar.com>
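A short example of the Python-level API, assuming an interpreter that includes this change; entries may contain None when a position is unavailable (e.g. when debug ranges are disabled):
```python
def add_one(x):
    return x + 1

# Each instruction maps to (start_line, end_line, start_col, end_col).
for positions in add_one.__code__.co_positions():
    print(positions)
```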
_PyObject_GetMethod() now uses _PyType_IsReady() to decide if
PyType_Ready() must be called or not, rather than testing if
tp->tp_dict is NULL.
Also move variable declarations closer to where they are used, and
use Py_NewRef().
Add an internal _PyType_AllocNoTrack() function to allocate an object
without tracking it in the GC.
Modify dict_new() to use _PyType_AllocNoTrack(): dict subclasses are
now only tracked once all PyDictObject members are initialized.
Calling _PyObject_GC_UNTRACK() is no longer needed for the dict type.
Similar change in tuple_subtype_new() for tuple subclasses.
Replace tuple_gc_track() with _PyObject_GC_TRACK().
Remove 4 C API private trashcan functions which were only kept for
the backward compatibility of the stable ABI with Python 3.8 and
older, since the trashcan API was not usable with the limited C API
on Python 3.8 and older. The trashcan API was excluded from the
limited C API in Python 3.9.
Removed functions:
* _PyTrash_deposit_object()
* _PyTrash_destroy_chain()
* _PyTrash_thread_deposit_object()
* _PyTrash_thread_destroy_chain()
The trashcan C API was never usable with the limited C API, since old
trashcan macros directly accessed PyThreadState members like
"_tstate->trash_delete_nesting", whereas the PyThreadState structure
is opaque in the limited C API.
Also exclude the PyTrash_UNWIND_LEVEL constant from the C API.
The trashcan C API was modified in Python 3.9 by commit
38965ec541 and in Python 3.10 by commit
ed1a5a5bac to hide implementation
details.
PyModuleDef_Init() no longer tries to ready the PyModule_Type type: that
is already done by _PyTypes_Init() at Python startup. Replace the
PyType_Ready() call with an assertion.
1. Remove conditions already checked by assert()
2. Remove object_init() call that effectively creates an empty tuple and
checks that this tuple is empty
Currently, if an arg value escapes (into the closure for an inner function) we end up allocating two indices in the fast locals even though only one gets used. Additionally, using the lower index would be better in some cases, such as with no-arg `super()`. To address this, we update the compiler to fix the offsets so each variable only gets one "fast local". As a consequence, now some cell offsets are interspersed with the locals (only when an arg escapes to an inner function).
https://bugs.python.org/issue43693
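One way to see the effect, assuming a build with this change: an argument that escapes into a closure is turned into a cell by a MAKE_CELL in the prologue and keeps a single fast-local slot, rather than occupying a local slot plus a separate cell slot.
```python
import dis

def outer(x):          # x escapes into the closure of inner
    def inner():
        return x
    return inner

# On a build with this change, outer's disassembly contains MAKE_CELL for x
# in its prologue, and x uses one "fast local" slot.
dis.dis(outer)
```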
* Specialize LOAD_ATTR with LOAD_ATTR_SLOT and LOAD_ATTR_SPLIT_KEYS
* Move dict-common.h to internal/pycore_dict.h
* Add LOAD_ATTR_WITH_HINT specialized opcode.
* Quicken in function if loopy
* Specialize LOAD_ATTR for module attributes.
* Add specialization stats
This is the same fix as for PyFrame_LocalsToFast() in gh-26609, but applied to PyFrame_FastToLocalsWithError(). (It should have been in that PR.)
https://bugs.python.org/issue43693
This was reverted in GH-26596 (commit 6d518bb) due to some bad memory accesses.
* Add the MAKE_CELL opcode. (gh-26396)
The memory accesses have been fixed.
https://bugs.python.org/issue43693
This moves logic out of the frame initialization code and into the compiler and eval loop. Doing so simplifies the runtime code and allows us to optimize it better.
https://bugs.python.org/issue43693
These were reverted in gh-26530 (commit 17c4edc) due to refleaks.
* 2c1e258 - Compute deref offsets in compiler (gh-25152)
* b2bf2bc - Add new internal code objects fields: co_fastlocalnames and co_fastlocalkinds. (gh-26388)
This change fixes the refleaks.
https://bugs.python.org/issue43693
* Add co_firstinstr field to code object.
* Implement barebones quickening.
* Use non-quickened bytecode when tracing.
* Add NEWS item
* Add new file to Windows build.
* Don't specialize instructions with EXTENDED_ARG.
* Revert "bpo-43693: Compute deref offsets in compiler (gh-25152)"
This reverts commit b2bf2bc1ec.
* Revert "bpo-43693: Add new internal code objects fields: co_fastlocalnames and co_fastlocalkinds. (gh-26388)"
This reverts commit 2c1e2583fd.
These two commits are breaking the refleak buildbots.
Merges locals and cells into a single array.
Saves a pointer in the interpreter and means that we don't need the LOAD_CLOSURE opcode any more
https://bugs.python.org/issue43693
A number of places in the code base (notably ceval.c and frameobject.c) rely on mapping variable names to indices in the frame "locals plus" array (AKA fast locals), and thus opargs. Currently the compiler indirectly encodes that information on the code object as the tuples co_varnames, co_cellvars, and co_freevars. At runtime the dependent code must calculate the proper mapping from those, which isn't ideal and impacts performance-sensitive sections. This is something we can easily address in the compiler instead.
This change addresses the situation by replacing internal use of co_varnames, etc. with a single combined tuple of names in locals-plus order, along with a minimal array mapping each to its kind (local vs. cell vs. free). These two new PyCodeObject fields, co_fastlocalnames and co_fastlocalkinds, are not exposed to Python code for now, but co_varnames, etc. are still available with the same values as before (though computed lazily).
Aside from the (mild) performance impact, there are a number of other benefits:
* there's now a clear, direct relationship between locals-plus and variables
* code that relies on the locals-plus-to-name mapping is simpler
* marshaled code objects are smaller and serialize/de-serialize faster
Also note that we can take this approach further by expanding the possible values in co_fastlocalkinds to include specific argument types (e.g. positional-only, kwargs). Doing so would allow further speed-ups in _PyEval_MakeFrameVector(), which is where args get unpacked into the locals-plus array. It would also allow us to shrink marshaled code objects even further.
https://bugs.python.org/issue43693
The PyType_Ready() function now raises an error if a type is defined
with the Py_TPFLAGS_HAVE_GC flag set but has no traverse function
(PyTypeObject.tp_traverse).
* Move up the comment about fields used in hashing/comparison.
* Group the fields more clearly.
* Add co_ncellvars and co_nfreevars.
* Raise ValueError if nlocals != len(varnames), rather than aborting.
Fix a regression in type() when a metaclass raises an exception. The
C function type_new() must properly report the exception when a
metaclass constructor raises an exception and the winner class is not
the metaclass.
Fix a crash at Python exit when a deallocator function removes the
last strong reference to a heap type.
Don't read type memory after calling basedealloc() since
basedealloc() can deallocate the type and free its memory.
_PyMem_IsPtrFreed() argument is now constant.
* Remove 'zombie' frames. We won't need them once we are allocating fixed-size frames.
* Add co_nlocalplus field to code object to avoid recomputing size of locals + frees + cells.
* Move locals, cells and freevars out of frame object into separate memory buffer.
* Use per-threadstate allocated memory chunks for local variables.
* Move globals and builtins from frame object to per-thread stack.
* Move (slow) locals from the frame object to the per-thread stack.
* Move internal frame functions to internal header.
These are passed and called as PyCFunction; however, they are defined here without the (ignored) args parameter.
This works fine with some C compilers, but fails on WebAssembly or anything else that has strict function pointer call type checking.
"Zero cost" exception handling.
* Uses a lookup table to determine how to handle exceptions.
* Removes SETUP_FINALLY and POP_TOP block instructions, eliminating (most of) the runtime overhead of try statements.
* Reduces the size of the frame object by about 60%.
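A quick way to see the effect on compiled code, assuming a build with this change: try blocks no longer emit block setup/teardown opcodes, and dis prints the lookup table as an exception table instead.
```python
import dis

def f():
    try:
        return int("42")
    except ValueError:
        return 0

# On a build with this change, the disassembly contains no SETUP_FINALLY;
# instead dis prints an "ExceptionTable:" section describing the ranges
# covered by the except handler.
dis.dis(f)
```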