Remove the following math macros using the errno variable:
* Py_ADJUST_ERANGE1()
* Py_ADJUST_ERANGE2()
* Py_OVERFLOWED()
* Py_SET_ERANGE_IF_OVERFLOW()
* Py_SET_ERRNO_ON_MATH_ERROR()
Create pycore_pymath.h internal header file.
Rename Py_ADJUST_ERANGE1() and Py_ADJUST_ERANGE2() to
_Py_ADJUST_ERANGE1() and _Py_ADJUST_ERANGE2(), and convert these
macros to static inline functions.
Move the following macros to pycore_pymath.h:
* _Py_IntegralTypeSigned()
* _Py_IntegralTypeMax()
* _Py_IntegralTypeMin()
* _Py_InIntegralTypeRange()
Detect refcount bugs in C extensions when the empty Unicode string
singleton is destroyed by mistake.
* Move forward declarations to the top of unicodeobject.c.
* Simplify unicode_is_singleton().
* Constructors of subclasses of some builtin classes (e.g. tuple, list,
frozenset) no longer accept arbitrary keyword arguments.
* A subclass of set can now define a __new__() method with additional
keyword parameters without also overriding __init__(), as sketched below.
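For illustration, a minimal sketch of the behavior described in the second bullet (the class name and the extra keyword are made up for this example, assuming this change is applied):
```python
class TaggedSet(set):
    # __new__ takes an extra keyword-only parameter; __init__ is not overridden.
    def __new__(cls, iterable=(), *, tag=None):
        self = super().__new__(cls)
        self.tag = tag
        return self

s = TaggedSet({1, 2}, tag="demo")
# set.__init__() previously rejected the extra keyword argument; with this
# change it is tolerated because __new__ is overridden, so this now works.
print(sorted(s), s.tag)   # [1, 2] demo
```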
The deallocator function of the BaseException type now uses the
trashcan mechanism to prevent stack overflow. For example, when a
RecursionError instance is raised, it can be linked to another
RecursionError through the __context__ attribute or the __traceback__
attribute, and then a chain of exceptions is created. When the chain
is destroyed, nested deallocator function calls can crash with a
stack overflow if the chain is too long compared to the available
stack memory.
Fix PyAiter_Check to only check for the presence of `__anext__` (not for
`__aiter__`). Rename `PyAiter_Check()` to `PyAIter_Check()`, and
`PyObject_GetAiter()` -> `PyObject_GetAIter()`.
Co-authored-by: Pablo Galindo Salgado <Pablogsal@gmail.com>
For list.sort(), replace our ad hoc merge ordering strategy with the principled, elegant,
and provably near-optimal one from Munro and Wild's "powersort".
Places the locals between the specials and the stack. This is the more "natural" layout for a C struct, makes the code simpler, and gives a slight speedup (~1%).
While the comment said 'We don't bother resizing localspluskinds',
this would cause .replace() to crash when it happened.
(Also types.CodeType(), but testing that is tedious, and this tests all
code paths.)
* Generalize cache names for LOAD_ATTR to allow store and delete specializations.
* Factor out specialization of attribute dictionary access.
* Specialize STORE_ATTR.
* Unify the C and Python implementations of OrderedDict.popitem().
The C implementation no longer calls ``__getitem__`` and ``__delitem__``
methods of the OrderedDict subclasses.
* Change popitem() and pop() methods of collections.OrderedDict
For consistency with dict, both implementations (pure Python and C)
of these methods in OrderedDict no longer call __getitem__ and
__delitem__ methods of the OrderedDict subclasses.
Previously only the Python implementation of popitem() did not
call them.
* Convert "specials" array to InterpreterFrame struct, adding f_lasti, f_state and other non-debug FrameObject fields to it.
* Refactor calls, pushing the call to the interpreter upward toward _PyEval_Vector.
* Compute f_back when on thread stack, only filling in value when frame object outlives stack invocation.
* Move ownership of InterpreterFrame in generator from frame object to generator object.
* Do not create frame objects for Python calls.
* Do not create frame objects for generators.
* Remove code that checks Py_TPFLAGS_HAVE_VERSION_TAG
The field is always present in the type struct, as explained
in the added comment.
* Remove Py_TPFLAGS_HAVE_AM_SEND
The flag is not needed, and since it was added in 3.10 it can be removed now.
The hash of a union type no longer depends on the order of its arguments:
hash(int | str) == hash(str | int)
Co-authored-by: Jack DeVries <58614260+jdevries3133@users.noreply.github.com>
The non-GC-type branch of subtype_dealloc uses the type of an object after freeing it, in the same unsafe way that GH-26274 fixes. (I believe the old news entry covers this change well enough.)
https://bugs.python.org/issue44184
Patch by Erik Welch.
bpo-19072 (#8405) allows `classmethod` to wrap other descriptors, but this does
not work when the wrapped descriptor mimics classmethod. The current PR fixes
this.
In Python 3.8 and before, one could create a callable descriptor such that this
works as expected (see Lib/test/test_decorators.py for examples):
```python
class A:
    @myclassmethod
    def f1(cls):
        return cls

    @classmethod
    @myclassmethod
    def f2(cls):
        return cls
```
In Python 3.8 and before, `A.f2()` returned `A`. Currently in Python 3.9, it
returns `type(A)`. This PR makes `A.f2()` return `A` again.
As of #8405, classmethod calls `obj.__get__(type)` if `obj` has `__get__`.
This allows one to chain `@classmethod` and `@property` together. When
using classmethod-like descriptors, it's the second argument to `__get__`--the
owner or the type--that is important, but this argument is currently missing.
Since it is None, the "owner" argument is assumed to be the type of the first
argument, which, in this case, is wrong (we want `A`, not `type(A)`).
This PR updates classmethod to call `obj.__get__(type, type)` if `obj` has
`__get__`.
Co-authored-by: Erik Welch <erik.n.welch@gmail.com>
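For illustration, a classmethod-like descriptor of the kind this description assumes might look like the sketch below (the name `myclassmethod` and its body are hypothetical, loosely in the spirit of the helpers in Lib/test/test_decorators.py):
```python
class myclassmethod:
    """A hypothetical classmethod-like descriptor, for illustration only."""
    def __init__(self, func):
        self.func = func

    def __get__(self, instance, owner=None):
        # Like classmethod, what matters is the owner (the class), not the instance.
        if owner is None:
            owner = type(instance)
        return lambda *args, **kwargs: self.func(owner, *args, **kwargs)
```
With `obj.__get__(type)`, the owner parameter above is None and falls back to `type(instance)`, i.e. `type(A)`; with `obj.__get__(type, type)`, the owner is the class itself, so `A.f2()` returns `A` again.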
* Fix issubclass() for None.
E.g. issubclass(type(None), int | None) now returns True.
* Fix issubclass() for virtual subclasses.
E.g. issubclass(dict, int | collections.abc.Mapping) now returns True.
* Fix a crash in isinstance() if the check for one of the items raises an exception.
Heap types with the Py_TPFLAGS_IMMUTABLETYPE flag can now inherit the
PEP 590 vectorcall protocol. Previously, this was only possible for static types.
Co-authored-by: Victor Stinner <vstinner@python.org>
This PR is part of PEP 657 and augments the compiler to emit ending
line numbers as well as starting and ending columns from the AST
into compiled code objects. This allows bytecodes to be correlated
to the exact source code ranges that generated them.
This information is made available through the following public APIs:
* The `co_positions` method on code objects.
* The C API function `PyCode_Addr2Location`.
Co-authored-by: Batuhan Taskaya <isidentical@gmail.com>
Co-authored-by: Ammar Askar <ammar@ammaraskar.com>
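As a small sketch of the Python-level API mentioned above (output shape only; the exact entries depend on the compiled code):
```python
def add_one(x):
    return x + 1

# Each entry corresponds to one instruction and has the form
# (lineno, end_lineno, col_offset, end_col_offset); entries may contain None.
for position in add_one.__code__.co_positions():
    print(position)
```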
_PyObject_GetMethod() now uses _PyType_IsReady() to decide if
PyType_Ready() must be called or not, rather than testing if
tp->tp_dict is NULL.
Also move variable declarations closer to where they are used, and
use Py_NewRef().
Add an internal _PyType_AllocNoTrack() function to allocate an object
without tracking it in the GC.
Modify dict_new() to use _PyType_AllocNoTrack(): dict subclasses are
now only tracked once all PyDictObject members are initialized.
Calling _PyObject_GC_UNTRACK() is no longer needed for the dict type.
Similar change in tuple_subtype_new() for tuple subclasses.
Replace tuple_gc_track() with _PyObject_GC_TRACK().
Remove 4 C API private trashcan functions which were only kept for
the backward compatibility of the stable ABI with Python 3.8 and
older, since the trashcan API was not usable with the limited C API
on Python 3.8 and older. The trashcan API was excluded from the
limited C API in Python 3.9.
Removed functions:
* _PyTrash_deposit_object()
* _PyTrash_destroy_chain()
* _PyTrash_thread_deposit_object()
* _PyTrash_thread_destroy_chain()
The trashcan C API was never usable with the limited C API, since the old
trashcan macros directly accessed PyThreadState members such as
"_tstate->trash_delete_nesting", whereas the PyThreadState structure
is opaque in the limited C API.
Also exclude the PyTrash_UNWIND_LEVEL constant from the C API.
The trashcan C API was modified in Python 3.9 by commit
38965ec541 and in Python 3.10 by commit
ed1a5a5bac to hide implementation
details.
PyModuleDef_Init() no longer tries to ready the PyModule_Type type: it's
already done by _PyTypes_Init() at Python startup. Replace the
PyType_Ready() call with an assertion.
1. Remove conditions already checked by assert()
2. Remove object_init() call that effectively creates an empty tuple and
checks that this tuple is empty
Currently, if an arg value escapes (into the closure for an inner function) we end up allocating two indices in the fast locals even though only one gets used. Additionally, using the lower index would be better in some cases, such as with no-arg `super()`. To address this, we update the compiler to fix the offsets so each variable only gets one "fast local". As a consequence, now some cell offsets are interspersed with the locals (only when an arg escapes to an inner function).
https://bugs.python.org/issue43693
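A tiny example of the situation described above, where an argument "escapes" into an inner function's closure (the names here are made up for illustration):
```python
def make_counter(start):
    # 'start' is both an argument and a cell variable because the inner
    # function closes over it; with this change the compiler gives it a
    # single fast-local slot instead of allocating two indices.
    def bump():
        nonlocal start
        start += 1
        return start
    return bump
```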
* Specialize LOAD_ATTR with LOAD_ATTR_SLOT and LOAD_ATTR_SPLIT_KEYS
* Move dict-common.h to internal/pycore_dict.h
* Add LOAD_ATTR_WITH_HINT specialized opcode.
* Quicken code in a function if it contains loops.
* Specialize LOAD_ATTR for module attributes.
* Add specialization stats
This is the same fix as for PyFrame_LocalsToFast() in gh-26609, but applied to PyFrame_FastToLocalsWithError(). (It should have been in that PR.)
https://bugs.python.org/issue43693
This was reverted in GH-26596 (commit 6d518bb) due to some bad memory accesses.
* Add the MAKE_CELL opcode. (gh-26396)
The memory accesses have been fixed.
https://bugs.python.org/issue43693
This moves logic out of the frame initialization code and into the compiler and eval loop. Doing so simplifies the runtime code and allows us to optimize it better.
https://bugs.python.org/issue43693
These were reverted in gh-26530 (commit 17c4edc) due to refleaks.
* 2c1e258 - Compute deref offsets in compiler (gh-25152)
* b2bf2bc - Add new internal code objects fields: co_fastlocalnames and co_fastlocalkinds. (gh-26388)
This change fixes the refleaks.
https://bugs.python.org/issue43693
* Add co_firstinstr field to code object.
* Implement barebones quickening.
* Use non-quickened bytecode when tracing.
* Add NEWS item
* Add new file to Windows build.
* Don't specialize instructions with EXTENDED_ARG.
* Revert "bpo-43693: Compute deref offsets in compiler (gh-25152)"
This reverts commit b2bf2bc1ec.
* Revert "bpo-43693: Add new internal code objects fields: co_fastlocalnames and co_fastlocalkinds. (gh-26388)"
This reverts commit 2c1e2583fd.
These two commits are breaking the refleak buildbots.
Merges locals and cells into a single array.
Saves a pointer in the interpreter and means that we don't need the LOAD_CLOSURE opcode any more
https://bugs.python.org/issue43693
A number of places in the code base (notably ceval.c and frameobject.c) rely on mapping variable names to indices in the frame "locals plus" array (AKA fast locals), and thus opargs. Currently the compiler indirectly encodes that information on the code object as the tuples co_varnames, co_cellvars, and co_freevars. At runtime the dependent code must calculate the proper mapping from those, which isn't ideal and impacts performance-sensitive sections. This is something we can easily address in the compiler instead.
This change addresses the situation by replacing internal use of co_varnames, etc. with a single combined tuple of names in locals-plus order, along with a minimal array mapping each to its kind (local vs. cell vs. free). These two new PyCodeObject fields, co_fastlocalnames and co_fastlocalkinds, are not exposed to Python code for now, but co_varnames, etc. are still available with the same values as before (though computed lazily).
Aside from the (mild) performance impact, there are a number of other benefits:
* there's now a clear, direct relationship between locals-plus and variables
* code that relies on the locals-plus-to-name mapping is simpler
* marshaled code objects are smaller and serialize/de-serialize faster
Also note that we can take this approach further by expanding the possible values in co_fastlocalkinds to include specific argument types (e.g. positional-only, kwargs). Doing so would allow further speed-ups in _PyEval_MakeFrameVector(), which is where args get unpacked into the locals-plus array. It would also allow us to shrink marshaled code objects even further.
https://bugs.python.org/issue43693
The PyType_Ready() function now raises an error if a type is defined
with the Py_TPFLAGS_HAVE_GC flag set but has no traverse function
(PyTypeObject.tp_traverse).
* Move up the comment about fields used in hashing/comparison.
* Group the fields more clearly.
* Add co_ncellvars and co_nfreevars.
* Raise ValueError if nlocals != len(varnames), rather than aborting.
Fix a regression in type() when a metaclass raises an exception. The
C function type_new() must properly report the exception when a
metaclass constructor raises an exception and the winner class is not
the metaclass.
Fix a crash at Python exit when a deallocator function removes the
last strong reference to a heap type.
Don't read type memory after calling basedealloc() since
basedealloc() can deallocate the type and free its memory.
_PyMem_IsPtrFreed() argument is now constant.
* Remove 'zombie' frames. We won't need them once we are allocating fixed-size frames.
* Add co_nlocalplus field to code object to avoid recomputing size of locals + frees + cells.
* Move locals, cells and freevars out of frame object into separate memory buffer.
* Use per-threadstate allocated memory chunks for local variables.
* Move globals and builtins from frame object to per-thread stack.
* Move the (slow) locals from the frame object to the per-thread stack.
* Move internal frame functions to internal header.
These are passed and called as PyCFunction; however, they are defined here without the (ignored) args parameter.
This works fine in some C compilers, but fails in WebAssembly or anything else that has strict function pointer call type checking.
"Zero cost" exception handling.
* Uses a lookup table to determine how to handle exceptions.
* Removes SETUP_FINALLY and POP_TOP block instructions, eliminating (most of) the runtime overhead of try statements.
* Reduces the size of the frame object by about 60%.
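A quick way to observe the effect described above, assuming this change is applied, is to disassemble a try/except: no SETUP_FINALLY or block-stack instructions should appear, since the handler is found through the exception table instead.
```python
import dis

def fetch(mapping, key):
    try:
        return mapping[key]
    except KeyError:
        return None

# Exception handling metadata lives in a side table, not in the bytecode stream.
dis.dis(fetch)
```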
check_set_special_type_attr() and type_set_annotations()
now check for immutable flag (Py_TPFLAGS_IMMUTABLETYPE).
Co-authored-by: Victor Stinner <vstinner@python.org>
The PyStdPrinter_Type type now uses the
Py_TPFLAGS_DISALLOW_INSTANTIATION flag to disallow instantiation,
rather than setting a tp_init method which always fails.
Also write unit tests for PyStdPrinter_Type.
Add a new Py_TPFLAGS_DISALLOW_INSTANTIATION type flag to disallow
creating type instances: set tp_new to NULL and don't create the
"__new__" key in the type dictionary.
The flag is set automatically on static types if tp_base is NULL or
&PyBaseObject_Type and tp_new is NULL.
Use the flag on the following types:
* _curses.ncurses_version type
* _curses_panel.panel
* _tkinter.Tcl_Obj
* _tkinter.tkapp
* _tkinter.tktimertoken
* _xxsubinterpretersmodule.ChannelID
* sys.flags type
* sys.getwindowsversion() type
* sys.version_info type
Update MyStr example in the C API documentation to use
Py_TPFLAGS_DISALLOW_INSTANTIATION.
Add _PyStructSequence_InitType() function to create a structseq type
with the Py_TPFLAGS_DISALLOW_INSTANTIATION flag set.
type_new() calls _PyType_CheckConsistency() at exit.
* Add Py_TPFLAGS_SEQUENCE and Py_TPFLAGS_MAPPING, add to all relevant standard builtin classes.
* Set relevant flags on collections.abc.Sequence and Mapping.
* Use flags in MATCH_SEQUENCE and MATCH_MAPPING opcodes.
* Inherit Py_TPFLAGS_SEQUENCE and Py_TPFLAGS_MAPPING.
* Add NEWS
* Remove interpreter-state map_abc and seq_abc fields.
While working on another issue, I noticed two minor nits in the C implementation of the module object. Both are related to getting a module's name.
First, the C function module_dir() (module.__dir__) starts by ensuring the module dict is valid. If the module dict is invalid, it wants to format an exception using the name of the module, which it gets from PyModule_GetName(). However, PyModule_GetName() gets the name of the module from the dict. So getting the name in this circumstance will never succeed.
When module_dir() wants to format the error but can't get the name, it knows that PyModule_GetName() must have already raised an exception. So it leaves that exception alone and returns an error. The end result is that the exception raised here is kind of useless and misleading: dir(module) on a module with no __dict__ raises SystemError("nameless module"). I changed the code to actually raise the exception it wanted to raise, just without a real module name: TypeError("<module>.__dict__ is not a dictionary"). This seems more useful, and would do a better job putting the programmer who encountered this on the right track of figuring out what was going on.
Second, the C API function PyModule_GetNameObject() checks to see if the module has a dict. If m->md_dict is not NULL, it calls _PyDict_GetItemIdWithError(). However, it's possible for m->md_dict to be None. And if you call _PyDict_GetItemIdWithError(Py_None, ...) it will *crash*.
Unfortunately, this crash was due to my own bug in the other branch. Fixing my code made the crash go away. I assert that this is still possible at the API level.
The fix is easy: add a PyDict_Check() to PyModule_GetNameObject().
Unfortunately, I don't know how to add a unit test for this. Having changed module_dir() above, I can't find any other interfaces callable from Python that eventually call PyModule_GetNameObject(). So I don't know how to trick the runtime into reproducing this error.
Since both these changes are minor--each entails only a small edit to only one line--I didn't bother with a news item.
Change class and module objects to lazy-create empty annotations dicts on demand. The annotations dicts are stored in the object's `__dict__` for backwards compatibility.
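A short sketch of the lazy behavior this entry describes (expected results, assuming the change is applied):
```python
class Point:
    pass

# A freshly created class has no annotations dict stored yet...
print("__annotations__" in vars(Point))   # expected: False

# ...accessing the attribute creates an empty dict on demand and stores it
# in the class __dict__ for backwards compatibility.
print(Point.__annotations__)              # {}
print("__annotations__" in vars(Point))   # expected: True
```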
Accessing the following attributes will now fire PEP 578 style audit hooks as ("object.__getattr__", obj, name):
* PyTracebackObject: tb_frame
* PyFrameObject: f_code
* PyGenObject: gi_code, gi_frame
* PyCoroObject: cr_code, cr_frame
* PyAsyncGenObject: ag_code, ag_frame
Add an AUDIT_READ attribute flag aliased to READ_RESTRICTED.
Update obsolete flag documentation.
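For example, a PEP 578 audit hook registered with sys.addaudithook() can now observe these attribute reads (a minimal sketch of the event shape described above):
```python
import sys

def audit_hook(event, args):
    if event == "object.__getattr__":
        obj, name = args
        print(f"audited read of {name!r} on {type(obj).__name__}")

sys.addaudithook(audit_hook)

frame = sys._getframe()
code = frame.f_code   # expected to fire ("object.__getattr__", frame, "f_code")
```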
* Add a length parameter to PyLineTable_InitAddressRange and don't use sentinel values at the end of the table. This makes the line number table more robust.
* Update PyCodeAddressRange to match PEP 626.
Introduce Py_TPFLAGS_IMMUTABLETYPE flag for immutable type objects, and
modify PyType_Ready() to set it for static types.
Co-authored-by: Victor Stinner <vstinner@python.org>
To improve the user experience of understanding which part of the source reported in a SyntaxError is wrong, we can highlight the whole error range and not only place the caret at the first character. In this way:
>>> foo(x, z for z in range(10), t, w)
  File "<stdin>", line 1
    foo(x, z for z in range(10), t, w)
           ^
SyntaxError: Generator expression must be parenthesized
becomes
>>> foo(x, z for z in range(10), t, w)
  File "<stdin>", line 1
    foo(x, z for z in range(10), t, w)
           ^^^^^^^^^^^^^^^^^^^^
SyntaxError: Generator expression must be parenthesized
Add pycore_moduleobject.h internal header file with static inline
functions to access module members:
* _PyModule_GetDict()
* _PyModule_GetDef()
* _PyModule_GetState()
These functions don't check at runtime if their argument has a valid
type and can be inlined even if Python is not built with LTO.
_PyType_GetModuleByDef() uses _PyModule_GetDef().
Replace PyModule_GetState() with _PyModule_GetState() in the
following extension modules, which are considered performance sensitive:
* _abc
* _functools
* _operator
* _pickle
* _queue
* _random
* _sre
* _struct
* _thread
* _winapi
* array
* posix
The following extensions are now built with the Py_BUILD_CORE_MODULE
macro defined, to be able to use the internal pycore_moduleobject.h
header: _abc, array, _operator, _queue, _sre, _struct.
PyType_Ready() now ensures that a type MRO cannot be empty.
_PyType_GetModuleByDef() no longer checks "i < PyTuple_GET_SIZE(mro)"
at the first loop iteration to optimize the most common case, when
the argument is the defining class.
_PyType_GetModuleByDef() no longer checks if types are heap types.
_PyType_GetModuleByDef() must only be called on a heap type created
by PyType_FromModuleAndSpec() or on its subclasses.
type_ready_mro() ensures that a static type cannot inherit from a
heap type.
When printing a NameError raised by the interpreter, PyErr_Display
will offer suggestions of similar variable names in the function that the exception
was raised from:
>>> schwarzschild_black_hole = None
>>> schwarschild_black_hole
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'schwarschild_black_hole' is not defined. Did you mean: schwarzschild_black_hole?
When printing an AttributeError, PyErr_Display will offer suggestions of similar
attribute names in the object that the exception was raised from:
>>> collections.namedtoplo
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'collections' has no attribute 'namedtoplo'. Did you mean: namedtuple?
* Rename functions
* Only pass type parameter to "add_xxx" functions.
* Clarify the role of the type_ready_inherit_as_structs() function.
* Move type_dict_set_doc() code to call it in type_ready_fill_dict().
* Split PyType_Ready() into sub-functions.
* type_ready_mro() now checks if bases are static types earlier.
* Check tp_name earlier, in type_ready_checks().
* Add _PyType_IsReady() macro to check if a type is ready.
Add the Py_Is(x, y) function to test if the 'x' object is the 'y'
object, the same as "x is y" in Python. Also add the Py_IsNone(),
Py_IsTrue(), Py_IsFalse() functions to test if an object is,
respectively, the None singleton, the True singleton or the False
singleton.
* Split type_new() into many small functions.
* Add type_new_ctx structure to pass variables between subfunctions.
* Initialize some PyTypeObject and PyHeapTypeObject members earlier
in type_new_alloc().
* Rename variables to more specific names.
* Add "__weakref__" identifier for type_new_visit_slots().
* Factorize code to convert a method to a classmethod
(__init_subclass__ and __class_getitem__).
* Add braces to respect PEP 7.
* Move variable declarations where the variables are initialized.
Static methods (@staticmethod) and class methods (@classmethod) now
inherit the method attributes (__module__, __name__, __qualname__,
__doc__, __annotations__) and have a new __wrapped__ attribute.
Changes:
* Add a repr() method to staticmethod and classmethod types.
* Add tests on the @classmethod decorator.
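A small sketch of the attribute inheritance described above, accessing the staticmethod object itself through the class __dict__ (assuming this change is applied):
```python
class Tools:
    @staticmethod
    def double(x):
        """Return x doubled."""
        return 2 * x

sm = vars(Tools)["double"]            # the staticmethod object, not the plain function
print(sm.__name__, sm.__qualname__)   # inherited from the wrapped function
print(sm.__doc__)                     # "Return x doubled."
print(sm.__wrapped__ is sm.__func__)  # the new __wrapped__ attribute: True
```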
* Move the check for sending None to a just-started generator or coroutine into the bytecode.
* Document new bytecode and make it fail gracefully if mis-compiled.
The limited C API is now supported if Python is built in debug mode
(if the Py_DEBUG macro is defined). In the limited C API, the
Py_INCREF() and Py_DECREF() functions are now implemented as opaque
function calls, rather than directly accessing the PyObject.ob_refcnt
member, if Python is built in debug mode and the Py_LIMITED_API macro
targets Python 3.10 or newer. It became possible to support the
limited C API in debug mode because the PyObject structure is the
same in release and debug mode since Python 3.8 (see bpo-36465).
The limited C API is still not supported in the --with-trace-refs
special build (Py_TRACE_REFS macro).
Reorganize pycore_interp_init() to initialize singletons before the
first PyType_Ready() call. Fix an issue when Python is configured
using --without-doc-strings.
* Use instruction offset, rather than bytecode offset. Streamlines interpreter dispatch a bit, and removes most EXTENDED_ARGs for jumps.
* Change some uses of PyCode_Addr2Line to PyFrame_GetLineNumber
When printing stats, move radix tree info to its own section.
Restore the invariant that the breakdown of bytes in arenas exactly accounts for the total of arena bytes allocated.
Add an assert so that the invariant doesn't break again.
* Remove m68k-specific hack from ascii_decode
On m68k, the alignment of primitives is more relaxed, with 4-byte and
8-byte types only requiring 2-byte alignment, so using sizeof(size_t)
does not work. Instead, use the portable alternative.
Note that this is a minimal fix that only relaxes the assertion; the
condition for when to use the optimised version remains overly strict.
Such issues will be fixed tree-wide in the next commit.
NB: In C11 we could use _Alignof(size_t) instead, but for compatibility
we use autoconf.
* Optimise string routines for architectures with non-natural alignment
C only requires that sizeof(x) is a multiple of alignof(x), not that the
two are equal. Thus anywhere where we optimise based on alignment we
should be using alignof(x) not sizeof(x).
This is more annoying than it would be in C11 where we could just use
_Alignof(x) (and alignof(x) in C++11), but since we still require only
C99 we must plumb the information all the way from autoconf through the
various typedefs and defines.
The radix tree approach is a relatively simple and memory sanitary
alternative to the old (slightly) unsanitary address_in_range().
To disable the radix tree map, set a preprocessor flag as follows:
-DWITH_PYMALLOC_RADIX_TREE=0.
Co-authored-by: Tim Peters <tim.peters@gmail.com>
See [PEP 597](https://www.python.org/dev/peps/pep-0597/).
* Add `-X warn_default_encoding` and `PYTHONWARNDEFAULTENCODING`.
* Add EncodingWarning
* Add io.text_encoding()
* open() and TextIOWrapper() emit EncodingWarning when encoding is omitted and warn_default_encoding is enabled.
* _pyio.TextIOWrapper() uses UTF-8 as the fallback default encoding when the locale module cannot be imported (this happens while building Python).
* bz2, configparser, gzip, lzma, pathlib, tempfile modules use io.text_encoding().
* What's new entry
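For example, a function that wraps open() can use io.text_encoding() so that the EncodingWarning is attributed to its caller (a minimal sketch of the usage described above; the wrapper name is made up):
```python
import io

def read_text(path, encoding=None):
    # io.text_encoding() returns the encoding to use; when the caller omitted
    # it and -X warn_default_encoding / PYTHONWARNDEFAULTENCODING is enabled,
    # an EncodingWarning is emitted pointing at the caller.
    encoding = io.text_encoding(encoding)
    with open(path, encoding=encoding) as f:
        return f.read()
```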
Remove the compiler functions using "struct _mod" type, because the
public AST C API was removed:
* PyAST_Compile()
* PyAST_CompileEx()
* PyAST_CompileObject()
* PyFuture_FromAST()
* PyFuture_FromASTObject()
These functions were undocumented and excluded from the limited C API.
Rename functions:
* PyAST_CompileObject() => _PyAST_Compile()
* PyFuture_FromASTObject() => _PyFuture_FromAST()
Moreover, _PyFuture_FromAST() is no longer exported (replace
PyAPI_FUNC() with extern). _PyAST_Compile() remains exported for
test_peg_generator.
Remove also compatibility functions:
* PyAST_Compile()
* PyAST_CompileEx()
* PyFuture_FromAST()
The common case going through _PyType_Lookup is to have a cache hit. There are some small tweaks that can make this a little cheaper:
* The name field identity is used for a cache hit and is kept alive by the cache. So there's no need to read the hash code of the name - instead, the address can be used as the hash.
* There's no need to check if the name is cacheable on the lookup either; it probably is, and if it is, it'll be in the cache.
* If we clear the version tag when invalidating a type then we don't actually need to check for a valid version tag bit.
* Remove an assertion which required the CO_NEWLOCALS and CO_OPTIMIZED
code flags. It is ok to call this function on a code object that does not
have these flags set.
* Fix reference counting on builtins: remove Py_DECREF().
Fix a regression introduced in
commit 46496f9d12.
Also add a comment to document that _PyEval_BuiltinsFromGlobals()
returns a borrowed reference.
Python no longer fails at startup with a fatal error if a command
line argument contains an invalid Unicode character.
The Py_DecodeLocale() function now escapes byte sequences which would
be decoded as Unicode characters outside the [U+0000; U+10ffff]
range.
Use MAX_UNICODE constant in unicodeobject.c.
Implement an enhanced variant of Crochemore and Perrin's Two-Way string searching algorithm, which reduces worst-case time from quadratic (the product of the string and pattern lengths) to linear. This applies to forward searches (like ``find``, ``index``, ``replace``); the algorithm for reverse searches (like ``rfind``) is not changed.
Co-authored-by: Tim Peters <tim.peters@gmail.com>
* No longer save/restore the current exception. It is no longer used
with an exception raised.
* No longer clear the current exception on error: it's now up to the
caller.
For some mysterious reason we have PySet_Check, PyFrozenSet_Check, PyAnySet_Check, PyAnySet_CheckExact and PyFrozenSet_CheckExact but no PySet_CheckExact.
The types.FunctionType constructor now inherits the current builtins
if the globals dictionary has no "__builtins__" key, rather than
using {"None": None} as builtins: same behavior as eval() and exec()
functions.
Defining a function with "def function(...): ..." in Python is not
affected: globals cannot be overridden with this syntax; it also
inherits the current builtins.
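A small sketch of the new behavior (the function names are made up; assuming this change is applied):
```python
import types

def template():
    return len("abc")

# A globals dict without a "__builtins__" key: the new function now inherits
# the current builtins instead of getting {"None": None} as builtins.
func = types.FunctionType(template.__code__, {})
print(func())   # 3 -- len resolves through the inherited builtins
```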
PyFrame_New(), PyEval_EvalCode(), PyEval_EvalCodeEx(),
PyFunction_New() and PyFunction_NewWithQualName() now inherit the
current builtins namespace if the globals dictionary has no
"__builtins__" key.
* Add _PyEval_GetBuiltins() function.
* _PyEval_BuiltinsFromGlobals() now uses _PyEval_GetBuiltins() if
builtins cannot be found in globals.
* Add tstate parameter to _PyEval_BuiltinsFromGlobals().
Pass the current interpreter (interp) rather than the current Python
thread state (tstate) to internal functions which only use the
interpreter.
Modified functions:
* _PyXXX_Fini() and _PyXXX_ClearFreeList() functions
* _PyEval_SignalAsyncExc(), make_pending_calls()
* _PySys_GetObject(), sys_set_object(), sys_set_object_id(), sys_set_object_str()
* should_audit(), set_flags_from_config(), make_flags()
* _PyAtExit_Call()
* init_stdio_encoding()
* etc.
* Refactor _PyFrame_New_NoTrack() and PyFunction_NewWithQualName()
code.
* PyFrame_New() checks for _PyEval_BuiltinsFromGlobals() failure.
* Fix a ref leak in _PyEval_BuiltinsFromGlobals() error path.
* Complete PyFunction_GetModule() documentation: it returns a
borrowed reference and it can return NULL.
* Move _PyEval_BuiltinsFromGlobals() definition to the internal C
API.
* PyFunction_NewWithQualName() uses _Py_IDENTIFIER() API for the
"__name__" string to make it compatible with subinterpreters.
Expose the new PyFunctionObject.func_builtins member in Python as a
new __builtins__ attribute on functions.
Document also the behavior change in What's New in Python 3.10.
* Further refactoring of PyEval_EvalCode and friends. Break into make-frame, and eval-frame parts.
* Simplify function vector call using new _PyEval_Vector.
* Remove unused internal functions: _PyEval_EvalCodeWithName and _PyEval_EvalCode.
* Don't use legacy function PyEval_EvalCodeEx.
When Python is built in debug mode (with C assertions), calling a
type slot like sq_length (__len__() in Python) now fails with a fatal
error if the slot succeeded with an exception set, or failed with no
exception set. The error message contains the slot, the type name,
and the current exception (if an exception is set).
* Check the result of all slots using _Py_CheckSlotResult().
* No longer pass op_name to ternary_op() in release mode.
* Replace operator with dunder Python method name in error messages.
For example, replace "*" with "__mul__".
* Fix compiler_exit_scope() when an exception is set.
* Fix bytearray.extend() when an exception is set: don't call
bytearray_setslice() with an exception set.
* bpo-42979: Enhance abstract.c assertions checking slot result
Add _Py_CheckSlotResult() function which fails with a fatal error if
a slot function succeeded with an exception set or failed with no
exception set: write the slot name, the type name and the current
exception (if an exception is set).
The Py_FatalError() function and the faulthandler module now dump the
list of extension modules on a fatal error.
Add _Py_DumpExtensionModules() and _PyModule_IsExtension() internal
functions.
Before, using the * operator to repeat a bytearray would copy data from the start of
the internal buffer (ob_bytes) and not from the start of the actual data (ob_start).
* Add test for frame.f_lineno with/without tracing.
* Make sure that frame.f_lineno is correct regardless of whether frame.f_trace is set.
* Update importlib
* Add NEWS
In is_typing_name(), va_end() is not always called before the
function returns. It is undefined behavior to call va_start()
without also calling va_end().
Previously this didn't raise an error. Now it will:
```python
from collections.abc import Callable
isinstance(int, list | Callable[..., str])
```
Also added tests in Union since there were previously none for stuff like ``isinstance(list, list | list[int])`` either.
Backport to 3.9 not required.
Automerge-Triggered-By: GH:gvanrossum
Make the Unicode dictionary of interned strings compatible with
subinterpreters.
Remove the INTERN_NAME_STRINGS macro in typeobject.c: names are
now always interned (even if the EXPERIMENTAL_ISOLATED_SUBINTERPRETERS
macro is defined).
_PyUnicode_ClearInterned() now uses PyDict_Next() to no longer
allocate memory, to ensure that the interned dictionary is cleared.
Make the type attribute lookup cache per-interpreter.
Add private _PyType_InitCache() function, called by PyInterpreterState_New().
Continue to share next_version_tag between interpreters, since static
types are still shared by interpreters.
Remove MCACHE macro: the cache is no longer disabled if the
EXPERIMENTAL_ISOLATED_SUBINTERPRETERS macro is defined.
Make _PyUnicode_FromId() function compatible with subinterpreters.
Each interpreter now has an array of identifier objects (interned
strings decoded from UTF-8).
* Add PyInterpreterState.unicode.identifiers: an array of identifier
objects.
* Add _PyRuntimeState.unicode_ids used to allocate unique indexes
to _Py_Identifier.
* Rewrite the _Py_Identifier structure.
Microbenchmark on _PyUnicode_FromId(&PyId_a) with _Py_IDENTIFIER(a):
[ref] 2.42 ns +- 0.00 ns -> [atomic] 3.39 ns +- 0.00 ns: 1.40x slower
This change adds 1 ns per _PyUnicode_FromId() call on average.
Use `_PyArg_NoKeywords` instead of `_PyArg_NoKwnames` when checking the `kwds` tuple when creating `GenericAlias`. This fixes an interpreter crash when passing in keyword arguments to `GenericAlias`'s constructor.
Needs backport to 3.9.
Automerge-Triggered-By: GH:gvanrossum
Several built-in and standard library types now ensure that their internal result tuples are always tracked by the garbage collector:
- collections.OrderedDict.items
- dict.items
- enumerate
- functools.reduce
- itertools.combinations
- itertools.combinations_with_replacement
- itertools.permutations
- itertools.product
- itertools.zip_longest
- zip
Previously, they could have become untracked by a prior garbage collection.
No longer use deprecated aliases to functions:
* Replace PyObject_MALLOC() with PyObject_Malloc()
* Replace PyObject_REALLOC() with PyObject_Realloc()
* Replace PyObject_FREE() with PyObject_Free()
* Replace PyObject_Del() with PyObject_Free()
* Replace PyObject_DEL() with PyObject_Free()
No longer use deprecated aliases to functions:
* Replace PyMem_MALLOC() with PyMem_Malloc()
* Replace PyMem_REALLOC() with PyMem_Realloc()
* Replace PyMem_FREE() with PyMem_Free()
* Replace PyMem_Del() with PyMem_Free()
* Replace PyMem_DEL() with PyMem_Free()
Also modify the PyMem_DEL() macro to use PyMem_Free() directly.
Reduce memory footprint and improve performance of loading modules that have many function annotations.
>>> sys.getsizeof({"a":"int","b":"int","return":"int"})
232
>>> sys.getsizeof(("a","int","b","int","return","int"))
88
The tuple is converted into a dict on the fly when `func.__annotations__` is first accessed.
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Co-authored-by: Inada Naoki <songofacandy@gmail.com>
The Py_TRASHCAN_BEGIN macro no longer accesses PyTypeObject attributes,
but now can get the condition by calling the new private
_PyTrash_cond() function which hides implementation details.
* Speed up comparison of bytes objects with non-bytes objects when
the -b option is specified.
* Speed up comparison of bytearray objects with non-buffer objects.
* There were leaks if Py_tp_bases is used more than once or if some call
fails before setting tp_bases.
* There was a crash if the bases argument or the Py_tp_bases slot is not a tuple.
* The documentation was not accurate.
bpo-1635741, bpo-40170: When called on a static type with NULL
tp_base, PyType_Ready() no longer increments the reference count of
PyBaseObject_Type ("object"). PyTypeObject.tp_base is a strong
reference on a heap type, but a borrowed reference on a static
type.
Fix 99 reference leaks at Python exit (showrefcount 18623 => 18524).
Use PyLong_FromLong(0) and PyLong_FromLong(1) of the public C API
instead. For Python internals, _PyLong_GetZero() and _PyLong_GetOne()
of pycore_long.h can be used.
Add _PyLong_GetZero() and _PyLong_GetOne() functions and a new
internal pycore_long.h header file.
Python cannot be built without small integer singletons anymore.
* UCD_Check() uses PyModule_Check()
* Simplify the internal _PyUnicode_Name_CAPI structure:
* Remove size and state members
* Remove state and self parameters of getcode() and getname()
functions
* Remove global_module_state
The private _PyUnicode_Name_CAPI structure of the PyCapsule API
unicodedata.ucnhash_CAPI moves to the internal C API. Moreover, the
structure gets a new state member which must be passed to the
getcode() and getname() functions.
* Move Include/ucnhash.h to Include/internal/pycore_ucnhash.h
* unicodedata module is now built with Py_BUILD_CORE_MODULE.
* unicodedata: move hashAPI variable into unicodedata_module_state.
If PyDict_GetItemWithError is only used to check whether the key is in the dict,
it is better to use PyDict_Contains instead.
And if it is used in combination with PyDict_SetItem, PyDict_SetDefault can
replace the combination.
These functions are considered not safe because they suppress all internal errors
and can return a wrong result. PyDict_GetItemString and _PyDict_GetItemId can
also silence the current exception in rare cases.
Remove no longer used _PyDict_GetItemId.
Add _PyDict_ContainsId and rename _PyDict_Contains into
_PyDict_Contains_KnownHash.
Remove complex special methods __int__, __float__, __floordiv__,
__mod__, __divmod__, __rfloordiv__, __rmod__ and __rdivmod__
which always raised a TypeError.
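The observable effect is that the slots are simply gone; the usual generic error paths still produce a TypeError (a quick sketch, exact messages may vary):
```python
print(hasattr(complex, "__int__"))   # now False; the method used to exist and always raise

try:
    (1 + 2j) // 2
except TypeError as exc:
    print(exc)   # e.g. unsupported operand type(s) for //: 'complex' and 'int'
```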
Enable recursion checks which were disabled when getting __bases__ of
non-type objects in issubclass() and isinstance() and when interning
strings. This fixes a stack overflow when getting __bases__ leads
to infinite recursion.
Originally recursion checks were disabled for PyDict_GetItem(), which
silences all errors (including the one raised in case of detected
recursion) and can return an incorrect result. But now the code uses
PyDict_GetItemWithError() and PyDict_SetDefault() instead.
* bpo-26680: Adds support for int.is_integer() for compatibility with float.is_integer().
The int.is_integer() method always returns True.
* bpo-26680: Adds a test to ensure that False.is_integer() and True.is_integer() are always True.
* bpo-26680: Adds Real.is_integer() with a trivial implementation using conversion to int.
This default implementation is intended to reduce the workload for subclass
implementers. It is not robust in the presence of infinities or NaNs and
may have suboptimal performance for other types.
* bpo-26680: Adds Rational.is_integer which returns True if the denominator is one.
This implementation assumes the Rational is represented in its
lowest form, as required by the class docstring.
* bpo-26680: Adds Integral.is_integer which always returns True.
* bpo-26680: Adds tests for Fraction.is_integer called as an instance method.
The tests for the Rational abstract base class use an unbound
method to sidestep the inability to directly instantiate Rational.
These tests check that everything works correctly as an instance method.
* bpo-26680: Updates documentation for Real.is_integer and built-ins int and float.
The call x.is_integer() is now listed in the table of operations
which apply to all numeric types except complex, with a reference
to the full documentation for Real.is_integer(). Mention of
is_integer() has been removed from the section 'Additional Methods
on Float'.
The documentation for Real.is_integer() describes its purpose, and
mentions that it should be overridden for performance reasons, or
to handle special values like NaN.
* bpo-26680: Adds Decimal.is_integer to the Python and C implementations.
The C implementation of Decimal already implements and uses
mpd_isinteger internally, we just expose the existing function to
Python.
The Python implementation uses internal conversion to integer
using to_integral_value().
In both cases, the corresponding context methods are also
implemented.
Tests and documentation are included.
* bpo-26680: Updates the ACKS file.
* bpo-26680: NEWS entries for int, the numeric ABCs and Decimal.
Co-authored-by: Robert Smallshire <rob@sixty-north.com>
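A quick sketch of the API added by this change (assuming the methods described above are available):
```python
from fractions import Fraction
from decimal import Decimal

print((3).is_integer())             # True: int.is_integer() always returns True
print((4.5).is_integer())           # False (existing float behaviour)
print(Fraction(6, 3).is_integer())  # True: the denominator is one after normalisation
print(Decimal("7.25").is_integer()) # False
```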
Use Py_ssize_t type rather than int, to store lengths in
unionobject.c. Fix the warning:
Objects\unionobject.c(205,1): warning C4244: 'initializing':
conversion from 'Py_ssize_t' to 'int', possible loss of data
Use Py_ssize_t type rather than int, to store lengths in
unionobject.c. Fix warnings:
Objects\unionobject.c(189,71): warning C4244: '+=':
conversion from 'Py_ssize_t' to 'int', possible loss of data
Objects\unionobject.c(182,1): warning C4244: 'initializing':
conversion from 'Py_ssize_t' to 'int', possible loss of data
Objects\unionobject.c(205,1): warning C4244: 'initializing':
conversion from 'Py_ssize_t' to 'int', possible loss of data
Objects\unionobject.c(437,1): warning C4244: 'initializing':
conversion from 'Py_ssize_t' to 'int', possible loss of data
Use _PyType_HasFeature() in the _io module and in structseq
implementation. Replace PyType_HasFeature() opaque function call with
_PyType_HasFeature() inlined function.
The new API makes it possible to efficiently send values into native
generators and coroutines, avoiding the use of StopIteration exceptions to
signal returns.
The ceval loop now uses this method instead of the old "private"
_PyGen_Send C API. This translates to a 1.6x performance increase for
'await' calls in micro-benchmarks.
Aside from CPython core improvements, this new API will also allow
Cython to generate more efficient code, benefiting high-performance
IO libraries like uvloop.
When allocating MemoryError instances, there is some logic to use
pre-allocated instances from a freelist, but only if the type that is
being allocated is not a subclass of MemoryError. Unfortunately, this
logic is not present in the destructor, so the freelist is altered even
for subclasses of MemoryError.
My mentee @xvxvxvxvxv noticed that iterating over array.array is
slightly faster than iterating over bytes. Looking at the source I
observed that arrayiter_next() calls `getitem(ao, it->index++)` whereas
striter_next() uses the idiom (paraphrased)
    item = PyLong_FromLong(seq->ob_sval[it->it_index]);
    if (item != NULL)
        ++it->it_index;
    return item;
I'm not 100% sure but I think that the second version has fewer
opportunities for the CPU to overlap the `index++` operation with the
rest of the code (which in both cases involves a call). So here I am
optimistically incrementing the index -- if the PyLong_FromLong() call
fails, this will leave the iterator pointing at the next byte, but
honestly I doubt that anyone would seriously consider resuming use of
the iterator after that kind of failure (it would have to be a
MemoryError). And the author of arrayiter_next() made the same
consideration (or never ever gave it a thought :-).
With this, a loop like
for _ in b: pass
is now slightly *faster* than the same thing over an equivalent array,
rather than slightly *slower* (in both cases a few percent).