Fix a regression in type() when a metaclass raises an exception. The
C function type_new() must properly report the exception when a
metaclass constructor raises an exception and the winner class is not
the metaclass.
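A rough Python-level sketch of the scenario (the class names here are
made up): type() is called directly, but the "winner" metatype computed
from the bases is a user metaclass whose constructor raises:

    class Meta(type):
        def __new__(mcls, name, bases, namespace):
            if name == "Broken":
                raise ValueError("metaclass refused to create 'Broken'")
            return super().__new__(mcls, name, bases, namespace)

    class Base(metaclass=Meta):
        pass

    # The winner metatype derived from the bases is Meta, not type, so
    # type_new() delegates to Meta.__new__(), which raises.  The
    # ValueError must reach the caller instead of being lost.
    try:
        type("Broken", (Base,), {})
    except ValueError as exc:
        print(exc)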
Fix a crash at Python exit when a deallocator function removes the
last strong reference to a heap type.
Don't read type memory after calling basedealloc() since
basedealloc() can deallocate the type and free its memory.
_PyMem_IsPtrFreed() argument is now constant.
* Remove 'zombie' frames. We won't need them once we are allocating fixed-size frames.
* Add co_nlocalsplus field to code object to avoid recomputing the size of locals + frees + cells.
* Move locals, cells and freevars out of frame object into separate memory buffer.
* Use per-threadstate allocated memory chunks for local variables.
* Move globals and builtins from frame object to per-thread stack.
* Move (slow) locals from the frame object to the per-thread stack.
* Move internal frame functions to internal header.
These are passed and called as PyCFunction; however, they are defined here without the (ignored) args parameter.
This works fine with some C compilers, but fails on WebAssembly or anything else that does strict type checking of function pointer calls.
"Zero cost" exception handling.
* Uses a lookup table to determine how to handle exceptions (see the example after this list).
* Removes SETUP_FINALLY and POP_TOP block instructions, eliminating (most of) the runtime overhead of try statements.
* Reduces the size of the frame object by about 60%.
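A rough way to observe this from Python (the exact disassembly varies
across versions): with zero-cost exception handling, dis shows an
exception table after the instructions instead of setup instructions
around the try body:

    import dis

    def f():
        try:
            return 1
        except Exception:
            return 2

    # On interpreters using the lookup-table approach, the handler for the
    # try block is recorded in the "ExceptionTable" section printed by dis,
    # not as extra block-setup instructions executed at runtime.
    dis.dis(f)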
check_set_special_type_attr() and type_set_annotations()
now check for immutable flag (Py_TPFLAGS_IMMUTABLETYPE).
Co-authored-by: Victor Stinner <vstinner@python.org>
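For example (a small sketch; the exact error message may vary across
versions), assigning __annotations__ on a static type is now rejected:

    # complex is a static type, so the assignment hits the immutable flag check.
    try:
        complex.__annotations__ = {"real": float}
    except TypeError as exc:
        print(exc)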
The PyStdPrinter_Type type now uses the
Py_TPFLAGS_DISALLOW_INSTANTIATION flag to disallow instantiation,
rather than setting a tp_init method which always fails.
Also add unit tests for PyStdPrinter_Type.
Add a new Py_TPFLAGS_DISALLOW_INSTANTIATION type flag to disallow
creating type instances: set tp_new to NULL and don't create the
"__new__" key in the type dictionary.
The flag is set automatically on static types if tp_base is NULL or
&PyBaseObject_Type and tp_new is NULL.
Use the flag on the following types (see the example after this entry):
* _curses.ncurses_version type
* _curses_panel.panel
* _tkinter.Tcl_Obj
* _tkinter.tkapp
* _tkinter.tktimertoken
* _xxsubinterpretersmodule.ChannelID
* sys.flags type
* sys.getwindowsversion() type
* sys.version_info type
Update MyStr example in the C API documentation to use
Py_TPFLAGS_DISALLOW_INSTANTIATION.
Add _PyStructSequence_InitType() function to create a structseq type
with the Py_TPFLAGS_DISALLOW_INSTANTIATION flag set.
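For example (the exact error message may vary), instantiating one of the
types listed above now fails because tp_new is NULL:

    import sys

    FlagsType = type(sys.flags)
    try:
        FlagsType()
    except TypeError as exc:
        print(exc)   # e.g. "cannot create 'sys.flags' instances"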
type_new() now calls _PyType_CheckConsistency() before returning.
* Add Py_TPFLAGS_SEQUENCE and Py_TPFLAGS_MAPPING, add to all relevant standard builtin classes.
* Set relevant flags on collections.abc.Sequence and Mapping.
* Use flags in MATCH_SEQUENCE and MATCH_MAPPING opcodes (see the example after this list).
* Inherit Py_TPFLAGS_SEQUENCE and Py_TPFLAGS_MAPPING.
* Add NEWS
* Remove interpreter-state map_abc and seq_abc fields.
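A minimal sketch of the user-visible behaviour these flags support:
mapping patterns only match types flagged as mappings and sequence
patterns only match types flagged as sequences, so str deliberately
falls through:

    def describe(obj):
        match obj:
            case {"key": value}:    # MATCH_MAPPING checks Py_TPFLAGS_MAPPING
                return f"mapping with key={value!r}"
            case [first, *_]:       # MATCH_SEQUENCE checks Py_TPFLAGS_SEQUENCE
                return f"sequence starting with {first!r}"
            case _:
                return "something else"

    print(describe({"key": 1}))   # mapping with key=1
    print(describe([1, 2, 3]))    # sequence starting with 1
    print(describe("text"))       # something else: str is not flagged as a sequence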
While working on another issue, I noticed two minor nits in the C implementation of the module object. Both are related to getting a module's name.
First, the C function module_dir() (module.__dir__) starts by ensuring the module dict is valid. If the module dict is invalid, it wants to format an exception using the name of the module, which it gets from PyModule_GetName(). However, PyModule_GetName() gets the name of the module from the dict. So getting the name in this circumstance will never succeed.
When module_dir() wants to format the error but can't get the name, it knows that PyModule_GetName() must have already raised an exception. So it leaves that exception alone and returns an error. The end result is that the exception raised here is kind of useless and misleading: dir(module) on a module with no __dict__ raises SystemError("nameless module"). I changed the code to actually raise the exception it wanted to raise, just without a real module name: TypeError("<module>.__dict__ is not a dictionary"). This seems more useful, and would do a better job putting the programmer who encountered this on the right track of figuring out what was going on.
Second, the C API function PyModule_GetNameObject() checks to see if the module has a dict. If m->md_dict is not NULL, it calls _PyDict_GetItemIdWithError(). However, it's possible for m->md_dict to be None. And if you call _PyDict_GetItemIdWithError(Py_None, ...) it will *crash*.
Unfortunately, this crash was due to my own bug in the other branch. Fixing my code made the crash go away. I assert that this is still possible at the API level.
The fix is easy: add a PyDict_Check() to PyModule_GetNameObject().
Unfortunately, I don't know how to add a unit test for this. Having changed module_dir() above, I can't find any other interfaces callable from Python that eventually call PyModule_GetNameObject(). So I don't know how to trick the runtime into reproducing this error.
Since both these changes are minor--each entails only a small edit to only one line--I didn't bother with a news item.
Change class and module objects to lazy-create empty annotations dicts on demand. The annotations dicts are stored in the object's `__dict__` for backwards compatibility.
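A small sketch of the new behaviour:

    class C:
        pass

    # No annotations were declared, but the first access lazily creates an
    # empty dict and stores it in the class __dict__ for later accesses.
    print(C.__annotations__)                 # {}
    print("__annotations__" in C.__dict__)   # True, created on first access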
Accessing the following attributes will now fire PEP 578 style audit hooks as ("object.__getattr__", obj, name); see the example at the end of this entry:
* PyTracebackObject: tb_frame
* PyFrameObject: f_code
* PyGenObject: gi_code, gi_frame
* PyCoroObject: cr_code, cr_frame
* PyAsyncGenObject: ag_code, ag_frame
Add an AUDIT_READ attribute flag aliased to READ_RESTRICTED.
Update obsolete flag documentation.
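A small sketch of observing one of these events from Python:

    import sys

    seen = []

    def hook(event, args):
        if event == "object.__getattr__":
            seen.append(args[1])   # args is (obj, attribute_name)

    sys.addaudithook(hook)

    try:
        1 / 0
    except ZeroDivisionError as exc:
        tb = exc.__traceback__

    frame = tb.tb_frame   # fires ("object.__getattr__", tb, "tb_frame")
    print(seen)           # ['tb_frame']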
* Add length parameter to PyLineTable_InitAddressRange and don't use sentinel values at the end of the table. Makes the line number table more robust.
* Update PyCodeAddressRange to match PEP 626.
Introduce Py_TPFLAGS_IMMUTABLETYPE flag for immutable type objects, and
modify PyType_Ready() to set it for static types.
Co-authored-by: Victor Stinner <vstinner@python.org>
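For example (a sketch; the exact error message may vary), attribute
assignment on a built-in type is rejected through this flag:

    # int is a static type, so PyType_Ready() marks it Py_TPFLAGS_IMMUTABLETYPE.
    try:
        int.is_even = lambda self: self % 2 == 0
    except TypeError as exc:
        print(exc)   # e.g. "cannot set 'is_even' attribute of immutable type 'int'"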
To make it easier to see which part of the source a SyntaxError refers to, the whole error range is now highlighted instead of only placing the caret at the first character. In this way:
>>> foo(x, z for z in range(10), t, w)
File "<stdin>", line 1
foo(x, z for z in range(10), t, w)
^
SyntaxError: Generator expression must be parenthesized
becomes
>>> foo(x, z for z in range(10), t, w)
File "<stdin>", line 1
foo(x, z for z in range(10), t, w)
^^^^^^^^^^^^^^^^^^^^
SyntaxError: Generator expression must be parenthesized
Add pycore_moduleobject.h internal header file with static inline
functions to access module members:
* _PyModule_GetDict()
* _PyModule_GetDef()
* _PyModule_GetState()
These functions don't check at runtime if their argument has a valid
type and can be inlined even if Python is not built with LTO.
_PyType_GetModuleByDef() uses _PyModule_GetDef().
Replace PyModule_GetState() with _PyModule_GetState() in the
following extension modules, which are considered performance sensitive:
* _abc
* _functools
* _operator
* _pickle
* _queue
* _random
* _sre
* _struct
* _thread
* _winapi
* array
* posix
The following extensions are now built with the Py_BUILD_CORE_MODULE
macro defined, to be able to use the internal pycore_moduleobject.h
header: _abc, array, _operator, _queue, _sre, _struct.
PyType_Ready() now ensures that a type MRO cannot be empty.
_PyType_GetModuleByDef() no longer checks "i < PyTuple_GET_SIZE(mro)"
at the first loop iteration to optimize the most common case, when
the argument is the defining class.
_PyType_GetModuleByDef() no longer checks if types are heap types.
_PyType_GetModuleByDef() must only be called on a heap type created
by PyType_FromModuleAndSpec() or on its subclasses.
type_ready_mro() ensures that a static type cannot inherit from a
heap type.
When printing NameError raised by the interpreter, PyErr_Display
will offer suggestions of similar variable names in the function that the exception
was raised from:
>>> schwarzschild_black_hole = None
>>> schwarschild_black_hole
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'schwarschild_black_hole' is not defined. Did you mean: schwarzschild_black_hole?
When printing AttributeError, PyErr_Display will offer suggestions of similar
attribute names in the object that the exception was raised from:
>>> collections.namedtoplo
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'collections' has no attribute 'namedtoplo'. Did you mean: namedtuple?
* Rename functions
* Only pass type parameter to "add_xxx" functions.
* Clarify the role of the type_ready_inherit_as_structs() function.
* Move type_dict_set_doc() code so that it is called from type_ready_fill_dict().
* Split PyType_Ready() into sub-functions.
* type_ready_mro() now checks if bases are static types earlier.
* Check tp_name earlier, in type_ready_checks().
* Add _PyType_IsReady() macro to check if a type is ready.
Add the Py_Is(x, y) function to test if the 'x' object is the 'y'
object, the same as "x is y" in Python. Also add the Py_IsNone(),
Py_IsTrue(), Py_IsFalse() functions to test if an object is,
respectively, the None singleton, the True singleton or the False
singleton.
* Split type_new() into many small functions.
* Add type_new_ctx structure to pass variables between subfunctions.
* Initialize some PyTypeObject and PyHeapTypeObject members earlier
in type_new_alloc().
* Rename variables to more specific names.
* Add "__weakref__" identifier for type_new_visit_slots().
* Factorize code to convert a method to a classmethod
(__init_subclass__ and __class_getitem__).
* Add braces to respect PEP 7.
* Move variable declarations where the variables are initialized.
Static methods (@staticmethod) and class methods (@classmethod) now
inherit the method attributes (__module__, __name__, __qualname__,
__doc__, __annotations__) and have a new __wrapped__ attribute.
Changes:
* Add a repr() method to staticmethod and classmethod types.
* Add tests on the @classmethod decorator.
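A small sketch of the attribute forwarding (the class here is made up):

    class A:
        @classmethod
        def build(cls):
            """Create an instance."""
            return cls()

    cm = A.__dict__["build"]   # the raw classmethod object, not the bound method
    print(cm.__name__)         # 'build'
    print(cm.__doc__)          # 'Create an instance.'
    print(cm.__wrapped__)      # the underlying plain function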
* Move the check that only None can be sent to a just-started generator or coroutine into the bytecode (see the example after this list).
* Document new bytecode and make it fail gracefully if mis-compiled.
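The user-visible behaviour is unchanged; roughly:

    def gen():
        yield "ready"

    g = gen()
    try:
        g.send(1)           # only None may be sent before the generator has started
    except TypeError as exc:
        print(exc)          # e.g. "can't send non-None value to a just-started generator"

    print(g.send(None))     # 'ready' -- starting the generator with None works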
The limited C API is now supported if Python is built in debug mode
(if the Py_DEBUG macro is defined). In the limited C API, the
Py_INCREF() and Py_DECREF() functions are now implemented as opaque
function calls, rather than directly accessing the PyObject.ob_refcnt
member, if Python is built in debug mode and the Py_LIMITED_API macro
targets Python 3.10 or newer. It became possible to support the
limited C API in debug mode because the PyObject structure is the
same in release and debug mode since Python 3.8 (see bpo-36465).
The limited C API is still not supported in the --with-trace-refs
special build (Py_TRACE_REFS macro).