Fix a crash at Python exit when a deallocator function removes the
last strong reference to a heap type.
Don't read type memory after calling basedealloc() since
basedealloc() can deallocate the type and free its memory.
_PyMem_IsPtrFreed() argument is now constant.
* Remove 'zombie' frames. We won't need them once we are allocating fixed-size frames.
* Add co_nlocalsplus field to code object to avoid recomputing size of locals + frees + cells.
* Move locals, cells and freevars out of frame object into separate memory buffer.
* Use per-threadstate allocated memory chunks for local variables.
* Move globals and builtins from frame object to per-thread stack.
* Move (slow) locals from frame object to per-thread stack.
* Move internal frame functions to internal header.
These are passed and called as PyCFunction; however, they are defined here without the (ignored) args parameter.
This works in some C compilers, but fails on WebAssembly or any other target with strict function-pointer call type checking.
"Zero cost" exception handling.
* Uses a lookup table to determine how to handle exceptions.
* Removes SETUP_FINALLY and POP_TOP block instructions, eliminating (most of) the runtime overhead of try statements.
* Reduces the size of the frame object by about 60%.
check_set_special_type_attr() and type_set_annotations()
now check for immutable flag (Py_TPFLAGS_IMMUTABLETYPE).
Co-authored-by: Victor Stinner <vstinner@python.org>
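For illustration, a minimal (hedged) Python-level sketch of what the immutable-type check means in practice; the exact error message may differ:
```python
# Setting the __annotations__ special attribute on a static (immutable) type
# is rejected, since type_set_annotations() now checks the immutable flag.
try:
    int.__annotations__ = {"x": "int"}
except TypeError as exc:
    print(exc)  # e.g. "cannot set '__annotations__' attribute of immutable type 'int'"
```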
The PyStdPrinter_Type type now uses the
Py_TPFLAGS_DISALLOW_INSTANTIATION flag to disallow instantiation,
rather than setting a tp_init method which always fails.
Also write unit tests for PyStdPrinter_Type.
Add a new Py_TPFLAGS_DISALLOW_INSTANTIATION type flag to disallow
creating type instances: set tp_new to NULL and don't create the
"__new__" key in the type dictionary.
The flag is set automatically on static types if tp_base is NULL or
&PyBaseObject_Type and tp_new is NULL.
Use the flag on the following types:
* _curses.ncurses_version type
* _curses_panel.panel
* _tkinter.Tcl_Obj
* _tkinter.tkapp
* _tkinter.tktimertoken
* _xxsubinterpretersmodule.ChannelID
* sys.flags type
* sys.getwindowsversion() type
* sys.version_info type
Update MyStr example in the C API documentation to use
Py_TPFLAGS_DISALLOW_INSTANTIATION.
Add _PyStructSequence_InitType() function to create a structseq type
with the Py_TPFLAGS_DISALLOW_INSTANTIATION flag set.
type_new() calls _PyType_CheckConsistency() at exit.
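A short, hedged Python-level check of what the flag means for one of the types listed above (exact error wording may vary):
```python
import sys

# Py_TPFLAGS_DISALLOW_INSTANTIATION leaves tp_new set to NULL, so calling the
# type from Python raises TypeError instead of creating an instance.
FlagsType = type(sys.flags)
try:
    FlagsType()
except TypeError as exc:
    print(exc)  # e.g. "cannot create 'sys.flags' instances"
```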
* Add Py_TPFLAGS_SEQUENCE and Py_TPFLAGS_MAPPING, add to all relevant standard builtin classes.
* Set relevant flags on collections.abc.Sequence and Mapping.
* Use flags in MATCH_SEQUENCE and MATCH_MAPPING opcodes.
* Inherit Py_TPFLAGS_SEQUENCE and Py_TPFLAGS_MAPPING.
* Add NEWS
* Remove interpreter-state map_abc and seq_abc fields.
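A minimal sketch of the behavior that MATCH_SEQUENCE and MATCH_MAPPING rely on, written with the pattern matching syntax (Python 3.10+); the function is illustrative only:
```python
def describe(obj):
    match obj:
        case {"kind": kind}:        # mapping pattern (MATCH_MAPPING / Py_TPFLAGS_MAPPING)
            return f"mapping with kind={kind}"
        case [first, *_]:           # sequence pattern (MATCH_SEQUENCE / Py_TPFLAGS_SEQUENCE)
            return f"sequence starting with {first}"
        case _:
            return "something else"

print(describe({"kind": "demo"}))   # mapping with kind=demo
print(describe([1, 2, 3]))          # sequence starting with 1
print(describe("abc"))              # something else (str is not matched by sequence patterns)
```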
While working on another issue, I noticed two minor nits in the C implementation of the module object. Both are related to getting a module's name.
First, the C function module_dir() (module.__dir__) starts by ensuring the module dict is valid. If the module dict is invalid, it wants to format an exception using the name of the module, which it gets from PyModule_GetName(). However, PyModule_GetName() gets the name of the module from the dict. So getting the name in this circumstance will never succeed.
When module_dir() wants to format the error but can't get the name, it knows that PyModule_GetName() must have already raised an exception. So it leaves that exception alone and returns an error. The end result is that the exception raised here is kind of useless and misleading: dir(module) on a module with no __dict__ raises SystemError("nameless module"). I changed the code to actually raise the exception it wanted to raise, just without a real module name: TypeError("<module>.__dict__ is not a dictionary"). This seems more useful, and would do a better job putting the programmer who encountered this on the right track of figuring out what was going on.
Second, the C API function PyModule_GetNameObject() checks to see if the module has a dict. If m->md_dict is not NULL, it calls _PyDict_GetItemIdWithError(). However, it's possible for m->md_dict to be None. And if you call _PyDict_GetItemIdWithError(Py_None, ...) it will *crash*.
Unfortunately, this crash was due to my own bug in the other branch. Fixing my code made the crash go away. I assert that this is still possible at the API level.
The fix is easy: add a PyDict_Check() to PyModule_GetNameObject().
Unfortunately, I don't know how to add a unit test for this. Having changed module_dir() above, I can't find any other interfaces callable from Python that eventually call PyModule_GetNameObject(). So I don't know how to trick the runtime into reproducing this error.
Since both these changes are minor--each entails only a small edit to only one line--I didn't bother with a news item.
Change class and module objects to lazy-create empty annotations dicts on demand. The annotations dicts are stored in the object's `__dict__` for backwards compatibility.
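A hedged sketch of the observable behavior for classes (modules behave analogously):
```python
class Plain:
    pass

print("__annotations__" in vars(Plain))   # False: nothing is created eagerly
print(Plain.__annotations__)              # {}: an empty dict is created on first access
print("__annotations__" in vars(Plain))   # True: stored in __dict__ for backwards compatibility
```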
Accessing the following attributes will now fire PEP 578 style audit hooks as ("object.__getattr__", obj, name):
* PyTracebackObject: tb_frame
* PyFrameObject: f_code
* PyGenObject: gi_code, gi_frame
* PyCoroObject: cr_code, cr_frame
* PyAsyncGenObject: ag_code, ag_frame
Add an AUDIT_READ attribute flag aliased to READ_RESTRICTED.
Update obsolete flag documentation.
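An illustrative (hypothetical) audit hook observing the new events; the attribute names come from the list above:
```python
import sys

def hook(event, args):
    if event == "object.__getattr__":
        obj, name = args
        if name in {"tb_frame", "f_code", "gi_code", "gi_frame"}:
            print("audited read:", name)

sys.addaudithook(hook)

def gen():
    yield

g = gen()
_ = g.gi_code   # fires ("object.__getattr__", g, "gi_code")
```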
* Add length parameter to PyLineTable_InitAddressRange and don't use sentinel values at the end of the table. Makes the line number table more robust.
* Update PyCodeAddressRange to match PEP 626.
Introduce Py_TPFLAGS_IMMUTABLETYPE flag for immutable type objects, and
modify PyType_Ready() to set it for static types.
Co-authored-by: Victor Stinner <vstinner@python.org>
To improve the user experience, when a SyntaxError is reported we can highlight the whole error range in the offending source line and not only place the caret at the first character. In this way:
>>> foo(x, z for z in range(10), t, w)
File "<stdin>", line 1
foo(x, z for z in range(10), t, w)
^
SyntaxError: Generator expression must be parenthesized
becomes
>>> foo(x, z for z in range(10), t, w)
File "<stdin>", line 1
foo(x, z for z in range(10), t, w)
^^^^^^^^^^^^^^^^^^^^
SyntaxError: Generator expression must be parenthesized
Add pycore_moduleobject.h internal header file with static inline
functions to access module members:
* _PyModule_GetDict()
* _PyModule_GetDef()
* _PyModule_GetState()
These functions don't check at runtime if their argument has a valid
type and can be inlined even if Python is not built with LTO.
_PyType_GetModuleByDef() uses _PyModule_GetDef().
Replace PyModule_GetState() with _PyModule_GetState() in extension
modules considered performance sensitive:
* _abc
* _functools
* _operator
* _pickle
* _queue
* _random
* _sre
* _struct
* _thread
* _winapi
* array
* posix
The following extensions are now built with the Py_BUILD_CORE_MODULE
macro defined, to be able to use the internal pycore_moduleobject.h
header: _abc, array, _operator, _queue, _sre, _struct.
PyType_Ready() now ensures that a type MRO cannot be empty.
_PyType_GetModuleByDef() no longer checks "i < PyTuple_GET_SIZE(mro)"
at the first loop iteration to optimize the most common case, when
the argument is the defining class.
_PyType_GetModuleByDef() no longer checks if types are heap types.
_PyType_GetModuleByDef() must only be called on a heap type created
by PyType_FromModuleAndSpec() or on its subclasses.
type_ready_mro() ensures that a static type cannot inherit from a
heap type.
When printing a NameError raised by the interpreter, PyErr_Display
will offer suggestions of similar variable names in the function from which
the exception was raised:
>>> schwarzschild_black_hole = None
>>> schwarschild_black_hole
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'schwarschild_black_hole' is not defined. Did you mean: schwarzschild_black_hole?
When printing AttributeError, PyErr_Display will offer suggestions of similar
attribute names in the object that the exception was raised from:
>>> collections.namedtoplo
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'collections' has no attribute 'namedtoplo'. Did you mean: namedtuple?
* Rename functions
* Only pass type parameter to "add_xxx" functions.
* Clarify the role of the type_ready_inherit_as_structs() function.
* Move type_dict_set_doc() code to call it in type_ready_fill_dict().
* Split PyType_Ready() into sub-functions.
* type_ready_mro() now checks if bases are static types earlier.
* Check tp_name earlier, in type_ready_checks().
* Add _PyType_IsReady() macro to check if a type is ready.
Add the Py_Is(x, y) function to test if the 'x' object is the 'y'
object, the same as "x is y" in Python. Add also the Py_IsNone(),
Py_IsTrue(), Py_IsFalse() functions to test if an object is,
respectively, the None singleton, the True singleton or the False
singleton.
* Split type_new() into many small functions.
* Add type_new_ctx structure to pass variables between subfunctions.
* Initialize some PyTypeObject and PyHeapTypeObject members earlier
in type_new_alloc().
* Rename variables to more specific names.
* Add "__weakref__" identifier for type_new_visit_slots().
* Factorize code to convert a method to a classmethod
(__init_subclass__ and __class_getitem__).
* Add braces to respect PEP 7.
* Move variable declarations where the variables are initialized.
Static methods (@staticmethod) and class methods (@classmethod) now
inherit the method attributes (__module__, __name__, __qualname__,
__doc__, __annotations__) and have a new __wrapped__ attribute.
Changes:
* Add a repr() method to staticmethod and classmethod types.
* Add tests on the @classmethod decorator.
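A hedged sketch of the attribute propagation described above (output values are what the description implies):
```python
class C:
    @staticmethod
    def helper(x):
        """Add one."""
        return x + 1

sm = C.__dict__["helper"]      # the staticmethod wrapper object itself
print(sm.__name__)             # helper
print(sm.__doc__)              # Add one.
print(sm.__wrapped__(41))      # 42 -- __wrapped__ is the underlying function
```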
* Handle the check for sending None to a starting generator or coroutine in the bytecode.
* Document new bytecode and make it fail gracefully if mis-compiled.
The limited C API is now supported if Python is built in debug mode
(if the Py_DEBUG macro is defined). In the limited C API, the
Py_INCREF() and Py_DECREF() functions are now implemented as opaque
function calls, rather than directly accessing the PyObject.ob_refcnt
member, if Python is built in debug mode and the Py_LIMITED_API macro
targets Python 3.10 or newer. It became possible to support the
limited C API in debug mode because the PyObject structure is the
same in release and debug mode since Python 3.8 (see bpo-36465).
The limited C API is still not supported in the --with-trace-refs
special build (Py_TRACE_REFS macro).
Reorganize pycore_interp_init() to initialize singletons before the
first PyType_Ready() call. Fix an issue when Python is configured
using --without-doc-strings.
* Use instruction offset, rather than bytecode offset. Streamlines interpreter dispatch a bit, and removes most EXTENDED_ARGs for jumps.
* Change some uses of PyCode_Addr2Line to PyFrame_GetLineNumber
When printing stats, move radix tree info to its own section.
Restore the invariant that the breakdown of bytes in arenas exactly accounts for the total arena bytes allocated.
Add an assert so that invariant doesn't break again.
* Remove m68k-specific hack from ascii_decode
On m68k, the alignment of primitive types is more relaxed: 4-byte and
8-byte types only require 2-byte alignment, so using sizeof(size_t)
does not work. Instead, use the portable alternative.
Note that this is a minimal fix that only relaxes the assertion and the
condition for when to use the optimised version remains overly strict.
Such issues will be fixed tree-wide in the next commit.
NB: In C11 we could use _Alignof(size_t) instead, but for compatibility
we use autoconf.
* Optimise string routines for architectures with non-natural alignment
C only requires that sizeof(x) is a multiple of alignof(x), not that the
two are equal. Thus anywhere where we optimise based on alignment we
should be using alignof(x) not sizeof(x).
This is more annoying than it would be in C11 where we could just use
_Alignof(x) (and alignof(x) in C++11), but since we still require only
C99 we must plumb the information all the way from autoconf through the
various typedefs and defines.
The radix tree approach is a relatively simple and memory sanitary
alternative to the old (slightly) unsanitary address_in_range().
To disable the radix tree map, set a preprocessor flag as follows:
-DWITH_PYMALLOC_RADIX_TREE=0.
Co-authored-by: Tim Peters <tim.peters@gmail.com>
See [PEP 597](https://www.python.org/dev/peps/pep-0597/).
* Add `-X warn_default_encoding` and `PYTHONWARNDEFAULTENCODING`.
* Add EncodingWarning
* Add io.text_encoding()
* open(), TextIOWrapper() emits EncodingWarning when encoding is omitted and warn_default_encoding is enabled.
* _pyio.TextIOWrapper() uses UTF-8 as the fallback default encoding when the locale module cannot be imported (as happens while building Python).
* bz2, configparser, gzip, lzma, pathlib, tempfile modules use io.text_encoding().
* What's new entry
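A hedged sketch of how a helper can cooperate with the new machinery via io.text_encoding(); the function and parameter names below are illustrative:
```python
import io

def read_text(path, encoding=None):
    # io.text_encoding() returns the encoding unchanged when it is not None;
    # otherwise it returns "locale" and, when -X warn_default_encoding (or
    # PYTHONWARNDEFAULTENCODING) is enabled, emits EncodingWarning on behalf
    # of the caller.
    encoding = io.text_encoding(encoding)
    with open(path, encoding=encoding) as f:
        return f.read()
```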
Remove the compiler functions using "struct _mod" type, because the
public AST C API was removed:
* PyAST_Compile()
* PyAST_CompileEx()
* PyAST_CompileObject()
* PyFuture_FromAST()
* PyFuture_FromASTObject()
These functions were undocumented and excluded from the limited C API.
Rename functions:
* PyAST_CompileObject() => _PyAST_Compile()
* PyFuture_FromASTObject() => _PyFuture_FromAST()
Moreover, _PyFuture_FromAST() is no longer exported (replace
PyAPI_FUNC() with extern). _PyAST_Compile() remains exported for
test_peg_generator.
Remove also compatibility functions:
* PyAST_Compile()
* PyAST_CompileEx()
* PyFuture_FromAST()
The common case going through _PyType_Lookup is to have a cache hit. There are some small tweaks that can make this a little cheaper:
* The name field identity is used for a cache hit and is kept alive by the cache. So there's no need to read the hash code of the name - instead, the address can be used as the hash.
* There's no need to check if the name is cacheable on the lookup either; it probably is, and if it is, it'll be in the cache.
* If we clear the version tag when invalidating a type then we don't actually need to check for a valid version tag bit.
* Remove an assertion which required the CO_NEWLOCALS and CO_OPTIMIZED
code flags. It is ok to call this function on a code object without these
flags set.
* Fix reference counting on builtins: remove Py_DECREF().
Fix regression introduced in the
commit 46496f9d12.
Add also a comment to document that _PyEval_BuiltinsFromGlobals()
returns a borrowed reference.
Python no longer fails at startup with a fatal error if a command
line argument contains an invalid Unicode character.
The Py_DecodeLocale() function now escapes byte sequences which would
be decoded as Unicode characters outside the [U+0000; U+10ffff]
range.
Use MAX_UNICODE constant in unicodeobject.c.
Implement an enhanced variant of Crochemore and Perrin's Two-Way string searching algorithm, which reduces worst-case time from quadratic (the product of the string and pattern lengths) to linear. This applies to forward searches (like ``find``, ``index``, ``replace``); the algorithm for reverse searches (like ``rfind``) is not changed.
Co-authored-by: Tim Peters <tim.peters@gmail.com>
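An illustrative worst case for the previous quadratic behavior (sizes are arbitrary): a periodic needle searched for in a long periodic haystack.
```python
# With the Two-Way algorithm this membership test runs in roughly linear time;
# previously it could take time proportional to len(needle) * len(haystack).
haystack = "a" * 10_000_000
needle = "a" * 5_000 + "b"
print(needle in haystack)   # False
```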
* No longer save/restore the current exception. It is no longer used
with an exception raised.
* No longer clear the current exception on error: it's now up to the
caller.
For some mysterious reason we have PySet_Check, PyFrozenSet_Check, PyAnySet_Check, PyAnySet_CheckExact and PyFrozenSet_CheckExact but no PySet_CheckExact.
The types.FunctionType constructor now inherits the current builtins
if the globals dictionary has no "__builtins__" key, rather than
using {"None": None} as builtins: same behavior as eval() and exec()
functions.
Defining a function with "def function(...): ..." in Python is not
affected; globals cannot be overridden with this syntax, and it also
inherits the current builtins.
PyFrame_New(), PyEval_EvalCode(), PyEval_EvalCodeEx(),
PyFunction_New() and PyFunction_NewWithQualName() now inherit the
current builtins namespace if the globals dictionary has no
"__builtins__" key.
* Add _PyEval_GetBuiltins() function.
* _PyEval_BuiltinsFromGlobals() now uses _PyEval_GetBuiltins() if
builtins cannot be found in globals.
* Add tstate parameter to _PyEval_BuiltinsFromGlobals().
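A hedged Python-level sketch of the inheritance behavior described above (the compile/exec trick is only a way to build a function with a bare globals dict):
```python
import types

# The globals dictionary deliberately lacks "__builtins__": the constructor now
# falls back to the current builtins instead of using {"None": None}.
code = compile("result = len('abc')", "<demo>", "exec")
func = types.FunctionType(code, {})
func()
print(func.__globals__["result"])   # 3 -- len() was resolved from the inherited builtins
```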
Pass the current interpreter (interp) rather than the current Python
thread state (tstate) to internal functions which only use the
interpreter.
Modified functions:
* _PyXXX_Fini() and _PyXXX_ClearFreeList() functions
* _PyEval_SignalAsyncExc(), make_pending_calls()
* _PySys_GetObject(), sys_set_object(), sys_set_object_id(), sys_set_object_str()
* should_audit(), set_flags_from_config(), make_flags()
* _PyAtExit_Call()
* init_stdio_encoding()
* etc.
* Refactor _PyFrame_New_NoTrack() and PyFunction_NewWithQualName()
code.
* PyFrame_New() checks for _PyEval_BuiltinsFromGlobals() failure.
* Fix a ref leak in _PyEval_BuiltinsFromGlobals() error path.
* Complete PyFunction_GetModule() documentation: it returns a
borrowed reference and it can return NULL.
* Move _PyEval_BuiltinsFromGlobals() definition to the internal C
API.
* PyFunction_NewWithQualName() uses _Py_IDENTIFIER() API for the
"__name__" string to make it compatible with subinterpreters.
Expose the new PyFunctionObject.func_builtins member in Python as a
new __builtins__ attribute on functions.
Document also the behavior change in What's New in Python 3.10.
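A minimal, hedged check of the new attribute:
```python
def f():
    return 0

print("len" in f.__builtins__)        # True
print(type(f.__builtins__).__name__)  # dict
```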
* Further refactoring of PyEval_EvalCode and friends. Break into make-frame, and eval-frame parts.
* Simplify function vector call using new _PyEval_Vector.
* Remove unused internal functions: _PyEval_EvalCodeWithName and _PyEval_EvalCode.
* Don't use legacy function PyEval_EvalCodeEx.
When Python is built in debug mode (with C assertions), calling a
type slot like sq_length (__len__() in Python) now fails with a fatal
error if the slot succeeded with an exception set, or failed with no
exception set. The error message contains the slot, the type name,
and the current exception (if an exception is set).
* Check the result of all slots using _Py_CheckSlotResult().
* No longer pass op_name to ternary_op() in release mode.
* Replace operator with dunder Python method name in error messages.
For example, replace "*" with "__mul__".
* Fix compiler_exit_scope() when an exception is set.
* Fix bytearray.extend() when an exception is set: don't call
bytearray_setslice() with an exception set.
* bpo-42979: Enhance abstract.c assertions checking slot result
Add _Py_CheckSlotResult() function which fails with a fatal error if
a slot function succeeded with an exception set or failed with no
exception set: write the slot name, the type name and the current
exception (if an exception is set).
The Py_FatalError() function and the faulthandler module now dump the
list of extension modules on a fatal error.
Add _Py_DumpExtensionModules() and _PyModule_IsExtension() internal
functions.
Before, using the * operator to repeat a bytearray would copy data from the start of
the internal buffer (ob_bytes) and not from the start of the actual data (ob_start).
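A hedged sketch of a case where ob_start and the raw buffer start can differ, so the fix is observable:
```python
ba = bytearray(b"xxabc")
del ba[:2]       # logical content is now b"abc"; the data may no longer start
                 # at the beginning of the internal buffer
print(ba * 2)    # bytearray(b'abcabc') -- repetition now copies from ob_start
```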
* Add test for frame.f_lineno with/without tracing.
* Make sure that frame.f_lineno is correct regardless of whether frame.f_trace is set.
* Update importlib
* Add NEWS
In is_typing_name(), va_end() is not always called before the
function returns. It is undefined behavior to call va_start()
without also calling va_end().
Previously this didn't raise an error. Now it will:
```python
from collections.abc import Callable
isinstance(int, list | Callable[..., str])
```
Also added tests in Union since there were previously none for stuff like ``isinstance(list, list | list[int])`` either.
Backport to 3.9 not required.
Automerge-Triggered-By: GH:gvanrossum
Make the Unicode dictionary of interned strings compatible with
subinterpreters.
Remove the INTERN_NAME_STRINGS macro in typeobject.c: names are
now always interned (even if the EXPERIMENTAL_ISOLATED_SUBINTERPRETERS
macro is defined).
_PyUnicode_ClearInterned() now uses PyDict_Next() so that it no longer
allocates memory, ensuring that the interned dictionary is cleared.
Make the type attribute lookup cache per-interpreter.
Add private _PyType_InitCache() function, called by PyInterpreterState_New().
Continue to share next_version_tag between interpreters, since static
types are still shared by interpreters.
Remove MCACHE macro: the cache is no longer disabled if the
EXPERIMENTAL_ISOLATED_SUBINTERPRETERS macro is defined.
Make _PyUnicode_FromId() function compatible with subinterpreters.
Each interpreter now has an array of identifier objects (interned
strings decoded from UTF-8).
* Add PyInterpreterState.unicode.identifiers: array of identifier
objects.
* Add _PyRuntimeState.unicode_ids used to allocate unique indexes
to _Py_Identifier.
* Rewrite the _Py_Identifier structure.
Microbenchmark on _PyUnicode_FromId(&PyId_a) with _Py_IDENTIFIER(a):
[ref] 2.42 ns +- 0.00 ns -> [atomic] 3.39 ns +- 0.00 ns: 1.40x slower
This change adds 1 ns per _PyUnicode_FromId() call on average.
Use `_PyArg_NoKeywords` instead of `_PyArg_NoKwnames` when checking the `kwds` tuple when creating `GenericAlias`. This fixes an interpreter crash when passing in keyword arguments to `GenericAlias`'s constructor.
Needs backport to 3.9.
Automerge-Triggered-By: GH:gvanrossum
Several built-in and standard library types now ensure that their internal result tuples are always tracked by the garbage collector:
- collections.OrderedDict.items
- dict.items
- enumerate
- functools.reduce
- itertools.combinations
- itertools.combinations_with_replacement
- itertools.permutations
- itertools.product
- itertools.zip_longest
- zip
Previously, they could have become untracked by a prior garbage collection.
No longer use deprecated aliases to functions:
* Replace PyObject_MALLOC() with PyObject_Malloc()
* Replace PyObject_REALLOC() with PyObject_Realloc()
* Replace PyObject_FREE() with PyObject_Free()
* Replace PyObject_Del() with PyObject_Free()
* Replace PyObject_DEL() with PyObject_Free()
No longer use deprecated aliases to functions:
* Replace PyMem_MALLOC() with PyMem_Malloc()
* Replace PyMem_REALLOC() with PyMem_Realloc()
* Replace PyMem_FREE() with PyMem_Free()
* Replace PyMem_Del() with PyMem_Free()
* Replace PyMem_DEL() with PyMem_Free()
Also modify the PyMem_DEL() macro to use PyMem_Free() directly.
Reduce the memory footprint and improve the performance of loading modules that have many function annotations.
>>> sys.getsizeof({"a":"int","b":"int","return":"int"})
232
>>> sys.getsizeof(("a","int","b","int","return","int"))
88
The tuple is converted into a dict on the fly the first time `func.__annotations__` is accessed.
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Co-authored-by: Inada Naoki <songofacandy@gmail.com>
The Py_TRASHCAN_BEGIN macro no longer accesses PyTypeObject attributes,
but now can get the condition by calling the new private
_PyTrash_cond() function which hides implementation details.
* Speed up comparison of bytes objects with non-bytes objects when
option -b is specified.
* Speed up comparison of bytearray objects with non-buffer objects.
* There were leaks if Py_tp_bases is used more than once or if some call
failed before setting tp_bases.
* There was a crash if the bases argument or the Py_tp_bases slot is not a tuple.
* The documentation was not accurate.
bpo-1635741, bpo-40170: When called on a static type with NULL
tp_base, PyType_Ready() no longer increments the reference count of
PyBaseObject_Type ("object"). PyTypeObject.tp_base is a strong
reference on a heap type, but a borrowed reference on a static
type.
Fix 99 reference leaks at Python exit (showrefcount 18623 => 18524).
Use PyLong_FromLong(0) and PyLong_FromLong(1) of the public C API
instead. For Python internals, _PyLong_GetZero() and _PyLong_GetOne()
of pycore_long.h can be used.
Add _PyLong_GetZero() and _PyLong_GetOne() functions and a new
internal pycore_long.h header file.
Python cannot be built without small integer singletons anymore.
* UCD_Check() uses PyModule_Check()
* Simplify the internal _PyUnicode_Name_CAPI structure:
* Remove size and state members
* Remove state and self parameters of getcode() and getname()
functions
* Remove global_module_state
The private _PyUnicode_Name_CAPI structure of the PyCapsule API
unicodedata.ucnhash_CAPI moves to the internal C API. Moreover, the
structure gets a new state member which must be passed to the
getcode() and getname() functions.
* Move Include/ucnhash.h to Include/internal/pycore_ucnhash.h
* unicodedata module is now built with Py_BUILD_CORE_MODULE.
* unicodedata: move hashAPI variable into unicodedata_module_state.
If PyDict_GetItemWithError is only used to check whether the key is in dict,
it is better to use PyDict_Contains instead.
And if it is used in combination with PyDict_SetItem, PyDict_SetDefault can
replace the combination.
These functions are considered not safe because they suppress all internal errors
and can return a wrong result. PyDict_GetItemString and _PyDict_GetItemId can
also silence the current exception in rare cases.
Remove no longer used _PyDict_GetItemId.
Add _PyDict_ContainsId and rename _PyDict_Contains into
_PyDict_Contains_KnownHash.
Remove complex special methods __int__, __float__, __floordiv__,
__mod__, __divmod__, __rfloordiv__, __rmod__ and __rdivmod__
which always raised a TypeError.
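A hedged illustration of the user-visible difference:
```python
z = 1 + 2j
print(hasattr(z, "__floordiv__"))   # False: the method is gone instead of raising TypeError
try:
    z // 2
except TypeError as exc:
    print(exc)   # e.g. "unsupported operand type(s) for //: 'complex' and 'int'"
```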
Enable recursion checks which were disabled when getting __bases__ of
non-type objects in issubclass() and isinstance() and when interning
strings. This fixes a stack overflow when getting __bases__ leads
to infinite recursion.
Originally, recursion checks were disabled for PyDict_GetItem(), which
silences all errors, including the one raised in case of detected
recursion, and can return an incorrect result. But now the code uses
PyDict_GetItemWithError() and PyDict_SetDefault() instead.
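A hedged, contrived sketch of the kind of pathological __bases__ that previously overflowed the C stack and should now raise RecursionError:
```python
class Evil:
    @property
    def __bases__(self):
        return self.__bases__   # re-enters this property without end

try:
    issubclass(Evil(), int)
except RecursionError:
    print("RecursionError instead of a hard crash")
```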
* bpo-26680: Adds support for int.is_integer() for compatibility with float.is_integer().
The int.is_integer() method always returns True.
* bpo-26680: Adds a test to ensure that False.is_integer() and True.is_integer() are always True.
* bpo-26680: Adds Real.is_integer() with a trivial implementation using conversion to int.
This default implementation is intended to reduce the workload for subclass
implementers. It is not robust in the presence of infinities or NaNs and
may have suboptimal performance for other types.
* bpo-26680: Adds Rational.is_integer which returns True if the denominator is one.
This implementation assumes the Rational is represented in its
lowest form, as required by the class docstring.
* bpo-26680: Adds Integral.is_integer which always returns True.
* bpo-26680: Adds tests for Fraction.is_integer called as an instance method.
The tests for the Rational abstract base class use an unbound
method to sidestep the inability to directly instantiate Rational.
These tests check that everything works correctly as an instance method.
* bpo-26680: Updates documentation for Real.is_integer and built-ins int and float.
The call x.is_integer() is now listed in the table of operations
which apply to all numeric types except complex, with a reference
to the full documentation for Real.is_integer(). Mention of
is_integer() has been removed from the section 'Additional Methods
on Float'.
The documentation for Real.is_integer() describes its purpose, and
mentions that it should be overridden for performance reasons, or
to handle special values like NaN.
* bpo-26680: Adds Decimal.is_integer to the Python and C implementations.
The C implementation of Decimal already implements and uses
mpd_isinteger internally, we just expose the existing function to
Python.
The Python implementation uses internal conversion to integer
using to_integral_value().
In both cases, the corresponding context methods are also
implemented.
Tests and documentation are included.
* bpo-26680: Updates the ACKS file.
* bpo-26680: NEWS entries for int, the numeric ABCs and Decimal.
Co-authored-by: Robert Smallshire <rob@sixty-north.com>
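Hedged examples of the methods described above, assuming a build that includes this changeset:
```python
from decimal import Decimal
from fractions import Fraction

print((7).is_integer())               # True  (always True for int)
print((3.5).is_integer())             # False (pre-existing float method)
print(Fraction(4, 2).is_integer())    # True  (reduces to denominator 1)
print(Decimal("2.50").is_integer())   # False
```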
Use Py_ssize_t type rather than int, to store lengths in
unionobject.c. Fix the warning:
Objects\unionobject.c(205,1): warning C4244: 'initializing':
conversion from 'Py_ssize_t' to 'int', possible loss of data
Use Py_ssize_t type rather than int, to store lengths in
unionobject.c. Fix warnings:
Objects\unionobject.c(189,71): warning C4244: '+=':
conversion from 'Py_ssize_t' to 'int', possible loss of data
Objects\unionobject.c(182,1): warning C4244: 'initializing':
conversion from 'Py_ssize_t' to 'int', possible loss of data
Objects\unionobject.c(205,1): warning C4244: 'initializing':
conversion from 'Py_ssize_t' to 'int', possible loss of data
Objects\unionobject.c(437,1): warning C4244: 'initializing':
conversion from 'Py_ssize_t' to 'int', possible loss of data
Use _PyType_HasFeature() in the _io module and in structseq
implementation. Replace PyType_HasFeature() opaque function call with
_PyType_HasFeature() inlined function.
The new API makes it possible to efficiently send values into native
generators and coroutines, avoiding the use of StopIteration exceptions to
signal returns.
The ceval loop now uses this method instead of the old "private"
_PyGen_Send C API. This translates to a 1.6x performance improvement
of 'await' calls in micro-benchmarks.
Aside from CPython core improvements, this new API will also allow
Cython to generate more efficient code, benefiting high-performance
IO libraries like uvloop.
When allocating MemoryError instances, there is some logic to use
pre-allocated instances from a freelist, but only if the type being
allocated is not a subclass of MemoryError. Unfortunately, this logic
is not present in the destructor, so the freelist is altered even
with subclasses of MemoryError.
My mentee @xvxvxvxvxv noticed that iterating over array.array is
slightly faster than iterating over bytes. Looking at the source I
observed that arrayiter_next() calls `getitem(ao, it->index++)` whereas
striter_next() uses the idiom (paraphrased)
item = PyLong_FromLong(seq->ob_sval[it->it_index]);
if (item != NULL)
++it->it_index;
return item;
I'm not 100% sure but I think that the second version gives the CPU fewer
opportunities to overlap the `index++` operation with the
rest of the code (which in both cases involves a call). So here I am
optimistically incrementing the index -- if the PyLong_FromLong() call
fails, this will leave the iterator pointing at the next byte, but
honestly I doubt that anyone would seriously consider resuming use of
the iterator after that kind of failure (it would have to be a
MemoryError). And the author of arrayiter_next() made the same
consideration (or never ever gave it a thought :-).
With this, a loop like
for _ in b: pass
is now slightly *faster* than the same thing over an equivalent array,
rather than slightly *slower* (in both cases a few percent).
Walk down the MRO backwards to find the type that originally defined the final `tp_setattro`, then make sure we are not jumping over intermediate C-level bases with the Python-level call.
Automerge-Triggered-By: @gvanrossum
* Merge gen and frame state variables into one.
* Replace stack pointer with depth in PyFrameObject. Makes code easier to read and saves a word of memory.
* Add failing test.
* bpo-29590: fix stack trace for gen.throw() with yield from (GH-NNNN)
When gen.throw() is called on a generator after a "yield from", the
intermediate stack trace entries are lost. This commit fixes that.
Always create the empty bytes string singleton.
Optimize PyBytes_FromStringAndSize(str, 0): it no longer has to check
whether the empty string singleton was created or not; it is always
available.
Add functions:
* _PyBytes_Init()
* bytes_get_empty(), bytes_new_empty()
* bytes_create_empty_string_singleton()
* unicode_create_empty_string_singleton()
_Py_unicode_state: rename empty structure member to empty_string.
Py_InitializeFromConfig() now always creates the empty tuple
singleton as soon as possible.
Optimize PyTuple_New(0): it no longer has to check whether the empty tuple
was created or not; it is always created.
* Add tuple_create_empty_tuple_singleton() function.
* Add tuple_get_empty() function.
* Remove state parameter of tuple_alloc().
Each interpreter now has its own Unicode latin1 singletons.
Remove "ifdef EXPERIMENTAL_ISOLATED_SUBINTERPRETERS"
and "ifdef LATIN1_SINGLETONS": always enable latin1 singletons.
Optimize unicode_result_ready(): only attempt to get a latin1
singleton for PyUnicode_1BYTE_KIND.
Functions of unicodeobject.c, like PyUnicode_New(), no longer check
if the empty Unicode singleton has been initialized or not. Consider
that it is always initialized. The Unicode API must not be used
before _PyUnicode_Init() or after _PyUnicode_Fini().
Each interpreter now has its own MemoryError free list: it is no
longer shared by all interpreters.
Add _Py_exc_state structure and PyInterpreterState.exc_state member.
Move also errnomap into _Py_exc_state.
* Revert "bpo-40521: Make the empty frozenset per interpreter (GH-21068)"
This reverts commit 261cfedf76.
* bpo-40521: Empty frozensets are no longer singletons
* Complete the removal of the frozenset singleton
Each interpreter now has its own empty bytes string and single byte
character singletons.
Replace STRINGLIB_EMPTY macro with STRINGLIB_GET_EMPTY() macro.
Each interpreter now has its own dict free list:
* Move dict free lists into PyInterpreterState.
* Move PyDict_MAXFREELIST define to pycore_interp.h
* Add _Py_dict_state structure.
* Add tstate parameter to _PyDict_ClearFreeList() and _PyDict_Fini().
* In debug mode, ensure that the dict free lists are not used after
_PyDict_Fini() is called.
* Remove "#ifdef EXPERIMENTAL_ISOLATED_SUBINTERPRETERS".
Unexpected errors in calling the __iter__ method are no longer
masked by TypeError in the "in" operator and functions
operator.contains(), operator.indexOf() and operator.countOf().
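A hedged sketch of the new behavior of the "in" operator:
```python
class Broken:
    def __iter__(self):
        raise ValueError("boom")

try:
    1 in Broken()
except ValueError as exc:
    print("propagated:", exc)   # previously masked by a generic TypeError
```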
The PyObject_INIT() and PyObject_INIT_VAR() macros become aliases to,
respectively, PyObject_Init() and PyObject_InitVar() functions.
Rename _PyObject_INIT() and _PyObject_INIT_VAR() static inline
functions to, respectively, _PyObject_Init() and _PyObject_InitVar(),
and move them to pycore_object.h. Remove their return value:
their return type becomes void.
The _datetime module is now built with the Py_BUILD_CORE_MODULE macro
defined.
Remove an outdated comment on _Py_tracemalloc_config.
In GH-2866, _Py_Bit_Length() was added to pymath.h for lack of a better
location. GH-20518 added a more appropriate header file for bit utilities. It
also shows how to properly use intrinsics. This allows reconsidering bpo-29782.
* Move the function to the new header.
* Changed return type to match __builtin_clzl() and reviewed usage.
* Use intrinsics where available.
* Pick a fallback implementation suitable for inlining.
PEP 353, written in 2005, introduced PY_FORMAT_SIZE_T. Python no
longer supports macOS 10.4 and Visual Studio 2010, but requires more
recent macOS and Visual Studio versions. In 2020 with Python 3.10, it
is now safe to use "%zu" directly to format size_t and "%zi" to
format Py_ssize_t.
* Rename pycore_byteswap.h to pycore_bitutils.h.
* Move popcount_digit() to pycore_bitutils.h as _Py_popcount32().
* _Py_popcount32() uses GCC and clang builtin function if available.
* Add unit tests to _Py_popcount32().
In debug mode, ensure that free lists are no longer used after being
finalized. Set numfree to -1 in finalization functions
(e.g. _PyList_Fini()), and then check that numfree is not equal to -1
before using a free list (e.g. list_dealloc()).
Each interpreter now has its own asynchronous generator free lists:
* Move async gen free lists into PyInterpreterState.
* Move _PyAsyncGen_MAXFREELIST define to pycore_interp.h
* Add _Py_async_gen_state structure.
* Add tstate parameter to _PyAsyncGen_ClearFreeLists
and _PyAsyncGen_Fini().
Each interpreter now has its own list free list:
* Move list numfree and free_list into PyInterpreterState.
* Add _Py_list_state structure.
* Add tstate parameter to _PyList_ClearFreeList()
and _PyList_Fini().
* Remove "#ifdef EXPERIMENTAL_ISOLATED_SUBINTERPRETERS".
* _PyGC_Fini() clears gcstate->garbage list which can be stored in
the list free list. Call _PyGC_Fini() before _PyList_Fini() to
prevent leaking this list.
Each interpreter now has its own frame free list:
* Move frame free list into PyInterpreterState.
* Add _Py_frame_state structure.
* Add tstate parameter to _PyFrame_ClearFreeList()
and _PyFrame_Fini().
* Remove "#if PyFrame_MAXFREELIST > 0".
* Remove "#ifdef EXPERIMENTAL_ISOLATED_SUBINTERPRETERS".
Each interpreter now has its own float free list:
* Move float numfree and free_list into PyInterpreterState.
* Add _Py_float_state structure.
* Add tstate parameter to _PyFloat_ClearFreeList()
and _PyFloat_Fini().
Each interpreter now has its own tuple free lists:
* Move tuple numfree and free_list arrays into PyInterpreterState.
* Define PyTuple_MAXSAVESIZE and PyTuple_MAXFREELIST macros in
pycore_interp.h.
* Add _Py_tuple_state structure. Pass it explicitly to tuple_alloc().
* Add tstate parameter to _PyTuple_ClearFreeList()
* Each interpreter now has its own empty tuple singleton.
* bpo-29882: Add an efficient popcount method for integers
* Update 'sign bit' and versionadded in docs
* Add entry to whatsnew document
* Doc: use positive example, mention population count
* Minor cleanups of the core code
* Move popcount_digit closer to where it's used
* Use z instead of self after conversion
* Add 'absolute value' and 'population count' to docstring
* Fix clinic error about missing summary line
* Ensure popcount_digit is portable with 64-bit ints
Co-authored-by: Mark Dickinson <dickinsm@gmail.com>
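A quick illustration of the new method:
```python
print(bin(13))             # 0b1101
print((13).bit_count())    # 3
print((-13).bit_count())   # 3 (population count of the absolute value)
```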
Previously, the result could have been an instance of a subclass of int.
Also revert bpo-26202 and make the attributes start, stop and step of the range
object have exact type int.
Add a private function _PyNumber_Index() which preserves the old behavior
of PyNumber_Index(); for performance, it is used in conversion functions
like PyLong_AsLong().
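A hedged sketch of the behavior change for int subclasses (the class name is illustrative):
```python
import operator

class MyInt(int):
    pass

print(type(operator.index(MyInt(3))))   # <class 'int'> -- an exact int, not MyInt
print(type(range(MyInt(5)).stop))       # <class 'int'> -- range attributes are exact ints again
```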
Convert Py_REFCNT() and Py_SIZE() macros to static inline functions.
They cannot be used as l-value anymore: use Py_SET_REFCNT() and
Py_SET_SIZE() to set an object reference count and size.
Replace &Py_SIZE(self) with &((PyVarObject*)self)->ob_size
in arraymodule.c.
This change is backward incompatible on purpose, to prepare the C API
for an opaque PyObject structure.
* Fix outdated __int__ and nb_int references in comments
* Also update C-API documentation
* Add back missing 'method' word
* Remove .. deprecated notices
This updates _PyErr_ChainStackItem() to use _PyErr_SetObject()
instead of _PyErr_ChainExceptions(). This prevents a hang in
certain circumstances because _PyErr_SetObject() performs checks
to prevent cycles in the exception context chain while
_PyErr_ChainExceptions() doesn't.
When an asyncio.Task is cancelled, the exception traceback now
starts with where the task was first interrupted. Previously,
the traceback only had "depth one."