Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>
Co-authored-by: Malcolm Smith <smith@chaquo.com>
Co-authored-by: Ned Deily <nad@python.org>
* EOFError no longer overrides other errors such as MemoryError or OSError at
the start of the object.
* Raise a more relevant error when a NULL object occurs as a code object
component.
* Minimize the overhead of calling PyErr_Occurred().
Remove old aliases which were kept for backwards compatibility with
Python 3.8:
* _PyObject_CallMethodNoArgs()
* _PyObject_CallMethodOneArg()
* _PyObject_CallOneArg()
* _PyObject_FastCallDict()
* _PyObject_Vectorcall()
* _PyObject_VectorcallMethod()
* _PyVectorcall_Function()
Update code which used these aliases to use the new names.
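For illustration, a call site that used one of the removed aliases switches to the corresponding public name; a minimal sketch, assuming the caller already holds `callable` and `arg`:

```c
#include <Python.h>

/* Sketch: call "callable" with one argument using the public API that
   replaces the removed _PyObject_CallOneArg() alias. */
static PyObject *
call_one(PyObject *callable, PyObject *arg)
{
    /* Previously: _PyObject_CallOneArg(callable, arg); */
    PyObject *res = PyObject_CallOneArg(callable, arg);
    if (res == NULL) {
        return NULL;  /* propagate the error raised by the callee */
    }
    return res;
}
```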
Here we are doing no more than adding the value for Py_mod_multiple_interpreters and using it for stdlib modules. We will start checking for it in gh-104206 (once PyInterpreterState.ceval.own_gil is added in gh-104204).
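As a hedged illustration, a stdlib-style extension would declare the new slot roughly like this (the module name, and the choice of Py_MOD_MULTIPLE_INTERPRETERS_SUPPORTED as the value, are placeholders for whatever each module actually declares):

```c
#include <Python.h>

static PyModuleDef_Slot example_slots[] = {
    /* Advertise multi-interpreter support; nothing checks this value yet
       (that lands with gh-104206). */
    {Py_mod_multiple_interpreters, Py_MOD_MULTIPLE_INTERPRETERS_SUPPORTED},
    {0, NULL}
};

static struct PyModuleDef examplemodule = {
    PyModuleDef_HEAD_INIT,
    .m_name = "example",   /* placeholder module name */
    .m_size = 0,
    .m_slots = example_slots,
};

PyMODINIT_FUNC
PyInit_example(void)
{
    return PyModuleDef_Init(&examplemodule);
}
```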
* Eliminate all remaining uses of Py_SIZE and Py_SET_SIZE on PyLongObject, adding asserts.
* Change layout of size/sign bits in longobject to support future addition of immortal ints and tagged medium ints.
* Add functions that hide some long object internals and that set the sign and digit count.
* Replace uses of IS_MEDIUM_VALUE macro with _PyLong_IsCompact().
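A hedged sketch of the packing idea only; the names and bit layout below are illustrative placeholders, not CPython's actual fields:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative only: fold the sign and the digit count into one tag word,
   reserving the low bits so states such as immortal or tagged medium ints
   can be added later without another layout change. */
#define DEMO_SIGN_BITS 2            /* 0 = positive, 1 = zero, 2 = negative */
#define DEMO_SIGN_MASK 3

typedef struct {
    uintptr_t tag;                  /* (digit count << DEMO_SIGN_BITS) | sign */
} demo_int_header;

static inline size_t demo_digit_count(const demo_int_header *h)
{
    return h->tag >> DEMO_SIGN_BITS;
}

static inline int demo_sign(const demo_int_header *h)
{
    return 1 - (int)(h->tag & DEMO_SIGN_MASK);   /* +1, 0 or -1 */
}

/* "Compact" ints fit in a single digit, so the whole tag stays small. */
static inline int demo_is_compact(const demo_int_header *h)
{
    return h->tag < (2 << DEMO_SIGN_BITS);
}
```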
For wasmtime 2.0, the stack depth cost is 6% higher. This causes the default max `marshal` recursion depth to blow the stack.
As the default marshal depth is 2000 and Windows is set to 1000, split the difference and choose 1500 for WASI to be safe.
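A sketch of the resulting platform-dependent limit; the exact preprocessor guards are an assumption about how marshal.c distinguishes the platforms:

```c
/* Sketch: recursion limit for marshal, chosen per platform stack budget. */
#if defined(MS_WINDOWS)
#  define MAX_MARSHAL_STACK_DEPTH 1000
#elif defined(__wasi__)
#  define MAX_MARSHAL_STACK_DEPTH 1500
#else
#  define MAX_MARSHAL_STACK_DEPTH 2000
#endif
```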
Always use FLAG_REF for interned strings.
Refcounts of interned strings are very unstable.
When compiling the same source, the refcounts of interned strings in the output may be 1 or >1.
That makes FLAG_REF usage unstable.
To help reproducible builds, use FLAG_REF for interned strings even if refcnt(obj)==1.
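A hedged, simplified sketch of the writer-side decision (the real marshal code folds this into its reference-tracking path):

```c
#include <Python.h>

/* Simplified sketch: emit an object with FLAG_REF either when it is
   genuinely shared (refcount > 1) or when it is an interned string, so the
   output no longer depends on an interned string's unstable refcount. */
static int
should_use_flag_ref(PyObject *v)
{
    if (PyUnicode_CheckExact(v) && PyUnicode_CHECK_INTERNED(v)) {
        return 1;
    }
    return Py_REFCNT(v) > 1;
}
```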
* Store all location info in the linetable, to conform to PEP 626.
* Remove the column table from code objects.
* Remove the end-line table from code objects.
* Document the new location table format.
Remove the Include/code.h header file. C extensions should only
include the main <Python.h> header file.
Python.h now includes Include/cpython/code.h directly instead.
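For extensions the fix is mechanical; a minimal sketch:

```c
/* Before: the extension included the removed header directly. */
/* #include "code.h" */

/* After: include only the umbrella header; for non-limited-API builds it
   pulls in Include/cpython/code.h itself. */
#define PY_SSIZE_T_CLEAN
#include <Python.h>
```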
* Moves the bytecode to the end of the corresponding PyCodeObject, and quickens it in-place.
* Removes the almost-always-unused co_varnames, co_freevars, and co_cellvars member caches
* _PyOpcode_Deopt is a new mapping from all opcodes to their un-quickened forms.
* _PyOpcode_InlineCacheEntries is renamed to _PyOpcode_Caches
* _Py_IncrementCountAndMaybeQuicken is renamed to _PyCode_Warmup
* _Py_Quicken is renamed to _PyCode_Quicken
* _co_quickened is renamed to _co_code_adaptive (and is now a read-only memoryview).
* The compiler no longer emits unused nonzero opargs.
Add new functions to pack and unpack C double (serialize and
deserialize):
* PyFloat_Pack2(), PyFloat_Pack4(), PyFloat_Pack8()
* PyFloat_Unpack2(), PyFloat_Unpack4(), PyFloat_Unpack8()
Document these functions and add unit tests.
Rename private functions and move them from the internal C API
to the public C API:
* _PyFloat_Pack2() => PyFloat_Pack2()
* _PyFloat_Pack4() => PyFloat_Pack4()
* _PyFloat_Pack8() => PyFloat_Pack8()
* _PyFloat_Unpack2() => PyFloat_Unpack2()
* _PyFloat_Unpack4() => PyFloat_Unpack4()
* _PyFloat_Unpack8() => PyFloat_Unpack8()
Replace the "unsigned char*" type with "char*" which is more common
and easy to use.
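A minimal usage sketch of the newly public pair, doing a little-endian round trip (the wrapper function is illustrative):

```c
#include <Python.h>
#include <stdio.h>

/* Round-trip a double through the public pack/unpack functions.
   The last argument selects little-endian (1) or big-endian (0). */
static int
roundtrip_double(double value)
{
    char buf[8];
    if (PyFloat_Pack8(value, buf, 1) < 0) {
        return -1;                      /* error already set */
    }
    double back = PyFloat_Unpack8(buf, 1);
    if (back == -1.0 && PyErr_Occurred()) {
        return -1;
    }
    printf("%.17g -> %.17g\n", value, back);
    return 0;
}
```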
We're no longer using _Py_IDENTIFIER() (or _Py_static_string()) in any core CPython code. It is still used in a number of non-builtin stdlib modules.
The replacement is: PyUnicodeObject (not pointer) fields under _PyRuntimeState, statically initialized as part of _PyRuntime. A new _Py_GET_GLOBAL_IDENTIFIER() macro facilitates lookup of the fields (along with _Py_GET_GLOBAL_STRING() for non-identifier strings).
https://bugs.python.org/issue46541#msg411799 explains the rationale for this change.
The core of the change is in:
* (new) Include/internal/pycore_global_strings.h - the declarations for the global strings, along with the macros
* Include/internal/pycore_runtime_init.h - added the static initializers for the global strings
* Include/internal/pycore_global_objects.h - where the struct in pycore_global_strings.h is hooked into _PyRuntimeState
* Tools/scripts/generate_global_objects.py - added generation of the global string declarations and static initializers
I've also added a --check flag to generate_global_objects.py (along with make check-global-objects) to check for unused global strings. That check is added to the PR CI config.
The remainder of this change updates the core code to use _Py_GET_GLOBAL_IDENTIFIER() instead of _Py_IDENTIFIER() and the related _Py*Id functions (likewise for _Py_GET_GLOBAL_STRING() instead of _Py_static_string()). This includes adding a few functions where there wasn't already an alternative to _Py*Id(), replacing the _Py_Identifier * parameter with PyObject *.
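Roughly, a core call site changes shape as in this sketch (it assumes _Py_GET_GLOBAL_IDENTIFIER() yields a PyObject *, per the description above; the macro lives in an internal header, so this only compiles as core code):

```c
/* Sketch: calling obj.close() from core code after the change. */
static PyObject *
call_close(PyObject *obj)
{
    /* Before: _Py_IDENTIFIER(close);
               return _PyObject_CallMethodIdNoArgs(obj, &PyId_close); */
    return PyObject_CallMethodNoArgs(obj, _Py_GET_GLOBAL_IDENTIFIER(close));
}
```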
The following are not changed (yet):
* stop using _Py_IDENTIFIER() in the stdlib modules
* (maybe) get rid of _Py_IDENTIFIER(), etc. entirely -- this may not be doable, as at least one package on PyPI is using this (private) API
* (maybe) intern the strings during runtime init
https://bugs.python.org/issue46541
Move classobject.h, context.h, genobject.h and longintrepr.h header
files from Include/ to Include/cpython/.
Remove redundant "#ifndef Py_LIMITED_API" in context.h.
Remove the explicit #include "longintrepr.h" from C files. It's not needed;
Python.h already includes it.
* Move _PyObject_CallNoArgs() to pycore_call.h (internal C API).
* The _ssl, _sqlite and _testcapi extensions now call the public
PyObject_CallNoArgs() function, rather than _PyObject_CallNoArgs().
* The _lsprof extension is now built with the Py_BUILD_CORE_MODULE macro
defined to get access to the internal _PyObject_CallNoArgs().
Fix typo in the private _PyObject_CallNoArg() function name: rename
it to _PyObject_CallNoArgs() to be consistent with the public
function PyObject_CallNoArgs().
This PR is part of PEP 657 and augments the compiler to emit ending
line numbers as well as starting and ending columns from the AST
into compiled code objects. This allows bytecodes to be correlated
to the exact source code ranges that generated them.
This information is made available through the following public APIs:
* The `co_positions` method on code objects.
* The C API function `PyCode_Addr2Location`.
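A minimal C-side usage sketch (the wrapper and its arguments are illustrative):

```c
#include <Python.h>
#include <stdio.h>

/* Resolve the source range that produced the instruction at byte offset
   `addr` of `code`.  Missing information comes back as -1. */
static void
print_location(PyCodeObject *code, int addr)
{
    int start_line, start_col, end_line, end_col;
    if (PyCode_Addr2Location(code, addr, &start_line, &start_col,
                             &end_line, &end_col)) {
        printf("%d:%d -> %d:%d\n", start_line, start_col, end_line, end_col);
    }
}
```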
Co-authored-by: Batuhan Taskaya <isidentical@gmail.com>
Co-authored-by: Ammar Askar <ammar@ammaraskar.com>
These were reverted in gh-26530 (commit 17c4edc) due to refleaks.
* 2c1e258 - Compute deref offsets in compiler (gh-25152)
* b2bf2bc - Add new internal code objects fields: co_fastlocalnames and co_fastlocalkinds. (gh-26388)
This change fixes the refleaks.
https://bugs.python.org/issue43693
* Revert "bpo-43693: Compute deref offsets in compiler (gh-25152)"
This reverts commit b2bf2bc1ec.
* Revert "bpo-43693: Add new internal code objects fields: co_fastlocalnames and co_fastlocalkinds. (gh-26388)"
This reverts commit 2c1e2583fd.
These two commits are breaking the refleak buildbots.
A number of places in the code base (notably ceval.c and frameobject.c) rely on mapping variable names to indices in the frame "locals plus" array (AKA fast locals), and thus opargs. Currently the compiler indirectly encodes that information on the code object as the tuples co_varnames, co_cellvars, and co_freevars. At runtime the dependent code must calculate the proper mapping from those, which isn't ideal and impacts performance-sensitive sections. This is something we can easily address in the compiler instead.
This change addresses the situation by replacing internal use of co_varnames, etc. with a single combined tuple of names in locals-plus order, along with a minimal array mapping each to its kind (local vs. cell vs. free). These two new PyCodeObject fields, co_fastlocalnames and co_fastlocalkinds, are not exposed to Python code for now, but co_varnames, etc. are still available with the same values as before (though computed lazily).
Aside from the (mild) performance impact, there are a number of other benefits:
* there's now a clear, direct relationship between locals-plus and variables
* code that relies on the locals-plus-to-name mapping is simpler
* marshaled code objects are smaller and serialize/de-serialize faster
Also note that we can take this approach further by expanding the possible values in co_fastlocalkinds to include specific argument types (e.g. positional-only, kwargs). Doing so would allow further speed-ups in _PyEval_MakeFrameVector(), which is where args get unpacked into the locals-plus array. It would also allow us to shrink marshaled code objects even further.
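As a hedged illustration of the shape of that mapping (the names and flag values below are illustrative, not necessarily the exact constants used internally):

```c
/* Illustrative only: one "kind" byte per locals-plus slot lets the
   interpreter tell locals, cells and frees apart without rebuilding a
   mapping from co_varnames / co_cellvars / co_freevars at runtime. */
typedef unsigned char demo_localkind;

#define DEMO_FAST_LOCAL 0x01
#define DEMO_FAST_CELL  0x02
#define DEMO_FAST_FREE  0x04

/* co_fastlocalnames: tuple of names in locals-plus order.
   co_fastlocalkinds: parallel array of demo_localkind, one per slot. */
static int
demo_slot_is_cell(const demo_localkind *kinds, int slot)
{
    return (kinds[slot] & DEMO_FAST_CELL) != 0;
}
```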
https://bugs.python.org/issue43693
"Zero cost" exception handling.
* Uses a lookup table to determine how to handle exceptions.
* Removes SETUP_FINALLY and POP_TOP block instructions, eliminating (most of) the runtime overhead of try statements.
* Reduces the size of the frame object by about 60%.
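A hedged sketch of the lookup-table idea; the entry layout is illustrative (the real table is stored in a compact encoded form on the code object):

```c
#include <stddef.h>

/* Illustrative only: when an exception is raised, the current instruction
   offset is looked up in a per-code-object table instead of popping
   SETUP_FINALLY blocks at runtime.  No matching entry means the exception
   propagates to the caller. */
typedef struct {
    int start;    /* first instruction offset covered by the handler */
    int end;      /* one past the last covered offset */
    int target;   /* where to jump when an exception is raised */
    int depth;    /* value stack depth to restore before jumping */
} demo_handler_entry;

static const demo_handler_entry *
demo_find_handler(const demo_handler_entry *table, size_t n, int offset)
{
    for (size_t i = 0; i < n; i++) {
        if (table[i].start <= offset && offset < table[i].end) {
            return &table[i];
        }
    }
    return NULL;  /* unwind: no handler in this frame */
}
```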
No longer use deprecated aliases to functions:
* Replace PyMem_MALLOC() with PyMem_Malloc()
* Replace PyMem_REALLOC() with PyMem_Realloc()
* Replace PyMem_FREE() with PyMem_Free()
* Replace PyMem_Del() with PyMem_Free()
* Replace PyMem_DEL() with PyMem_Free()
Also modify the PyMem_DEL() macro to use PyMem_Free() directly.
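A minimal sketch of the modern spelling:

```c
#include <Python.h>
#include <string.h>

/* Allocate, use and release a buffer with the non-deprecated names. */
static int
copy_bytes(const char *src, size_t n)
{
    char *buf = PyMem_Malloc(n);   /* was PyMem_MALLOC(n) */
    if (buf == NULL) {
        PyErr_NoMemory();
        return -1;
    }
    memcpy(buf, src, n);
    /* ... use buf ... */
    PyMem_Free(buf);               /* was PyMem_FREE() / PyMem_DEL() */
    return 0;
}
```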
_Py_hashtable_t values become regular "void *" pointers.
* Add _Py_hashtable_entry_t.data member
* Remove _Py_hashtable_t.data_size member
* Remove _Py_hashtable_t.get_func member. It is no longer needed
to specialize _Py_hashtable_get() for a specific value size, since
all entries now have the same size (void*).
* Remove the following macros:
* _Py_HASHTABLE_GET()
* _Py_HASHTABLE_SET()
* _Py_HASHTABLE_SET_NODATA()
* _Py_HASHTABLE_POP()
* Rename _Py_hashtable_pop() to _Py_hashtable_steal()
* The _Py_hashtable_foreach() callback now gets the key and value rather
than the entry.
* Remove the _Py_hashtable_value_destroy_func type. The value_destroy_func
callback now only has a single parameter: data (void*).
Rewrite _Py_hashtable_t type to always store the key as
a "const void *" pointer. Add an explicit "key" member to
_Py_hashtable_entry_t.
Remove _Py_hashtable_t.key_size member.
Hash and compare functions drop their hash table parameter, and their
'key' parameter type becomes "const void *".
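A hedged usage sketch of the reshaped internal API; the function names follow the text above, but the exact signatures are an assumption:

```c
#include "pycore_hashtable.h"   /* internal-only header */

/* The foreach callback now receives the key and value directly. */
static int
visit_entry(_Py_hashtable_t *ht, const void *key, const void *value,
            void *user_data)
{
    /* ... inspect key/value ... */
    return 0;   /* non-zero stops the iteration */
}

static void
demo(_Py_hashtable_t *table, const void *key, void *value)
{
    _Py_hashtable_set(table, key, value);        /* replaces _Py_HASHTABLE_SET() */
    void *found = _Py_hashtable_get(table, key); /* plain "void *" value */
    (void)found;
    _Py_hashtable_foreach(table, visit_entry, NULL);
}
```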
* Move Modules/hashtable.h to Include/internal/pycore_hashtable.h
* Move Modules/hashtable.c to Python/hashtable.c
* Python is now linked to hashtable.c; _tracemalloc is no longer
linked to hashtable.c. Previously, marshal.c got hashtable.c via
_tracemalloc.c, which is built as a builtin module.