This change is almost entirely moving code around and hiding import state behind internal API. We introduce no changes to behavior, nor to non-internal API. (Since there was already going to be a lot of churn, I took this as an opportunity to re-organize import.c into topically-grouped sections of code.) The motivation is to simplify a number of upcoming changes.
Specific changes:
* move existing import-related code to import.c, wherever possible
* add internal API for interacting with import state (both global and per-interpreter)
* use only the new API outside of import.c (to limit churn there when the state's location changes, etc.)
* consolidate the import-related state of PyInterpreterState into a single struct field (this changes layout slightly)
* add macros for import state in import.c (to simplify changing the location)
* group code in import.c into sections
* remove _PyState_AddModule()
https://github.com/python/cpython/issues/101758
* Make sure that the current exception is always normalized.
* Remove redundant type and traceback fields for the current exception.
* Add new API functions: PyErr_GetRaisedException, PyErr_SetRaisedException
* Add new API functions: PyException_GetArgs, PyException_SetArgs
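A minimal usage sketch of how the two new pairs compose (error handling elided; since the raised exception is always a normalized exception instance, there is no separate type/traceback juggling):

```c
#include <Python.h>

/* Sketch: capture the currently raised exception, look at its args,
   then restore it as the raised exception. */
static void
inspect_and_reraise(void)
{
    PyObject *exc = PyErr_GetRaisedException();  /* new ref; clears the error indicator */
    if (exc == NULL) {
        return;  /* nothing was raised */
    }
    PyObject *args = PyException_GetArgs(exc);   /* new reference to exc.args */
    if (args != NULL) {
        /* ...inspect or rebuild args here, then store them back... */
        PyException_SetArgs(exc, args);          /* does not steal args */
        Py_DECREF(args);
    }
    PyErr_SetRaisedException(exc);               /* steals exc; error indicator set again */
}
```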
A PyThreadState can be in one of many states in its lifecycle, represented by some status value. Those statuses haven't been particularly clear, so we're addressing that here. Specifically:
* made the distinct lifecycle statuses clear on PyThreadState
* identified expectations of how various lifecycle-related functions relate to status
* noted the various places where those expectations don't match the actual behavior
At some point we'll need to address the mismatches.
(This change also includes some cleanup.)
https://github.com/python/cpython/issues/59956
`warnings.warn()` gains the ability to skip stack frames based on code
filename prefix rather than only a numeric `stacklevel=` via a new
`skip_file_prefixes=` keyword argument.
To use this, ensure that clang support was selected in Visual Studio Installer, then set the PlatformToolset environment variable to "ClangCL" and build as normal from the command line.
It remains unsupported, but at least is possible now for experimentation.
We've factored the two trashcan-related PyThreadState fields out into a struct. This accomplishes two things:
* make it clear that the trashcan-related code doesn't need any other parts of PyThreadState
* allows us to use the trashcan mechanism even when there isn't a "current" thread state
We still expect the caller to hold the GIL.
https://github.com/python/cpython/issues/59956
The objective of this change is to help make the GILState-related code easier to understand. This mostly involves moving code around and some semantically equivalent refactors. However, there are also a small number of slight changes in structure and behavior:
* tstate_current is moved out of _PyRuntimeState.gilstate
* autoTSSkey is moved out of _PyRuntimeState.gilstate
* autoTSSkey is initialized earlier
* autoTSSkey is re-initialized (after fork) earlier
https://github.com/python/cpython/issues/59956
When executing the BUILD_LIST opcode, steal the references from the stack,
in a manner similar to the BUILD_TUPLE opcode. Implement this by offloading
the logic to a new private API, _PyList_FromArraySteal(), that works similarly
to _PyTuple_FromArraySteal().
This way, instead of performing multiple stack pointer adjustments while the
list is being initialized, the stack is adjusted only once and a fast memory
copy operation is performed in one fell swoop.
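A simplified sketch of the semantics (the real _PyList_FromArraySteal() preallocates the list and does a single memcpy into its item array rather than looping):

```c
#include <Python.h>

/* Illustrative sketch only: build a list of length n, stealing the
   references held in src[0..n-1], and consuming them even on failure. */
static PyObject *
list_from_array_steal_sketch(PyObject *const *src, Py_ssize_t n)
{
    PyObject *list = PyList_New(n);
    if (list == NULL) {
        for (Py_ssize_t i = 0; i < n; i++) {
            Py_DECREF(src[i]);  /* the references were ours to consume */
        }
        return NULL;
    }
    for (Py_ssize_t i = 0; i < n; i++) {
        PyList_SET_ITEM(list, i, src[i]);  /* steals the reference */
    }
    return list;
}
```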
This is a best-effort warning, not a comprehensive one; there are cases on some platforms where threads exist that this code cannot detect. macOS (when API permissions allow) and Linux (with a readable /proc procfs present) are the currently supported cases where a warning should show up reliably.
We start with a DeprecationWarning for now: it is less disruptive than something like RuntimeWarning and is most likely to only be seen in people's CI tests - a good place to start with this messaging.
* move _PyRuntime.global_objects.interned to _PyRuntime.cached_objects.interned_strings (and use _Py_CACHED_OBJECT())
* rename _PyRuntime.global_objects to _PyRuntime.static_objects
(This also relates to gh-96075.)
https://github.com/python/cpython/issues/90111
* Add an API to allow extensions to set a callback function on creation and destruction of PyCodeObject (see the sketch below)
Co-authored-by: Ye11ow-Flash <janshah@cs.stonybrook.edu>
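Assuming this is the code-object watcher API (PyCode_AddWatcher() and friends), registration looks roughly like the sketch below; the callback name is illustrative:

```c
#include <Python.h>

/* Illustrative callback: called when a code object is created or destroyed. */
static int
my_code_watcher(PyCodeEvent event, PyCodeObject *co)
{
    if (event == PY_CODE_EVENT_CREATE) {
        /* e.g. start tracking co for profiling or JIT bookkeeping */
    }
    else if (event == PY_CODE_EVENT_DESTROY) {
        /* e.g. drop any cached state keyed on co */
    }
    return 0;  /* non-zero signals that the callback failed */
}

static int
install_code_watcher(void)
{
    /* Returns a watcher id (>= 0) for later PyCode_ClearWatcher(), or -1 on error. */
    return PyCode_AddWatcher(my_code_watcher);
}
```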
* Change _PyDict_KeysSize() and shared_keys_usable_size() return type
from signed (Py_ssize_t) to unsigned (size_t) type.
* new_values() argument type is now unsigned (size_t).
* init_inline_values() now uses size_t rather than int for the 'i'
iterator variable.
* type.__sizeof__() implementation now uses unsigned (size_t) type.
The following macros are modified to use _Py_RVALUE(), so they can no
longer be used as l-values (see the illustration below):
* DK_LOG_SIZE()
* _PyCode_CODE()
* _PyList_ITEMS()
* _PyTuple_ITEMS()
* _Py_SLIST_HEAD()
* _Py_SLIST_ITEM_NEXT()
_PyCode_CODE() is private and the other macros are part of the internal
C API.
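For illustration, _Py_RVALUE() is essentially a comma expression, `((void)0, (EXPR))`, which turns the macro expansion into an rvalue so that assigning through the macro stops compiling. A stand-alone demo with hypothetical DEMO_* names:

```c
/* Stand-in for the internal pattern; the DEMO_* names are hypothetical. */
#define DEMO_RVALUE(EXPR) ((void)0, (EXPR))

typedef struct { double value; } demo_t;

#define DEMO_VALUE(obj) DEMO_RVALUE((obj)->value)

static double
read_value(demo_t *obj)
{
    double v = DEMO_VALUE(obj);    /* reading still works */
    /* DEMO_VALUE(obj) = 1.0; */   /* would now fail to compile: not an l-value */
    return v;
}
```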
Convert macros to static inline functions to avoid macro pitfalls,
like duplication of side effects (illustrated after the list below):
* DK_ENTRIES()
* DK_UNICODE_ENTRIES()
* PyCode_GetNumFree()
* PyFloat_AS_DOUBLE()
* PyInstanceMethod_GET_FUNCTION()
* PyMemoryView_GET_BASE()
* PyMemoryView_GET_BUFFER()
* PyMethod_GET_FUNCTION()
* PyMethod_GET_SELF()
* PySet_GET_SIZE()
* _PyHeapType_GET_MEMBERS()
Changes:
* PyCode_GetNumFree() casts PyCodeObject.co_nfreevars from int
  to Py_ssize_t to be future proof, and because Py_ssize_t is
  commonly used in the C API.
* PyCode_GetNumFree() doesn't cast its argument: the replaced macro
already required the exact type PyCodeObject*.
* Add assertions in some functions using "CAST" macros to check
the arguments type when Python is built with assertions
(debug build).
* Remove an outdated comment in unicodeobject.h.
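The pitfall being avoided, with a hypothetical two-evaluation macro next to its static inline replacement:

```c
#include <Python.h>

/* Hypothetical macro: "op" appears twice, so any side effects in the
   argument expression (e.g. advancing an iterator) may run twice. */
#define UNSAFE_AS_DOUBLE(op)  (PyFloat_Check(op) ? PyFloat_AS_DOUBLE(op) : 0.0)

/* Static inline replacement: the argument is evaluated exactly once,
   and the parameter type is checked by the compiler. */
static inline double
safe_as_double(PyObject *op)
{
    return PyFloat_Check(op) ? PyFloat_AS_DOUBLE(op) : 0.0;
}
```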
Newly supported interpreter definition syntax:
- `op(NAME, (input_stack_effects -- output_stack_effects)) { ... }`
- `macro(NAME) = OP1 + OP2;`
Also some other random improvements:
- Convert `WITH_EXCEPT_START` to use stack effects
- Fix lexer to balk at unrecognized characters, e.g. `@`
- Fix moved output names; support object pointers in cache
- Introduce `error()` method to print errors
- Introduce `read_uint16(p)` as equivalent to `*p`
Co-authored-by: Brandt Bucher <brandtbucher@gmail.com>
This is part of the effort to consolidate global variables, to make them easier to manage (and make it easier to later move some of them to PyInterpreterState).
https://github.com/python/cpython/issues/81057
Introduce the autocommit attribute to Connection and the autocommit
parameter to connect() for PEP 249-compliant transaction handling.
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: C.A.M. Gerlach <CAM.Gerlach@Gerlach.CAM>
Co-authored-by: Géry Ogam <gery.ogam@gmail.com>
We actually don't move PyImport_Inittab. Instead, we make a copy that we keep on _PyRuntimeState and use only that after Py_Initialize(). We also prevent folks from modifying PyImport_Inittab (the best we can) after that point.
https://github.com/python/cpython/issues/81057
The global allocators were stored in 3 static global variables: _PyMem_Raw, _PyMem, and _PyObject. State for the "small block" allocator was stored in another 13. That makes a total of 16 global variables. We are moving all 16 to the _PyRuntimeState struct as part of the work for gh-81057. (If PEP 684 is accepted then we will follow up by moving them all to PyInterpreterState.)
https://github.com/python/cpython/issues/81057
As we consolidate global variables, we find some objects that are almost suitable to add to _PyRuntimeState.global_objects, but have some small/sneaky bit of per-interpreter state (e.g. a weakref list). We're adding PyInterpreterState.static_objects so we can move such objects there. (We'll remove the _not_used field once we've added others.)
https://github.com/python/cpython/issues/81057
Up until now we had a single generated initializer macro for all the statically declared global objects in _PyRuntimeState, including several one-offs (e.g. the empty tuple). The one-offs don't need to be generated, but were because we had one big initializer. Having separate initializers for each set of generated global objects allows us to generate only the ones we need to. This allows us to add initializers for one-off global objects without having to generate them.
https://github.com/python/cpython/issues/81057
* Adds EXIT_INTERPRETER instruction to exit _PyEval_EvalFrameDefault()
* Simplifies RETURN_VALUE, YIELD_VALUE and RETURN_GENERATOR instructions as they no longer need to check for entry frames.
Add _PyStaticObject_CheckRefcnt() function to make
_PyStaticObjects_CheckRefcnt() shorter. Use
_PyObject_ASSERT_FAILED_MSG() to log the object causing the fatal
error.
We do the following:
* move the generated _PyUnicode_InitStaticStrings() to its own file
* move the generated _PyStaticObjects_CheckRefcnt() to its own file
* include pycore_global_objects.h in extension modules instead of pycore_runtime_init.h
These changes help us avoid including things that aren't needed.
https://github.com/python/cpython/issues/90868
Change FOR_ITER to have the same stack effect regardless of whether it branches or not.
Performance is unchanged, as FOR_ITER (and its specialized forms) jump over the cleanup code.
(see https://github.com/python/cpython/issues/98608)
This change does the following:
1. change the argument to a new `_PyInterpreterConfig` struct
2. rename the function to `_Py_NewInterpreterFromConfig()`, inspired by `Py_InitializeFromConfig()` (takes a `_PyInterpreterConfig` instead of `isolated_subinterpreter`)
3. split up the boolean `isolated_subinterpreter` into the corresponding multiple granular settings
* allow_fork
* allow_subprocess
* allow_threads
4. add `PyInterpreterState.feature_flags` to store those settings
5. add a function for checking if a feature is enabled on an opaque `PyInterpreterState *`
6. drop `PyConfig._isolated_interpreter`
The existing default (see `Py_NewInterpreter()` and `Py_Initialize*()`) allows fork, subprocess, and threads and the optional "isolated" interpreter (see the `_xxsubinterpreters` module) disables all three. None of that changes here; the defaults are preserved.
Note that the given `_PyInterpreterConfig` will not be used outside `_Py_NewInterpreterFromConfig()`, nor preserved. This contrasts with how `PyConfig` is currently preserved, used, and even modified outside `Py_InitializeFromConfig()`. I'd rather just avoid that mess from the start for `_PyInterpreterConfig`. We can preserve it later if we find an actual need.
This change allows us to follow up with a number of improvements (e.g. stop disallowing subprocess and support disallowing exec instead).
(Note that this PR adds "private" symbols. We'll probably make them public, and add docs, in a separate change.)
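A rough sketch of what creating an "isolated" interpreter with the new config could look like. The header location, the field spellings (taken from the list above), and the exact return type of the private function are assumptions in this sketch, not a verified API description:

```c
#include <Python.h>
/* Internal-only header (requires Py_BUILD_CORE); the exact location of the
   _PyInterpreterConfig declaration is an assumption here. */
#include "pycore_interp.h"

static PyThreadState *
new_isolated_interpreter(void)
{
    _PyInterpreterConfig config = {
        .allow_fork = 0,
        .allow_subprocess = 0,
        .allow_threads = 0,
    };
    /* Assumed to behave like Py_NewInterpreter(): returns the new
       interpreter's PyThreadState*, or NULL on failure. */
    return _Py_NewInterpreterFromConfig(&config);
}
```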
Added os.setns and os.unshare to easily switch between namespaces
on Linux.
Co-authored-by: Christian Heimes <christian@python.org>
Co-authored-by: CAM Gerlach <CAM.Gerlach@Gerlach.CAM>
Co-authored-by: Victor Stinner <vstinner@python.org>
_Py_block_ty defines four types of block: FunctionBlock, ClassBlock, ModuleBlock and AnnotationBlock.
But the comment on _symtable_entry.ste_type only mentions three of them; it's better to keep both sides consistent.
The integer string conversion length limit had to live as a global outside of PyConfig for stable ABI reasons in the pre-3.12 backports.
This removes the `_Py_global_config_int_max_str_digits` global and gets rid of the equivalent field in the internal `struct _is` (PyInterpreterState), as code can just use the existing nested config struct within that.
Adds tests to verify unique settings and configs in subinterpreters.
Converting a large enough `int` to a decimal string raises `ValueError` as expected. However, the raise comes _after_ the quadratic-time base-conversion algorithm has run to completion. For effective DOS prevention, we need some kind of check before entering the quadratic-time loop. Oops! =)
The quick fix: essentially we catch _most_ values that exceed the threshold up front. Those that slip through will still be on the small side (read: sufficiently fast), and will get caught by the existing check so that the limit remains exact.
The justification for the current check is as follows. The C code check is:
```c
max_str_digits / (3 * PyLong_SHIFT) <= (size_a - 11) / 10
```
In GitHub markdown math-speak, writing $M$ for `max_str_digits`, $L$ for `PyLong_SHIFT` and $s$ for `size_a`, that check is:
$$\left\lfloor\frac{M}{3L}\right\rfloor \le \left\lfloor\frac{s - 11}{10}\right\rfloor$$
From this it follows that
$$\frac{M}{3L} < \frac{s-1}{10}$$
hence that
$$\frac{L(s-1)}{M} > \frac{10}{3} > \log_2(10).$$
So
$$2^{L(s-1)} > 10^M.$$
But our input integer $a$ satisfies $|a| \ge 2^{L(s-1)}$, so $|a|$ is larger than $10^M$. This shows that we don't accidentally capture anything _below_ the intended limit in the check.
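As a concrete sanity check (assuming the usual default limit of 4300 digits and `PyLong_SHIFT` equal to 30 on 64-bit builds): with $M = 4300$ and $L = 30$,
$$\left\lfloor\frac{4300}{90}\right\rfloor = 47,$$
so the early check first fires when $\lfloor (s - 11)/10 \rfloor \ge 47$, i.e. at $s = 481$ internal digits. Such an integer is at least $2^{30 \cdot 480} \approx 10^{4334}$, comfortably above the $10^{4300}$ limit; anything smaller slips through to the exact check described above.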
Issue: gh-95778
Co-authored-by: Gregory P. Smith [Google LLC] <greg@krypto.org>
Integer to and from text conversions via CPython's bignum `int` type are not safe against denial of service attacks due to malicious input. Very large input strings with hundreds of thousands of digits can consume several CPU seconds.
This PR comes fresh from a pile of work done in our private PSRT security response team repo.
Signed-off-by: Christian Heimes [Red Hat] <christian@python.org>
Tons-of-polishing-up-by: Gregory P. Smith [Google] <greg@krypto.org>
Reviews were done by many others via the private PSRT repo (see the NEWS entry in the PR).
Issue: gh-95778
I wrote up [a one pager for the release managers](https://docs.google.com/document/d/1KjuF_aXlzPUxTK4BMgezGJ2Pn7uevfX7g0_mvgHlL7Y/edit#). Much of that text wound up in the Issue. Backport PRs already exist. See the issue for links.
⚠️⚠️ Note for reviewers, hackers and fellow systems/low-level/compiler engineers ⚠️⚠️
If you have a lot of experience with this kind of shenanigans and want to improve the **first** version, **please make a PR against my branch** or **reach out by email** or **suggest code changes directly on GitHub**.
If you have any **refinements or optimizations** please, wait until the first version is merged before starting hacking or proposing those so we can keep this PR productive.
We only statically initialize for core code and builtin modules. Extension modules still create
the tuple at runtime. We'll solve that part of interpreter isolation separately.
This change includes generated code. The non-generated changes are in:
* Tools/clinic/clinic.py
* Python/getargs.c
* Include/cpython/modsupport.h
* Makefile.pre.in (re-generate global strings after running clinic)
* very minor tweaks to Modules/_codecsmodule.c and Python/Python-tokenize.c
All other changes are generated code (clinic, global strings).
* Store tp_weaklist on the interpreter state for static builtin types.
* Factor out _PyStaticType_GET_WEAKREFS_LISTPTR().
* Add _PyStaticType_ClearWeakRefs().
* Add a comment about how _PyStaticType_ClearWeakRefs() loops.
* Document the change.
* Update Doc/whatsnew/3.12.rst
* Fix a typo.
This is the last precursor to storing tp_subclasses (and tp_weaklist) on the interpreter state for static builtin types.
Here we add per-type storage on PyInterpreterState, but only for the static builtin types. This involves the following:
* add PyInterpreterState.types
* move PyInterpreterState.type_cache to it
* add a "num_builtins_initialized" field
* add a "builtins" field (a static array big enough for all the static builtin types)
* add _PyStaticType_GetState() to look up a static builtin type's state
* (temporarily) add PyTypeObject.tp_static_builtin_index (to hold the type's index into PyInterpreterState.types.builtins)
We will be eliminating tp_static_builtin_index in a later change.
* Add _Py_memory_repeat function to pycore_list (a sketch of the doubling-copy idea follows below)
* Add _Py_RefcntAdd function to pycore_object
* Use the new functions in tuplerepeat, list_repeat, and list_inplace_repeat
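A sketch of the doubling-copy idea behind _Py_memory_repeat(); the exact signature in pycore_list.h may differ, this only illustrates the technique:

```c
#include <assert.h>
#include <string.h>
#include <Python.h>   /* Py_ssize_t, Py_MIN */

/* Fill dest[0..len_dest) by repeating its first len_src bytes, doubling the
   already-copied region each iteration instead of copying len_src at a time. */
static void
memory_repeat_sketch(char *dest, Py_ssize_t len_dest, Py_ssize_t len_src)
{
    assert(0 < len_src && len_src <= len_dest);
    Py_ssize_t copied = len_src;   /* dest already holds one copy of the pattern */
    while (copied < len_dest) {
        Py_ssize_t n = Py_MIN(copied, len_dest - copied);
        memcpy(dest + copied, dest, n);
        copied += n;
    }
}
```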
This is the first of several precursors to storing tp_subclasses (and tp_weaklist) on the interpreter state for static builtin types.
We do the following:
* add `_PyStaticType_InitBuiltin()`
* add `_Py_TPFLAGS_STATIC_BUILTIN`
* set it on all static builtin types in `_PyStaticType_InitBuiltin()`
* shuffle some code around to be able to use _PyStaticType_InitBuiltin()
* rename `_PyStructSequence_InitType()` to `_PyStructSequence_InitBuiltinWithFlags()`
* add `_PyStructSequence_InitBuiltin()`.
The new helper combines PyImport_ImportModule() and PyObject_GetAttrString()
and saves 4-6 lines of code on every use (see the sketch below).
Also add _PyImport_GetModuleAttr(), which takes Python strings as arguments.
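Before/after sketch of the pattern this replaces; the C-string-taking helper is named _PyImport_GetModuleAttrString() in CPython's internal import header:

```c
#include <Python.h>

/* Before: two calls plus intermediate reference management. */
static PyObject *
get_dumps_old(void)
{
    PyObject *mod = PyImport_ImportModule("json");
    if (mod == NULL) {
        return NULL;
    }
    PyObject *func = PyObject_GetAttrString(mod, "dumps");
    Py_DECREF(mod);
    return func;
}

/* After: one call to the new helper (internal API, requires Py_BUILD_CORE). */
static PyObject *
get_dumps_new(void)
{
    return _PyImport_GetModuleAttrString("json", "dumps");
}
```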
This was added for bpo-40514 (gh-84694) to test out a per-interpreter GIL. However, it has since proven unnecessary to keep the experiment in the repo. (It can be done as a branch in a fork like normal.) So here we are removing:
* the configure option
* the macro
* the code enabled by the macro
Also while there, clarify a few things about why we reduce the hash to 32 bits.
Co-authored-by: Eli Libman <eli@hyro.ai>
Co-authored-by: Yury Selivanov <yury@edgedb.com>
Co-authored-by: Łukasz Langa <lukasz@langa.pl>
Remove the token.h header file. There was never any public tokenizer
C API. The token.h header file was only designed to be used by Python
internals.
Move Include/token.h to Include/internal/pycore_token.h. Including
this header file now requires that the Py_BUILD_CORE macro is
defined. It no longer checks for the Py_LIMITED_API macro.
Rename functions:
* PyToken_OneChar() => _PyToken_OneChar()
* PyToken_TwoChars() => _PyToken_TwoChars()
* PyToken_ThreeChars() => _PyToken_ThreeChars()
Convert the following macros to static inline functions:
* _Py_AS_GC()
* _PyGCHead_FINALIZED(), _PyGCHead_SET_FINALIZED()
* _PyGCHead_NEXT(), _PyGCHead_SET_NEXT()
* _PyGCHead_PREV(), _PyGCHead_SET_PREV()
* _PyGC_FINALIZED(), _PyGC_SET_FINALIZED()
* _PyObject_GC_IS_TRACKED()
* _PyObject_GC_MAY_BE_TRACKED()
Add a macro wrapping the _PyObject_GC_IS_TRACKED() function to cast
the argument to PyObject*.
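The cast-wrapping pattern from the last bullet, roughly as it appears in the internal GC header (simplified here; it relies on pycore_gc.h types such as PyGC_Head and _Py_AS_GC()):

```c
/* Static inline function with a precise parameter type... */
static inline int
_PyObject_GC_IS_TRACKED(PyObject *op)
{
    PyGC_Head *gc = _Py_AS_GC(op);
    return (gc->_gc_next != 0);
}
/* ...plus a same-named macro that casts the argument, so existing callers
   passing e.g. PyTypeObject* keep compiling.  The self-reference is safe:
   a function-like macro is not re-expanded inside its own expansion. */
#define _PyObject_GC_IS_TRACKED(op) _PyObject_GC_IS_TRACKED(_Py_CAST(PyObject*, op))
```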
Currently, calling Py_EnterRecursiveCall() and
Py_LeaveRecursiveCall() may use a function call or a static inline
function call, depending on whether the internal pycore_ceval.h header file
is included or not. Use a different name for the static inline
function to ensure that the static inline function is always used in
Python internals for best performance. This is the same approach as
PyThreadState_GET() (function call) versus _PyThreadState_GET() (static
inline function).
* Rename _Py_EnterRecursiveCall() to _Py_EnterRecursiveCallTstate()
* Rename _Py_LeaveRecursiveCall() to _Py_LeaveRecursiveCallTstate()
* pycore_ceval.h: Rename Py_EnterRecursiveCall() to
  _Py_EnterRecursiveCall() and Py_LeaveRecursiveCall() to
  _Py_LeaveRecursiveCall()
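Minimal sketch of the core-internal usage (requires Py_BUILD_CORE and the internal headers; the recursive helper is hypothetical):

```c
#include "Python.h"
#include "pycore_ceval.h"     // _Py_EnterRecursiveCallTstate()
#include "pycore_pystate.h"   // _PyThreadState_GET()

PyObject *do_something_recursive(PyObject *arg);   /* hypothetical helper */

static PyObject *
recurse_sketch(PyObject *arg)
{
    PyThreadState *tstate = _PyThreadState_GET();
    if (_Py_EnterRecursiveCallTstate(tstate, " while sketching")) {
        return NULL;           /* RecursionError is already set */
    }
    PyObject *result = do_something_recursive(arg);
    _Py_LeaveRecursiveCallTstate(tstate);
    return result;
}
```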
When Python is built with "./configure --enable-pystats" (if the
Py_STATS macro is defined), the _Py_GetSpecializationStats() function
must be exported, since it's used by the _opcode extension which is
built as a shared library.
Move the following API from Include/opcode.h (public C API) to a new
Include/internal/pycore_opcode.h header file (internal C API):
* EXTRA_CASES
* _PyOpcode_Caches
* _PyOpcode_Deopt
* _PyOpcode_Jump
* _PyOpcode_OpName
* _PyOpcode_RelativeJump
Fix signal.NSIG value on FreeBSD to accept signal numbers greater
than 32, like signal.SIGRTMIN and signal.SIGRTMAX.
* Add Py_NSIG constant.
* Add pycore_signal.h internal header file.
* _Py_Sigset_Converter() now includes the range of valid signals in
the error message.
The argument type of the Py_REFCNT(), Py_TYPE(), Py_SIZE() and Py_IS_TYPE()
functions is now "PyObject*", rather than "const PyObject*".
* Replace also "const PyObject*" with "PyObject*" in functions:
* _Py_strhex_impl()
* _Py_strhex_with_sep()
* _Py_strhex_bytes_with_sep()
* Remove _PyObject_CAST_CONST() and _PyVarObject_CAST_CONST() macros.
* Py_IS_TYPE() can now use Py_TYPE() in its implementation.
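The last bullet amounts to the following inline definition (the name is suffixed here to mark it as a stand-alone sketch):

```c
#include <Python.h>

/* With plain "PyObject*" arguments there is no const to cast away,
   so Py_IS_TYPE() can be written directly in terms of Py_TYPE(). */
static inline int
Py_IS_TYPE_sketch(PyObject *ob, PyTypeObject *type)
{
    return Py_TYPE(ob) == type;
}
```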
* Stores all location info in linetable to conform to PEP 626.
* Remove column table from code objects.
* Remove end-line table from code objects.
* Document new location table format
* Revert "bpo-46850: Move _PyInterpreterState_SetEvalFrameFunc() to internal C API (GH-32054)"
This reverts commit f877b40e3f.
* Revert "bpo-46850: Move _PyEval_EvalFrameDefault() to internal C API (GH-32052)"
This reverts commit b9a5522dd9.
The fact that interpreter frames were split out from full frame objects
rather than always being part of the eval loop implementation means
that it's tricky to infer the expected naming conventions simply
from looking at the code.
Documenting the de facto conventions in pycore_frame.h means future
readers of the code will have a clear explanation of the rationale
for those conventions (i.e. minimising non-functional code churn).
Move the private _PyFrameEvalFunction type, and private
_PyInterpreterState_GetEvalFrameFunc() and
_PyInterpreterState_SetEvalFrameFunc() functions to the internal C
API. The _PyFrameEvalFunction callback function type now uses the
_PyInterpreterFrame type which is part of the internal C API.
Update the _PyFrameEvalFunction documentation.
Move the private undocumented _PyEval_EvalFrameDefault() function to
the internal C API. The function now uses the _PyInterpreterFrame
type which is part of the internal C API.