This is the implementation of PEP 683.
Motivation:
The PR introduces the ability to immortalize instances in CPython, which bypasses reference counting. Tagging objects as immortal allows us to skip certain operations when we know that the object will be around for the entire execution of the runtime.
Note that this by itself will bring a performance regression to the runtime due to the extra reference count checks. However, it brings the ability to have truly immutable objects, which are useful in other contexts such as immutable data sharing between sub-interpreters.
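As a rough illustration of the mechanism (a hedged sketch, not the exact implementation; the helper and sentinel spellings are assumptions), the incref fast path gains a check like this:

```
/* Hedged sketch of an immortality check in the incref path; the macro and
   constant names mirror the PEP 683 design but may not be the exact
   CPython spellings. */
#include <Python.h>

#ifndef _Py_IMMORTAL_REFCNT
#define _Py_IMMORTAL_REFCNT ((Py_ssize_t)0x3FFFFFFF)  /* assumed sentinel */
#endif

static inline int
sketch_is_immortal(PyObject *op)
{
    /* Immortal objects carry a fixed, saturated refcount. */
    return Py_REFCNT(op) == _Py_IMMORTAL_REFCNT;
}

static inline void
sketch_incref(PyObject *op)
{
    if (sketch_is_immortal(op)) {
        return;  /* skip the write; this extra branch is the regression noted above */
    }
    Py_INCREF(op);
}
```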
* The majority of the monitoring code is in instrumentation.c
* The new instrumentation bytecodes are in bytecodes.c
* legacy_tracing.c adapts the new API to the old sys.settrace and sys.setprofile APIs
The function is like Py_AtExit() but for a single interpreter. This is a companion to the atexit module's register() function, taking a C callback instead of a Python one.
We also update the _xxinterpchannels module to use _Py_AtExit(), which is the motivating case. (This is inspired by pain points felt while working on gh-101660.)
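A hedged usage sketch for such a hook (the exact _Py_AtExit() signature is an assumption here: a per-interpreter C callback plus an opaque data pointer):

```
/* Sketch only: assumes _Py_AtExit(interp, callback, data), declared in the
   internal headers, mirroring Py_AtExit() but scoped to one interpreter. */
#include <stdlib.h>
#include <Python.h>

static void
channels_finalizing(void *data)
{
    /* Release module-owned resources before this interpreter goes away. */
    free(data);
}

static int
register_cleanup(PyInterpreterState *interp, void *state)
{
    return _Py_AtExit(interp, channels_finalizing, state);
}
```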
In gh-102744 we added is_core_module() (in Python/import.c), which relies on get_core_module_dict() (also added in that PR). The problem is that _PyImport_FixupBuiltin(), which ultimately calls is_core_module(), is called on the builtins module before interp->builtins_copy is set. Consequently, the builtins module isn't considered a "core" module while it is getting "fixed up", and its module def's m_copy erroneously gets set. Under isolated interpreters this causes problems, since sys and builtins are allowed even though they are still single-phase init modules. (This was discovered while working on gh-101660.)
The solution is to stop relying on get_core_module_dict() in is_core_module().
Sharing mutable (or non-immortal) objects between interpreters is generally not safe. We can work around that but not easily.
There are two restrictions that are critical for objects that break interpreter isolation.
The first is that the object's state be guarded by a global lock. For now the GIL meets this requirement, but a granular global lock is needed once we have a per-interpreter GIL.
The second restriction is that the object (and, for a container, its items) be deallocated/resized only when the interpreter in which it was allocated is the current one. This is because every interpreter has (or will have, see gh-101660) its own object allocator. Deallocating an object with a different allocator can cause crashes.
The dict for the cache of module defs is completely internal, which simplifies what we have to do to meet those requirements. To do so, we do the following:
* add a mechanism for re-using a temporary thread state tied to the main interpreter in an arbitrary thread
* add _PyRuntime.imports.extensions.main_tstate
* add _PyThreadState_InitDetached() and _PyThreadState_ClearDetached() (pystate.c)
* add _PyThreadState_BindDetached() and _PyThreadState_UnbindDetached() (pystate.c)
* make sure the cache dict (_PyRuntime.imports.extensions.dict) and its items are all owned by the main interpreter
* add a placeholder for using a granular global lock
Note that the cache is only used for legacy extension modules and not for multi-phase init modules.
https://github.com/python/cpython/issues/100227
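Roughly, the intended usage pattern for the detached thread state looks like this (only the function names come from the list above; signatures and the cache accessor are assumptions):

```
/* Hedged sketch: temporarily run in the main interpreter from an arbitrary
   thread so that the cache dict and its items are allocated there. */
#include <Python.h>

static int
cache_module_def(PyThreadState *main_tstate, PyObject *key, PyObject *value)
{
    /* Step out of whatever interpreter is current... */
    PyThreadState *save_tstate = PyThreadState_Swap(NULL);

    /* ...and bind the pre-initialized main-interpreter tstate to this OS
       thread (assumed to make it current as well). */
    _PyThreadState_BindDetached(main_tstate);

    PyObject *cache = get_cached_module_defs();   /* hypothetical accessor */
    int res = PyDict_SetItem(cache, key, value);

    _PyThreadState_UnbindDetached(main_tstate);
    PyThreadState_Swap(save_tstate);
    return res;
}
```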
This reverts commit 87be8d9.
This approach to keeping the interned strings safe is turning out to be too complex for my taste (due to obmalloc isolation). For now I'm going with the simpler solution, making the dict per-interpreter. We can revisit that later if we want a sharing solution.
This is effectively two changes. The first (the bulk of the change) is where we add _Py_AddToGlobalDict() (and _PyRuntime.cached_objects.main_tstate, etc.). The second (much smaller) change is where we update PyUnicode_InternInPlace() to use _Py_AddToGlobalDict() instead of calling PyDict_SetDefault() directly.
Basically, _Py_AddToGlobalDict() is a wrapper around PyDict_SetDefault() that should be used whenever we need to add a value to a runtime-global dict object (in the few cases where we are leaving the container global rather than moving it to PyInterpreterState, e.g. the interned strings dict). _Py_AddToGlobalDict() does all the necessary work to make sure the target global dict is shared safely between isolated interpreters. This is especially important as we move the obmalloc state to each interpreter (gh-101660), as well as, potentially, the GIL (PEP 684).
https://github.com/python/cpython/issues/100227
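Conceptually, the helper is a thin wrapper around PyDict_SetDefault() plus the cross-interpreter safety steps (an approximation, not the real implementation):

```
/* Approximate shape of _Py_AddToGlobalDict(); illustrative only. */
#include <Python.h>

static PyObject *
add_to_global_dict_sketch(PyObject *dict, PyObject *key, PyObject *value)
{
    /* 1. Switch to a thread state owned by the main interpreter, so the
          dict and its items all use one allocator.
       2. Acquire the global lock (the GIL today; a granular lock later).
       3. Do the insertion itself. */
    PyObject *actual = PyDict_SetDefault(dict, key, value);  /* borrowed */
    Py_XINCREF(actual);
    /* 4. Restore the original thread state before returning. */
    return actual;  /* new reference in this sketch, NULL on error */
}
```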
* Eliminate all remaining uses of Py_SIZE and Py_SET_SIZE on PyLongObject, adding asserts.
* Change layout of size/sign bits in longobject to support future addition of immortal ints and tagged medium ints.
* Add functions to hide some internals of long object, and for setting sign and digit count.
* Replace uses of IS_MEDIUM_VALUE macro with _PyLong_IsCompact().
Aside from sys and builtins, _io is the only core builtin module that hasn't been ported to multi-phase init. We may do so later (e.g. gh-101948), but in the meantime we must at least take care of the module's static types properly. (This came up while working on gh-101660.)
https://github.com/python/cpython/issues/94673
The error-handling code in new_interpreter() has been broken for a while. We hadn't noticed because that code mostly doesn't fail. (I noticed while working on gh-101660.) The problem is that we try to clear/delete the newly created thread/interpreter using itself, which just fails. The solution is to switch back to the calling thread state first.
https://github.com/python/cpython/issues/98608
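The corrected error path follows the usual teardown order, roughly (a sketch, not the exact diff):

```
/* Sketch of the fixed error path: make the caller's thread state current
   again before clearing/deleting the half-created interpreter. */
#include <Python.h>

static PyStatus
teardown_failed_interpreter(PyThreadState *new_tstate, PyThreadState *save_tstate)
{
    PyInterpreterState *interp = PyThreadState_GetInterpreter(new_tstate);

    /* Don't try to tear down the new tstate while it is still current. */
    PyThreadState_Swap(save_tstate);

    PyThreadState_Clear(new_tstate);
    PyThreadState_Delete(new_tstate);
    PyInterpreterState_Delete(interp);

    return PyStatus_Error("interpreter creation failed");
}
```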
Moving it is valuable with a per-interpreter GIL. However, it is also useful without one, since it allows us to identify refleaks within a single interpreter, or cases where references are escaping an interpreter. This becomes more important as we move the obmalloc state to PyInterpreterState.
https://github.com/python/cpython/issues/102304
Prior to this change, errors in _Py_NewInterpreterFromConfig() were always fatal. Instead, callers should be able to handle such errors and keep going. That's what this change supports. (This was an oversight in the original implementation of _Py_NewInterpreterFromConfig().) Note that the existing [fatal] behavior of the public Py_NewInterpreter() is preserved.
https://github.com/python/cpython/issues/98608
This essentially eliminates the global variable, with the associated benefits. It is also a precursor to isolating this bit of state to PyInterpreterState.
Folks that currently read _Py_RefTotal directly would have to start using _Py_GetGlobalRefTotal() instead.
https://github.com/python/cpython/issues/102304
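For code that read the variable directly, the migration is mechanical (hedged example; _Py_GetGlobalRefTotal() is declared in the internal headers):

```
/* Hedged migration example for Py_REF_DEBUG builds. */
#include <stdio.h>
#include <Python.h>

#ifdef Py_REF_DEBUG
static void
report_reftotal(void)
{
    /* Before: Py_ssize_t total = _Py_RefTotal; */
    Py_ssize_t total = _Py_GetGlobalRefTotal();
    fprintf(stderr, "total refs: %zd\n", total);
}
#endif
```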
This deprecates `st_ctime` fields on Windows, with the intent to change them to contain the correct value in 3.14. For now, they should keep returning the creation time as they always have.
This behavior is optional, because in some extreme cases it
may just make debugging harder. The tool defaults it to off,
but it is on in Makefile.pre.in.
Also note that this makes diffs to generated_cases.c.h noisier,
since whenever you insert or delete a line in bytecodes.c,
all subsequent #line directives will change.
It doesn't make sense to use multi-phase init for these modules. Using a per-interpreter "m_copy" (instead of PyModuleDef.m_base.m_copy) makes this work okay. (This came up while working on gh-101660.)
Note that we might instead end up disallowing re-load for sys/builtins since they are so special.
https://github.com/python/cpython/issues/102660
* Rename local variables, names and consts, from the interpreter loop. This will allow non-code objects in frames, for better introspection of C builtins and extensions.
* Remove unused dummy variables.
Add `MS_WINDOWS_DESKTOP`, `MS_WINDOWS_APPS`, `MS_WINDOWS_SYSTEM` and `MS_WINDOWS_GAMES` preprocessor definitions to allow switching off functionality missing from particular API partitions ("partitions" are used in Windows to identify overlapping subsets of APIs).
CPython only officially supports `MS_WINDOWS_DESKTOP` and `MS_WINDOWS_SYSTEM` (APPS is included by normal desktop builds, but APPS without DESKTOP is not covered). Other configurations are a convenience for people building their own runtimes.
`MS_WINDOWS_GAMES` is for the Xbox subset of the Windows API, which is also available on client OS, but is restricted compared to `MS_WINDOWS_DESKTOP`. These restrictions may change over time, as they relate to the build headers rather than the OS support, and so we assume that Xbox builds will use the latest available version of the GDK.
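Code that needs a partition-specific API can then be guarded like this (illustrative; the guarded call is just an example of a desktop-only API, not a change in this PR):

```
/* Illustrative use of the new partition macros. */
#include <windows.h>

static int
maybe_get_console_title(wchar_t *buf, DWORD size)
{
#if defined(MS_WINDOWS_DESKTOP)
    /* Console APIs are available in the desktop partition. */
    return GetConsoleTitleW(buf, size) != 0;
#else
    /* Not available on, e.g., MS_WINDOWS_GAMES builds. */
    return 0;
#endif
}
```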
Specific changes:
* move the import lock to PyInterpreterState
* move the "find_and_load" diagnostic state to PyInterpreterState
Note that the import lock exists to keep multiple imports of the same module in the same interpreter (but in different threads) from stomping on each other. Independently, we use a distinct global lock to protect globally shared import state, especially related to loaded extension modules. For now we can rely on the GIL as that lock but with a per-interpreter GIL we'll need a new global lock.
The remaining state in _PyRuntimeState.imports will (probably) continue being global.
https://github.com/python/cpython/issues/100227
Some incompatible changes had gone in, and the "ignore" lists weren't properly updated. This change fixes that. It's necessary prior to enabling test_check_c_globals, which I hope to do soon.
Note that this does include moving last_resort_memory_error to PyInterpreterState.
https://github.com/python/cpython/issues/90110
This is related to fixing the refleaks introduced by commit 096d009. I haven't been able to find the leak yet, but these changes are a consequence of that effort. This includes some cleanup, some tweaks to the existing tests, and a bunch of new test cases. The only change here that might have impact outside the tests in question is in imp.py, where I update imp.load_dynamic() to use spec_from_file_location() instead of creating a ModuleSpec directly.
Also note that I've updated the tests to only skip if we're checking for refleaks (regrtest's --huntrleaks), whereas in gh-101969 I had skipped the tests entirely. The tests will be useful for some upcoming work and I'd rather the refleaks not hold that up. (It isn't clear how quickly we'll be able to fix the leaking code, though it will certainly be done in the short term.)
https://github.com/python/cpython/issues/102251
We're adding the function back, but only as a stable ABI symbol, not as any form of API. (I had removed it yesterday.)
This undocumented "private" function was added with the implementation for PEP 3121 (3.0, 2007) for internal use and later moved out of the limited API (3.6, 2016) and then into the internal API (3.9, 2019). I removed it completely yesterday, including from the stable ABI manifest (where it was added because the symbol happened to be exported). It's unlikely that anyone is using _PyState_AddModule(), especially any stable ABI extensions built against 3.2-3.5, but we're playing it safe.
https://github.com/python/cpython/issues/101758
* fileutils: handle non-blocking pipe IO on Windows
Handle failing operations on non-blocking pipes by reading the _doserrno code.
Limit the size of writes to non-blocking pipes when they are too large.
* Support blocking functions on Windows
Use the GetNamedPipeHandleState and SetNamedPipeHandleState Win32 API functions to add support for os.get_blocking and os.set_blocking.
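Under the hood, toggling blocking mode on a pipe handle amounts to flipping PIPE_NOWAIT, roughly like this (a sketch of the Win32 calls involved, not the exact CPython code):

```
/* Sketch of the Win32 calls behind os.set_blocking() on a pipe handle. */
#include <windows.h>

static BOOL
set_pipe_blocking(HANDLE pipe, BOOL blocking)
{
    DWORD mode;
    if (!GetNamedPipeHandleStateW(pipe, &mode, NULL, NULL, NULL, NULL, 0)) {
        return FALSE;
    }
    if (blocking) {
        mode &= ~PIPE_NOWAIT;   /* PIPE_WAIT is 0, so clearing NOWAIT suffices */
    }
    else {
        mode |= PIPE_NOWAIT;
    }
    return SetNamedPipeHandleState(pipe, &mode, NULL, NULL);
}
```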
This merges their code. They're backed by the same single HACL* static library, so having them be a single module simplifies maintenance.
This should unbreak the WASM Emscripten builds that currently fail due to linking in --whole-archive mode and the HACL* library appearing twice.
A long-unnoticed error is also fixed: _sha512.SHA384Type was doubly assigned and was actually SHA512Type. Nobody depends on those internal names.
Also rename the LIBHACL_ make vars to LIBHACL_SHA2_ in preparation for other future HACL* things.
Enforcing (optionally) the restriction set by PEP 489 makes sense. Furthermore, this sets the stage for a potential restriction related to a per-interpreter GIL.
This change includes the following:
* add tests for extension module subinterpreter compatibility
* add _PyInterpreterConfig.check_multi_interp_extensions
* add Py_RTFLAGS_MULTI_INTERP_EXTENSIONS
* add _PyImport_CheckSubinterpIncompatibleExtensionAllowed()
* fail iff the module does not implement multi-phase init and the current interpreter is configured to check
https://github.com/python/cpython/issues/98627
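Conceptually, the single-phase-init import path now does something like the following (a simplified sketch; the real code goes through _PyImport_CheckSubinterpIncompatibleExtensionAllowed() and the internal headers):

```
/* Simplified sketch of the new check when a single-phase init module is
   about to be loaded. */
static int
check_single_phase_allowed(PyThreadState *tstate, const char *name)
{
    PyInterpreterState *interp = tstate->interp;
    if (interp->feature_flags & Py_RTFLAGS_MULTI_INTERP_EXTENSIONS) {
        /* This interpreter is configured to enforce PEP 489. */
        PyErr_Format(PyExc_ImportError,
                     "module %s does not support loading in subinterpreters",
                     name);
        return -1;
    }
    return 0;
}
```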
This change is almost entirely moving code around and hiding import state behind internal API. We introduce no changes to behavior, nor to non-internal API. (Since there was already going to be a lot of churn, I took this as an opportunity to re-organize import.c into topically-grouped sections of code.) The motivation is to simplify a number of upcoming changes.
Specific changes:
* move existing import-related code to import.c, wherever possible
* add internal API for interacting with import state (both global and per-interpreter)
* use only API outside of import.c (to limit churn there when changing the location, etc.)
* consolidate the import-related state of PyInterpreterState into a single struct field (this changes layout slightly)
* add macros for import state in import.c (to simplify changing the location)
* group code in import.c into sections
* remove _PyState_AddModule()
https://github.com/python/cpython/issues/101758
The new test exercises the most important variants for single-phase init extension modules. We also add some explanation about those variants to import.c.
https://github.com/python/cpython/issues/101758
* Write output and metadata in a single run
This halves the time to run the cases generator
(most of the time goes into parsing the input).
* Declare or define opcode metadata based on NEED_OPCODE_TABLES
* Use generated metadata for stack_effect()
* compile.o depends on opcode_metadata.h
* Return -1 from _PyOpcode_num_popped/pushed for unknown opcode
* Make sure that the current exception is always normalized.
* Remove redundant type and traceback fields for the current exception.
* Add new API functions: PyErr_GetRaisedException, PyErr_SetRaisedException
* Add new API functions: PyException_GetArgs, PyException_SetArgs
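Migrating from the old fetch/restore pattern looks roughly like this:

```
/* Old three-part fetch/restore vs. the new single-object API. */
#include <Python.h>

static void
inspect_and_reraise_old(void)
{
    PyObject *type, *value, *tb;
    PyErr_Fetch(&type, &value, &tb);
    PyErr_NormalizeException(&type, &value, &tb);
    /* ... inspect value ... */
    PyErr_Restore(type, value, tb);
}

static void
inspect_and_reraise_new(void)
{
    PyObject *exc = PyErr_GetRaisedException();  /* already normalized */
    /* ... inspect exc, e.g. via PyException_GetArgs(exc) ... */
    PyErr_SetRaisedException(exc);               /* steals the reference */
}
```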
New generator feature: Generate useful glue for output arrays, so you can just write values to the output array (no bounds checking). Rewrote UNPACK_SEQUENCE_TWO_TUPLE to use this, and also UNPACK_SEQUENCE_{TUPLE,LIST}.
The GILState API (PEP 311) implementation from 2003 made the assumption that only one thread state would ever be used for any given OS thread, explicitly disregarding the case of subinterpreters. However, PyThreadState_Swap() still facilitated switching between subinterpreters, meaning the "current" thread state (holding the GIL), and the GILState thread state could end up out of sync, causing problems (including crashes).
This change addresses the issue by keeping the two in sync in PyThreadState_Swap(). I verified the fix against gh-99040.
Note that the other GILState-subinterpreter incompatibility (with autoInterpreterState) is not resolved here.
https://github.com/python/cpython/issues/59956
A PyThreadState can be in one of many states in its lifecycle, represented by some status value. Those statuses haven't been particularly clear, so we're addressing that here. Specifically:
* made the distinct lifecycle statuses clear on PyThreadState
* identified expectations of how various lifecycle-related functions relate to status
* noted the various places where those expectations don't match the actual behavior
At some point we'll need to address the mismatches.
(This change also includes some cleanup.)
https://github.com/python/cpython/issues/59956
`warnings.warn()` gains the ability to skip stack frames based on code
filename prefix rather than only a numeric `stacklevel=` via a new
`skip_file_prefixes=` keyword argument.
We've factored the two trashcan-related PyThreadState fields out into a struct. This accomplishes two things:
* make it clear that the trashcan-related code doesn't need any other parts of PyThreadState
* allows us to use the trashcan mechanism even when there isn't a "current" thread state
We still expect the caller to hold the GIL.
https://github.com/python/cpython/issues/59956
This is a follow-up to gh-101161. The objective is to make it easier to read Python/pystate.c by grouping the functions there in a consistent way. This exclusively involves moving code around and adding various kinds of comments.
https://github.com/python/cpython/issues/59956
The objective of this change is to help make the GILState-related code easier to understand. This mostly involves moving code around and some semantically equivalent refactors. However, there are also a small number of slight changes in structure and behavior:
* tstate_current is moved out of _PyRuntimeState.gilstate
* autoTSSkey is moved out of _PyRuntimeState.gilstate
* autoTSSkey is initialized earlier
* autoTSSkey is re-initialized (after fork) earlier
https://github.com/python/cpython/issues/59956
You can now write things like this:
```
inst(BUILD_STRING, (pieces[oparg] -- str)) { ... }
inst(LIST_APPEND, (list, unused[oparg-1], v -- list, unused[oparg-1])) { ... }
```
Note that array output effects are only partially supported (they must be named `unused` or correspond to an input effect).
For these the instr_format field uses IX instead of IB.
Register instructions use IX, IB, IBBX, IBBB, etc.
Also: Include the closing '}' in Block.tokens, for completeness
(These aren't used yet, but may be coming soon,
and it's easier to keep this tool the same between branches.)
Added a sanity check for all this to compile.c.
Co-authored-by: Irit Katriel <iritkatriel@yahoo.com>
When executing the BUILD_LIST opcode, steal the references from the stack,
in a manner similar to the BUILD_TUPLE opcode. Implement this by offloading
the logic to a new private API, _PyList_FromArraySteal(), that works similarly
to _PyTuple_FromArraySteal().
This way, instead of performing multiple stack pointer adjustments while the
list is being initialized, the stack is adjusted only once and a fast memory
copy operation is performed in one fell swoop.
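In essence the helper does the following (a conceptual sketch, not the actual internal code):

```
/* Conceptual sketch of _PyList_FromArraySteal(): build a list of size n and
   take ownership of the n references in src with a single memcpy, without
   touching any individual refcounts. */
#include <string.h>
#include <Python.h>

static PyObject *
list_from_array_steal_sketch(PyObject *const *src, Py_ssize_t n)
{
    PyObject *list = PyList_New(n);
    if (list == NULL) {
        return NULL;  /* in this sketch the caller still owns src on failure */
    }
    memcpy(((PyListObject *)list)->ob_item, src, (size_t)n * sizeof(PyObject *));
    return list;
}
```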
The presence of this macro indicates that a particular instruction
may be considered for conversion to a register-based format
(see https://github.com/faster-cpython/ideas/issues/485).
An invariant (currently unchecked) is that `DEOPT_IF()` may only
occur *before* `DECREF_INPUTS()`, and `ERROR_IF()` may only occur
*after* it. One reason not to check this is that there are a few
places where we insert *two* `DECREF_INPUTS()` calls, in different
branches of the code. The invariant checking would have to be able
to do some flow control analysis to understand this.
Note that many instructions, especially specialized ones,
can't be converted to use this macro straightforwardly.
This is because the generator currently only generates plain
`Py_DECREF(variable)` statements, and cannot generate
things like `_Py_DECREF_SPECIALIZED()` let alone deal with
`_PyList_AppendTakeRef()`.
Stack effects can now have a type, e.g. `inst(X, (left, right -- jump/uint64_t)) { ... }`.
Instructions converted to the non-legacy format:
* COMPARE_OP
* COMPARE_OP_FLOAT_JUMP
* COMPARE_OP_INT_JUMP
* COMPARE_OP_STR_JUMP
* STORE_ATTR
* DELETE_ATTR
* STORE_GLOBAL
* STORE_ATTR_INSTANCE_VALUE
* STORE_ATTR_WITH_HINT
* STORE_ATTR_SLOT, and complete the store_attr family
* Complete the store_subscr family: STORE_SUBSCR{,DICT,LIST_INT}
(STORE_SUBSCR was already half converted,
but wasn't using cache effects yet.)
* DELETE_SUBSCR
* PRINT_EXPR
* INTERPRETER_EXIT (a bit weird, ends in return)
* RETURN_VALUE
* GET_AITER (had to restructure it some)
The original had mysterious `SET_TOP(NULL)` before `goto error`.
I assume those just account for `obj` having been decref'ed,
so I got rid of them in favor of the cleanup implied by `ERROR_IF()`.
* LIST_APPEND (a bit unhappy with it)
* SET_ADD (also a bit unhappy with it)
Various other improvements/refactorings as well.
Builtin and extension module functions and methods that expect boolean values for parameters now accept any Python object rather than just a bool or int type. This is more consistent with how native Python code itself behaves.
* Add API to allow extensions to set callback function on creation and destruction of PyCodeObject
Co-authored-by: Ye11ow-Flash <janshah@cs.stonybrook.edu>
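A hedged sketch of the callback shape (assuming the API landed as PyCode_AddWatcher() with create/destroy events; the exact names are assumptions here):

```
/* Hedged sketch of registering a code-object watcher. */
#include <Python.h>

static int
code_event_logger(PyCodeEvent event, PyCodeObject *co)
{
    if (event == PY_CODE_EVENT_CREATE) {
        /* e.g. record co in a profiler's registry */
    }
    else if (event == PY_CODE_EVENT_DESTROY) {
        /* e.g. drop co from the registry */
    }
    return 0;
}

static int
install_watcher(void)
{
    /* Returns a watcher ID (negative on error), per this sketch's assumption. */
    return PyCode_AddWatcher(code_event_logger);
}
```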
Newly supported interpreter definition syntax:
- `op(NAME, (input_stack_effects -- output_stack_effects)) { ... }`
- `macro(NAME) = OP1 + OP2;`
Also some other random improvements:
- Convert `WITH_EXCEPT_START` to use stack effects
- Fix lexer to balk at unrecognized characters, e.g. `@`
- Fix moved output names; support object pointers in cache
- Introduce `error()` method to print errors
- Introduce read_uint16(p) as equivalent to `*p`
Co-authored-by: Brandt Bucher <brandtbucher@gmail.com>
Fix potential race condition in code patterns:
* Replace "Py_DECREF(var); var = new;" with "Py_SETREF(var, new);"
* Replace "Py_XDECREF(var); var = new;" with "Py_XSETREF(var, new);"
* Replace "Py_CLEAR(var); var = new;" with "Py_XSETREF(var, new);"
Other changes:
* Replace "old = var; var = new; Py_DECREF(var)"
with "Py_SETREF(var, new);"
* Replace "old = var; var = new; Py_XDECREF(var)"
with "Py_XSETREF(var, new);"
* And remove the "old" variable.
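For example, with the documented Py_XSETREF macro:

```
/* The unsafe two-step assignment can leave self->attr pointing at freed
   memory if the DECREF runs arbitrary code (e.g. a __del__) that re-enters
   and reads the attribute. */
#include <Python.h>

typedef struct {
    PyObject_HEAD
    PyObject *attr;
} ExampleObject;

static void
set_attr_unsafe(ExampleObject *self, PyObject *new_value)
{
    Py_XDECREF(self->attr);                       /* may trigger re-entrant code */
    self->attr = Py_NewRef(new_value);
}

static void
set_attr_safe(ExampleObject *self, PyObject *new_value)
{
    Py_XSETREF(self->attr, Py_NewRef(new_value)); /* assign first, then decref old */
}
```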
Fix a number of compile errors with GCC-12 on macOS:
1. In pylifecycle.c the compiler rejects _Pragma within a declaration
2. posixmodule.c was missing a number of ..._RUNTIME macros for non-clang on macOS
3. _ctypes assumed that __builtin_available is always present on macOS
This is part of the effort to consolidate global variables, to make them easier to manage (and make it easier to later move some of them to PyInterpreterState).
https://github.com/python/cpython/issues/81057
We actually don't move PyImport_Inittab. Instead, we make a copy that we keep on _PyRuntimeState and use only that after Py_Initialize(). We also prevent folks from modifying PyImport_Inittab (as best we can) after that point.
https://github.com/python/cpython/issues/81057
The global allocators were stored in 3 static global variables: _PyMem_Raw, _PyMem, and _PyObject. State for the "small block" allocator was stored in another 13. That makes a total of 16 global variables. We are moving all 16 to the _PyRuntimeState struct as part of the work for gh-81057. (If PEP 684 is accepted then we will follow up by moving them all to PyInterpreterState.)
https://github.com/python/cpython/issues/81057
As we consolidate global variables, we find some objects that are almost suitable to add to _PyRuntimeState.global_objects, but have some small/sneaky bit of per-interpreter state (e.g. a weakref list). We're adding PyInterpreterState.static_objects so we can move such objects there. (We'll remove the _not_used field once we've added others.)
https://github.com/python/cpython/issues/81057
* Adds EXIT_INTERPRETER instruction to exit _PyEval_EvalFrameDefault()
* Simplifies RETURN_VALUE, YIELD_VALUE and RETURN_GENERATOR instructions as they no longer need to check for entry frames.
Replace Py_INCREF() and Py_XINCREF() with Py_NewRef() and
Py_XNewRef() in C files of the Python/ directory.
Update Parser/asdl_c.py to regenerate Python/Python-ast.c.
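A representative before/after:

```
/* Representative before/after for this mechanical replacement. */
#include <Python.h>

static PyObject *
get_value_old(PyObject *obj)
{
    Py_INCREF(obj);
    return obj;
}

static PyObject *
get_value_new(PyObject *obj)
{
    /* Py_NewRef() increments and returns in a single expression. */
    return Py_NewRef(obj);
}
```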
We do the following:
* move the generated _PyUnicode_InitStaticStrings() to its own file
* move the generated _PyStaticObjects_CheckRefcnt() to its own file
* include pycore_global_objects.h in extension modules instead of pycore_runtime_init.h
These changes help us avoid including things that aren't needed.
https://github.com/python/cpython/issues/90868
Remove the distutils package. It was deprecated in Python 3.10 by PEP
632 "Deprecate distutils module". For projects still using distutils
and cannot be updated to something else, the setuptools project can
be installed: it still provides distutils.
* Remove Lib/distutils/ directory
* Remove test_distutils
* Remove references to distutils
* Skip test_check_c_globals and test_peg_generator since they use
distutils
Fix use-after-free in Py_SetPythonHome(NULL), Py_SetProgramName(NULL)
and _Py_SetProgramFullPath(NULL) function calls.
Issue reported by Benedikt Reinartz.
The switch cases (really TARGET(opcode) macros) have been moved from ceval.c to generated_cases.c.h. That file is generated from instruction definitions in bytecodes.c (which impersonates a C file so the C code it contains can be edited without custom support in e.g. VS Code).
The code generator lives in Tools/cases_generator (it has a README.md explaining how it works). The DSL used to describe the instructions is a work in progress, described in https://github.com/faster-cpython/ideas/blob/main/3.12/interpreter_definition.md.
This is surely a work-in-progress. An easy next step could be auto-generating super-instructions.
**IMPORTANT: Merge Conflicts**
If you get a merge conflict for instruction implementations in ceval.c, your best bet is to port your changes to bytecodes.c. That file looks almost the same as the original cases, except instead of `TARGET(NAME)` it uses `inst(NAME)`, and the trailing `DISPATCH()` call is omitted (the code generator adds it automatically).
For wasmtime 2.0, the stack depth cost is 6% higher. This causes the default max `marshal` recursion depth to blow the stack.
As the default marshal depth is 2000 and Windows is set to 1000, split the difference and choose 1500 for WASI to be safe.
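The resulting platform-conditional cap looks roughly like this (the macro spelling is taken from marshal.c):

```
/* Platform-conditional recursion cap in marshal.c (sketch). */
#if defined(__wasi__)
#  define MAX_MARSHAL_STACK_DEPTH 1500
#elif defined(MS_WINDOWS)
#  define MAX_MARSHAL_STACK_DEPTH 1000
#else
#  define MAX_MARSHAL_STACK_DEPTH 2000
#endif
```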
This reduces confusion between jumps at the bytecode level
(e.g. JUMPTO(), JUMPBY(), and various JUMP_*() opcodes)
and jumps in the C code (which are 'goto' statements).
Previously, the optional restrictions on subinterpreters were: disallow fork, subprocess, and threads. By default, we were disallowing all three for "isolated" interpreters. We always allowed all three for the main interpreter and those created through the legacy `Py_NewInterpreter()` API.
Those settings were a bit conservative, so here we've adjusted the optional restrictions to: fork, exec, threads, and daemon threads. The default for "isolated" interpreters disables fork, exec, and daemon threads. Regular threads are allowed by default. We continue always allowing everything for the main interpreter and the legacy API.
In the code, we add `_PyInterpreterConfig.allow_exec` and `_PyInterpreterConfig.allow_daemon_threads`. We also add `Py_RTFLAGS_DAEMON_THREADS` and `Py_RTFLAGS_EXEC`.
* As most of `test_embed` now uses `Py_InitializeFromConfig`, add
a specific test case to cover `Py_Initialize` (and `Py_InitializeEx`)
* Rename `_testembed` init helper to clarify the API used
* Add a `PyConfig_Clear` call in `Py_InitializeEx` to make
the code more obviously correct (it already didn't leak as
none of the dynamically allocated config fields were being
populated, but it's clearer if the wrappers follow the
documented API usage guidelines)
Change FOR_ITER to have the same stack effect regardless of whether it branches or not.
Performance is unchanged, as FOR_ITER (and its specialized forms) jump over the cleanup code.
(see https://github.com/python/cpython/issues/98608)
This change does the following:
1. change the argument to a new `_PyInterpreterConfig` struct
2. rename the function to `_Py_NewInterpreterFromConfig()`, inspired by `Py_InitializeFromConfig()` (takes a `_PyInterpreterConfig` instead of `isolated_subinterpreter`)
3. split up the boolean `isolated_subinterpreter` into the corresponding multiple granular settings
* allow_fork
* allow_subprocess
* allow_threads
4. add `PyInterpreterState.feature_flags` to store those settings
5. add a function for checking if a feature is enabled on an opaque `PyInterpreterState *`
6. drop `PyConfig._isolated_interpreter`
The existing default (see `Py_NewInterpreter()` and `Py_Initialize*()`) allows fork, subprocess, and threads, and the optional "isolated" interpreter (see the `_xxsubinterpreters` module) disables all three. None of that changes here; the defaults are preserved.
Note that the given `_PyInterpreterConfig` will not be used outside `_Py_NewInterpreterFromConfig()`, nor preserved. This contrasts with how `PyConfig` is currently preserved, used, and even modified outside `Py_InitializeFromConfig()`. I'd rather just avoid that mess from the start for `_PyInterpreterConfig`. We can preserve it later if we find an actual need.
This change allows us to follow up with a number of improvements (e.g. stop disallowing subprocess and support disallowing exec instead).
(Note that this PR adds "private" symbols. We'll probably make them public, and add docs, in a separate change.)
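Calling the new private API looks roughly like this (field names per the list above; the return convention shown is this sketch's assumption):

```
/* Hedged sketch of creating a restricted subinterpreter with the new
   private API. */
#include <Python.h>

static PyThreadState *
make_restricted_interpreter(void)
{
    _PyInterpreterConfig config = {
        .allow_fork = 0,
        .allow_subprocess = 0,
        .allow_threads = 1,
    };
    /* Assumed here to return the new interpreter's thread state,
       or NULL on error. */
    return _Py_NewInterpreterFromConfig(&config);
}
```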
Add Python implementations of certain longobject.c functions. These use
asymptotically faster algorithms that can be used for operations on
integers with many digits. In those cases, the performance overhead of
the Python implementation is not significant since the asymptotic
behavior is what dominates runtime. Functions provided by this module
should be considered private and not part of any public API.
Co-authored-by: Tim Peters <tim.peters@gmail.com>
Co-authored-by: Mark Dickinson <dickinsm@gmail.com>
Co-authored-by: Bjorn Martinsson
* The compiler analyzes the usage of the first 64 local variables all at once using bit masks.
* Local variables beyond the first 64 are only partially analyzed, achieving linear time.
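The core trick is ordinary 64-bit bitmask bookkeeping, along these lines (an illustrative sketch, not the compiler's actual code):

```
/* Illustrative sketch: one uint64_t tracks "definitely assigned" for the
   first 64 locals, so whole blocks can be merged with a few OR/AND ops. */
#include <stdint.h>

#define MAX_FAST_TRACKED 64

static inline uint64_t
mark_assigned(uint64_t defined, int local_index)
{
    if (local_index < MAX_FAST_TRACKED) {
        defined |= (uint64_t)1 << local_index;
    }
    return defined;
}

static inline int
maybe_unbound(uint64_t defined, int local_index)
{
    if (local_index >= MAX_FAST_TRACKED) {
        return 1;   /* beyond the tracked range: fall back to a slower path */
    }
    return (defined & ((uint64_t)1 << local_index)) == 0;
}
```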