This reduces confusion between jumps at the bytecode level
(e.g. the JUMPTO() and JUMPBY() macros and the various JUMP_* opcodes)
and jumps in the C code (which are 'goto' statements).
Previously, the optional restrictions on subinterpreters were: disallow fork, subprocess, and threads. By default, we were disallowing all three for "isolated" interpreters. We always allowed all three for the main interpreter and those created through the legacy `Py_NewInterpreter()` API.
Those settings were a bit conservative, so here we've adjusted the optional restrictions to: fork, exec, threads, and daemon threads. The default for "isolated" interpreters disables fork, exec, and daemon threads; regular threads are allowed by default. We continue to always allow everything for the main interpreter and the legacy API.
In the code, we add `_PyInterpreterConfig.allow_exec` and `_PyInterpreterConfig.allow_daemon_threads`. We also add `Py_RTFLAGS_DAEMON_THREADS` and `Py_RTFLAGS_EXEC`.
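A rough manual check of the new defaults, not part of the change itself: it assumes the private `_xxsubinterpreters` module is available, that `create()` returns an "isolated" interpreter under these defaults (some versions take an explicit flag for that), and that starting a disallowed daemon thread raises RuntimeError.
```python
import _xxsubinterpreters as interpreters

interp = interpreters.create()        # assumed to be "isolated" by default
interpreters.run_string(interp, """
import threading

t = threading.Thread(target=lambda: None)
t.start(); t.join()                   # regular threads: allowed by default

d = threading.Thread(target=lambda: None, daemon=True)
try:
    d.start()                         # daemon threads: disallowed here
except RuntimeError as exc:
    print("daemon thread rejected:", exc)
""")
interpreters.destroy(interp)
```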
* As most of `test_embed` now uses `Py_InitializeFromConfig`, add
a specific test case to cover `Py_Initialize` (and `Py_InitializeEx`)
* Rename `_testembed` init helper to clarify the API used
* Add a `PyConfig_Clear` call in `Py_InitializeEx` to make
the code more obviously correct (it already didn't leak as
none of the dynamically allocated config fields were being
populated, but it's clearer if the wrappers follow the
documented API usage guidelines)
Change FOR_ITER to have the same stack effect regardless of whether it branches or not.
Performance is unchanged, as FOR_ITER (and its specialized forms) jumps over the cleanup code.
(see https://github.com/python/cpython/issues/98608)
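A quick way to look at the instruction in context (a plain inspection snippet, not part of the change; the exact output depends on the CPython version, so it is not reproduced here):
```python
import dis

def f(seq):
    total = 0
    for x in seq:          # the loop header compiles to FOR_ITER
        total += x
    return total

dis.dis(f)                 # shows FOR_ITER and the instructions after the loop
```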
This change does the following:
1. change the argument to a new `_PyInterpreterConfig` struct
2. rename the function to `_Py_NewInterpreterFromConfig()`, inspired by `Py_InitializeFromConfig()`; it takes a `_PyInterpreterConfig` instead of `isolated_subinterpreter`
3. split up the boolean `isolated_subinterpreter` into the corresponding multiple granular settings
* allow_fork
* allow_subprocess
* allow_threads
4. add `PyInterpreterState.feature_flags` to store those settings
5. add a function for checking if a feature is enabled on an opaque `PyInterpreterState *`
6. drop `PyConfig._isolated_interpreter`
The existing default (see `Py_NewInterpreter()` and `Py_Initialize*()`) allows fork, subprocess, and threads, and the optional "isolated" interpreter (see the `_xxsubinterpreters` module) disables all three. None of that changes here; the defaults are preserved.
Note that the given `_PyInterpreterConfig` will not be used outside `_Py_NewInterpreterFromConfig()`, nor preserved. This contrasts with how `PyConfig` is currently preserved, used, and even modified outside `Py_InitializeFromConfig()`. I'd rather just avoid that mess from the start for `_PyInterpreterConfig`. We can preserve it later if we find an actual need.
This change allows us to follow up with a number of improvements (e.g. stop disallowing subprocess and support disallowing exec instead).
(Note that this PR adds "private" symbols. We'll probably make them public, and add docs, in a separate change.)
Add Python implementations of certain longobject.c functions. These use
asymptotically faster algorithms for operations on integers with many
digits. In those cases, the performance overhead of
the Python implementation is not significant since the asymptotic
behavior is what dominates runtime. Functions provided by this module
should be considered private and not part of any public API.
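As a flavor of the approach, here is a simplified sketch (not the module's actual code; the name and cutoff are made up) of divide-and-conquer string-to-int conversion. The single big multiplication per level is done by CPython's subquadratic int arithmetic, so the pure-Python recursion still beats the naive per-digit O(n**2) conversion for very large inputs.
```python
def str_to_int(s: str) -> int:
    """Convert a non-empty string of ASCII digits to an int."""
    if len(s) <= 64:                        # small inputs: plain conversion
        return int(s)
    mid = len(s) // 2
    hi, lo = str_to_int(s[:mid]), str_to_int(s[mid:])
    # Shift the high half left by the number of digits in the low half.
    return hi * 10 ** (len(s) - mid) + lo
```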
Co-author: Tim Peters <tim.peters@gmail.com>
Co-author: Mark Dickinson <dickinsm@gmail.com>
Co-author: Bjorn Martinsson
* The compiler analyzes the usage of the first 64 local variables all at once using bit masks.
* Local variables beyond the first 64 are only partially analyzed, achieving linear time.
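A toy illustration of the bit-mask idea (this is not the compiler's actual code; the names are made up): with one bit per local, tracking which of the first 64 locals are definitely bound is a couple of integer operations rather than a per-variable loop.
```python
NLOCALS_WIDE = 64          # locals with an index below this fit in one mask

def mark_bound(mask: int, local_index: int) -> int:
    """Record that the local at local_index is bound on this path."""
    if local_index < NLOCALS_WIDE:
        return mask | (1 << local_index)
    return mask            # higher-numbered locals are handled separately

def merge_paths(mask_a: int, mask_b: int) -> int:
    """A local is definitely bound after a join only if bound on both paths."""
    return mask_a & mask_b
```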
Make the sys.setprofile() and sys.settrace() functions reentrant. They
can no longer fail with: RuntimeError("Cannot install a trace function
while another trace function is being installed").
Make the _PyEval_SetTrace() and _PyEval_SetProfile() functions reentrant,
rather than detecting and rejecting reentrant calls. Only delete the
reference to the old function once the new function is fully set,
when a reentrant call is safe. Also call _PySys_Audit() earlier.
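A rough illustration (not the test suite's code) of the kind of reentrant call involved: dropping the reference to the old trace function can run arbitrary Python, such as a `__del__` that itself calls `sys.settrace()`.
```python
import sys

class Tracer:
    def __call__(self, frame, event, arg):
        return self
    def __del__(self):
        sys.settrace(None)   # reentrant call while settrace() is still running

sys.settrace(Tracer())       # the trace machinery now holds the only reference
sys.settrace(None)           # replacing it drops that reference, firing __del__
```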
The `}` marked with `/* End instructions */` is the end of the switch.
There is another pair of `{}` around the switch, which is vestigial
from ancient times when it was `for (;;) { switch (opcode) { ... } }`.
All `DISPATCH` macro calls should be inside that pair.
In `_warnings.c`, in the C equivalent of `warnings.warn_explicit()`, if the module globals are given (and not None), the warning will attempt to get the source line for the issued warning. To do this, it needs the module's loader.
Previously, it would only look up `__loader__` in the module globals. In https://github.com/python/cpython/issues/86298 we want to defer to the `__spec__.loader` if available.
The first step on this journey is to check that `loader == __spec__.loader` and issue another warning if it is not. This commit does that.
Since this is a PoC, only manual testing for now.
```python
# /tmp/foo.py
import warnings
import bar
warnings.warn_explicit(
    'warning!',
    RuntimeWarning,
    'bar.py', 2,
    module='bar knee',
    module_globals=bar.__dict__,
)
```
```python
# /tmp/bar.py
import sys
import os
import pathlib
# __loader__ = pathlib.Path()
```
Then running `./python.exe -Wdefault /tmp/foo.py` produces:
```
bar.py:2: RuntimeWarning: warning!
import os
```
Uncomment the `__loader__ = ` line in `bar.py` and try it again:
```
sys:1: ImportWarning: Module bar; __loader__ != __spec__.loader (<_frozen_importlib_external.SourceFileLoader object at 0x109f7dfa0> != PosixPath('.'))
bar.py:2: RuntimeWarning: warning!
import os
```
Automerge-Triggered-By: GH:warsaw
This is a small performance improvement, especially for one or two hot
places such as _handle_fromlist(), which are called a lot and where the
.format() method was being used just to join two strings with a dot.
Otherwise it is merely a readability improvement.
We keep `_ERR_MSG` and `_ERR_MSG_PREFIX` as those may be used elsewhere for canonical looking error messages.
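Purely illustrative (the names here are made up), this is the kind of rewrite involved:
```python
parent, child = "pkg", "mod"

before = "{}.{}".format(parent, child)   # attribute lookup plus a method call
after = parent + "." + child             # or an f-string: f"{parent}.{child}"
assert before == after
```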
Right now, the tokenizer only returns the token type and two pointers to the start and end of the token.
This PR modifies the tokenizer to return the type and set all of the necessary information,
so that the parser does not have to do this.
Remove the sys.getdxp() function and the Tools/scripts/analyze_dxp.py
script. DXP stands for "dynamic execution pairs". They were related
to the DYNAMIC_EXECUTION_PROFILE and DXPAIRS macros, which were
removed in Python 3.11. Python can now be built with "./configure
--enable-pystats" to gather statistics on Python opcodes.
It had to live as a global outside of PyConfig for stable ABI reasons in
the pre-3.12 backports.
This removes `_Py_global_config_int_max_str_digits` and gets rid of
the equivalent field in the internal `struct _is` (`PyInterpreterState`),
since code can just use the nested config struct it already contains.
Adds tests to verify unique settings and configs in subinterpreters.
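From Python code, the limit is read and changed exactly as before (this is the existing public API, shown only for reference); it is simply stored per interpreter now.
```python
import sys

print(sys.get_int_max_str_digits())   # 4300 by default
sys.set_int_max_str_digits(10_000)    # raise the limit in this interpreter
sys.set_int_max_str_digits(0)         # or disable the limit entirely
```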
Fix command line parsing: reject the "-X int_max_str_digits" option when
it is given no value (which is invalid), even when the PYTHONINTMAXSTRDIGITS
environment variable is set to a valid limit.
At Python exit, a thread holding the GIL can sometimes wait forever
for a thread (usually a daemon thread) which requested to drop the
GIL but has already exited. To fix the race condition, the thread
which requested the GIL drop now resets its request before exiting.
take_gil() now calls RESET_GIL_DROP_REQUEST() before
PyThread_exit_thread() if it had called SET_GIL_DROP_REQUEST(), to fix a
race condition with drop_gil().
Issue discovered and analyzed by Mingliang ZHAO.