- Convert `unspecified` and `unknown` to be members of a `Sentinels` enum, rather than instances of bespoke classes.
- An enum feels more idiomatic here, and works better with type checkers.
- Convert some `==` and `!=` checks for these values to identity checks, which are more idiomatic with sentinels.
- _Don't_ do the same for `Null`, as this needs to be a distinct type due to its usage in `clinic.py`.
- Use `object` as the annotation for `default` across `clinic.py`. `default` can be literally any object, so `object` is the correct annotation here.
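A minimal sketch of the resulting pattern (names are illustrative; the real definitions live in Tools/clinic/clinic.py):
```
import enum

class Sentinels(enum.Enum):
    unspecified = "unspecified"
    unknown = "unknown"

    def __repr__(self) -> str:
        return f"<{self.value.capitalize()}>"

unspecified = Sentinels.unspecified
unknown = Sentinels.unknown

# Identity checks read naturally with sentinel values:
def has_default(default: object) -> bool:
    return default is not unspecified
```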
---
Co-authored-by: Erlend E. Aasland <erlend.aasland@protonmail.com>
Introduce TypeSet, and use it to annotate the 'accept' keyword of
various C converters. Also add some missing return annotations for
converter init functions.
* Add basic mypy workflow to CI
* Make the type check pass
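A rough sketch of what the alias and an annotated converter init could look like (names here are illustrative, not necessarily the exact clinic.py code):
```
from typing import Any

TypeSet = set[type[Any]]

class str_converter:
    def converter_init(self, *, accept: TypeSet = {str}) -> None:
        ...
```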
---------
Co-authored-by: Erlend E. Aasland <erlend.aasland@protonmail.com>
Co-authored-by: Nikita Sobolev <mail@sobolevn.me>
Co-authored-by: Hugo van Kemenade <hugovk@users.noreply.github.com>
When monitoring LINE events, instrument all instructions that can have a predecessor on a different line.
Then check that a new line has been hit in the instrumentation code.
This brings the behavior closer to that of 3.11, simplifying implementation and porting of tools.
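For context, a minimal sketch of a tool consuming LINE events through sys.monitoring (PEP 669); the tool id and callback body are illustrative only:
```
import sys

def work():
    total = 0
    for i in range(3):
        total += i
    return total

TOOL = sys.monitoring.DEBUGGER_ID
sys.monitoring.use_tool_id(TOOL, "line-logger")
sys.monitoring.register_callback(TOOL, sys.monitoring.events.LINE,
                                 lambda code, line: print(code.co_name, line))
# Instrument only this code object for LINE events:
sys.monitoring.set_local_events(TOOL, work.__code__, sys.monitoring.events.LINE)
work()
```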
Use the unused keyword param in the converter to explicitly
mark an argument as unused:
/*[clinic input]
SomeBaseClass.stubmethod
flag: bool(unused=True)
[clinic start generated code]*/
Replaces our built-in SHA3 implementation with a verified one from the HACL* project.
This implementation is used when OpenSSL does not provide SHA3 or is not present.
Python 3.11 shipped with a very slow, tiny SHA3 implementation in order to get off the <=3.10 reference implementation, which wound up having serious bugs. This brings us back to a reasonably performing built-in implementation, consistent with what our other guaranteed-available standard hash algorithms were just replaced with: code from the HACL* project.
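The hashlib-level API is the same regardless of which backend provides SHA3; a trivial check:
```
import hashlib

# Whether the digest comes from OpenSSL or the bundled HACL* code is an
# implementation detail; callers use the same interface.
print(hashlib.sha3_256(b"abc").hexdigest())
```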
---------
Co-authored-by: Gregory P. Smith <greg@krypto.org>
This is strictly about moving the "obmalloc" runtime state from
`_PyRuntimeState` to `PyInterpreterState`. Doing so improves isolation
between interpreters, specifically most of the memory (incl. objects)
allocated for each interpreter's use. This is important for a
per-interpreter GIL, but such isolation is valuable even without it.
FWIW, a per-interpreter obmalloc is the proverbial
canary-in-the-coalmine when it comes to the isolation of objects between
interpreters. Any object that leaks (unintentionally) to another
interpreter is highly likely to cause a crash (on debug builds at
least). That's a useful thing to know, relative to interpreter
isolation.
This PR makes three minor linting adjustments to the `wasm` module
caught by [ruff](https://github.com/charliermarsh/ruff).
* Issue: gh-103801
---------
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
This is the implementation of PEP 683
Motivation:
The PR introduces the ability to immortalize instances in CPython, which bypasses reference counting. Tagging objects as immortal allows us to skip certain operations when we know that the object will be around for the entire execution of the runtime.
Note that this by itself will bring a performance regression to the runtime due to the extra reference count checks. However, it brings the ability to have truly immutable objects, which are useful in other contexts such as immutable data sharing between sub-interpreters.
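A hedged way to observe the effect from Python (illustrative; the exact saturated refcount value is an implementation detail):
```
import sys

# With PEP 683, refcount operations on immortal objects (None, True, small
# ints, ...) become no-ops, so the reported count stays fixed.
before = sys.getrefcount(None)
refs = [None] * 1000            # would normally add ~1000 references to None
after = sys.getrefcount(None)
print(before == after)          # expected: True on an interpreter with PEP 683
```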
Remove the bundled setuptools wheel from ensurepip, and stop installing setuptools in environments created by venv.
Co-Authored-by: Hugo van Kemenade <hugovk@users.noreply.github.com>
Co-authored-by: C.A.M. Gerlach <CAM.Gerlach@Gerlach.CAM>
Co-authored-by: Oleg Iarygin <oleg@arhadthedev.net>
* The majority of the monitoring code is in instrumentation.c
* The new instrumentation bytecodes are in bytecodes.c
* legacy_tracing.c adapts the new API to the old sys.settrace and sys.setprofile APIs
I've also added a small comment to `Tools/c-analyzer/cpython/_parser.py` to trigger the `patchcheck` CI. It must pass now.
Automerge-Triggered-By: GH:ericsnowcurrently
On content update, builds `python` and the docs. Also adds a Dockerfile that should include everything needed to build CPython and the entire stdlib on Fedora, except autoconf 2.69.
Co-authored-by: Ronald Oussoren <ronaldoussoren@mac.com>
Co-authored-by: Dusty Phillips <dusty@phillips.codes>
We can revisit the options for keeping it global later, if desired. For now, that approach seems quite complex, so we've gone with the simpler isolation solution in the meantime.
https://github.com/python/cpython/issues/100227
* Eliminate all remaining uses of Py_SIZE and Py_SET_SIZE on PyLongObject, adding asserts.
* Change layout of size/sign bits in longobject to support future addition of immortal ints and tagged medium ints.
* Add functions to hide some internals of long object, and for setting sign and digit count.
* Replace uses of IS_MEDIUM_VALUE macro with _PyLong_IsCompact().
This essentially eliminates the global variable, with the associated benefits. This is also a precursor to isolating this bit of state to PyInterpreterState.
Folks that currently read _Py_RefTotal directly would have to start using _Py_GetGlobalRefTotal() instead.
https://github.com/python/cpython/issues/102304
This behavior is optional, because in some extreme cases it
may just make debugging harder. The tool defaults it to off,
but it is on in Makefile.pre.in.
Also note that this makes diffs to generated_cases.c.h noisier,
since whenever you insert or delete a line in bytecodes.c,
all subsequent #line directives will change.
This will keep us from adding new unsupported (i.e. non-const) C global variables, which would break interpreter isolation.
FYI, historically it is very uncommon for new global variables to get added. Furthermore, it is rare for new code to break the c-analyzer. So the check should almost always pass unnoticed.
Note that I've removed test_check_c_globals. A test wasn't a great fit conceptually and was super slow on debug builds. A CI check is a better fit.
This also resolves gh-100237.
https://github.com/python/cpython/issues/81057
distutils was removed in November. However, the c-analyzer relies on it. To solve that here, we vendor the parts the tool needs so it can be run against 3.12+. (Also see gh-92584.)
Note that we may end up removing this code later in favor of a solution in common with the peg_generator tool (which also relies on distutils). At the least, the copy here makes sure the c-analyzer tool works on 3.12+ in the meantime.
Some incompatible changes had gone in, and the "ignore" lists weren't properly updated. This change fixes that. It's necessary prior to enabling test_check_c_globals, which I hope to do soon.
Note that this does include moving last_resort_memory_error to PyInterpreterState.
https://github.com/python/cpython/issues/90110
Prevent test_tools from copying 1000M of "source"
It doesn't need a git repo, just the checkout. We skip .git metadata, Doc/build, Doc/venv, and `__pycache__` subdirs that developers often have in their working copies; skipping them reduces the size of the source tree copy ten-fold.
This should significantly reduce IO and presumably time on buildbots during this long test.
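A rough sketch of the copy-with-exclusions idea (not the actual test_tools code; the real skip list targets Doc/build and Doc/venv specifically rather than any directory with those basenames):
```
import shutil

def copy_source_tree(src: str, dst: str) -> None:
    # Skip VCS metadata, caches, and doc build artifacts so the copy stays small.
    shutil.copytree(
        src, dst, symlinks=True,
        ignore=shutil.ignore_patterns(".git", "__pycache__", "build", "venv"),
    )
```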
* Write output and metadata in a single run
This halves the time to run the cases generator
(most of the time goes into parsing the input).
* Declare or define opcode metadata based on NEED_OPCODE_TABLES
* Use generated metadata for stack_effect()
* compile.o depends on opcode_metadata.h
* Return -1 from _PyOpcode_num_popped/pushed for unknown opcode
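The same stack-effect computation is exposed at the Python level as dis.stack_effect(), which makes for a quick sanity check (opcode names vary across versions):
```
import dis

# BUILD_LIST with oparg 3 pops three items and pushes one list: net effect -2.
print(dis.stack_effect(dis.opmap["BUILD_LIST"], 3))   # -> -2
```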
New generator feature: Generate useful glue for output arrays, so you can just write values to the output array (no bounds checking). Rewrote UNPACK_SEQUENCE_TWO_TUPLE to use this, and also UNPACK_SEQUENCE_{TUPLE,LIST}.
You can now write things like this:
```
inst(BUILD_STRING, (pieces[oparg] -- str)) { ... }
inst(LIST_APPEND, (list, unused[oparg-1], v -- list, unused[oparg-1])) { ... }
```
Note that array output effects are only partially supported (they must be named `unused` or correspond to an input effect).
For these the instr_format field uses IX instead of IB.
Register instructions use IX, IB, IBBX, IBBB, etc.
Also: Include the closing '}' in Block.tokens, for completeness
- This doesn't cover everything (far from it) but it's a start.
- This uses pytest, which isn't ideal, but was quickest to get started.
Co-authored-by: Kumar Aditya <59607654+kumaraditya303@users.noreply.github.com>
(These aren't used yet, but may be coming soon,
and it's easier to keep this tool the same between branches.)
Added a sanity check for all this to compile.c.
Co-authored-by: Irit Katriel <iritkatriel@yahoo.com>
The presence of this macro indicates that a particular instruction
may be considered for conversion to a register-based format
(see https://github.com/faster-cpython/ideas/issues/485).
An invariant (currently unchecked) is that `DEOPT_IF()` may only
occur *before* `DECREF_INPUTS()`, and `ERROR_IF()` may only occur
*after* it. One reason not to check this is that there are a few
places where we insert *two* `DECREF_INPUTS()` calls, in different
branches of the code. The invariant checking would have to be able
to do some flow control analysis to understand this.
Note that many instructions, especially specialized ones,
can't be converted to use this macro straightforwardly.
This is because the generator currently only generates plain
`Py_DECREF(variable)` statements, and cannot generate
things like `_Py_DECREF_SPECIALIZED()` let alone deal with
`_PyList_AppendTakeRef()`.
We can't move it to _PyRuntimeState because the symbol is exposed in the stable ABI. We'll have to sort that out before a per-interpreter GIL, but it shouldn't be too hard.
https://github.com/python/cpython/issues/81057
Stack effects can now have a type, e.g. `inst(X, (left, right -- jump/uint64_t)) { ... }`.
Instructions converted to the non-legacy format:
* COMPARE_OP
* COMPARE_OP_FLOAT_JUMP
* COMPARE_OP_INT_JUMP
* COMPARE_OP_STR_JUMP
* STORE_ATTR
* DELETE_ATTR
* STORE_GLOBAL
* STORE_ATTR_INSTANCE_VALUE
* STORE_ATTR_WITH_HINT
* STORE_ATTR_SLOT, and complete the store_attr family
* Complete the store_subscr family: STORE_SUBSCR{,DICT,LIST_INT}
  (STORE_SUBSCR was already half converted,
but wasn't using cache effects yet.)
* DELETE_SUBSCR
* PRINT_EXPR
* INTERPRETER_EXIT (a bit weird, ends in return)
* RETURN_VALUE
* GET_AITER (had to restructure it some)
The original had mysterious `SET_TOP(NULL)` before `goto error`.
I assume those just account for `obj` having been decref'ed,
so I got rid of them in favor of the cleanup implied by `ERROR_IF()`.
* LIST_APPEND (a bit unhappy with it)
* SET_ADD (also a bit unhappy with it)
Various other improvements/refactorings as well.
Newly supported interpreter definition syntax:
- `op(NAME, (input_stack_effects -- output_stack_effects)) { ... }`
- `macro(NAME) = OP1 + OP2;`
Also some other random improvements:
- Convert `WITH_EXCEPT_START` to use stack effects
- Fix lexer to balk at unrecognized characters, e.g. `@`
- Fix moved output names; support object pointers in cache
- Introduce `error()` method to print errors
- Introduce read_uint16(p) as equivalent to `*p`
Co-authored-by: Brandt Bucher <brandtbucher@gmail.com>
This is part of the effort to consolidate global variables, to make them easier to manage (and make it easier to later move some of them to PyInterpreterState).
https://github.com/python/cpython/issues/81057
We actually don't move PyImport_Inittab. Instead, we make a copy that we keep on _PyRuntimeState and use only that after Py_Initialize(). We also prevent folks from modifying PyImport_Inittab (as best we can) after that point.
https://github.com/python/cpython/issues/81057
The global allocators were stored in 3 static global variables: _PyMem_Raw, _PyMem, and _PyObject. State for the "small block" allocator was stored in another 13. That makes a total of 16 global variables. We are moving all 16 to the _PyRuntimeState struct as part of the work for gh-81057. (If PEP 684 is accepted then we will follow up by moving them all to PyInterpreterState.)
https://github.com/python/cpython/issues/81057
As we consolidate global variables, we find some objects that are almost suitable to add to _PyRuntimeState.global_objects, but have some small/sneaky bit of per-interpreter state (e.g. a weakref list). We're adding PyInterpreterState.static_objects so we can move such objects there. (We'll remove the _not_used field once we've added others.)
https://github.com/python/cpython/issues/81057
Up until now we had a single generated initializer macro for all the statically declared global objects in _PyRuntimeState, including several one-offs (e.g. the empty tuple). The one-offs don't need to be generated, but were because we had one big initializer. Having separate initializers for each set of generated global objects allows us to generate only the ones we need to. This lets us add initializers for one-off global objects without having to generate them.
https://github.com/python/cpython/issues/81057
* Adds EXIT_INTERPRETER instruction to exit _PyEval_EvalFrameDefault()
* Simplifies RETURN_VALUE, YIELD_VALUE and RETURN_GENERATOR instructions as they no longer need to check for entry frames.
Add _PyStaticObject_CheckRefcnt() function to make
_PyStaticObjects_CheckRefcnt() shorter. Use
_PyObject_ASSERT_FAILED_MSG() to log the object causing the fatal
error.
We do the following:
* move the generated _PyUnicode_InitStaticStrings() to its own file
* move the generated _PyStaticObjects_CheckRefcnt() to its own file
* include pycore_global_objects.h in extension modules instead of pycore_runtime_init.h
These changes help us avoid including things that aren't needed.
https://github.com/python/cpython/issues/90868
This makes it more clear that a given test is definitely testing against a single-phase init (legacy) extension module. The new module is a companion to _testmultiphase.
https://github.com/python/cpython/issues/98627
This adds support for comparing pystats collected from two different builds.
- The `--json-output` option can be used to load in a set of raw stats and output a
  JSON file.
- Two of these JSON files can be provided on the next run, and then comparative
results between the two are output.
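A generic sketch of the comparison idea (not the actual Tools/scripts code; it assumes the dumps are flat mappings of counter name to value):
```
import json

def compare_stats(path_a: str, path_b: str) -> None:
    with open(path_a) as fa, open(path_b) as fb:
        a, b = json.load(fa), json.load(fb)
    for key in sorted(set(a) | set(b)):
        if a.get(key) != b.get(key):
            print(f"{key}: {a.get(key)} -> {b.get(key)}")

# compare_stats("base.json", "head.json")
```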
Remove the distutils package. It was deprecated in Python 3.10 by PEP
632 "Deprecate distutils module". For projects that still use distutils
and cannot be updated to something else, the setuptools project can
be installed: it still provides distutils.
* Remove Lib/distutils/ directory
* Remove test_distutils
* Remove references to distutils
* Skip test_check_c_globals and test_peg_generator since they use
distutils
A backslash-character pair that is not a valid escape sequence now
generates a SyntaxWarning, instead of a DeprecationWarning. For
example, re.compile("\d+\.\d+") now emits a SyntaxWarning ("\d" is an
invalid escape sequence); use raw strings for regular expressions:
re.compile(r"\d+\.\d+"). In a future Python version, a SyntaxError
will eventually be raised instead of a SyntaxWarning.
Octal escapes with a value larger than 0o377 (e.g. "\477"), deprecated
in Python 3.11, now produce a SyntaxWarning instead of a
DeprecationWarning. In a future Python version they will eventually
become a SyntaxError.
codecs.escape_decode() and codecs.unicode_escape_decode() are left
unchanged: they still emit DeprecationWarning.
* The parser only emits SyntaxWarning for Python 3.12 (feature
version), and still emits DeprecationWarning on older Python
versions.
* Fix SyntaxWarning by using raw strings in Tools/c-analyzer/ and
wasm_build.py.
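A quick way to observe the new warning category at runtime (sketch; exact warning text may differ):
```
import warnings

# Compiling source that contains an invalid escape sequence ("\d") now emits
# SyntaxWarning rather than DeprecationWarning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    compile('pattern = "\\d+"', "<example>", "exec")
print([w.category.__name__ for w in caught])   # expected: ['SyntaxWarning']

# Raw string literals avoid the warning entirely:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    compile('pattern = r"\\d+"', "<example>", "exec")
print(len(caught))                              # expected: 0
```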
This was introduced in commit 5884449539
to determine whether readline is already linked against curses or tinfo in
setup.py, which is no longer present.
The switch cases (really TARGET(opcode) macros) have been moved from ceval.c to generated_cases.c.h. That file is generated from instruction definitions in bytecodes.c (which impersonates a C file so the C code it contains can be edited without custom support in e.g. VS Code).
The code generator lives in Tools/cases_generator (it has a README.md explaining how it works). The DSL used to describe the instructions is a work in progress, described in https://github.com/faster-cpython/ideas/blob/main/3.12/interpreter_definition.md.
This is surely a work-in-progress. An easy next step could be auto-generating super-instructions.
**IMPORTANT: Merge Conflicts**
If you get a merge conflict for instruction implementations in ceval.c, your best bet is to port your changes to bytecodes.c. That file looks almost the same as the original cases, except instead of `TARGET(NAME)` it uses `inst(NAME)`, and the trailing `DISPATCH()` call is omitted (the code generator adds it automatically).
Add Python implementations of certain longobject.c functions. These use
asymptotically faster algorithms that can be used for operations on
integers with many digits. In those cases, the performance overhead of
the Python implementation is not significant since the asymptotic
behavior is what dominates runtime. Functions provided by this module
should be considered private and not part of any public API.
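A hedged illustration of where the speedups surface at the Python level (thresholds and internal helper names are intentionally omitted, since they are private details):
```
import sys

# Illustrative only: the asymptotically faster Python fallbacks are used
# internally for very large operands; user code just sees faster int <-> str
# conversion. The conversion limit is lifted only for this demo.
sys.set_int_max_str_digits(0)
n = 3 ** 100_000          # an integer with roughly 47,700 decimal digits
s = str(n)                # decimal conversion of a very large integer
assert int(s) == n
```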
Co-author: Tim Peters <tim.peters@gmail.com>
Co-author: Mark Dickinson <dickinsm@gmail.com>
Co-author: Bjorn Martinsson
The "pyperf command" tool be used instead. Example:
$ python3 -m pyperf command -- python3 -c pass
.....................
command: Mean +- std dev: 17.8 ms +- 0.4 ms
pyperf also computes the standard deviation, which gives an idea of
whether the benchmark looks reliable or not.
Remove outdated example scripts of the Tools/scripts/ directory:
* gprof2html.py
* md5sum.py
* nm2def.py
* pathfix.py
* win_add2path.py
Remove test_gprof2html, test_md5sum and test_pathfix of test_tools.
Remove diff.py and ndiff.py scripts of Tools/scripts/: move them to
Doc/includes/.
* diff.py and ndiff.py files are no longer executable. Also remove
  their shebang ("#!/usr/bin/env python3").
* Remove the -profile command from ndiff.py to simplify the code.
* Remove ndiff.py copyright and history command. The Python
documentation examples are distributed under the "Zero Clause BSD
License".
Relevant tests moved from test_exceptions to test_traceback to be able to
compare both implementations.
Co-authored-by: Carl Friedrich Bolz-Tereick <cfbolz@gmx.de>
Remove the sys.getdxp() function and the Tools/scripts/analyze_dxp.py
script. DXP stands for "dynamic execution pairs". They were related
to DYNAMIC_EXECUTION_PROFILE and DXPAIRS macros which have been
removed in Python 3.11. Python can now be built with "./configure
--enable-pystats" to gather statistics on Python opcodes.
Remove the Tools/demo/ directory which contained old demo scripts. A
copy can be found in the old-demos project:
https://github.com/gvanrossum/old-demos
Remove the following old demo scripts:
* beer.py
* eiffel.py
* hanoi.py
* life.py
* markov.py
* mcast.py
* queens.py
* redemo.py
* rpython.py
* rpythond.py
* sortvisu.py
* spreadsheet.py
* vector.py
Changes:
* Remove a reference to the redemo.py script in the regex howto
documentation.
* Remove a reference to the removed Tools/demo/ directory in the
curses documentation.
* Update PC/layout/ to remove the reference to Tools/demo/ directory.
Fix a shell code injection vulnerability in the
get-remote-certificate.py example script. The script no longer uses a
shell to run "openssl" commands. Issue reported and initial fix by
Caleb Shortt.
Remove the Windows code path to send "quit" on stdin to the "openssl
s_client" command: use DEVNULL on all platforms instead.
Co-authored-by: Caleb Shortt <caleb@rgauge.com>
- pre-build Emscripten ports and system libraries
- check for broken EMSDK versions
- use EMSDK's node for wasm32-emscripten
- warn when PKG_CONFIG_PATH is set
- add support level information
Here we automatically ignore uses of _PyArg_Parser, "kwlist" arrays, and module/type defs. That way new uses don't trigger false positives in the c-analyzer check script.
- support EMSDK tot-upstream and git releases
- allow WASM assets for wasm64-emscripten and WASI. This makes single-file distributions on WASI easier.
- decouple WASM assets from browser builds
We broke it with a recent `_PyArg_Parser` change.
Also:
* moved the `_PyArg_Parser` whitelist entries over to ignored.tsv now that they are thread-safe
* added some known globals from a currently-excluded file
* dropped some outdated globals from the whitelist
* Make sure that tp_dictoffset is correct when Py_TPFLAGS_MANAGED_DICT is set.
* Avoid traversing managed dict twice when subclassing class with Py_TPFLAGS_MANAGED_DICT set.
Automate WASM build with a new Python script. The script provides
several build profiles with configure flags for Emscripten flavors
and WASI. The script can detect and use Emscripten SDK and WASI SDK from
default locations or env vars.
``configure`` now detects Node arguments and creates HOSTRUNNER
arguments for Node 16. It also sets correct arguments for
``wasm64-emscripten``.
Co-authored-by: Brett Cannon <brett@python.org>
We only statically initialize for core code and builtin modules. Extension modules still create
the tuple at runtime. We'll solve that part of interpreter isolation separately.
This change includes generated code. The non-generated changes are in:
* Tools/clinic/clinic.py
* Python/getargs.c
* Include/cpython/modsupport.h
* Makefile.pre.in (re-generate global strings after running clinic)
* very minor tweaks to Modules/_codecsmodule.c and Python/Python-tokenize.c
All other changes are generated code (clinic, global strings).
gh-93243
This PR is required to reduce the diffs of the porting steps that follow (there is no need to either keep documentation and tests consistent with each porting step, or to try to port everything and remove smtpd in a single PR).
Automerge-Triggered-By: GH:warsaw
wasi-env now sets WASIX flags. This allows us to control all build
parameters for the wasm32-wasi buildbot from the CPython repository.
Also export and improve the SYSROOT parameter.