I've also added a small comment to `Tools/c-analyzer/cpython/_parser.py` to trigger the `patchcheck` CI. It should pass now.
Automerge-Triggered-By: GH:ericsnowcurrently
On content update, this builds `python` and the docs. It also adds a Dockerfile that should include everything necessary to build CPython and the entire stdlib on Fedora, except autoconf 2.69.
Co-authored-by: Ronald Oussoren <ronaldoussoren@mac.com>
Co-authored-by: Dusty Phillips <dusty@phillips.codes>
We can revisit the options for keeping it global later, if desired. For now that approach seems quite complex, so we've gone with the simpler isolation solution in the meantime.
https://github.com/python/cpython/issues/100227
* Eliminate all remaining uses of Py_SIZE and Py_SET_SIZE on PyLongObject, adding asserts.
* Change layout of size/sign bits in longobject to support future addition of immortal ints and tagged medium ints.
* Add functions to hide some internals of long object, and for setting sign and digit count.
* Replace uses of IS_MEDIUM_VALUE macro with _PyLong_IsCompact().
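As a rough illustration of the new layout (a minimal sketch; the struct, field, and constant names here are hypothetical, not the actual CPython definitions): the sign lives in the low bits of a tag word, the digit count in the bits above it, and accessor functions keep the rest of the code base from touching the tag directly. The spare tag patterns are what leave room for immortal ints and tagged medium ints later.

```c
/* Minimal sketch -- names and exact bit assignments are hypothetical,
 * not the real CPython layout. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define DEMO_SIGN_BITS 2                         /* low bits hold the sign */
#define DEMO_SIGN_MASK ((1u << DEMO_SIGN_BITS) - 1)

typedef struct {
    uintptr_t lv_tag;    /* sign in the low bits, digit count above them */
    /* ...digits would follow in the real object... */
} demo_long;

/* Accessors hide the tag layout from the rest of the code base. */
static inline size_t
demo_long_digit_count(const demo_long *v)
{
    return (size_t)(v->lv_tag >> DEMO_SIGN_BITS);
}

static inline int
demo_long_sign(const demo_long *v)
{
    return (int)(v->lv_tag & DEMO_SIGN_MASK);    /* e.g. 0=zero, 1=pos, 2=neg */
}

static inline void
demo_long_set_sign_and_count(demo_long *v, unsigned int sign, size_t ndigits)
{
    assert(sign <= DEMO_SIGN_MASK);
    v->lv_tag = ((uintptr_t)ndigits << DEMO_SIGN_BITS) | sign;
}
```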
This essentially eliminates the global variable, with the associated benefits. It is also a precursor to isolating this bit of state to PyInterpreterState.
Folks who currently read _Py_RefTotal directly would have to start using _Py_GetGlobalRefTotal() instead.
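The migration is mechanical; roughly (a fragment for illustration; both symbols are CPython-internal and only meaningful in Py_REF_DEBUG builds):

```c
/* Before: reading the global directly. */
/* Py_ssize_t total = _Py_RefTotal; */

/* After: going through the accessor instead. */
Py_ssize_t total = _Py_GetGlobalRefTotal();
```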
https://github.com/python/cpython/issues/102304
This behavior is optional, because in some extreme cases it may just make debugging harder. The tool defaults to off, but Makefile.pre.in turns it on.
Also note that this makes diffs to generated_cases.c.h noisier,
since whenever you insert or delete a line in bytecodes.c,
all subsequent #line directives will change.
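For reference, the emitted directives look something like this (line numbers made up):

```c
/* The directive points the compiler and debugger back at the original
 * source, so diagnostics reference bytecodes.c, not the generated file. */
#line 1234 "Python/bytecodes.c"
            /* ...instruction body copied from bytecodes.c... */
#line 567 "Python/generated_cases.c.h"
            /* ...generator-owned glue resumes here... */
```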
This will keep us from adding new unsupported (i.e. non-const) C global variables, which would break interpreter isolation.
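For example (illustrative code, not from the repo), the distinction the check draws is roughly:

```c
/* Fine: const data is immutable, so sharing it across interpreters is safe. */
static const char *const kDefaultEncoding = "utf-8";

/* Flagged: mutable process-wide state that every interpreter would share. */
static int request_count = 0;
```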
FYI, historically it is very uncommon for new global variables to get added. Furthermore, it is rare for new code to break the c-analyzer. So the check should almost always pass unnoticed.
Note that I've removed test_check_c_globals. A test wasn't a great fit conceptually and was super slow on debug builds. A CI check is a better fit.
This also resolves gh-100237.
https://github.com/python/cpython/issues/81057
distutils was removed in November. However, the c-analyzer relies on it. To solve that here, we vendor the parts the tool needs so it can be run against 3.12+. (Also see gh-92584.)
Note that we may end up removing this code later in favor of a solution in common with the peg_generator tool (which also relies on distutils). At the least, the copy here makes sure the c-analyzer tool works on 3.12+ in the meantime.
Some incompatible changes had gone in, and the "ignore" lists weren't properly updated. This change fixes that. It's necessary prior to enabling test_check_c_globals, which I hope to do soon.
Note that this does include moving last_resort_memory_error to PyInterpreterState.
https://github.com/python/cpython/issues/90110
Prevent test_tools from copying 1000M of "source"
It doesn't need a git repo, just the checkout. We skip the .git metadata, Doc/build, Doc/venv, and `__pycache__` subdirs that developers often have in their working copies, which reduces the size of the source tree copy ten-fold.
This should significantly reduce I/O, and presumably time, on buildbots during this long test.
* Write output and metadata in a single run
This halves the time to run the cases generator
(most of the time goes into parsing the input).
* Declare or define opcode metadata based on NEED_OPCODE_TABLES
* Use generated metadata for stack_effect() (see the sketch after this list)
* compile.o depends on opcode_metadata.h
* Return -1 from _PyOpcode_num_popped/pushed for unknown opcode
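Sketch of the stack_effect() idea (signatures simplified here; not the exact generated code):

```c
/* Generated lookups (real signatures may differ from this sketch). */
extern int _PyOpcode_num_popped(int opcode, int oparg);
extern int _PyOpcode_num_pushed(int opcode, int oparg);

/* Net stack effect from the generated tables; -1 from either lookup
 * signals an unknown opcode, which we propagate to the caller. */
static int
demo_stack_effect(int opcode, int oparg)
{
    int popped = _PyOpcode_num_popped(opcode, oparg);
    int pushed = _PyOpcode_num_pushed(opcode, oparg);
    if (popped < 0 || pushed < 0) {
        return -1;  /* unknown opcode */
    }
    return pushed - popped;
}
```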
New generator feature: Generate useful glue for output arrays, so you can just write values to the output array (no bounds checking). Rewrote UNPACK_SEQUENCE_TWO_TUPLE to use this, and also UNPACK_SEQUENCE_{TUPLE,LIST}.
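For instance, the two-tuple case can be written along these lines (a sketch of the DSL shape; guard and cache details omitted):

```c
/* Declaring "values[oparg]" as the output lets the body store straight
 * into the array; the generator emits the stack-pointer bookkeeping. */
inst(UNPACK_SEQUENCE_TWO_TUPLE, (seq -- values[oparg])) {
    assert(oparg == 2);
    values[0] = Py_NewRef(PyTuple_GET_ITEM(seq, 1));
    values[1] = Py_NewRef(PyTuple_GET_ITEM(seq, 0));
    Py_DECREF(seq);
}
```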