22b0de2755
This PR sets up tagged pointers for CPython. The general idea is to create a separate struct `_PyStackRef` for everything on the evaluation stack to store the bits. This forces the C compiler to warn us if we try to cast things or pull things out of the struct directly. Only for free threading: we tag the low bit if something is deferred, meaning we skip incref and decref operations on it. This behavior may change in the future if Mark's plans to defer all objects in the interpreter loop pan out. This implies that a strict stack-reference discipline is required: ALL incref and decref operations on stackrefs must use the stackref variants, and it is unsafe to untag something and then do normal incref/decref operations on it. The new incref and decref variants are called dup and close; they mimic a "handle" API operating on these stackrefs. Please read `Include/internal/pycore_stackref.h` for more information!

Co-authored-by: Mark Shannon <9448417+markshannon@users.noreply.github.com>
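The tagging scheme described above can be illustrated with a toy model. Since the tooling in this directory is Python, here is a Python sketch (all names are hypothetical, for illustration only — this is not the real `_PyStackRef` C API): a reference carries a low tag bit, and the dup/close operations skip refcounting when the deferred bit is set.

```python
# Toy model of deferred-tagged stack references. A "pointer" is modeled
# as an (object, tag) pair; the low tag bit marks a deferred reference,
# and dup/close (the incref/decref variants) become no-ops for it.
# All names here are invented for illustration.

Py_TAG_DEFERRED = 1  # low bit marks a deferred reference


class FakeObject:
    """Stand-in for a PyObject with a reference count."""
    def __init__(self):
        self.refcnt = 1


def stackref_from_object(obj, deferred=False):
    return (obj, Py_TAG_DEFERRED if deferred else 0)


def stackref_dup(ref):
    obj, tag = ref
    if not (tag & Py_TAG_DEFERRED):  # deferred refs skip the incref
        obj.refcnt += 1
    return ref


def stackref_close(ref):
    obj, tag = ref
    if not (tag & Py_TAG_DEFERRED):  # deferred refs skip the decref
        obj.refcnt -= 1


o = FakeObject()
normal = stackref_from_object(o)
stackref_dup(normal)      # refcnt 1 -> 2
stackref_close(normal)    # refcnt 2 -> 1

deferred = stackref_from_object(o, deferred=True)
stackref_dup(deferred)    # no-op: refcount untouched
stackref_close(deferred)  # no-op
print(o.refcnt)  # 1
```

The point of the model is the discipline the PR describes: once a reference may carry a tag, every refcount operation has to go through the tag-aware dup/close variants, because a plain incref/decref on an untagged deferred reference would be unbalanced.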
README.md

# Tooling to generate interpreters

Documentation for the instruction definitions in `Python/bytecodes.c`
("the DSL") is here.
What's currently here:

- `analyzer.py`: code for converting the `AST` generated by `Parser` into a more high-level structure for easier interaction
- `lexer.py`: lexer for C, originally written by Mark Shannon
- `plexer.py`: OO interface on top of lexer.py; main class: `PLexer`
- `parsing.py`: parser for the instruction definition DSL; main class: `Parser`
- `parser.py`: helper for interactions with `parsing.py`
- `tierN_generator.py`: a couple of driver scripts to read `Python/bytecodes.c` and write `Python/generated_cases.c.h` (and several other files)
- `optimizer_generator.py`: reads `Python/bytecodes.c` and `Python/optimizer_bytecodes.c` and writes `Python/optimizer_cases.c.h`
- `stack.py`: code to handle generalized stack effects
- `cwriter.py`: code which understands tokens and how to format C code; main class: `CWriter`
- `generators_common.py`: helpers for the generators
- `opcode_id_generator.py`: generates a list of opcodes and writes them to `Include/opcode_ids.h`
- `opcode_metadata_generator.py`: reads the instruction definitions and writes the metadata to `Include/internal/pycore_opcode_metadata.h`
- `py_metadata_generator.py`: reads the instruction definitions and writes the metadata to `Lib/_opcode_metadata.py`
- `target_generator.py`: generates targets for computed goto dispatch and writes them to `Python/opcode_targets.h`
- `uop_id_generator.py`: generates a list of uop IDs and writes them to `Include/internal/pycore_uop_ids.h`
- `uop_metadata_generator.py`: reads the instruction definitions and writes the metadata to `Include/internal/pycore_uop_metadata.h`
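The generator scripts listed above share a common read/transform/emit shape: parse instruction definitions from a source file, then write a generated C header. A minimal sketch of that pattern follows — every name in it is invented for illustration, and the toy regex "parser" stands in for the real `analyzer.py`/`parsing.py` machinery:

```python
# Hypothetical sketch of the read/transform/emit shape shared by the
# *_generator.py scripts. This is NOT the real analyzer API.
import re


def parse_definitions(src: str) -> list[str]:
    # Toy "DSL": pick out names defined as inst(NAME, ...).
    return re.findall(r"inst\((\w+)", src)


def emit_ids(names: list[str]) -> str:
    # Emit a header mapping each instruction name to a numeric ID.
    lines = ["// Generated file -- do not edit"]
    for i, name in enumerate(names):
        lines.append(f"#define {name} {i}")
    return "\n".join(lines) + "\n"


src = "inst(LOAD_FAST, (-- value)) { ... } inst(STORE_FAST, (value --)) { ... }"
print(emit_ids(parse_definitions(src)))
```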
Note that there is some dummy C code at the top and bottom of
`Python/bytecodes.c`
to fool text editors like VS Code into believing this is valid C code.
## A bit about the parser
The parser class uses a pretty standard recursive descent scheme,
but with unlimited backtracking.
The `PLexer` class tokenizes the entire input before parsing starts.
We do not run the C preprocessor.
Each parsing method returns either an AST node (a `Node` instance)
or `None`, or raises `SyntaxError` (showing the error in the C source).

Most parsing methods are decorated with `@contextual`, which automatically
resets the tokenizer input position when `None` is returned.
Parsing methods may also raise `SyntaxError`, which is irrecoverable.
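The backtracking mechanism described above can be sketched in a few lines. This is a simplified illustration of a `@contextual`-style decorator, not the real `plexer.py`/`parsing.py` API — the tokenizer and parser names here are invented:

```python
# Sketch of a @contextual-style decorator for a backtracking recursive
# descent parser: record the tokenizer position on entry and restore it
# whenever the parsing method returns None (i.e. the parse failed).
import functools


class Tokenizer:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def next(self):
        if self.pos >= len(self.tokens):
            return None
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok


def contextual(method):
    @functools.wraps(method)
    def wrapper(self):
        start = self.tok.pos
        result = method(self)
        if result is None:
            self.tok.pos = start  # backtrack: consume nothing on failure
        return result
    return wrapper


class Parser:
    def __init__(self, tokens):
        self.tok = Tokenizer(tokens)

    @contextual
    def pair(self):
        # Try to match "a" followed by "b".
        if self.tok.next() == "a" and self.tok.next() == "b":
            return ("pair", "a", "b")
        return None


p = Parser(["a", "x"])
print(p.pair(), p.tok.pos)  # None 0 -- the position was reset
```

Because a failed method leaves the tokenizer where it started, the caller is free to try a different parsing method on the same input, which is exactly what "unlimited backtracking" means here.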
When a parsing method returns `None`, it is possible that after backtracking
a different parsing method returns a valid AST.

Neither the lexer nor the parsers are complete or fully correct.
Most known issues are tersely indicated by `# TODO:` comments.
We plan to fix issues as they become relevant.