# Tooling to generate interpreters

Documentation for the instruction definitions in `Python/bytecodes.c`
("the DSL") is in `interpreter_definition.md`.
What's currently here:

- `lexer.py`: lexer for C, originally written by Mark Shannon
- `plexer.py`: OO interface on top of `lexer.py`; main class: `PLexer`
- `parser.py`: parser for the instruction definition DSL; main class: `Parser`
- `generate_cases.py`: driver script to read `Python/bytecodes.c` and write
  `Python/generated_cases.c.h` (see the usage note after this list)
- `test_generator.py`: tests; require manual running using `pytest`
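A typical use is to regenerate `Python/generated_cases.c.h` after editing
`Python/bytecodes.c` by running the driver from the repository root, e.g.
`python Tools/cases_generator/generate_cases.py`. The exact flags and default
paths are best checked via the script's `--help` output, as they may change.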
Note that there is some dummy C code at the top and bottom of
`Python/bytecodes.c` to fool text editors like VS Code into believing
this is valid C code.
## A bit about the parser
The parser class uses a pretty standard recursive descent scheme,
but with unlimited backtracking.
The `PLexer` class tokenizes the entire input before parsing starts.
We do not run the C preprocessor.
Each parsing method returns either an AST node (a `Node` instance)
or `None`, or raises `SyntaxError` (showing the error in the C source).
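As a rough, self-contained illustration of that contract (not the actual code
in `parser.py`; all names below are made up for the example), a parsing method
behaves roughly like this:

```python
# Hypothetical sketch of the parsing-method contract described above;
# names and structure are illustrative, not taken from parser.py.
from dataclasses import dataclass


@dataclass
class Node:
    """Stand-in for an AST node."""
    text: str


class SketchParser:
    def __init__(self, tokens: list[str]) -> None:
        self.tokens = tokens   # the whole input, tokenized up front
        self.pos = 0           # current position in the token stream

    def name(self) -> Node | None:
        """Parse a single identifier.

        Returns a Node on success, or None on failure so that the caller
        can backtrack and try a different production; an unrecoverable
        problem would instead raise SyntaxError.
        """
        if self.pos < len(self.tokens) and self.tokens[self.pos].isidentifier():
            tok = self.tokens[self.pos]
            self.pos += 1
            return Node(tok)
        return None
```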
Most parsing methods are decorated with `@contextual`, which automatically
resets the tokenizer input position when `None` is returned.
Parsing methods may also raise `SyntaxError`, which is irrecoverable.
When a parsing method returns `None`, it is possible that after backtracking
a different parsing method returns a valid AST.
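The decorator itself can be pictured roughly as follows; this is a minimal
sketch assuming the parser keeps its position in a `pos` attribute, not the
actual `@contextual` implementation in this directory:

```python
# Minimal sketch of a @contextual-style decorator: save the token position
# before calling the parsing method and restore it if the method returns None.
import functools


def contextual(method):
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        saved_pos = self.pos              # assumed position attribute
        result = method(self, *args, **kwargs)
        if result is None:
            self.pos = saved_pos          # backtrack on failure
        return result
    return wrapper
```

In this sketch a raised `SyntaxError` propagates out of the wrapper without
restoring the position, matching the "irrecoverable" behavior described above.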
Neither the lexer nor the parsers are complete or fully correct.
Most known issues are tersely indicated by `# TODO:` comments.
We plan to fix issues as they become relevant.