The cache in the C PEG generator has been reworked:
artificial rules are now stored in the cache using the Node's string
representation as the key instead of the Node object itself.
As a result, the total number of artificial rules in parsers.c drops
from 283 to 170.
Artificial rule names now use a more natural numeric ordering.
An auxiliary method, CCallMakerVisitor._generate_artificial_rule_call,
is added to abstract the work with the artificial-rule cache.
The "is_repeat1" kwarg is now passed explicitly in the visit_Repeat0
and visit_Repeat1 methods, which slightly improves code readability.
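A minimal sketch of the caching scheme (illustrative only: apart from
_generate_artificial_rule_call, visit_Repeat0 and visit_Repeat1, the
names and rule-emission details are assumptions, not pegen's actual
code):

```python
class CCallMakerVisitor:
    def __init__(self):
        self.artificial_rule_cache = {}  # str(node) -> generated rule name
        self.counter = 0

    def _generate_artificial_rule_call(self, node, *, is_repeat1=False):
        # Key the cache by the node's string representation so that
        # structurally identical nodes share one artificial rule
        # (assumes the string form distinguishes node types).
        key = str(node)
        if key in self.artificial_rule_cache:
            return self.artificial_rule_cache[key]
        self.counter += 1  # gives rule names a natural numeric ordering
        prefix = "_loop1" if is_repeat1 else "_loop0"
        name = f"{prefix}_{self.counter}"
        self.artificial_rule_cache[key] = name
        # ... emit the artificial rule body for `node` under `name` ...
        return name

    def visit_Repeat0(self, node):
        return self._generate_artificial_rule_call(node, is_repeat1=False)

    def visit_Repeat1(self, node):
        return self._generate_artificial_rule_call(node, is_repeat1=True)
```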
The Full Grammar specification in the docs omits rule actions, so
grammar rules that exist only to raise a syntax error looked like valid
syntax.
This was solved in ef940de by hiding those rules in the custom syntax highlighter.
This moves all syntax-error alternatives to invalid rules, adds a
validator that ensures that actions containing RAISE_SYNTAX_ERROR
appear only in invalid rules, and reverts the syntax-highlighter hack.
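A hedged sketch of such a validator; the data layout and names are
hypothetical, only the check itself mirrors the description above:

```python
class ValidationError(Exception):
    pass

def validate_raising_rules(rules):
    # `rules` maps rule name -> list of (alternative, action) pairs.
    for name, alts in rules.items():
        for _alt, action in alts:
            if (action and "RAISE_SYNTAX_ERROR" in action
                    and not name.startswith("invalid_")):
                raise ValidationError(
                    f"rule {name!r} contains a RAISE_SYNTAX_ERROR action "
                    "but is not an invalid_ rule"
                )
```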
The test_peg_generator test tried to link the python313_d.lib library,
which failed because the library is now named python313t_d.lib. The
underlying problem is that the "compiler" attribute was not set when
get_libraries() was called from distutils.
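A hypothetical sketch of the shape of the fix, written against
distutils' public API (whether the test's helper code uses exactly
these calls is an assumption):

```python
from distutils.ccompiler import new_compiler
from distutils.command.build_ext import build_ext
from distutils.core import Distribution
from distutils.extension import Extension

dist = Distribution({"name": "parser"})
cmd = build_ext(dist)
cmd.finalize_options()
cmd.compiler = new_compiler()  # the missing step: set the attribute first
ext = Extension("parser", sources=["parser.c"])
print(cmd.get_libraries(ext))  # library names now match the actual build
```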
Disable compiler optimization within test_peg_generator.
This speeds up test_peg_generator by always disabling compiler
optimizations (-O0 or equivalent) when the test builds its own C
extensions.
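A minimal sketch of the flag selection, assuming the MSVC spelling on
Windows and GCC/Clang-style flags elsewhere (the helper name is
illustrative):

```python
import sys

def no_optimization_flags():
    if sys.platform == "win32":
        return ["/Od"]  # MSVC: disable optimization
    return ["-O0"]      # GCC/Clang and compatibles

# e.g. Extension(..., extra_compile_args=no_optimization_flags())
```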
A build that does not use --with-pydebug (in order to speed up test
execution) wound up with this test taking a very long time, as it
repeatedly compiled parser C code with the same optimization flags that
CPython itself was built with.
This speeds the test up 6-8x on gps-raspbian.
Also incorporates #31017's win32 conditional and flags.
Co-authored-by: Kumar Aditya kumaraditya303
There are two errors that this commit fixes:
* The parser was not correctly computing the offset and the string
source for E_LINECONT errors due to the incorrect usage of strtok().
* The parser was not correctly unwinding the call stack when a tokenizer
exception happened in rules involving optionals ('?', [...]) as we
always make them return valid results by using the comma operator. We
need to check that no error has occurred before continuing.
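The affected code is generated C that chains results with the comma
operator; this Python sketch, with hypothetical names, only illustrates
the required control flow:

```python
class Parser:
    def __init__(self):
        self.error_indicator = False

    def parse_optional(self, rule):
        result = rule(self)  # may set error_indicator and return None
        # Previously parsing continued unconditionally here, because the
        # comma operator made the optional always look successful.
        if self.error_indicator:  # a tokenizer error must unwind the stack
            raise SyntaxError("error while parsing an optional item")
        return result  # a plain None just means the optional was absent
```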
Like #28744 but for the Tools directory.
[skip issue] Opening a related issue is pending python/psf-infra-meta#130
Automerge-Triggered-By: GH:pablogsal
The invalid assignment rules are very delicate since the parser can
easily raise an invalid assignment when a keyword argument is provided.
As they are very deep in the grammar tree, it is very difficult to
specify in which contexts these rules can be used and in which they
cannot. To handle this, we need to use a different version of the rule
that does no error checking in those situations where we don't want the
rule to raise (keyword arguments and generator expressions).
We also need to check if we are in a left-recursive rule, as those can try
to eagerly advance the parser even if the parse will fail at the end of
the expression. Failing to do this allows the parser to start parsing a
call as a tuple and incorrectly identify a keyword argument as an
invalid assignment, before it realizes that it was not a tuple after all.
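A hedged sketch of the resulting guard; both parser attributes are
stand-ins modeled on the description above, not the actual generated
API:

```python
def maybe_parse_invalid(p, invalid_rule):
    # Error-raising invalid_ alternatives should only run during the
    # dedicated error-checking pass, and never while a left-recursive
    # rule is still eagerly expanding, since that expansion may yet
    # backtrack (a call briefly parsed as a tuple would otherwise report
    # a bogus invalid-assignment error for a keyword argument).
    if not p.call_invalid_rules or p.in_left_recursive_rule:
        return None
    return invalid_rule(p)
```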