Disable compiler optimization within test_peg_generator.
This speeds up test_peg_generator by always disabling compiler
optimizations, using -O0 or equivalent, when the test is building its
own C extensions.
A build that does not use --with-pydebug (in order to speed up test
execution overall) winds up with this test taking a very long time,
because it repeatedly compiles parser C code using the same optimization
flags that CPython itself was built with.
This speeds the test up 6-8x on gps-raspbian.
Also incorporates #31017's win32 conditional and flags.
Co-authored-by: Kumar Aditya kumaraditya303
There are two errors that this commit fixes:
* The parser was not correctly computing the offset and the string
source for E_LINECONT errors, due to incorrect usage of strtok(), which
mutates its input buffer (see the first sketch below).
* The parser was not correctly unwinding the call stack when a tokenizer
exception happened in rules involving optionals ('?', [...]), as we
always make them return valid results by using the comma operator. We
need to check first that no error has occurred before continuing (see
the second sketch below).
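Two standalone C sketches of these failure modes; both are illustrative toy programs, not excerpts of the real tokenizer or generated parser.

```c
/* Sketch 1: strtok() mutates its buffer and keeps hidden state, so any
 * offset computed against the buffer afterwards no longer refers to the
 * original source line.  Illustrative only; not the real E_LINECONT code. */
#include <stdio.h>
#include <string.h>

int main(void) {
    char source[] = "x = 1 \\\ny = 2\n";
    size_t before = strlen(source);
    char *first = strtok(source, "\n");   /* overwrites the '\n' with '\0' */
    printf("first token: \"%s\"\n", first);
    printf("buffer length went from %zu to %zu\n", before, strlen(source));
    return 0;
}
```

```c
/* Sketch 2: an optional item implemented with the comma operator always
 * "succeeds", so without an explicit error check the alternative keeps
 * parsing after a tokenizer error.  Illustrative only. */
#include <stdio.h>

static int error_indicator = 0;

/* Stand-in for an optional sub-rule in which the tokenizer raised. */
static void *optional_item(void) {
    error_indicator = 1;
    return NULL;
}

int main(void) {
    void *opt;

    /* Old pattern: "(opt = rule(p), 1)" is always truthy. */
    if ((opt = optional_item(), 1)) {
        printf("old pattern: kept going, error_indicator=%d\n", error_indicator);
    }

    error_indicator = 0;
    /* New pattern: still optional, but bail out once an error is recorded. */
    if ((opt = optional_item(), !error_indicator)) {
        printf("not reached\n");
    } else {
        printf("new pattern: unwound, error_indicator=%d\n", error_indicator);
    }
    (void)opt;
    return 0;
}
```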
Like #28744 but for the Tools directory.
[skip issue] Opening a related issue is pending python/psf-infra-meta#130
Automerge-Triggered-By: GH:pablogsal
The invalid assignment rules are very delicate since the parser can
easily raise an invalid assignment when a keyword argument is provided.
As they are very deep in the grammar tree, it is very difficult to
specify in which contexts these rules can be used and in which they
cannot.
For that, we need to use a different version of the rule that doesn't do
error checking in those situations where we don't want the rule to raise
(keyword arguments and generator expressions).
We also need to check whether we are in a left-recursive rule, as those can try
to eagerly advance the parser even if the parse will fail at the end of
the expression. Failing to do this allows the parser to start parsing a
call as a tuple and incorrectly identify a keyword argument as an
invalid assignment, before it realizes that it was not a tuple after all.
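A standalone C sketch of the two-variant idea, with made-up names; it is not the real grammar machinery, just the shape of the design:

```c
#include <ctype.h>
#include <stdio.h>

static const char *syntax_error = NULL;   /* stand-in for the parser's error state */

/* Non-raising variant: a failed match is just a failed match.  This is the
 * path a keyword argument or a generator expression goes through. */
static int is_valid_target(const char *tok) {
    return isalpha((unsigned char)tok[0]);
}

/* Raising variant: used only where a bad target really is an invalid
 * assignment and should produce the custom error message. */
static int parse_target_or_raise(const char *tok) {
    if (!is_valid_target(tok)) {
        syntax_error = "cannot assign to expression";
        return 0;
    }
    return 1;
}

int main(void) {
    /* While the parser is still unsure whether "f(x=1)" is a call or a
     * tuple, the "x" in the keyword argument must go through the quiet
     * variant; otherwise a spurious invalid-assignment error is raised. */
    printf("quiet check: %d, error: %s\n",
           is_valid_target("x"), syntax_error ? syntax_error : "none");
    (void)parse_target_or_raise;
    return 0;
}
```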
test_peg_generator now defines the _Py_TEST_PEGEN macro when building C
code so that PyAST_Validate() is not called in Parser/pegen.c. Moreover,
it defines the Py_BUILD_CORE_MODULE macro to get access to the internal
C API.
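A minimal sketch of the kind of guard this enables, with a stub standing in for the validator; the real check lives in Parser/pegen.c:

```c
#include <stdio.h>

/* Stub standing in for PyAST_Validate(); illustrative only. */
static int validate_ast(void) { return 1; }

static int run_parser(void) {
    /* ... tokenize, parse, build the AST ... */
#ifndef _Py_TEST_PEGEN
    /* Regular builds validate the freshly built AST. */
    if (!validate_ast()) {
        return 0;
    }
#endif
    /* test_peg_generator compiles its extensions with _Py_TEST_PEGEN
     * defined, so the validation pass above is compiled out there. */
    return 1;
}

int main(void) {
    printf("parse ok: %d\n", run_parser());
    return 0;
}
```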
Remove "global_ast_state" from Python-ast.c when it's built by
test_peg_generator: always get the AST state from the current interpreter.
* Add to the peg generator a new directive ('&&') that allows expecting
a token and hard-failing the parse if the token is not found. This makes
it possible to quickly emit syntax errors for missing tokens (see the
sketch after this list).
* Use the new grammar element to hard-fail if the ':' is missing before
suites.
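A standalone sketch of the forced-token idea; the helper names are made up, not the real pegen API. A normal expectation fails softly and lets the parser backtrack, while a forced ('&&') expectation raises a syntax error on the spot:

```c
#include <stdio.h>
#include <string.h>

static const char *syntax_error = NULL;

/* Soft expectation: a mismatch just means "this alternative didn't match". */
static int expect_token(const char *tok, const char *expected) {
    return strcmp(tok, expected) == 0;
}

/* Forced expectation (the '&&' item): a mismatch is a hard syntax error. */
static int expect_forced_token(const char *tok, const char *expected) {
    if (strcmp(tok, expected) != 0) {
        syntax_error = "expected ':'";
        return 0;
    }
    return 1;
}

int main(void) {
    const char *next_token = "\n";   /* e.g. "if x" followed by a newline */
    if (!expect_forced_token(next_token, ":")) {
        printf("SyntaxError: %s\n", syntax_error);
    }
    (void)expect_token;
    return 0;
}
```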
Left-recursive rules need to check for errors explicitly, since
even if the rule returns NULL, the parsing might continue and lead
to long-distance failures.
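A standalone sketch of where the explicit check goes in a simplified "grow the seed" loop for a left-recursive rule; the structure and names are illustrative only:

```c
#include <stddef.h>

static int error_indicator;              /* stand-in for p->error_indicator */

static void *try_alternatives(void) {    /* stand-in for the raw rule body */
    return NULL;
}

static void *left_recursive_rule(void) {
    void *best = NULL;
    for (;;) {
        void *res = try_alternatives();
        /* A NULL result alone is not enough to stop cleanly: an error may
         * have been recorded while the alternatives advanced the tokenizer,
         * and silently continuing would surface the failure far away from
         * its real cause. */
        if (error_indicator) {
            return NULL;
        }
        if (res == NULL) {
            break;                       /* no better parse: keep the seed */
        }
        best = res;
    }
    return best;
}

int main(void) {
    return left_recursive_rule() != NULL;
}
```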
Co-authored-by: Pablo Galindo <Pablogsal@gmail.com>
* Implement running the parser a second time for the error messages
The first parser run is only responsible for detecting whether
there is a `SyntaxError` or not. If there isn't, the AST gets returned.
Otherwise, the parser is run a second time with all the `invalid_*`
rules enabled so that all the customized error messages get produced.
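A standalone sketch of the two-pass flow, with a toy parse() standing in for the real parser entry point:

```c
#include <stdio.h>

static const char *last_error = NULL;

/* Toy stand-in: call_invalid_rules=0 is the fast pass with the invalid_*
 * rules disabled, call_invalid_rules=1 enables them to build the message. */
static void *parse(int call_invalid_rules) {
    if (call_invalid_rules) {
        last_error = "'(' was never closed";   /* illustrative message */
    }
    return NULL;                               /* pretend the source is broken */
}

static void *parse_with_errors(void) {
    void *ast = parse(0);   /* first run: only detect whether there is an error */
    if (ast != NULL) {
        return ast;         /* no SyntaxError: return the AST directly */
    }
    return parse(1);        /* second run: invalid_* rules produce the message */
}

int main(void) {
    if (parse_with_errors() == NULL) {
        printf("SyntaxError: %s\n", last_error);
    }
    return 0;
}
```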
This commit reverts commit ac0333e1e1 as the original links are working again and they provide extended features such as comments and alternative versions.
* Add new capability to the PEG parser to type variable assignments. For instance:
```
| a[asdl_stmt_seq*]=';'.small_stmt+ [';'] NEWLINE { a }
```
* Add new sequence types from the asdl definition (automatically generated)
* Make `asdl_seq` type a generic aliasing pointer type.
* Create a new `asdl_generic_seq` for the generic case using `void*`.
* The old `asdl_seq_GET`/`asdl_seq_SET` macros are now typed.
* New `asdl_seq_GET_UNTYPED`/`asdl_seq_SET_UNTYPED` macros for dealing with generic sequences.
* Change all possible `asdl_seq` types to use specific versions everywhere (see the sketch after this list).
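A simplified sketch of what the typed sequences buy; the real definitions are generated into CPython's internal headers, so the layout here is illustrative only:

```c
#include <stddef.h>

typedef struct _stmt *stmt_ty;     /* illustrative forward declaration */

/* Generic sequence: elements are untyped pointers. */
typedef struct {
    size_t size;
    void *typed_elements[1];
} asdl_generic_seq;

/* Specific sequence: same shape, but the element array carries the type. */
typedef struct {
    size_t size;
    stmt_ty typed_elements[1];
} asdl_stmt_seq;

/* The accessors read the same, but on an asdl_stmt_seq* the typed macro
 * now yields a stmt_ty instead of a void*, so the compiler can reject
 * mismatched uses. */
#define asdl_seq_GET(S, I)          ((S)->typed_elements[(I)])
#define asdl_seq_GET_UNTYPED(S, I)  ((S)->typed_elements[(I)])

int main(void) {
    asdl_stmt_seq body = { 0, { NULL } };
    stmt_ty first = asdl_seq_GET(&body, 0);   /* typed: no cast needed */
    (void)first;
    return 0;
}
```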
Currently, empty sequences in gather rules make the conditional for
gather rules fail as empty sequences evaluate as "False". We need to
explicitly check for "None" (the failure condition) to avoid false
negatives.
This commit removes the old parser, the deprecated parser module, the old parser compatibility flags and environment variables and all associated support code and documentation.
These are like keywords but they only work in context; they are not reserved except when there is an exact match.
This would enable things like match statements without reserving `match` (which would be bad for the `re.match()` function and probably lots of other places).
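A standalone sketch of a soft-keyword check; the token struct and helper name are made up. The word is only special when that exact NAME appears where the grammar asks for it:

```c
#include <stdio.h>
#include <string.h>

typedef struct {
    int is_name;           /* 1 if this is a NAME token */
    const char *string;    /* the token's text */
} Token;

/* Matches only when the NAME's text is exactly the soft keyword. */
static int expect_soft_keyword(const Token *tok, const char *keyword) {
    return tok->is_name && strcmp(tok->string, keyword) == 0;
}

int main(void) {
    Token t = { 1, "match" };
    /* At the start of a statement the grammar may try the soft keyword... */
    printf("treated as keyword here: %d\n", expect_soft_keyword(&t, "match"));
    /* ...while "match = re.match(pattern, s)" never reaches that rule, so
     * the same NAME keeps working as an ordinary identifier. */
    return 0;
}
```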
Automerge-Triggered-By: @gvanrossum
When there are two negative lookaheads in the same rule, say `!"(" blabla "," !")"`, there will be two `FunctionCall`s whose assigned variable is None. Currently, when `add_var` is called,
the first one is ignored, but when the second lookahead's variable is sent to `dedupe` it
comes back as `None_1`, which is not ignored by the declaration generator in `visit_Alt`. This patch adds an explicit check to `add_var` to distinguish whether there is a variable or not.