-- cpython/Parser/Python.asdl


-- ASDL's 4 builtin types are:
-- identifier, int, string, constant
module Python
{
mod = Module(stmt* body, type_ignore* type_ignores)
    | Interactive(stmt* body)
    | Expression(expr body)
    | FunctionType(expr* argtypes, expr returns)
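    -- Illustrative note (not part of the upstream grammar): the stdlib `ast`
    -- module exposes constructors matching these signatures, and ast.parse()
    -- picks the root node via its mode argument; roughly:
    --   ast.parse("x = 1")                -> Module(body=[Assign(...)], type_ignores=[])
    --   ast.parse("x + 1", mode="eval")   -> Expression(body=BinOp(...))
    --   ast.parse("x + 1", mode="single") -> Interactive(body=[Expr(BinOp(...))])
    --   ast.parse("(int) -> bool", mode="func_type")
    --                                     -> FunctionType(argtypes=[Name('int')], returns=Name('bool'))
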
stmt = FunctionDef(identifier name, arguments args,
                   stmt* body, expr* decorator_list, expr? returns,
                   string? type_comment)
     | AsyncFunctionDef(identifier name, arguments args,
                        stmt* body, expr* decorator_list, expr? returns,
                        string? type_comment)
     | ClassDef(identifier name,
                expr* bases,
                keyword* keywords,
                stmt* body,
                expr* decorator_list)
     | Return(expr? value)
     | Delete(expr* targets)
     | Assign(expr* targets, expr value, string? type_comment)
     | AugAssign(expr target, operator op, expr value)
       -- 'simple' indicates that we annotate simple name without parens
     | AnnAssign(expr target, expr annotation, expr? value, int simple)
       -- use 'orelse' because else is a keyword in target languages
     | For(expr target, expr iter, stmt* body, stmt* orelse, string? type_comment)
     | AsyncFor(expr target, expr iter, stmt* body, stmt* orelse, string? type_comment)
     | While(expr test, stmt* body, stmt* orelse)
     | If(expr test, stmt* body, stmt* orelse)
     | With(withitem* items, stmt* body, string? type_comment)
     | AsyncWith(withitem* items, stmt* body, string? type_comment)
     | Match(expr subject, match_case* cases)
     | Raise(expr? exc, expr? cause)
     | Try(stmt* body, excepthandler* handlers, stmt* orelse, stmt* finalbody)
     | Assert(expr test, expr? msg)
     | Import(alias* names)
     | ImportFrom(identifier? module, alias* names, int? level)
     | Global(identifier* names)
     | Nonlocal(identifier* names)
     | Expr(expr value)
     | Pass | Break | Continue

       -- col_offset is the byte offset in the utf8 string the parser uses
       attributes (int lineno, int col_offset, int? end_lineno, int? end_col_offset)
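       -- Illustrative example (not part of the upstream file): `x: int = 1`
       -- parses to roughly
       --   AnnAssign(target=Name('x', Store()), annotation=Name('int', Load()),
       --             value=Constant(1), simple=1)
       -- with lineno=1, col_offset=0, end_lineno=1, end_col_offset=10, while the
       -- parenthesised target in `(x): int = 1` gives simple=0.
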
-- BoolOp() can use left & right?
expr = BoolOp(boolop op, expr* values)
     | NamedExpr(expr target, expr value)
     | BinOp(expr left, operator op, expr right)
     | UnaryOp(unaryop op, expr operand)
     | Lambda(arguments args, expr body)
     | IfExp(expr test, expr body, expr orelse)
     | Dict(expr* keys, expr* values)
     | Set(expr* elts)
     | ListComp(expr elt, comprehension* generators)
     | SetComp(expr elt, comprehension* generators)
     | DictComp(expr key, expr value, comprehension* generators)
     | GeneratorExp(expr elt, comprehension* generators)
       -- the grammar constrains where yield expressions can occur
     | Await(expr value)
     | Yield(expr? value)
     | YieldFrom(expr value)
       -- need sequences for compare to distinguish between
       -- x < 4 < 3 and (x < 4) < 3
     | Compare(expr left, cmpop* ops, expr* comparators)
     | Call(expr func, expr* args, keyword* keywords)
     | FormattedValue(expr value, int? conversion, expr? format_spec)
     | JoinedStr(expr* values)
     | Constant(constant value, string? kind)
       -- the following expression can appear in assignment context
     | Attribute(expr value, identifier attr, expr_context ctx)
     | Subscript(expr value, expr slice, expr_context ctx)
     | Starred(expr value, expr_context ctx)
     | Name(identifier id, expr_context ctx)
     | List(expr* elts, expr_context ctx)
     | Tuple(expr* elts, expr_context ctx)
       -- can appear only in Subscript
     | Slice(expr? lower, expr? upper, expr? step)
       -- col_offset is the byte offset in the utf8 string the parser uses
       attributes (int lineno, int col_offset, int? end_lineno, int? end_col_offset)
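       -- Illustrative example (not part of the upstream file): the chain
       -- `x < 4 < 3` is one Compare node, whereas `(x < 4) < 3` nests two:
       --   Compare(left=Name('x'), ops=[Lt(), Lt()], comparators=[Constant(4), Constant(3)])
       --   Compare(left=Compare(left=Name('x'), ops=[Lt()], comparators=[Constant(4)]),
       --           ops=[Lt()], comparators=[Constant(3)])
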
expr_context = Load | Store | Del
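-- Illustrative example (not part of the upstream file): the same Name node is
-- used for reads and writes; in `x = y` the target `x` has ctx=Store() and `y`
-- has ctx=Load(), while in `del x` the target has ctx=Del().
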
boolop = And | Or
operator = Add | Sub | Mult | MatMult | Div | Mod | Pow | LShift
           | RShift | BitOr | BitXor | BitAnd | FloorDiv
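-- Illustrative example (not part of the upstream file): these tags appear inside
-- BinOp and AugAssign nodes; `a @ b` is roughly BinOp(Name('a'), MatMult(), Name('b')),
-- and `a //= 2` is AugAssign(target=Name('a', Store()), op=FloorDiv(), value=Constant(2)).
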
unaryop = Invert | Not | UAdd | USub
cmpop = Eq | NotEq | Lt | LtE | Gt | GtE | Is | IsNot | In | NotIn
comprehension = (expr target, expr iter, expr* ifs, int is_async)
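-- Illustrative example (not part of the upstream file): `[i*i for i in range(3) if i]`
-- is roughly
--   ListComp(elt=BinOp(Name('i'), Mult(), Name('i')),
--            generators=[comprehension(target=Name('i', Store()),
--                                      iter=Call(Name('range'), [Constant(3)], []),
--                                      ifs=[Name('i')], is_async=0)])
-- an `async for` clause sets is_async=1 instead.
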
excepthandler = ExceptHandler(expr? type, identifier? name, stmt* body)
                attributes (int lineno, int col_offset, int? end_lineno, int? end_col_offset)
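-- Illustrative example (not part of the upstream file): `except ValueError as e: pass`
-- becomes roughly ExceptHandler(type=Name('ValueError'), name='e', body=[Pass()]);
-- a bare `except:` has type=None and name=None.
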
arguments = (arg* posonlyargs, arg* args, arg? vararg, arg* kwonlyargs,
             expr* kw_defaults, arg? kwarg, expr* defaults)
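-- Illustrative example (not part of the upstream file): `def f(p, /, a, b=1, *args, k, **kw): pass`
-- carries roughly
--   arguments(posonlyargs=[arg('p')], args=[arg('a'), arg('b')], vararg=arg('args'),
--             kwonlyargs=[arg('k')], kw_defaults=[None], kwarg=arg('kw'),
--             defaults=[Constant(1)])
-- kw_defaults has one slot per keyword-only argument (None when it has no default),
-- while defaults pads the tail of posonlyargs + args.
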
arg = (identifier arg, expr? annotation, string? type_comment)
      attributes (int lineno, int col_offset, int? end_lineno, int? end_col_offset)
-- keyword arguments supplied to call (NULL identifier for **kwargs)
keyword = (identifier? arg, expr value)
          attributes (int lineno, int col_offset, int? end_lineno, int? end_col_offset)
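-- Illustrative example (not part of the upstream file): in the call `f(x, key=1, **opts)`
-- the keywords list is roughly
--   [keyword(arg='key', value=Constant(1)), keyword(arg=None, value=Name('opts'))]
-- so the `**opts` entry is the one with a NULL (None) identifier.
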
-- import name with optional 'as' alias.
alias = (identifier name, identifier? asname)
        attributes (int lineno, int col_offset, int? end_lineno, int? end_col_offset)
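-- Illustrative example (not part of the upstream file): `from os import path as p`
-- is roughly ImportFrom(module='os', names=[alias(name='path', asname='p')], level=0),
-- while a plain `import sys` uses alias(name='sys', asname=None).
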
withitem = (expr context_expr, expr? optional_vars)
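-- Illustrative example (not part of the upstream file): `with open(fn) as f, lock: pass`
-- is roughly
--   With(items=[withitem(context_expr=Call(Name('open'), [Name('fn')], []),
--                        optional_vars=Name('f', Store())),
--               withitem(context_expr=Name('lock'), optional_vars=None)],
--        body=[Pass()])
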
match_case = (pattern pattern, expr? guard, stmt* body)
pattern = MatchValue(expr value)
        | MatchSingleton(constant value)
        | MatchSequence(pattern* patterns)
        | MatchMapping(expr* keys, pattern* patterns, identifier? rest)
        | MatchClass(expr cls, pattern* patterns, identifier* kwd_attrs, pattern* kwd_patterns)
        | MatchStar(identifier? name)
          -- The optional "rest" MatchMapping parameter handles capturing extra mapping keys
        | MatchAs(pattern? pattern, identifier? name)
        | MatchOr(pattern* patterns)
          attributes (int lineno, int col_offset, int end_lineno, int end_col_offset)
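          -- Illustrative example (not part of the upstream file): each `case`
          -- clause of a `match` statement becomes one match_case; roughly:
          --   case 401 | 403:        MatchOr([MatchValue(Constant(401)), MatchValue(Constant(403))])
          --   case Point(x=0, y=0):  MatchClass(cls=Name('Point'), patterns=[], kwd_attrs=['x', 'y'],
          --                                     kwd_patterns=[MatchValue(Constant(0)), MatchValue(Constant(0))])
          --   case [x, *rest] if x:  MatchSequence([MatchAs(name='x'), MatchStar(name='rest')])
          --                          with guard=Name('x')
          --   case _:                MatchAs(pattern=None, name=None)
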
type_ignore = TypeIgnore(int lineno, string tag)
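-- Illustrative example (not part of the upstream file): with
-- ast.parse(src, type_comments=True), a line such as `x = f()  # type: ignore[misc]`
-- contributes a TypeIgnore entry to Module.type_ignores; its tag holds the text
-- after "ignore" (roughly "[misc]" here, or an empty string for a bare ignore).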
}