Commit Graph

139 Commits

Author SHA1 Message Date
Serhiy Storchaka 1a0c7b9ba4
gh-121905: Consistently use "floating-point" instead of "floating point" (GH-121907) 2024-07-19 08:06:02 +00:00
Lysandros Nikolaou 4b5d3e0e72
gh-120343: Fix column offsets of multiline tokens in tokenize (#120391) 2024-06-12 20:52:55 +02:00
Lysandros Nikolaou 1b62bcee94
gh-120343: Do not reset byte_col_offset_diff after multiline tokens (#120352)
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
2024-06-11 17:00:53 +00:00
Pablo Galindo Salgado ecf16ee50e
gh-115154: Fix untokenize handling of unicode named literals (#115171) 2024-02-19 14:54:10 +00:00
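The fix above concerns untokenize() round-tripping of \N{...} escapes inside f-strings. A minimal sketch of the round-trip it restores (not code from the commit; assumes a CPython build that includes the fix):

```python
import io
import tokenize

# f-strings containing unicode named literals should round-trip through
# generate_tokens() + untokenize() unchanged on a fixed build.
source = 'f"\\N{GREEK SMALL LETTER ALPHA}"\n'
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
assert tokenize.untokenize(tokens) == source
```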
Pablo Galindo Salgado a135a6d2c6
gh-112943: Correctly compute end offsets for multiline tokens in the tokenize module (#112949) 2023-12-11 11:44:22 +00:00
Nikita Sobolev e9b5399bee
gh-111031: Check more files in `test_tokenize` (#111032) 2023-10-19 09:29:45 +01:00
Lysandros Nikolaou 17d65547df
gh-104169: Fix test_peg_generator after tokenizer refactoring (#110727)
* Fix test_peg_generator after tokenizer refactoring
* Remove references to tokenizer.c in comments etc.
2023-10-12 09:34:35 +02:00
Nikita Sobolev 732532b0af
gh-108303: Move all inspect test files to `test_inspect/` (#109607) 2023-10-10 22:15:11 +02:00
Pablo Galindo Salgado cc389ef627
gh-110259: Fix f-strings with multiline expressions and format specs (#110271)
Signed-off-by: Pablo Galindo <pablogsal@gmail.com>
2023-10-05 14:26:44 +01:00
Nikita Sobolev 1110c5bc82
gh-108303: Move tokenize-related data to Lib/test/tokenizedata (GH-109265) 2023-09-12 09:37:42 +03:00
Anthony Shaw 2c4c26c4ce
gh-108469: Update ast.unparse for unescaped quote support from PEP 701 [3.12] (#108553)
Co-authored-by: sunmy2019 <59365878+sunmy2019@users.noreply.github.com>
2023-09-05 21:01:23 +01:00
Pablo Galindo Salgado da8f87b7ea
gh-107015: Remove async_hacks from the tokenizer (#107018) 2023-07-26 16:34:15 +01:00
Nikita Sobolev d830c4a944
gh-106200: Remove unused imports (#106201) 2023-06-28 11:55:41 +00:00
Lysandros Nikolaou ab3823a97b
gh-71299: Fix __all__ in tokenize (#105907)
Co-authored-by: Unit03
2023-06-19 13:31:57 +02:00
Lysandros Nikolaou d382ad4915
gh-105820: Fix tok_mode expression buffer in file & readline tokenizer (#105828) 2023-06-15 16:21:24 +00:00
Lysandros Nikolaou abfbab6415
gh-105718: Fix buffer allocation in tokenizer with readline (#105728) 2023-06-13 16:18:11 +01:00
Pablo Galindo Salgado b047fa5e56
gh-105549: Tokenize separately NUMBER and NAME tokens and allow 0-prefixed literals (#105555) 2023-06-09 21:39:01 +01:00
Pablo Galindo Salgado d7f46bcd98
gh-105564: Don't include artificial newlines in the line attribute of tokens (#105565) 2023-06-09 17:01:26 +01:00
Pablo Galindo Salgado 7279fb6408
gh-105435: Fix spurious NEWLINE token if file ends with comment without a newline (#105442) 2023-06-07 13:31:48 +01:00
Pablo Galindo Salgado ffd2654550
gh-105390: Correctly raise TokenError instead of SyntaxError for tokenize errors (#105399) 2023-06-07 12:04:40 +01:00
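A small sketch of the behaviour gh-105390 pins down (illustrative only): incomplete input surfaces as tokenize.TokenError rather than SyntaxError.

```python
import io
import tokenize

# An unclosed parenthesis leaves the tokenizer at EOF inside a
# multi-line statement; the Python layer reports this as TokenError.
try:
    list(tokenize.generate_tokens(io.StringIO("(1 +\n").readline))
except tokenize.TokenError as err:
    print("TokenError:", err)
```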
Pablo Galindo Salgado c0a6ed3934
gh-105259: Ensure we don't show newline characters for trailing NEWLINE tokens (#105364) 2023-06-06 12:52:16 +01:00
Lysandros Nikolaou 70f315c2d6
gh-105042: Disable unmatched parens syntax error in python tokenize (#105061) 2023-05-30 22:52:52 +01:00
Pablo Galindo Salgado 9216e69a87
gh-105069: Add a readline-like callable to the tokenizer to consume input iteratively (#105070) 2023-05-30 22:43:34 +01:00
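The commit above wires a readline-style callable into the C tokenizer; the public tokenize API has long consumed input the same way. An illustrative sketch of that calling convention (uses only the documented generate_tokens() API):

```python
import tokenize

lines = iter(["x = 1\n", "y = 2\n"])

def readline():
    # Return one source line per call and "" at end of input,
    # so the tokenizer can pull source lazily.
    return next(lines, "")

for tok in tokenize.generate_tokens(readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))
```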
Marta Gómez Macías 96fff35325
gh-105017: Include CRLF lines in strings and column numbers (#105030)
Co-authored-by: Pablo Galindo <pablogsal@gmail.com>
2023-05-28 15:15:53 +01:00
Marta Gómez Macías 86d8f48935
gh-105017: Fix including additional NL token when using CRLF (#105022)
Co-authored-by: Pablo Galindo Salgado <Pablogsal@gmail.com>
2023-05-27 16:50:43 +00:00
Pablo Galindo Salgado 46b52e6e2b
gh-104976: Ensure trailing dedent tokens are emitted as in the previous tokenizer (#104980)
Signed-off-by: Pablo Galindo <pablogsal@gmail.com>
2023-05-26 22:02:26 +01:00
Pablo Galindo Salgado 3fdb55c482
gh-104972: Ensure that line attributes in tokens in the tokenize module are correct (#104975) 2023-05-26 15:46:22 +01:00
Lysandros Nikolaou c90a862cdc
gh-104866: Tokenize should emit NEWLINE after exiting block with comment (#104870) 2023-05-24 17:18:17 +01:00
Pablo Galindo Salgado c8cf9b42eb
gh-104825: Remove implicit newline in the line attribute in tokens emitted in the tokenize module (#104846) 2023-05-24 09:59:18 +00:00
Marta Gómez Macías 729b252241
gh-104741: Add line number attribute to indentation error exception (#104743) 2023-05-22 11:30:18 +00:00
Marta Gómez Macías 6715f91edc
gh-102856: Python tokenizer implementation for PEP 701 (#104323)
This commit replaces the Python implementation of the tokenize module with an implementation
that reuses the real C tokenizer via a private extension module. The tokenize module now implements
a compatibility layer that transforms tokens from the C tokenizer into Python tokenize tokens for backward
compatibility.

As the C tokenizer does not emit some tokens that the Python tokenizer provides (such as comments and non-semantic newlines), a new special mode has been added to the C tokenizer that is currently only used via the extension module exposing it to the Python layer. This new mode forces the C tokenizer to emit these extra tokens and add the metadata needed to match the old Python implementation.

Co-authored-by: Pablo Galindo <pablogsal@gmail.com>
2023-05-21 01:03:02 +01:00
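A quick illustration of the compatibility layer described above (a sketch, not code from the commit): COMMENT and NL tokens, which the C tokenizer does not normally emit, still appear in the output of the Python tokenize module.

```python
import io
import tokenize

src = "x = 1  # a comment\n\n"
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    # Expected to include COMMENT and NL entries in addition to the
    # NAME/OP/NUMBER/NEWLINE/ENDMARKER tokens.
    print(tokenize.tok_name[tok.type], repr(tok.string))
```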
chgnrdv d5a97074d2
gh-103824: fix use-after-free error in Parser/tokenizer.c (#103993) 2023-05-01 15:26:43 +00:00
Pablo Galindo Salgado 1ef61cf71a
gh-102856: Initial implementation of PEP 701 (#102855)
Co-authored-by: Lysandros Nikolaou <lisandrosnik@gmail.com>
Co-authored-by: Batuhan Taskaya <isidentical@gmail.com>
Co-authored-by: Marta Gómez Macías <mgmacias@google.com>
Co-authored-by: sunmy2019 <59365878+sunmy2019@users.noreply.github.com>
2023-04-19 11:18:16 -05:00
Pablo Galindo Salgado e13d1d9dda
gh-99581: Fix a buffer overflow in the tokenizer when copying lines that fill the available buffer (#99605) 2022-11-20 20:20:03 +00:00
Michael Droettboom 23e83a8465
gh-94808: Coverage: Test that maximum indentation level is handled (#95926)
* gh-94808: Coverage: Test that maximum indentation level is handled

* Use "compile" rather than "exec"
2022-10-06 10:39:17 -07:00
Serhiy Storchaka 6927632492
Remove trailing spaces (GH-31695) 2022-03-05 17:47:00 +02:00
Pablo Galindo Salgado a0efc0c196
bpo-46091: Correctly calculate indentation levels for whitespace lines with continuation characters (GH-30130) 2022-01-25 22:12:14 +00:00
Serhiy Storchaka a5a56154f1
Remove trailing spaces. (GH-28706) 2021-10-03 16:58:14 +03:00
Pablo Galindo Salgado a24676bedc
Add tests for the C tokenizer and expose it as a private module (GH-27924) 2021-08-24 17:50:05 +01:00
Pablo Galindo Salgado b6bde9fc42
bpo-44667: Correctly treat lines ending with comments and no newlines in the Python tokenizer (GH-27499) 2021-07-31 02:17:09 +01:00
Hai Shi 4660597b51
bpo-40275: Use new test.support helper submodules in tests (GH-21448) 2020-08-03 18:49:18 +02:00
Serhiy Storchaka 9355868458
bpo-41043: Escape literal part of the path for glob(). (GH-20994) 2020-06-20 11:10:31 +03:00
Emily Morehouse 8f59ee01be
bpo-35224: PEP 572 Implementation (#10497)
* Add tokenization of :=
- Add token to Include/token.h. Add token to documentation in Doc/library/token.rst.
- Run `./python Lib/token.py` to regenerate Lib/token.py.
- Update Parser/tokenizer.c: add case to handle `:=`.

* Add initial usage of := in grammar.

* Update Python.asdl to match the grammar updates. Regenerated Include/Python-ast.h and Python/Python-ast.c

* Update AST and compiler files in Python/ast.c and Python/compile.c. Basic functionality, this isn't scoped properly

* Regenerate Lib/symbol.py using `./python Lib/symbol.py`

* Tests - Fix failing tests in test_parser.py due to changes in token numbers for internal representation

* Tests - Add simple test for := token

* Tests - Add simple tests for named expressions using expr and suite

* Tests - Update number of levels for nested expressions to prevent stack overflow

* Update symbol table to handle NamedExpr

* Update Grammar to allow assignment expressions in if statements.
Regenerate Python/graminit.c accordingly using `make regen-grammar`

* Tests - Add additional tests for named expressions in RoundtripLegalSyntaxTestCase, based on examples and information directly from PEP 572

Note: failing tests are currently commented out (4 out of 24 tests currently fail)

* Tests - Add temporary syntax test failure tests in test_parser.py

Note: There is an outstanding TODO for this -- syntax tests need to be
moved to a different file (presumably test_syntax.py), but this is
covering what needs to be tested at the moment, and it's more convenient
to run a single test for the time being

* Add support for allowing assignment expressions as function argument annotations. Uncomment tests for these cases because they all pass now!

* Tests - Move existing syntax tests out of test_parser.py and into test_named_expressions.py. Refactor syntax tests to use unittest

* Add TargetScopeError exception to extend SyntaxError

Note: This simply creates the TargetScopeError exception, it is not yet
used anywhere

* Tests - Update tests per PEP 572

Continue refactoring test suite:
The named expression test suite now checks for any invalid cases that
throw exceptions (no longer limited to SyntaxErrors), assignment tests
to ensure that variables are properly assigned, and scope tests to
ensure that variable availability and values are correct

Note:
- There are still tests that are marked to skip, as they are not yet
implemented
- There are approximately 300 lines of the PEP that have not yet been
addressed, though these may be deferred

* Documentation - Small updates to XXX/todo comments

- Remove XXX from child description in ast.c
- Add comment with number of previously supported nested expressions for
3.7.X in test_parser.py

* Fix assert in seq_for_testlist()

* Cleanup - Denote "Not implemented -- No keyword args" on failing test case. Fix PEP8 error for blank lines at beginning of test classes in test_parser.py

* Tests - Wrap all file opens in `with...as` to ensure files are closed

* WIP: handle f(a := 1)

* Tests and Cleanup - No longer skips keyword arg test. Keyword arg test now uses a simpler test case and does not rely on an external file. Remove print statements from ast.c

* Tests - Refactor last remaining test case that relied on an external file to use a simpler test case without the dependency

* Tests - Add better description of remaining skipped tests. Add test checking scope when using assignment expression in a function argument

* Tests - Add test for nested comprehension, testing value and scope. Fix variable name in skipped comprehension scope test

* Handle restriction of LHS for named expressions - can only assign to LHS of type NAME. Specifically, restrict assignment to tuples

This adds an alternative set_context specifically for named expressions,
set_namedexpr_context. Thus, context is now set differently for standard
assignment versus assignment for named expressions in order to handle
restrictions.

* Tests - Update negative test case for assigning to lambda to match new error message. Add negative test case for assigning to tuple

* Tests - Reorder test cases to group invalid syntax cases and named assignment target errors

* Tests - Update test case for named expression in function argument - check that result and variable are set correctly

* Todo - Add todo for TargetScopeError based on Guido's comment (2b3acd37bd (r30472562))

* Tests - Add named expression tests for assignment operator in function arguments

Note: One of the two tests is skipped, as function argument parsing currently treats
an assignment expression inside parentheses as one child, which does
not properly catch the named expression, nor does it count arguments
properly

* Add NamedStore to expr_context. Regenerate related code with `make regen-ast`

* Add usage of NamedStore to ast_for_named_expr in ast.c. Update occurrences of checking for Store to also handle NamedStore where appropriate

* Add ste_comprehension to _symtable_entry to track if the namespace is a comprehension. Initialize ste_comprehension to 0. Set ste_comprehension to 1 in symtable_handle_comprehension

* s/symtable_add_def/symtable_add_def_helper. Add symtable_add_def to handle grabbing st->st_cur and passing it to symtable_add_def_helper. This now allows us to call the original code from symtable_add_def by instead calling symtable_add_def_helper with a different ste.

* Refactor symtable_record_directive to take lineno and col_offset as arguments instead of stmt_ty. This allows symtable_record_directive to be used for stmt_ty and expr_ty

* Handle elevating scope for named expressions in comprehensions.

* Handle error for usage of named expression inside a class block

* Tests - No longer skip scope tests. Add additional scope tests

* Cleanup - Update error message for named expression within a comprehension within a class. Update comments. Add assert for symtable_extend_namedexpr_scope to validate that we always find at least a ModuleScope if we don't find a Class or FunctionScope

* Cleanup - Add missing case for NamedStore in expr_context_name. Remove unused var in set_namedexpr_context

* Refactor - Consolidate set_context and set_namedexpr_context to reduce duplicated code. Special cases for named expressions are handled by checking if ctx is NamedStore

* Cleanup - Add additional use cases for ast_for_namedexpr in usage comment. Fix multiple blank lines in test_named_expressions

* Tests - Remove unnecessary test case. Renumber test case function names

* Remove TargetScopeError for now. Will add back if needed

* Cleanup - Small comment nit for consistency

* Handle positional argument check with named expression

* Add TargetScopeError exception definition. Add documentation for TargetScopeError in c-api docs. Throw TargetScopeError instead of SyntaxError when using a named expression in a comprehension within a class scope

* Increase stack size for parser by 200. This is a minimal change (approx. 5kb) and should not have an impact on any systems. Update parser test to allow 99 nested levels again

* Add TargetScopeError to exception_hierarchy.txt for test_baseexception.py

* Tests - Major update for named expression tests, both in test_named_expressions and test_parser

- Add test for TargetScopeError
- Add tests for named expressions in comprehension scope and edge cases
- Add tests for named expressions in function arguments (declarations
and call sites)
- Reorganize tests to group them more logically

* Cleanup - Remove unnecessary comment

* Cleanup - Comment nitpicks

* Explicitly disallow assignment expressions to a name inside parentheses, e.g.: ((x) := 0)

- Add check for LHS types to detect a parenthesis then a name (see note)
- Add test for this scenario
- Update tests for changed error message for named assignment to a tuple
(also, see note)

Note: This caused issues with the previous error handling for named assignment
to a LHS that contained an expression, such as a tuple. Thus, the check
for the LHS of a named expression must be changed to be more specific if
we wish to maintain the previous error messages

* Cleanup - Wrap lines more strictly in test file

* Revert "Explicitly disallow assignment expressions to a name inside parentheses, e.g.: ((x) := 0)"

This reverts commit f1531400ca7d7a2d148830c8ac703f041740896d.

* Add NEWS.d entry

* Tests - Fix error in test_pickle.test_exceptions by adding TargetScopeError to list of exceptions

* Tests - Update error message tests to reflect improved messaging convention (s/can't/cannot)

* Remove cases that cannot be reached in compile.c. Small linting update.

* Update Grammar/Tokens to add COLONEQUAL. Regenerate all files

* Update TargetScopeError PRE_INIT and POST_INIT, as this was purposefully left out when fixing rebase conflicts

* Add NamedStore back and regenerate files

* Pass along line number and end col info for named expression

* Simplify News entry

* Fix compiler warning and explicitly mark fallthrough
2019-01-24 16:49:56 -07:00
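For reference, a minimal sketch (not from the commit) of how the := token introduced here shows up through the tokenize module on Python 3.8+, where the operator's exact type is token.COLONEQUAL:

```python
import io
import token
import tokenize

src = "if (n := 10) > 5:\n    print(n)\n"
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    # The walrus operator is a single OP token whose exact_type is COLONEQUAL.
    if tok.exact_type == token.COLONEQUAL:
        print("found := at", tok.start)
```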
Serhiy Storchaka 8ac658114d
bpo-30455: Generate all token related code and docs from Grammar/Tokens. (GH-10370)
"Include/token.h", "Lib/token.py" (containing now some data moved from
"Lib/tokenize.py") and new files "Parser/token.c" (containing the code
moved from "Parser/tokenizer.c") and "Doc/library/token-list.inc" (included
in "Doc/library/token.rst") are now generated from "Grammar/Tokens" by
"Tools/scripts/generate_token.py". The script overwrites files only if
needed and can be used on the read-only sources tree.

"Lib/symbol.py" is now generated by "Tools/scripts/generate_symbol_py.py"
instead of being executable itself.

Added new make targets "regen-token" and "regen-symbol" which are now
dependencies of "regen-all".

The documentation now contains strings for operators and punctuation tokens.
2018-12-22 11:18:40 +02:00
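One user-visible effect of that move (a sketch, assuming Python 3.8+): the exact-token mapping that used to live in Lib/tokenize.py is importable from the generated Lib/token.py.

```python
import token

# EXACT_TOKEN_TYPES maps operator/punctuation strings to token numbers and is
# generated from Grammar/Tokens; it previously lived in Lib/tokenize.py.
print(token.EXACT_TOKEN_TYPES["**"] == token.DOUBLESTAR)  # True
print(token.tok_name[token.DOUBLESTAR])                   # 'DOUBLESTAR'
```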
Sergey Fedoseev b796e7dcdc Fixed several assertTrue() that were intended to be assertEqual(). (GH-8191)
Also fixed the testing of the "always" warning filter.
2018-07-09 18:25:55 +03:00
Ammar Askar c4ef4896ea bpo-33899: Make tokenize module mirror "end-of-file is end-of-line" behavior (GH-7891)
Most of the change involves fixing up the test suite, which previously made
the assumption that there wouldn't be a new line if the input didn't end in
one.

Contributed by Ammar Askar.
2018-07-06 10:19:08 +03:00
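A sketch of the behaviour this commit establishes (illustrative only): source that lacks a trailing newline still yields a NEWLINE token before ENDMARKER.

```python
import io
import tokenize

toks = list(tokenize.generate_tokens(io.StringIO("x = 1").readline))
# Expected to include 'NEWLINE' even though the input has no trailing '\n'.
print([tokenize.tok_name[t.type] for t in toks])
```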
Thomas Kluyver c56b17bd8c bpo-12486: Document tokenize.generate_tokens() as public API (#6957)
* Document tokenize.generate_tokens()

* Add news file

* Add test for generate_tokens

* Document behaviour around ENCODING token

* Add generate_tokens to __all__
2018-06-05 10:26:39 -07:00
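A short sketch of the API distinction documented here (illustrative only): tokenize.tokenize() reads bytes and yields an ENCODING token first, while the now-public generate_tokens() reads str and yields none.

```python
import io
import tokenize

# Bytes interface: the first token is ENCODING ('utf-8' by default).
for tok in tokenize.tokenize(io.BytesIO(b"x = 1\n").readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))

# Str interface: no ENCODING token is produced.
for tok in tokenize.generate_tokens(io.StringIO("x = 1\n").readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))
```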
Jelle Zijlstra ac317700ce bpo-30406: Make async and await proper keywords (#1669)
Per PEP 492, 'async' and 'await' should become proper keywords in 3.7.
2017-10-05 23:24:46 -04:00
Stéphane Wirtel 90addd6d1c bpo-31029: test_tokenize Add missing import unittest (#2865) 2017-07-25 16:33:53 +03:00
Albert-Jan Nijburg fc354f0785 bpo-25324: copy tok_name before changing it (#1608)
* add test to check if we're modifying token

* copy list so import tokenize doesn't have side effects on token

* shorten line

* add tokenize tokens to token.h to get them to show up in token

* move ERRORTOKEN back to its previous location, and fix nitpick

* copy comments from token.h automatically

* fix whitespace and make more pythonic

* changes to address review comments from @haypo

* update token.rst and Misc/NEWS

* change wording

* some more wording changes
2017-05-31 16:00:21 +02:00