`ast.parse` and `compile` support a `feature_version` parameter that
tells the parser to parse the input string as if it were written in
an older Python version.
The `feature_version` is propagated to the tokenizer, which uses it
to handle the three different stages of support for `async` and
`await`. Additionally, it disallows the following at the parser level:
- The '@' operator in < 3.5
- Async functions in < 3.5
- Async comprehensions in < 3.6
- Underscores in numeric literals in < 3.6
- Await expressions in < 3.5
- Variable annotations in < 3.6
- Async for-loops in < 3.5
- Async with-statements in < 3.5
- F-strings in < 3.6
Closes we-like-parsers/cpython#124.
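A minimal sketch of the intended behaviour, assuming a build that includes this change (the exact error message is whatever the parser reports):

```python
import ast

# f-strings only exist from 3.6 on, so asking for 3.5 syntax should fail.
ast.parse("f'{x}'", feature_version=(3, 6))      # parses fine
try:
    ast.parse("f'{x}'", feature_version=(3, 5))  # expected to be rejected
except SyntaxError as exc:
    print("rejected:", exc)
```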
Before this commit, if an exception was active inside a generator
when calling gen.throw(), then that exception was lost (i.e. there
was no implicit exception chaining). This commit fixes that.
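As a small illustration of the chaining this restores (a hypothetical reproducer, not a test from the commit):

```python
def gen():
    try:
        yield 1
    except ValueError:
        # While suspended here, a ValueError is the active exception
        # inside the generator.
        yield 2

g = gen()
next(g)
g.throw(ValueError)    # caught inside the generator, which yields 2
try:
    g.throw(KeyError)  # delivered while the ValueError is still active
except KeyError as exc:
    # With the fix, the active ValueError is preserved as the implicit
    # context instead of being lost.
    print(repr(exc.__context__))
```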
This implements full support for # type: <type> comments, # type: ignore <stuff> comments, and the func_type parsing mode for ast.parse() and compile().
Closes https://github.com/we-like-parsers/cpython/issues/95.
(For now, you need to use the master branch of mypy, since another issue unique to 3.9 had to be fixed there, and there's no mypy release containing that fix yet.)
The only thing missing is `feature_version=N`, which is being tracked in https://github.com/we-like-parsers/cpython/issues/124.
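A short sketch of what this enables (the snippets below are illustrative, not taken from the test suite):

```python
import ast

# Plain type comments are collected when type_comments=True.
tree = ast.parse("x = []  # type: List[int]", type_comments=True)
print(tree.body[0].type_comment)    # 'List[int]'

# "# type: ignore" comments end up in Module.type_ignores.
tree = ast.parse("import foo  # type: ignore\n", type_comments=True)
print(tree.type_ignores)

# func_type mode parses the signature part of a function type comment.
sig = ast.parse("(int, str) -> bool", mode="func_type")
print(ast.dump(sig))
```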
Now that the default parser is the new PEG parser, ast.parse uses it as well, which means that test_peg_parser no longer tests anything meaningful. This commit introduces a new keyword argument (`oldparser`) for `_peg_parser.parse_string` to specify that a string should be parsed with the old parser. The tests use this keyword argument to compare the ASTs the new parser generates with those generated by the old parser.
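Roughly, the comparison looks like the sketch below (`_peg_parser` is an internal module that only exists in these development builds, and the exact signature of `parse_string` may differ):

```python
import ast
import _peg_parser  # internal helper module, not a public API

source = "if x:\n    pass\n"
new_tree = _peg_parser.parse_string(source)                  # PEG parser
old_tree = _peg_parser.parse_string(source, oldparser=True)  # old parser

# The tests compare the two trees, e.g. via their dumps.
assert ast.dump(new_tree) == ast.dump(old_tree)
```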
Remove _random.Random.randbytes(): the C implementation of
randbytes(). Implement the method in Python to ease subclassing:
randbytes() now directly reuses getrandbits().
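Conceptually, the pure-Python version follows this pattern (a sketch of the approach, not necessarily the exact stdlib code; the last line assumes an interpreter that includes this change):

```python
import random

class ZeroRandom(random.Random):
    def getrandbits(self, k):
        # Overriding getrandbits() now automatically affects randbytes()
        # as well, because randbytes() is built on top of it.
        return 0

def randbytes_sketch(rng, n):
    # Derive n bytes from a single getrandbits() call.
    return rng.getrandbits(n * 8).to_bytes(n, "little")

print(randbytes_sketch(ZeroRandom(), 4))   # b'\x00\x00\x00\x00'
print(ZeroRandom().randbytes(4))           # same result via the real method
```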
test.pythoninfo logs OpenSSL FIPS_mode() and Linux
/proc/sys/crypto/fips_enabled in a new "fips" section.
Co-Authored-By: Petr Viktorin <encukou@gmail.com>
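The collected data boils down to something like the following sketch (function and key names here are illustrative, not the actual pythoninfo code):

```python
def collect_fips(info):
    # OpenSSL's FIPS_mode(), exposed via the internal _hashlib module
    # on builds that have it.
    try:
        from _hashlib import get_fips_mode
    except ImportError:
        pass
    else:
        info["fips.openssl_fips_mode"] = get_fips_mode()

    # Kernel-level FIPS switch on Linux.
    try:
        with open("/proc/sys/crypto/fips_enabled", encoding="utf-8") as fp:
            info["fips.linux_crypto_fips_enabled"] = fp.read().strip()
    except OSError:
        pass
```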
After parsing is done in single-statement mode, the tokenizer buffer has to be checked for additional lines, and a `SyntaxError` must be raised if there are any.
Co-authored-by: Pablo Galindo <Pablogsal@gmail.com>
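With this check in place, an input with trailing extra lines is expected to be rejected when compiled in "single" mode (a minimal illustration):

```python
compile("x = 1\n", "<stdin>", "single")             # one statement: fine
try:
    compile("x = 1\ny = 2\n", "<stdin>", "single")  # extra line: rejected
except SyntaxError as exc:
    print("rejected:", exc)
```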
This allows the caller to avoid an exception being created when the channel is empty (much like `dict.get()`). `ChannelEmptyError` is still raised if no default is provided.
Automerge-Triggered-By: @ericsnowcurrently
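A rough sketch of the intended calling pattern (the channel API lives in the internal `_xxsubinterpreters` module, and the exact way the default is passed may differ):

```python
import _xxsubinterpreters as interpreters  # internal, experimental module

cid = interpreters.channel_create()

# With a default, receiving from an empty channel no longer raises.
value = interpreters.channel_recv(cid, None)
print(value)                                   # None

# Without a default, the old behaviour is preserved.
try:
    interpreters.channel_recv(cid)
except interpreters.ChannelEmptyError:
    print("channel is empty")
```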
Before this commit, an E_EOF error was only caught after the parser exited. There are some cases, though, where the tokenizer returns ERRORTOKEN *and* has set an E_EOF error (for example, when EOF directly follows a line continuation character), and these were not handled correctly before.
Now only test_error_during_result_unpickle_in_result_handler()
captures and ignores sys.stderr in the test process.
Tools like test.bisect_cmd don't support subTest(); they only work
at the granularity of one test method.
Remove unused ExecutorDeadlockTest._sleep_id() method.
This commit also allows passing flags to the new parser in all interfaces and fixes a bug in the parser generator that caused rules with actions to be inlined, making them disappear.
* Move socket-related functions from test.support to socket_helper.
* Import socket, nntplib and urllib.error lazily in transient_internet().
* Remove the import of multiprocess.
Previously every test was building an extension module and
loading it into sys.modules. The tearDown function was thus
not able to clean up correctly, resulting in memory leaks.
With this commit, every test function now builds the extension
module and runs the actual test code in a new process
(using assert_python_ok), so that sys.modules stays intact
and no memory gets leaked.
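The resulting test pattern looks roughly like this (module and attribute names are made up for illustration):

```python
import unittest
from test.support.script_helper import assert_python_ok

class ExtensionImportTest(unittest.TestCase):
    def test_extension_import(self):
        # Build step omitted; the point is that the freshly built module
        # is imported and exercised in a child interpreter, so the parent
        # process's sys.modules and memory are left untouched.
        code = (
            "import some_test_extension\n"        # hypothetical module name
            "assert some_test_extension.value == 42\n"
        )
        assert_python_ok("-c", code)
```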