This effectively reverts the Makefile change in gh-31637. I've added some notes so it is more clear what is going on.
We also update the "Check if generated files are up to date" job to run "make regen-deepfreeze" to ensure "make regen-global-objects" catches deepfreeze.c.
https://bugs.python.org/issue47146
We have to run "make regen-deepfreeze" before running Tools/scripts/generate_global_objects.py; otherwise we will miss any changes to global objects in deep-frozen modules (which aren't committed in the repo). However, building $(PYTHON_FOR_FREEZE) fails if a global object (e.g. via _Py_ID(...)) has been added to or removed from one of its source files without generate_global_objects.py running first. So "make regen-global-objects" would sometimes fail.
We solve this by running generate_global_objects.py before *and* after "make regen-deepfreeze". To speed things up and cut down on noise, we also avoid updating the global objects files if there are no changes to them.
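Roughly, the ordering looks like the following sketch (an illustration only, using subprocess to stand in for the Makefile recipes; the interpreter used to invoke the script is an assumption, not the actual build logic):

```python
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

# 1) Pick up global objects added/removed in regular source files, so that
#    $(PYTHON_FOR_FREEZE) builds cleanly.
run("python3", "Tools/scripts/generate_global_objects.py")
# 2) Regenerate the deep-frozen modules (produces/updates deepfreeze.c).
run("make", "regen-deepfreeze")
# 3) Run the generator again to pick up global objects used by deepfreeze.c;
#    files whose content did not change are left untouched.
run("python3", "Tools/scripts/generate_global_objects.py")
```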
https://bugs.python.org/issue46712
* Moves the bytecode to the end of the corresponding PyCodeObject and quickens it in place (see the inspection sketch after this list).
* Removes the almost-always-unused co_varnames, co_freevars, and co_cellvars member caches
* _PyOpcode_Deopt is a new mapping from all opcodes to their un-quickened forms.
* _PyOpcode_InlineCacheEntries is renamed to _PyOpcode_Caches
* _Py_IncrementCountAndMaybeQuicken is renamed to _PyCode_Warmup
* _Py_Quicken is renamed to _PyCode_Quicken
* _co_quickened is renamed to _co_code_adaptive (and is now a read-only memoryview).
* The compiler no longer emits unused nonzero opargs.
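As a rough way to observe the in-place quickening from Python (assuming CPython 3.11+; the warmup count shown is arbitrary and the exact threshold is an implementation detail):

```python
import dis

def add(a, b):
    return a + b

# Run the function enough times for the specializing interpreter to warm it
# up and quicken the bytecode in place.
for _ in range(1000):
    add(1, 2)

# dis can show the adaptive (quickened) instructions.
dis.dis(add, adaptive=True)
```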
- fd inheritance can't be modified because Emscripten doesn't support subprocesses anyway.
- setpriority always fails
- geteuid no longer causes problems with latest emsdk
- umask is a stub
- geteuid / getuid always return 0, but process cannot chown to random uid.
- getgroups always fails.
- geteuid and getegid always return 0 (root), which confuses tarfile and tests.
- hardlinks (link, linkat) always fail.
- non-encodable file names are not supported by the NODERAWFS layer.
- mark more tests as depending on subprocess and multiprocessing.
Mocking does not work if the module fails to import.
<stdbool.h> is the standard/modern way to define bool, true and false. Applications that embed or extend Python are free to define bool, true and false themselves, but there are existing applications that use slightly different redefinitions, which fail if the header is included.
It's OK to use stdbool outside the public headers, though.
https://bugs.python.org/issue46748
Instead of manually enumerating the global strings in generate_global_objects.py, we derive the list from the uses of _Py_ID() and _Py_STR() in the source files.
This is partly inspired by gh-31261.
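A simplified sketch of the extraction idea (a hypothetical helper, not the actual generate_global_objects.py logic):

```python
import re
from pathlib import Path

ID_RE = re.compile(r"_Py_ID\((\w+)\)")
STR_RE = re.compile(r"_Py_STR\((\w+)\)")

identifiers, strings = set(), set()
for path in Path(".").rglob("*.[ch]"):
    text = path.read_text(encoding="utf-8", errors="ignore")
    identifiers.update(ID_RE.findall(text))   # names used via _Py_ID(...)
    strings.update(STR_RE.findall(text))      # names used via _Py_STR(...)

print(len(identifiers), "identifiers;", len(strings), "strings")
```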
https://bugs.python.org/issue46541
This change adds variables that had been added since the last time the whitelist was updated. It also cleans up the list a little.
https://bugs.python.org/issue36876
We're no longer using _Py_IDENTIFIER() (or _Py_static_string()) in any core CPython code. It is still used in a number of non-builtin stdlib modules.
The replacement is: PyUnicodeObject (not pointer) fields under _PyRuntimeState, statically initialized as part of _PyRuntime. A new _Py_GET_GLOBAL_IDENTIFIER() macro facilitates lookup of the fields (along with _Py_GET_GLOBAL_STRING() for non-identifier strings).
https://bugs.python.org/issue46541#msg411799 explains the rationale for this change.
The core of the change is in:
* (new) Include/internal/pycore_global_strings.h - the declarations for the global strings, along with the macros
* Include/internal/pycore_runtime_init.h - added the static initializers for the global strings
* Include/internal/pycore_global_objects.h - where the struct in pycore_global_strings.h is hooked into _PyRuntimeState
* Tools/scripts/generate_global_objects.py - added generation of the global string declarations and static initializers
I've also added a --check flag to generate_global_objects.py (along with make check-global-objects) to check for unused global strings. That check is added to the PR CI config.
The remainder of this change updates the core code to use _Py_GET_GLOBAL_IDENTIFIER() instead of _Py_IDENTIFIER() and the related _Py*Id functions (likewise for _Py_GET_GLOBAL_STRING() instead of _Py_static_string()). This includes adding a few functions where there wasn't already an alternative to _Py*Id(), replacing the _Py_Identifier * parameter with PyObject *.
The following are not changed (yet):
* stop using _Py_IDENTIFIER() in the stdlib modules
* (maybe) get rid of _Py_IDENTIFIER(), etc. entirely -- this may not be doable, as at least one package on PyPI is using this (private) API
* (maybe) intern the strings during runtime init
https://bugs.python.org/issue46541
This reduces the size of the executable's data segment by **300 KB**, because when the modules are deep-frozen the marshalled frozen data just wastes space. This was inspired by a comment by @gvanrossum in https://github.com/python/cpython/pull/29118#issuecomment-958521863. Note: There is a new option `--deepfreeze-only` in `freeze_modules.py` to change this behavior; it is on by default to save disk space.
```console
# du -s ./python before
27892 ./python
# du -s ./python after
27524 ./python
```
Automerge-Triggered-By: GH:ericsnowcurrently
Disable compiler optimization within test_peg_generator.
This speeds up test_peg_generator by always disabling compiler
optimizations by using -O0 or equivalent when the test is building its
own C extensions.
A build not using --with-pydebug (in order to speed up test execution)
winds up with this test taking a very long time, as it would do
repeated compilation of parser C code using the same optimization
flags as CPython was built with.
This speeds the test up 6-8x on gps-raspbian.
Also incorporates #31017's win32 conditional and flags.
Co-authored-by: Kumar Aditya kumaraditya303
This change is a prerequisite for generating code for other global objects (like strings in gh-30928).
(We borrowed some code from Tools/scripts/deepfreeze.py.)
https://bugs.python.org/issue46541
The build system now uses a :program:`_bootstrap_python` interpreter for
freezing and deepfreezing again. To speed up the build process, the build tools
:program:`_bootstrap_python` and :program:`_freeze_module` are no longer
built with LTO.
Cross building depends on a build Python interpreter, which must have the same
version and bytecode as the target host Python.
Instead we use $(PYTHON_FOR_REGEN) .../deepfreeze.py with the
frozen .h file as input, as we did for Windows in bpo-45850.
We also get rid of the code that generates the .h files
when make regen-frozen is run (i.e., .../make_frozen.py),
and the MANIFEST file.
Restore Python 3.8 and 3.9 as Windows host Python again
Co-authored-by: Kumar Aditya <59607654+kumaraditya303@users.noreply.github.com>
Implement changes to build with deep-frozen modules on Windows.
Note that we now require Python 3.10 as the "bootstrap" or "host" Python.
This gives a modest startup speed improvement (around 7%) on Windows.
This improves startup time by 10% or more for `python -c pass` on UNIX-ish systems.
The Makefile.pre.in generating code builds on Eric's work for bpo-45020, but the .c file generator is new.
Windows version TBD.
Currently custom modules (the array set on PyImport_FrozenModules) replace all the frozen stdlib modules. That can be problematic and is unlikely to be what the user wants. This change treats the custom frozen modules as additions instead. They take precedence over all other frozen modules except for those needed to bootstrap the import system. If the "code" field of an entry in the custom array is NULL then that frozen module is treated as disabled, which allows a custom entry to disable a frozen stdlib module.
This change allows us to get rid of is_essential_frozen_module() and simplifies the logic for which frozen modules should be ignored.
https://bugs.python.org/issue45395
The "freeze" tool has been part of the repo for a long time. However, it hasn't had any tests in the test suite to guard against regressions. We add such a test here. This is especially important as there has been a lot of change recently related to frozen modules, with more to come.
Note that as part of the test we build Python out-of-tree and install it in a temp dir.
https://bugs.python.org/issue45629
This is a cross-platform check that the symbols are actually
exported in the ABI, not e.g. hidden in a macro.
Caveat: PyModule_Create2 & PyModule_FromDefAndSpec2 are skipped.
These aren't exported on some of our buildbots. This is a bug
(bpo-44133). This test now makes sure all the others don't regress.
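Roughly, the check resolves each symbol through ctypes (a simplified sketch; the real test covers the whole stable ABI symbol list, and PyList_GetItem here is just an arbitrary example symbol):

```python
import ctypes

# Resolving the symbol through the loaded Python library proves it is really
# exported in the ABI, not merely provided as a macro or inline function.
lib = ctypes.pythonapi
try:
    getattr(lib, "PyList_GetItem")
    print("PyList_GetItem is exported")
except AttributeError:
    print("PyList_GetItem is missing from the ABI")
```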
There are two errors that this commit fixes:
* The parser was not correctly computing the offset and the string
source for E_LINECONT errors due to the incorrect usage of strtok().
* The parser was not correctly unwinding the call stack when a tokenizer
exception happened in rules involving optionals ('?', [...]) as we
always make them return valid results by using the comma operator. We
need to check first if we don't have an error before continuing.
* Never change types' cached keys. It could invalidate inline attribute objects.
* Lazily create object dictionaries.
* Update specialization of LOAD/STORE_ATTR.
* Don't update shared keys version for deletion of value.
* Update gdb support to handle instance values.
* Rename SPLIT_KEYS opcodes to INSTANCE_VALUE.
Like #28744 but for the Tools directory.
[skip issue] Opening a related issue is pending python/psf-infra-meta#130
Automerge-Triggered-By: GH:pablogsal
In the list of generated frozen modules at the top of Tools/scripts/freeze_modules.py, you will find that some of the modules have a different name than the module (or .py file) that is actually frozen. Let's call each case an "alias". Aliases do not come into play until we get to the (generated) list of modules in Python/frozen.c. (The tool for freezing modules, Programs/_freeze_module, is only concerned with the source file, not the module it will be used for.)
Knowledge of which frozen modules are aliases (and the identity of the original module) normally isn't important. However, this information is valuable when we go to set __file__ on frozen stdlib modules. This change updates Tools/scripts/freeze_modules.py to map aliases to the original module name (or None if not a stdlib module) in Python/frozen.c. We also add a helper function in Python/import.c to look up a frozen module's alias and add the result of that function to the frozen info returned from find_frozen().
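The best-known alias can be observed from Python directly (a quick illustration on a typical CPython build, not part of the change itself):

```python
import _frozen_importlib

# "_frozen_importlib" is an alias: the module actually frozen under that
# name is importlib._bootstrap, which is exactly the kind of information
# needed when setting __file__ on frozen stdlib modules.
print(_frozen_importlib.__name__)   # 'importlib._bootstrap'
```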
https://bugs.python.org/issue45020
I've added a number of test-only modules. Some of those cases are covered by the recently frozen stdlib modules (and some will be once we add encodings back in). However, I figured we'd play it safe by having a set of modules guaranteed to be there during tests.
https://bugs.python.org/issue45020
Currently we're freezing the __init__.py twice, duplicating the built data unnecessarily. With this change we do it once. There is no change in runtime behavior.
https://bugs.python.org/issue45020
* Makefile.pre.in: Add $(srcdir) when needed, remove it when it was
used by mistake.
* freeze_modules.py tool uses ./Programs/_freeze_module if the
executable doesn't exist in the source tree.
The update_file.py tool now preserves the end of line of the updated
file. Fix the "make regen-frozen" command: it no longer changes the
end of line of PCbuild/ files on Unix. Git changes the end of line
depending on the platform.
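A minimal sketch of the end-of-line preservation (a hypothetical helper, not the actual update_file.py):

```python
import os

def update_file(path, content):
    # Keep the end-of-line style the existing file already uses (CRLF for
    # PCbuild/ files, LF elsewhere); default to LF for brand-new files.
    newline = "\n"
    if os.path.exists(path):
        with open(path, "rb") as f:
            if b"\r\n" in f.read():
                newline = "\r\n"
    with open(path, "w", encoding="utf-8", newline=newline) as f:
        f.write(content)
```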
The main advantage is that the files will no longer show up in diffs and PRs. That means, for a PR, the number of files / lines changed will more clearly reflect the actual change. (This is essentially an un-revert of gh-28375.)
https://bugs.python.org/issue45020
The main advantage is that the files will no longer show up in diffs and PRs. That means, for a PR, the number of files / lines changed will more clearly reflect the actual change.
https://bugs.python.org/issue45020
Here's one more small cleanup that should have been in PR gh-28319. We eliminate stdout side-effects from importing the frozen __hello__ module, and update tests accordingly. We also move the module's source file into Lib/ from Tools/freeze/flag.py.
https://bugs.python.org/issue45019
This will enable us to drop the frozen module header files from the repository.
It does currently cause many source files to be built twice, which just takes more time. For whoever comes to fix this in the future, the files shared between freeze_module and pythoncore should be put into a static library that is consumed by both.
Doing this provides significant performance gains for runtime startup (~15% with all the imported modules frozen). We don't yet freeze all the imported modules because there are a few hiccups in the build systems we need to sort out first. (See bpo-45186 and bpo-45188.)
Note that in PR GH-28320 we added a command-line flag (-X frozen_modules=[on|off]) that allows users to opt out of (or into) using frozen modules. The default is still "off" but we will change it to "on" as soon as we can do it in a way that does not cause contributors pain.
https://bugs.python.org/issue45020
There are a few things I missed in gh-27980. This is a follow-up that will make subsequent PRs cleaner. It includes fixes to tests and tools that reference the frozen modules.
https://bugs.python.org/issue45019
* Constructors of subclasses of some builtin classes (e.g. tuple, list,
frozenset) no longer accept arbitrary keyword arguments.
* A subclass of set can now define a __new__() method with additional
keyword parameters without also overriding __init__().
Frozen modules must be added to several files in order to work properly. Before this change this had to be done manually. Here we add a tool to generate the relevant lines in those files instead. This helps us avoid mistakes and omissions.
https://bugs.python.org/issue45019
Places the locals between the specials and stack. This is the more "natural" layout for a C struct, makes the code simpler and gives a slight speedup (~1%)
* Convert "specials" array to InterpreterFrame struct, adding f_lasti, f_state and other non-debug FrameObject fields to it.
* Refactor calls, pushing the call to the interpreter upward toward _PyEval_Vector.
* Compute f_back when on thread stack, only filling in value when frame object outlives stack invocation.
* Move ownership of InterpreterFrame in generator from frame object to generator object.
* Do not create frame objects for Python calls.
* Do not create frame objects for generators.
* Remove struct _node from the stable ABI list
This struct was removed along with the old parser in Python 3.9 (PEP 617)
* Stable ABI list: Use the public name "PyFrameObject" rather than "_frame"
* Ensure limited API doesn't contain private names
Names prefixed by an underscore are private by definition.
* Add a blurb
* Specialize LOAD_ATTR with LOAD_ATTR_SLOT and LOAD_ATTR_SPLIT_KEYS
* Move dict-common.h to internal/pycore_dict.h
* Add LOAD_ATTR_WITH_HINT specialized opcode.
* Quicken the code in a function if it contains a loop.
* Specialize LOAD_ATTR for module attributes.
* Add specialization stats
These were reverted in gh-26530 (commit 17c4edc) due to refleaks.
* 2c1e258 - Compute deref offsets in compiler (gh-25152)
* b2bf2bc - Add new internal code objects fields: co_fastlocalnames and co_fastlocalkinds. (gh-26388)
This change fixes the refleaks.
https://bugs.python.org/issue43693
* Revert "bpo-43693: Compute deref offsets in compiler (gh-25152)"
This reverts commit b2bf2bc1ec.
* Revert "bpo-43693: Add new internal code objects fields: co_fastlocalnames and co_fastlocalkinds. (gh-26388)"
This reverts commit 2c1e2583fd.
These two commits are breaking the refleak buildbots.
A number of places in the code base (notably ceval.c and frameobject.c) rely on mapping variable names to indices in the frame "locals plus" array (AKA fast locals), and thus opargs. Currently the compiler indirectly encodes that information on the code object as the tuples co_varnames, co_cellvars, and co_freevars. At runtime the dependent code must calculate the proper mapping from those, which isn't ideal and impacts performance-sensitive sections. This is something we can easily address in the compiler instead.
This change addresses the situation by replacing internal use of co_varnames, etc. with a single combined tuple of names in locals-plus order, along with a minimal array mapping each to its kind (local vs. cell vs. free). These two new PyCodeObject fields, co_fastlocalnames and co_fastlocalkinds, are not exposed to Python code for now, but co_varnames, etc. are still available with the same values as before (though computed lazily).
Aside from the (mild) performance impact, there are a number of other benefits:
* there's now a clear, direct relationship between locals-plus and variables
* code that relies on the locals-plus-to-name mapping is simpler
* marshaled code objects are smaller and serialize/de-serialize faster
Also note that we can take this approach further by expanding the possible values in co_fastlocalkinds to include specific argument types (e.g. positional-only, kwargs). Doing so would allow further speed-ups in _PyEval_MakeFrameVector(), which is where args get unpacked into the locals-plus array. It would also allow us to shrink marshaled code objects even further.
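The locals-plus ordering (arguments and plain locals, then cells, then free variables) is reflected in the existing, now lazily computed, tuples; a small illustration on current CPython:

```python
def outer(a, b):
    c = a + b          # becomes a cell because inner() closes over it
    def inner():
        return c
    return inner

# Arguments and plain locals come first, then cells, then free variables;
# fast-locals opargs index directly into that combined array.
print(outer.__code__.co_varnames)        # ('a', 'b', 'inner')
print(outer.__code__.co_cellvars)        # ('c',)
print(outer(1, 2).__code__.co_freevars)  # ('c',)
```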
https://bugs.python.org/issue43693
The invalid assignment rules are very delicate since the parser can
easily raise an invalid assignment when a keyword argument is provided.
As they are very deep into the grammar tree, it is very difficult to
specify in which contexts these rules can be used and in which they can't.
For that, we need to use a different version of the rule that doesn't do
error checking in those situations where we don't want the rule to raise
(keyword arguments and generator expressions).
We also need to check if we are in a left-recursive rule, as those can try
to eagerly advance the parser even if the parse will fail at the end of
the expression. Failing to do this allows the parser to start parsing a
call as a tuple and incorrectly identify a keyword argument as an
invalid assignment, before it realizes that it was not a tuple after all.
* Remove 'zombie' frames. We won't need them once we are allocating fixed-size frames.
* Add co_nlocalplus field to code object to avoid recomputing size of locals + frees + cells.
* Move locals, cells and freevars out of frame object into separate memory buffer.
* Use per-threadstate allocated memory chunks for local variables.
* Move globals and builtins from frame object to per-thread stack.
* Move (slow) locals from the frame object to the per-thread stack.
* Move internal frame functions to internal header.
The patch from bpo-44074 does not account for a possibly non-English locale and blindly greps for "HEAD branch" in a possibly localized text.
Automerge-Triggered-By: GH:pitrou
- Reformat the C API and ABI Versioning page (and extend/clarify a bit)
- Rewrite the stable ABI docs into a general text on C API Compatibility
- Add a list of Limited API contents, and notes for the individual items.
- Replace `Include/README.rst` with a link to a devguide page with the same info
"Zero cost" exception handling.
* Uses a lookup table to determine how to handle exceptions (see the disassembly sketch after this list).
* Removes the SETUP_FINALLY and POP_BLOCK block instructions, eliminating (most of) the runtime overhead of try statements.
* Reduces the size of the frame object by about 60%.
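The table is visible from Python (assuming CPython 3.11+, where dis prints an ExceptionTable section and code objects grow co_exceptiontable):

```python
import dis

def f():
    try:
        return 1 / 0
    except ZeroDivisionError:
        return None

# No SETUP_FINALLY/POP_BLOCK pairs in the bytecode; the handler ranges live
# in a side table consulted only when an exception is actually raised.
dis.dis(f)                            # output ends with an "ExceptionTable:" section
print(f.__code__.co_exceptiontable)   # the compact encoded form of that table
```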
- Remove HAVE_X509_VERIFY_PARAM_SET1_HOST check
- Update hashopenssl to require OpenSSL 1.1.1
- multissltests now test only OpenSSL > 1.1.0
- ALPN is always supported
- SNI is always supported
- Remove deprecated NPN code. Python wrappers are no-op.
- ECDH is always supported
- Remove OPENSSL_VERSION_1_1 macro
- Remove locking callbacks
- Drop PY_OPENSSL_1_1_API macro
- Drop HAVE_SSL_CTX_CLEAR_OPTIONS macro
- SSL_CTRL_GET_MAX_PROTO_VERSION is always defined now
- security level is always available now
- get_num_tickets is available with TLS 1.3
- X509_V_ERR MISMATCH is always available now
- Always set SSL_MODE_RELEASE_BUFFERS
- X509_V_FLAG_TRUSTED_FIRST is always available
- get_ciphers is always supported
- SSL_CTX_set_keylog_callback is always available
- Update Modules/Setup with static link example
- Mention PEP in whatsnew
- Drop 1.0.2 and 1.1.0 from GHA tests
* Use instruction offset, rather than bytecode offset. Streamlines interpreter dispatch a bit, and removes most EXTENDED_ARGs for jumps.
* Change some uses of PyCode_Addr2Line to PyFrame_GetLineNumber
The stable_abi.py script no longer parses macros. Macro targets can be
static inline functions which are not part of the stable ABI, only
part of the limited C API.
Run "make regen-limited-abi" to exclude PyType_HasFeature from
Doc/data/stable_abi.dat.
- [x] Build OpenSSL 1.1.1k for macOS
- [x] Build OpenSSL 1.1.1k for Windows
I have also updated the multissl tester and various CI configurations to use the latest OpenSSL. The versions were all over the place.
Signed-off-by: Christian Heimes <christian@python.org>
Automerge-Triggered-By: GH:tiran
Remove the pyarena.h header file with functions:
* PyArena_New()
* PyArena_Free()
* PyArena_Malloc()
* PyArena_AddPyObject()
These functions were undocumented, excluded from the limited C API,
and were only used internally by the compiler.
Add pycore_pyarena.h header. Rename functions:
* PyArena_New() => _PyArena_New()
* PyArena_Free() => _PyArena_Free()
* PyArena_Malloc() => _PyArena_Malloc()
* PyArena_AddPyObject() => _PyArena_AddPyObject()
Remove parser functions using the "struct _mod" type, because the
AST C API was removed:
* PyParser_ASTFromFile()
* PyParser_ASTFromFileObject()
* PyParser_ASTFromFilename()
* PyParser_ASTFromString()
* PyParser_ASTFromStringObject()
These functions were undocumented and excluded from the limited C
API.
Add pycore_parser.h internal header file. Rename functions:
* PyParser_ASTFromFileObject() => _PyParser_ASTFromFile()
* PyParser_ASTFromStringObject() => _PyParser_ASTFromString()
These functions are no longer exported (replace PyAPI_FUNC() with
extern).
Remove also _PyPegen_run_parser_from_file() function. Update
test_peg_generator to use _PyPegen_run_parser_from_file_pointer()
instead.
Remove the compiler functions using "struct _mod" type, because the
public AST C API was removed:
* PyAST_Compile()
* PyAST_CompileEx()
* PyAST_CompileObject()
* PyFuture_FromAST()
* PyFuture_FromASTObject()
These functions were undocumented and excluded from the limited C API.
Rename functions:
* PyAST_CompileObject() => _PyAST_Compile()
* PyFuture_FromASTObject() => _PyFuture_FromAST()
Moreover, _PyFuture_FromAST() is no longer exported (replace
PyAPI_FUNC() with extern). _PyAST_Compile() remains exported for
test_peg_generator.
Remove also compatibility functions:
* PyAST_Compile()
* PyAST_CompileEx()
* PyFuture_FromAST()
Rename Include/symtable.h to Include/internal/pycore_symtable.h,
don't export symbols anymore (replace PyAPI_FUNC and PyAPI_DATA with
extern) and rename functions:
* PyST_GetScope() to _PyST_GetScope()
* PySymtable_BuildObject() to _PySymtable_Build()
* PySymtable_Free() to _PySymtable_Free()
Remove PySymtable_Build(), Py_SymtableString() and
Py_SymtableStringObject() functions.
The Py_SymtableString() function was part of the stable ABI by mistake
but it could not be used, since the symtable.h header file was
excluded from the limited C API.
The Python symtable module remains available and is unchanged.
test_peg_generator now defines _Py_TEST_PEGEN macro when building C
code to not call PyAST_Validate() in Parser/pegen.c. Moreover, it
defines Py_BUILD_CORE_MODULE macro to get access to the internal
C API.
Remove "global_ast_state" from Python-ast.c when it's built by
test_peg_generator: always get the AST state from the current interpreter.
Add frozen modules to sys.stdlib_module_names. For example, add
"_frozen_importlib" and "_frozen_importlib_external" names.
Add "list_frozen" command to Programs/_testembed.
Move Include/{odictobject.h,parser_interface.h,picklebufobject.h,pydebug.h,pyfpe.h}
into Include/cpython/.
Parser: peg_api: include Python.h instead of parser_interface.h.
Add a new configure --without-static-libpython option to not build
the libpythonMAJOR.MINOR.a static library and not install the
python.o object file.
Fix smelly.py and stable_abi.py tools when libpython3.10.a is
missing.
* Add to the peg generator a new directive ('&&') that allows expecting
a token and hard-failing the parsing if the token is not found. This
allows quickly emitting syntax errors for missing tokens.
* Use the new grammar element to hard-fail if the ':' is missing before
suites.
Add a private list of all stdlib modules: _Py_module_names.
* Add Tools/scripts/generate_module_names.py script.
* Makefile: Add "make regen-module-names" command.
* setup.py: Add --list-module-names option.
* GitHub Action and Travis CI also run "make regen-module-names",
not only "make regen-all", to ensure that the module names remain
up to date.
The distutils bdist_wininst command deprecated in Python 3.8 has been
removed. The distutils bdist_wheel command is now recommended to
distribute binary packages on Windows.
* Remove Lib/distutils/command/bdist_wininst.py
* Remove PC/bdist_wininst/ project
* Remove Lib/distutils/command/wininst-*.exe programs
* Remove all references to bdist_wininst
On Fedora 31 gdb is using python 3.7.9, calling `proxyval` on an instance with a dictionary fails because of the `dict.iteritems` usage. This PR changes the code to be compatible with py2 and py3.
This change seemed small enough to not need an issue and news blurb; if one is required please let me know.
Automerge-Triggered-By: GH:benjaminp
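The compatibility trick amounts to something like this (a sketch of the general py2/py3 pattern, not the exact gdb libpython code):

```python
def iter_dict_items(d):
    # gdb may embed Python 2 (dict.iteritems) or Python 3 (dict.items);
    # fall back gracefully so proxyval-style code works on both.
    try:
        return d.iteritems()        # Python 2
    except AttributeError:
        return iter(d.items())      # Python 3
```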
* Tkinter functions and constructors which need a default root window
now raise RuntimeError with a descriptive message instead of an obscure
AttributeError or NameError if it is not created yet or cannot
be created automatically (see the sketch after this list).
* Add tests for all functions which use default root window.
* Fix import in the pynche script.
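For example (assuming Tk support is available; NoDefaultRoot() is the "cannot be created automatically" case):

```python
import tkinter

tkinter.NoDefaultRoot()
try:
    tkinter.StringVar()   # needs a default root window
except RuntimeError as exc:
    print(exc)            # descriptive message instead of AttributeError/NameError
```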
The smelly.py script now also checks the Python dynamic library and
extension modules, not only the Python static library. Also make the
script more verbose: it now explains what it does.
The GitHub Action job now builds Python with the libpython dynamic
library.
Fix a race condition in "make regen-all" when make -jN option is used
to run jobs in parallel. The clinic.py script now only uses atomic
writes to write files. Moreover, generated files are now left
unchanged if the content does not change, so as not to change the file
modification time.
The "make regen-all" command runs "make clinic" and "make
regen-importlib" targets:
* "make regen-importlib" builds object files (ex: Modules/_weakref.o)
from source files (ex: Modules/_weakref.c) and clinic files (ex:
Modules/clinic/_weakref.c.h)
* "make clinic" always rewrites all clinic files
(ex: Modules/clinic/_weakref.c.h)
Since there is no dependency between "clinic" and "regen-importlib"
Makefile targets, these two targets can be run in parallel. Moreover,
half of clinic.py file writes are not atomic and so there is a race
condition when "make regen-all" runs jobs in parallel using make -jN
option (which can be passed in MAKEFLAGS environment variable).
Fix clinic.py to make all file writes atomic:
* Add write_file() function to ensure that all file writes are
atomic: write into a temporary file and then use os.replace().
* Moreover, write_file() doesn't recreate or modify the file if the
content does not change, to avoid modifying the file modification
time (see the sketch after this list).
* Update test_clinic to verify these assertions with a functional
test.
* Remove Clinic.force attribute which was no longer used, whereas
Clinic.verify remains useful.
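A minimal sketch of the approach (hypothetical code, not the actual clinic.py write_file()):

```python
import os
import tempfile

def write_file(filename, new_contents):
    # Skip the write entirely when nothing changed, so the modification time
    # (and anything that depends on it) is left alone.
    try:
        with open(filename, encoding="utf-8") as f:
            if f.read() == new_contents:
                return
    except FileNotFoundError:
        pass
    # Atomic write: dump into a temporary file in the same directory, then
    # os.replace() it over the target so a parallel "make -jN" never sees a
    # half-written file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(filename) or ".")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(new_contents)
        os.replace(tmp, filename)
    except BaseException:
        os.unlink(tmp)
        raise
```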
Adds support to Tools/i18n/pygettext.py for gettext calls in f-strings. This process is done by parsing the f-strings, processing each value, and flagging the ones which contain a gettext call.
Co-authored-by: Batuhan Taskaya <batuhanosmantaskaya@gmail.com>
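A sketch of the idea using the ast module (hypothetical, assuming the conventional "_" gettext alias; the real pygettext.py logic differs in detail):

```python
import ast

source = """greeting = f'{_("Hello")}, {name}!'"""

for node in ast.walk(ast.parse(source)):
    if isinstance(node, ast.JoinedStr):                # an f-string
        for value in node.values:
            if isinstance(value, ast.FormattedValue):  # each {...} part
                for sub in ast.walk(value.value):
                    if (isinstance(sub, ast.Call)
                            and isinstance(sub.func, ast.Name)
                            and sub.func.id == "_"):
                        print("gettext call on line", sub.lineno)
```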
Left-recursive rules need to check for errors explicitly, since
even if the rule returns NULL, the parsing might continue and lead
to long-distance failures.
Co-authored-by: Pablo Galindo <Pablogsal@gmail.com>
Use PyLong_FromLong(0) and PyLong_FromLong(1) of the public C API
instead. For Python internals, _PyLong_GetZero() and _PyLong_GetOne()
of pycore_long.h can be used.
* Implement running the parser a second time for the error messages.
The first parser run is only responsible for detecting whether
there is a `SyntaxError` or not. If there isn't, the AST gets returned.
Otherwise, the parser is run a second time with all the `invalid_*`
rules enabled so that all the customized error messages get produced.
The original tool wasn't working right and it was simpler to create a new one, partially re-using some of the old code. At this point the tool runs properly on the master branch. (Try: ./python Tools/c-analyzer/c-analyzer.py analyze.) It takes ~40 seconds on my machine to analyze the full CPython code base.
Note that we'll need to iron out some OS-specific stuff (e.g. preprocessor). We're okay though since this tool isn't used yet in our workflow. We will also need to verify the analysis results in detail before activating the check in CI, though I'm pretty sure it's close.
https://bugs.python.org/issue36876
This commit reverts commit ac0333e1e1 as the original links are working again and they provide extended features such as comments and alternative versions.
Remove the global _Py_CheckRecursionLimit variable: it has been
replaced by ceval.recursion_limit of the PyInterpreterState
structure.
There is no need to keep the variable for the stable ABI, since
Py_EnterRecursiveCall() and Py_LeaveRecursiveCall() were not usable
in Python 3.8 and older: these macros accessed PyThreadState members,
whereas the PyThreadState structure is opaque in the limited C API.
* Add a new capability to the PEG parser to type variable assignments. For instance:
```
| a[asdl_stmt_seq*]=';'.small_stmt+ [';'] NEWLINE { a }
```
* Add new sequence types from the asdl definition (automatically generated)
* Make `asdl_seq` type a generic aliasing pointer type.
* Create a new `asdl_generic_seq` for the generic case using `void*`.
* The old `asdl_seq_GET`/`asdl_seq_SET` macros are now typed.
* New `asdl_seq_GET_UNTYPED`/`asdl_seq_SET_UNTYPED` macros for dealing with generic sequences.
* Changes all possible `asdl_seq` types to use specific versions everywhere.
NuGet automatically includes .props file from the build directory in the
target using the package, but only if the .props file has the correct
name: it must be $(id).props
Rename python.props correspondingly in all the nuspec variants. Also
keep python.props as it was for backward compatibility.
Currently, empty sequences in gather rules make the conditional for
gather rules fail as empty sequences evaluate as "False". We need to
explicitly check for "None" (the failure condition) to avoid false
negatives.
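In other words (a generic illustration of the truthiness pitfall, not the generated parser code):

```python
seq = []            # a successful parse that happened to gather nothing
if not seq:         # wrong: an empty (but valid) result looks like a failure
    print("treated as failure")
if seq is None:     # right: only a real parse failure is None
    print("parse failed")
```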
Prevent installation on Windows 8 and earlier.
Download UCRT on demand when required (non-updated Windows 8.1 only)
Add reference to py launcher to post-install message
Replace MIDL-generated file with manual GUID definition.
Use the same .def file for release and debug builds.
Update setup build to support latest toolset