This builds HACL* as a library in one place.
A follow-up to #101707, which broke some WASM builds. This fixes 2/4 of them, but the Emscripten toolchain in the others doesn't deduplicate linker arguments and errors out. A follow-on PR will address those.
Replace the builtin hashlib implementations of SHA2-384 and SHA2-512
originally from LibTomCrypt with formally verified, side-channel resistant
code from the [HACL*](https://github.com/hacl-star/hacl-star/) project.
The builtins remain a fallback only used when OpenSSL does not provide them.
* Write output and metadata in a single run
This halves the time to run the cases generator
(most of the time goes into parsing the input).
* Declare or define opcode metadata based on NEED_OPCODE_TABLES
* Use generated metadata for stack_effect()
* compile.o depends on opcode_metadata.h
* Return -1 from _PyOpcode_num_popped/pushed for unknown opcode
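For context, the same per-opcode stack effects are observable from Python through the public `dis` module; a minimal sanity check (this uses only stable `dis` APIs, not the new internal tables):
```
import dis

# POP_TOP always removes exactly one value from the stack.
assert dis.stack_effect(dis.opmap["POP_TOP"]) == -1

# LOAD_CONST takes an oparg and pushes one value.
assert dis.stack_effect(dis.opmap["LOAD_CONST"], 0) == 1
```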
Fix creating install directories in `make sharedinstall` if they exist already outside `DESTDIR`. The previous make rules assumed that the directories would be created via a dependency on a rule for `$(DESTSHARED)` that did not fire if the directory did exist outside `$(DESTDIR)`.
While technically `$(DESTDIR)` could be prepended to the rule name, moving the directory-creation rules straight into the `sharedinstall` rule fits common practice better. Since the rule explicitly checks whether the individual directories exist anyway, there is no reason to rely on make determining that implicitly as well.
This is part of a series replacing the hashlib primitives (for the non-OpenSSL case) with verified implementations from HACL*. This is the first PR in the series, and it focuses specifically on SHA2-256 and SHA2-224.
This PR imports Hacl_Streaming_SHA2 into the Python tree. This is the HACL* implementation of SHA2, which combines a core implementation of SHA2 with a layer of buffer management that allows updating the digest with any number of bytes. This supersedes the previous implementation in the tree.
@franziskuskiefer was kind enough to benchmark the changes: in addition to being verified (thus providing significant safety and security improvements), this implementation also provides a sizeable performance boost!
```
---------------------------------------------------------------
Benchmark                 Time        CPU   Iterations
---------------------------------------------------------------
Sha2_256_Streaming     3163 ns    3160 ns       219353   // this PR
LibTomCrypt_Sha2_256   5057 ns    5056 ns       136234   // library used by Python currently
```
The changes in this PR are as follows:
- import the subset of HACL* that covers SHA2-256/224 into `Modules/_hacl`
- rewire sha256module.c to use the HACL* implementation
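Nothing changes at the Python level; the HACL* code sits behind the usual `hashlib` interface, including incremental updates (which is exactly what the streaming/buffer-management layer handles):
```
import hashlib

h = hashlib.sha256()
# Feed data in arbitrarily sized chunks; the streaming layer buffers as needed.
for chunk in (b"hello ", b"world", b"!"):
    h.update(chunk)

# One-shot hashing gives the same digest.
assert h.hexdigest() == hashlib.sha256(b"hello world!").hexdigest()
```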
Co-authored-by: Gregory P. Smith [Google LLC] <greg@krypto.org>
Co-authored-by: Erlend E. Aasland <erlend.aasland@protonmail.com>
(These aren't used yet, but may be coming soon,
and it's easier to keep this tool the same between branches.)
Added a sanity check for all this to compile.c.
Co-authored-by: Irit Katriel <iritkatriel@yahoo.com>
Add COMPILEALL_OPTS variable in Makefile to override compileall
options (default: -j0) in "make install". Also merge the compileall
commands into a single command that builds PYC files for all
optimization levels (0, 1, 2) at once.
Co-authored-by: Gregory P. Smith <greg@krypto.org>
The global allocators were stored in 3 static global variables: _PyMem_Raw, _PyMem, and _PyObject. State for the "small block" allocator was stored in another 13. That makes a total of 16 global variables. We are moving all 16 to the _PyRuntimeState struct as part of the work for gh-81057. (If PEP 684 is accepted then we will follow up by moving them all to PyInterpreterState.)
https://github.com/python/cpython/issues/81057
We do the following:
* move the generated _PyUnicode_InitStaticStrings() to its own file
* move the generated _PyStaticObjects_CheckRefcnt() to its own file
* include pycore_global_objects.h in extension modules instead of pycore_runtime_init.h
These changes help us avoid including things that aren't needed.
https://github.com/python/cpython/issues/90868
Remove the distutils package. It was deprecated in Python 3.10 by PEP
632 "Deprecate distutils module". For projects still using distutils
and cannot be updated to something else, the setuptools project can
be installed: it still provides distutils.
* Remove Lib/distutils/ directory
* Remove test_distutils
* Remove references to distutils
* Skip test_check_c_globals and test_peg_generator since they use
distutils
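For code that still imports distutils, the fallback described above looks roughly like this (a sketch; installing setuptools is what actually provides the replacement package):
```
try:
    import distutils  # removed from the standard library by this change
except ImportError:
    raise SystemExit(
        "distutils is gone; 'pip install setuptools' still provides a "
        "distutils package, or migrate to another build tool."
    )
```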
This check was introduced in commit 5884449539
so that setup.py could determine whether readline is already linked against
curses or tinfo; that setup.py logic is no longer present.
The switch cases (really TARGET(opcode) macros) have been moved from ceval.c to generated_cases.c.h. That file is generated from instruction definitions in bytecodes.c (which impersonates a C file so the C code it contains can be edited without custom support in e.g. VS Code).
The code generator lives in Tools/cases_generator (it has a README.md explaining how it works). The DSL used to describe the instructions is a work in progress, described in https://github.com/faster-cpython/ideas/blob/main/3.12/interpreter_definition.md.
This is surely a work-in-progress. An easy next step could be auto-generating super-instructions.
**IMPORTANT: Merge Conflicts**
If you get a merge conflict for instruction implementations in ceval.c, your best bet is to port your changes to bytecodes.c. That file looks almost the same as the original cases, except instead of `TARGET(NAME)` it uses `inst(NAME)`, and the trailing `DISPATCH()` call is omitted (the code generator adds it automatically).
The test failed on a buildbot because the pointer was only 7 hex characters. To be safe,
I bumped it down to 3: 4 in case we have 32-bit platforms, and 3 in case the pointer is very small.
Relevant tests moved from test_exceptions to test_traceback to be able to
compare both implementations.
Co-authored-by: Carl Friedrich Bolz-Tereick <cfbolz@gmx.de>
⚠️⚠️ Note for reviewers, hackers and fellow systems/low-level/compiler engineers ⚠️⚠️
If you have a lot of experience with this kind of shenanigans and want to improve the **first** version, **please make a PR against my branch** or **reach out by email** or **suggest code changes directly on GitHub**.
If you have any **refinements or optimizations**, please wait until the first version is merged before hacking on or proposing them, so we can keep this PR productive.
- support EMSDK tot-upstream and git releases
- allow WASM assets for wasm64-emscripten and WASI. This makes single-file distributions on WASI easier.
- decouple WASM assets from browser builds
* Add support for the BOLT post-link binary optimizer
Using [bolt](https://github.com/llvm/llvm-project/tree/main/bolt)
provides a fairly large speedup without any code or functionality
changes. It provides roughly a 1% speedup on pyperformance, and a
4% improvement on the Pyston web macrobenchmarks.
It is gated behind an `--enable-bolt` configure arg because not all
toolchains and environments are supported. It has been tested on a
Linux x86_64 toolchain, using llvm-bolt built from the LLVM 14.0.6
sources (their binary distribution of this version did not include bolt).
Compared to [a previous attempt](https://github.com/faster-cpython/ideas/issues/224),
this commit uses bolt's preferred "instrumentation" approach and adds some non-PIE
flags which enable much better optimizations from bolt.
The effects of this change are a bit more dependent on CPU microarchitecture
than other changes, since it optimizes i-cache behavior which seems
to be a bit more variable between architectures. The 1%/4% numbers
were collected on an Intel Skylake CPU; on an AMD Zen 3 CPU I
got a slightly larger speedup (2%/4%), and on a c6i.xlarge EC2 instance
a slightly smaller one (1%/3%).
The low speedup on pyperformance is not entirely unexpected, because
BOLT improves i-cache behavior, and the benchmarks in the pyperformance
suite are small and tend to fit in i-cache.
This change uses the existing pgo profiling task (`python -m test --pgo`),
though I was able to measure about a 1% macrobenchmark improvement by
using the macrobenchmarks as the training task. I personally think that
both the PGO and BOLT tasks should be updated to use macrobenchmarks,
but for the sake of splitting up the work this PR uses the existing pgo task.
* Simplify the build flags
* Add a NEWS entry
* Update Makefile.pre.in
Co-authored-by: Dong-hee Na <donghee.na92@gmail.com>
* Update configure.ac
Co-authored-by: Dong-hee Na <donghee.na92@gmail.com>
* Add myself to ACKS
* Add docs
* Other review comments
* fix tab/space issue
* Make it more clear that --enable-bolt is experimental
* Add link to bolt's github page
Co-authored-by: Dong-hee Na <donghee.na92@gmail.com>
We only statically initialize for core code and builtin modules. Extension modules still create
the tuple at runtime. We'll solve that part of interpreter isolation separately.
This change includes generated code. The non-generated changes are in:
* Tools/clinic/clinic.py
* Python/getargs.c
* Include/cpython/modsupport.h
* Makefile.pre.in (re-generate global strings after running clinic)
* very minor tweaks to Modules/_codecsmodule.c and Python/Python-tokenize.c
All other changes are generated code (clinic, global strings).
Remove the "configure --with-cxx-main" build option: it didn't work
for many years. Remove the MAINCC variable from configure and
Makefile.
The MAINCC variable was added by the issue gh-42471: commit
0f48d98b74. Previously, --with-cxx-main
was named --with-cxx.
Keep the CXX and LDCXXSHARED variables, even though they are no longer used
by the Python build system.
* gh-95218: Move tests for importlib.resources into test_importlib.resources.
* Also update makefile
* Include test_importlib/resources in code ownership rule.
The `_testcapimodule.c` file is getting too large to work with effectively.
This PR lays out a general structure of how tests can be split up, with more splitting to come later if the structure is OK.
Vectorcall tests aren't the biggest issue -- it's just an area I want to work on next, so I'm starting here.
An issue specific to vectorcall tests is that it wasn't clear that e.g. `MethodDescriptor2` is related to testing vectorcall: the `/* Test PEP 590 */` section had an ambiguous end. A separate file should make things like this much clearer.
OTOH, for some pieces it might not be clear where they should be -- I left `meth_fastcall` with tests of the other calling conventions. IMO, even with the ambiguity it's still worth it to split the huge file up.
I'm not sure about the buildsystem changes, hopefully CI will tell me what's wrong.
@vstinner, @markshannon: Do you think this is a good idea?
Automerge-Triggered-By: GH:encukou
Add script ``Tools/scripts/check_modules.py`` to check and validate builtin
and shared extension modules. The script also handles ``Modules/Setup`` and
will eventually replace ``setup.py``.
Co-authored-by: Victor Stinner <vstinner@python.org>
Co-authored-by: Erlend Egeberg Aasland <erlend.aasland@protonmail.com>
It makes it easier to look for module states in sysconfig without
special casing suffixes "_CFLAGS", "_DEPS", "_LDFLAGS", "_OBJS",
and "CTYPES_MALLOC_CLOSURE".
* Move Lib/tkinter/test/test_tkinter/ to Lib/test/test_tkinter/.
* Move Lib/tkinter/test/test_ttk/ to Lib/test/test_ttk/.
* Add Lib/test/test_ttk/__init__.py based on test_ttk_guionly.py.
* Add Lib/test/test_tkinter/__init__.py
* Remove old Lib/test/test_tk.py.
* Remove old Lib/test/test_ttk_guionly.py.
* Add __main__ sub-modules.
* Update imports and update references to rename files.
* Move Lib/lib2to3/tests/ to Lib/test/test_lib2to3/.
* Remove Lib/test/test_lib2to3.py.
* Update imports.
* all_project_files(): use different paths and sort files
to make the tests more reproducible.
* Update references to tests.
Move the following functions and type from frameobject.h to pyframe.h,
so that the standard <Python.h> provides frame getter functions:
* PyFrame_Check()
* PyFrame_GetBack()
* PyFrame_GetBuiltins()
* PyFrame_GetGenerator()
* PyFrame_GetGlobals()
* PyFrame_GetLasti()
* PyFrame_GetLocals()
* PyFrame_Type
Remove #include "frameobject.h" from many C files. It's no longer
needed.
All install targets use the "all" target as synchronization point to
prevent race conditions with PGO builds. PGO builds use recursive make,
which can lead to two parallel `./python setup.py build` processes that
step on each other's toes.
The "test" targets now correctly compile a PGO build in a clean repo.
Remove the token.h header file. There was never any public tokenizer
C API. The token.h header file was only designed to be used by Python
internals.
Move Include/token.h to Include/internal/pycore_token.h. Including
this header file now requires that the Py_BUILD_CORE macro is
defined. It no longer checks for the Py_LIMITED_API macro.
Rename functions:
* PyToken_OneChar() => _PyToken_OneChar()
* PyToken_TwoChars() => _PyToken_TwoChars()
* PyToken_ThreeChars() => _PyToken_ThreeChars()
`HOSTRUNNER` is a program which can be used to run `BUILDPYTHON` for the host platform (for example, `python.js` requires `node`).
Also change dependencies from `build_all` to `all` so that targets which can't build everything (e.g. WASM) can still run `buildbottest` and `pythoninfo`.
cc @tiran
Move the following API from Include/opcode.h (public C API) to a new
Include/internal/pycore_opcode.h header file (internal C API):
* EXTRA_CASES
* _PyOpcode_Caches
* _PyOpcode_Deopt
* _PyOpcode_Jump
* _PyOpcode_OpName
* _PyOpcode_RelativeJump
Fix signal.NSIG value on FreeBSD to accept signal numbers greater
than 32, like signal.SIGRTMIN and signal.SIGRTMAX.
* Add Py_NSIG constant.
* Add pycore_signal.h internal header file.
* _Py_Sigset_Converter() now includes the range of valid signals in
the error message.
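A quick Python-level check of what this fixes; `SIGRTMIN`/`SIGRTMAX` only exist on platforms that define real-time signals, so the example guards for that:
```
import signal

print("NSIG =", signal.NSIG)   # upper bound on valid signal numbers
if hasattr(signal, "SIGRTMAX"):
    # With the fix, the real-time signal range fits below NSIG on FreeBSD too.
    assert int(signal.SIGRTMIN) < int(signal.SIGRTMAX) < signal.NSIG
print(max(signal.valid_signals()))  # highest valid signal on this platform
```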
- drop unnecessary ``=1`` suffix from Emscripten flags
- drop unnecessary ``-sWASM`` flag for side modules
- rename ``build_platform`` to ``build_wasm``. I introduced the target
for WASM builds a couple of months ago.
- fix ``--enable-test-modules`` for browser builds
* Move the code for generating Modules/_sre/sre_constants.h from
Lib/re/_constants.py into a separate script
Tools/scripts/generate_sre_constants.py.
* Add target `regen-sre` in the makefile.
* Make target `regen-all` depend on `regen-sre`.
Remove the Include/code.h header file. C extensions should only
include the main <Python.h> header file.
Python.h now directly includes Include/cpython/code.h instead.
The side effect of this bug was that venv environments directly
used the main interpreter instead of the intermediate stub executable,
which can cause problems when a script uses system APIs that
require the use of an application bundle.
This effectively reverts the Makefile change in gh-31637. I've added some notes so it is more clear what is going on.
We also update the "Check if generated files are up to date" job to run "make regen-deepfreeze" to ensure "make regen-global-objects" catches deepfreeze.c.
https://bugs.python.org/issue47146
- Remove ``--with-tcltk-*`` options from `configure`
- Use pkg-config to detect `_tkinter` dependencies (Tcl/Tk, X11)
- Manual override via environment variables `TCLTK_CFLAGS` and `TCLTK_LIBS`
We have to run "make regen-deepfreeze" before running Tools/scripts/generate-global-objects.py; otherwise we will miss any changes to global objects in deep-frozen modules (which aren't committed in the repo). However, building $(PYTHON_FOR_FREEZE) fails if one of its source files had a global object (e.g. via _Py_ID(...)) added or removed, without generate-global-objects.py running first. So "make regen-global-objects" would sometimes fail.
We solve this by running generate-global-objects.py before *and* after "make regen-deepfreeze". To speed things up and cut down on noise, we also avoid updating the global objects files if there are no changes to them.
https://bugs.python.org/issue46712
- Add requires_fork and requires_subprocess to more tests
- Skip extension import tests if dlopen is not available
- Don't assume that _testcapi is a shared extension
- Skip a lot of socket tests that don't work on Emscripten
- Skip mmap tests, mmap emulation is incomplete
- venv does not work yet
- Cannot get libc from executable
The "entire" test suite is now passing on Emscripten with EMSDK from git head (91 suites are skipped).
Move forward declarations of Python C API types to a new pytypedefs.h
header file to solve interdependency issues between header files.
pytypedefs.h contains forward declarations of the following types:
* PyCodeObject
* PyFrameObject
* PyGetSetDef
* PyInterpreterState
* PyLongObject
* PyMemberDef
* PyMethodDef
* PyModuleDef
* PyObject
* PyThreadState
* PyTypeObject
Rename Include/buffer.h header file to Include/pybuffer.h to avoid
conflicts with projects having an existing "buffer.h" header file.
* Include pybuffer.h before object.h in Python.h.
* Remove #include "buffer.h" from Include/cpython/object.h.
* Add a forward declaration of the PyObject type in pybuffer.h to fix
an inter-dependency issue.
We're no longer using _Py_IDENTIFIER() (or _Py_static_string()) in any core CPython code. It is still used in a number of non-builtin stdlib modules.
The replacement is: PyUnicodeObject (not pointer) fields under _PyRuntimeState, statically initialized as part of _PyRuntime. A new _Py_GET_GLOBAL_IDENTIFIER() macro facilitates lookup of the fields (along with _Py_GET_GLOBAL_STRING() for non-identifier strings).
https://bugs.python.org/issue46541#msg411799 explains the rationale for this change.
The core of the change is in:
* (new) Include/internal/pycore_global_strings.h - the declarations for the global strings, along with the macros
* Include/internal/pycore_runtime_init.h - added the static initializers for the global strings
* Include/internal/pycore_global_objects.h - where the struct in pycore_global_strings.h is hooked into _PyRuntimeState
* Tools/scripts/generate_global_objects.py - added generation of the global string declarations and static initializers
I've also added a --check flag to generate_global_objects.py (along with make check-global-objects) to check for unused global strings. That check is added to the PR CI config.
The remainder of this change updates the core code to use _Py_GET_GLOBAL_IDENTIFIER() instead of _Py_IDENTIFIER() and the related _Py*Id functions (likewise for _Py_GET_GLOBAL_STRING() instead of _Py_static_string()). This includes adding a few functions where there wasn't already an alternative to _Py*Id(), replacing the _Py_Identifier * parameter with PyObject *.
The following are not changed (yet):
* stop using _Py_IDENTIFIER() in the stdlib modules
* (maybe) get rid of _Py_IDENTIFIER(), etc. entirely -- this may not be doable, as at least one package on PyPI is using this (private) API
* (maybe) intern the strings during runtime init
https://bugs.python.org/issue46541
This change is a prerequisite for generating code for other global objects (like strings in gh-30928).
(We borrowed some code from Tools/scripts/deepfreeze.py.)
https://bugs.python.org/issue46541
When Python is built without --enable-shared, the "python" program is
now linked to object files, rather than being linked to the Python
library (libpython.a), to make sure that all symbols are exported.
Previously, the linker omitted some symbols like the Py_FrozenMain()
function.
When Python is configured with --without-static-libpython, the Python
static library (libpython.a) is no longer built.
* Check --without-static-libpython earlier in configure.ac
* Add LINK_PYTHON_OBJS and LINK_PYTHON_DEPS variables to Makefile.
* test_capi now ensures that the "Py_FrozenMain" symbol is exported.
The array of small PyLong objects has been statically declared. Here I also statically initialize them. Consequently they are no longer initialized dynamically during runtime init.
I've also moved them under a new sub-struct in _PyRuntimeState, in preparation for static allocation and initialization of other global objects.
https://bugs.python.org/issue45953
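These are the objects behind CPython's small-integer cache; the behavior is unchanged by this commit and is easy to observe (a CPython implementation detail, shown only to illustrate what is now statically allocated):
```
# CPython caches ints in a small range (currently -5..256): equal values in
# that range are the same object, larger ones generally are not.
a, b = 256, 256
x, y = int("300"), int("300")   # constructed at runtime to avoid constant folding
print(a is b)   # True  (both names refer to the cached singleton)
print(x is y)   # False (two distinct objects, on CPython)
```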
This change consists strictly of renames and moving code around. It helps in the following ways:
* ensures type-related init functions focus strictly on one of the three aspects (state, objects, types)
* passes in PyInterpreterState * to all those functions, simplifying work on moving types/objects/state to the interpreter
* consistent naming conventions help make what's going on more clear
* keeping API related to a type in the corresponding header file makes it more obvious where to look for it
https://bugs.python.org/issue46008
* Check NS API return values for NULL to prevent segfault in
``_bootstrap_python``.
* Set modPathInitialized to 1 so the ``decode_to_dict`` path is used.
Signed-off-by: Christian Heimes <christian@python.org>
The build system now uses a :program:`_bootstrap_python` interpreter for
freezing and deepfreezing again. To speed up the build process, the build
tools :program:`_bootstrap_python` and :program:`_freeze_module` are no
longer built with LTO.
Cross building depends on a build Python interpreter, which must have the
same version and bytecode as the target host Python.
The getpath.py file is frozen at build time and executed as code over a namespace. It is never imported, nor is it meant to be importable or reusable. However, it should be easier to read, modify, and patch than the previous code.
This commit attempts to preserve every previously tested quirk, but these may be changed in the future to better align platforms.
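"Executed as code over a namespace" means roughly the following; this is a simplified, hypothetical sketch (the real inputs and result keys are wired up from C in the path-configuration code, not shown here):
```
# Stand-in for the frozen getpath.py code object.
GETPATH_SOURCE = "prefix = exec_prefix = base + '/usr/local'"

ns = {"base": "", "PLATLIBDIR": "lib"}           # inputs supplied by the caller
exec(compile(GETPATH_SOURCE, "<getpath>", "exec"), ns)
print(ns["prefix"], ns["exec_prefix"])           # results read back by the caller
```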
The presence of frozen module headers in srcdir interferes with out-of-tree
builds. Make considers headers in srcdir up to date, but later builds do
not use VPATH to locate files. "make clean" now removes the headers, too.
Also remove stale ``_bootstrap_python`` from .gitignore.
Instead we use $(PYTHON_FOR_REGEN) .../deepfreeze.py with the
frozen .h file as input, as we did for Windows in bpo-45850.
We also get rid of the code that generates the .h files
when make regen-frozen is run (i.e., .../make_frozen.py),
and the MANIFEST file.
Restore Python 3.8 and 3.9 as Windows host Python again
Co-authored-by: Kumar Aditya <59607654+kumaraditya303@users.noreply.github.com>
* Make internal APIs that take PyFrameConstructor take a PyFunctionObject instead.
* Add reference to function to frame, borrow references to builtins and globals.
* Add COPY_FREE_VARS instruction to allow specialization of calls to inner functions.
``configure`` now uses a standardized format to forward state, compiler
flags, and linker flags to ``Makefile``, ``setup.py``, and
``Modules/Setup``. ``makesetup`` uses the new variables by default if a
module line does not contain any compiler or linker flags. ``setup.py``
has a new function ``addext()``.
For a module ``egg``, configure adds:
* ``MODULE_EGG`` with value yes, missing, disabled, or n/a
* ``MODULE_EGG_CFLAGS``
* ``MODULE_EGG_LDFLAGS``
``Makefile.pre.in`` may also provide ``MODULE_EGG_DEPS`` that lists
dependencies such as header files and static libs.
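Because these variables land in the generated build data, module state can be inspected from Python; a sketch assuming the ``MODULE_<NAME>`` entries appear in ``sysconfig.get_config_vars()`` as described above:
```
import sysconfig

SUFFIXES = ("_CFLAGS", "_DEPS", "_LDFLAGS", "_OBJS")
for name, value in sorted(sysconfig.get_config_vars().items()):
    if name.startswith("MODULE_") and not name.endswith(SUFFIXES):
        # value should be one of: yes, missing, disabled, n/a
        print(name[len("MODULE_"):].lower(), "->", value)
```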
Signed-off-by: Christian Heimes <christian@python.org>
Settings for :mod:`pyexpat` C extension are now detected by ``configure``.
The bundled ``expat`` library is built in ``Makefile``.
Signed-off-by: Christian Heimes <christian@python.org>
Settings for :mod:`decimal` internal C extension are now detected by
:program:`configure`. The bundled `libmpdec` library is built in
``Makefile``.
Signed-off-by: Christian Heimes <christian@python.org>
This gains 10% or more in startup time for `python -c pass` on UNIX-ish systems.
The Makefile.pre.in generating code builds on Eric's work for bpo-45020, but the .c file generator is new.
Windows version TBD.
SGI_ABI support was removed in [1] but this variable was never removed
from the makefile. Currently, it is just a stale variable that never gets
replaced by the configure script.
[1] https://github.com/python/cpython/pull/3294
Signed-off-by: Filipe Laíns <lains@riseup.net>
Some test cases don't work when test modules are static extensions.
Add dependency on Modules/config.c to trigger a rebuild whenever a
module build type is changed.
``makesetup`` puts shared extensions into ``Modules/`` directory. Create
symlinks from pybuilddir so the extensions can be imported.
Note: It is not possible to use the content of pybuilddir.txt as a build
target. Makefile evaluates target variables in the first pass. The
pybuilddir.txt file does not exist at that point.
* record which modules are built as shared extensions
* put object files in same directory as source files
* remove dependency on deleted _math.c
Signed-off-by: Christian Heimes <christian@python.org>
``setup.py`` and ``makesetup`` now track build dependencies on all Python
header files and module specific header files.
Signed-off-by: Christian Heimes <christian@python.org>
The :mod:`math` and :mod:`cmath` implementations now require a C99-compatible
``libm`` and no longer ship with workarounds for missing acosh, asinh,
expm1, and log1p functions.
The changeset also removes ``_math.c`` and moves the last remaining
workaround into ``_math.h``. This simplifies static builds with
``Modules/Setup`` and resolves symbol conflicts.
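For readers unfamiliar with the functions the old workarounds emulated, they exist for accuracy near zero, e.g.:
```
import math

x = 1e-12
print(math.exp(x) - 1.0)   # suffers cancellation error for tiny x
print(math.expm1(x))       # accurate exp(x) - 1
print(math.log1p(x))       # accurate log(1 + x)
```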
Co-authored-by: Mark Dickinson <mdickinson@enthought.com>
Co-authored-by: Brett Cannon <brett@python.org>
Signed-off-by: Christian Heimes <christian@python.org>
Move Include/longobject.h non-limited API to a new
Include/cpython/longobject.h header file.
Move the following definitions to the internal C API:
* _PyLong_DigitValue
* _PyLong_FormatAdvancedWriter()
* _PyLong_FormatWriter()
Split header files to move the non-limited API to Include/cpython/:
* Include/warnings.h => Include/cpython/warnings.h
* Include/weakrefobject.h => Include/cpython/weakrefobject.h
Exclude PyWeakref_GET_OBJECT() from the limited C API. It never
worked since the PyWeakReference structure is opaque in the limited C
API.
Move _PyWarnings_Init() and _PyErr_WarnUnawaitedCoroutine() to the
internal C API.
Rename Include/namespaceobject.h to
Include/internal/pycore_namespace.h.
The _testmultiphase extension is now built with the
Py_BUILD_CORE_MODULE macro defined to access _PyNamespace_Type.
object.c: remove unused "pycore_context.h" include.
Move classobject.h, context.h, genobject.h and longintrepr.h header
files from Include/ to Include/cpython/.
Remove redundant "#ifndef Py_LIMITED_API" in context.h.
Remove explicit #include "longintrepr.h" in C files. It's not needed,
Python.h already includes it.
Move Include/pystrhex.h to Include/internal/pycore_strhex.h.
The header file only contains private functions.
The following C extensions are now built with Py_BUILD_CORE_MODULE
macro defined to get access to the internal C API:
* _blake2
* _hashopenssl
* _md5
* _sha1
* _sha3
* _ssl
* binascii
I've added a number of test-only modules. Some of those cases are covered by the recently frozen stdlib modules (and some will be once we add encodings back in). However, I figured we'd play it safe by having a set of modules guaranteed to be there during tests.
https://bugs.python.org/issue45020
* Makefile.pre.in: Add $(srcdir) when needed, remove it when it was
used by mistake.
* freeze_modules.py tool uses ./Programs/_freeze_module if the
executable doesn't exist in the source tree.
The main advantage is that the files will no longer show up in diffs and PRs. That means, for a PR, the number of files / lines changed will more clearly reflect the actual change. (This is essentially an un-revert of gh-28375.)
https://bugs.python.org/issue45020
Here's one more small cleanup that should have been in PR gh-28319. We eliminate stdout side effects from importing the frozen __hello__ module, and update tests accordingly. We also move the module's source file from Tools/freeze/flag.py into Lib/.
https://bugs.python.org/issue45019
Doing this provides significant performance gains for runtime startup (~15% with all the imported modules frozen). We don't yet freeze all the imported modules because there are a few hiccups in the build systems we need to sort out first. (See bpo-45186 and bpo-45188.)
Note that in PR GH-28320 we added a command-line flag (-X frozen_modules=[on|off]) that allows users to opt out of (or into) using frozen modules. The default is still "off" but we will change it to "on" as soon as we can do it in a way that does not cause contributors pain.
https://bugs.python.org/issue45020
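A quick way to see whether a stdlib module was loaded from the frozen set; frozen modules report 'frozen' as their spec origin instead of a file path:
```
# Compare: python -X frozen_modules=on ...  vs  python -X frozen_modules=off ...
import os

print(os.__name__, "->", os.__spec__.origin)   # 'frozen' or a path to os.py
```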
There are a few things I missed in gh-27980. This is a follow-up that will make subsequent PRs cleaner. It includes fixes to tests and tools that reference the frozen modules.
https://bugs.python.org/issue45019
Frozen modules must be added to several files in order to work properly. Before this change this had to be done manually. Here we add a tool to generate the relevant lines in those files instead. This helps us avoid mistakes and omissions.
https://bugs.python.org/issue45019
* Convert "specials" array to InterpreterFrame struct, adding f_lasti, f_state and other non-debug FrameObject fields to it.
* Refactor calls, pushing the call to the interpreter upward toward _PyEval_Vector.
* Compute f_back when on thread stack, only filling in value when frame object outlives stack invocation.
* Move ownership of InterpreterFrame in generator from frame object to generator object.
* Do not create frame objects for Python calls.
* Do not create frame objects for generators.
* Specialize LOAD_ATTR with LOAD_ATTR_SLOT and LOAD_ATTR_SPLIT_KEYS
* Move dict-common.h to internal/pycore_dict.h
* Add LOAD_ATTR_WITH_HINT specialized opcode.
* Quicken in function if loopy
* Specialize LOAD_ATTR for module attributes.
* Add specialization stats
* Add co_firstinstr field to code object.
* Implement barebones quickening.
* Use non-quickened bytecode when tracing.
* Add NEWS item
* Add new file to Windows build.
* Don't specialize instructions with EXTENDED_ARG.
This allows reliably forcing macOS universal2 framework builds
to run under Rosetta 2 Intel-64 emulation on Apple Silicon Macs
if needed for testing or when universal2 wheels are not yet
available.
Add pycore_moduleobject.h internal header file with static inline
functions to access module members:
* _PyModule_GetDict()
* _PyModule_GetDef()
* _PyModule_GetState()
These functions don't check at runtime if their argument has a valid
type and can be inlined even if Python is not built with LTO.
_PyType_GetModuleByDef() uses _PyModule_GetDef().
Replace PyModule_GetState() with _PyModule_GetState() in the following
extension modules, which are considered performance sensitive:
* _abc
* _functools
* _operator
* _pickle
* _queue
* _random
* _sre
* _struct
* _thread
* _winapi
* array
* posix
The following extensions are now built with the Py_BUILD_CORE_MODULE
macro defined, to be able to use the internal pycore_moduleobject.h
header: _abc, array, _operator, _queue, _sre, _struct.
When printing AttributeError, PyErr_Display will offer suggestions of similar
attribute names in the object that the exception was raised from:
>>> collections.namedtoplo
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'collections' has no attribute 'namedtoplo'. Did you mean: namedtuple?
Remove the pyarena.h header file with functions:
* PyArena_New()
* PyArena_Free()
* PyArena_Malloc()
* PyArena_AddPyObject()
These functions were undocumented, excluded from the limited C API,
and were only used internally by the compiler.
Add pycore_pyarena.h header. Rename functions:
* PyArena_New() => _PyArena_New()
* PyArena_Free() => _PyArena_Free()
* PyArena_Malloc() => _PyArena_Malloc()
* PyArena_AddPyObject() => _PyArena_AddPyObject()
Remove parser functions using the "struct _mod" type, because the
AST C API was removed:
* PyParser_ASTFromFile()
* PyParser_ASTFromFileObject()
* PyParser_ASTFromFilename()
* PyParser_ASTFromString()
* PyParser_ASTFromStringObject()
These functions were undocumented and excluded from the limited C
API.
Add pycore_parser.h internal header file. Rename functions:
* PyParser_ASTFromFileObject() => _PyParser_ASTFromFile()
* PyParser_ASTFromStringObject() => _PyParser_ASTFromString()
These functions are no longer exported (replace PyAPI_FUNC() with
extern).
Also remove the _PyPegen_run_parser_from_file() function. Update
test_peg_generator to use _PyPegen_run_parser_from_file_pointer()
instead.
Remove the compiler functions using "struct _mod" type, because the
public AST C API was removed:
* PyAST_Compile()
* PyAST_CompileEx()
* PyAST_CompileObject()
* PyFuture_FromAST()
* PyFuture_FromASTObject()
These functions were undocumented and excluded from the limited C API.
Rename functions:
* PyAST_CompileObject() => _PyAST_Compile()
* PyFuture_FromASTObject() => _PyFuture_FromAST()
Moreover, _PyFuture_FromAST() is no longer exported (replace
PyAPI_FUNC() with extern). _PyAST_Compile() remains exported for
test_peg_generator.
Also remove the compatibility functions:
* PyAST_Compile()
* PyAST_CompileEx()
* PyFuture_FromAST()
These functions were undocumented and excluded from the limited C
API.
Most names defined by these header files were not prefixed by "Py"
and so could create name conflicts. For example, Python-ast.h
defined a "Yield" macro which conflicted with the "Yield" name used
by the Windows <winbase.h> header.
Use the Python ast module instead.
* Move Include/asdl.h to Include/internal/pycore_asdl.h.
* Move Include/Python-ast.h to Include/internal/pycore_ast.h.
* Remove ast.h header file.
* pycore_symtable.h no longer includes Python-ast.h.
Rename Include/symtable.h to Include/internal/pycore_symtable.h,
don't export symbols anymore (replace PyAPI_FUNC and PyAPI_DATA with
extern) and rename functions:
* PyST_GetScope() to _PyST_GetScope()
* PySymtable_BuildObject() to _PySymtable_Build()
* PySymtable_Free() to _PySymtable_Free()
Remove PySymtable_Build(), Py_SymtableString() and
Py_SymtableStringObject() functions.
The Py_SymtableString() function was part of the stable ABI by mistake
but it could not be used, since the symtable.h header file was
excluded from the limited C API.
The Python symtable module remains available and is unchanged.
Move the _PyAST_GetDocString() and _PyAST_ExprAsUnicode() functions to the
internal C API: from Include/ast.h to a new
Include/internal/pycore_ast.h header file. Don't export these
functions anymore: replace PyAPI_FUNC() with extern.
Also remove unused includes.
Add frozen modules to sys.stdlib_module_names. For example, add
"_frozen_importlib" and "_frozen_importlib_external" names.
Add "list_frozen" command to Programs/_testembed.
This approach ensures the code matches the interpreter version.
Previously, PYTHON_FOR_REGEN was used to generate the code, which might
be wrong. The marshal format for code objects has changed with
bpo-42246, commit 877df851. Update the code and the expected code sizes
in ctypes test_frozentable.
Move Include/{odictobject.h,parser_interface.h,picklebufobject.h,pydebug.h,pyfpe.h}
into Include/cpython/.
Parser: peg_api: include Python.h instead of parser_interface.h.
Add a new configure --without-static-libpython option to not build
the libpythonMAJOR.MINOR.a static library and not install the
python.o object file.
Fix smelly.py and stable_abi.py tools when libpython3.10.a is
missing.
Add --with-wheel-pkg-dir=PATH option to the ./configure script. If
specified, the ensurepip module looks for setuptools and pip wheel
packages in this directory: if both are present, these wheel packages
are used instead of ensurepip bundled wheel packages.
Some Linux distribution packaging policies recommend against bundling
dependencies. For example, Fedora installs wheel packages in the
/usr/share/python-wheels/ directory and doesn't install the
ensurepip._bundled package.
ensurepip: Remove unused runpy import.
Add a private list of all stdlib modules: _Py_module_names.
* Add Tools/scripts/generate_module_names.py script.
* Makefile: Add "make regen-module-names" command.
* setup.py: Add --list-module-names option.
* GitHub Actions and Travis CI also run "make regen-module-names",
not only "make regen-all", to ensure that the module names remain
up to date.
The distutils bdist_wininst command deprecated in Python 3.8 has been
removed. The bdist_wheel command is now recommended to
distribute binary packages on Windows.
* Remove Lib/distutils/command/bdist_wininst.py
* Remove PC/bdist_wininst/ project
* Remove Lib/distutils/command/wininst-*.exe programs
* Remove all references to bdist_wininst
Added the --disable-test-modules option to the configure script:
don't build or install test modules.
Patch by Xavier de Gaye, Thomas Petazzoni and Peixing Xin.
Co-Authored-By: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Co-Authored-By: Xavier de Gaye <xdegaye@gmail.com>
Add pycore_atomic_funcs.h internal header file: similar to
pycore_atomic.h but doesn't require declaring variables as atomic.
Add _Py_atomic_size_get() and _Py_atomic_size_set() functions.
As AIX 5.3 and below do not support thread_cputime, it was decided in
https://bugs.python.org/issue40680 to require AIX 6.1 and above. This
commit removes workarounds for — and references to — older, unsupported
AIX versions.
The ast module internal state is now per interpreter.
* Rename "astmodulestate" to "struct ast_state"
* Add pycore_ast.h internal header: the ast_state structure is now
declared in pycore_ast.h.
* Add PyInterpreterState.ast (struct ast_state)
* Remove get_ast_state()
* Rename get_global_ast_state() to get_ast_state()
* PyAST_obj2mod() now handles get_ast_state() failures
Add _PyLong_GetZero() and _PyLong_GetOne() functions and a new
internal pycore_long.h header file.
Python cannot be built without small integer singletons anymore.
The private _PyUnicode_Name_CAPI structure of the PyCapsule API
unicodedata.ucnhash_CAPI moves to the internal C API. Moreover, the
structure gets a new state member which must be passed to the
getcode() and getname() functions.
* Move Include/ucnhash.h to Include/internal/pycore_ucnhash.h
* unicodedata module is now built with Py_BUILD_CORE_MODULE.
* unicodedata: move hashAPI variable into unicodedata_module_state.
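The getcode()/getname() pair backs the same character-name data that unicodedata exposes and that \N{...} escapes use; behavior is unchanged by the move:
```
import unicodedata

print(unicodedata.name("☃"))          # 'SNOWMAN'
print(unicodedata.lookup("SNOWMAN"))  # '☃'
assert "\N{SNOWMAN}" == "☃"           # the \N escape goes through the same lookup
```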
This commit removes the old parser, the deprecated parser module, the old parser compatibility flags and environment variables and all associated support code and documentation.
Followup of bpo-40854, there is one remaining usage of PLATLIBDIR
which should be replaced by config->platlibdir.
test_sys checks that sys.platlibdir attribute exists and is a string.
Update Makefile: getpath.c and sysmodule.c no longer need PLATLIBDIR
macro, PyConfig.platlibdir member is used instead.
Co-authored-by: Sandro Mani <manisandro@gmail.com>
"make install" now uses the PLATLIBDIR variable for the destination
lib-dynload/ directory when ./configure --with-platlibdir is used.
Update --with-platlibdir comment in configure.
* Rename pycore_byteswap.h to pycore_bitutils.h.
* Move popcount_digit() to pycore_bitutils.h as _Py_popcount32().
* _Py_popcount32() uses GCC and clang builtin function if available.
* Add unit tests for _Py_popcount32().
Without this, only the _zoneinfo module is getting installed, not the
zoneinfo module. I believe this was not noticed earlier because
test.test_zoneinfo was also not being installed.
- Switch from getopt to argparse.
- Remove the limitation of not being able to produce both C and H simultaneously.
This will make it run faster since it parses the asdl definition once and uses the generated tree to generate both the header and the C source.
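A rough sketch of the resulting argparse interface (option names here are hypothetical, just to illustrate producing both outputs from a single parse of the ASDL input):
```
import argparse

parser = argparse.ArgumentParser(description="Generate C/H output from an ASDL file")
parser.add_argument("input_file", help="the .asdl definition to parse")
parser.add_argument("-C", "--c-file", help="where to write the generated C source")
parser.add_argument("-H", "--h-file", help="where to write the generated header")

# Parse once, emit whichever outputs were requested.
args = parser.parse_args(["Python.asdl", "-C", "Python-ast.c", "-H", "Python-ast.h"])
print(args.input_file, args.c_file, args.h_file)
```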
This reverts commit 0da5466650.
The commit is causing make failures on a FreeBSD buildbot.
Due to the imminent 3.9.0b1 cutoff, revert this commit for
now pending further investigation.
Add support to the configure script for OBJC and OBJCXX command line options so that the macOS builds can use the clang compiler for the macOS-specific Objective C source files. This allows third-party compilers, like GNU gcc, to be used to build the rest of the project since some of the Objective C system header files are not compilable by GNU gcc.
Co-authored-by: Jeffrey Kintscher <websurfer@surf2c.net>
Co-authored-by: Terry Jan Reedy <tjreedy@udel.edu>
This is the initial implementation of PEP 615, the zoneinfo module,
ported from the standalone reference implementation (see
https://www.python.org/dev/peps/pep-0615/#reference-implementation for a
link, which has a more detailed commit history).
This includes (hopefully) all functional elements described in the PEP;
documentation is in a separate PR. This PR includes:
1. A pure python implementation of the ZoneInfo class
2. A C accelerated implementation of the ZoneInfo class
3. Tests with 100% branch coverage for the Python code (though C code
coverage is less than 100%).
4. A compile-time configuration option on Linux (though not on Windows)
Differences from the reference implementation:
- The module is arranged slightly differently: the accelerated module is
`_zoneinfo` rather than `zoneinfo._czoneinfo`, which also necessitates
some changes in the test support function. (Suggested by Victor
Stinner and Steve Dower.)
- The tests are arranged slightly differently and do not include the
property tests. The tests live at test/test_zoneinfo/test_zoneinfo.py
rather than test/test_zoneinfo.py or test/test_zoneinfo/__init__.py
because we may do some refactoring in the future that would likely
require this separation anyway; we may:
- include the property tests
- automatically run all the tests against both pure Python and C,
rather than manually constructing C and Python test classes (similar
to the way this works with test_datetime.py, which generates C
and Python test cases from datetimetester.py).
- This includes a compile-time configuration option on Linux (though not
on Windows); added with much help from Thomas Wouters.
- Integration into the CPython build system is obviously different from
building a standalone zoneinfo module wheel.
- This includes configuration to install the tzdata package as part of
CI, though only on the coverage jobs. Introducing a PyPI dependency as
part of the CI build was controversial, and this is seen as less of a
major change, since the coverage jobs already depend on pip and PyPI.
Additional changes that were introduced as part of this PR, most / all of
which were backported to the reference implementation:
- Fixed reference and memory leaks
With much debugging help from Pablo Galindo
- Added smoke tests ensuring that the C and Python modules are built
The import machinery can be somewhat fragile, and the "seamlessly falls
back to pure Python" nature of this module makes it so that a problem
building the C extension or a failure to import the pure Python version
might easily go unnoticed.
- Adjustments to zoneinfo.__dir__
Suggested by Petr Viktorin.
- Slight refactorings as suggested by Steve Dower.
- Removed unnecessary if check on std_abbr
Discovered this because of a missing line in branch coverage.
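Basic usage of the new module, for readers who haven't read PEP 615 (this needs IANA time zone data, either from the system or from the tzdata PyPI package mentioned above):
```
from datetime import datetime
from zoneinfo import ZoneInfo

dt = datetime(2020, 10, 31, 12, tzinfo=ZoneInfo("America/Los_Angeles"))
print(dt)                                         # 2020-10-31 12:00:00-07:00
print(dt.astimezone(ZoneInfo("Europe/Paris")))    # 2020-10-31 20:00:00+01:00
```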
* Move Modules/hashtable.h to Include/internal/pycore_hashtable.h
* Move Modules/hashtable.c to Python/hashtable.c
* Python is now linked to hashtable.c. _tracemalloc is no longer
linked to hashtable.c. Previously, marshal.c got hashtable.c via
_tracemalloc.c which is built as a builtin module.