The "Check if generated files are up to date" job of GitHub Actions
now runs the "autoreconf -ivf -Werror" command instead of the "make
regen-configure" command to avoid depending on the external quay.io
server.
Add the Tools/build/regen-configure.sh script to regenerate the configure
script with an Ubuntu container image. The
"quay.io/tiran/cpython_autoconf:271" container image
(https://github.com/tiran/cpython_autoconf) is no longer used.
* Add mimalloc v2.1.2
Modified src/alloc.c to remove the include of alloc-override.c and to not
compile the new handler.
Did not include the following files:
- include/mimalloc-new-delete.h
- include/mimalloc-override.h
- src/alloc-override-osx.c
- src/alloc-override.c
- src/static.c
- src/region.c
mimalloc is thread-safe and shares a single heap across all runtimes;
as a result, finalization and retrieving globally allocated blocks
across all runtimes work differently.
* mimalloc: minimal changes for use in Python:
- remove debug spam for freeing large allocations
- use the same byte pattern (0xDD) for freed allocations in CPython and mimalloc.
This is important for the test_capi debug memory tests.
* Don't export mimalloc symbols in libpython.
* Enable mimalloc as a Python allocator option (see the sketch after this entry).
* Add mimalloc MIT license.
* Log mimalloc in Lib/test/pythoninfo.py.
* Document new mimalloc support.
* Use macro defs for exports as done in:
https://github.com/python/cpython/pull/31164/
Co-authored-by: Sam Gross <colesbury@gmail.com>
Co-authored-by: Christian Heimes <christian@python.org>
Co-authored-by: Victor Stinner <vstinner@python.org>
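A minimal sketch of exercising the new allocator option (assuming this change wires a "mimalloc" value into the PYTHONMALLOC environment variable); test.pythoninfo, which now logs mimalloc, reports the allocator in use:
```
# Hedged sketch: run a child interpreter with mimalloc selected via
# PYTHONMALLOC (assumed value for this change) and dump build/runtime
# info; test.pythoninfo now logs the mimalloc configuration.
import os
import subprocess
import sys

env = dict(os.environ, PYTHONMALLOC="mimalloc")
subprocess.run([sys.executable, "-m", "test.pythoninfo"], env=env, check=True)
```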
* The lexer, which includes the actual lexeme-producing logic, goes into
the `lexer` directory.
* The wrappers, one wrapper per input mode (file, string, utf-8, and
readline), go into the `tokenizer` directory and include logic for
creating a lexer instance and managing the buffer for different modes.
---------
Co-authored-by: Pablo Galindo <pablogsal@gmail.com>
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Add wrappers for timerfd_create(), timerfd_settime(), and timerfd_gettime() to the os module (see the usage sketch after this entry).
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Co-authored-by: Adam Turner <9087854+AA-Turner@users.noreply.github.com>
Co-authored-by: Erlend E. Aasland <erlend.aasland@protonmail.com>
Co-authored-by: Victor Stinner <vstinner@python.org>
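A usage sketch of the new wrappers; the keyword names (initial, interval) and the use of time module clock IDs are assumptions about this change's API, and the calls are Linux-only:
```
# Hedged sketch of the new os.timerfd_* wrappers (Linux only); keyword
# names and return values are assumptions about this change's API.
import os
import struct
import time

fd = os.timerfd_create(time.CLOCK_MONOTONIC)
try:
    # Arm the timer: first expiration after 1 second, then every 0.5 seconds.
    os.timerfd_settime(fd, initial=1.0, interval=0.5)
    # Reading the fd blocks until expiration and yields an 8-byte count of
    # expirations since the last read.
    (expirations,) = struct.unpack("Q", os.read(fd, 8))
    print("timer expired", expirations, "time(s)")
    # Query the time until the next expiration and the current interval.
    next_expiration, interval = os.timerfd_gettime(fd)
finally:
    os.close(fd)
```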
PyMutex is a one byte lock with fast, inlineable lock and unlock functions for the common uncontended case. The design is based on WebKit's WTF::Lock.
PyMutex is built using the _PyParkingLot APIs, which provide a cross-platform futex-like API (based on WebKit's WTF::ParkingLot). This internal API will be used to build other synchronization primitives needed to implement PEP 703, such as one-time initialization and events.
This also includes tests and a mini benchmark in Tools/lockbench/lockbench.py to compare with the existing PyThread_type_lock.
Uncontended acquisition + release:
* Linux (x86-64): PyMutex: 11 ns, PyThread_type_lock: 44 ns
* macOS (arm64): PyMutex: 13 ns, PyThread_type_lock: 18 ns
* Windows (x86-64): PyMutex: 13 ns, PyThread_type_lock: 38 ns
PR Overview:
The primary purpose of this PR is to implement PyMutex, but there are a number of support pieces (described below).
* PyMutex: A 1-byte lock that doesn't require memory allocation to initialize and is generally faster than the existing PyThread_type_lock. The API is internal only for now.
* _PyParkingLot: A futex-like API based on the API of the same name in WebKit. Used to implement PyMutex.
* _PyRawMutex: A word-sized lock used to implement _PyParkingLot.
* PyEvent: A one-time event. This was used a bunch in the "nogil" fork and is useful for testing the PyMutex implementation, so I've included it as part of the PR.
* pycore_llist.h: Defines common operations on doubly-linked lists. Not strictly necessary (the list operations could be done manually), but they come up frequently in the "nogil" fork. (Similar to https://man.freebsd.org/cgi/man.cgi?queue)
---------
Co-authored-by: Eric Snow <ericsnowcurrently@gmail.com>
There is a WIP proposal to enable WebAssembly stack switching, which has been
implemented in v8:
https://github.com/WebAssembly/js-promise-integration
It is not possible to switch stacks that contain JS frames, so the Emscripten JS
trampolines that allow calling functions with the wrong number of arguments
don't work in this case. However, the js-promise-integration proposal requires
the [type reflection for Wasm/JS API](https://github.com/WebAssembly/js-types)
proposal, which allows us to actually count the number of arguments a function
expects.
For better compatibility with stack switching, this PR checks whether type
reflection is available, and if so uses a switch block to decide the appropriate
signature. If type reflection is unavailable, we fall back to the current EM_JS
trampoline.
We cache the function argument counts, since not caching them hurt performance.
Co-authored-by: T. Wouters <thomas@python.org>
Co-authored-by: Brett Cannon <brett@python.org>
configure no longer uses libatomic by default when Python is
cross-compiled. The LIBATOMIC variable can be set manually in this
case:
./configure LIBATOMIC="-latomic" (...)
Fix building the _testcapi extension on Linux AArch64, which requires
linking to libatomic when <cpython/pyatomic.h> is used: the
_Py_atomic_or_uint64() function requires libatomic's
__atomic_fetch_or_8() on this platform.
The configure script now checks if linking to libatomic is needed and
generates a new LIBATOMIC variable used to build the _testcapi
extension.
Building the _testcapi extension now uses the LIBATOMIC variable in
its LDFLAGS, since Modules/_testcapi/pyatomic.c uses
<cpython/pyatomic.h>.
Co-authored-by: Erlend E. Aasland <erlend.aasland@protonmail.com>
Python built with "configure --with-trace-refs" (tracing references)
is now ABI compatible with the Python release and debug builds.
Moreover, it now also supports the Limited API.
Change Py_TRACE_REFS build:
* Remove _PyObject_EXTRA_INIT macro.
* The PyObject structure no longer has two extra members (_ob_prev
and _ob_next).
* Use a hash table (_Py_hashtable_t) to trace references (all
objects): PyInterpreterState.object_state.refchain.
* Py_TRACE_REFS build is now ABI compatible with release build and
debug build.
* Limited C API extensions can now be built with Py_TRACE_REFS:
xxlimited, xxlimited_35, _testclinic_limited.
* No longer rename PyModule_Create2() and PyModule_FromDefAndSpec2()
functions to PyModule_Create2TraceRefs() and
PyModule_FromDefAndSpec2TraceRefs().
* _Py_PrintReferenceAddresses() is now called before
finalize_interp_delete() which deletes the refchain hash table.
* test_tracemalloc find_trace() now also filters by size to ignore
the memory allocated by _PyRefchain_Trace().
Test changes for Py_TRACE_REFS:
* Add test.support.Py_TRACE_REFS constant.
* Add test_sys.test_getobjects() to test the sys.getobjects() function (see the sketch after this list).
* test_exceptions skips test_recursion_normalizing_with_no_memory()
and test_memory_error_in_PyErr_PrintEx() if Python is built with
Py_TRACE_REFS.
* test_repl skips test_no_memory().
* test_capi skips test_set_nomemory().
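A hedged sketch of what test_getobjects() exercises, assuming sys.getobjects() keeps its historical (limit[, type]) signature and is only present in Py_TRACE_REFS builds:
```
# Hedged sketch: sys.getobjects() only exists in --with-trace-refs builds;
# the (limit[, type]) signature is an assumption based on the historical API.
import sys
from test import support

if getattr(support, "Py_TRACE_REFS", False) and hasattr(sys, "getobjects"):
    newest = sys.getobjects(10)        # the 10 most recently created objects
    strings = sys.getobjects(10, str)  # same, restricted to str instances
    print(len(newest), len(strings))
```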
If Python is built with ./configure --with-trace-refs, don't build
the _testclinic_limited extension. The limited C API (Py_LIMITED_API)
is not compatible with Py_TRACE_REFS.
- Move platform triplet detection code into Misc/platform_triplet.c
- Refactor MIPS detection, use defined(__mips64) to detect MIPS64
- Compute libc values in separate section
- Add detection for MIPS soft float
- Add detection for musl
musl supports SPE with its soft-float ABI:
https://git.musl-libc.org/cgit/musl/commit/?id=7be59733d71ada3a32a98622507399253f1d5e48
Original patch by Christian Heimes.
Co-authored-by: Christian Heimes <christian@python.org>
Co-authored-by: Erlend E. Aasland <erlend@python.org>
Quoting autoconf (v2.71):
All current systems provide time.h; it need not be checked for.
Not all systems provide sys/time.h, but those that do, all allow
you to include it and time.h simultaneously.
Apply BOLT optimizations to libpython for shared builds. Most of the C
code is in libpython so it is critical to apply BOLT there to fully
realize BOLT's benefits.
This change also reworks how BOLT instrumentation is applied. It
effectively removes the readelf based logic added in gh-101525 and
replaces it with a mechanism that saves a copy of the pre-bolt binary
and restores that copy when necessary. This allows us to perform BOLT
optimizations without having to manually delete the output binary to
force a new bolt run.
Also:
- add a clean-bolt target for purging BOLT files and hook that up to the
clean target
- .gitignore BOLT related files
Before and after this refactor, `make` will no-op after a previous run.
Both versions should also share common make DAG deficiencies where
targets fail to trigger as often as they need to or can trigger
prematurely in certain scenarios. For example, after this change you may need to
`rm profile-bolt-stamp` to force a BOLT run because there aren't
appropriate non-phony targets for BOLT's make target to depend on.
To make it easier to iterate on custom BOLT settings, the flags to pass
to instrumentation and application are now defined in configure and can
be overridden by passing BOLT_INSTRUMENT_FLAGS and BOLT_APPLY_FLAGS.
This merges their code. They're backed by the same single HACL* static library; having them be a single module simplifies maintenance.
This should unbreak the WASM Emscripten builds that currently fail due to linking in --whole-archive mode and the HACL* library appearing twice.
A long-unnoticed error is fixed: _sha512.SHA384Type was doubly assigned and was actually SHA512Type. Nobody depends on those internal names.
Also rename the LIBHACL_ make vars to LIBHACL_SHA2_ in preparation for other future HACL* things.
Replace the builtin hashlib implementations of SHA2-384 and SHA2-512
originally from LibTomCrypt with formally verified, side-channel resistant
code from the [HACL*](https://github.com/hacl-star/hacl-star/) project.
The builtins remain a fallback only used when OpenSSL does not provide them.
This is part of an ongoing effort to replace hashlib primitives (for the non-OpenSSL case) with verified implementations from HACL*. This is the first PR in the series and focuses specifically on SHA2-256 and SHA2-224 (a short streaming sketch follows this entry).
This PR imports Hacl_Streaming_SHA2 into the Python tree. This is the HACL* implementation of SHA2, which combines a core implementation of SHA2 along with a layer of buffer management that allows updating the digest with any number of bytes. This supersedes the previous implementation in the tree.
@franziskuskiefer was kind enough to benchmark the changes: in addition to being verified (thus providing significant safety and security improvements), this implementation also provides a sizeable performance boost!
```
---------------------------------------------------------------
Benchmark Time CPU Iterations
---------------------------------------------------------------
Sha2_256_Streaming 3163 ns 3160 ns 219353 // this PR
LibTomCrypt_Sha2_256 5057 ns 5056 ns 136234 // library used by Python currently
```
The changes in this PR are as follows:
- import the subset of HACL* that covers SHA2-256/224 into `Modules/_hacl`
- rewire sha256module.c to use the HACL* implementation
Co-authored-by: Gregory P. Smith [Google LLC] <greg@krypto.org>
Co-authored-by: Erlend E. Aasland <erlend.aasland@protonmail.com>
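A small sketch of the streaming property described above: update() calls of arbitrary chunk sizes produce the same digest as a one-shot call, whether the object is backed by OpenSSL or by the HACL* builtin fallback:
```
# Sketch of streaming SHA2-256: arbitrary-sized update() calls match a
# one-shot digest, regardless of the OpenSSL or HACL* builtin backend.
import hashlib

data = b"abc" * 10_000
h = hashlib.sha256()
for offset in range(0, len(data), 4097):  # deliberately odd chunk size
    h.update(data[offset:offset + 4097])
assert h.hexdigest() == hashlib.sha256(data).hexdigest()
print(h.hexdigest())
```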
Fix the gdbm_compat library detection logic to actually check for
-lgdbm_compat independently of the ndbm detection.
This fixes the build failure with `--with-dbmliborder=gdbm`
and the implicit fallback to ndbm with the default value.
When cross-compiling, the compile/run test for -pthread always fails so -pthread
will never be automatically set without an override from the cache. ac_cv_pthread
can already be overridden, so do the same thing for ac_cv_cxx_thread.
It has had no effect on non-macOS platforms for a long time, and has had
the non-obvious effect of invoking `pkg_config` and not setting
`-DUSING_APPLE_OS_LIBFFI` on macOS since GH-22855.
This was introduced in commit 5884449539
to determine whether readline is already linked against curses or tinfo in
setup.py, which is no longer present.
Added os.setns() and os.unshare() to easily switch between namespaces
on Linux (see the usage sketch after this entry).
Co-authored-by: Christian Heimes <christian@python.org>
Co-authored-by: CAM Gerlach <CAM.Gerlach@Gerlach.CAM>
Co-authored-by: Victor Stinner <vstinner@python.org>
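A hedged usage sketch, assuming os.unshare(flags) and os.setns(fd, nstype) with the os.CLONE_* constants; both calls are Linux-only and typically need elevated privileges:
```
# Hedged sketch of the new Linux-only namespace helpers; running this
# requires appropriate privileges (e.g. CAP_SYS_ADMIN).
import os

if hasattr(os, "unshare"):
    # Detach this process into a new UTS (hostname) namespace.
    os.unshare(os.CLONE_NEWUTS)

if hasattr(os, "setns"):
    # Join an existing namespace through its /proc file descriptor.
    fd = os.open("/proc/self/ns/uts", os.O_RDONLY)
    try:
        os.setns(fd, os.CLONE_NEWUTS)
    finally:
        os.close(fd)
```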
⚠️⚠️ Note for reviewers, hackers and fellow systems/low-level/compiler engineers ⚠️⚠️
If you have a lot of experience with this kind of shenanigans and want to improve the **first** version, **please make a PR against my branch** or **reach out by email** or **suggest code changes directly on GitHub**.
If you have any **refinements or optimizations**, please wait until the first version is merged before you start hacking on or proposing them, so we can keep this PR productive.
- support EMSDK tot-upstream and git releases
- allow WASM assets for wasm64-emscripten and WASI. This makes single-file distributions on WASI easier.
- decouple WASM assets from browser builds
* Add support for the BOLT post-link binary optimizer
Using [bolt](https://github.com/llvm/llvm-project/tree/main/bolt)
provides a fairly large speedup without any code or functionality
changes. It provides roughly a 1% speedup on pyperformance, and a
4% improvement on the Pyston web macrobenchmarks.
It is gated behind an `--enable-bolt` configure arg because not all
toolchains and environments are supported. It has been tested on a
Linux x86_64 toolchain, using llvm-bolt built from the LLVM 14.0.6
sources (their binary distribution of this version did not include bolt).
Compared to [a previous attempt](https://github.com/faster-cpython/ideas/issues/224),
this commit uses bolt's preferred "instrumentation" approach, as well as adds some non-PIE
flags which enable much better optimizations from bolt.
The effects of this change are a bit more dependent on CPU microarchitecture
than other changes, since it optimizes i-cache behavior which seems
to be a bit more variable between architectures. The 1%/4% numbers
were collected on an Intel Skylake CPU; on an AMD Zen 3 CPU I
got a slightly larger speedup (2%/4%), and on a c6i.xlarge EC2 instance
a slightly lower one (1%/3%).
The low speedup on pyperformance is not entirely unexpected, because
BOLT improves i-cache behavior, and the benchmarks in the pyperformance
suite are small and tend to fit in i-cache.
This change uses the existing pgo profiling task (`python -m test --pgo`),
though I was able to measure about a 1% macrobenchmark improvement by
using the macrobenchmarks as the training task. I personally think that
both the PGO and BOLT tasks should be updated to use macrobenchmarks,
but for the sake of splitting up the work this PR uses the existing pgo task.
* Simplify the build flags
* Add a NEWS entry
* Update Makefile.pre.in
Co-authored-by: Dong-hee Na <donghee.na92@gmail.com>
* Update configure.ac
Co-authored-by: Dong-hee Na <donghee.na92@gmail.com>
* Add myself to ACKS
* Add docs
* Other review comments
* fix tab/space issue
* Make it more clear that --enable-bolt is experimental
* Add link to bolt's github page
Co-authored-by: Dong-hee Na <donghee.na92@gmail.com>
Automate WASM build with a new Python script. The script provides
several build profiles with configure flags for Emscripten flavors
and WASI. The script can detect and use Emscripten SDK and WASI SDK from
default locations or env vars.
``configure`` now detects Node arguments and creates HOSTRUNNER
arguments for Node 16. It also sets correct arguments for
``wasm64-emscripten``.
Co-authored-by: Brett Cannon <brett@python.org>
Remove the "configure --with-cxx-main" build option: it didn't work
for many years. Remove the MAINCC variable from configure and
Makefile.
The MAINCC variable was added in issue gh-42471 (commit
0f48d98b74). Previously, --with-cxx-main
was named --with-cxx.
Keep the CXX and LDCXXSHARED variables, even though they are no longer
used by the Python build system.
- check for ``dup()`` libc function
- handle missing ``F_DUPFD`` in ``dup2()`` replacement function
- add workaround for WASI libc bug in MSG_TRUNC
- ESHUTDOWN is missing, use EPIPE instead
- POLLPRI is missing, define as 0 (no-op)