Part of the work on PEP 738: Adding Android as a supported platform.
* Rename the LIBPYTHON variable to MODULE_LDFLAGS, to more accurately
reflect its purpose.
* Edit makesetup to use MODULE_LDFLAGS when linking extension modules.
* Edit the Makefile so that extension modules depend on libpython on
Android and Cygwin.
* Restore `-fPIC` on Android. It was removed several years ago with a
note that the toolchain used it automatically, but this is no longer
the case. Omitting it causes all linker commands to fail with an error
like `relocation R_AARCH64_ADR_PREL_PG_HI21 cannot be used against
symbol '_Py_FalseStruct'; recompile with -fPIC`.
This adds a safe memory reclamation scheme based on FreeBSD's "GUS" and
quiescent state based reclamation (QSBR). The API provides a mechanism
for callers to detect when it is safe to free memory that may be
concurrently accessed by readers.
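The overall shape can be illustrated with a minimal quiescent-state scheme; the names below (qsbr_advance, qsbr_quiescent_state, qsbr_poll) and the fixed-size reader table are illustrative only, not the actual API.
```
/* Minimal QSBR sketch; names and layout are illustrative, not the real API. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_READERS 8

static _Atomic uint64_t write_seq = 1;            /* global sequence counter */
static _Atomic uint64_t reader_seq[MAX_READERS];  /* last sequence each reader observed */

/* Writer: retire memory and record the sequence it must wait for. */
static uint64_t qsbr_advance(void) {
    return atomic_fetch_add(&write_seq, 1) + 1;
}

/* Reader: declare a quiescent state (no shared pointers held across this call). */
static void qsbr_quiescent_state(int tid) {
    atomic_store(&reader_seq[tid], atomic_load(&write_seq));
}

/* Reclaimer: memory retired with `goal` may be freed once every reader
 * has passed through a quiescent state at or after that sequence. */
static bool qsbr_poll(uint64_t goal) {
    for (int i = 0; i < MAX_READERS; i++) {
        if (atomic_load(&reader_seq[i]) < goal) {
            return false;
        }
    }
    return true;
}
```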
These are intended to be used in places where atomics are required in
free-threaded builds but not in the default build. We don't want to
introduce the potential performance overhead of an atomic operation in the
default build.
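The pattern looks roughly like the following; the macro names are hypothetical and GCC/Clang builtins are used for brevity.
```
/* Hypothetical sketch: atomic access in the free-threaded build, plain access
 * in the default build (where the GIL already provides the needed exclusion). */
#ifdef Py_GIL_DISABLED
#  define FT_ATOMIC_LOAD(var)       __atomic_load_n(&(var), __ATOMIC_RELAXED)
#  define FT_ATOMIC_STORE(var, v)   __atomic_store_n(&(var), (v), __ATOMIC_RELAXED)
#else
#  define FT_ATOMIC_LOAD(var)       (var)
#  define FT_ATOMIC_STORE(var, v)   ((var) = (v))
#endif
```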
Update the list of installed test subdirectories with all newly added
subdirectories of Lib/test, so that the tests in those directories are
properly installed.
For the most part, these changes make it substantially easier to backport subinterpreter-related code to 3.12, especially the related modules (e.g. _xxsubinterpreters). The main motivation is to support releasing a PyPI package with the 3.13 capabilities compiled for 3.12.
A lot of the changes here involve either hiding details behind macros/functions or splitting up some files.
Part of the PEP 730 work to add iOS support.
This change lays the groundwork for introducing iOS/tvOS/watchOS
frameworks; it includes the structural refactoring needed so that iOS
branches can be added in a subsequent PR.
Summary of changes:
* Updates config.sub to the 2024-01-01 release. This is the "as
released" version of config.sub.
* Adds a RESSRCDIR variable to allow sharing of macOS and iOS Makefile
steps.
* Adds an INSTALLTARGETS variable so platforms can customise which
targets are actually installed. This will be used to exclude certain
targets (e.g., binaries, man pages) from iOS framework installs.
* Adds a PYTHONFRAMEWORKINSTALLNAMEPREFIX variable; this is used as
the install name for the library. This is needed to allow for iOS
frameworks to specify an @rpath-based install name.
* Evaluates MACHDEP earlier in the configure process so that
ac_sys_system is available.
* Modifies _PYTHON_HOST_PLATFORM evaluation for cross-platform builds
so that the CPU architecture is differentiated from the host
identifier. This will be used to generate a _PYTHON_HOST_PLATFORM
definition that includes ABI information, not just CPU architecture.
* Differentiates between SOABI_PLATFORM and PLATFORM_TRIPLET.
SOABI_PLATFORM is used in binary module names, and includes the ABI,
but not the OS or CPU architecture (e.g.,
math.cpython-313-iphonesimulator.dylib). PLATFORM_TRIPLET is used
as the sys._multiarch value, and on iOS it will contain the ABI and
architecture (e.g., iphoneos-arm64). This differentiation hasn't
historically been needed because, although macOS is a multiarch
platform, it uses a bare darwin as its PLATFORM_TRIPLET.
* Removes the use of the deprecated -Wl,-single_module flag when
compiling macOS frameworks.
* Some whitespace normalisation where there was a mix of spaces and tabs
in a single block.
Biased reference counting maintains two refcount fields in each object:
`ob_ref_local` and `ob_ref_shared`. The true refcount is the sum of these two
fields. In some cases, when refcounting operations are split across threads,
the ob_ref_shared field can be negative (although the total refcount must be
at least zero). In this case, the thread that decremented the refcount
requests that the owning thread give up ownership and merge the refcount
fields.
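A simplified view of the layout described above (the real ob_ref_shared also packs flag bits, which are omitted here):
```
/* Simplified sketch: the true refcount is the sum of the two fields. */
#include <stdatomic.h>
#include <stdint.h>

typedef struct {
    uintptr_t ob_tid;                  /* owning thread id */
    uint32_t ob_ref_local;             /* modified only by the owning thread */
    _Atomic intptr_t ob_ref_shared;    /* modified by other threads; may go negative */
} object_header;

static intptr_t true_refcount(object_header *ob) {
    return (intptr_t)ob->ob_ref_local +
           atomic_load_explicit(&ob->ob_ref_shared, memory_order_relaxed);
}
```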
Add an option (--enable-experimental-jit for configure-based builds
or --experimental-jit for PCbuild-based ones) to build an
*experimental* just-in-time compiler, based on copy-and-patch (https://fredrikbk.com/publications/copy-and-patch.pdf).
See Tools/jit/README.md for more information on how to install the required build-time tooling.
* gh-112529: Implement GC for free-threaded builds
This implements a mark and sweep GC for the free-threaded builds of
CPython. The implementation relies on mimalloc to find GC tracked
objects (i.e., "containers").
* gh-112529: Use GC heaps for GC allocations in free-threaded builds
The free-threaded build's garbage collector implementation will need to
find GC objects by traversing mimalloc heaps. This hooks up the
allocation calls with the correct heaps by using a thread-local
"current_obj_heap" variable (see the sketch after this list).
* Refactor out setting heap based on type
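Roughly, the hookup works like the sketch below; the helper names and the two-heap split are illustrative, and only mi_heap_malloc() is a real mimalloc call.
```
/* Illustrative only: pick a mimalloc heap based on whether the object is GC
 * tracked, via a thread-local "current object heap", so the collector can
 * later find GC objects by walking that heap. */
#include <stddef.h>

typedef struct mi_heap_s mi_heap_t;                   /* opaque mimalloc heap */
void *mi_heap_malloc(mi_heap_t *heap, size_t size);   /* real mimalloc API */

static _Thread_local mi_heap_t *current_obj_heap;

static void *alloc_object(size_t size, int gc_tracked,
                          mi_heap_t *gc_heap, mi_heap_t *plain_heap) {
    current_obj_heap = gc_tracked ? gc_heap : plain_heap;
    return mi_heap_malloc(current_obj_heap, size);
}
```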
Remove LibreSSL specific workaround ifdefs from `_ssl.c` and delete the non-version-specific `_ssl_data.h` file (relevant for OpenSSL < 1.1.1, which we no longer support per PEP 644).
Co-authored-by: Christian Heimes <christian@python.org>
Co-authored-by: Gregory P. Smith <greg@krypto.org>
This splits part of Modules/gcmodule.c into Python/gc.c, which
now contains the core garbage collection implementation. The Python
module remains in the Modules/gcmodule.c file.
A typo left this check broken so many of us who do out-of-tree builds
were seeing strange failures due to bad `Python/frozen_modules/*.h`
files being picked up from the source tree and used at build time from
different Python versions leading to errors like:
`Fatal Python error: _PyImport_InitCore: failed to initialize importlib`
Or similar once our build got to an "invoke the interpreter"
bootstrapping step due to incorrect bytecode being embedded.
The `MIMALLOC_HEADERS` variable is defined in the Makefile.pre.in, not
the configure script, so we should use the `$(MIMALLOC_HEADERS)` syntax
instead of the `@MIMALLOC_HEADERS@` syntax.
Every PyThreadState instance is now actually a _PyThreadStateImpl.
It is safe to cast from `PyThreadState*` to `_PyThreadStateImpl*` and back.
The _PyThreadStateImpl will contain fields that we do not want to expose
in the public C API.
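The cast is valid because the public struct is the first member of the implementation struct; a simplified sketch (not the real definitions):
```
/* Simplified sketch: the layout trick that makes the casts safe. */
typedef struct {
    int remaining_recursion;            /* ... public fields ... */
} PublicThreadState;                    /* stands in for PyThreadState */

typedef struct {
    PublicThreadState base;             /* must be the first member */
    void *internal_only_state;          /* fields kept out of the public C API */
} ThreadStateImpl;                      /* stands in for _PyThreadStateImpl */

static ThreadStateImpl *as_impl(PublicThreadState *tstate) {
    /* Safe: `base` is at offset 0, so both pointers refer to the same object. */
    return (ThreadStateImpl *)tstate;
}
```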
- Double max trace size to 256
- Add a dependency on executor_cases.c.h for ceval.o
- Mark `_SPECIALIZE_UNPACK_SEQUENCE` as `TIER_ONE_ONLY`
- Add debug output back showing the optimized trace
- Bunch of cleanups to Tools/cases_generator/
The "Check if generated files are up to date" job of GitHub Actions
now runs the "autoreconf -ivf -Werror" command instead of the "make
regen-configure" command to avoid depending on the external quay.io
server.
Add Tools/build/regen-configure.sh script to regenerate the configure
with an Ubuntu container image. The
"quay.io/tiran/cpython_autoconf:271" container image
(https://github.com/tiran/cpython_autoconf) is no longer used.
The "make regen-unicodedata" should now be run manually. By the
default, it requires an Internet connection, which is not always the
case. Some Linux distributions build Linux packages in isolated
environment (without network).
This avoids:
python3.13 Tools/unicode/makeunicodedata.py
python3.13: can't open file '.../build/debug/Tools/unicode/makeunicodedata.py': [Errno 2] No such file or directory
make: *** [Makefile:1498: regen-unicodedata] Error 2
Re-run `make regen-unicodedata` to update the script path in generated files.
Critical sections are helpers to replace the global interpreter lock
with finer grained locking. They provide similar guarantees to the GIL
and avoid the deadlock risk that plain locking involves. Critical
sections are implicitly ended whenever the GIL would be released. They
are resumed when the GIL would be acquired. Nested critical sections
behave as if the sections were interleaved.
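The intended usage pattern looks roughly like this; the macro names follow the internal header, but treat the exact spelling and the scenario as assumptions.
```
/* Illustrative usage only; assumes Python.h plus the internal
 * pycore_critical_section.h API. The section is implicitly suspended wherever
 * the GIL would have been released, and resumed where it would be re-acquired. */
#include "Python.h"

static int
append_item(PyObject *list, PyObject *item)
{
    int ret;
    Py_BEGIN_CRITICAL_SECTION(list);    /* acquires the per-object lock */
    ret = PyList_Append(list, item);    /* protected against concurrent mutation */
    Py_END_CRITICAL_SECTION();          /* releases the per-object lock */
    return ret;
}
```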
- There is no longer a separate Python/executor.c file.
- Conventions in Python/bytecodes.c are slightly different -- don't use `goto error`,
you must use `GOTO_ERROR(error)` (same for others like `unused_local_error`).
- The `TIER_ONE` and `TIER_TWO` symbols are only valid in the generated (.c.h) files.
- In Lib/test/support/__init__.py, `Py_C_RECURSION_LIMIT` is imported from `_testcapi`.
- On Windows, in debug mode, stack allocation grows from 8MiB to 12MiB.
- **Beware!** This changes the env vars to enable uops and their debugging
to `PYTHON_UOPS` and `PYTHON_LLTRACE`.
This is partly to clear this stuff out of pystate.c, but also in preparation for moving some code out of _xxsubinterpretersmodule.c. This change also moves this stuff to the internal API (new: Include/internal/pycore_crossinterp.h). @vstinner did this previously and I undid it. Now I'm re-doing it. :/
* Add mimalloc v2.12
Modified src/alloc.c to remove the include of alloc-override.c and to
skip compiling the new handler.
Did not include the following files:
- include/mimalloc-new-delete.h
- include/mimalloc-override.h
- src/alloc-override-osx.c
- src/alloc-override.c
- src/static.c
- src/region.c
mimalloc is thread safe and shares a single heap across all runtimes,
so finalization and retrieving globally allocated blocks across all
runtimes work differently.
* mimalloc: minimal changes for use in Python:
- remove debug spam for freeing large allocations
- use same bytes (0xDD) for freed allocations in CPython and mimalloc
This is important for the test_capi debug memory tests
* Don't export mimalloc symbol in libpython.
* Enable mimalloc as Python allocator option.
* Add mimalloc MIT license.
* Log mimalloc in Lib/test/pythoninfo.py.
* Document new mimalloc support.
* Use macro defs for exports as done in:
https://github.com/python/cpython/pull/31164/
Co-authored-by: Sam Gross <colesbury@gmail.com>
Co-authored-by: Christian Heimes <christian@python.org>
Co-authored-by: Victor Stinner <vstinner@python.org>
Move the following private functions and structures to
pycore_modsupport.h internal C API:
* _PyArg_BadArgument()
* _PyArg_CheckPositional()
* _PyArg_NoKeywords()
* _PyArg_NoPositional()
* _PyArg_ParseStack()
* _PyArg_ParseStackAndKeywords()
* _PyArg_Parser structure
* _PyArg_UnpackKeywords()
* _PyArg_UnpackKeywordsWithVararg()
* _PyArg_UnpackStack()
* _Py_ANY_VARARGS()
Changes:
* Python/getargs.c now includes pycore_modsupport.h to export
functions.
* clinic.py now adds pycore_modsupport.h when one of these functions
is used.
* Add pycore_modsupport.h includes when a C extension uses one of
these functions.
* Define Py_BUILD_CORE_MODULE in C extensions which now include
pycore_modsupport.h, directly or indirectly (via code generated by
Argument Clinic):
* _csv
* _curses_panel
* _dbm
* _gdbm
* _multiprocessing.posixshmem
* _sqlite.row
* _statistics
* grp
* resource
* syslog
* _testcapi: bad_get() no longer uses METH_FASTCALL calling
convention but METH_VARARGS. Replace _PyArg_UnpackStack() with
PyArg_ParseTuple().
* _testcapi: add PYTESTCAPI_NEED_INTERNAL_API macro which is defined
by _testcapi sub-modules which need the internal C API
(pycore_modsupport.h): exceptions.c, float.c, vectorcall.c,
watchers.c.
* Remove Include/cpython/modsupport.h header file.
Include/modsupport.h no longer includes the removed header file.
* Fix mypy errors in clinic.py.
* The lexer, which includes the actual lexeme-producing logic, goes into
the `lexer` directory.
* The wrappers, one wrapper per input mode (file, string, utf-8, and
readline), go into the `tokenizer` directory and include logic for
creating a lexer instance and managing the buffer for different modes.
---------
Co-authored-by: Pablo Galindo <pablogsal@gmail.com>
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
"make regen-pegen" now creates a temporary file called "parser.c.new"
instead of "parser.new.c". Previously, if "make clinic" was run in
parallel with "make regen-all", clinic may try but fail to open
"parser.new.c" if the temporay file was removed in the meanwhile.
* _add_python_opts() now handles cross compilation and HOSTRUNNER.
* display_header() now reports whether Python is cross-compiled,
displays HOSTRUNNER, and shows the host platform.
* Remove Tools/scripts/run_tests.py script.
* Remove "make hostrunnertest": use "make buildbottest"
or "make test" instead.
Split test_gdb.py file into a test_gdb package made of multiple
tests, so tests can now be run in parallel.
* Create Lib/test/test_gdb/ directory.
* Split test_gdb.py into multiple files in Lib/test/test_gdb/
directory.
* Move Lib/test/gdb_sample.py to Lib/test/test_gdb/ directory.
Update get_sample_script(): use __file__ to locate gdb_sample.py.
* Move gdb_has_frame_select() and HAS_PYUP_PYDOWN to test_misc.py.
* Explicitly skip test_gdb on Windows. Previously, test_gdb was
skipped even if gdb was available because of
gdb_has_frame_select().
* Add --fast-ci and --slow-ci options to libregrtest:
* --fast-ci uses a default timeout of 10 minutes and "-u all,-cpu"
(skip slowest tests).
* --slow-ci uses a default timeout of 20 minutes and "-u all" (run
all tests).
* regrtest header now lists test resources.
* Makefile changes:
* "make test", "make hostrunnertest" and "make coverage-report" now
use --fast-ci option and TESTTIMEOUT variable.
* "make buildbottest" now uses "--slow-ci". Remove options which
became redundant with "--slow-ci".
* "make testall" and "make testuniversal" now use --slow-ci option
and TESTTIMEOUT variable.
* "make testall" now uses "find -exec rm ..." instead of
"find ... -print|xargs rm ...", same as "make clean".
* GitHub Actions workflow:
* Ubuntu and Address Sanitizer jobs now use "make test". Remove
options which became redundant with "--fast-ci".
* Windows jobs now use --fast-ci option.
* Use -j0 to detect the number of CPUs.
* Set Makefile TESTTIMEOUT default to an empty string, since
--slow-ci and --fast-ci use different default timeouts. Passing
"--timeout=" to regrtest is now accepted and treated as no timeout.
* Tools/scripts/run_tests.py now uses --fast-ci option.
* Tools/buildbot/test.bat now uses --slow-ci option. Remove
--timeout=1200 option, redundant with --slow-ci.
LTO optimization is nice for making Python faster, but the performance
of _freeze_module and _testembed is not important. Using LTO to build
these two programs makes a whole Python build much slower, especially
when combined with a sanitizer (such as ASAN).
PyMutex is a one byte lock with fast, inlineable lock and unlock functions for the common uncontended case. The design is based on WebKit's WTF::Lock.
PyMutex is built using the _PyParkingLot APIs, which provides a cross-platform futex-like API (based on WebKit's WTF::ParkingLot). This internal API will be used for building other synchronization primitives used to implement PEP 703, such as one-time initialization and events.
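A conceptual sketch of that design follows: a one-byte lock whose uncontended path is a single compare-exchange, with contended waiters handed off to a parking-lot style API. The parkinglot_* functions below are hypothetical stand-ins, not the real _PyParkingLot calls.
```
/* Conceptual sketch of a one-byte lock; not CPython's implementation. */
#include <stdint.h>

#define UNLOCKED    0
#define LOCKED      1
#define HAS_PARKED  2   /* a waiter is (or may be) parked on this address */

typedef struct { uint8_t v; } onebyte_mutex;

/* Hypothetical parking-lot calls (stand-ins). */
void parkinglot_park(uint8_t *addr, uint8_t expected_value);
void parkinglot_unpark_one(uint8_t *addr);

static inline void
onebyte_lock(onebyte_mutex *m)
{
    uint8_t expected = UNLOCKED;
    /* Fast path: one compare-exchange when the lock is uncontended. */
    if (__atomic_compare_exchange_n(&m->v, &expected, LOCKED, 0,
                                    __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
        return;
    }
    /* Slow path: advertise a waiter, then park on the byte's address until
     * the exchange observes the lock as free. (Setting HAS_PARKED while
     * acquiring may cause one extra wakeup; kept simple for the sketch.) */
    while (__atomic_exchange_n(&m->v, LOCKED | HAS_PARKED, __ATOMIC_ACQUIRE)
           != UNLOCKED) {
        parkinglot_park(&m->v, LOCKED | HAS_PARKED);
    }
}

static inline void
onebyte_unlock(onebyte_mutex *m)
{
    uint8_t prev = __atomic_exchange_n(&m->v, UNLOCKED, __ATOMIC_RELEASE);
    if (prev & HAS_PARKED) {
        parkinglot_unpark_one(&m->v);   /* wake one parked waiter, if any */
    }
}
```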
This also includes tests and a mini benchmark in Tools/lockbench/lockbench.py to compare with the existing PyThread_type_lock.
Uncontended acquisition + release:
* Linux (x86-64): PyMutex: 11 ns, PyThread_type_lock: 44 ns
* macOS (arm64): PyMutex: 13 ns, PyThread_type_lock: 18 ns
* Windows (x86-64): PyMutex: 13 ns, PyThread_type_lock: 38 ns
PR Overview:
The primary purpose of this PR is to implement PyMutex, but there are a number of support pieces (described below).
* PyMutex: A 1-byte lock that doesn't require memory allocation to initialize and is generally faster than the existing PyThread_type_lock. The API is internal only for now.
* _PyParkingLot: A futex-like API based on the API of the same name in WebKit. Used to implement PyMutex.
* _PyRawMutex: A word-sized lock used to implement _PyParkingLot.
* PyEvent: A one time event. This was used a bunch in the "nogil" fork and is useful for testing the PyMutex implementation, so I've included it as part of the PR.
* pycore_llist.h: Defines common operations on a doubly-linked list. Not strictly necessary (could do the list operations manually), but they come up frequently in the "nogil" fork; see the sketch after this list. (Similar to https://man.freebsd.org/cgi/man.cgi?queue)
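A minimal intrusive doubly-linked list in that style; the names are illustrative, not the actual pycore_llist.h API.
```
/* Illustrative intrusive doubly-linked list, queue(3)-style. */
#include <stddef.h>

struct dlist_node {
    struct dlist_node *prev, *next;
};

/* An empty list is a head node pointing at itself. */
static inline void dlist_init(struct dlist_node *head) {
    head->prev = head->next = head;
}

static inline void dlist_insert_tail(struct dlist_node *head, struct dlist_node *node) {
    node->prev = head->prev;
    node->next = head;
    head->prev->next = node;
    head->prev = node;
}

static inline void dlist_remove(struct dlist_node *node) {
    node->prev->next = node->next;
    node->next->prev = node->prev;
    node->prev = node->next = NULL;
}
```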
---------
Co-authored-by: Eric Snow <ericsnowcurrently@gmail.com>
Fix a race condition in "make regen-all". The deepfreeze.c source and
files generated by Argument Clinic are now generated or updated
before generating "global objects". Previously, some identifiers may
miss depending on the order in which these files were generated.
* "make regen-global-objects": Make sure that deepfreeze.c is
generated and up to date, and always run "make clinic".
* "make clinic" no longer runs generate_global_objects.py script.
* "make regen-deepfreeze" now only updates deepfreeze.c (C file).
It doesn't build deepfreeze.o (object) anymore.
* Remove misleading messages in "make regen-global-objects" and
"make clinic". They are now outdated, these commands are now
safe to use.
* Document generated files in Doc/using/configure.rst.
Co-authored-by: Erlend E. Aasland <erlend@python.org>
Statistics gathering is now off by default. Use the "-X pystats"
command line option or set the new PYTHONSTATS environment variable
to 1 to turn statistics gathering on at Python startup.
Statistics are no longer dumped at exit if statistics gathering was
off or statistics have been cleared.
Changes:
* Add PYTHONSTATS environment variable.
* sys._stats_dump() now returns False if statistics are not dumped
because they are all equal to zero.
* Add PyConfig._pystats member.
* Add tests on sys functions and on setting PyConfig._pystats to 1.
* Add Include/cpython/pystats.h and Include/internal/pycore_pystats.h
header files.
* Rename '_py_stats' variable to '_Py_stats'.
* Exclude Include/cpython/pystats.h from the Py_LIMITED_API.
* Move pystats.h include from object.h to Python.h.
* Add _Py_StatsOn() and _Py_StatsOff() functions. Remove
'_py_stats_struct' variable from the API: make it static in
specialize.c.
* Document API in Include/pystats.h and Include/cpython/pystats.h.
* Complete pystats documentation in Doc/using/configure.rst.
* Don't write "all zeros" stats: if _stats_off() and _stats_clear()
or _stats_dump() were called.
* _PyEval_Fini() now always calls _Py_PrintSpecializationStats(), which
does nothing if stats are all zeros.
Co-authored-by: Michael Droettboom <mdboom@gmail.com>
This adds a new header that provides atomic operations on common data
types. The intention is that this will be exposed through Python.h,
although that is not the case yet. The only immediate use is in
the test file.
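As an illustration of the kind of operations such a header provides, written here with GCC/Clang builtins; the example_* names are not the actual CPython spellings.
```
/* Illustrative only: the general shape of the atomic operations. */
#include <stdint.h>

static inline int
example_atomic_add_int(int *obj, int value)
{
    /* Atomically add `value` and return the previous value. */
    return __atomic_fetch_add(obj, value, __ATOMIC_SEQ_CST);
}

static inline void *
example_atomic_load_ptr(void **obj)
{
    /* Atomically load a pointer-sized value. */
    return __atomic_load_n(obj, __ATOMIC_SEQ_CST);
}
```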
Co-authored-by: Sam Gross <colesbury@gmail.com>
Remove these private functions from the public C API:
* _PyRun_AnyFileObject()
* _PyRun_InteractiveLoopObject()
* _PyRun_SimpleFileObject()
* _Py_SourceAsString()
Move them to the internal C API: add a new pycore_pythonrun.h header
file. No longer export these functions.
Remove the private _Py_Identifier type and related private functions
from the public C API:
* _PyObject_GetAttrId()
* _PyObject_LookupSpecialId()
* _PyObject_SetAttrId()
* _PyType_LookupId()
* _Py_IDENTIFIER()
* _Py_static_string()
* _Py_static_string_init()
Move them to the internal C API: add a new pycore_identifier.h header
file. No longer export these functions.
* Move test_cppext to its own directory
* Rename setup_testcppext.py to setup.py
* Rename _testcppext.cpp to extension.cpp
* The source (extension.cpp) is now also copied by the test.
* Move Python scripts related to test_module to this new directory:
good_getattr.py and bad_getattrX.py scripts.
* Move Lib/test/test_module.py to Lib/test/test_module/__init__.py.
* Instead of calling get_identifiers_and_strings(), extract identifiers and strings from pycore_global_strings.h.
* Avoid ast.literal_eval(); it's very slow.
The _xxsubinterpreters module should not rely on internal API. Some of the functions it uses were recently moved there however. Here we move them back (and expose them properly).
Move the private _PyInterpreterID C API to the internal C API: add a
new pycore_interp_id.h header file.
Remove Include/interpreteridobject.h and
Include/cpython/interpreteridobject.h header files.
In the case of an empty FRAMEWORKALTINSTALLLAST, this patch prevents
leaving a stray line break and two tabs in the resulting Makefile.
Before change:
```
.PHONY: commoninstall
commoninstall: check-clean-src \
altbininstall libinstall inclinstall libainstall \
sharedinstall altmaninstall \
```
After change (with empty FRAMEWORKALTINSTALLLAST):
```
.PHONY: commoninstall
commoninstall: check-clean-src \
altbininstall libinstall inclinstall libainstall \
sharedinstall altmaninstall
```