The code for Tier 2 is now only compiled when configured
with `--enable-experimental-jit[=yes|interpreter]`.
We drop support for `PYTHON_UOPS` and `-X uops`,
but you can disable the interpreter or JIT
at runtime by setting `PYTHON_JIT=0`.
You can also build it without enabling it by default
using `--enable-experimental-jit=yes-off`;
enable with `PYTHON_JIT=1`.
On Windows, the `build.bat` script supports
`--experimental-jit`, `--experimental-jit-off`,
`--experimental-interpreter`.
In the C code, `_Py_JIT` is defined as before
when the JIT is enabled; the new variable
`_Py_TIER2` is defined when the JIT *or* the
interpreter is enabled. It is actually a bitmask:
1: JIT; 2: default-off; 4: interpreter.
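For illustration, a minimal sketch (not the actual build logic) of how the `_Py_TIER2` bitmask described above can be tested from C:

```c
/* Minimal sketch, not the actual build logic: interpreting the _Py_TIER2
 * bitmask described above (1: JIT; 2: default-off; 4: interpreter). */
#ifdef _Py_TIER2
#  if _Py_TIER2 & 1
     /* The copy-and-patch JIT backend is built. */
#  endif
#  if _Py_TIER2 & 2
     /* Tier 2 is built but off by default; set PYTHON_JIT=1 to enable it. */
#  endif
#  if _Py_TIER2 & 4
     /* The Tier 2 interpreter is built instead of the JIT. */
#  endif
#endif
```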
Introduce a unified 16-bit backoff counter type (``_Py_BackoffCounter``),
shared between the Tier 1 adaptive specializer and the Tier 2 optimizer. The
API used for adaptive specialization counters is changed but the behavior is
(supposed to be) identical.
The behavior of the Tier 2 counters is changed:
- There are no longer dynamic thresholds (we never varied these).
- All counters now use the same exponential backoff.
- The counter for ``JUMP_BACKWARD`` starts counting down from 16.
- The ``temperature`` in side exits starts counting down from 64.
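As a rough illustration of the countdown-plus-exponential-backoff behavior (names, field widths, and the cap are hypothetical; the real ``_Py_BackoffCounter`` packs both fields into 16 bits):

```c
/* Conceptual sketch only (names and layout hypothetical): a counter counts
 * down to zero, triggers, then restarts with an exponentially larger
 * threshold so that repeated failed optimization attempts become rarer. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint16_t value;     /* remaining countdown; starts at the initial
                           threshold (e.g. 16 for JUMP_BACKWARD, 64 for
                           side-exit temperature) */
    uint16_t backoff;   /* next threshold will be (1 << backoff) - 1 */
} backoff_counter_t;

static inline bool
counter_triggers(backoff_counter_t *c)
{
    if (c->value != 0) {
        c->value--;
        return false;
    }
    if (c->backoff < 12) {
        c->backoff++;               /* exponential backoff, capped */
    }
    c->value = (uint16_t)((1 << c->backoff) - 1);
    return true;                    /* threshold reached: try to optimize */
}
```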
These helpers make it easier to customize and inspect the config used to initialize interpreters, which is especially valuable in our tests. The PyInterpreterConfig dict conversion is modeled on the PyConfig API. As part of this PR I've also added a bunch of tests.
I added it quite a while ago as a strategy for managing interpreter lifetimes relative to the PEP 554 (now 734) implementation. Relatively recently I refactored that implementation to no longer rely on InterpreterID objects. Thus now I'm removing it.
Keep Tools/build/deepfreeze.py around (we may repurpose it for deepfreezing non-code objects),
and keep basic "clean" targets that remove the output of former deep-freeze activities,
to keep the build directories of current devs clean.
This adds a safe memory reclamation scheme based on FreeBSD's "GUS" and
quiescent state based reclamation (QSBR). The API provides a mechanism
for callers to detect when it is safe to free memory that may be
concurrently accessed by readers.
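A conceptual sketch of the idea, with all names hypothetical (the real API lives in the internal headers): readers periodically report a quiescent state, and memory retired at a given global sequence number may be freed once every reader has advanced past it.

```c
/* Conceptual sketch only; all names are hypothetical. Writers retire memory
 * tagged with a goal sequence number; readers report quiescent states; the
 * memory is freed once every reader has observed a sequence >= the goal. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

static _Atomic uint64_t write_seq = 1;              /* global sequence number */

typedef struct { _Atomic uint64_t seen; } reader_t; /* one per reader thread */

static uint64_t
retire_goal(void)                       /* writer: tag a deferred free */
{
    return atomic_fetch_add(&write_seq, 1) + 1;
}

static void
quiescent_state(reader_t *r)            /* reader: "I hold no stale pointers" */
{
    atomic_store(&r->seen, atomic_load(&write_seq));
}

static bool
safe_to_free(reader_t *readers, int n, uint64_t goal)
{
    for (int i = 0; i < n; i++) {
        if (atomic_load(&readers[i].seen) < goal) {
            return false;               /* some reader may still see the memory */
        }
    }
    return true;                        /* all readers past the goal: reclaim */
}
```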
These are intended to be used in places where atomics are required in
free-threaded builds but not in the default build. We don't want to
introduce the potential performance overhead of an atomic operation in the
default build.
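For example, a wrapper of the kind described here might look like this (the macro name is hypothetical; `_Py_atomic_load_ssize_relaxed` is one of the existing atomic helpers):

```c
/* Hypothetical wrapper name, sketching the pattern described above: an
 * atomic load in the free-threaded build, a plain (cheaper) load otherwise.
 * Assumes the pyatomic helpers are already in scope. */
#ifdef Py_GIL_DISABLED
#  define FT_LOAD_SSIZE_RELAXED(value) _Py_atomic_load_ssize_relaxed(&(value))
#else
#  define FT_LOAD_SSIZE_RELAXED(value) (value)
#endif
```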
Biased reference counting maintains two refcount fields in each object:
`ob_ref_local` and `ob_ref_shared`. The true refcount is the sum of these two
fields. In some cases, when refcounting operations are split across threads,
the ob_ref_shared field can be negative (although the total refcount must be
at least zero). In this case, the thread that decremented the refcount
requests that the owning thread give up ownership and merge the refcount
fields.
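A simplified sketch of the invariant, assuming the free-threaded object layout (the real ob_ref_shared word also carries flag bits in its low bits, which this ignores):

```c
/* Simplified sketch: the true refcount is the owner's unsynchronized count
 * plus the (atomically updated, possibly negative) shared count. Flag bits
 * stored in ob_ref_shared are ignored here; free-threaded layout assumed. */
#include <Python.h>

static Py_ssize_t
true_refcount(PyObject *op)
{
    return (Py_ssize_t)op->ob_ref_local + op->ob_ref_shared;
}
```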
Add an option (--enable-experimental-jit for configure-based builds
or --experimental-jit for PCbuild-based ones) to build an
*experimental* just-in-time compiler, based on copy-and-patch (https://fredrikbk.com/publications/copy-and-patch.pdf).
See Tools/jit/README.md for more information on how to install the required build-time tooling.
* gh-112529: Implement GC for free-threaded builds
This implements a mark and sweep GC for the free-threaded builds of
CPython. The implementation relies on mimalloc to find GC tracked
objects (i.e., "containers").
* gh-112529: Use GC heaps for GC allocations in free-threaded builds
The free-threaded build's garbage collector implementation will need to
find GC objects by traversing mimalloc heaps. This hooks up the
allocation calls with the correct heaps by using a thread-local
"current_obj_heap" variable.
* Refactor out setting heap based on type
This splits part of Modules/gcmodule.c into Python/gc.c, which
now contains the core garbage collection implementation. The Python
module remains in the Modules/gcmodule.c file.
This ensures the source directory is not modified at build time, and different builds (e.g. different versions or GIL vs no-GIL) do not have conflicts.
Every PyThreadState instance is now actually a _PyThreadStateImpl.
It is safe to cast from `PyThreadState*` to `_PyThreadStateImpl*` and back.
The _PyThreadStateImpl will contain fields that we do not want to expose
in the public C API.
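The layout pattern this relies on, roughly (field list elided; see the internal header for the real definition):

```c
/* Rough sketch of the embedding pattern described above: the public
 * PyThreadState is the first member, so casting between PyThreadState*
 * and _PyThreadStateImpl* in either direction is safe. */
typedef struct _PyThreadStateImpl {
    PyThreadState base;     /* must be the first member */
    /* ... fields deliberately not exposed in the public C API ... */
} _PyThreadStateImpl;
```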
Critical sections are helpers to replace the global interpreter lock
with finer grained locking. They provide similar guarantees to the GIL
and avoid the deadlock risk that plain locking involves. Critical
sections are implicitly ended whenever the GIL would be released. They
are resumed when the GIL would be acquired. Nested critical sections
behave as if the sections were interleaved.
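A hedged usage sketch (macro spellings as in the internal pycore_critical_section.h header; treat the details as illustrative):

```c
/* Illustrative sketch: protect an operation on `list` with its per-object
 * critical section instead of relying on the GIL. */
#include "pycore_critical_section.h"

static int
append_locked(PyObject *list, PyObject *item)
{
    int ret;
    Py_BEGIN_CRITICAL_SECTION(list);    /* acquires the object's mutex */
    ret = PyList_Append(list, item);    /* implicitly suspended wherever the
                                           GIL would have been released */
    Py_END_CRITICAL_SECTION();
    return ret;
}
```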
- There is no longer a separate Python/executor.c file.
- Conventions in Python/bytecodes.c are slightly different: instead of `goto error`,
you must use `GOTO_ERROR(error)` (same for others like `unused_local_error`).
- The `TIER_ONE` and `TIER_TWO` symbols are only valid in the generated (.c.h) files.
- In Lib/test/support/__init__.py, `Py_C_RECURSION_LIMIT` is imported from `_testcapi`.
- On Windows, in debug mode, stack allocation grows from 8MiB to 12MiB.
- **Beware!** This changes the env vars to enable uops and their debugging
to `PYTHON_UOPS` and `PYTHON_LLTRACE`.
This is partly to clear this stuff out of pystate.c, but also in preparation for moving some code out of _xxsubinterpretersmodule.c. This change also moves this stuff to the internal API (new: Include/internal/pycore_crossinterp.h). @vstinner did this previously and I undid it. Now I'm re-doing it. :/
* Add mimalloc v2.12
Modified src/alloc.c to remove the include of alloc-override.c and to not
compile the new handler.
Did not include the following files:
- include/mimalloc-new-delete.h
- include/mimalloc-override.h
- src/alloc-override-osx.c
- src/alloc-override.c
- src/static.c
- src/region.c
mimalloc is thread-safe and shares a single heap across all runtimes,
so finalization and retrieving globally allocated blocks across all
runtimes work differently.
* mimalloc: minimal changes for use in Python:
- remove debug spam for freeing large allocations
- use same bytes (0xDD) for freed allocations in CPython and mimalloc
This is important for the test_capi debug memory tests
* Don't export mimalloc symbols in libpython.
* Enable mimalloc as Python allocator option.
* Add mimalloc MIT license.
* Log mimalloc in Lib/test/pythoninfo.py.
* Document new mimalloc support.
* Use macro defs for exports as done in:
https://github.com/python/cpython/pull/31164/
Co-authored-by: Sam Gross <colesbury@gmail.com>
Co-authored-by: Christian Heimes <christian@python.org>
Co-authored-by: Victor Stinner <vstinner@python.org>
Move the following private functions and structures to
pycore_modsupport.h internal C API:
* _PyArg_BadArgument()
* _PyArg_CheckPositional()
* _PyArg_NoKeywords()
* _PyArg_NoPositional()
* _PyArg_ParseStack()
* _PyArg_ParseStackAndKeywords()
* _PyArg_Parser structure
* _PyArg_UnpackKeywords()
* _PyArg_UnpackKeywordsWithVararg()
* _PyArg_UnpackStack()
* _Py_ANY_VARARGS()
Changes:
* Python/getargs.h now includes pycore_modsupport.h to export
functions.
* clinic.py now adds pycore_modsupport.h when one of these functions
is used.
* Add pycore_modsupport.h includes when a C extension uses one of
these functions.
* Define Py_BUILD_CORE_MODULE in C extensions which now include
  pycore_modsupport.h directly or indirectly (via code generated by
  Argument Clinic), as shown in the sketch after this list:
* _csv
* _curses_panel
* _dbm
* _gdbm
* _multiprocessing.posixshmem
* _sqlite.row
* _statistics
* grp
* resource
* syslog
* _testcapi: bad_get() now uses the METH_VARARGS calling convention
instead of METH_FASTCALL. Replace _PyArg_UnpackStack() with
PyArg_ParseTuple().
* _testcapi: add PYTESTCAPI_NEED_INTERNAL_API macro which is defined
by _testcapi sub-modules which need the internal C API
(pycore_modsupport.h): exceptions.c, float.c, vectorcall.c,
watchers.c.
* Remove Include/cpython/modsupport.h header file.
Include/modsupport.h no longer includes the removed header file.
* Fix mypy issues in clinic.py
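As referenced above, roughly what an affected extension's source now does before the internal header is pulled in (a sketch; the exact includes vary per module):

```c
/* Sketch: opting an extension module into the internal C API so that the
 * pycore_modsupport.h declarations (included directly or via code generated
 * by Argument Clinic) are visible. */
#ifndef Py_BUILD_CORE_MODULE
#  define Py_BUILD_CORE_MODULE
#endif
#include "Python.h"
#include "pycore_modsupport.h"   /* _PyArg_CheckPositional(), _PyArg_Parser, ... */
```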
* The lexer, which includes the actual lexeme-producing logic, goes into
the `lexer` directory.
* The wrappers, one wrapper per input mode (file, string, utf-8, and
readline), go into the `tokenizer` directory and include logic for
creating a lexer instance and managing the buffer for different modes.
---------
Co-authored-by: Pablo Galindo <pablogsal@gmail.com>
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
PyMutex is a one-byte lock with fast, inlineable lock and unlock functions for the common uncontended case. The design is based on WebKit's WTF::Lock.
PyMutex is built using the _PyParkingLot APIs, which provide a cross-platform futex-like API (based on WebKit's WTF::ParkingLot). This internal API will be used to build other synchronization primitives needed to implement PEP 703, such as one-time initialization and events.
This also includes tests and a mini benchmark in Tools/lockbench/lockbench.py to compare with the existing PyThread_type_lock.
Uncontended acquisition + release:
* Linux (x86-64): PyMutex: 11 ns, PyThread_type_lock: 44 ns
* macOS (arm64): PyMutex: 13 ns, PyThread_type_lock: 18 ns
* Windows (x86-64): PyMutex: 13 ns, PyThread_type_lock: 38 ns
PR Overview:
The primary purpose of this PR is to implement PyMutex, but there are a number of support pieces (described below).
* PyMutex: A 1-byte lock that doesn't require memory allocation to initialize and is generally faster than the existing PyThread_type_lock. The API is internal only for now.
* _PyParkingLot: A futex-like API based on the API of the same name in WebKit. Used to implement PyMutex.
* _PyRawMutex: A word-sized lock used to implement _PyParkingLot.
* PyEvent: A one-time event. This was used a bunch in the "nogil" fork and is useful for testing the PyMutex implementation, so I've included it as part of the PR.
* pycore_llist.h: Defines common operations on doubly-linked lists. Not strictly necessary (the list operations could be done manually), but they come up frequently in the "nogil" fork. (Similar to https://man.freebsd.org/cgi/man.cgi?queue)
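A hedged usage sketch of the new lock (internal API, declared in pycore_lock.h):

```c
/* Sketch: a PyMutex needs no allocation or init call beyond zeroing, and the
 * uncontended lock/unlock paths are small enough to be inlined. */
#include "pycore_lock.h"

static PyMutex cache_mutex;         /* one byte, zero-initialized */
static Py_ssize_t cache_hits;

static void
record_hit(void)
{
    PyMutex_Lock(&cache_mutex);     /* parks on _PyParkingLot only if contended */
    cache_hits++;
    PyMutex_Unlock(&cache_mutex);
}
```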
---------
Co-authored-by: Eric Snow <ericsnowcurrently@gmail.com>
Statistics gathering is now off by default. Use the "-X pystats"
command line option or set the new PYTHONSTATS environment variable
to 1 to turn statistics gathering on at Python startup.
Statistics are no longer dumped at exit if statistics gathering was
off or statistics have been cleared.
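For an embedding application, the new PyConfig._pystats member offers the same switch; a minimal sketch, assuming a build configured with --enable-pystats (stats support is only compiled in then):

```c
/* Minimal sketch: turn statistics gathering on at startup from C, the
 * embedder's equivalent of "-X pystats" / PYTHONSTATS=1. */
#include <Python.h>

int
main(void)
{
    PyConfig config;
    PyConfig_InitIsolatedConfig(&config);
    config._pystats = 1;                        /* member added by this change */
    PyStatus status = Py_InitializeFromConfig(&config);
    PyConfig_Clear(&config);
    if (PyStatus_Exception(status)) {
        Py_ExitStatusException(status);
    }
    PyRun_SimpleString("print(sum(range(100)))");
    return Py_FinalizeEx();                     /* stats dumped here if non-zero */
}
```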
Changes:
* Add PYTHONSTATS environment variable.
* sys._stats_dump() now returns False if statistics are not dumped
because they are all equal to zero.
* Add PyConfig._pystats member.
* Add tests on sys functions and on setting PyConfig._pystats to 1.
* Add Include/cpython/pystats.h and Include/internal/pycore_pystats.h
header files.
* Rename '_py_stats' variable to '_Py_stats'.
* Exclude Include/cpython/pystats.h from the Py_LIMITED_API.
* Move pystats.h include from object.h to Python.h.
* Add _Py_StatsOn() and _Py_StatsOff() functions. Remove
'_py_stats_struct' variable from the API: make it static in
specialize.c.
* Document API in Include/pystats.h and Include/cpython/pystats.h.
* Complete pystats documentation in Doc/using/configure.rst.
* Don't write "all zeros" stats, e.g. if _stats_off() and _stats_clear()
  or _stats_dump() were called.
* _PyEval_Fini() now always calls _Py_PrintSpecializationStats(), which
  does nothing if stats are all zeros.
Co-authored-by: Michael Droettboom <mdboom@gmail.com>
This adds a new header that provides atomic operations on common data
types. The intention is that this will be exposed through Python.h,
although that is not the case yet. The only immediate use is in
the test file.
Co-authored-by: Sam Gross <colesbury@gmail.com>
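A hedged usage sketch of the new header (the exact include path and helper spellings may differ slightly; the point is the shape of the API):

```c
/* Sketch: atomic operations on a plain int using the new header's helpers.
 * Include path is illustrative; the header is not yet pulled in by Python.h. */
#include "pyatomic.h"

static int counter;

static void
bump(void)
{
    _Py_atomic_add_int(&counter, 1);        /* atomic read-modify-write */
}

static int
read_counter(void)
{
    return _Py_atomic_load_int(&counter);   /* atomic load */
}
```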
Remove these private functions from the public C API:
* _PyRun_AnyFileObject()
* _PyRun_InteractiveLoopObject()
* _PyRun_SimpleFileObject()
* _Py_SourceAsString()
Move them to the internal C API: add a new pycore_pythonrun.h header
file. No longer export these functions.
Remove the private _Py_Identifier type and related private functions
from the public C API:
* _PyObject_GetAttrId()
* _PyObject_LookupSpecialId()
* _PyObject_SetAttrId()
* _PyType_LookupId()
* _Py_IDENTIFIER()
* _Py_static_string()
* _Py_static_string_init()
Move them to the internal C API: add a new pycore_identifier.h header
file. No longer export these functions.
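Code that still relies on this API must now opt in to the internal headers; a hedged sketch:

```c
/* Sketch: using the now-internal identifier API requires the internal C API. */
#ifndef Py_BUILD_CORE_MODULE
#  define Py_BUILD_CORE_MODULE
#endif
#include "Python.h"
#include "pycore_identifier.h"

static PyObject *
get_name(PyObject *obj)
{
    _Py_IDENTIFIER(name);                        /* static, lazily interned "name" */
    return _PyObject_GetAttrId(obj, &PyId_name); /* like PyObject_GetAttrString() */
}
```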
The _xxsubinterpreters module should not rely on internal API. Some of the functions it uses were recently moved there, however. Here we move them back (and expose them properly).
Move the private _PyInterpreterID C API to the internal C API: add a
new pycore_interp_id.h header file.
Remove Include/interpreteridobject.h and
Include/cpython/interpreteridobject.h header files.
Remove the following functions from the C API, move them to the internal C
API: add a new pycore_modsupport.h internal header file:
* PyModule_CreateInitialized()
* _PyArg_NoKwnames()
* _Py_VaBuildStack()
No longer export these functions.