`Py_DECREF` and `PyStackRef_CLOSE` are now implemented as macros in the
free-threaded build in ceval.c. There are two motivations:
* MSVC has problems inlining functions in ceval.c in the PGO build.
* We will want to mark escaping calls in order to spill the stack
pointer in ceval.c and we will want to do this around `_Py_Dealloc`
(or `_Py_MergeZeroLocalRefcount` or `_Py_DecRefShared`), not around
the entire `Py_DECREF` or `PyStackRef_CLOSE` call.
The free-threaded GC now visits interpreter stacks to keep objects
that use deferred reference counting alive.
Interpreter frames are zero initialized in the free-threaded GC so
that the GC doesn't see garbage data. This is a temporary measure
until stack spilling around escaping calls is implemented.
Co-authored-by: Ken Jin <kenjin@python.org>
As of 529a160 (gh-118204), building with HAVE_DYNAMIC_LOADING stopped working. This is a minimal fix just to get builds working again. There are actually a number of long-standing deficiencies with HAVE_DYNAMIC_LOADING builds that need to be resolved separately.
There were still a number of gaps in the tests, including not looking
at all the builtin types and not checking wrappers in subinterpreters
that weren't in the main interpreter. This fixes all that.
I considered incorporating the names of the PyTypeObject fields
(a la gh-122866), but figured doing so doesn't add much value.
This replaces `_PyList_FromArraySteal` with `_PyList_FromStackRefSteal`.
It's functionally equivalent, but takes a `_PyStackRef` array instead of
an array of `PyObject` pointers.
Co-authored-by: Ken Jin <kenjin@python.org>
Return -1 and set an exception on error; return 0 if the iterator is
exhausted, and return 1 if the next item was fetched successfully.
Prefer this API to PyIter_Next(), which requires the caller to use
PyErr_Occurred() to differentiate between iterator exhaustion and errors.
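A minimal usage sketch based on the contract above (`it` is an iterator obtained elsewhere, e.g. via `PyObject_GetIter`; `consume_item` is a hypothetical consumer):

```c
PyObject *item;
int rc;
while ((rc = PyIter_NextItem(it, &item)) > 0) {
    /* rc == 1: we own a strong reference to item */
    consume_item(item);
    Py_DECREF(item);
}
if (rc < 0) {
    return NULL;  /* error: an exception is already set */
}
/* rc == 0: the iterator is exhausted and no exception is set */
```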
Co-authored-by: Irit Katriel <iritkatriel@yahoo.com>
The free-threaded build partially stores heap type reference counts in
a distributed manner, in per-thread arrays. This avoids reference count
contention when creating or destroying instances.
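A conceptual sketch of the technique (names here are illustrative, not the real internals): each thread keeps a private array of refcount deltas indexed by a per-type id, so hot incref/decref traffic never touches the shared field:

```c
typedef struct {
    Py_ssize_t *refcounts;   /* this thread's deltas, indexed by type id */
    Py_ssize_t size;
} local_type_refcounts;      /* hypothetical per-thread structure */

static void
incref_heap_type(local_type_refcounts *local, PyTypeObject *tp,
                 Py_ssize_t type_id)
{
    if (type_id >= 0 && type_id < local->size) {
        local->refcounts[type_id]++;   /* thread-private: no contention */
    }
    else {
        Py_INCREF(tp);                 /* fall back to the shared refcount */
    }
}
```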
Co-authored-by: Ken Jin <kenjin@python.org>
* gh-122188: Move magic number to its own file
* Add versionadded directive
* Do work in C
* Integrate launcher.c
* Make _pyc_magic_number private
* Remove metadata
* Move sys.implementation -> _imp
* Modernize comment
* Move _RAW_MAGIC_NUMBER to the C side as well
* _pyc_magic_number -> pyc_magic_number
* Remove unused import
* Update docs
* Apply suggestions from code review
Co-authored-by: Eric Snow <ericsnowcurrently@gmail.com>
* Fix typo in tests
---------
Co-authored-by: Eric Snow <ericsnowcurrently@gmail.com>
Change _PyLong_IsCompact() and _PyLong_CompactValue() parameter type
from 'PyObject*' to 'const PyObject*'. Avoid the Py_TYPE() macro,
which does not support a const parameter.
* gh-120974: Make _asyncio._leave_task atomic in the free-threaded build
Update `_PyDict_DelItemIf` to allow for an argument to be passed to the
predicate.
This refactors asyncio to use the common freelist helper functions and
macros. As a side effect, the freelist for _asyncio.Future is now
re-enabled in the free-threaded build.
This combines and updates our freelist handling to use a consistent
implementation. Objects in the freelist are linked together using the
first word of the memory block.
If configured with freelists disabled, these operations are essentially
no-ops.
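A sketch of the first-word linking scheme (simplified; these are not the actual helper names):

```c
struct freelist {
    void *head;            /* most recently freed block, or NULL */
    Py_ssize_t numfree;    /* blocks currently cached */
    Py_ssize_t maxfree;    /* cache capacity */
};

static inline void *
freelist_pop(struct freelist *fl)
{
    void *block = fl->head;
    if (block != NULL) {
        /* the next-free link is stored in the block's first word */
        fl->head = *(void **)block;
        fl->numfree--;
    }
    return block;
}

static inline int
freelist_push(struct freelist *fl, void *block)
{
    if (fl->numfree >= fl->maxfree) {
        return 0;  /* cache full: caller frees the block normally */
    }
    *(void **)block = fl->head;
    fl->head = block;
    fl->numfree++;
    return 1;
}
```

Reusing the first word means a freed block needs no side storage at all; the only constraint is that blocks are at least pointer-sized.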
When builtin static types are initialized for a subinterpreter, various "tp" slots have already been inherited (for the main interpreter). This was interfering with the logic in add_operators() (in Objects/typeobject.c), causing a wrapper to get created when it shouldn't. This change fixes that by preserving the original data from the static type struct and checking against it.
The `_PySeqLock_EndRead` function needs an acquire fence to ensure that
the load of the sequence happens after any loads within the read side
critical section. The missing fence can trigger bugs on macOS arm64.
Additionally, we need a release fence in `_PySeqLock_LockWrite` to
ensure that the sequence update is visible before any modifications to
the cache entry.
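For reference, the classic seqlock shape with both fences in place, written with C11 atomics (a simplified sketch, not the actual `_PySeqLock` code):

```c
#include <stdatomic.h>
#include <stdint.h>

typedef struct { _Atomic uint32_t sequence; } seqlock_t;

static uint32_t
seqlock_begin_read(seqlock_t *lk)
{
    uint32_t seq;
    /* spin until no writer is active (sequence is even) */
    while ((seq = atomic_load_explicit(&lk->sequence,
                                       memory_order_acquire)) & 1) {
    }
    return seq;
}

static int
seqlock_end_read(seqlock_t *lk, uint32_t previous)
{
    /* Acquire fence: the loads inside the read-side critical section must
       complete before the sequence is re-checked, or a torn read could go
       unnoticed. This is the fence that was missing in _PySeqLock_EndRead. */
    atomic_thread_fence(memory_order_acquire);
    return atomic_load_explicit(&lk->sequence,
                                memory_order_relaxed) == previous;
}

static void
seqlock_lock_write(seqlock_t *lk)
{
    /* writers are assumed to be serialized by an external lock */
    uint32_t seq = atomic_load_explicit(&lk->sequence, memory_order_relaxed);
    atomic_store_explicit(&lk->sequence, seq + 1, memory_order_relaxed);
    /* Release fence: the odd ("write in progress") sequence value must be
       visible before any modification of the protected data. */
    atomic_thread_fence(memory_order_release);
}
```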
Add more offsets to _Py_DebugOffsets
We add a few more offsets that are required by some out-of-process
tools, such as [Austin](https://github.com/p403n1x87/austin).
Refactor the fast Unicode hash check into `_PyObject_HashFast` and use relaxed
atomic loads in the free-threaded build.
After this change, TSAN no longer reports data races for this function.
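Roughly, the helper looks like this (a sketch approximating the internal code; the relaxed load matters in the free-threaded build, where another thread may be caching the hash concurrently):

```c
static inline Py_hash_t
object_hash_fast(PyObject *op)   /* sketch of _PyObject_HashFast */
{
    if (PyUnicode_CheckExact(op)) {
        /* A relaxed atomic load is sufficient: the cached hash is either
           -1 (not yet computed) or the final value, and never changes
           again once set. */
        Py_hash_t hash = _Py_atomic_load_ssize_relaxed(
            &_PyASCIIObject_CAST(op)->hash);
        if (hash != -1) {
            return hash;
        }
    }
    return PyObject_Hash(op);  /* slow path: compute (and cache) the hash */
}
```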
1. Use pkg-config to check for ncursesw/panelw. If that fails, use
pkg-config to check for ncurses/panel.
2. Regardless of pkg-config output, search for curses/panel headers, so
we're sure we have all defines in pyconfig.h.
3. Regardless of pkg-config output, check if libncurses or libncursesw
contains the 'initscr' symbol; if it does _and_ pkg-config failed
earlier, add the resulting -llib linker option to CURSES_LIBS.
Ditto for 'update_panels' and PANEL_LIBS.
4. Wrap the rest of the checks with WITH_SAVE_ENV and make sure we're
using updated LIBS and CPPFLAGS for those.
Add the PY_CHECK_CURSES convenience macro.
On Windows, test_cext and test_cppext now pass the /WX flag to the MSC
compiler to treat all compiler warnings as errors. In verbose mode,
these tests now log the compiler commands to help debugging.
Change Py_BUILD_ASSERT_EXPR implementation on Windows to avoid a
compiler warning about an unnamed structure.
Fix warnings when using the -Wimplicit-fallthrough compiler flag.
Explicitly annotate "fall through" switch cases with a new
_Py_FALLTHROUGH macro, which uses __attribute__((fallthrough)) if
available, and replace the existing "fall through" comments with it.
Add a _Py__has_attribute() macro, and no longer define __has_attribute()
when the compiler doesn't provide it. Also move _Py__has_builtin() to
the top of pyport.h.
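Usage looks like this (hypothetical example):

```c
static int
count_args(int nargs, int a, int b)
{
    int total = 0;
    switch (nargs) {
    case 2:
        total += b;
        _Py_FALLTHROUGH;  /* replaces a "fall through" comment */
    case 1:
        total += a;
        break;
    default:
        break;
    }
    return total;
}
```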
Co-Authored-By: Nikita Sobolev <mail@sobolevn.me>
This PR sets up tagged pointers for CPython.
The general idea is to create a separate struct _PyStackRef for everything on the evaluation stack to store the bits. This forces the C compiler to warn us if we try to cast things or pull things out of the struct directly.
Only for free threading: We tag the low bit if something is deferred - that means we skip incref and decref operations on it. This behavior may change in the future if Mark's plans to defer all objects in the interpreter loop pan out.
This implies a strict stack reference discipline is required. ALL incref and decref operations on stackrefs must use the stackref variants. It is unsafe to untag something then do normal incref/decref ops on it.
The new incref and decref variants are called dup and close. They mimic a "handle" API operating on these stackrefs.
Please read Include/internal/pycore_stackref.h for more information!
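In condensed form, the idea looks roughly like this (a sketch distilled from the description above; the real definitions are in pycore_stackref.h):

```c
typedef union _PyStackRef {
    uintptr_t bits;    /* a PyObject* plus tag bits in the low bits */
} _PyStackRef;

#define Py_TAG_DEFERRED 1  /* low bit set: refcounting is deferred */

static inline PyObject *
stackref_as_object(_PyStackRef ref)
{
    return (PyObject *)(ref.bits & ~(uintptr_t)Py_TAG_DEFERRED);
}

/* the "incref" variant: a no-op for deferred references */
static inline _PyStackRef
stackref_dup(_PyStackRef ref)
{
    if (!(ref.bits & Py_TAG_DEFERRED)) {
        Py_INCREF(stackref_as_object(ref));
    }
    return ref;
}

/* the "decref" variant: likewise skipped when deferred */
static inline void
stackref_close(_PyStackRef ref)
{
    if (!(ref.bits & Py_TAG_DEFERRED)) {
        Py_DECREF(stackref_as_object(ref));
    }
}
```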
---------
Co-authored-by: Mark Shannon <9448417+markshannon@users.noreply.github.com>
Remove the const qualifier from the argument of these functions:
* _PyLong_IsCompact()
* _PyLong_CompactValue()
The Py_TYPE() argument is not const.
Fix the compiler warning:
Include/cpython/longintrepr.h: In function ‘_PyLong_CompactValue’:
Include/pyport.h:19:31: error: cast discards ‘const’ qualifier from pointer target type [-Werror=cast-qual]
(...)
Include/cpython/longintrepr.h:133:30: note: in expansion of macro ‘Py_TYPE’
assert(PyType_HasFeature(Py_TYPE(op), Py_TPFLAGS_LONG_SUBCLASS));
* linked list
* add tail optimization to linked list
* wip
* wip
* wip
* more fixes
* finally it works
* add tests
* remove weakreflist
* add some comments
* reduce code duplication in _asynciomodule.c
* address some review comments
* add invariants about the state of the linked list
* add better explanation
* clinic regen
* reorder branches for better branch prediction
* Update Modules/_asynciomodule.c
* Apply suggestions from code review
Co-authored-by: Itamar Oren <itamarost@gmail.com>
* fix capturing of eager tasks
* add comment to task finalization
* fix tests and couple C implementation to C task
improved linked-list logic and more comments
* fix test
---------
Co-authored-by: Itamar Oren <itamarost@gmail.com>
This makes the following macros public as part of the non-limited C-API for
locking a single object or two objects at once.
* `Py_BEGIN_CRITICAL_SECTION(op)` / `Py_END_CRITICAL_SECTION()`
* `Py_BEGIN_CRITICAL_SECTION2(a, b)` / `Py_END_CRITICAL_SECTION2()`
The supporting functions and structs used by the macros are also exposed for
cases where C macros are not available.
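For example, a minimal sketch of the single-object form (the function name is hypothetical):

```c
/* Appends while holding the list's per-object lock in the free-threaded
   build; on the default (GIL) build the macros are essentially no-ops. */
static int
append_locked(PyObject *list, PyObject *item)
{
    int rc;
    Py_BEGIN_CRITICAL_SECTION(list);
    rc = PyList_Append(list, item);
    Py_END_CRITICAL_SECTION();
    return rc;
}
```

The two-object form, `Py_BEGIN_CRITICAL_SECTION2(a, b)`, locks both objects at once while avoiding lock-ordering deadlocks.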
* Add an InternalDocs file describing how interning should work and how to use it.
* Add internal functions to *explicitly* request what kind of interning is done:
- `_PyUnicode_InternMortal`
- `_PyUnicode_InternImmortal`
- `_PyUnicode_InternStatic`
* Switch uses of `PyUnicode_InternInPlace` to those.
* Disallow using `_Py_SetImmortal` on strings directly.
You should use `_PyUnicode_InternImmortal` instead:
- Strings should be interned before immortalization; otherwise you may
  end up interning an immortalized copy.
- `_Py_SetImmortal` doesn't handle the `SSTATE_INTERNED_MORTAL` to
`SSTATE_INTERNED_IMMORTAL` update, and those flags can't be changed in
backports, as they are now part of public API and version-specific ABI.
* Add private `_only_immortal` argument for `sys.getunicodeinternedsize`, used in refleak test machinery.
* Make sure the statically allocated string singletons are unique. This means these sets are now disjoint:
- `_Py_ID`
- `_Py_STR` (including the empty string)
- one-character latin-1 singletons
Now, when you intern a singleton, that exact singleton will be interned.
* Add a `_Py_LATIN1_CHR` macro, use it instead of `_Py_ID`/`_Py_STR` for one-character latin-1 singletons everywhere (including Clinic).
* Intern `_Py_STR` singletons at startup.
* For free-threaded builds, intern `_Py_LATIN1_CHR` singletons at startup.
* Beef up the tests. Cover internal details (marked with `@cpython_only`).
* Add lots of assertions.
Co-Authored-By: Eric Snow <ericsnowcurrently@gmail.com>
This exposes `PyUnstable_Object_ClearWeakRefsNoCallbacks` as an unstable
C-API function to provide a thread-safe mechanism for clearing weakrefs
without executing callbacks.
Some C-API extensions need to clear weakrefs without calling callbacks,
such as after running finalizers like we do in subtype_dealloc.
Previously they could use `_PyWeakref_ClearRef` on each weakref, but
that's not thread-safe in the free-threaded build.
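A sketch of the intended usage in a type's tp_dealloc, mirroring what subtype_dealloc does (simplified, hypothetical type):

```c
static void
mytype_dealloc(PyObject *self)
{
    /* run any finalizer first; the object may be resurrected */
    if (PyObject_CallFinalizerFromDealloc(self) < 0) {
        return;
    }
    /* clear weakrefs without invoking callbacks; thread-safe in the
       free-threaded build, unlike per-weakref _PyWeakref_ClearRef */
    if (PyType_SUPPORTS_WEAKREFS(Py_TYPE(self))) {
        PyUnstable_Object_ClearWeakRefsNoCallbacks(self);
    }
    Py_TYPE(self)->tp_free(self);
}
```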
Co-authored-by: Petr Viktorin <encukou@gmail.com>
In gh-120009 I used an atexit hook to finalize the _datetime module's static types at interpreter shutdown. However, atexit hooks are executed very early in finalization, which is a problem in the few cases where a subclass of one of those static types is still alive until the final GC collection. The static builtin types don't have this problem because they are finalized toward the end, after the final GC collection. To avoid the problem for _datetime, I have applied a similar approach here.
Also, credit goes to @mgorny and @neonene for the new tests.
FYI, I would have liked to take a slightly cleaner approach with managed static types, but wanted to get a smaller fix in first for the sake of backporting. I'll circle back to the cleaner approach with a future change on the main branch.
The _strptime module object was cached in a static local variable (in the datetime.strptime() implementation). That's a problem when it crosses isolation boundaries, such as when the runtime is reinitialized or between interpreters. This change fixes the problem by dropping the static variable, instead always relying on the normal sys.modules cache (via PyImport_Import()).
This adds a `_PyRecursiveMutex` type based on `PyMutex` and uses that
for the import lock. This fixes some data races in the free-threaded
build and generally simplifies the import lock code.
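The shape of the new type is roughly as follows (a sketch using C11 atomics in place of CPython's internal atomic helpers; see pycore_lock.h for the real thing):

```c
#include <stdatomic.h>

typedef struct {
    PyMutex mutex;                       /* underlying non-recursive lock */
    _Atomic unsigned long long thread;   /* owner's thread id, 0 if free */
    size_t level;                        /* extra acquisitions by the owner */
} recursive_mutex_t;

static void
recursive_mutex_lock(recursive_mutex_t *m)
{
    unsigned long long me = PyThread_get_thread_ident_ex();
    if (atomic_load_explicit(&m->thread, memory_order_relaxed) == me) {
        m->level++;          /* we already hold it: just count the reentry */
        return;
    }
    PyMutex_Lock(&m->mutex);
    atomic_store_explicit(&m->thread, me, memory_order_relaxed);
}

static void
recursive_mutex_unlock(recursive_mutex_t *m)
{
    if (m->level > 0) {
        m->level--;          /* still held: undo one reentry */
        return;
    }
    atomic_store_explicit(&m->thread, 0, memory_order_relaxed);
    PyMutex_Unlock(&m->mutex);
}
```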
The `_PyThreadState_Bind()` function is called before the first
`PyEval_AcquireThread()`, so it's not synchronized with the stop-the-world
GC. We had a race where `gc_visit_heaps()` might visit a thread's
heap while it's being initialized.
Use a simple atomic int to avoid visiting heaps for threads that are not
yet fully initialized (i.e., before `tstate_mimalloc_bind()` is called).
The race was reproducible by running:
`python Lib/test/test_importlib/partial/pool_in_threads.py`.
We make use of the same mechanism that we use for the static builtin types. This required a few tweaks.
The relevant code could use some cleanup but I opted to avoid the significant churn in this change. I'll tackle that separately.
This change is the final piece needed to make _datetime support multiple interpreters. I've updated the module slot accordingly.
I was able to make use of the existing datetime_state struct, but there was one tricky thing I had to sort out. We mostly aren't converting to heap types, so we can't use things like PyType_GetModuleByDef() to look up the module state. The solution I came up with is somewhat novel, but I consider it straightforward. Also, it shouldn't have much impact on performance.
In summary, the main changes here are:
* I've added some macros to help hide how various objects relate to module state
* as a solution to the module state lookup problem, I've stored the last loaded module on the current interpreter's internal dict (actually a weakref)
* if the static type method is used after the module has been deleted, it is reloaded
* to avoid extra work when loading the module, we directly copy the objects (new refs only) from the old module state into the new state if the old module hasn't been deleted yet
* during module init we set various objects on the static types' __dict__s; to simplify things, we only do that the first time; once those static types have a separate __dict__ per interpreter, we'll do it every time
* we now clear the module state when the module is destroyed (before, we were leaking everything in _datetime_global_state)
The free-threaded build currently immortalizes objects that use deferred
reference counting (see gh-117783). This typically happens once the
first non-main thread is created, but the behavior can be suppressed for
tests, in subinterpreters, or during a compile() call.
This fixes a race condition involving the tracking of whether the
behavior is suppressed.
Remove the delegation of `int` to the `__trunc__` special method: `int` will now only delegate to `__int__` and `__index__` (in that order). `__trunc__` continues to exist, but its sole purpose is to support `math.trunc`.
---------
Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Support non-dict globals in LOAD_FROM_DICT_OR_GLOBALS
The implementation basically copies LOAD_GLOBAL. Possibly it could be deduplicated,
but that seems like it may get hairy since the two operations have different operands.
This is important to fix in 3.14 for PEP 649, but it's a bug in earlier versions too,
and we should backport to 3.13 and 3.12 if possible.
Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>
Co-authored-by: Erlend E. Aasland <erlend.aasland@protonmail.com>
Co-authored-by: Victor Stinner <vstinner@python.org>
Structure layout, and especially bitfields, sometimes resulted in clearly
wrong behaviour like overlapping fields. This fixes those cases.
Co-authored-by: Gregory P. Smith <gps@python.org>
Co-authored-by: Petr Viktorin <encukou@gmail.com>
`drop_gil()` assumes that its caller is attached, which means that the current
thread holds the GIL if and only if the GIL is enabled, and the enabled-state
of the GIL won't change. This isn't true, though, because `detach_thread()`
calls `_PyEval_ReleaseLock()` after detaching and
`_PyThreadState_DeleteCurrent()` calls it after removing the current thread
from consideration for stop-the-world requests (effectively detaching it).
Fix this by remembering whether or not a thread acquired the GIL when it last
attached, in `PyThreadState._status.holds_gil`, and checking this in
`drop_gil()` instead of `gil->enabled`.
This fixes a crash in `test_multiprocessing_pool_circular_import()`, so I've
reenabled it.
Add `Py_BEGIN_CRITICAL_SECTION_SEQUENCE_FAST` and
`Py_END_CRITICAL_SECTION_SEQUENCE_FAST` macros and update `str.join` to use
them. Also add a regression test that would crash reliably without this
patch.
_PyArg_Parser holds static global data generated for modules by Argument Clinic. The _PyArg_Parser.kwtuple field is a tuple object, even though it's stored within a static global. In some cases the tuple is statically allocated and thus it's okay that it gets shared by multiple interpreters. However, in other cases the tuple is set lazily, allocated from the heap using the active interpreter at the point the tuple is needed.
This is a problem once that interpreter is destroyed, since _PyArg_Parser.kwtuple becomes a dangling pointer, leading to crashes. It isn't a problem if the tuple is allocated under the main interpreter, since its lifetime is bound to the lifetime of the runtime. The solution here is to temporarily switch to the main interpreter. The alternative would be to always statically allocate the tuple.
This change also fixes a bug where only the most recent parser was added to the global linked list.
The PEP 649 implementation will require a way to load NotImplementedError
from the bytecode. @markshannon suggested implementing this by converting
LOAD_ASSERTION_ERROR into a more general mechanism for loading constants.
This PR adds this new opcode. I will work on the rest of the implementation
of the PEP separately.
Co-authored-by: Irit Katriel <1055913+iritkatriel@users.noreply.github.com>