Structure layout, and especially bitfields, sometimes resulted in clearly
wrong behaviour, such as overlapping fields. This change fixes those cases.
Co-authored-by: Gregory P. Smith <gps@python.org>
Co-authored-by: Petr Viktorin <encukou@gmail.com>
When _Py_SINGLETON() is used, Argument Clinic now adds an
explicit "pycore_runtime.h" include to get the macro. Previously, the
macro might or might not have been pulled in indirectly via another include.
This is minimal support. Subinterpreters are not supported yet. That will be addressed in a later change.
Co-authored-by: Eric Snow <ericsnowcurrently@gmail.com>
The fix in gh-119561 introduced an assertion that doesn't hold true if any of the three new test extension modules are loaded more than once. This is fine normally but breaks if the new test_check_state_first() is run more than once, which happens for refleak checking and with the regrtest --forever flag. We fix that here by clearing each of the three modules after loading them. We also tweak a check in _modules_by_index_check().
The assertion was added in gh-118532 but was based on the invalid assumption that PyState_FindModule() would only be called with an already-initialized module def. I've added a test to make sure we don't make that assumption again.
Add socket.VMADDR_CID_LOCAL constant.
Fix ThreadedVSOCKSocketStreamTest: if get_cid() returns the host
address or the "any" address, use the local communication address
(loopback): VMADDR_CID_LOCAL.
On Linux 6.9, apparently, the /dev/vsock device is now available but
get_cid() returns VMADDR_CID_ANY (-1).
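A minimal sketch of using the new constant (Linux only; assumes AF_VSOCK support and a VSOCK listener on an arbitrary example port):

```python
import socket

# Talk to a listener on the same host over the VSOCK loopback CID instead of
# the host/"any" CID. Port 1234 is an arbitrary example value.
if hasattr(socket, "VMADDR_CID_LOCAL"):
    with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s:
        s.connect((socket.VMADDR_CID_LOCAL, 1234))
        s.sendall(b"ping")
```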
Add a function that inlines PyObject_GetTypeData and skips
type-checking, so it doesn't need access to the CType_Type object.
This will break if the memory layout changes, but should
be an acceptable solution to enable ctypes in subinterpreters in
Python 3.13.
Mark _ctypes as safe for multiple interpreters
Co-authored-by: neonene <53406459+neonene@users.noreply.github.com>
As reported in #117847 and #115366, an unpaired backtick in a docstring
tends to confuse e.g. Sphinx running on subclasses of standard library
objects, and the typographic style of using a backtick as an opening
quote is no longer in favor. Convert almost all uses of the form
The variable `foo' should do xyz
to
The variable 'foo' should do xyz
and also fix up miscellaneous other unpaired backticks (extraneous /
missing characters).
No functional change is intended here other than in human-readable
docstrings.
_PyArg_Parser holds static global data generated for modules by Argument Clinic. The _PyArg_Parser.kwtuple field is a tuple object, even though it's stored within a static global. In some cases the tuple is statically allocated, and thus it's okay that it gets shared by multiple interpreters. However, in other cases the tuple is set lazily, allocated from the heap using the active interpreter at the point the tuple is needed.
This is a problem once that interpreter is destroyed, since _PyArg_Parser.kwtuple becomes a dangling pointer, leading to crashes. It isn't a problem if the tuple is allocated under the main interpreter, since its lifetime is bound to the lifetime of the runtime. The solution here is to temporarily switch to the main interpreter. The alternative would be to always statically allocate the tuple.
This change also fixes a bug where only the most recent parser was added to the global linked list.
Follow-up of gh-101693. The previous DeprecationWarning is replaced with
raising sqlite3.ProgrammingError.
Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>
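Assuming gh-101693 concerns named placeholders combined with a sequence of parameters, a minimal sketch of the new behaviour:

```python
import sqlite3

con = sqlite3.connect(":memory:")
try:
    # A named placeholder with a sequence of parameters now raises
    # sqlite3.ProgrammingError instead of emitting a DeprecationWarning.
    con.execute("SELECT :value", (42,))
except sqlite3.ProgrammingError as exc:
    print(exc)
con.close()
```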
* shm_unlink() parameter becomes positional-only.
* shm_open() first parameter (path) becomes positional-only,
the two following parameters remain positional-or-keyword.
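A minimal sketch of the resulting calling convention, using the internal _posixshmem module that backs multiprocessing.shared_memory (the name "/example" and the flags/mode keyword spellings are illustrative assumptions):

```python
import os
import _posixshmem  # internal module behind multiprocessing.shared_memory

# The path must now be passed positionally; the remaining parameters may
# still be passed by keyword.
fd = _posixshmem.shm_open("/example", os.O_CREAT | os.O_RDWR, mode=0o600)
os.close(fd)
_posixshmem.shm_unlink("/example")
```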
Callbacks registered in the tkinter module now receive arguments as
various Python objects (int, float, bytes, tuple), not just str.
To restore the previous behavior, set the tkinter module global wantobjects to 1
before creating the Tk object, or call the wantobjects() method of the Tk object
with argument 1.
Calling it with argument 2 restores the current default behavior.
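A minimal sketch of switching between the two behaviours (the per-instance call goes through the underlying Tcl interpreter object):

```python
import tkinter

# Opt back into the old behaviour (callback arguments arrive as str) by
# setting the module global before the Tk object is created ...
tkinter.wantobjects = 1
root = tkinter.Tk()

# ... or toggle it afterwards: 1 selects the old behaviour, 2 the new default.
root.tk.wantobjects(2)
```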
Fix an edge case in `binascii.a2b_base64` strict mode, where
excessive padding was not detected when no padding is necessary.
Co-authored-by: Terry Jan Reedy <tjreedy@udel.edu>
Co-authored-by: Pieter Eendebak <pieter.eendebak@gmail.com>
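A minimal sketch of the fixed edge case (8 base64 characters decode to exactly 6 bytes, so no padding is required):

```python
import binascii

# Valid: no padding is needed for a 6-byte payload.
binascii.a2b_base64(b"aGVsbG8h", strict_mode=True)       # b'hello!'

# Padding where none is necessary is now rejected in strict mode.
try:
    binascii.a2b_base64(b"aGVsbG8h=", strict_mode=True)
except binascii.Error as exc:
    print(exc)
```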
We already intern and immortalize most string constants. In the
free-threaded build, other constants can be a source of reference count
contention because they are shared by all threads running the same code
objects.
* Add _PyType_LookupRef and use incref before setting an attribute on a type.
* Make setting an attribute on a class and signaling that the type was modified atomic.
* Avoid re-entrancy exposing the type cache in an inconsistent state by decref'ing only after the type is updated.
This is an experimental feature, for internal use.
Setting tkinter._debug = True before creating the root window enables
printing every executed Tcl command (or a Tcl command equivalent to the
used Tcl C API).
This will help convert a Tkinter example into a Tcl script to check
whether an issue is caused by Tkinter or exists in the underlying Tcl/Tk
library.
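A minimal sketch of the workflow described above (internal and experimental, so the attribute may change):

```python
import tkinter

tkinter._debug = True   # must be set before the root window is created
root = tkinter.Tk()
root.title("demo")      # the equivalent Tcl command is printed
```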
Replace the internal "Unchecked" functions with the new public Raw functions:
* _PyTime_PerfCounterUnchecked() -> PyTime_PerfCounterRaw()
* _PyTime_TimeUnchecked() -> PyTime_TimeRaw()
* _PyTime_MonotonicUnchecked() -> PyTime_MonotonicRaw()
Remove internal functions:
* _PyTime_PerfCounterUnchecked()
* _PyTime_TimeUnchecked()
* _PyTime_MonotonicUnchecked()
This PR adds the ability to enable the GIL if it was disabled at
interpreter startup, and modifies the multi-phase module initialization
path to enable the GIL when loading a module, unless that module's spec
includes a slot indicating it can run safely without the GIL.
PEP 703 called the constant for the slot `Py_mod_gil_not_used`; I went
with `Py_MOD_GIL_NOT_USED` for consistency with gh-104148.
A warning will be issued up to once per interpreter for the first
GIL-using module that is loaded. If `-v` is given, a shorter message
will be printed to stderr every time a GIL-using module is loaded
(including the first one that issues a warning).
inspect.signature() now supports references to module globals in
parameter defaults on methods in extension modules. Previously this was
only supported for functions. The workaround was to specify the fully
qualified name, including the module name.
Add "Raw" variant of PyTime functions:
* PyTime_MonotonicRaw()
* PyTime_PerfCounterRaw()
* PyTime_TimeRaw()
Changes:
* Add documentation and tests. Tests release the GIL while calling
raw clock functions.
* py_get_system_clock() and py_get_monotonic_clock() now check that
  the GIL is held by the caller if raise_exc is non-zero.
* Reimplement "Unchecked" functions with raw clock functions.
Co-authored-by: Petr Viktorin <encukou@gmail.com>
The code for Tier 2 is now only compiled when configured
with `--enable-experimental-jit[=yes|interpreter]`.
We drop support for `PYTHON_UOPS` and `-Xuops`,
but you can disable the interpreter or JIT
at runtime by setting `PYTHON_JIT=0`.
You can also build it without enabling it by default
using `--enable-experimental-jit=yes-off`;
enable with `PYTHON_JIT=1`.
On Windows, the `build.bat` script supports
`--experimental-jit`, `--experimental-jit-off`,
`--experimental-interpreter`.
In the C code, `_Py_JIT` is defined as before
when the JIT is enabled; the new variable
`_Py_TIER2` is defined when the JIT *or* the
interpreter is enabled. It is actually a bitmask:
1: JIT; 2: default-off; 4: interpreter.
Avoid detaching thread state when stopping the world. When re-attaching
the thread state, the thread would attempt to resume the top-most
critical section, which might now be held by a thread paused for our
stop-the-world request.
Deferred reference counting is not fully implemented yet. As a temporary
measure, we immortalize objects that would use deferred reference
counting to avoid multi-threaded scaling bottlenecks.
This is only performed in the free-threaded build once the first
non-main thread is started. Additionally, some tests, including refleak
tests, suppress this behavior.
* Allow specifying the signature of custom callable instances of an extension
type via the __text_signature__ attribute.
* Specify signatures of operator.attrgetter, operator.itemgetter, and
operator.methodcaller instances (see the sketch below).
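A minimal sketch of the effect; the signature text in the comments is illustrative:

```python
import inspect
import operator

# Instances of these extension types now expose an inspectable signature.
print(inspect.signature(operator.itemgetter(1)))           # e.g. (obj, /)
print(inspect.signature(operator.attrgetter("name")))      # e.g. (obj, /)
print(inspect.signature(operator.methodcaller("upper")))   # e.g. (obj, /)
```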
The behavior of fileno() after fclose() is undefined, but it is the only
practical way to check whether the file was closed.
Only test this on the known platforms (Linux, Windows, macOS), where we
already tested that it works.
BufferedWriter() was buffering writes that are exactly the same size as the buffer. Reading/writing in blocks of exactly the buffer size is a very common case.
It's pointless to copy a full buffer: it costs an extra memory copy, and the full buffer will have to be written out on the next call anyway.
Co-authored-by: rmorotti <romain.morotti@man.com>
Older libedit versions (like Apple's) use a different type signature
for rl_startup_hook and rl_pre_input_hook. Add a configure check to
determine which signature is accepted by introducing the
Py_RL_STARTUP_HOOK_TAKES_ARGS macro in pyconfig.h.
This is similar to the situation with threading._DummyThread. The methods (incl. __del__()) of interpreters.Interpreter objects must be careful with interpreters not created by interpreters.create(). The simplest thing to start with is to disable any method that modifies or runs in the interpreter. As part of this, the runtime keeps track of where an interpreter was created. We also handle interpreter "refcounts" properly.
Detect libcrypto BLAKE2, Shake, SHA3, and Truncated-SHA512 support at hashlib build time
## BLAKE2
While OpenSSL supports both "b" and "s" variants of the BLAKE2 hash
function, other cryptographic libraries may lack support for one or both
of the variants. This commit modifies `hashlib`'s C code to detect
whether or not the linked libcrypto supports each BLAKE2 variant, and
elides references to each variant's NID accordingly. In cases where the
underlying libcrypto doesn't fully support BLAKE2, CPython's
`./configure` script can be given the following flag to use CPython's
built-in BLAKE2 implementation: `--with-builtin-hashlib-hashes=blake2`.
## SHA3, Shake, & truncated SHA512.
Detect BLAKE2, SHA3, Shake, & truncated SHA512 support in the
OpenSSL-ish libcrypto library at build time. This helps allow hashlib's
`_hashopenssl` to be used with libraries that do not support every
algorithm that upstream OpenSSL does, such as AWS-LC & BoringSSL.
Co-authored-by: Gregory P. Smith [Google LLC] <greg@krypto.org>
Moves the validation for invalid years in the C implementation of the `datetime` module into a common location between `fromisoformat` and `fromisocalendar`, which improves the error message and fixes a failed assertion when parsing invalid ISO 8601 years using one of the "ISO weeks" formats.
---------
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
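Illustrative inputs for the affected paths (year 0 is outside datetime's supported range; the exact error messages may differ):

```python
from datetime import datetime

# Out-of-range ISO years now raise ValueError with an improved message
# instead of tripping a C assertion.
for call in (lambda: datetime.fromisocalendar(0, 1, 1),
             lambda: datetime.fromisoformat("0000-W01-1")):
    try:
        call()
    except ValueError as exc:
        print(exc)
```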
This prevents external cancellations of a task group's parent task from
being dropped when an internal cancellation happens at the same time.
Also strengthen the semantics of uncancel() to clear self._must_cancel
when the cancellation count reaches zero.
Co-Authored-By: Tin Tvrtković <tinchester@gmail.com>
Co-Authored-By: Arthur Tacca
Most mutable data is protected by a striped lock that is keyed on the
referenced object's address. The weakref's hash is protected using the
weakref's per-object lock.
Note that this only affects free-threaded builds. Apart from some minor
refactoring, the added code is all either gated by `ifdef`s or is a no-op
(e.g. `Py_BEGIN_CRITICAL_SECTION`).
Introduce a unified 16-bit backoff counter type (``_Py_BackoffCounter``),
shared between the Tier 1 adaptive specializer and the Tier 2 optimizer. The
API used for adaptive specialization counters is changed but the behavior is
(supposed to be) identical.
The behavior of the Tier 2 counters is changed:
- There are no longer dynamic thresholds (we never varied these).
- All counters now use the same exponential backoff.
- The counter for ``JUMP_BACKWARD`` starts counting down from 16.
- The ``temperature`` in side exits starts counting down from 64.
This merges all `_CHECK_STACK_SPACE` uops in a trace into a single `_CHECK_STACK_SPACE_OPERAND` uop that checks whether there is enough stack space for all calls included in the entire trace.
I had meant to switch everything to InterpreterError when I added it a while back. At the time I missed a few key spots.
As part of this, I've added print-the-exception to _PyXI_InitTypes() and fixed an error case in `_PyStaticType_InitBuiltin()`.
On Linux >= 2.6.36 with glibc < 2.27, `getcwd()` can return a relative
pathname starting with '(unreachable)'. We detect this and fail with
ENOENT, matching new glibc behaviour.
Co-authored-by: Petr Viktorin <encukou@gmail.com>
These helpers make it easier to customize and inspect the config used to initialize interpreters. This is especially valuable in our tests. I found inspiration from the PyConfig API for the PyInterpreterConfig dict conversion stuff. As part of this PR I've also added a bunch of tests.
Integrates the following ctypes meta tp slot functions:
* `CDataType_traverse()` into `CType_Type_traverse()`.
* `CDataType_clear()` into `CType_Type_clear()`.
* `CDataType_dealloc()` into `CType_Type_dealloc()`.
* `CDataType_repeat()` into `CType_Type_repeat()`.
Remove extra self DECREF on ssl "no ciphers" error path.
This doesn't come up in practice because nobody links against a broken
OpenSSL library that provides nothing.
Python 3.10 changed from using SSL_write() and SSL_read() to SSL_write_ex() and
SSL_read_ex(), but did not update handling of the return value.
Change error handling so that the return value is not examined.
OSError (not EOF) is now returned when retval is 0.
According to the *recent* man pages (in OpenSSL 3.0 and 1.1.1) of all
functions for which we call PySSL_SetError, their return value should
be used to determine whether an error happened (i.e. whether
PySSL_SetError should be called), but not what kind of error happened
(so PySSL_SetError shouldn't need retval). To get the error,
we need to use SSL_get_error.
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Co-authored-by: Petr Viktorin <encukou@gmail.com>
When I added _PyInterpreterState_IsRunningMain() and friends last year, I tried to accommodate applications that embed Python but don't call _PyInterpreterState_SetRunningMain() (not that they're expected to). That mostly worked fine until my recent changes in gh-117049, where the subtleties with the fallback code led to failures; the change ended up breaking test_tools.test_freeze, which exercises a basic embedding situation.
The simplest fix is to drop the fallback code I originally added to _PyInterpreterState_IsRunningMain() (and later to _PyThreadState_IsRunningMain()). I've kept the fallback in the _xxsubinterpreters module though. I've also updated Py_FrozenMain() to call _PyInterpreterState_SetRunningMain().
Split `_PyThreadState_DeleteExcept` into two functions:
- `_PyThreadState_RemoveExcept` removes all thread states other than one
passed as an argument. It returns the removed thread states as a
linked list.
- `_PyThreadState_DeleteList` deletes those dead thread states. It may
call destructors, so we want to "start the world" before calling
`_PyThreadState_DeleteList` to avoid potential deadlocks.
I added it quite a while ago as a strategy for managing interpreter lifetimes relative to the PEP 554 (now 734) implementation. Relatively recently I refactored that implementation to no longer rely on InterpreterID objects. Thus now I'm removing it.
Add Py_GetConstant() and Py_GetConstantBorrowed() functions.
In the limited C API version 3.13, getting Py_None, Py_False,
Py_True, Py_Ellipsis and Py_NotImplemented singletons is now
implemented as function calls at the stable ABI level to hide
implementation details. Getting these constants still returns borrowed
references.
Add _testlimitedcapi/object.c and test_capi/test_object.py to test
Py_GetConstant() and Py_GetConstantBorrowed() functions.
Mostly we unify the two different implementations of the conversion code (from PyObject * to int64_t). We also drop the PyArg_ParseTuple()-style converter function, as well as rename and move PyInterpreterID_LookUp().
This changes the free-threaded build to perform a stop-the-world pause
before deleting other thread states when forking and during shutdown.
This fixes some crashes when using multiprocessing and during shutdown
when running with `PYTHON_GIL=0`.
This also changes `PyOS_BeforeFork` to acquire the runtime lock
(i.e., `HEAD_LOCK(&_PyRuntime)`) before forking to ensure that data
protected by the runtime lock (and not just the GIL or stop-the-world)
is in a consistent state before forking.
Before this change, ctypes classes used a custom dict subclass, `StgDict`,
as their `tp_dict`. This acts like a regular dict but also includes extra information
about the type.
This replaces `StgDict` with `StgInfo`, a C struct on the type, accessed by
`PyObject_GetTypeData()` (PEP-697).
All usage of `StgDict` (mainly variables named `stgdict`, `dict`, `edict` etc.) is
converted to `StgInfo` (named `stginfo`, `info`, `einfo`, etc.).
Where the dict is actually used for class attributes (as a regular PyDict), it's now
called `attrdict`.
This change -- not overriding `tp_dict` -- is made to make me comfortable with
the next part of this PR: moving the initialization logic from `tp_new` to `tp_init`.
The `StgInfo` is set up in `__init__` of each class, with a guard that prevents
calling `__init__` more than once. Note that abstract classes (like `Array` or
`Structure`) are created using `PyType_FromMetaclass` and do not have
`__init__` called.
Previously, this was done in `__new__`, which also wasn't called for abstract
classes.
Since `__init__` can be called from Python code or skipped, there is a tested
guard to ensure `StgInfo` is initialized exactly once before it's used.
Co-authored-by: neonene <53406459+neonene@users.noreply.github.com>
Co-authored-by: Erlend E. Aasland <erlend.aasland@protonmail.com>
Starting in Python 3.12, we prevented calling fork() and starting new threads
during interpreter finalization (shutdown). This has led to a number of
regressions and flaky tests. We should not prevent starting new threads
(or `fork()`) until all non-daemon threads exit and finalization starts in
earnest.
This changes the checks to use `_PyInterpreterState_GetFinalizing(interp)`,
which is set immediately before terminating non-daemon threads.
* Split long.c tests of _testcapi into two parts: limited C API tests
in _testlimitedcapi and non-limited C API tests in _testcapi.
* Move testcapi_long.h from Modules/_testcapi/ to
Modules/_testlimitedcapi/.
* Add MODULE__TESTLIMITEDCAPI_DEPS to Makefile.pre.in.
Split unicode.c tests of _testcapi into two parts: limited C API
tests in _testlimitedcapi and non-limited C API tests in _testcapi.
Update test_codecs.
Split abstract.c and float.c tests of _testcapi into two parts:
limited C API tests in _testlimitedcapi and non-limited C API tests
in _testcapi.
Update test_bytes and test_class.
This includes adding what should be a relatively temporary
`Modules/_decimal/windows/mpdecimal.h` shim to choose between `mpdecimal32vc.h`
or `mpdecimal64vc.h` based on which of `CONFIG_64` or `CONFIG_32` is defined.
Even though it has no internal references to Python objects, it still
has a reference to its type by virtue of being a heap type. We need
to provide a traverse function that visits the type, but we do not
need to provide a clear function.
There is a race between when `Thread._tstate_lock` is released[^1] in `Thread._wait_for_tstate_lock()`
and when `Thread._stop()` asserts[^2] that it is unlocked. Consider the following execution
involving threads A, B, and C:
1. A starts.
2. B joins A, blocking on its `_tstate_lock`.
3. C joins A, blocking on its `_tstate_lock`.
4. A finishes and releases its `_tstate_lock`.
5. B acquires A's `_tstate_lock` in `_wait_for_tstate_lock()`, releases it, but is swapped
out before calling `_stop()`.
6. C is scheduled, acquires A's `_tstate_lock` in `_wait_for_tstate_lock()` but is swapped
out before releasing it.
7. B is scheduled, calls `_stop()`, which asserts that A's `_tstate_lock` is not held.
However, C holds it, so the assertion fails.
The race can be reproduced[^3] by inserting sleeps at the appropriate points in
the threading code. To do so, run the `repro_join_race.py` from the linked repo.
There are two main parts to this PR:
1. `_tstate_lock` is replaced with an event that is attached to `PyThreadState`.
The event is set by the runtime prior to the thread being cleared (in the same
place that `_tstate_lock` was released). `Thread.join()` blocks waiting for the
event to be set.
2. `_PyInterpreterState_WaitForThreads()` provides the ability to wait for all
non-daemon threads to exit. To do so, an `is_daemon` predicate was added to
`PyThreadState`. This field is set each time a thread is created. `threading._shutdown()`
now calls into `_PyInterpreterState_WaitForThreads()` instead of waiting on
`_tstate_lock`s.
[^1]: 441affc9e7/Lib/threading.py (L1201)
[^2]: 441affc9e7/Lib/threading.py (L1115)
[^3]: 8194653279
---------
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Antoine Pitrou <antoine@python.org>
Use the NtQueryInformationProcess system call to efficiently retrieve the parent process ID in a single step, rather than using the process snapshots API which retrieves large amounts of unnecessary information and is more prone to failure (since it makes heap allocations).
Includes a fallback to the original win32_getppid implementation in case the unstable API appears to return strange results.
The fildes converter of Argument Clinic now always calls
PyObject_AsFileDescriptor(), not only for the limited C API.
The _PyLong_FileDescriptor_Converter() converter stays as a fallback
when PyObject_AsFileDescriptor() cannot be used.
Fix os.timerfd_settime(): properly report exceptions on
_PyTime_FromSecondsDouble() failure.
_PyTime_FromSecondsDouble() now returns 0 on success, or sets an
exception and returns -1 on error.
No longer export _PyTime_FromSecondsDouble().
Move the following files from Modules/_testcapi/ to
Modules/_testlimitedcapi/:
* bytearray.c
* bytes.c
* pyos.c
* sys.c
Changes:
* Replace PyBytes_AS_STRING() with PyBytes_AsString().
* Replace PyBytes_GET_SIZE() with PyBytes_Size().
* Update related test_capi tests.
* Copy Modules/_testcapi/util.h to Modules/_testlimitedcapi/util.h.
Add a new C extension "_testlimitedcapi" which is only built with the
limited C API.
Move heaptype_relative.c and vectorcall_limited.c from
Modules/_testcapi/ to Modules/_testlimitedcapi/.
* configure: add _testlimitedcapi test extension.
* Update generate_stdlib_module_names.py.
* Update make check-c-globals.
Co-authored-by: Erlend E. Aasland <erlend.aasland@protonmail.com>
Previously, the `locked` field was set after releasing the lock. This reverses
the order so that the `locked` field is set while the lock is still held.
There is still one thread-safety issue where `locked` is checked prior to
releasing the lock; however, in practice that will only be an issue when
unlocking the lock is contended, which should be rare.
The problem manifested when the .py module got reloaded and the corresponding extension module didn't. The .py module registers types with the extension and the extension was not allowing that to happen more than once. The solution: let it happen more than once.
If available, enable -fstrict-overflow for libmpdec. Also
shut off false positive warnings (-Warray-bounds).
The latter was backported from mpdecimal-4.0.0.
Make `_thread.ThreadHandle` thread-safe in free-threaded builds
We protect the mutable state of `ThreadHandle` using a `_PyOnceFlag`.
Concurrent operations (i.e. `join` or `detach`) on `ThreadHandle` block
until it is their turn to execute or an earlier operation succeeds.
Once an operation has been applied successfully all future operations
complete immediately.
The `join()` method is now idempotent. It may be called multiple times
but the underlying OS thread will only be joined once. After `join()`
succeeds, any future calls to `join()` will succeed immediately.
The internal thread handle `detach()` method has been removed.
This code decrefs `qidobj` twice in some paths. Since `qidobj` isn't used after
the first `Py_DECREF()`, remove the others, and replace the `Py_DECREF()` with
`Py_CLEAR()` to make it clear that the variable is dead.
With this fix, `python -mtest test_interpreters -R 3:3 -mtest_queues` no longer
fails with `_Py_NegativeRefcount: Assertion failed: object has negative ref
count`.
Allow controlling Expat >=2.6.0 reparse deferral (CVE-2023-52425) by adding five new methods:
- `xml.etree.ElementTree.XMLParser.flush`
- `xml.etree.ElementTree.XMLPullParser.flush`
- `xml.parsers.expat.xmlparser.GetReparseDeferralEnabled`
- `xml.parsers.expat.xmlparser.SetReparseDeferralEnabled`
- `xml.sax.expatreader.ExpatParser.flush`
Based on the "flush" idea from https://github.com/python/cpython/pull/115138#issuecomment-1932444270 .
### Notes
- Please treat as a security fix related to CVE-2023-52425.
Includes code suggested-by: Snild Dolkow <snild@sony.com>
and by core dev Serhiy Storchaka.
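A minimal sketch of the new XMLPullParser.flush() method (the other new flush() methods behave analogously):

```python
import xml.etree.ElementTree as ET

parser = ET.XMLPullParser(events=("start", "end"))
parser.feed("<root><item>1</item>")
# With Expat >= 2.6.0, small feeds may be deferred; flush() forces processing
# of the buffered input so the events below become available immediately.
parser.flush()
for event, elem in parser.read_events():
    print(event, elem.tag)
```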
This change is part of the work on PEP-738: Adding Android as a
supported platform.
* Remove the "1.0" suffix from libpython's filename on Android, which
would prevent Gradle from packaging it into an app.
* Simplify the build command in the Makefile so that libpython always
  gets given an SONAME with the `-Wl,-h` argument, even if the SONAME is
  identical to the actual filename.
* Disable a number of functions on Android which can be compiled and
linked against, but always fail at runtime. As a result, the native
_multiprocessing module is no longer built for Android.
* gh-115390 (bee7bb331) added some pre-determined results to the
configure script for things that can't be autodetected when
cross-compiling; this change adds Android to these where appropriate.
* Add a couple more pre-determined results for Android, and make them
cover iOS as well. This means the --enable-ipv6 configure option will
no longer be required on either platform.
This brings the code under test.support.interpreters, and the corresponding extension modules, in line with recent updates to PEP 734.
(Note: PEP 734 has not been accepted at this time. However, we are using an internal copy of the implementation in the test suite to exercise the existing subinterpreters feature.)
* Rename _Py_UOpsAbstractInterpContext to _Py_UOpsContext and _Py_UOpsSymType to _Py_UopsSymbol.
* #define shortened form of _Py_uop_... names for improved readability.
Part of the work on PEP 738: Adding Android as a supported platform.
* Rename the LIBPYTHON variable to MODULE_LDFLAGS, to more accurately
reflect its purpose.
* Edit makesetup to use MODULE_LDFLAGS when linking extension modules.
* Edit the Makefile so that extension modules depend on libpython on
Android and Cygwin.
* Restore `-fPIC` on Android. It was removed several years ago with a
note that the toolchain used it automatically, but this is no longer
the case. Omitting it causes all linker commands to fail with an error
like `relocation R_AARCH64_ADR_PREL_PG_HI21 cannot be used against
symbol '_Py_FalseStruct'; recompile with -fPIC`.
PyTime_t no longer uses an arbitrary unit; it's always a number of
nanoseconds (64-bit signed integer).
* Rename _PyTime_FromNanosecondsObject() to _PyTime_FromLong().
* Rename _PyTime_AsNanosecondsObject() to _PyTime_AsLong().
* Remove pytime_from_nanoseconds().
* Remove pytime_as_nanoseconds().
* Remove _PyTime_FromNanoseconds().
Remove references to the old names _PyTime_MIN
and _PyTime_MAX, now that PyTime_MIN and
PyTime_MAX are public.
Replace also _PyTime_MIN with PyTime_MIN.
* Rename `_testinternalcapi.get_{uop,counter}_optimizer` to `new_*_optimizer`
* Use `_PyUOpName()` instead of `_PyOpcode_uop_name[]`
* Add `target` to executor iterator items -- `list(ex)` now returns `(opcode, oparg, target, operand)` quadruples
* Add executor methods `get_opcode()` and `get_oparg()` to get `vmdata.opcode`, `vmdata.oparg`
* Define a helper for printing uops, and unify various places where they are printed
* Add a hack to summarize_stats.py to fix legacy uop names (e.g. `POP_TOP` -> `_POP_TOP`)
* Define helpers in `test_opt.py` for accessing the set or list of opnames of an executor
The <pycore_time.h> include is no longer needed to get the PyTime_t type
in internal header files; this type is now provided by the <Python.h>
include. Add <pycore_time.h> includes to C files instead.
Restore support for this combination, which was disabled in gh-113796.
csv.writer() now quotes empty fields if the delimiter is a space and
skipinitialspace is true, and raises an exception if quoting is not possible.
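A minimal sketch of the restored behaviour:

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf, delimiter=" ", skipinitialspace=True,
                    quoting=csv.QUOTE_MINIMAL)
writer.writerow(["a", "", "b"])
print(buf.getvalue())   # the empty field is written quoted: a "" b

# With QUOTE_NONE the empty field cannot be represented, so csv.Error is raised.
writer = csv.writer(io.StringIO(), delimiter=" ", skipinitialspace=True,
                    quoting=csv.QUOTE_NONE)
try:
    writer.writerow(["a", "", "b"])
except csv.Error as exc:
    print(exc)
```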
This change adds an `eval_breaker` field to `PyThreadState`. The primary
motivation is for performance in free-threaded builds: with thread-local eval
breakers, we can stop a specific thread (e.g., for an async exception) without
interrupting other threads.
The source of truth for the global instrumentation version is stored in the
`instrumentation_version` field in PyInterpreterState. Threads usually read the
version from their local `eval_breaker`, where it continues to be colocated
with the eval breaker bits.
lseek() always returns 0 for character pseudo-devices like
`/dev/urandom` (for other non-regular files, e.g. `/dev/stdin`, it
always returns -1, to which CPython reacts by raising appropriate
exceptions). They are thus technically seekable despite not having seek
semantics.
When calling read() on e.g. an instance of `io.BufferedReader` that
wraps such a file, `BufferedReader` reads ahead, filling its buffer,
creating a discrepancy between the number of bytes read and the internal
`tell()` always returning 0, which previously resulted in e.g.
`BufferedReader.tell()` or `BufferedReader.seek()` being able to return
positions < 0 even though these are supposed to be always >= 0.
Invariably keep the return value non-negative by returning
max(former_return_value, 0) instead, and add some corresponding tests.
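A minimal sketch of the guarantee (assumes a Linux-style /dev/urandom character device):

```python
# After a buffered read-ahead on a character pseudo-device whose lseek()
# reports 0, the buffered position is now clamped so it never goes negative.
with open("/dev/urandom", "rb") as f:
    f.read(16)
    assert f.tell() >= 0
    assert f.seek(0, 1) >= 0   # relative seek also stays non-negative
```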
This adds a safe memory reclamation scheme based on FreeBSD's "GUS" and
quiescent state based reclamation (QSBR). The API provides a mechanism
for callers to detect when it is safe to free memory that may be
concurrently accessed by readers.
The ID of the owning thread (`rlock_owner`) may be accessed by
multiple threads without holding the underlying lock; relaxed
atomics are used in place of the previous loads/stores.
The number of times that the lock has been acquired (`rlock_count`)
is only ever accessed by the thread that holds the lock; we do not
need to use atomics to access it.
The GC keeps track of the number of allocations (less deallocations)
since the last GC. This buffers the count in thread-local state and uses
atomic operations to modify the per-interpreter count. The thread-local
buffering avoids contention on shared state.
A consequence is that the GC scheduling is not as precise, so
"test_sneaky_frame_object" is skipped because it requires that the GC be
run exactly after allocating a frame object.
* gh-114572: Fix locking in cert_store_stats and get_ca_certs
cert_store_stats and get_ca_certs query the SSLContext's X509_STORE with
X509_STORE_get0_objects, but reading the result requires a lock. See
https://github.com/openssl/openssl/pull/23224 for details.
Instead, use X509_STORE_get1_objects, newly added in that PR.
X509_STORE_get1_objects does not exist in current OpenSSLs, but we can
polyfill it with X509_STORE_lock and X509_STORE_unlock.
* Work around const-correctness problem
* Add missing X509_STORE_get1_objects failure check
* Add blurb
Makes _PyType_Lookup thread safe, including:
* thread safety of the underlying cache;
* making mutation of the MRO and type members thread safe.
Also, _PyType_GetMRO and _PyType_GetBases currently return borrowed references, which aren't safe.
This adds `Py_XBEGIN_CRITICAL_SECTION` and
`Py_XEND_CRITICAL_SECTION`, which accept a possibly NULL object as an
argument. If the argument is NULL, then nothing is locked or unlocked.
Otherwise, they behave like `Py_BEGIN/END_CRITICAL_SECTION`.
Use critical sections to make deque methods that operate on mutable
state thread-safe when the GIL is disabled. This is mostly accomplished
by using the @critical_section Argument Clinic directive, though there
are a few places where this was not possible and critical sections had
to be manually acquired/released.
Add the PythonFinalizationError exception. This exception, derived from
RuntimeError, is raised when an operation is blocked during Python
finalization.
The following functions now raise PythonFinalizationError, instead of
RuntimeError:
* _thread.start_new_thread()
* subprocess.Popen
* os.fork()
* os.fork1()
* os.forkpty()
Moreover, the _winapi.Overlapped finalizer now logs an unraisable
PythonFinalizationError, instead of an unraisable RuntimeError.
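A minimal sketch of how callers can react; spawn_worker() is a hypothetical placeholder for code that may end up running during interpreter shutdown:

```python
import threading

def spawn_worker():
    # Hypothetical helper: thread creation during finalization now raises
    # PythonFinalizationError rather than a plain RuntimeError.
    threading.Thread(target=print, args=("hi",)).start()

try:
    spawn_worker()
except PythonFinalizationError:
    # The interpreter is shutting down; skip the work. Because the exception
    # derives from RuntimeError, existing "except RuntimeError" handlers
    # continue to work unchanged.
    pass
```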
For the most part, these changes make it substantially easier to backport subinterpreter-related code to 3.12, especially the related modules (e.g. _xxsubinterpreters). The main motivation is to support releasing a PyPI package with the 3.13 capabilities compiled for 3.12.
A lot of the changes here involve either hiding details behind macros/functions or splitting up some files.
We add _winapi.BatchedWaitForMultipleObjects to wait for larger numbers of handles.
This is an internal module, hence undocumented, and should be used with caution.
Check the docstring for info before using BatchedWaitForMultipleObjects.