In a few places we switch to another interpreter without knowing if it has a thread state associated with the current thread. For the main interpreter there wasn't much of a problem, but for subinterpreters we were *mostly* okay re-using the tstate created with the interpreter (located via PyInterpreterState_ThreadHead()). There was a good chance that tstate wasn't actually in use by another thread.
However, there are no guarantees of that. Furthermore, re-using an already used tstate is currently fragile. To address this, we now create a new thread state in each of those places and use it.
One consequence of this change is that PyInterpreterState_ThreadHead() may now return NULL (though that won't happen for the main interpreter).
* Enhanced sqlite3 connection context management documentation with contextlib.closing
* 📜🤖 Added by blurb_it.
* Fixed gitignore spelling error from nitignore to gitignore
* Renamed .gitignore to .nitignore
* Added generated doctests
* Deleted sqlite3 generated files
* Removed white-space changes
* Removed News entry from the doc
* Expanded a note that context manager can be used for connection management using contextlib.closing
* Removed repeated contextlib.closing code snippet
* Expanded the note around usage of context managers for sqlite3 connection management (see the sketch below)
* Deleted extra white-spaces
* Deleted extra white-space
* Re-arranged context manager wording
* Re-arranged word layout on how to use context manager
* Fix whitespace errors
* Remove unneeded change in .gitignore
* Added suggested changes
* Added suggested change redirecting to the contextlib.closing implementation
* Added closing keyword
* Removed line 2473
---------
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Erlend E. Aasland <erlend@python.org>
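A minimal sketch of the documented pattern (the database name and table are illustrative): contextlib.closing() guarantees the connection gets closed, while the connection's own context manager only scopes the transaction.

```python
import contextlib
import sqlite3

# closing() closes the connection when the outer block exits; the
# connection's own context manager (the inner "with") only commits on
# success or rolls back on error, it does not close the connection.
with contextlib.closing(sqlite3.connect("example.db")) as con:
    with con:
        con.execute("CREATE TABLE IF NOT EXISTS t(x)")
        con.execute("INSERT INTO t VALUES (?)", (1,))
```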
The existence of background threads running on a subinterpreter was preventing interpreters from getting properly destroyed, as well as impacting the ability to run the interpreter again. It also affected how we wait for non-daemon threads to finish.
We add PyInterpreterState.threads.main, with some internal C-API functions.
This change makes sure sys.path[0] is set properly for subinterpreters. Before, it wasn't getting set at all. This PR does not address the broader concerns from gh-109853.
Make the PyObject_VisitManagedDict() and PyObject_ClearManagedDict()
functions public in the Python 3.13 C API.
* Rename _PyObject_VisitManagedDict() to PyObject_VisitManagedDict().
* Rename _PyObject_ClearManagedDict() to PyObject_ClearManagedDict().
* Document these functions.
If the timeout is greater than PY_TIMEOUT_MAX,
PyThread_acquire_lock_timed() uses a timeout of PY_TIMEOUT_MAX
microseconds, which is around 280.6 years. This case is unlikely and
limiting a timeout to 280.6 years sounds like a reasonable trade-off.
The constant PY_TIMEOUT_MAX is not used in the PyPI top 5,000 projects.
* Add new installation path functions subsection (see the sketch after this list)
* Add content from install/index to sysconfig
* Fix table
* Update note about installers
* Clean up the list of schemes, remove references to Distutils
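As a quick illustration of the installation path functions the new subsection covers (output is platform-dependent):

```python
import sysconfig

# Names of the installation paths ("stdlib", "purelib", "scripts", "data", ...)
print(sysconfig.get_path_names())

# Install scheme used by default on this platform (e.g. "posix_prefix", "nt")
print(sysconfig.get_default_scheme())

# Directory where pure-Python packages are installed under the default scheme
print(sysconfig.get_path("purelib"))

# All installation paths for the default scheme, as a dict
print(sysconfig.get_paths())
```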
This method supports file URIs (including variants) as described in RFC 8089, such as URIs generated by `pathlib.Path.as_uri()` and `urllib.request.pathname2url()`.
The method is added to `Path` rather than `PurePath` because it uses `os.fsdecode()`, and so its results vary from system to system. I intend to deprecate `PurePath.as_uri()` and move it to `Path` for the same reason.
Co-authored-by: Adam Turner <9087854+AA-Turner@users.noreply.github.com>
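A short sketch of the intended usage, assuming the method is spelled Path.from_uri() (the name is not stated above); the POSIX-style path is illustrative:

```python
from pathlib import Path

# Round-trip an absolute POSIX path through a file URI (Python 3.13+)
p = Path("/etc/hosts")
uri = p.as_uri()              # 'file:///etc/hosts'
print(Path.from_uri(uri))     # /etc/hosts

# RFC 8089 variants such as a "localhost" authority are also accepted
print(Path.from_uri("file://localhost/etc/hosts"))
```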
* _add_python_opts() now handles cross compilation and HOSTRUNNER.
* display_header() now reports whether Python is cross-compiled, displays
  HOSTRUNNER, and gets the host platform.
* Remove Tools/scripts/run_tests.py script.
* Remove "make hostrunnertest": use "make buildbottest"
or "make test" instead.
* Refactor os_sched_getaffinity_impl(): move variable definitions to
their first assignment.
* Fix test_posix.test_sched_getaffinity(): restore the old CPU mask
when the test completes!
* Doc: Specify that os.cpu_count() counts *logical* CPUs.
* Doc: Specify that os.sched_getaffinity(0) is related to the calling
  thread (see the sketch after this list).
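A small sketch illustrating both documentation points (os.sched_getaffinity() is only available on some platforms, e.g. Linux):

```python
import os

# Logical CPUs in the system, including SMT/hyper-threading siblings
print(os.cpu_count())

# CPUs the calling thread is allowed to run on; this can be smaller than
# cpu_count() if an affinity mask is set (e.g. via taskset or cgroups)
if hasattr(os, "sched_getaffinity"):
    print(len(os.sched_getaffinity(0)))
```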
Rename WORKER_ERROR to WORKER_BUG. Add WORKER_FAILED state: it does
not stop the manager, whereas WORKER_BUG does.
Also change the TestResults.display_result() order: display failed tests
at the end, since that is the most important information.
WorkerThread now tries to get the signal name for negative exit codes.
* Remove unused <locale.h> includes.
* Remove unused <fcntl.h> include in traceback.h.
* Remove redundant <assert.h> and <stddef.h> includes. They are already
included by "Python.h".
* Remove <object.h> include in faulthandler.c. Python.h already includes it.
* Add missing <stdbool.h> in pycore_pythread.h if HAVE_PTHREAD_STUBS
is defined.
* Also fix warnings in pthread_stubs.h: don't redefine macros if they
  are already defined, like the __NEED_pthread_t macro.
Also call copy_python_src_ignore() on listdir() names.
shutil.copytree(): replace set() with an empty tuple. An empty tuple
becomes a constant in the compiler, and checking whether an item is in
an empty tuple is cheap.
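A quick way to see the reasoning, using dis: the empty tuple literal is folded into a code constant, while set() costs a name lookup and a call every time the membership test runs.

```python
import dis

def in_empty_tuple(name):
    return name in ()      # () is a compile-time constant (LOAD_CONST)

def in_empty_set(name):
    return name in set()   # set() is looked up and called at run time

dis.dis(in_empty_tuple)
dis.dis(in_empty_set)
```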
* pycore_pythread.h is now the central place to make sure that
_POSIX_THREADS and _POSIX_SEMAPHORES macros are defined if
available.
* Make sure that pycore_pythread.h is included when _POSIX_THREADS
and _POSIX_SEMAPHORES macros are tested.
* PY_TIMEOUT_MAX is now defined as a constant instead of a macro, since
  its value depends on _POSIX_THREADS.
* Prevent integer overflow in the preprocessor when computing
PY_TIMEOUT_MAX_VALUE on Windows:
replace "0xFFFFFFFELL * 1000 < LLONG_MAX"
with "0xFFFFFFFELL < LLONG_MAX / 1000".
* Document the change and give hints on how to fix affected code.
* Add an exception for PY_TIMEOUT_MAX name to smelly.py
* Add PY_TIMEOUT_MAX to the stable ABI
Add private `pathlib._PathBase` class. This will be used by an experimental PyPI package to incubate a `tarfile.TarPath` class.
Co-authored-by: Adam Turner <9087854+AA-Turner@users.noreply.github.com>