Port the _thread extension module to the multiphase initialization
API (PEP 489) and convert its static types to heap types.
Add a traverse function to the lock type, so the garbage collector
can break reference cycles (a minimal sketch of the overall pattern
follows the change list below).
* Fix the ExceptHookArgsType name: "_thread.ExceptHookArgs" instead of
"_thread._ExceptHookArgs".
* PyInit__thread() no longer initializes interp->num_threads to 0:
it is already done in PyInterpreterState_New().
* Use PyModule_AddType(), Py_NewRef() and Py_XNewRef().
* Replace str_dict variable with _Py_IDENTIFIER(__dict__).
* Remove assert(Py_IS_TYPE(obj, &Locktype)) from release_sentinel()
to avoid having to retrieve the type from this callback.
* Add thread_bootstate_free()
* Rename t_bootstrap() to thread_run()
* bootstate structure: rename keyw member to kwargs
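
A minimal sketch of the overall pattern, with illustrative names
("_example" module, "exlockobject" type) rather than the actual _thread
code: a module defined through multiphase initialization slots (PEP 489)
that exposes a GC-aware heap type built from a PyType_Spec with a
traverse slot.

    #include <Python.h>

    typedef struct {
        PyObject_HEAD
        PyObject *attr;              /* may participate in reference cycles */
    } exlockobject;

    static int
    exlock_traverse(PyObject *op, visitproc visit, void *arg)
    {
        Py_VISIT((PyObject *)Py_TYPE(op));  /* heap types visit their type */
        Py_VISIT(((exlockobject *)op)->attr);
        return 0;
    }

    static int
    exlock_clear(PyObject *op)
    {
        Py_CLEAR(((exlockobject *)op)->attr);
        return 0;
    }

    static void
    exlock_dealloc(PyObject *op)
    {
        PyTypeObject *tp = Py_TYPE(op);
        PyObject_GC_UnTrack(op);
        (void)exlock_clear(op);
        tp->tp_free(op);
        Py_DECREF(tp);          /* instances hold a strong ref to heap types */
    }

    static PyType_Slot exlock_slots[] = {
        {Py_tp_traverse, exlock_traverse},
        {Py_tp_clear, exlock_clear},
        {Py_tp_dealloc, exlock_dealloc},
        {0, NULL},
    };

    static PyType_Spec exlock_spec = {
        .name = "_example.lock",
        .basicsize = sizeof(exlockobject),
        .flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,
        .slots = exlock_slots,
    };

    static int
    example_exec(PyObject *module)
    {
        PyObject *type = PyType_FromSpec(&exlock_spec);
        if (type == NULL) {
            return -1;
        }
        int rc = PyModule_AddType(module, (PyTypeObject *)type);
        Py_DECREF(type);
        return rc;
    }

    static PyModuleDef_Slot example_module_slots[] = {
        {Py_mod_exec, example_exec},
        {0, NULL},
    };

    static struct PyModuleDef example_module = {
        PyModuleDef_HEAD_INIT,
        .m_name = "_example",
        .m_size = 0,
        .m_slots = example_module_slots,
    };

    PyMODINIT_FUNC
    PyInit__example(void)
    {
        /* PEP 489: return the definition; the module object is created
           later by the import machinery. */
        return PyModuleDef_Init(&example_module);
    }
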
No longer use deprecated function aliases:
* Replace PyObject_MALLOC() with PyObject_Malloc()
* Replace PyObject_REALLOC() with PyObject_Realloc()
* Replace PyObject_FREE() with PyObject_Free()
* Replace PyObject_Del() with PyObject_Free()
* Replace PyObject_DEL() with PyObject_Free()
No longer use deprecated function aliases:
* Replace PyMem_MALLOC() with PyMem_Malloc()
* Replace PyMem_REALLOC() with PyMem_Realloc()
* Replace PyMem_FREE() with PyMem_Free()
* Replace PyMem_Del() with PyMem_Free()
* Replace PyMem_DEL() with PyMem_Free()
Also modify the PyMem_DEL() macro to use PyMem_Free() directly.
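
For illustration only (not code from the change; "resize_demo" is a
hypothetical function): the non-deprecated spellings are drop-in
replacements, so a typical allocate/resize/free sequence now reads:

    #include <Python.h>

    static PyObject *
    resize_demo(PyObject *Py_UNUSED(module), PyObject *Py_UNUSED(args))
    {
        char *buf = PyMem_Malloc(64);
        if (buf == NULL) {
            return PyErr_NoMemory();
        }
        char *newbuf = PyMem_Realloc(buf, 128);
        if (newbuf == NULL) {
            PyMem_Free(buf);         /* the original block is still valid */
            return PyErr_NoMemory();
        }
        PyMem_Free(newbuf);
        Py_RETURN_NONE;
    }
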
PyDict_GetItemString() and _PyDict_GetItemId() are considered not safe
because they suppress all internal errors and can return a wrong
result; in rare cases they can also silence the current exception.
Remove the no longer used _PyDict_GetItemId().
Add _PyDict_ContainsId() and rename _PyDict_Contains() to
_PyDict_Contains_KnownHash().
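
For context, an illustration with the public API rather than the new
private helpers ("lookup_checked" is a hypothetical function):
PyDict_GetItemWithError() reports lookup failures instead of swallowing
them, which is the behaviour the safe variants aim for.

    #include <Python.h>

    /* Distinguish "key is missing" from "lookup failed with an exception". */
    static PyObject *
    lookup_checked(PyObject *dict, PyObject *key)
    {
        PyObject *value = PyDict_GetItemWithError(dict, key);  /* borrowed */
        if (value == NULL) {
            if (PyErr_Occurred()) {
                return NULL;                     /* real error: propagate it */
            }
            PyErr_SetObject(PyExc_KeyError, key);   /* genuinely missing */
            return NULL;
        }
        return Py_NewRef(value);
    }
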
An isolated subinterpreter cannot spawn threads, spawn a child
process or call os.fork().
* Add private _Py_NewInterpreter(isolated_subinterpreter) function.
* Add isolated=True keyword-only parameter to
_xxsubinterpreters.create().
* Allow again os.fork() in "non-isolated" subinterpreters.
Rename _PyInterpreterState_GET_UNSAFE() to _PyInterpreterState_GET()
for consistency with _PyThreadState_GET() and to have a shorter name
(helps to fit into 80 columns).
Also add "assert(tstate != NULL);" to the function.
Add a private _at_fork_reinit() method to the _thread.Lock,
_thread.RLock, threading.RLock and threading.Condition classes: it
reinitializes the lock after fork in the child process, resetting the
lock to the unlocked state.
Also rename the private _reset_internal_locks() method of
threading.Event to _at_fork_reinit().
* Add _PyThread_at_fork_reinit() private function. It is excluded
from the limited C API.
* threading.Thread._reset_internal_locks() now calls
_at_fork_reinit() on self._tstate_lock rather than creating a new
Python lock object.
* _PyThreadState_DeleteCurrent() now takes tstate rather than
runtime.
* Add ensure_tstate_not_null() helper to pystate.c.
* Add _PyEval_ReleaseLock() function.
* _PyThreadState_DeleteCurrent() now calls
_PyEval_ReleaseLock(tstate) and frees PyThreadState memory after
this call, not before.
* PyGILState_Release(): rename "tcur" variable to "tstate".
Replace the _PyInterpreterState_Get() function call with the
_PyInterpreterState_GET_UNSAFE() macro, which is more efficient but
doesn't check whether tstate or interp is NULL.
_Py_GetConfigsAsDict() now uses _PyThreadState_GET().
Add a PyInterpreterState.runtime field: a reference to the _PyRuntime
global variable. This field exists so that runtime no longer has to be
passed to a function in addition to tstate. Get runtime from tstate:
tstate->interp->runtime.
Remove "_PyRuntimeState *runtime" parameter from functions already
taking a "PyThreadState *tstate" parameter.
_PyGC_Init() first parameter becomes "PyThreadState *tstate".
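
A sketch of the resulting calling convention (the function name
"helper" is illustrative; the _PyRuntimeState type and the
interp->runtime field come from the internal C API headers):

    /* Before: helper(_PyRuntimeState *runtime, PyThreadState *tstate);
       After:  runtime is derived from tstate inside the helper. */
    static void
    helper(PyThreadState *tstate)
    {
        _PyRuntimeState *runtime = tstate->interp->runtime;
        /* ... use runtime and tstate ... */
        (void)runtime;
    }
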
* Rename PyThreadState_DeleteCurrent()
to _PyThreadState_DeleteCurrent()
* Move it to the internal C API
Co-Authored-By: Carol Willing <carolcode@willingconsulting.com>
In a subinterpreter, spawning a daemon thread now raises an
exception. Daemon threads were never supported in subinterpreters.
Previously, the subinterpreter finalization crashed with a Python
fatal error if a daemon thread was still running.
* Add _thread._is_main_interpreter()
* threading.Thread.start() now raises RuntimeError if the thread is a
daemon thread and the method is called from a subinterpreter.
* The _thread module now uses Argument Clinic for the new function.
* Use textwrap.dedent() in test_threading.SubinterpThreadingTests
_thread.start_new_thread() now logs uncaught exceptions raised by the
function using sys.unraisablehook() rather than sys.excepthook(), so
the hook gets access to the function which raised the exception.
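
For illustration of the same idea at the C level ("call_and_report" is
a hypothetical function, not the actual _thread code): an exception
that cannot propagate to a caller is reported via
PyErr_WriteUnraisable(), which invokes sys.unraisablehook and hands it
the object that failed.

    #include <Python.h>

    /* Call a callback and report, rather than propagate, any failure. */
    static void
    call_and_report(PyObject *callback)
    {
        PyObject *res = PyObject_CallNoArgs(callback);
        if (res == NULL) {
            /* Routes the pending exception to sys.unraisablehook and
               passes `callback` along as the object that failed. */
            PyErr_WriteUnraisable(callback);
            return;
        }
        Py_DECREF(res);
    }
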
Add a new threading.excepthook() function which handles uncaught
Thread.run() exceptions. It can be overridden to control how uncaught
exceptions are handled.
threading.ExceptHookArgs is not documented on purpose: it should not
be used directly.
* Add threading.excepthook() and threading.ExceptHookArgs.
* Add _PyErr_Display(): similar to PyErr_Display(), but accepts a
'file' parameter.
* Add _thread._excepthook(): C implementation of the exception hook
calling _PyErr_Display().
* Add _thread._ExceptHookArgs: structseq type.
* Add threading._invoke_excepthook_wrapper() which handles the gory
details to ensure that everything remains alive during Python
shutdown.
* Add unit tests.
* Add 'runtime' parameter to _PyThreadState_Init()
* Add 'gilstate' parameter to _PyGILState_NoteThreadState()
* Move _PyThreadState_Init() and _PyThreadState_DeleteExcept()
to the internal C API.
Fix invalid function cast warnings with gcc 8
for method conventions different from METH_NOARGS, METH_O and
METH_VARARGS, excluding Argument Clinic generated code.
METH_NOARGS functions need only a single argument, but they are cast
into a PyCFunction, which takes two arguments. This triggers an
invalid function cast warning in gcc 8 due to the argument mismatch.
Fix this by adding a dummy unused argument.
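
A sketch of the fix for the METH_NOARGS case ("mod_ping" and
"mod_methods" are hypothetical names): give the function the full
PyCFunction signature and explicitly ignore the second argument, so no
function-pointer cast is needed.

    #include <Python.h>

    /* METH_NOARGS: Python passes NULL as the second argument; accept and
       ignore it instead of casting a one-argument function to PyCFunction. */
    static PyObject *
    mod_ping(PyObject *Py_UNUSED(module), PyObject *Py_UNUSED(ignored))
    {
        return PyUnicode_FromString("pong");
    }

    static PyMethodDef mod_methods[] = {
        {"ping", mod_ping, METH_NOARGS, "Return 'pong'."},
        {NULL, NULL, 0, NULL},
    };
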
AttributeError was always raised when an attribute was not found.
This commit skips raising AttributeError when `tp_getattro` is `PyObject_GenericGetAttr`.
This makes hasattr() and getattr() about 4x faster when the attribute is not found.
Fix the following false-alarm Coverity warning:
Result is not floating-point (UNINTENDED_INTEGER_DIVISION)
integer_division: Dividing integer expressions 9223372036854775807LL
and 1000LL, and then converting the integer quotient to type double.
Any remainder, or fractional part of the quotient, is ignored.
To compute and use a non-integer quotient, change or cast either
operand to type double. If integer division is intended, consider
indicating that by casting the result to type long long.
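
For illustration (values taken from the warning above): the two ways to
make the intent explicit are a cast of one operand to double, or a cast
of the quotient to long long.

    #include <stdio.h>

    int
    main(void)
    {
        long long num = 9223372036854775807LL;

        /* Floating-point quotient intended: cast an operand to double. */
        double ratio = (double)num / 1000LL;

        /* Integer quotient intended: make the truncation explicit. */
        long long quotient = (long long)(num / 1000LL);

        printf("%f %lld\n", ratio, quotient);
        return 0;
    }
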
The kB ("kilobyte") unit means 1000 bytes, whereas KiB ("kibibyte")
means 1024 bytes. KB was misused: replace kB or KB with KiB where
appropriate.
Apply the same change to MB and GB, which become MiB and GiB.
Change the output of Tools/iobench/iobench.py.
Also round the size of the documentation from 5.5 MB to 5 MiB.
Fix the pthread+semaphore implementation of
PyThread_acquire_lock_timed() when called with timeout > 0 and
intr_flag=0: recompute the timeout if sem_timedwait() is interrupted
by a signal (EINTR).
See also PEP 475.
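
A simplified sketch of the retry logic ("sem_acquire_deadline" is a
hypothetical helper using an absolute deadline, so the remaining time
is recomputed implicitly; it only illustrates the idea, not the actual
thread_pthread.h code):

    #include <errno.h>
    #include <semaphore.h>
    #include <time.h>

    /* Wait on `sem` until `deadline` (CLOCK_REALTIME).  If a signal
       interrupts the wait (EINTR), retry with the same absolute deadline,
       so the effective timeout shrinks instead of restarting (PEP 475). */
    static int
    sem_acquire_deadline(sem_t *sem, const struct timespec *deadline)
    {
        int status;
        do {
            status = sem_timedwait(sem, deadline);
        } while (status != 0 && errno == EINTR);
        return status;
    }
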
The pthread implementation of PyThread_acquire_lock() now fails with
a fatal error if the timeout is larger than PY_TIMEOUT_MAX, as done
in the Windows implementation.
The check prevents any risk of overflow in PyThread_acquire_lock().
Also add the PY_DWORD_MAX constant.
Fix timeout rounding in time.sleep(), threading.Lock.acquire() and
socket.socket.settimeout() to correctly round negative timeouts between
-1.0 and 0.0. The functions now block waiting for events as expected.
Previously, the call was incorrectly non-blocking.
* group the (stateful) runtime globals into various topical structs
* consolidate the topical structs under a single top-level _PyRuntimeState struct
* add a check-c-globals.py script that helps identify runtime globals
Other globals are excluded (see globals.txt and check-c-globals.py).
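
A sketch of the consolidation pattern (all names and fields below are
illustrative, not the real CPython layout):

    /* Formerly-separate static globals become fields of topical structs,
       which in turn live in one top-level runtime state struct. */
    struct example_gil_state {
        unsigned long switch_interval_us;
        int enabled;
    };

    struct example_ceval_state {
        struct example_gil_state gil;
        int recursion_limit;
    };

    typedef struct example_runtime_state {
        struct example_ceval_state ceval;
        int initialized;
        /* ... other topical structs ... */
    } example_runtime_state;

    /* The single remaining top-level global. */
    static example_runtime_state example_runtime = {0};
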