* _PyThreadState_DeleteCurrent() now takes tstate rather than
runtime.
* Add ensure_tstate_not_null() helper to pystate.c.
* Add _PyEval_ReleaseLock() function.
* _PyThreadState_DeleteCurrent() now calls
_PyEval_ReleaseLock(tstate) and frees PyThreadState memory after
this call, not before.
* PyGILState_Release(): rename "tcur" variable to "tstate".
Replace the _PyInterpreterState_Get() function call with the
_PyInterpreterState_GET_UNSAFE() macro, which is more efficient but
does not check whether tstate or interp is NULL.
_Py_GetConfigsAsDict() now uses _PyThreadState_GET().
Add PyInterpreterState.runtime field: a reference to the _PyRuntime
global variable. This field exists so that runtime does not have to be
passed to a function in addition to tstate; get runtime from tstate as
tstate->interp->runtime.
Remove "_PyRuntimeState *runtime" parameter from functions already
taking a "PyThreadState *tstate" parameter.
_PyGC_Init() first parameter becomes "PyThreadState *tstate".
* Rename PyThreadState_DeleteCurrent()
to _PyThreadState_DeleteCurrent()
* Move it to the internal C API
Co-Authored-By: Carol Willing <carolcode@willingconsulting.com>
In a subinterpreter, spawning a daemon thread now raises an
exception. Daemon threads were never supported in subinterpreters.
Previously, subinterpreter finalization crashed with a Python
fatal error if a daemon thread was still running.
* Add _thread._is_main_interpreter()
* threading.Thread.start() now raises RuntimeError if the thread is a
daemon thread and the method is called from a subinterpreter (as
sketched below).
* The _thread module now uses Argument Clinic for the new function.
* Use textwrap.dedent() in test_threading.SubinterpThreadingTests
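A minimal sketch of the new behaviour (the check itself lives inside
threading.Thread.start(); _thread._is_main_interpreter() is the private
helper added above):

    import _thread
    import threading

    def work():
        print("running in thread", _thread.get_ident())

    t = threading.Thread(target=work, daemon=True)

    # In the main interpreter a daemon thread starts as before; in a
    # subinterpreter, Thread.start() now raises RuntimeError instead of
    # letting finalization crash later.
    if _thread._is_main_interpreter():
        t.start()
        t.join()
    else:
        try:
            t.start()
        except RuntimeError as exc:
            print("daemon threads are not supported here:", exc)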
_thread.start_new_thread() now logs uncaught exceptions raised by the
function using sys.unraisablehook() rather than sys.excepthook(), so
the hook gets access to the function which raised the exception.
Add a new threading.excepthook() function which handles uncaught
exceptions raised by Thread.run(). It can be overridden to control how
uncaught exceptions are handled (see the example after the list below).
threading.ExceptHookArgs is not documented on purpose: it should not
be used directly.
* Add threading.excepthook() and threading.ExceptHookArgs.
* Add _PyErr_Display(): similar to PyErr_Display(), but accepts a
  'file' parameter.
* Add _thread._excepthook(): C implementation of the exception hook
calling _PyErr_Display().
* Add _thread._ExceptHookArgs: structseq type.
* Add threading._invoke_excepthook_wrapper() which handles the gory
details to ensure that everything remains alive during Python
shutdown.
* Add unit tests.
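As an example (not part of the commit itself), the hook can be replaced
to customise reporting; the args object passed to the hook exposes
exc_type, exc_value, exc_traceback and thread attributes:

    import threading

    def log_thread_exception(args):
        # args is a threading.ExceptHookArgs instance.
        print(f"uncaught {args.exc_type.__name__} in thread "
              f"{args.thread.name}: {args.exc_value}")

    # Override the hook to control how uncaught Thread.run() exceptions
    # are handled.
    threading.excepthook = log_thread_exception

    t = threading.Thread(target=lambda: 1 / 0, name="worker")
    t.start()
    t.join()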
* Add 'runtime' parameter to _PyThreadState_Init()
* Add 'gilstate' parameter to _PyGILState_NoteThreadState()
* Move _PyThreadState_Init() and _PyThreadState_DeleteExcept()
to the internal C API.
Fix invalid function cast warnings with gcc 8
for method conventions different from METH_NOARGS, METH_O and
METH_VARARGS, excluding Argument Clinic generated code.
METH_NOARGS functions need only a single argument but they are cast
into a PyCFunction, which takes two arguments. This triggers an
invalid function cast warning in gcc 8 due to the argument mismatch.
Fix this by adding a dummy unused argument.
Previously, AttributeError was always raised when an attribute was not found.
This commit skips raising AttributeError when `tp_getattro` is `PyObject_GenericGetAttr`.
It makes hasattr() and getattr() about 4x faster when the attribute is not found.
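A quick illustration of the (unchanged, but now faster) missing-attribute
path for an ordinary class, i.e. one whose tp_getattro is
PyObject_GenericGetAttr:

    class Plain:
        pass

    obj = Plain()

    # The results are the same as before; the lookup just no longer has
    # to create and discard an AttributeError internally.
    print(hasattr(obj, "missing"))        # False
    print(getattr(obj, "missing", None))  # None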
Fix the following false-alarm Coverity warning:

    Result is not floating-point (UNINTENDED_INTEGER_DIVISION)
    integer_division: Dividing integer expressions 9223372036854775807LL
    and 1000LL, and then converting the integer quotient to type double.
    Any remainder, or fractional part of the quotient, is ignored.
    To compute and use a non-integer quotient, change or cast either
    operand to type double. If integer division is intended, consider
    indicating that by casting the result to type long long.
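The warning's suggested remedy is to cast an operand to double when a
floating-point quotient is intended; the arithmetic difference can be
shown in Python terms:

    # Floor division drops the remainder, which is what Coverity flags;
    # true division keeps the fractional part (within float precision).
    PY_TIMEOUT_MAX = 9223372036854775807   # the constant from the warning

    print(PY_TIMEOUT_MAX // 1000)   # 9223372036854775  (remainder 807 lost)
    print(PY_TIMEOUT_MAX / 1000)    # 9.223372036854776e+15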
The kB (*kilo* byte) unit means 1000 bytes, whereas KiB ("kibibyte")
means 1024 bytes. KB was misused: replace kB or KB with KiB when
appropriate.
Make the same change for MB and GB, which become MiB and GiB.
Change the output of Tools/iobench/iobench.py.
Also round the size of the documentation from 5.5 MB to 5 MiB.
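For reference, the two conventions side by side (illustrative numbers,
not taken from the documentation build):

    size = 5 * 1024 * 1024               # 5 MiB
    print(size / (1000 * 1000), "MB")    # 5.24288 MB  (decimal megabytes)
    print(size / (1024 * 1024), "MiB")   # 5.0 MiB     (binary mebibytes)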
Fix the pthread+semaphore implementation of
PyThread_acquire_lock_timed() when called with timeout > 0 and
intr_flag=0: recompute the timeout if sem_timedwait() is interrupted
by a signal (EINTR).
See also PEP 475.
The pthread implementation of PyThread_acquire_lock() now fails with
a fatal error if the timeout is larger than PY_TIMEOUT_MAX, as done
in the Windows implementation.
The check prevents any risk of overflow in PyThread_acquire_lock().
Also add the PY_DWORD_MAX constant.
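From Python code this limit should be unreachable: it surfaces as
threading.TIMEOUT_MAX, and larger timeouts are rejected with
OverflowError before PyThread_acquire_lock_timed() is ever called.
A small sanity check (offered here only as an illustration):

    import threading

    lock = threading.Lock()

    print(threading.TIMEOUT_MAX)
    try:
        lock.acquire(timeout=threading.TIMEOUT_MAX * 2)
    except OverflowError as exc:
        print("rejected:", exc)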
Fix timeout rounding in time.sleep(), threading.Lock.acquire() and
socket.socket.settimeout() to correctly round negative timeouts between
-1.0 and 0.0. The functions now block waiting for events as expected;
previously, the call was incorrectly non-blocking.
* group the (stateful) runtime globals into various topical structs
* consolidate the topical structs under a single top-level _PyRuntimeState struct
* add a check-c-globals.py script that helps identify runtime globals
Other globals are excluded (see globals.txt and check-c-globals.py).
* bpo-6532: Make the thread id an unsigned integer.
On the C API side, the types of the results of PyThread_start_new_thread()
and PyThread_get_thread_ident(), the id parameter of
PyThreadState_SetAsyncExc(), and the thread_id field of PyThreadState
changed from "long" to "unsigned long" (illustrated after the list below).
* Restore a check in thread_get_ident().
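At the Python level, the change shows up only in that thread
identifiers are always non-negative integers (a trivial check, offered
here only as an illustration):

    import _thread
    import threading

    ident = threading.get_ident()
    print(ident, ident >= 0)
    print(_thread.get_ident() == ident)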