Add a private C API for deadlines: add _PyDeadline_Init() and
_PyDeadline_Get() functions.
* Add _PyTime_Add() and _PyTime_Mul() functions which compute t1+t2
and t1*t2 and clamp the result on overflow.
* _PyTime_MulDiv() now uses _PyTime_Add() and _PyTime_Mul().
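A minimal sketch of how this deadline API is meant to be used; the
retry loop and the 5 second timeout are illustrative rather than taken
from a specific call site, and the private _PyTime_*/_PyDeadline_*
declarations are assumed to be available:

    /* Compute the deadline once: monotonic clock + timeout,
       clamped on overflow (via _PyTime_Add()). */
    _PyTime_t timeout = _PyTime_FromSeconds(5);
    _PyTime_t deadline = _PyDeadline_Init(timeout);

    for (;;) {
        /* ... attempt an interruptible wait or acquire ... */

        /* Remaining time before the deadline: deadline - monotonic clock. */
        _PyTime_t remaining = _PyDeadline_Get(deadline);
        if (remaining <= 0) {
            break;   /* deadline reached: give up */
        }
        /* otherwise retry, waiting at most 'remaining' */
    }

Computing the deadline once and re-deriving the remaining timeout on
each retry keeps the total wait bounded even when the wait is
repeatedly interrupted by signals.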
If the DEBUG_STATS debug flag is set, gc_collect_main() now uses
_PyTime_GetPerfCounter() instead of _PyTime_GetMonotonicClock() to
measure the elapsed time.
On Windows, _PyTime_GetMonotonicClock() only has a resolution of 15.6
ms, whereas _PyTime_GetPerfCounter() is closer to a resolution of 100
ns.
WaitForSingleObject() accepts a timeout in milliseconds in the range
[0; 0xFFFFFFFE] (DWORD type). The INFINITE value (0xFFFFFFFF) means no
timeout. 0xFFFFFFFE milliseconds is around 49.7 days.
PY_TIMEOUT_MAX is (0xFFFFFFFE * 1000) microseconds on Windows, that is
0xFFFFFFFE milliseconds, around 49.7 days.
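A rough sketch of the conversion this implies; the helper below is
hypothetical, not CPython's code:

    #include <windows.h>

    /* Convert a timeout in microseconds to the DWORD millisecond value
       expected by WaitForSingleObject(). A negative timeout means
       "wait forever". */
    static DWORD
    timeout_us_to_milliseconds(long long timeout_us)
    {
        if (timeout_us < 0) {
            return INFINITE;             /* blocking wait */
        }
        unsigned long long ms = (unsigned long long)timeout_us / 1000;
        if (ms > 0xFFFFFFFEULL) {
            ms = 0xFFFFFFFEULL;          /* clamp just below INFINITE */
        }
        return (DWORD)ms;
    }

A real implementation would also round up rather than truncate, so
that a small positive timeout does not silently become a zero
(non-blocking) wait.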
Partially revert commit 37b8294d62.
Add the PID to the names of POSIX shared memory objects so that the
multiprocessing tests (test_multiprocessing_fork,
test_multiprocessing_spawn, etc.) can run in parallel.
On Unix, if the sem_clockwait() function is available in the C
library (glibc 2.30 and newer), the threading.Lock.acquire() method
now uses the monotonic clock (time.CLOCK_MONOTONIC) for the timeout,
rather than the system clock (time.CLOCK_REALTIME), so that it is not
affected by system clock changes.
configure now checks whether the sem_clockwait() function is
available.
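A minimal sketch of the underlying C pattern, assuming glibc 2.30 or
newer; the helper name and error handling are illustrative, not
CPython's actual code:

    #define _GNU_SOURCE
    #include <semaphore.h>
    #include <time.h>
    #include <errno.h>

    /* Wait on a semaphore for up to timeout_sec seconds, measuring the
       deadline against CLOCK_MONOTONIC so that setting the system
       clock forwards or backwards does not change the wait. */
    static int
    wait_monotonic(sem_t *sem, time_t timeout_sec)
    {
        struct timespec deadline;
        clock_gettime(CLOCK_MONOTONIC, &deadline);
        deadline.tv_sec += timeout_sec;

        int res;
        do {
            res = sem_clockwait(sem, CLOCK_MONOTONIC, &deadline);
        } while (res < 0 && errno == EINTR);  /* restart after a signal */
        return res;  /* 0 on success, -1 with errno == ETIMEDOUT on timeout */
    }

sem_timedwait(), by contrast, only accepts a deadline measured against
CLOCK_REALTIME, which is why the system clock had to be used before
this change.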
I've added a number of test-only modules. Some of those cases are covered by the recently frozen stdlib modules (and some will be once we add encodings back in). However, I figured we'd play it safe by having a set of modules guaranteed to be there during tests.
https://bugs.python.org/issue45020
* Work correctly if an additional fresh module imports another
additional fresh module which imports a blocked module.
* Raise ImportError if the specified module cannot be imported while
all additional fresh modules are successfully imported.
* Support blocking packages.
* Always restore the import state of fresh and blocked modules
and their submodules.
* Fix test_decimal and test_xml_etree which depended on an undesired
side effect of import_fresh_module().
PyThread_acquire_lock_timed() now clamps the timeout into the
[_PyTime_MIN; _PyTime_MAX] range (_PyTime_t type) if it is too large,
rather than calling Py_FatalError() which aborts the process.
PyThread_acquire_lock_timed() no longer uses
MICROSECONDS_TO_TIMESPEC() to compute the sem_timedwait() argument; it
now uses _PyTime_GetSystemClock() and _PyTime_AsTimespec_truncate()
instead.
Fix _thread.TIMEOUT_MAX value on Windows: the maximum timeout is
0x7FFFFFFF milliseconds (around 24.9 days), not 0xFFFFFFFF
milliseconds (around 49.7 days).
Set PY_TIMEOUT_MAX to 0x7FFFFFFF milliseconds, rather than 0xFFFFFFFF
milliseconds.
Fix PY_TIMEOUT_MAX overflow test: replace (us >= PY_TIMEOUT_MAX) with
(us > PY_TIMEOUT_MAX).
Add pytime_add() and pytime_mul() functions to pytime.c to compute
t1+t2 and t*k with clamping to [_PyTime_MIN; _PyTime_MAX] on overflow.
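An illustrative sketch of this kind of saturating ("clamped")
arithmetic; the names, signatures and the PYTIME_MIN/PYTIME_MAX
placeholders are made up for the example and do not match the actual
pytime.c code:

    #include <stdint.h>

    #define PYTIME_MIN INT64_MIN   /* stands in for _PyTime_MIN */
    #define PYTIME_MAX INT64_MAX   /* stands in for _PyTime_MAX */

    /* t1 + t2, clamped to [PYTIME_MIN; PYTIME_MAX] instead of overflowing */
    static int64_t
    saturating_add(int64_t t1, int64_t t2)
    {
        if (t2 > 0 && t1 > PYTIME_MAX - t2) {
            return PYTIME_MAX;
        }
        if (t2 < 0 && t1 < PYTIME_MIN - t2) {
            return PYTIME_MIN;
        }
        return t1 + t2;
    }

    /* t * k for k > 0, clamped to [PYTIME_MIN; PYTIME_MAX] */
    static int64_t
    saturating_mul(int64_t t, int64_t k)
    {
        if (t > PYTIME_MAX / k) {
            return PYTIME_MAX;
        }
        if (t < PYTIME_MIN / k) {
            return PYTIME_MIN;
        }
        return t * k;
    }

Checking against the bound before doing the arithmetic is what avoids
the undefined behavior of signed integer overflow in C.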
Fix pytime.h: _PyTime_FromTimeval() is not implemented on Windows.
Add the _PyTime_AsTimespec_clamp() function: similar to
_PyTime_AsTimespec(), but clamp to _PyTime_t min/max and don't raise
an exception.
PyThread_acquire_lock_timed() now uses _PyTime_AsTimespec_clamp() to
remove the Py_UNREACHABLE() code path.
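An illustrative sketch of a "clamping" nanoseconds-to-timespec
conversion in the same spirit; this is not the CPython implementation,
and the 32-bit time_t handling is deliberately simplified:

    #include <stdint.h>
    #include <time.h>

    static void
    ns_to_timespec_clamp(int64_t ns, struct timespec *ts)
    {
        int64_t sec = ns / 1000000000;
        int64_t nsec = ns % 1000000000;
        if (nsec < 0) {              /* keep tv_nsec in [0; 999999999] */
            nsec += 1000000000;
            sec--;
        }
        if (sizeof(time_t) == 4) {   /* clamp if time_t is only 32 bits */
            if (sec > INT32_MAX) {
                sec = INT32_MAX;
                nsec = 999999999;
            }
            else if (sec < INT32_MIN) {
                sec = INT32_MIN;
                nsec = 0;
            }
        }
        ts->tv_sec = (time_t)sec;
        ts->tv_nsec = (long)nsec;
    }

Because the conversion can no longer fail, callers such as
PyThread_acquire_lock_timed() do not need an error branch (or a
Py_UNREACHABLE() fallback) after it.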
* Add _PyTime_AsTime_t() function.
* Add PY_TIME_T_MIN and PY_TIME_T_MAX constants.
* Replace _PyTime_AsTimeval_noraise() with _PyTime_AsTimeval_clamp().
* Add pytime_divide_round_up() function (see the sketch below).
* Fix integer overflow in pytime_divide().
* Add pytime_divmod() function.
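An illustrative sketch of an overflow-safe "round up" division of the
kind the list above refers to; the name and signature are made up and
may differ from the actual pytime_divide_round_up():

    #include <assert.h>
    #include <stdint.h>

    /* Divide t by k, rounding up, assuming t >= 0 and k > 0. */
    static int64_t
    divide_round_up(int64_t t, int64_t k)
    {
        assert(t >= 0 && k > 0);
        if (t == 0) {
            return 0;
        }
        /* The naive (t + k - 1) / k overflows when t is close to
           INT64_MAX; 1 + (t - 1) / k gives the same result without
           overflowing. */
        return 1 + (t - 1) / k;
    }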
Currently we're freezing the __init__.py twice, duplicating the built data unnecessarily. With this change we do it once. There is no change in runtime behavior.
https://bugs.python.org/issue45020
Removed an extra comma in the comment describing the states of a `Barrier`, as it was confusing and broke the flow while reading.
Co-authored-by: Priyank <5903604+cpriyank@users.noreply.github.com>
* during tarfile parsing, a zlib error indicates invalid data
* tarfile.open now raises a descriptive exception from the zlib error
* this makes it clear to the user that they may be trying to open a
corrupted tar file
During runtime startup we figure out the stdlib dir but currently throw that information away. This change preserves it and exposes it via PyConfig.stdlib_dir, _Py_GetStdlibDir(), and sys._stdlib_dir.
https://bugs.python.org/issue45211
Fix the threading._shutdown() function when the threading module was
first imported from a thread other than the main thread: no longer
log an error at Python exit.
This accomplishes 2 things:
* consolidates some common code between getpath.c and getpathp.c
* makes the helpers available to code in other files
FWIW, the signature of the join_relfile() function (in fileutils.c) intentionally mirrors that of Windows' PathCchCombineEx().
Note that this change is mostly moving code around. No behavior is meant to change.
https://bugs.python.org/issue45211
Fix a race condition in the Thread.join() method of the threading
module. If the function is interrupted by a signal and the signal
handler raises an exception, make sure that the thread remains in a
consistent state to prevent a deadlock.