The _PyTime_t type is defined as int64_t, so its min/max values are
INT64_MIN/INT64_MAX, not PY_LLONG_MIN/PY_LLONG_MAX.
(cherry picked from commit 8e76c45622)
Co-authored-by: Sergey Fedoseev <fedoseev.sergey@gmail.com>
Add new time functions:
* time.clock_gettime_ns()
* time.clock_settime_ns()
* time.monotonic_ns()
* time.perf_counter_ns()
* time.process_time_ns()
* time.time_ns()
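A minimal usage sketch of the new nanosecond variants: they return Python
integers, so large timestamps do not lose precision to float rounding
(time.clock_gettime_ns() and time.clock_settime_ns() are Unix-only and
omitted here):

    import time

    t = time.time_ns()          # wall clock, nanoseconds since the Epoch
    m = time.monotonic_ns()     # monotonic clock, nanoseconds
    p = time.perf_counter_ns()  # highest-resolution counter, nanoseconds
    c = time.process_time_ns()  # CPU time of the current process, nanoseconds
    print(t, m, p, c)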
Add new _PyTime functions:
* _PyTime_FromTimespec()
* _PyTime_FromNanosecondsObject()
* _PyTime_FromTimeval()
Other changes:
* Also add os.times() tests to test_os.
* pytime_fromtimeval() and pytime_fromtimespec() now return
  _PyTime_MAX or _PyTime_MIN on overflow, rather than triggering undefined
  behaviour
* _PyTime_FromNanoseconds() parameter type changes from long long to
_PyTime_t
* Rewrite win_perf_counter() to only use integers internally.
* Add _PyTime_MulDiv(), which computes "ticks * mul / div"
  in two parts (integer part and remainder) to prevent integer overflow;
  see the sketch after this list.
* Clock frequency is checked at initialization for integer overflow.
* Also enhance pymonotonic() to reduce precision loss on macOS
  (the mach_absolute_time() clock).
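A Python sketch of the splitting used by _PyTime_MulDiv(); the real function
is C code operating on _PyTime_t values (Python integers cannot overflow), so
this only illustrates the algebra that avoids forming the huge intermediate
product:

    def muldiv(ticks, mul, div):
        # ticks * mul / div computed as:
        #   (intpart * div + remaining) * mul / div
        # = intpart * mul + remaining * mul / div
        # so the full product ticks * mul is never formed.
        intpart, remaining = divmod(ticks, div)
        return intpart * mul + remaining * mul // div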
time.clock() and time.perf_counter() now use a C double internally again.
Also remove _PyTime_GetWinPerfCounterWithInfo(): use
_PyTime_GetPerfCounterDoubleWithInfo() instead on Windows.
On Windows, the tv_sec field of the timeval structure has the C type long,
whereas it has the C type time_t on all other platforms. A C long has a size
of 32 bits (a signed integer: 1 bit for the sign, 31 bits for the value),
which is not enough to store an Epoch timestamp after the year 2038.
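The 2038 limit is just the largest signed 32-bit number of seconds since the
Epoch; a quick check in Python:

    from datetime import datetime, timezone

    # Largest timestamp a signed 32-bit tv_sec can represent.
    print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00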
Add the _PyTime_AsTimevalTime_t() function, written for datetime.datetime.now():
it converts a _PyTime_t timestamp to a (secs, us) tuple where the secs type is
time_t. This makes it possible to support dates after the year 2038 on Windows.
Also enhance _PyTime_AsTimeval_impl() to detect overflow of the number of
seconds when rounding the number of microseconds.
datetime.datetime now rounds microseconds to nearest with ties going to the
nearest even integer (ROUND_HALF_EVEN), like round(float), instead of rounding
towards -Infinity (ROUND_FLOOR).
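ROUND_HALF_EVEN is the same rule used by the built-in round():

    # Ties go to the nearest even integer ("banker's rounding"),
    # instead of always towards -Infinity (ROUND_FLOOR).
    print(round(0.5), round(1.5), round(2.5))  # 0 2 2
    print(round(-0.5), round(-1.5))            # 0 -2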
pytime API: replace _PyTime_ROUND_HALF_UP with _PyTime_ROUND_HALF_EVEN. Also
fix _PyTime_Divide() for negative numbers.
_PyTime_AsTimeval_impl() now reuses _PyTime_Divide() instead of reimplementing
rounding modes.
with ties going away from zero (ROUND_HALF_UP), as in Python 2 and in Python
versions older than 3.3, instead of rounding to nearest with ties going to the
nearest even integer (ROUND_HALF_EVEN).
Also add a new _PyTime_AsMicroseconds() function.
threading.TIMEOUT_MAX is now smaller: only 292 years instead of 292,271
years on a 64-bit system, for example. Sorry, your threads will hang a *little
bit* shorter. Call me if you want to ensure that your locks wait longer, I can
share some tricks with you.
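The 292-year figure is simply the signed 64-bit nanosecond range expressed in
years; a back-of-the-envelope check, assuming a 64-bit _PyTime_t:

    ns_max = 2**63 - 1
    seconds_per_year = 365.25 * 24 * 3600
    print(ns_max / 10**9 / seconds_per_year)  # ~292.3 years
    # The old ~292,271-year limit matches the same range counted in
    # microseconds instead of nanoseconds.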
* _PyTime_AsTimeval() now ensures that tv_usec is always positive (see the
  sketch after this list)
* _PyTime_AsTimespec() now ensures that tv_nsec is always positive
* _PyTime_AsTimeval() now returns an integer on overflow instead of raising an
exception
* Rename _PyTime_FromObject() to _PyTime_FromSecondsObject()
* Add _PyTime_AsNanosecondsObject() and _testcapi.pytime_fromsecondsobject()
* Add unit tests
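A sketch of the "tv_usec is always positive" normalization from the list
above, relying on Python's floor-division semantics (the helper name is made
up for illustration; the C code performs the equivalent adjustment by hand):

    def as_timeval(t_ns):
        # Split a nanosecond timestamp into (tv_sec, tv_usec)
        # with 0 <= tv_usec < 10**6, even for negative timestamps.
        tv_sec, rest_ns = divmod(t_ns, 10**9)
        return tv_sec, rest_ns // 1000

    print(as_timeval(-1_500_000_000))  # (-2, 500000)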
In practice, _PyTime_t is a number of nanoseconds. Its C type is a 64-bit
signed integer. Its integer value is in the range [-2^63; 2^63-1]. In seconds,
the range is around [-292 years; +292 years]. In terms of Epoch timestamps
(1970-01-01), it can store a date between 1677-09-21 and 2262-04-11.
The API has a resolution of 1 nanosecond and uses integer numbers. With a
resolution of 1 nanosecond, 64-bit IEEE 754 floating point numbers lose
precision after 194 days. That is not the case with this API. The drawback is
overflow for values outside [-2^63; 2^63-1], but such values are unlikely for
most Python modules, except for the datetime module.
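The 1677/2262 bounds can be checked with datetime and timedelta (microsecond
granularity is enough for the dates):

    from datetime import datetime, timedelta

    epoch = datetime(1970, 1, 1)
    span = timedelta(microseconds=(2**63 - 1) // 1000)
    print(epoch - span)  # 1677-09-21 00:12:43.145225
    print(epoch + span)  # 2262-04-11 23:47:16.854775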
New functions:
- _PyTime_GetMonotonicClock()
- _PyTime_FromObject()
- _PyTime_AsMilliseconds()
- _PyTime_AsTimeval()
This change uses these new functions in time.sleep() to avoid rounding issues.
The new API will be extended step by step, and the old API will be removed step
by step. Currently, some code is duplicated just to be able to move
incrementally, instead of pushing a large change at once.
interrupted by a signal
Add a new _PyTime_AddDouble() function and remove the _PyTime_ADD_SECONDS()
macro. The _PyTime_ADD_SECONDS() macro only supported an integer number of
seconds, whereas _PyTime_AddDouble() has subsecond resolution.
threading.Lock.acquire(), threading.RLock.acquire() and socket operations now
use a monotonic clock, instead of the system clock, when a timeout is used.
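Using a monotonic clock for timeouts means the deadline cannot be shifted by
system clock adjustments (NTP, manual changes). A user-level sketch of the
pattern, not the actual C implementation:

    import time

    def wait_until(predicate, timeout):
        # The deadline is computed once from the monotonic clock, so
        # adjusting the system clock cannot shorten or lengthen the wait.
        deadline = time.monotonic() + timeout
        while not predicate():
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                return False
            time.sleep(min(remaining, 0.05))
        return True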
Other changes:
* The whole _PyTime API is private (not defined if Py_LIMITED_API is set)
* _PyTime_gettimeofday_info() also returns -1 on error
* Simplify PyTime_gettimeofday(): only use clock_gettime(CLOCK_REALTIME) or
  gettimeofday() on UNIX. Don't fall back to ftime() or time() anymore.
Also fix the "adjustable" value on Windows and Linux according to its
documentation: "adjustable" indicates whether the clock *can be* adjusted, not
whether it is or was adjusted.
In most cases, it is not possible to tell whether a clock is or was adjusted.
Removed futimens as it is now redundant.
Changed shutil.copystat to use st_atime_ns and st_mtime_ns from os.stat
and the ns= parameter to utime; it once again preserves exact metadata on Linux!
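Roughly what the timestamp part of shutil.copystat() now does (simplified
sketch; the function name below is hypothetical):

    import os

    def copy_times(src, dst):
        st = os.stat(src)
        # st_atime_ns / st_mtime_ns are integers, so no precision is lost
        # by a round-trip through a float number of seconds.
        os.utime(dst, ns=(st.st_atime_ns, st.st_mtime_ns))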
* Rename time.steady() to time.monotonic()
* On Windows, time.monotonic() uses GetTickCount/GetTickCount64() instead of
QueryPerformanceCounter()
* time.monotonic() uses CLOCK_HIGHRES if available
* Add time.get_clock_info(), time.perf_counter() and time.process_time()
functions
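A quick look at the new introspection and clock functions:

    import time

    info = time.get_clock_info('monotonic')
    # Implementation, resolution, and whether the clock is monotonic
    # and adjustable.
    print(info.implementation, info.resolution, info.monotonic, info.adjustable)

    print(time.perf_counter())  # highest-resolution clock, for benchmarking
    print(time.process_time())  # CPU time of the current process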
time.ctime(), time.gmtime(), time.localtime(), datetime.date.fromtimestamp(),
datetime.datetime.fromtimestamp() and datetime.datetime.utcfromtimestamp() now
raise an OverflowError, instead of a ValueError, if the timestamp does not fit
in time_t.
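For example, with a timestamp far outside the range of time_t (the exact limit
is platform-dependent):

    import time

    try:
        time.localtime(2**100)
    except OverflowError as exc:
        print(exc)  # previously this raised ValueError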
datetime.datetime.fromtimestamp() and datetime.datetime.utcfromtimestamp() now
round microseconds towards zero instead of rounding to nearest with ties going
away from zero.
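The change only affects the sub-microsecond part of a float timestamp; a
pure-arithmetic illustration for a fractional part of 1.5 microseconds (the
variable name is just for illustration):

    import math

    frac_us = 1.5  # microsecond fraction extracted from a float timestamp

    print(math.trunc(frac_us))        # 1: round towards zero (new behaviour)
    print(math.floor(frac_us + 0.5))  # 2: nearest, ties away from zero for
                                      #    positive values (old behaviour)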
retry the select() loop instead of bailing out. This is because select()
can incorrectly report a socket as ready for reading (for example, if it
received some data with an invalid checksum).
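A user-level sketch of the retry pattern (not the actual C code): readiness
reported by select() is only a hint, so a non-blocking recv() can still raise
BlockingIOError and the code has to go back to select():

    import select

    def recv_with_retry(sock, nbytes, timeout):
        # Hypothetical helper; the real code also recomputes the remaining
        # timeout from a monotonic deadline on each retry.
        sock.setblocking(False)
        while True:
            readable, _, _ = select.select([sock], [], [], timeout)
            if not readable:
                raise TimeoutError("recv timed out")
            try:
                return sock.recv(nbytes)
            except BlockingIOError:
                # select() reported the socket as readable but the data was
                # dropped (e.g. invalid checksum): retry the select() loop.
                continue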