Also add a new _PyTime_AsMicroseconds() function.
threading.TIMEOUT_MAX is now smaller: only 292 years instead of 292,271 years
on 64-bit systems, for example. Sorry, your threads will hang a *little bit*
shorter. Call me if you want your locks to wait longer; I can share some tricks
with you.
* _PyTime_AsTimeval() now ensures that tv_usec is always positive
* _PyTime_AsTimespec() now ensures that tv_nsec is always positive
* _PyTime_AsTimeval() now returns an integer on overflow instead of raising an
exception
* Rename _PyTime_FromObject() to _PyTime_FromSecondsObject()
* Add _PyTime_AsNanosecondsObject() and _testcapi.pytime_fromsecondsobject()
* Add unit tests
In practice, _PyTime_t is a number of nanoseconds. Its C type is a 64-bit
signed integer, so its value is in the range [-2^63; 2^63-1]. In seconds, the
range is around [-292 years; +292 years]. In terms of Epoch timestamps
(1970-01-01), it can store dates between 1677-09-21 and 2262-04-11.
The API has a resolution of 1 nanosecond and uses integer numbers. With a
resolution of 1 nanosecond, 64-bit IEEE 754 floating point numbers lose
precision after 194 days. That is not the case with this API. The drawback is
overflow for values outside [-2^63; 2^63-1], but such values are unlikely for
most Python modules, except for the datetime module.
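A quick sanity check of those numbers, as a standalone sketch (the type name
below is illustrative, not the actual private _PyTime_t)::

    #include <stdint.h>
    #include <stdio.h>

    /* Back-of-the-envelope check: a signed 64-bit count of nanoseconds
     * covers roughly +/- 292 years. */
    typedef int64_t pytime_ns_t;

    int main(void)
    {
        const pytime_ns_t NS_PER_SEC = 1000000000;
        long long max_sec = (long long)(INT64_MAX / NS_PER_SEC);
        long long years = max_sec / (365LL * 24 * 3600);

        printf("range: about +/- %lld seconds (~%lld years)\n",
               max_sec, years);
        return 0;
    }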
New functions:
- _PyTime_GetMonotonicClock()
- _PyTime_FromObject()
- _PyTime_AsMilliseconds()
- _PyTime_AsTimeval()
This change uses these new functions in time.sleep() to avoid rounding issues.
The new API will be extended step by step, and the old API will be removed step
by step. Currently, some code is duplicated just to be able to move
incrementally, instead of pushing a large change at once.
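To illustrate the rounding issue mentioned above, here is a minimal sketch
(not the real _PyTime_AsMilliseconds() signature; the sketch hard-codes one
rounding policy) of converting nanoseconds to milliseconds while rounding away
from zero, so a truncated timeout cannot expire too early::

    #include <stdint.h>

    /* Illustrative helper only: convert a nanosecond count to milliseconds,
     * rounding away from zero so that e.g. a 1.5 ms sleep is not silently
     * truncated to 1 ms. Overflow near the extremes is ignored here. */
    static int64_t
    round_ns_to_ms(int64_t ns)
    {
        const int64_t NS_PER_MS = 1000000;
        if (ns >= 0) {
            return (ns + NS_PER_MS - 1) / NS_PER_MS;
        }
        return -((-ns + NS_PER_MS - 1) / NS_PER_MS);
    }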
I expected more users of _Py_wstat(), but in practice it is only used by
Modules/getpath.c. Move the function there, because it is not needed on
Windows: Windows uses PC/getpathp.c, which relies on the Win32 API (e.g.
GetFileAttributesW()), not the POSIX API.
which returned an invalid result (a result with an error set, or no result
without an error set) in the exception message.
Also add a unit test to check that the exception message contains the name of
the function.
Special case: the final _PyEval_EvalFrameEx() check doesn't mention the
function since it didn't execute a single function but a whole frame.
interrupted by a signal
Add a new _PyTime_AddDouble() function and remove the _PyTime_ADD_SECONDS()
macro. The _PyTime_ADD_SECONDS() macro only supported an integer number of
seconds, whereas _PyTime_AddDouble() has subsecond resolution.
EINTR error and special cases for Windows.
These functions now truncate the length to PY_SSIZE_T_MAX to get a portable
and reliable behaviour. For example, the result of read() is undefined on
Linux if the count is greater than PY_SSIZE_T_MAX.
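A minimal POSIX-only sketch of that truncation (a hypothetical wrapper, not
the actual CPython helper)::

    #include "Python.h"
    #include <unistd.h>

    /* Clamp the requested length so the underlying read() never receives
     * a count larger than PY_SSIZE_T_MAX, for which the result is
     * undefined on Linux. */
    static Py_ssize_t
    read_clamped(int fd, void *buf, size_t count)
    {
        if (count > (size_t)PY_SSIZE_T_MAX) {
            count = (size_t)PY_SSIZE_T_MAX;
        }
        return (Py_ssize_t)read(fd, buf, count);
    }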
* _Py_open() now raises exceptions on error. If open() fails, it raises an
OSError with the filename.
* _Py_open() now releases the GIL while calling open()
* Add _Py_open_noraise() when _Py_open() cannot be used because the GIL is not
held
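A simplified sketch of the pattern (release the GIL around the blocking call,
raise OSError with the filename on failure); the real helpers handle more
cases, such as EINTR::

    #include "Python.h"
    #include <fcntl.h>

    /* Open a file while the GIL is released; on failure, raise OSError
     * carrying errno and the filename. Simplified: no EINTR retry here. */
    static int
    open_releasing_gil(const char *pathname, int flags)
    {
        int fd;

        Py_BEGIN_ALLOW_THREADS
        fd = open(pathname, flags);
        Py_END_ALLOW_THREADS

        if (fd < 0) {
            PyErr_SetFromErrnoWithFilename(PyExc_OSError, pathname);
            return -1;
        }
        return fd;
    }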
Completely disable pyatomic.h for C++, because <stdatomic.h> is not compatible with C++.
<pyatomic.h> is only needed by the optimized PyThreadState_GET() macro in
pystate.h. Instead, declare PyThreadState_GET() as an alias to
PyThreadState_Get(), as done for limited API.
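A sketch of the resulting arrangement in the headers (simplified; the exact
conditions in pystate.h may differ)::

    /* Simplified sketch: when the optimized macro cannot be used (C++ or
     * the limited API), PyThreadState_GET() simply aliases the function. */
    #if defined(__cplusplus) || defined(Py_LIMITED_API)
    #  define PyThreadState_GET() PyThreadState_Get()
    #else
    #  include "pyatomic.h"
       /* ... optimized macro reading the current thread state atomically ... */
    #endif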
Raise a SystemError if a function returns a result and raises an exception at
the same time. The SystemError is chained to the previous exception.
Also refactor PyObject_Call() and PyCFunction_Call() to make them more
readable, and remove some checks which became redundant (duplicate checks).
Change reviewed by Serhiy Storchaka.
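A sketch of the check being described, assuming the private chaining helper
_PyErr_FormatFromCause() is available (its availability and exact name are an
assumption of this example)::

    #include "Python.h"

    /* After calling into a C function: a non-NULL result combined with a
     * pending exception is an internal error. Report it as a SystemError
     * chained to the original exception and drop the bogus result. */
    static PyObject *
    check_result(PyObject *result, const char *where)
    {
        if (result != NULL && PyErr_Occurred()) {
            _PyErr_FormatFromCause(PyExc_SystemError,
                                   "%s returned a result with an exception set",
                                   where);
            Py_CLEAR(result);
        }
        return result;
    }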
Declarations of Windows-specific auxiliary functions need Windows types
from windows.h. Instead of including windows.h in Python.h and making
it available to all Windows users, it is simpler and safer to just move the
declarations to the single file that needs them.
- Move all Py_LIMITED_API exclusions together under one #ifndef
- Group PyAPI_FUNC functions and PyAPI_DATA together.
- Bring related comments together and put them in the appropriate section.
threading.Lock.acquire(), threading.RLock.acquire() and socket operations now
use a monotonic clock, instead of the system clock, when a timeout is used.
Other changes:
* The whole _PyTime API is private (not defined if Py_LIMITED_API is set)
* _PyTime_gettimeofday_info() also returns -1 on error
* Simplify PyTime_gettimeofday(): only use clock_gettime(CLOCK_REALTIME) or
gettimeofday() on UNIX. Don't fall back to ftime() or time() anymore.
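To illustrate why the monotonic clock matters for timeouts, here is a small
POSIX sketch (plain clock_gettime(), not the private _PyTime API): a deadline
computed this way is immune to system clock adjustments::

    #include <stdint.h>
    #include <time.h>

    /* Current monotonic time in nanoseconds (POSIX). */
    static int64_t
    monotonic_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t)ts.tv_sec * 1000000000 + ts.tv_nsec;
    }

    /* Remaining time before the deadline, clamped to zero once it passed;
     * suitable for re-arming a lock or socket timeout after a spurious
     * wakeup. */
    static int64_t
    remaining_ns(int64_t deadline_ns)
    {
        int64_t now = monotonic_ns();
        return (now >= deadline_ns) ? 0 : deadline_ns - now;
    }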
name, and use it in the representation of a generator (``repr(gen)``). The
default name of the generator (the ``__name__`` attribute) is now taken from
the function instead of the code object. Use ``gen.gi_code.co_name`` to get
the name of the code object.
PyObject_Calloc(), _PyObject_GC_Calloc(). bytes(int) and bytearray(int) now
use ``calloc()`` instead of ``malloc()`` for large objects, which is faster
and uses less memory (until the bytearray buffer is filled with data).
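A small sketch of the reasoning: with the calloc-based allocators listed
above, zero-initialized memory can come straight from the OS as lazily-mapped
zero pages, instead of malloc() followed by an explicit memset()::

    #include "Python.h"

    /* Allocate nbytes of zero-initialized storage. For large requests this
     * is typically faster than PyObject_Malloc() + memset(), and physical
     * memory is only consumed once the pages are actually written. */
    static void *
    alloc_zeroed(size_t nbytes)
    {
        return PyObject_Calloc(1, nbytes);
    }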
now register both filenames in the exception on failure.
This required adding new C API functions allowing OSError exceptions
to reference two filenames instead of one.
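A sketch of how such an exception can be raised from C, assuming the
two-filename variant of the errno helpers
(PyErr_SetFromErrnoWithFilenameObjects()) is the API in question::

    #include "Python.h"

    /* Raise OSError from the current errno, attaching both the source and
     * the destination filename (e.g. after a failed rename or copy). */
    static PyObject *
    raise_oserror_two_files(PyObject *src, PyObject *dst)
    {
        return PyErr_SetFromErrnoWithFilenameObjects(PyExc_OSError, src, dst);
    }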
The new syntax is highly human readable while still preventing false
positives. The syntax also extends Python syntax to denote "self" and
positional-only parameters, allowing inspect.Signature objects to be
totally accurate for all supported builtins in Python 3.4.
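As an illustration (the exact markers below are my reading of the Argument
Clinic output format): the docstring starts with the text signature, where
``$self`` denotes the self parameter and ``/`` ends the positional-only
parameters, followed by a ``--`` separator line::

    #include "Python.h"

    /* Hypothetical builtin docstring carrying an annotated text signature. */
    PyDoc_STRVAR(example_doc,
    "example($self, data, /, flags=0)\n"
    "--\n"
    "\n"
    "Hypothetical method: the first line above is parsed by inspect to\n"
    "build an accurate Signature object.");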
- io.TextIOWrapper (and hence the open() builtin) now use the
internal codec marking system added for issue #19619
- also tweaked the C code to only look up the encoding once,
rather than multiple times
- the existing output type checks remain in place to deal with
unmarked third party codecs.
annotate text signatures in docstrings, resulting in fewer false
positives. "self" parameters are also explicitly marked, allowing
inspect.Signature() to authoritatively detect (and skip) said parameters.
Issue #20326: Argument Clinic now generates separate checksums for the
input and output sections of the block, allowing external tools to verify
that the input has not changed (and thus the output is not out-of-date).
PyMethodDescr_Type, _PyMethodWrapper_Type, and PyWrapperDescr_Type)
have been modified to provide introspection information for builtins.
Also: many additional Lib, test suite, and Argument Clinic fixes.
crash when a generator is created in a C thread that is destroyed while the
generator is still used. The issue was that a generator contains a frame, and
the frame kept a reference to the Python state of the destroyed C thread. The
crash occurs when a trace function is set up.
str.encode, bytes.decode and bytearray.decode now use an
internal API to raise LookupError for known non-text encodings,
rather than attempting the encoding or decoding operation and
then raising a TypeError for an unexpected output type.
The latter mechanism remains in place for third party non-text
encodings.
- output type errors now redirect users to the type-neutral
convenience functions in the codecs module
- stateless errors that occur during encoding and decoding
will now be automatically wrapped in exceptions that give
the name of the codec involved
_PyUnicode_CompareWithId() is faster than PyUnicode_CompareWithASCIIString()
when both strings are equal and interned.
Also add a _PyId_builtins identifier for the common string "builtins".
instead of creating temporary Unicode string objects
Also add more identifiers in pythonrun.c to avoid temporary Unicode string
objects for the interactive interpreter.
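A sketch of how such identifiers are used (private API, as described above)::

    #include "Python.h"

    /* Statically allocated identifier: the string is created and interned
     * on first use instead of building a temporary Unicode object each
     * time. */
    _Py_IDENTIFIER(builtins);

    /* Fast comparison: equal interned strings are compared by pointer. */
    static int
    is_builtins(PyObject *name)
    {
        return _PyUnicode_CompareWithId(name, &PyId_builtins) == 0;
    }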
- don't call PyErr_NoMemory when the interpreter is not initialised
- note that it's OK to call _PyMem_RawStrDup here
- don't include this in the limited API
- capitalise "IO"
- be explicit that a non-zero return indicates an error
- include versionadded marker in docs
This new pre-initialization API allows embedding
applications like Blender to force a particular
encoding and error handler for the standard IO streams.
Also refactors Modules/_testembed.c to let us start
testing multiple embedding scenarios.
(Initial patch by Bastien Montagne)
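A minimal embedding sketch, assuming the new call is
Py_SetStandardStreamEncoding() (the UTF-8/surrogateescape choice here is just
an example); it must run before Py_Initialize() and a non-zero return
indicates an error::

    #include "Python.h"

    int
    start_embedded_python(void)
    {
        /* Force the stream encoding and error handler before the
           interpreter is initialized. */
        if (Py_SetStandardStreamEncoding("utf-8", "surrogateescape") != 0) {
            return -1;  /* non-zero return indicates an error */
        }
        Py_Initialize();
        return 0;
    }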
The setobject freelist was consuming memory but not providing much value.
Even when a freelisted setobject was available, most of the setobject
fields still needed to be initialized and the small table still required
a memset(). This meant that the custom freelisting scheme for sets was
providing almost no incremental benefit over the default Python freelist
scheme used by _PyObject_Malloc() in Objects/obmalloc.c.
-I
Run Python in isolated mode. This also implies -E and -s. In isolated mode
sys.path contains neither the script’s directory nor the user’s
site-packages directory. All PYTHON* environment variables are ignored,
too. Further restrictions may be imposed to prevent the user from
injecting malicious code.
PyStructSequence_InitType() except that it has a return value (0 on success,
-1 on error).
* PyStructSequence_InitType2() now raises MemoryError on memory allocation failure
* Also fix some calls to PyDict_SetItemString() to handle errors
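A sketch of the checked initialization (the type, field and module names
below are illustrative)::

    #include "Python.h"

    static PyStructSequence_Field example_fields[] = {
        {"first",  "first field"},
        {"second", "second field"},
        {NULL, NULL}
    };

    static PyStructSequence_Desc example_desc = {
        "example.Result",               /* name */
        "Illustrative struct sequence", /* doc */
        example_fields,                 /* fields */
        2                               /* n_in_sequence */
    };

    static PyTypeObject ExampleResultType;

    static int
    init_example_type(void)
    {
        /* Unlike PyStructSequence_InitType(), the error is reported: a
           MemoryError (or another exception) is set and -1 is returned. */
        if (PyStructSequence_InitType2(&ExampleResultType, &example_desc) < 0) {
            return -1;
        }
        return 0;
    }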
Add new enum:
* PyMemAllocatorDomain
Add new structures:
* PyMemAllocator
* PyObjectArenaAllocator
Add new functions:
* PyMem_RawMalloc(), PyMem_RawRealloc(), PyMem_RawFree()
* PyMem_GetAllocator(), PyMem_SetAllocator()
* PyObject_GetArenaAllocator(), PyObject_SetArenaAllocator()
* PyMem_SetupDebugHooks()
Changes:
* PyMem_Malloc()/PyMem_Realloc() now always call malloc()/realloc(), instead
of calling PyObject_Malloc()/PyObject_Realloc() in debug mode.
* PyObject_Malloc()/PyObject_Realloc() now fall back to
PyMem_Malloc()/PyMem_Realloc() for allocations larger than 512 bytes.
* Redesign debug checks on memory block allocators as hooks, instead of using C
macros
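A sketch of the hook mechanism, using the PyMemAllocator structure and the
get/set functions listed above (the counting hook itself is just an
example)::

    #include "Python.h"

    static PyMemAllocator raw_orig;   /* saved "raw" domain allocator */
    static size_t raw_mallocs = 0;

    static void *
    counting_raw_malloc(void *ctx, size_t size)
    {
        (void)ctx;  /* the saved allocator carries its own ctx */
        raw_mallocs++;
        return raw_orig.malloc(raw_orig.ctx, size);
    }

    static void
    install_counting_hook(void)
    {
        PyMemAllocator hooked;

        PyMem_GetAllocator(PYMEM_DOMAIN_RAW, &raw_orig);
        hooked = raw_orig;             /* keep ctx, realloc and free as-is */
        hooked.malloc = counting_raw_malloc;
        PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &hooked);
    }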
* Add a new PyMemAllocators structure
* New functions:
- PyMem_RawMalloc(), PyMem_RawRealloc(), PyMem_RawFree(): GIL-free memory
allocator functions
- PyMem_GetRawAllocators(), PyMem_SetRawAllocators()
- PyMem_GetAllocators(), PyMem_SetAllocators()
- PyMem_SetupDebugHooks()
- _PyObject_GetArenaAllocators(), _PyObject_SetArenaAllocators()
* Add unit test for PyMem_Malloc(0) and PyObject_Malloc(0)
* Add unit test for new get/set allocators functions
* PyObject_Malloc() now falls back on PyMem_Malloc() instead of malloc() if
size is bigger than SMALL_REQUEST_THRESHOLD, and PyObject_Realloc() falls
back on PyMem_Realloc() instead of realloc()
* PyMem_Malloc() and PyMem_Realloc() now always call malloc() and realloc(),
instead of calling PyObject_Malloc() and PyObject_Realloc() in debug mode
Forgot to raise ModuleNotFoundError when None is found in sys.modules.
This led to introducing the C function PyErr_SetImportErrorSubclass()
to make setting ModuleNotFoundError easier.
Also updated the reference docs to mention ModuleNotFoundError
appropriately. Updated the docs for ModuleNotFoundError to mention the
None in sys.modules case.
Lastly, it was noticed that PyErr_SetImportError() was not setting an
exception when returning None in one case. That issue is now fixed.
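A sketch of raising the new exception from C with the helper mentioned above
(the message text is illustrative)::

    #include "Python.h"

    /* Set ModuleNotFoundError with a formatted message, recording the
     * module name on the exception (path left as NULL here). */
    static PyObject *
    raise_module_not_found(PyObject *name)
    {
        PyObject *msg = PyUnicode_FromFormat("No module named %R", name);
        if (msg == NULL) {
            return NULL;
        }
        PyErr_SetImportErrorSubclass(PyExc_ModuleNotFoundError, msg, name, NULL);
        Py_DECREF(msg);
        return NULL;
    }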
Add a new exception, ModuleNotFoundError, which is a subclass of ImportError.
The exception is raised by import when a module could not be found.
Technically this is defined as no viable loader could be found for the
specified module. This includes ``from ... import`` statements so that
the module usage is consistent for all situations where import
couldn't find what was requested.
This should allow for the common idiom of::
    try:
        import something
    except ImportError:
        pass
to be updated to use ModuleNotFoundError without accidentally masking
ImportError exceptions that should propagate (e.g. issues with a
loader).
This work was driven by the fact that the ``from ... import``
statement needed to be able to tell the difference between an
ImportError that simply couldn't find a module (and thus silence the
exception so that ceval can raise it) and an ImportError that
represented an actual problem.