Issue #27128: Add _PyObject_FastCall(), a new calling convention that avoids a
temporary tuple to pass positional parameters in most cases, but creates a
temporary tuple if needed (e.g. for the tp_call slot).
The API is prepared to support keyword parameters, but the full implementation
will come later (_PyFunction_FastCall() doesn't support keyword parameters
yet).
Add also:
* _PyStack_AsTuple() helper function: convert a "stack" of parameters to
a tuple.
* _PyCFunction_FastCall(): fast call implementation for C functions
* _PyFunction_FastCall(): fast call implementation for Python functions
* Optimize tracemalloc_add_trace(): modify hashtable entry data (trace) if the
memory block is already tracked, rather than trying to remove the old trace
and then add a new trace.
* Add _Py_HASHTABLE_ENTRY_WRITE_DATA() macro
Issue #26530:
* Add C functions _PyTraceMalloc_Track() and _PyTraceMalloc_Untrack() to track
memory blocks using the tracemalloc module.
* Add _PyTraceMalloc_GetTraceback() to get the traceback of an object.
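At the Python level, the same information is available through the existing
tracemalloc module; a minimal illustration (assuming tracemalloc is started
before the allocation):

    import tracemalloc

    tracemalloc.start(25)        # keep up to 25 frames per traceback

    data = bytearray(100000)     # this allocation is tracked by tracemalloc
    traceback = tracemalloc.get_object_traceback(data)
    for line in traceback.format():
        print(line)              # shows where 'data' was allocated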
Issue #26567:
* Add a new PyErr_ResourceWarning() function to pass the destroyed object
* Add a source attribute to warnings.WarningMessage
* Add warnings._showwarnmsg(), which uses tracemalloc to get the traceback of
  where the source object was allocated.
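A minimal illustration of the source parameter (a sketch; assumes Python 3.6,
run as a script with tracemalloc enabled before the object is created):

    import tracemalloc, warnings

    tracemalloc.start()
    warnings.simplefilter("always")     # ResourceWarning is ignored by default

    f = open(__file__)
    # Passing the object as 'source' lets the warnings machinery use
    # tracemalloc to display where the object was allocated.
    warnings.warn("unclosed file", ResourceWarning, source=f)
    f.close()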
Issue #26563:
* Add the _PyGILState_GetInterpreterStateUnsafe() function: it returns the
  single PyInterpreterState used by this process's GILState implementation.
* Enhance _Py_DumpTracebackThreads() to retrieve the interpreter state from
  autoInterpreterState as a last resort. The function now accepts NULL for the
  interp and current_tstate parameters.
* test_faulthandler: fix a ResourceWarning when the test is interrupted by
  CTRL+C
Issue #26564:
* Expose _Py_DumpASCII() and _Py_DumpDecimal() in traceback.h
* Change the type of the second _Py_DumpASCII() parameter from int to unsigned
long
* Rewrite _Py_DumpDecimal() and dump_hexadecimal() to write characters
  directly in the expected order, avoiding the need to reverse the string
  (see the sketch after this list)
* dump_hexadecimal() limits width to the size of the buffer
* _Py_DumpASCII() does nothing if the object is not a Unicode string
* dump_frame() writes "???" as the line number if the line number is negative
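The direct-order formatting can be sketched in Python (illustrative only, not
the actual C code; the dump_decimal() helper below is hypothetical):

    def dump_decimal(buf, value):
        # Count the digits first, then write them from the last position
        # backwards, so no separate reversal pass is needed.
        ndigits, v = 1, value
        while v >= 10:
            v //= 10
            ndigits += 1
        pos = ndigits
        while True:
            pos -= 1
            buf[pos] = ord('0') + value % 10
            value //= 10
            if value == 0:
                break
        return ndigits

    buf = bytearray(21)
    n = dump_decimal(buf, 4205)
    print(buf[:n].decode())      # 4205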
Issue #10915, #15751, #26558:
* PyGILState_Check() now returns 1 (success) before the creation of the GIL
  and after the destruction of the GIL. This allows the function to be used
  early in Python initialization and late in Python finalization.
* Add a flag to disable PyGILState_Check(). Disable PyGILState_Check() when
Py_NewInterpreter() is called
* Add assert(PyGILState_Check()) to: _Py_dup(), _Py_fstat(), _Py_read()
and _Py_write()
Issue #26516:
* Add PYTHONMALLOC environment variable to set the Python memory
allocators and/or install debug hooks.
* PyMem_SetupDebugHooks() can now also be used on Python compiled in release
mode.
* The PYTHONMALLOCSTATS environment variable can now also be used on Python
compiled in release mode. It now has no effect if set to an empty string.
* In debug mode, debug hooks are now also installed on Python memory allocators
when Python is configured without pymalloc.
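For example, debug hooks can be installed on top of the default allocators by
setting PYTHONMALLOC in a child interpreter (a sketch; the printed message is
arbitrary):

    import os, subprocess, sys

    # Run a child interpreter with debug hooks installed on top of the
    # default memory allocators.
    env = dict(os.environ, PYTHONMALLOC="debug")
    subprocess.run([sys.executable, "-c", "print('debug hooks active')"],
                   env=env, check=True)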
Issue #26146: Add a new kind of AST node: ast.Constant. It can be used by
external AST optimizers, but the compiler does not emit such nodes directly.
An optimizer can replace the following AST nodes with ast.Constant:
* ast.NameConstant: None, False, True
* ast.Num: int, float, complex
* ast.Str: str
* ast.Bytes: bytes
* ast.Tuple if items are constants too: tuple
* frozenset
Update code to accept ast.Constant instead of ast.Num and/or ast.Str:
* compiler
* docstrings
* ast.literal_eval()
* Tools/parser/unparse.py
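A sketch of what an external optimizer can do (the Folder class is
hypothetical; assumes a Python whose compiler accepts ast.Constant):

    import ast

    class Folder(ast.NodeTransformer):
        # Replace ast.Num nodes with the new ast.Constant node.
        def visit_Num(self, node):
            return ast.copy_location(ast.Constant(value=node.n), node)

    tree = Folder().visit(ast.parse("1 + 2", mode="eval"))
    ast.fix_missing_locations(tree)
    print(eval(compile(tree, "<ast>", "eval")))   # 3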
Issue #26161: Use Py_uintptr_t instead of void* for atomic pointers in
pyatomic.h. Use atomic_uintptr_t when <stdatomic.h> is used.
Using void* causes compilation warnings depending on which implementation of
atomic types is used.
Issue #25843: When compiling code, don't merge constants if they are equal but
have different types. For example, "f1, f2 = lambda: 1, lambda: 1.0" is now
correctly compiled to two different functions: f1() returns 1 (int) and f2()
returns 1.0 (float), even though 1 and 1.0 are equal.
Add a new _PyCode_ConstantKey() private function.
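The effect is visible from Python; the constants are no longer shared between
the two code objects:

    f1, f2 = lambda: 1, lambda: 1.0

    print(f1(), type(f1()))          # 1 <class 'int'>
    print(f2(), type(f2()))          # 1.0 <class 'float'>
    print(f1.__code__.co_consts)     # (1,)
    print(f2.__code__.co_consts)     # (1.0,)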
Issue #25909: Correct the documentation of PyMapping_Items(), PyMapping_Keys()
and PyMapping_Values() in Include/abstract.h and Doc/c-api/mapping.rst.
Patch contributed by Sonali Gupta.
Issue #26107: The format of the co_lnotab attribute of code objects changed to
support negative line number deltas.
Changes:
* assemble_lnotab(): if line number delta is less than -128 or greater than
127, emit multiple (offset_delta, lineno_delta) in co_lnotab
* update functions decoding co_lnotab to use signed 8-bit integers
- dis.findlinestarts()
- PyCode_Addr2Line()
- _PyCode_CheckLineNumber()
- frame_setlineno()
* update lnotab_notes.txt
* increase importlib MAGIC_NUMBER to 3361
* document the change in What's New in Python 3.6
* cleanup also PyCode_Optimize() to use better variable names
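A decoder treating the line delta as a signed byte looks roughly like the
updated dis.findlinestarts() (a sketch, not the exact stdlib code):

    def findlinestarts(code):
        # co_lnotab is a sequence of (offset_delta, lineno_delta) byte pairs;
        # lineno_delta is now a signed 8-bit integer.
        addr, lineno = 0, code.co_firstlineno
        lastlineno = None
        for offset_delta, lineno_delta in zip(code.co_lnotab[0::2],
                                              code.co_lnotab[1::2]):
            if offset_delta:
                if lineno != lastlineno:
                    yield addr, lineno
                    lastlineno = lineno
                addr += offset_delta
            if lineno_delta >= 0x80:   # negative delta stored as unsigned byte
                lineno_delta -= 0x100
            lineno += lineno_delta
        if lineno != lastlineno:
            yield addr, lineno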
Issue #26154: Add a new private _PyThreadState_UncheckedGet() function which
gets the current thread state but doesn't call Py_FatalError() if it is NULL.
Python 3.5.1 removed the _PyThreadState_Current symbol from the Python C API
so that complex and private atomic types are no longer exposed. Atomic types
depend on the compiler, or can even depend on compiler options. The new
function _PyThreadState_UncheckedGet() allows getting the variable's value
without having to care about the exact implementation of atomic types.
Changes:
* Replace direct usage of the _PyThreadState_Current variable with a call to
_PyThreadState_UncheckedGet().
* In pystate.c, replace direct usage of the _PyThreadState_Current variable
with the PyThreadState_GET() macro for readability.
* Document also PyThreadState_Get() in pystate.h
Also document that the separate functions that delete objects are preferred;
using PyObject_SetAttr(), _SetAttrString(), and PySequence_SetItem() to
delete is deprecated.
This changes the main documentation, docstrings, source code comments, and a
couple of error messages in the test suite. In some cases the word was removed
or edited in some other way to fix the grammar.
* Don't overallocate by 400% when recode is needed: only overallocate on demand
using _PyBytesWriter.
* Use _PyLong_DigitValue to convert hexadecimal digit to int
* Create _PyBytes_DecodeEscapeRecode() subfunction
Issue #25401: Optimize bytes.fromhex() and bytearray.fromhex(): they are now
between 2x and 3.5x faster. Changes:
* Use a fast path working on a char* string for ASCII strings
* Use a slow path for non-ASCII strings
* Replace the slow hex_digit_to_int() function with an O(1) lookup in the
  _PyLong_DigitValue precomputed table
* Use _PyBytesWriter API to handle the buffer
* Add unit tests to check the error position in error messages
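Behaviour stays the same; for example:

    print(bytes.fromhex("deadbeef"))       # b'\xde\xad\xbe\xef'
    print(bytearray.fromhex("01 02 0a"))   # bytearray(b'\x01\x02\n')

    try:
        bytes.fromhex("12 3x 56")
    except ValueError as exc:
        print(exc)   # the message reports the position of the invalid character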
Issue #25399: Don't create temporary bytes objects: modify _PyBytes_Format()
so it can work directly on bytearray objects.
* Rename _PyBytes_Format() to _PyBytes_FormatEx(), in case something outside
  CPython uses it
* _PyBytes_FormatEx() now uses (char*, Py_ssize_t) for the input string, so
  bytearray_format() doesn't need to create a temporary input bytes object
* Add a use_bytearray parameter to _PyBytes_FormatEx() which is passed to
  _PyBytesWriter, to create a bytearray buffer instead of a bytes buffer
Most formatting operations are now between 2.5 and 5 times faster.
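These are the kinds of operations affected; for example:

    template = b"%s: read %d bytes (%.1f%%)"
    print(template % (b"payload", 4096, 99.5))
    # b'payload: read 4096 bytes (99.5%)'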
Issue #25274: sys.setrecursionlimit() now raises a RecursionError if the new
recursion limit is too low given the current recursion depth. Also modify the
"low-water mark" formula to make it monotonic. This mark is used to decide
when the overflowed flag of the thread state is reset.
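A minimal illustration (the probe() helper is hypothetical):

    import sys

    def probe(depth):
        if depth:
            return probe(depth - 1)
        try:
            sys.setrecursionlimit(10)   # far below the current recursion depth
        except RecursionError as exc:
            return exc

    print(probe(200))   # ... the limit is too low at the current depth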
Don't require _PyBytesWriter pointer to be a "char *". Same change for
_PyBytesWriter_WriteBytes() parameter.
For example, binascii uses "unsigned char*".
Optimize bytes.__mod__(args) for integer formats: %d (%i, %u), %o, %x and %X.
_PyBytesWriter is now used to format directly the integer into the writer
buffer, instead of using a temporary bytes object.
Formatting is between 30% and 50% faster on a microbenchmark.
Add a new private API to optimize Unicode encoders. It uses a small buffer
allocated on the stack and supports overallocation.
Use _PyBytesWriter API for UCS1 (ASCII and Latin1) and UTF-8 encoders. Enable
overallocation for the UTF-8 encoder with error handlers.
unicode_encode_ucs1(): initialize collend to collstart+1 so the current
character is not checked twice; we already know that it is not ASCII.
Python.h header to fix a compilation error with OpenMP. PyThreadState_GET()
becomes an alias to PyThreadState_Get() to avoid ABI incompatibilities.
It is important that the _PyThreadState_Current variable is always accessed
with the same implementation of pyatomic.h. Use the PyThreadState_Get()
function so extension modules will all reuse the same implementation.
On Windows, the tv_sec field of the timeval structure has the C type long,
whereas it has the C type time_t on all other platforms. A C long has a size
of 32 bits (signed integer: 1 bit for the sign, 31 bits for the value), which
is not enough to store an Epoch timestamp after the year 2038.
Add the _PyTime_AsTimevalTime_t() function, written for datetime.datetime.now():
convert a _PyTime_t timestamp to a (secs, us) tuple where secs has the type
time_t. This makes it possible to support dates after the year 2038 on Windows.
Also enhance _PyTime_AsTimeval_impl() to detect overflow on the number of
seconds when rounding the number of microseconds.
datetime.datetime now rounds microseconds to nearest, with ties going to the
nearest even integer (ROUND_HALF_EVEN), like round(float), instead of rounding
towards -Infinity (ROUND_FLOOR).
pytime API: replace _PyTime_ROUND_HALF_UP with _PyTime_ROUND_HALF_EVEN. Also
fix _PyTime_Divide() for negative numbers.
_PyTime_AsTimeval_impl() now reuses _PyTime_Divide() instead of reimplementing
rounding modes.
with ties going away from zero (ROUND_HALF_UP), as in Python 2 and in Python
older than 3.3, instead of rounding to nearest with ties going to the nearest
even integer (ROUND_HALF_EVEN).
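The difference between the two rounding modes can be illustrated with the
decimal module:

    from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

    for value in ("0.5", "1.5", "2.5"):
        d = Decimal(value)
        print(value,
              d.quantize(Decimal(1), rounding=ROUND_HALF_EVEN),   # 0, 2, 2
              d.quantize(Decimal(1), rounding=ROUND_HALF_UP))     # 1, 2, 3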
The real benefit of the unicode specialized function comes from
bypassing the overhead of PyObject_RichCompareBool() and not
from being in-lined (especially since there was almost no shared
data between the caller and callee). Also, the in-lining was
having a negative effect on code generation for the callee.
Summary of changes:
1. Coroutines now have a distinct type at the C level, separate from
generators: PyCoro_Type, with a new typedef PyCoroObject.
PyCoroObject shares the initial segment of struct layout with
PyGenObject, making it possible to reuse the existing generator
machinery. The new type is exposed as 'types.CoroutineType'.
As a consequence of having a new type, the CO_GENERATOR flag is
no longer applied to coroutines.
2. Having a separate type for coroutines made it possible to add
an __await__ method to the type. Although it is not used by the
interpreter (see details on that below), it makes coroutines
naturally (without using __instancecheck__) conform to
collections.abc.Coroutine and collections.abc.Awaitable ABCs.
[The __instancecheck__ is still used for generator-based
coroutines, as we don't want to add __await__ for generators.]
3. Add a new opcode: GET_YIELD_FROM_ITER. The opcode is needed to
allow passing native coroutines to the YIELD_FROM opcode.
Before this change, the 'yield from o' expression was compiled to:
(o)
GET_ITER
LOAD_CONST
YIELD_FROM
Now, we use GET_YIELD_FROM_ITER instead of GET_ITER.
The reason for adding a new opcode is that GET_ITER is used
in some contexts (such as 'for .. in' loops) where passing
a coroutine object is invalid.
4. Add two new introspection functions to the inspect module:
getcoroutinestate(c) and getcoroutinelocals(c) (see the sketch after
this list).
5. inspect.iscoroutine(o) is updated to test if 'o' is a native
coroutine object. Before this commit it used abc.Coroutine,
and it was requested to update inspect.isgenerator(o) to use
abc.Generator; it was decided, however, that inspect functions
should really be tailored for checking for native types.
6. The sys.set_coroutine_wrapper(w) API is updated to work only with
native coroutines. Since the types.coroutine decorator now supports
any kind of callable, it would be confusing if the wrapper worked for
only some types of coroutines.
7. The exception logic in the generators C implementation was updated
to raise clearer messages for coroutines:
Before: TypeError("generator raised StopIteration")
After: TypeError("coroutine raised StopIteration")
Known limitations of the current implementation:
- documentation changes are incomplete
- there's a reference leak I haven't tracked down yet
The leak is most visible by running:
./python -m test -R3:3 test_importlib
However, you can also see it by running:
./python -X showrefcount
Importing the array or _testmultiphase modules, and
then deleting them from both sys.modules and the local
namespace shows significant increases in the total
number of active references each cycle. By contrast,
with _testcapi (which continues to use single-phase
initialisation) the global refcounts stabilise after
a couple of cycles.
Replace the PyList_GET_ITEM and PyList_SET_ITEM macros with normal array
accesses. Replace the unpredictable siftup branch with arithmetic.
Replace the rc == -1 tests with rc < 0. Gives nicer-looking assembly
with both Clang and GCC 4.9. Also gives a small performance boost for both.
Also add a new _PyTime_AsMicroseconds() function.
threading.TIMEOUT_MAX is now smaller: only 292 years instead of 292,271 years
on a 64-bit system, for example. Sorry, your threads will hang a *little
bit* shorter. Call me if you want to ensure that your locks wait longer, I can
share some tricks with you.
* _PyTime_AsTimeval() now ensures that tv_usec is always positive
* _PyTime_AsTimespec() now ensures that tv_nsec is always positive
* _PyTime_AsTimeval() now returns an integer on overflow instead of raising an
exception
* Rename _PyTime_FromObject() to _PyTime_FromSecondsObject()
* Add _PyTime_AsNanosecondsObject() and _testcapi.pytime_fromsecondsobject()
* Add unit tests
In practice, _PyTime_t is a number of nanoseconds. Its C type is a 64-bit
signed number. Its integer value is in the range [-2^63; 2^63-1]. In seconds,
the range is around [-292 years; +292 years]. In terms of Epoch timestamps
(1970-01-01), it can store a date between 1677-09-21 and 2262-04-11.
The API has a resolution of 1 nanosecond and uses integer numbers. With a
resolution of 1 nanosecond, 64-bit IEEE 754 floating point numbers lose
precision after 194 days. That is not the case with this API. The drawback is
overflow for values outside [-2^63; 2^63-1], but such values are unlikely for
most Python modules, except for the datetime module.
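The quoted ranges follow from simple arithmetic:

    from datetime import datetime, timedelta

    ns_per_year = 365.25 * 24 * 3600 * 10**9
    print((2**63 - 1) / ns_per_year)         # ~292 years on each side of 0

    epoch = datetime(1970, 1, 1)
    print(epoch - timedelta(microseconds=2**63 // 1000))        # 1677-09-21 ...
    print(epoch + timedelta(microseconds=(2**63 - 1) // 1000))  # 2262-04-11 ...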
New functions:
- _PyTime_GetMonotonicClock()
- _PyTime_FromObject()
- _PyTime_AsMilliseconds()
- _PyTime_AsTimeval()
This change uses these new functions in time.sleep() to avoid rounding issues.
The new API will be extended step by step, and the old API will be removed step
by step. Currently, some code is duplicated just to be able to move
incrementally, instead of pushing a large change at once.
I expected more users of _Py_wstat(), but in practice it's only used by
Modules/getpath.c. Move the function because it's not needed on Windows:
Windows uses PC/getpathp.c, which uses the Win32 API
(e.g. GetFileAttributesW()), not the POSIX API.
which returned an invalid result (result+error or no result without error) in
the exception message.
Also add a unit test to check that the exception contains the name of the
function.
Special case: the final _PyEval_EvalFrameEx() check doesn't mention the
function since it didn't execute a single function but a whole frame.
interrupted by a signal
Add a new _PyTime_AddDouble() function and remove the _PyTime_ADD_SECONDS()
macro. The _PyTime_ADD_SECONDS() macro only supported an integer number of
seconds, whereas _PyTime_AddDouble() has subsecond resolution.
EINTR error and special cases for Windows.
These functions now truncate the length to PY_SSIZE_T_MAX to have portable and
reliable behaviour. For example, the result of read() is undefined if the
count is greater than PY_SSIZE_T_MAX on Linux.
* _Py_open() now raises exceptions on error. If open() fails, it raises an
OSError with the filename.
* _Py_open() now releases the GIL while calling open()
* Add _Py_open_noraise() when _Py_open() cannot be used because the GIL is not
held
Completely disable pyatomic.h for C++, because <stdatomic.h> is not compatible
with C++.
<pyatomic.h> is only needed by the optimized PyThreadState_GET() macro in
pystate.h. Instead, declare PyThreadState_GET() as an alias to
PyThreadState_Get(), as done for the limited API.
raise a SystemError if a function returns a result and raises an exception.
The SystemError is chained to the previous exception.
Also refactor PyObject_Call() and PyCFunction_Call() to make them more readable.
Remove some checks which became useless (duplicate checks).
Change reviewed by Serhiy Storchaka.
Declarations of Windows-specific auxiliary functions need Windows types
from windows.h. Instead of including windows.h in Python.h and making
it available to all Windows users, it is simpler and safer to just move the
declarations to the single file that needs them.
- Move all Py_LIMITED_API exclusions together under one #ifndef
- Group PyAPI_FUNC functions and PyAPI_DATA together.
- Bring related comments together and put them in the appropriate section.
threading.Lock.acquire(), threading.RLock.acquire() and socket operations now
use a monotonic clock, instead of the system clock, when a timeout is used.
Other changes:
* The whole _PyTime API is private (not defined if Py_LIMITED_API is set)
* _PyTime_gettimeofday_info() also returns -1 on error
* Simplify PyTime_gettimeofday(): only use clock_gettime(CLOCK_REALTIME) or
  gettimeofday() on UNIX. Don't fall back to ftime() or time() anymore.
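For example, a timed acquire is now measured against the monotonic clock
(illustrative only):

    import threading, time

    lock = threading.Lock()
    lock.acquire()                         # hold the lock so the next acquire times out

    start = time.monotonic()
    acquired = lock.acquire(timeout=0.25)  # timeout measured with a monotonic clock
    print(acquired, round(time.monotonic() - start, 2))   # False 0.25 (approximately)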
name, and use it in the representation of a generator (``repr(gen)``). The
default name of the generator (``__name__`` attribute) is now taken from the
function instead of the code object. Use ``gen.gi_code.co_name`` to get the
name of the code object.
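The difference can be observed from Python (a sketch):

    def gen_func():
        yield

    gen_func.__name__ = "renamed"
    g = gen_func()
    print(g.__name__)           # 'renamed', taken from the function
    print(g.gi_code.co_name)    # 'gen_func', the name of the code object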
PyObject_Calloc(), _PyObject_GC_Calloc(). bytes(int) and bytearray(int) now
use ``calloc()`` instead of ``malloc()`` for large objects, which is faster
and uses less memory (until the bytearray buffer is filled with data).
now register both filenames in the exception on failure.
This required adding new C API functions allowing OSError exceptions
to reference two filenames instead of one.
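For example (the file names are hypothetical):

    import os

    try:
        os.rename("does-not-exist.txt", "new-name.txt")
    except OSError as exc:
        print(exc.filename)    # 'does-not-exist.txt'
        print(exc.filename2)   # 'new-name.txt'
        print(exc)             # the message mentions both file names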
The new syntax is highly human readable while still preventing false
positives. The syntax also extends Python syntax to denote "self" and
positional-only parameters, allowing inspect.Signature objects to be
totally accurate for all supported builtins in Python 3.4.
- io.TextIOWrapper (and hence the open() builtin) now uses the
  internal codec marking system added for issue #19619
- also tweaked the C code to only look up the encoding once,
rather than multiple times
- the existing output type checks remain in place to deal with
unmarked third party codecs.
annotate text signatures in docstrings, resulting in fewer false
positives. "self" parameters are also explicitly marked, allowing
inspect.Signature() to authoritatively detect (and skip) said parameters.
Issue #20326: Argument Clinic now generates separate checksums for the
input and output sections of the block, allowing external tools to verify
that the input has not changed (and thus the output is not out-of-date).
PyMethodDescr_Type, _PyMethodWrapper_Type, and PyWrapperDescr_Type)
have been modified to provide introspection information for builtins.
Also: many additional Lib, test suite, and Argument Clinic fixes.
crash when a generator is created in a C thread that is destroyed while the
generator is still used. The issue was that a generator contains a frame, and
the frame kept a reference to the Python state of the destroyed C thread. The
crash occurs when a trace function is set up.
str.encode, bytes.decode and bytearray.decode now use an
internal API to throw LookupError for known non-text encodings,
rather than attempting the encoding or decoding operation and
then throwing a TypeError for an unexpected output type.
The latter mechanism remains in place for third party non-text
encodings.
- output type errors now redirect users to the type-neutral
convenience functions in the codecs module
- stateless errors that occur during encoding and decoding
will now be automatically wrapped in exceptions that give
the name of the codec involved
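For example:

    import codecs

    try:
        "hello".encode("rot_13")          # known non-text encoding
    except LookupError as exc:
        print(exc)   # points the user at codecs.encode()/codecs.decode()

    print(codecs.encode("hello", "rot_13"))   # 'uryyb'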
_PyUnicode_CompareWithId() is faster than PyUnicode_CompareWithASCIIString()
when both strings are equal and interned.
Also add a _PyId_builtins identifier for the common string "builtins".
instead of creating temporary Unicode string objects
Also add more identifiers in pythonrun.c to avoid temporary Unicode string
objects for the interactive interpreter.
- don't call PyErr_NoMemory when the interpreter is not initialised
- note that it's OK to call _PyMem_RawStrDup here
- don't include this in the limited API
- capitalise "IO"
- be explicit that a non-zero return indicates an error
- include versionadded marker in docs
This new pre-initialization API allows embedding
applications like Blender to force a particular
encoding and error handler for the standard IO streams.
Also refactors Modules/_testembed.c to let us start
testing multiple embedding scenarios.
(Initial patch by Bastien Montagne)