When Python is embedded in other applications, it is not easy to determine which version of Python is being used. This change exposes the Python version as part of the API data. Tools like Austin (https://github.com/P403n1x87/austin) can benefit from this data when targeting applications like uWSGI, as the Python version can then be inferred systematically by looking at the exported symbols rather than relying on unreliable pattern matching or other hacks (like remote code execution, etc.).
Automerge-Triggered-By: GH:pablogsal
Add PyThreadState_EnterTracing() and PyThreadState_LeaveTracing()
functions to the limited C API to suspend and resume tracing and
profiling.
Add a unit test for the PyThreadState C API to _testcapi.
Also add the internal _PyThreadState_DisableTracing() and
_PyThreadState_ResetTracing() functions.
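A minimal sketch (hypothetical helper, assuming Python 3.11 or newer) of how a profiler could use the new functions to keep its own callback out of the trace:

#include <Python.h>

static void
call_without_tracing(PyObject *callable)
{
    PyThreadState *tstate = PyThreadState_Get();

    /* Suspend tracing and profiling so the callback itself is not traced. */
    PyThreadState_EnterTracing(tstate);
    PyObject *res = PyObject_CallNoArgs(callable);
    Py_XDECREF(res);
    /* Resume tracing and profiling. */
    PyThreadState_LeaveTracing(tstate);
}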
Rename Include/namespaceobject.h to
Include/internal/pycore_namespace.h.
The _testmultiphase extension is now built with the
Py_BUILD_CORE_MODULE macro defined to access _PyNamespace_Type.
object.c: remove unused "pycore_context.h" include.
* Move _PyObject_CallNoArgs() to pycore_call.h (internal C API).
* _ssl, _sqlite and _testcapi extensions now call the public
PyObject_CallNoArgs() function, rather than _PyObject_CallNoArgs().
* _lsprof extension is now built with Py_BUILD_CORE_MODULE macro
defined to get access to internal _PyObject_CallNoArgs().
Fix typo in the private _PyObject_CallNoArg() function name: rename
it to _PyObject_CallNoArgs() to be consistent with the public
function PyObject_CallNoArgs().
Add the _PyTime_AsTimespec_clamp() function: similar to
_PyTime_AsTimespec(), but it clamps to the _PyTime_t min/max and does
not raise an exception.
PyThread_acquire_lock_timed() now uses _PyTime_AsTimespec_clamp() to
remove the Py_UNREACHABLE() code path.
* Add _PyTime_AsTime_t() function.
* Add PY_TIME_T_MIN and PY_TIME_T_MAX constants.
* Replace _PyTime_AsTimeval_noraise() with _PyTime_AsTimeval_clamp().
* Add pytime_divide_round_up() function.
* Fix integer overflow in pytime_divide().
* Add pytime_divmod() function.
Refactor pytime.c:
* Add pytime_from_nanoseconds() and pytime_as_nanoseconds(),
and use these functions explicitly
* Add two empty lines between functions
* PEP 7: add braces { ... }
* C99: declare variables where they are set
* Rename private functions to lowercase
* Rename error_time_t_overflow() to pytime_time_t_overflow()
* Rename win_perf_counter_frequency() to py_win_perf_counter_frequency()
* py_get_monotonic_clock(): add an assertion to detect overflow when
the unsigned uint64_t value of mach_absolute_time() is cast to
_PyTime_t (signed int64_t).
_testcapi: use _PyTime_FromNanoseconds().
Convert the Py_TYPE() and Py_SIZE() macros to static inline
functions. The Py_SET_TYPE() and Py_SET_SIZE() functions must now be
used to set an object type and size.
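A minimal sketch (hypothetical helper; Py_SET_TYPE() and Py_SET_SIZE() exist since Python 3.9) of the migration expected from extension code:

#include <Python.h>

static void
fixup_object(PyVarObject *op, PyTypeObject *type, Py_ssize_t size)
{
    /* No longer valid: Py_TYPE() and Py_SIZE() are not l-values anymore. */
    /* Py_TYPE(op) = type; */
    /* Py_SIZE(op) = size; */

    /* Use the setter functions instead. */
    Py_SET_TYPE(op, type);
    Py_SET_SIZE(op, size);
}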
Remove useless calls to PyThread_exit_thread() in two unit tests of
_testcapi and _testembed modules.
Co-authored-by: Alexey Izbyshev <izbyshev@ispras.ru>
Convert the Py_TYPE() and Py_SIZE() macros to static inline
functions. The Py_SET_TYPE() and Py_SET_SIZE() functions must now be
used to set an object type and size.
* _testcapi.heapgctype: implement a traverse function since the type
is defined with Py_TPFLAGS_HAVE_GC (see the sketch after this list).
* _decimal: PyDecSignalDictMixin_Type is no longer defined with
Py_TPFLAGS_HAVE_GC since it has no traverse function.
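For context, a minimal sketch of the kind of traverse function a type defined with Py_TPFLAGS_HAVE_GC needs (hypothetical object layout, not the actual _testcapi code):

#include <Python.h>

typedef struct {
    PyObject_HEAD
    PyObject *attr;  /* a reference the collector must be able to see */
} HeapGCObject;

static int
heapgc_traverse(PyObject *self, visitproc visit, void *arg)
{
    /* Report every PyObject* owned by the instance to the collector. */
    Py_VISIT(((HeapGCObject *)self)->attr);
    /* Heap types targeting Python 3.9+ should also visit their type. */
    Py_VISIT(Py_TYPE(self));
    return 0;
}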
Add new C-API functions to control the state of the garbage collector:
PyGC_Enable(), PyGC_Disable(), PyGC_IsEnabled(),
corresponding to the functions in the gc module.
Co-authored-by: Pablo Galindo <Pablogsal@gmail.com>
Co-authored-by: Victor Stinner <vstinner@python.org>
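A minimal sketch (hypothetical helper, Python 3.10 or newer) of the new GC control functions, temporarily disabling the collector around a critical section:

#include <Python.h>

static void
with_gc_disabled(void (*work)(void))
{
    int was_enabled = PyGC_IsEnabled();  /* 1 if the collector is enabled */
    if (was_enabled) {
        PyGC_Disable();                  /* returns the previous state */
    }
    work();
    if (was_enabled) {
        PyGC_Enable();
    }
}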
Add the Py_Is(x, y) function to test if the 'x' object is the 'y'
object, the same as "x is y" in Python. Also add the Py_IsNone(),
Py_IsTrue(), Py_IsFalse() functions to test if an object is,
respectively, the None singleton, the True singleton or the False
singleton.
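A minimal sketch (hypothetical helper, Python 3.10 or newer) of the new identity helpers:

#include <Python.h>

static int
is_sentinel(PyObject *value, PyObject *sentinel)
{
    /* Same as "value is sentinel" in Python. */
    if (Py_Is(value, sentinel)) {
        return 1;
    }
    /* Same as "value is None", without comparing against Py_None by hand. */
    return Py_IsNone(value);
}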
Stefan Krah requested the reversal of these (unreleased) changes. Quoting him:
> The capsule API does not meet my testing standards, since I've focused
> on the upstream mpdecimal in the last couple of months.
> Additionally, I'd like to refine the API, perhaps together with the
> Arrow community.
Automerge-Triggered-By: GH:pitrou
* bpo-42979: Enhance abstract.c assertions checking slot result
Add the _Py_CheckSlotResult() function, which fails with a fatal error
if a slot function succeeded with an exception set or failed with no
exception set. The error message includes the slot name, the type name
and the current exception (if an exception is set).
No longer use deprecated aliases to functions:
* Replace PyObject_MALLOC() with PyObject_Malloc()
* Replace PyObject_REALLOC() with PyObject_Realloc()
* Replace PyObject_FREE() with PyObject_Free()
* Replace PyObject_Del() with PyObject_Free()
* Replace PyObject_DEL() with PyObject_Free()
No longer use deprecated aliases to functions:
* Replace PyMem_MALLOC() with PyMem_Malloc()
* Replace PyMem_REALLOC() with PyMem_Realloc()
* Replace PyMem_FREE() with PyMem_Free()
* Replace PyMem_Del() with PyMem_Free()
* Replace PyMem_DEL() with PyMem_Free()
Also modify the PyMem_DEL() macro to use PyMem_Free() directly.
This change partially reverts
commit ad3252bad9
and commit fe2978b3b9.
Many third party C extension modules rely on the ability to use
Py_TYPE() to set an object type ("Py_TYPE(obj) = type;") or Py_SIZE()
to set an object size ("Py_SIZE(obj) = size;").
The PY_SSIZE_T_CLEAN macro must now be defined to use the
PyArg_ParseTuple() and Py_BuildValue() "#" formats: "es#", "et#",
"s#", "u#", "y#", "z#", "U#" and "Z#". See PEP 353.
Update _testcapi.test_buildvalue_issue38913().
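A minimal sketch (hypothetical module function) of the new requirement: PY_SSIZE_T_CLEAN must be defined before including Python.h so that "#" formats use Py_ssize_t rather than int:

#define PY_SSIZE_T_CLEAN
#include <Python.h>

static PyObject *
parse_bytes(PyObject *self, PyObject *args)
{
    const char *data;
    Py_ssize_t size;  /* must be Py_ssize_t, not int */

    if (!PyArg_ParseTuple(args, "y#", &data, &size)) {
        return NULL;
    }
    return PyLong_FromSsize_t(size);
}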
Move the PyGC_Head structure and the following private macros to the
internal C API:
* _PyGCHead_FINALIZED()
* _PyGCHead_NEXT()
* _PyGCHead_PREV()
* _PyGCHead_SET_FINALIZED()
* _PyGCHead_SET_NEXT()
* _PyGCHead_SET_PREV()
* _PyGC_FINALIZED()
* _PyGC_PREV_MASK
* _PyGC_PREV_MASK_COLLECTING
* _PyGC_PREV_MASK_FINALIZED
* _PyGC_PREV_SHIFT
* _PyGC_SET_FINALIZED()
* _PyObject_GC_IS_TRACKED()
* _PyObject_GC_MAY_BE_TRACKED()
* _Py_AS_GC(o)
Keep the private _PyGC_FINALIZED() macro in the public C API for
backward compatibility with Python 3.8: make it an alias to the new
PyObject_GC_IsFinalized() function.
Move the SIZEOF_PYGC_HEAD constant from _testcapi module to
_testinternalcapi module.
Add the PyObject_GC_IsTracked() and PyObject_GC_IsFinalized() functions to the public API to allow querying whether Python objects are currently being tracked or have already been finalized by the garbage collector, respectively.
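A minimal sketch (hypothetical debug helper, Python 3.9 or newer) of these queries:

#include <stdio.h>
#include <Python.h>

static void
dump_gc_state(PyObject *obj)
{
    /* 1 if the object is a container currently tracked by the cyclic GC. */
    printf("tracked:   %d\n", PyObject_GC_IsTracked(obj));
    /* 1 if the garbage collector has already finalized the object. */
    printf("finalized: %d\n", PyObject_GC_IsFinalized(obj));
}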
The bulk of this patch was generated automatically with:
for name in \
    PyObject_Vectorcall \
    Py_TPFLAGS_HAVE_VECTORCALL \
    PyObject_VectorcallMethod \
    PyVectorcall_Function \
    PyObject_CallOneArg \
    PyObject_CallMethodNoArgs \
    PyObject_CallMethodOneArg \
;
do
    echo $name
    git grep -lwz _$name | xargs -0 sed -i "s/\b_$name\b/$name/g"
done
old=_PyObject_FastCallDict
new=PyObject_VectorcallDict
git grep -lwz $old | xargs -0 sed -i "s/\b$old\b/$new/g"
and then cleaned up:
- Revert changes in docs & news
- Revert changes to backcompat defines in headers
- Nudge misaligned comments
* Remove the _Py_INC_REFTOTAL and _Py_DEC_REFTOTAL macros: modify
_Py_RefTotal directly.
* _Py_ForgetReference() is no longer defined if the Py_TRACE_REFS
macro is not defined.
* Remove the _Py_NewReference() implementation from object.c:
unify the two implementations into the object.h inline function.
* Fix Py_TRACE_REFS build: _Py_INC_TPALLOCS() macro has been removed.
bpo-36389, bpo-38376: The _PyObject_CheckConsistency() function is
now also available in release mode. For example, it can be used to
debug a crash in the visit_decref() function of the GC.
Modify the following functions to also work in release mode:
* _PyDict_CheckConsistency()
* _PyObject_CheckConsistency()
* _PyType_CheckConsistency()
* _PyUnicode_CheckConsistency()
Other changes:
* _PyMem_IsPtrFreed(ptr) now also returns 1 if ptr is NULL
(equal to 0).
* _PyBytesWriter_CheckConsistency() now returns 1 and is only used
with assert().
* Reorder _PyObject_Dump() to write safe fields first, and only
attempt to render repr() at the end.
The instance destructor for a type is responsible for preparing
an instance for deallocation by decrementing the reference counts
of its referents.
If an instance belongs to a heap type, the type object of the instance
has its reference count decremented, while for static types, which
are permanently allocated, the type object is unaffected by the
instance destructor.
Previously, the default instance destructor searched the class
hierarchy for an inherited instance destructor and, if present,
would invoke it.
Then, if the instance type is a heap type, it would decrement the
reference count of that heap type. However, this could result in the
premature destruction of a type because the inherited instance
destructor should have already decremented the reference count
of the type object.
This change avoids the premature destruction of the type object
by suppressing the decrement of its reference count when an
inherited, non-default instance destructor has been invoked.
Finally, an assertion on the Py_SIZE of a type was deleted. Heap
types have a nonzero size, so the assertion was incorrect.
https://github.com/python/cpython/pull/15323
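A minimal sketch (hypothetical heap type) of the responsibility described above: the instance destructor of a heap type drops the reference the instance holds on its type, while a static type is left alone:

#include <Python.h>

static void
mytype_dealloc(PyObject *self)
{
    PyTypeObject *tp = Py_TYPE(self);

    /* ... clear and DECREF the instance's own referents here ... */

    tp->tp_free(self);

    /* Heap types are reference counted; static types are not. */
    if (tp->tp_flags & Py_TPFLAGS_HEAPTYPE) {
        Py_DECREF(tp);
    }
}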
Add functions with various calling conventions to `_testcapi`, expose them as module-level functions, bound methods, class methods, and static methods, and test calling them and introspecting them through GDB.
https://bugs.python.org/issue37499
Co-authored-by: Jeroen Demeyer <J.Demeyer@UGent.be>
Automerge-Triggered-By: @pganssle
There are plenty of legitimate scripts in the tree that begin with a
`#!`, but also a few that seem to be marked executable by mistake.
Found them with this command -- it gets executable files known to Git,
filters to the ones that don't start with a `#!`, and then unmarks
them as executable:
$ git ls-files --stage \
    | perl -lane 'print $F[3] if (!/^100644/)' \
    | while read f; do
        head -c2 "$f" | grep -qxF '#!' \
          || chmod a-x "$f"; \
      done
Looking at the list by hand confirms that we didn't sweep up any
files that should have the executable bit after all. In particular
* The `.psd` files are images from Photoshop.
* The `.bat` files sure look like things that can be run.
But we have lots of other `.bat` files, and they don't have
this bit set, so it must not be needed for them.
Automerge-Triggered-By: @benjaminp
Replace two Python function calls with a single one to ensure that no
memory allocation is done between the creation of the invalid object
and the call to _PyObject_IsFreed().
When a heap subclass inherits from a vectorcall class that sets
`.tp_call=PyVectorcall_Call` (as recommended in PEP 590), the subclass does
not inherit `_Py_TPFLAGS_HAVE_VECTORCALL`, and thus `PyVectorcall_Call` does
not work for it.
This attempts to solve the issue by:
* always inheriting `tp_vectorcall_offset` unless `tp_call` is overridden
in the subclass
* inheriting _Py_TPFLAGS_HAVE_VECTORCALL for static types, unless `tp_call`
is overridden
* making `PyVectorcall_Call` ignore `_Py_TPFLAGS_HAVE_VECTORCALL`
This means it'll be ever more important to only call `PyVectorcall_Call`
on classes that support vectorcall. In `PyVectorcall_Call`'s intended role
as `tp_call` filler, that's not a problem.
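For context, a minimal sketch (hypothetical static type; `_Py_TPFLAGS_HAVE_VECTORCALL` is the Python 3.8-era spelling of the flag) of the PEP 590 pattern this change is about, with `tp_call` filled by `PyVectorcall_Call` and the per-instance vectorcall slot found through `tp_vectorcall_offset`:

#include <stddef.h>  /* offsetof */
#include <Python.h>

typedef struct {
    PyObject_HEAD
    vectorcallfunc vectorcall;  /* found via tp_vectorcall_offset */
} MyCallableObject;

static PyObject *
mycallable_vectorcall(PyObject *self, PyObject *const *args,
                      size_t nargsf, PyObject *kwnames)
{
    /* Trivial body: ignore the arguments and return None. */
    Py_RETURN_NONE;
}

static PyObject *
mycallable_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
    MyCallableObject *self = (MyCallableObject *)type->tp_alloc(type, 0);
    if (self != NULL) {
        self->vectorcall = mycallable_vectorcall;  /* wire up the fast path */
    }
    return (PyObject *)self;
}

static PyTypeObject MyCallable_Type = {
    PyVarObject_HEAD_INIT(NULL, 0)
    .tp_name = "example.MyCallable",
    .tp_basicsize = sizeof(MyCallableObject),
    .tp_vectorcall_offset = offsetof(MyCallableObject, vectorcall),
    /* As recommended in PEP 590: delegate tp_call to the vectorcall slot. */
    .tp_call = PyVectorcall_Call,
    .tp_new = mycallable_new,
    .tp_flags = Py_TPFLAGS_DEFAULT | _Py_TPFLAGS_HAVE_VECTORCALL,
};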