Discovered using clang's MemorySanitizer while running python3's
test_fstring test test_misformed_unicode_character_name.
An msan build will fail by simply executing: ./python -c 'u"\N"'
This function may access memory which is mapped but is considered
free by the libc allocator. It behaves this way by design, so we need
to suppress the sanitizer reports. GCC doesn't support MSan, so only
TSan is disabled for it.
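A hedged sketch of what such suppression annotations can look like; the macro names here are assumptions, not necessarily the ones actually used:

```c
/* Sketch: per-sanitizer function attributes; names are illustrative. */
#if defined(__has_feature)                  /* Clang */
#  if __has_feature(memory_sanitizer)
#    define _Py_NO_SANITIZE_MEMORY __attribute__((no_sanitize_memory))
#  endif
#  if __has_feature(thread_sanitizer)
#    define _Py_NO_SANITIZE_THREAD __attribute__((no_sanitize_thread))
#  endif
#elif defined(__GNUC__)
   /* GCC has no MemorySanitizer, so only the TSan attribute applies. */
#  define _Py_NO_SANITIZE_THREAD __attribute__((no_sanitize_thread))
#endif

#ifndef _Py_NO_SANITIZE_MEMORY
#  define _Py_NO_SANITIZE_MEMORY
#endif
#ifndef _Py_NO_SANITIZE_THREAD
#  define _Py_NO_SANITIZE_THREAD
#endif
```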
* _PyTuple_ITEMS() gives access to the tuple->ob_item field and casts the
first argument to PyTupleObject* (see the sketch below). This internal
macro is only usable if Py_BUILD_CORE is defined.
* Replace &PyTuple_GET_ITEM(ob, 0) with _PyTuple_ITEMS(ob).
* Replace &PyTuple_GET_ITEM(ob, 1) with &_PyTuple_ITEMS(ob)[1].
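A minimal sketch of the idea, assuming the simplest possible definition of the macro:

```c
/* Plausible (simplified) definition; only usable with Py_BUILD_CORE. */
#define _PyTuple_ITEMS(op) (((PyTupleObject *)(op))->ob_item)

/* The replacements described above then read: */
PyObject **items = _PyTuple_ITEMS(ob);      /* was: &PyTuple_GET_ITEM(ob, 0) */
PyObject **tail  = &_PyTuple_ITEMS(ob)[1];  /* was: &PyTuple_GET_ITEM(ob, 1) */
```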
Gives an approximately 20% speed-up with clang, depending on the number of
elements in the set (the less dense the set, the greater the speed-up).
Uses the same entry++ logic used elsewhere in the setobject.c code.
The accu.h header is no longer part of the Python C API: it has been
moved to the "internal" headers which are restricted to Python
itself.
Replace #include "accu.h" with #include "pycore_accu.h".
If Py_BUILD_CORE is defined, the PyThreadState_GET() macro accesses
_PyRuntime which comes from the internal pycore_state.h header.
Public headers must not require internal headers.
Move PyThreadState_GET() and _PyInterpreterState_GET_UNSAFE() from
Include/pystate.h to Include/internal/pycore_state.h, and rename
PyThreadState_GET() to _PyThreadState_GET() there.
The PyThreadState_GET() macro of pystate.h is now redefined when
pycore_state.h is included, to use the fast _PyThreadState_GET().
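A simplified sketch of this redefinition pattern (the actual field access goes through atomics, which are omitted here):

```c
/* Include/pystate.h (public): */
#define PyThreadState_GET() PyThreadState_Get()

/* Include/internal/pycore_state.h (requires Py_BUILD_CORE): */
static inline PyThreadState *
_PyThreadState_GET(void)
{
    /* Simplified: the real code loads this field atomically. */
    return _PyRuntime.gilstate.tstate_current;
}

/* Redefine the public macro to the fast path for core files: */
#undef PyThreadState_GET
#define PyThreadState_GET() _PyThreadState_GET()
```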
Changes:
* Add _PyThreadState_GET() macro
* Replace "PyThreadState_GET()->interp" with
_PyInterpreterState_GET_UNSAFE()
* Replace PyThreadState_GET() with _PyThreadState_GET() in internal C
files (compiled with Py_BUILD_CORE defined), but keep
PyThreadState_GET() in the public header files.
* _testcapimodule.c: replace PyThreadState_GET() with
PyThreadState_Get(); the module is not compiled with Py_BUILD_CORE
defined.
* pycore_state.h now requires Py_BUILD_CORE to be defined.
* Remove _PyThreadState_Current
* Replace GET_TSTATE() with PyThreadState_GET()
* Replace GET_INTERP_STATE() with _PyInterpreterState_GET_UNSAFE()
* Replace direct access to _PyThreadState_Current with
PyThreadState_GET()
* Replace _PyThreadState_Current with
_PyRuntime.gilstate.tstate_current
* Rename SET_TSTATE() to _PyThreadState_SET(), a name more
consistent with _PyThreadState_GET()
* Update outdated comments
The list() constructor isn't taking full advantage of known input
lengths or length hints. This commit makes the constructor
pre-size and not over-allocate when the input size is known (the
input collection implements __len__). One of the main advantages is
a memory saving of about 12%, the difference between over-allocating
and allocating exactly the input size.
For efficiency purposes, and to avoid a performance regression for small
generators and collections, the size of the input object is calculated using
__len__ and not __length_hint__, as the latter is considerably slower.
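A minimal sketch of the exact pre-sizing, assuming a hypothetical helper on the list implementation:

```c
/* Hypothetical helper: size the item buffer to exactly `size` elements. */
static int
list_preallocate_exact(PyListObject *self, Py_ssize_t size)
{
    assert(self->ob_item == NULL && size > 0);
    PyObject **items = PyMem_New(PyObject *, size);
    if (items == NULL) {
        PyErr_NoMemory();
        return -1;
    }
    self->ob_item = items;
    self->allocated = size;   /* exactly the input size: no over-allocation */
    return 0;
}
```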
Configuring Python with ./configure --with-pydebug CFLAGS="-D COUNT_ALLOCS -O0"
makes "make smelly" fail, as some symbols are exported without the "Py_" or
"_Py" prefixes.
Use _PyObject_ASSERT() in:
* _PyDict_CheckConsistency()
* _PyType_CheckConsistency()
* _PyUnicode_CheckConsistency()
_PyObject_ASSERT() dumps the faulty object if the assertion fails,
to help debugging.
* Convert the PyObject_INIT() and PyObject_INIT_VAR() macros to static
inline functions (see the sketch below).
* Fix usage of these functions: cast the first argument to PyObject* or
PyVarObject*.
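A simplified sketch of what the conversion can look like:

```c
/* Static inline function replacing the old macro body (simplified). */
static inline PyObject *
_PyObject_INIT(PyObject *op, PyTypeObject *typeobj)
{
    assert(op != NULL);
    Py_TYPE(op) = typeobj;   /* Py_TYPE() was still assignable at the time */
    _Py_NewReference(op);
    return op;
}

/* The macro survives only to cast its argument, hence the casts above. */
#define PyObject_INIT(op, typeobj) _PyObject_INIT((PyObject *)(op), (typeobj))
```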
Changes:
* Add _PyObject_AssertFailed() function.
* Add _PyObject_ASSERT() and _PyObject_ASSERT_WITH_MSG() macros.
* gc_decref(): replace assert() with _PyObject_ASSERT_WITH_MSG() to
dump the faulty object if the assertion fails.
_PyObject_AssertFailed() calls:
* _PyMem_DumpTraceback(): try to log the traceback where the object
memory has been allocated if tracemalloc is enabled.
* _PyObject_Dump(): log repr(obj).
* Py_FatalError(): log the current Python traceback.
_PyObject_AssertFailed() uses the _PyObject_IsFreed() heuristic to check
whether the object memory has been freed by a debug hook on Python memory
allocators.
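A hedged sketch of how the assertion macros can delegate to _PyObject_AssertFailed(); the exact parameter list is an assumption:

```c
#define _PyObject_ASSERT_WITH_MSG(obj, expr, msg) \
    ((expr) \
     ? (void)0 \
     : _PyObject_AssertFailed((obj), (msg), #expr, \
                              __FILE__, __LINE__, __func__))

#define _PyObject_ASSERT(obj, expr) \
    _PyObject_ASSERT_WITH_MSG((obj), (expr), NULL)
```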
Initial patch written by David Malcolm.
Co-Authored-By: David Malcolm <dmalcolm@redhat.com>
* Add a Py_STATIC_INLINE() macro to declare a "static inline" function.
If the compiler supports it, ask it to always inline the function, even
when no optimization level is specified (see the sketch after this list).
* Modify pydtrace.h to use Py_STATIC_INLINE() when WITH_DTRACE is
not defined.
* Add a unit test on Py_DECREF() to make sure that
_Py_NegativeRefcount() reports the correct filename.
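A hedged sketch of such a macro, modeled on the existing Py_LOCAL_INLINE(); the exact compiler conditions are assumptions:

```c
#if defined(__GNUC__) || defined(__clang__)
#  define Py_STATIC_INLINE(TYPE) \
       static inline __attribute__((always_inline)) TYPE
#else
#  define Py_STATIC_INLINE(TYPE) static inline TYPE
#endif

/* Usage, e.g. for a no-op stub in pydtrace.h when WITH_DTRACE is not set: */
Py_STATIC_INLINE(void) PyDTrace_LINE(const char *arg0, const char *arg1, int arg2)
{
    (void)arg0; (void)arg1; (void)arg2;   /* nothing to trace */
}
```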
tracemalloc now tries to update the traceback when an object is
reused from a "free list" (optimization for faster object creation,
used by the builtin list type for example).
Changes:
* Add _PyTraceMalloc_NewReference() function which tries to update
the Python traceback of a Python object.
* _Py_NewReference() now calls _PyTraceMalloc_NewReference().
* Add a unit test.
_PyObject_Dump() now uses a heuristic to check whether the object memory
has been freed, and logs "<freed object>" in that case.
The heuristic relies on the debug hooks on Python memory allocators,
which fill the memory with DEADBYTE (0xDB) when it is deallocated. Use
PYTHONMALLOC=debug to always enable these debug hooks.
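A hedged sketch of the heuristic; the helper name is hypothetical:

```c
/* Treat an object whose header is entirely DEADBYTE (0xDB) as freed memory. */
static int
object_looks_freed(PyObject *op)
{
    const unsigned char *bytes = (const unsigned char *)op;
    for (size_t i = 0; i < sizeof(PyObject); i++) {
        if (bytes[i] != 0xDB) {
            return 0;   /* some byte differs: object is probably alive */
        }
    }
    return 1;           /* header is all 0xDB: very likely freed */
}
```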
The assignment of i/2 to nk is redundant because on this code path, nk is
already the size of the dictionary, and i is already twice the size of the
dictionary. I've replaced the store with an assertion that i/2 equals nk.
property_descr_get() uses a "cached" tuple to optimize function
calls. But this tuple can be discovered in debug mode with
sys.getobjects(). Remove the optimization: it's not really worth it,
and it has caused 3 different crashes in recent years.
Microbenchmark:
./python -m perf timeit -v \
-s "from collections import namedtuple; P = namedtuple('P', 'x y'); p = P(1, 2)" \
--duplicate 1024 "p.x"
Result:
Mean +- std dev: [ref] 32.8 ns +- 0.8 ns -> [patch] 40.4 ns +- 1.3 ns: 1.23x slower (+23%)
When a dict subclass overrides the iteration order (`__iter__()`, `keys()`,
and `items()`), `dict(o)` should use it instead of the base dict ordering.
https://bugs.python.org/issue34320
Address a C undefined-behavior signed integer overflow issue in set object
table resizing. Our -fwrapv compiler flag, and the practical unlikelihood of
sets ever getting this large, mean this was probably never an actual problem,
but the code was incorrect and generated code-analysis warnings.
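A hedged sketch of the kind of fix involved: do the doubling in an unsigned type, where wraparound is well defined, rather than in a signed Py_ssize_t (variable names follow setobject.c conventions but are assumptions here):

```c
/* Grow the table size in size_t so the shift cannot hit signed-overflow UB. */
size_t newsize = PySet_MINSIZE;
while (newsize <= (size_t)minused) {
    newsize <<= 1;
}
```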
https://bugs.python.org/issue1621
* Add %T format to PyUnicode_FromFormatV(), and so to
PyUnicode_FromFormat() and PyErr_Format(), to format an object type
name: equivalent to "%s" with Py_TYPE(obj)->tp_name (see the usage
sketch after this list).
* Replace Py_TYPE(obj)->tp_name with %T format in unicodeobject.c.
* Add unit test on %T format.
* Rename unicode_fromformat_write_cstr() to
unicode_fromformat_write_utf8(), to make the intent more explicit.
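A minimal usage sketch of the %T format described above:

```c
/* Both calls produce the same message; %T avoids spelling out tp_name. */
PyErr_Format(PyExc_TypeError, "expected str, got %T", obj);
PyErr_Format(PyExc_TypeError, "expected str, got %s", Py_TYPE(obj)->tp_name);
```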
Add support for the "surrogatepass" error handler in
PyUnicode_DecodeFSDefault() and PyUnicode_EncodeFSDefault()
for the UTF-8 encoding.
Changes:
* _Py_DecodeUTF8Ex() and _Py_EncodeUTF8Ex() now support the
surrogatepass error handler (_Py_ERROR_SURROGATEPASS).
* _Py_DecodeLocaleEx() and _Py_EncodeLocaleEx() now use
the _Py_error_handler enum instead of "int surrogateescape" to pass
the error handler. These functions now return -3 if the error
handler is unknown.
* Add unit tests on _Py_DecodeLocaleEx() and _Py_EncodeLocaleEx()
in test_codecs.
* Rename get_error_handler() to _Py_GetErrorHandler() and expose it
as a private function.
* _freeze_importlib no longer needs the config.filesystem_errors="strict"
workaround.
_PyCoreConfig_Read() is now responsible for choosing the filesystem
encoding and error handler. Using Py_Main(), the encoding is now
chosen even before calling Py_Initialize().
_PyCoreConfig.filesystem_encoding is now the reference, instead of
Py_FileSystemDefaultEncoding, for the Python filesystem encoding.
Changes:
* Add filesystem_encoding and filesystem_errors to _PyCoreConfig
* _PyCoreConfig_Read() now reads the locale encoding for the file
system encoding.
* PyUnicode_EncodeFSDefault() and PyUnicode_DecodeFSDefaultAndSize()
now use the interpreter configuration rather than
Py_FileSystemDefaultEncoding and Py_FileSystemDefaultEncodeErrors
global configuration variables.
* Add _Py_SetFileSystemEncoding() and _Py_ClearFileSystemEncoding()
private functions to only modify Py_FileSystemDefaultEncoding and
Py_FileSystemDefaultEncodeErrors in coreconfig.c.
* _Py_CoerceLegacyLocale() now takes an int rather than
_PyCoreConfig for the warning.
Also, propagate the error from PyNumber_AsSsize_t(): we care about more
than just OverflowError, which is the only error not reported when the
second argument is NULL.
Reported by the Svace static analyzer.
* The hash of BuiltinMethodType instances no longer depends on the hash
of __self__. It now depends on the hash of id(__self__) (see the sketch
after this list).
* The hash and equality of ModuleType and MethodWrapperType instances no
longer depend on the hash and equality of __self__. They now depend on
the hash and equality of id(__self__).
* MethodWrapperType instances no longer support ordering.
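A hedged sketch of identity-based hashing for a builtin method, roughly hash(id(__self__)) combined with the C function pointer:

```c
static Py_hash_t
meth_hash(PyCFunctionObject *m)
{
    Py_hash_t x = _Py_HashPointer(m->m_self);              /* id(__self__) */
    Py_hash_t y = _Py_HashPointer((void *)(m->m_ml->ml_meth));
    Py_hash_t h = x ^ y;
    return (h == -1) ? -2 : h;    /* -1 is reserved to signal errors */
}
```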
Add more fields to _PyCoreConfig:
* _check_hash_pycs_mode
* bytes_warning
* debug
* inspect
* interactive
* legacy_windows_fs_encoding
* legacy_windows_stdio
* optimization_level
* quiet
* unbuffered_stdio
* user_site_directory
* verbose
* write_bytecode
Changes:
* Remove pymain_get_global_config() and pymain_set_global_config()
which became useless. These functions have been replaced by
_PyCoreConfig_GetGlobalConfig() and
_PyCoreConfig_SetGlobalConfig().
* sys.flags.dont_write_bytecode value is now restricted to 1 even if
the -B option is specified multiple times on the command line.
* PyThreadState_Clear() now uses the config from the current
interpreter rather than the global Py_VerboseFlag.
Fix error messages for PySequence_Size(), PySequence_GetItem(),
PySequence_SetItem() and PySequence_DelItem() called with a mapping
and PyMapping_Size() called with a sequence.
`_PyUnicode_TransformDecimalAndSpaceToASCII()` missed the trailing NUL
character. This caused a buffer overflow in
`_Py_string_to_number_with_underscores()`. The bug was introduced in 9b6c60cb.
During development of the limited API support for PySide,
we saw an error in a macro that accessed a type field.
This patch fixes the 7 errors in the Python headers.
Macros whose names are not written in capitals were implemented
as functions.
To make the necessary analysis repeatable, a script was included that
parses all headers and looks for "->tp_" in sections which can
be reached with the limited API active.
The script can easily be run as a test.
Error listing:
../../Include/objimpl.h:243
#define PyObject_IS_GC(o) (PyType_IS_GC(Py_TYPE(o)) && \
(Py_TYPE(o)->tp_is_gc == NULL || Py_TYPE(o)->tp_is_gc(o)))
Action: commented only
../../Include/objimpl.h:362
#define PyType_SUPPORTS_WEAKREFS(t) ((t)->tp_weaklistoffset > 0)
Action: commented only
../../Include/objimpl.h:364
#define PyObject_GET_WEAKREFS_LISTPTR(o) \
((PyObject **) (((char *) (o)) + Py_TYPE(o)->tp_weaklistoffset))
Action: commented only
../../Include/pyerrors.h:143
#define PyExceptionClass_Name(x) \
((char *)(((PyTypeObject*)(x))->tp_name))
Action: implemented function
../../Include/abstract.h:593
#define PyIter_Check(obj) \
((obj)->ob_type->tp_iternext != NULL && \
(obj)->ob_type->tp_iternext != &_PyObject_NextNotImplemented)
Action: implemented function
../../Include/abstract.h:713
#define PyIndex_Check(obj) \
((obj)->ob_type->tp_as_number != NULL && \
(obj)->ob_type->tp_as_number->nb_index != NULL)
Action: implemented function
../../Include/abstract.h:924
#define PySequence_ITEM(o, i)\
( Py_TYPE(o)->tp_as_sequence->sq_item(o, i) )
Action: commented only
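For the "implemented function" cases, the macro body moves into a real exported function; for example, PyIter_Check() can become (simplified):

```c
int
PyIter_Check(PyObject *obj)
{
    return (Py_TYPE(obj)->tp_iternext != NULL &&
            Py_TYPE(obj)->tp_iternext != &_PyObject_NextNotImplemented);
}
```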
METH_NOARGS functions need only a single argument, but they are cast
to a PyCFunction, which takes two arguments. This triggers an
invalid function cast warning in GCC 8 due to the argument mismatch.
Fix this by adding a dummy unused argument.
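A sketch of the resulting idiom (module and function names are illustrative):

```c
/* The dummy second parameter keeps the signature compatible with PyCFunction. */
static PyObject *
mod_ping(PyObject *self, PyObject *Py_UNUSED(ignored))
{
    Py_RETURN_NONE;
}

static PyMethodDef mod_methods[] = {
    {"ping", (PyCFunction)mod_ping, METH_NOARGS, "Return None."},
    {NULL, NULL, 0, NULL}
};
```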