coro->cr_origin wasn't initialized if compute_cr_origin() failed in
PyCoro_New(), which would cause a crash during the coroutine's
deallocation.
https://bugs.python.org/issue35269
PyTuple_Pack() can fail and return NULL. If this happens, PyType_FromSpecWithBases() incorrectly creates a new type without bases and then crashes on the Py_DECREF that follows. Also free members and type in the error paths.
Discovered using clang's MemorySanitizer when it ran python3's
test_fstring test_misformed_unicode_character_name.
An MSan build will fail simply by executing: ./python -c 'u"\N"'
This function may access memory which is mapped but is considered
free by the libc allocator. It behaves this way by design, so we
need to suppress the sanitizer reports.
GCC doesn't support MSan, so only disable TSan for it.
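A minimal sketch of how such a per-function suppression could look, assuming clang-style sanitizer attributes; the macro names and the probe function are illustrative, not CPython's:

    #if defined(__has_feature)
    #  if __has_feature(memory_sanitizer)
    #    define SUPPRESS_MSAN __attribute__((no_sanitize_memory))
    #  endif
    #  if __has_feature(thread_sanitizer)
    #    define SUPPRESS_TSAN __attribute__((no_sanitize_thread))
    #  endif
    #elif defined(__GNUC__) && defined(__SANITIZE_THREAD__)
    /* GCC has no MSan; only TSan can be (and needs to be) disabled. */
    #  define SUPPRESS_TSAN __attribute__((no_sanitize_thread))
    #endif
    #ifndef SUPPRESS_MSAN
    #  define SUPPRESS_MSAN
    #endif
    #ifndef SUPPRESS_TSAN
    #  define SUPPRESS_TSAN
    #endif

    /* Reads memory that libc may already consider free; intentionally left
       uninstrumented. */
    SUPPRESS_MSAN SUPPRESS_TSAN
    static int
    probe_byte(const char *p)
    {
        return p != NULL && *p != 0;
    }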
* _PyTuple_ITEMS() gives access to the tuple->ob_item field and casts the
first argument to PyTupleObject*. This internal macro is only usable if
Py_BUILD_CORE is defined (see the sketch after this list).
* Replace &PyTuple_GET_ITEM(ob, 0) with _PyTuple_ITEMS(ob).
* Replace PyTuple_GET_ITEM(op, 1) with &_PyTuple_ITEMS(ob)[1].
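For reference, a rough sketch of the accessor and the replacements, assuming the macro is shaped as described above (the exact definition lives in the internal headers and may differ):

    #include "Python.h"

    #ifdef Py_BUILD_CORE
    /* Illustrative definition only. */
    #define _PyTuple_ITEMS(op) (((PyTupleObject *)(op))->ob_item)
    #endif

    /* Before: PyObject **stack = &PyTuple_GET_ITEM(ob, 0);
     * After:  PyObject **stack = _PyTuple_ITEMS(ob);
     * Before: PyObject **args  = &PyTuple_GET_ITEM(ob, 1);
     * After:  PyObject **args  = &_PyTuple_ITEMS(ob)[1];
     */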
Gives approx 20% speed-up using clang depending on the number of elements in the set (the less dense the set, the more the speed-up).
Uses the same entry++ logic used elsewhere in the setobject.c code.
The accu.h header is no longer part of the Python C API: it has been
moved to the "internal" headers which are restricted to Python
itself.
Replace #include "accu.h" with #include "pycore_accu.h".
If Py_BUILD_CORE is defined, the PyThreadState_GET() macro accesses
_PyRuntime, which comes from the internal pycore_state.h header.
Public headers must not require internal headers.
Move PyThreadState_GET() and _PyInterpreterState_GET_UNSAFE() from
Include/pystate.h to Include/internal/pycore_state.h, and rename
PyThreadState_GET() to _PyThreadState_GET() there.
The PyThreadState_GET() macro of pystate.h is now redefined when
pycore_state.h is included, to use the fast _PyThreadState_GET().
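A minimal sketch of the resulting pattern, assuming the tstate_current field in _PyRuntime.gilstate mentioned in the changes below and the atomics helpers from pyatomic.h; not the exact CPython code:

    /* In Include/internal/pycore_state.h (sketch), requires Py_BUILD_CORE: */
    #define _PyThreadState_GET() \
        ((PyThreadState *)_Py_atomic_load_relaxed( \
            &_PyRuntime.gilstate.tstate_current))

    /* Redirect the public macro to the fast internal accessor: */
    #undef PyThreadState_GET
    #define PyThreadState_GET() _PyThreadState_GET()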
Changes:
* Add _PyThreadState_GET() macro
* Replace "PyThreadState_GET()->interp" with
_PyInterpreterState_GET_UNSAFE()
* Replace PyThreadState_GET() with _PyThreadState_GET() in internal C
files (compiled with Py_BUILD_CORE defined), but keep
PyThreadState_GET() in the public header files.
* _testcapimodule.c: replace PyThreadState_GET() with
PyThreadState_Get(); the module is not compiled with Py_BUILD_CORE
defined.
* pycore_state.h now requires Py_BUILD_CORE to be defined.
* Remove _PyThreadState_Current
* Replace GET_TSTATE() with PyThreadState_GET()
* Replace GET_INTERP_STATE() with _PyInterpreterState_GET_UNSAFE()
* Replace direct access to _PyThreadState_Current with
PyThreadState_GET()
* Replace _PyThreadState_Current with
_PyRuntime.gilstate.tstate_current
* Rename SET_TSTATE() to _PyThreadState_SET(), a name more
consistent with _PyThreadState_GET()
* Update outdated comments
The list() constructor isn't taking full advantage of known input
lengths or length hints. This commit makes the constructor
pre-size the list and not over-allocate when the input size is known (the
input collection implements __len__). One of the main advantages is
roughly a 12% memory saving, the difference between over-allocating
and allocating exactly the input size.
For efficiency, and to avoid a performance regression for small
generators and collections, the size of the input object is obtained using
__len__ and not __length_hint__, as the latter is considerably slower.
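As a rough illustration of the idea (a hypothetical helper for a sequence input, not CPython's actual list_extend(), which also handles other sized iterables):

    #include "Python.h"

    static PyObject *
    copy_sequence_presized(PyObject *seq)
    {
        /* __len__ only, not __length_hint__ */
        Py_ssize_t n = PyObject_Length(seq);
        if (n < 0) {
            PyErr_Clear();
            return PySequence_List(seq);    /* length unknown: generic path */
        }
        /* Exactly n slots: no over-allocation. Assumes the length does not
           change while copying. */
        PyObject *list = PyList_New(n);
        if (list == NULL) {
            return NULL;
        }
        for (Py_ssize_t i = 0; i < n; i++) {
            PyObject *item = PySequence_GetItem(seq, i);
            if (item == NULL) {
                Py_DECREF(list);
                return NULL;
            }
            PyList_SET_ITEM(list, i, item); /* steals the reference */
        }
        return list;
    }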
Configuring python with ./configure --with-pydebug CFLAGS="-D COUNT_ALLOCS -O0"
makes "make smelly" fail as some symbols were being exported without the "Py_" or
"_Py" prefixes.
Use _PyObject_ASSERT() in:
* _PyDict_CheckConsistency()
* _PyType_CheckConsistency()
* _PyUnicode_CheckConsistency()
_PyObject_ASSERT() dumps the faulty object if the assertion fails
to help debugging.
* Convert PyObject_INIT() and PyObject_INIT_VAR() macros to static
inline functions.
* Fix usage of these functions: cast to PyObject* or PyVarObject*.
Changes:
* Add _PyObject_AssertFailed() function.
* Add _PyObject_ASSERT() and _PyObject_ASSERT_WITH_MSG() macros.
* gc_decref(): replace assert() with _PyObject_ASSERT_WITH_MSG() to
dump the faulty object if the assertion fails.
_PyObject_AssertFailed() calls:
* _PyMem_DumpTraceback(): try to log the traceback where the object
memory has been allocated if tracemalloc is enabled.
* _PyObject_Dump(): log repr(obj).
* Py_FatalError(): log the current Python traceback.
_PyObject_AssertFailed() uses the _PyObject_IsFreed() heuristic to check
whether the object memory has been freed by a debug hook on Python memory
allocators.
Initial patch written by David Malcolm.
Co-Authored-By: David Malcolm <dmalcolm@redhat.com>
* Add a Py_STATIC_INLINE() macro to declare a "static inline" function.
If the compiler supports it, try to always inline the function even if no
optimization level was specified (one possible shape is sketched after
this list).
* Modify pydtrace.h to use Py_STATIC_INLINE() when WITH_DTRACE is
not defined.
* Add a unit test on Py_DECREF() to make sure that
_Py_NegativeRefcount() reports the correct filename.
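One possible shape for such a macro (illustrative; the patch may define it differently):

    #if defined(__GNUC__) || defined(__clang__)
    #  define Py_STATIC_INLINE static inline __attribute__((always_inline))
    #else
    #  define Py_STATIC_INLINE static inline
    #endif

    /* Usage: inlined even at -O0 when the compiler honors always_inline. */
    Py_STATIC_INLINE int
    add_one(int x)
    {
        return x + 1;
    }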
tracemalloc now tries to update the traceback when an object is
reused from a "free list" (optimization for faster object creation,
used by the builtin list type for example).
Changes:
* Add _PyTraceMalloc_NewReference() function which tries to update
the Python traceback of a Python object.
* _Py_NewReference() now calls _PyTraceMalloc_NewReference().
* Add a unit test.
_PyObject_Dump() now uses a heuristic to check whether the object memory
has been freed: it logs "<freed object>" in that case.
The heuristic relies on the debug hooks on Python memory allocators,
which fill the memory with DEADBYTE (0xDB) when it is
deallocated. Use PYTHONMALLOC=debug to always enable these debug
hooks.
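A rough sketch of such a check, assuming the DEADBYTE fill described above; looks_freed() is a hypothetical name, not the actual _PyObject_IsFreed():

    #include <stddef.h>

    /* Heuristic: a block entirely filled with DEADBYTE (0xDB) was probably
       freed by the debug allocator. A live object could, in principle,
       legitimately contain these bytes. */
    static int
    looks_freed(const unsigned char *p, size_t n)
    {
        if (n == 0) {
            return 0;
        }
        for (size_t i = 0; i < n; i++) {
            if (p[i] != 0xDB) {
                return 0;
            }
        }
        return 1;
    }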
The assignment of i/2 to nk is redundant because on this code path, nk is already the size of the dictionary, and i is already twice the size of the dictionary. I've replaced the store with an assertion that i/2 is nk.
property_descr_get() uses a "cached" tuple to optimize function
calls, but this tuple can be discovered in debug mode with
sys.getobjects(). Remove the optimization: it's not really worth it,
and it has caused 3 different crashes in recent years.
Microbenchmark:
./python -m perf timeit -v \
-s "from collections import namedtuple; P = namedtuple('P', 'x y'); p = P(1, 2)" \
--duplicate 1024 "p.x"
Result:
Mean +- std dev: [ref] 32.8 ns +- 0.8 ns -> [patch] 40.4 ns +- 1.3 ns: 1.23x slower (+23%)
When a dict subclass overrides the ordering (`__iter__()`, `keys()`, and
`items()`), `dict(o)` should use it instead of the base dict ordering.
https://bugs.python.org/issue34320
Address a C undefined behavior issue: signed integer overflow in set object table resizing. Our -fwrapv compiler flag, and practical reasons why sets are unlikely to ever get this large, mean this was probably never an issue in practice, but the code was incorrect and generated code analysis warnings.
https://bugs.python.org/issue1621
* Add %T format to PyUnicode_FromFormatV(), and so to
PyUnicode_FromFormat() and PyErr_Format(), to format an object type
name: equivalent to "%s" with Py_TYPE(obj)->tp_name (a usage sketch
follows this list).
* Replace Py_TYPE(obj)->tp_name with %T format in unicodeobject.c.
* Add unit test on %T format.
* Rename unicode_fromformat_write_cstr() to
unicode_fromformat_write_utf8(), to make the intent more explicit.
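Assuming %T expands to Py_TYPE(obj)->tp_name as described above, a typical call site could look like this (the helper and the message text are illustrative):

    #include "Python.h"

    static int
    check_str(PyObject *obj)
    {
        if (!PyUnicode_Check(obj)) {
            /* Before: PyErr_Format(PyExc_TypeError,
                           "expected str, got %s", Py_TYPE(obj)->tp_name); */
            PyErr_Format(PyExc_TypeError, "expected str, got %T", obj);
            return -1;
        }
        return 0;
    }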
Add support for the "surrogatepass" error handler in
PyUnicode_DecodeFSDefault() and PyUnicode_EncodeFSDefault()
for the UTF-8 encoding.
Changes:
* _Py_DecodeUTF8Ex() and _Py_EncodeUTF8Ex() now support the
surrogatepass error handler (_Py_ERROR_SURROGATEPASS).
* _Py_DecodeLocaleEx() and _Py_EncodeLocaleEx() now use
the _Py_error_handler enum instead of "int surrogateescape" to pass
the error handler. These functions now return -3 if the error
handler is unknown.
* Add unit tests on _Py_DecodeLocaleEx() and _Py_EncodeLocaleEx()
in test_codecs.
* Rename get_error_handler() to _Py_GetErrorHandler() and expose it
as a private function.
* _freeze_importlib no longer needs the config.filesystem_errors="strict"
workaround.
_PyCoreConfig_Read() is now responsible for choosing the filesystem
encoding and error handler. Using Py_Main(), the encoding is now
chosen even before calling Py_Initialize().
_PyCoreConfig.filesystem_encoding is now the reference, instead of
Py_FileSystemDefaultEncoding, for the Python filesystem encoding.
Changes:
* Add filesystem_encoding and filesystem_errors to _PyCoreConfig
* _PyCoreConfig_Read() now reads the locale encoding for the file
system encoding.
* PyUnicode_EncodeFSDefault() and PyUnicode_DecodeFSDefaultAndSize()
now use the interpreter configuration rather than
Py_FileSystemDefaultEncoding and Py_FileSystemDefaultEncodeErrors
global configuration variables.
* Add _Py_SetFileSystemEncoding() and _Py_ClearFileSystemEncoding()
private functions to only modify Py_FileSystemDefaultEncoding and
Py_FileSystemDefaultEncodeErrors in coreconfig.c.
* _Py_CoerceLegacyLocale() now takes an int rather than
_PyCoreConfig for the warning.
Also, propagate the error from PyNumber_AsSsize_t(): we don't care only
about OverflowError, which is not reported if the second argument is NULL.
Reported by Svace static analyzer.
* The hash of BuiltinMethodType instances no longer depends on the hash
of __self__. It now depends on the hash of id(__self__).
* The hash and equality of ModuleType and MethodWrapperType instances no
longer depend on the hash and equality of __self__. They now depend on
the hash and equality of id(__self__).
* MethodWrapperType instances no longer support ordering.
Add more fields to _PyCoreConfig:
* _check_hash_pycs_mode
* bytes_warning
* debug
* inspect
* interactive
* legacy_windows_fs_encoding
* legacy_windows_stdio
* optimization_level
* quiet
* unbuffered_stdio
* user_site_directory
* verbose
* write_bytecode
Changes:
* Remove pymain_get_global_config() and pymain_set_global_config()
which became useless. These functions have been replaced by
_PyCoreConfig_GetGlobalConfig() and
_PyCoreConfig_SetGlobalConfig().
* sys.flags.dont_write_bytecode value is now restricted to 1 even if
-B option is specified multiple times on the command line.
* PyThreadState_Clear() now uses the config from the current
interpreter rather than the global Py_VerboseFlag.
Fix error messages for PySequence_Size(), PySequence_GetItem(),
PySequence_SetItem() and PySequence_DelItem() called with a mapping
and PyMapping_Size() called with a sequence.
`_PyUnicode_TransformDecimalAndSpaceToASCII()` missed the trailing NUL character.
This caused a buffer overflow in `_Py_string_to_number_with_underscores()`.
The bug was introduced in 9b6c60cb.
During development of the limited API support for PySide,
we saw an error in a macro that accessed a type field.
This patch fixes the 7 errors in the Python headers.
Macros whose names are not all-capitals were implemented
as functions.
To allow redoing the analysis, a script is included that
parses all headers and looks for "->tp_" in sections which can
be reached with the limited API active.
The script can easily be run as a test.
Error listing:
../../Include/objimpl.h:243
#define PyObject_IS_GC(o) (PyType_IS_GC(Py_TYPE(o)) && \
(Py_TYPE(o)->tp_is_gc == NULL || Py_TYPE(o)->tp_is_gc(o)))
Action: commented only
../../Include/objimpl.h:362
#define PyType_SUPPORTS_WEAKREFS(t) ((t)->tp_weaklistoffset > 0)
Action: commented only
../../Include/objimpl.h:364
#define PyObject_GET_WEAKREFS_LISTPTR(o) \
((PyObject **) (((char *) (o)) + Py_TYPE(o)->tp_weaklistoffset))
Action: commented only
../../Include/pyerrors.h:143
#define PyExceptionClass_Name(x) \
((char *)(((PyTypeObject*)(x))->tp_name))
Action: implemented function
../../Include/abstract.h:593
#define PyIter_Check(obj) \
((obj)->ob_type->tp_iternext != NULL && \
(obj)->ob_type->tp_iternext != &_PyObject_NextNotImplemented)
Action: implemented function
../../Include/abstract.h:713
#define PyIndex_Check(obj) \
((obj)->ob_type->tp_as_number != NULL && \
(obj)->ob_type->tp_as_number->nb_index != NULL)
Action: implemented function
../../Include/abstract.h:924
#define PySequence_ITEM(o, i)\
( Py_TYPE(o)->tp_as_sequence->sq_item(o, i) )
Action: commented only
METH_NOARGS functions need only a single argument but they are cast
to PyCFunction, which takes two arguments. This triggers an
invalid function cast warning in gcc 8 due to the argument mismatch.
Fix this by adding a dummy unused argument, as sketched below.
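A sketch of the resulting pattern (the module and function names are illustrative): the handler takes an explicit, unused second parameter so its type matches PyCFunction exactly and no cast is needed.

    #include "Python.h"

    static PyObject *
    mymod_ping(PyObject *self, PyObject *Py_UNUSED(ignored))
    {
        Py_RETURN_NONE;
    }

    static PyMethodDef mymod_methods[] = {
        {"ping", mymod_ping, METH_NOARGS, "Return None."},
        {NULL, NULL, 0, NULL}
    };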
Fix clang ubsan (undefined behavior sanitizer) warnings in dictobject.c by
adjusting how the internal struct _dictkeysobject shared keys structure is
declared.
This remains ABI compatible. The union at the end of the
struct, which was used for convenience to avoid typecasting, is replaced by
a char[] variable length array at the end of the struct. Clang knows this
pattern is used for variable sized objects and will not report an undefined
behavior problem. Similarly, char arrays do not have strict aliasing undefined
behavior when cast.
PEP-007 does not currently list variable length arrays (VLAs) as allowed
in our subset of C99. If this turns out to be a problem, the fix to this is
to change the char `dk_indices[]` into `dk_indices[1]` and restore the
three size computation subtractions this change removes:
`- Py_MEMBER_SIZE(PyDictKeysObject, dk_indices)`
If this works as is I'll make a separate PR to update PEP-007.
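A simplified sketch of the layout change (the field set is trimmed and the struct name is illustrative; the real PyDictKeysObject has more fields):

    #include "Python.h"

    struct dictkeys_sketch {
        Py_ssize_t dk_refcnt;
        Py_ssize_t dk_size;      /* size of the hash table (power of 2) */
        Py_ssize_t dk_usable;
        Py_ssize_t dk_nentries;
        /* Was a union used to pick the index width; now raw char storage.
           Indices (and the key entries after them) are addressed by casting,
           which is free of strict-aliasing problems for char. */
        char dk_indices[];
    };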
* Added new opcode END_ASYNC_FOR.
* Setting global StopAsyncIteration no longer breaks "async for" loops.
* Jumping into an "async for" loop is now disabled.
* Jumping out of an "async for" loop no longer corrupts the stack.
* Simplify the compiler.
Multi-phase initialized modules allow m_traverse to be called while the
module is still being initialized, so module authors may need to account
for that.
The commit removes one unnecessary "if" clause in genobject.c. That "if" clause was masking un-awaited coroutine warnings just to make writing unit tests more convenient.
Better account for single-line compound statements and
semi-colon separated statements when suggesting
Py3 replacements for Py2 print statements.
Initial patch by Nitish Chandra.
dictview_repr(): Use a Py_ReprEnter() / Py_ReprLeave() pair to check
for recursion, and produce "..." if so.
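The guard pattern in question, as a rough sketch (not the actual dictview_repr() body):

    #include "Python.h"

    static PyObject *
    view_repr_sketch(PyObject *self)
    {
        int rc = Py_ReprEnter(self);
        if (rc != 0) {
            /* rc > 0: repr of self is already in progress (recursion),
               so produce "..." instead of recursing forever. */
            return rc > 0 ? PyUnicode_FromString("...") : NULL;
        }
        PyObject *result = PyUnicode_FromFormat("%s([...])",
                                                Py_TYPE(self)->tp_name);
        Py_ReprLeave(self);
        return result;
    }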
test_recursive_repr(): Check for the string rather than a
RecursionError. (Test cannot be any tighter as contents are
implementation-dependent.)
test_deeply_nested_repr(): Add new test, replacing the original
test_recursive_repr(). It checks that a RecursionError is raised in
the case of a non-recursive but deeply nested structure. (Very
similar to what test_repr_deep() in test/test_dict.py does for a
normal dict.)
OrderedDictTests: Add new test case, to test behavior on OrderedDict
instances containing their own values() or items().
* Add coro.cr_origin and sys.set_coroutine_origin_tracking_depth
* Use coroutine origin information in the unawaited coroutine warning
* Stop using set_coroutine_wrapper in asyncio debug mode
* In BaseEventLoop.set_debug, enable debugging in the correct thread
The suggested replacement for print statements previously failed to account
for leading whitespace and hence could end up including unwanted text in
the proposed call to the print builtin.
Patch by Sanyam Khurana.
Previously, AttributeError was always raised (and then discarded) when an attribute was not found.
This commit skips raising AttributeError when `tp_getattro` is `PyObject_GenericGetAttr`.
This makes hasattr() and getattr() about 4x faster when the attribute is not found.
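For context, a sketch of the cost being avoided (hypothetical helper, not the CPython implementation): a hasattr()-style check built on the generic API creates an AttributeError just to discard it; the fast path skips that work when tp_getattro is PyObject_GenericGetAttr.

    #include "Python.h"

    static int
    hasattr_slow(PyObject *obj, PyObject *name)
    {
        PyObject *value = PyObject_GetAttr(obj, name);
        if (value == NULL) {
            if (PyErr_ExceptionMatches(PyExc_AttributeError)) {
                /* The exception object was built only to be thrown away. */
                PyErr_Clear();
                return 0;
            }
            return -1;  /* a real error: propagate */
        }
        Py_DECREF(value);
        return 1;
    }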
Modify locale.localeconv(), time.tzname, os.strerror() and other
functions to ignore the UTF-8 Mode: always use the current locale
encoding.
Changes:
* Add _Py_DecodeLocaleEx() and _Py_EncodeLocaleEx(). On decoding or
encoding error, they return the position of the error and an error
message which are used to raise Unicode errors in
PyUnicode_DecodeLocale() and PyUnicode_EncodeLocale().
* Replace _Py_DecodeCurrentLocale() with _Py_DecodeLocaleEx().
* PyUnicode_DecodeLocale() now uses _Py_DecodeLocaleEx() for all
cases, especially for the strict error handler.
* Add _Py_DecodeUTF8Ex(): return more information on decoding error
and supports the strict error handler.
* Rename _Py_EncodeUTF8_surrogateescape() to _Py_EncodeUTF8Ex().
* Replace _Py_EncodeCurrentLocale() with _Py_EncodeLocaleEx().
* Ignore the UTF-8 mode to encode/decode localeconv(), strerror()
and time zone name.
* PyUnicode_DecodeLocale(), PyUnicode_DecodeLocaleAndSize()
and PyUnicode_EncodeLocale() now ignore the UTF-8 mode: always use
the "current" locale.
* Remove _PyUnicode_DecodeCurrentLocale(),
_PyUnicode_DecodeCurrentLocaleAndSize() and
_PyUnicode_EncodeCurrentLocale().
Add new functions ignoring the UTF-8 mode:
* _Py_DecodeCurrentLocale()
* _Py_EncodeCurrentLocale()
* _PyUnicode_DecodeCurrentLocaleAndSize()
* _PyUnicode_EncodeCurrentLocale()
Modify the readline module to use these functions.
Re-enable test_readline.test_nonascii().
Replace Py_EncodeLocale() with _Py_EncodeLocaleRaw() in:
* _Py_wfopen()
* _Py_wreadlink()
* _Py_wrealpath()
* _Py_wstat()
* pymain_open_filename()
These functions are called early during Python initialization, when only
the RAW memory allocator can be used.
Py_EncodeLocale() now uses _Py_EncodeUTF8_surrogateescape(), instead
of using temporary unicode and bytes objects. So Py_EncodeLocale()
doesn't use the Python C API anymore.
* Add -X utf8 command line option, PYTHONUTF8 environment variable
and a new sys.flags.utf8_mode flag.
* If the LC_CTYPE locale is "C" at startup, automatically enable the
UTF-8 mode.
* Add _winapi.GetACP(). encodings._alias_mbcs() now calls
_winapi.GetACP() to get the ANSI code page
* locale.getpreferredencoding() now returns 'UTF-8' in the UTF-8
mode. As a side effect, open() now uses the UTF-8 encoding by
default in this mode.
* Py_DecodeLocale() and Py_EncodeLocale() now use the UTF-8 encoding
in the UTF-8 Mode.
* Update subprocess._args_from_interpreter_flags() to handle -X utf8
* Skip some tests relying on the current locale if the UTF-8 mode is
enabled.
* Add test_utf8mode.py.
* _Py_DecodeUTF8_surrogateescape() gets a new optional parameter to
return also the length (number of wide characters).
* pymain_get_global_config() and pymain_set_global_config() now
always copy flag values, rather than only copying if the new value
is greater than the old value.
The error messages in `object.__new__` and `object.__init__` now aim
to point the user more directly at the name of the class being instantiated
in cases where they *haven't* been overridden (on the assumption that
the actual problem is a missing `__new__` or `__init__` definition in the
class body).
When they *have* been overridden, the errors still report themselves as
coming from object, on the assumption that the problem is with the call
up to the base class in the method implementation, rather than with the
way the constructor is being called.
Explicitly cast digits (Py_ssize_t) to double to fix the following
false-alarm warning from Coverity:
"fsize_z = digits * log_base_BASE[base] + 1;"
CID 1424951: Incorrect expression (UNINTENDED_INTEGER_DIVISION)
Dividing integer expressions "9223372036854775783UL" and "4UL", and
then converting the integer quotient to type "double". Any remainder,
or fractional part of the quotient, is ignored.
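The fix amounts to forcing floating-point arithmetic with an explicit cast, roughly as below (context trimmed; in longobject.c the operands come from the long object being converted and a precomputed table):

    #include "Python.h"

    static double
    estimate_size(Py_ssize_t digits, double log_base)
    {
        /* was: digits * log_base + 1 (flagged by Coverity) */
        return (double)digits * log_base + 1.0;
    }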
* Py_Main() now starts by reading Py_xxx configuration variables to
only work on its own private structure, and then later writes back
the configuration into these variables.
* Replace Py_GETENV() with pymain_get_env_var() which ignores empty
variables.
* Add _PyCoreConfig.dump_refs
* Add _PyCoreConfig.malloc_stats
* _PyObject_DebugMallocStats() is now responsible for checking whether debug
hooks are installed. The function returns 1 if stats were written,
or 0 if the hooks are disabled. Mark _PyMem_PymallocEnabled() as
static.
Previously, CO_NOFREE was set in the compiler, which meant
it could end up being set incorrectly when code objects
were created directly. Setting it in the constructor based
on freevars and cellvars ensures it is always accurate,
regardless of how the code object is defined.
* Rename PyPathConfig structure to _PyPathConfig and move it to
Include/internal/pystate.h
* Rename path_config to _Py_path_config
* _PyPathConfig: Rename program_name field to program_full_path
* Add assert(str != NULL); to _PyMem_RawWcsdup(), _PyMem_RawStrdup()
and _PyMem_Strdup().
* Rename calculate_path() to pathconfig_global_init(). The function
now does nothing if it's already initialized.
* Fix _PyMem_SetupAllocators("debug"): always restore allocators to
the defaults, rather than only calling _PyMem_SetupDebugHooks().
* Add _PyMem_SetDefaultAllocator() helper to set the "default"
allocator.
* Add _PyMem_GetAllocatorsName(): get the name of the allocators
* main() now uses debug hooks on memory allocators if Py_DEBUG is
defined, rather than calling malloc() directly
* Document default memory allocators in C API documentation
* _Py_InitializeCore() now fails with a fatal user error if
PYTHONMALLOC value is an unknown memory allocator, instead of
failing with a fatal internal error.
* Add new tests on the PYTHONMALLOC environment variable
* Add support.with_pymalloc()
* Add the _testcapi.WITH_PYMALLOC constant and expose it as
support.with_pymalloc().
* sysconfig.get_config_var('WITH_PYMALLOC') doesn't work on Windows, so
replace it with support.with_pymalloc().
* pythoninfo: add _testcapi collector for pymem
Py_GetPath() and Py_Main() now call
_PyMainInterpreterConfig_ReadEnv() to share the same code to get
environment variables.
Changes:
* Add _PyMainInterpreterConfig_ReadEnv()
* Add _PyMainInterpreterConfig_Clear()
* Add _PyMem_RawWcsdup()
* _PyMainInterpreterConfig: rename pythonhome to home
* Rename _Py_ReadMainInterpreterConfig() to
_PyMainInterpreterConfig_Read()
* Use _Py_INIT_USER_ERR(), instead of _Py_INIT_ERR(), for decoding
errors: the user is able to fix the issue, it's not a bug in
Python. Same change was made in _Py_INIT_NO_MEMORY().
* Remove _Py_GetPythonHomeWithConfig()
bpo-32096, bpo-30860: Partially revert the commit
2ebc5ce42a8a9e047e790aefbf9a94811569b2b6:
* Move structures back from Include/internal/mem.h to
Objects/obmalloc.c
* Remove _PyObject_Initialize() and _PyMem_Initialize()
* Remove Include/internal/pymalloc.h
* Add test_capi.test_pre_initialization_api():
Make sure that it's possible to call Py_DecodeLocale(), and then call
Py_SetProgramName() with the decoded string, before Py_Initialize().
PyMem_RawMalloc() and Py_DecodeLocale() can be called again before
_PyRuntimeState_Init().
Co-Authored-By: Eric Snow <ericsnowcurrently@gmail.com>
Py_Main() now handles two more -X options:
* -X showrefcount: new _PyCoreConfig.show_ref_count field
* -X showalloccount: new _PyCoreConfig.show_alloc_count field
Add a new "developer mode": new "-X dev" command line option to
enable debug checks at runtime.
Changes:
* Add unit tests for -X dev
* test_cmd_line: replace test.support with support.
* Fix _PyRuntimeState_Fini(): Use the same memory allocator
as _PyRuntimeState_Init().
* Fix _PyMem_GetDefaultRawAllocator()
* Don't use "Python runtime" anymore to parse command line options or
to get environment variables: pymain_init() is now a strict
separation.
* Use an error message rather than "crashing" directly with
Py_FatalError(). Limit the number of calls to Py_FatalError(). It
prepares the code to handle errors more nicely later.
* Warnings options (-W, PYTHONWARNINGS) and "XOptions" (-X) are now
only added to the sys module once Python core is properly
initialized.
* _PyMain is now the well identified owner of some important strings
like: warnings options, XOptions, and the "program name". The
program name string is now properly freed at exit.
pymain_free() is now responsible for freeing the "command" string.
* Rename most methods in Modules/main.c to use a "pymain_" prefix to
avoid conflicts and ease debugging.
* Replace _Py_CommandLineDetails_INIT with memset(0)
* Reorder a lot of code to fix the initialization ordering. For
example, initializing standard streams now comes before parsing
PYTHONWARNINGS.
* Py_Main() now handles errors when adding warnings options and
XOptions.
* Add _PyMem_GetDefaultRawAllocator() private function.
* Cleanup _PyMem_Initialize(): remove useless global constants: move
them into _PyMem_Initialize().
* Call _PyRuntime_Initialize() as soon as possible:
_PyRuntime_Initialize() now returns an error message on failure.
* Add _PyInitError structure and following macros:
* _Py_INIT_OK()
* _Py_INIT_ERR(msg)
* _Py_INIT_USER_ERR(msg): "user" error, don't abort() in that case
* _Py_INIT_FAILED(err)
The kB (kilobyte) unit means 1000 bytes, whereas KiB ("kibibyte")
means 1024 bytes. KB was misused: replace kB or KB with KiB where
appropriate.
Same change for MB and GB, which become MiB and GiB.
Change the output of Tools/iobench/iobench.py.
Also round the size of the documentation from 5.5 MB to 5 MiB.
A few bytes at the beginning and at the end of reallocated blocks, as well
as the header and the trailer, are now erased before calling realloc()
in debug builds. This helps detect use of, or double freeing of, the
reallocated block.
Cleanup pymalloc:
* Rename _PyObject_Alloc() to pymalloc_alloc()
* Rename _PyObject_FreeImpl() to pymalloc_free()
* Rename _PyObject_Realloc() to pymalloc_realloc()
* pymalloc_alloc() and pymalloc_realloc() no longer fall back on the raw
allocator; this must now be done by the caller
* Add "success" and "failed" labels to pymalloc_alloc() and
pymalloc_free()
* pymalloc_alloc() and pymalloc_free() don't update
num_allocated_blocks anymore: it should be done in the caller
* _PyObject_Calloc() is now responsible for filling the memory block
allocated by pymalloc with zeros
* Simplify pymalloc_alloc() prototype
* _PyObject_Realloc() now calls _PyObject_Malloc() rather than
calling pymalloc_alloc() directly
_PyMem_DebugRawAlloc() and _PyMem_DebugRawRealloc():
* document the layout of a memory block
* don't increase the serial number if the allocation failed
* check for integer overflow before computing the total size
* add a 'data' variable to make the code easier to follow
test_setallocators() of _testcapimodule.c now also tests the context.