The kB (*kilobyte*) unit means 1000 bytes, whereas KiB ("kibibyte")
means 1024 bytes. KB was misused: replace kB or KB with KiB where
appropriate.
The same change applies to MB and GB, which become MiB and GiB.
Change the output of Tools/iobench/iobench.py.
Also round the size of the documentation from 5.5 MB to 5 MiB.
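For reference, a minimal illustration of the two unit families (these constants are illustrative, not taken from the patch):

    #define KB  1000            /* kilobyte: SI unit, powers of ten */
    #define KiB 1024            /* kibibyte: IEC unit, powers of two */
    #define MiB (1024 * KiB)
    #define GiB (1024 * MiB)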
A few bytes at the beginning and at the end of reallocated blocks, as
well as the header and the trailer, are now erased before calling
realloc() in debug builds. This helps detect use of, or a double free
of, the reallocated block.
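A hedged sketch of the scrubbing idea (names and sizes are illustrative, not the actual obmalloc identifiers):

    #include <string.h>

    #define DEADBYTE     0xDB   /* illustrative dead-byte pattern */
    #define ERASED_BYTES 8      /* how many bytes to scrub at each end */

    /* Before realloc() may move the block, overwrite a few leading and
       trailing payload bytes so stale pointers read recognizable garbage. */
    static void
    scrub_before_realloc(unsigned char *data, size_t size)
    {
        size_t n = size < ERASED_BYTES ? size : ERASED_BYTES;
        memset(data, DEADBYTE, n);                /* start of the block */
        memset(data + size - n, DEADBYTE, n);     /* end of the block */
    }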
Cleanup pymalloc:
* Rename _PyObject_Alloc() to pymalloc_alloc()
* Rename _PyObject_FreeImpl() to pymalloc_free()
* Rename _PyObject_Realloc() to pymalloc_realloc()
* pymalloc_alloc() and pymalloc_realloc() no longer fall back on the
  raw allocator; the caller must now do that (see the sketch after
  this list)
* Add "success" and "failed" labels to pymalloc_alloc() and
  pymalloc_free()
* pymalloc_alloc() and pymalloc_free() no longer update
  num_allocated_blocks; that should be done in the caller
* _PyObject_Calloc() is now responsible for filling the memory block
  allocated by pymalloc with zeros
* Simplify the pymalloc_alloc() prototype
* _PyObject_Realloc() now calls _PyObject_Malloc() rather than calling
  pymalloc_alloc() directly
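A minimal sketch of the fallback that now lives in the caller (the pymalloc_alloc() out-parameter convention and the counter name are assumed from the list above, not guaranteed):

    #include <Python.h>

    static int pymalloc_alloc(void *ctx, void **ptr_p, size_t nbytes);
    static size_t num_allocated_blocks;

    static void *
    _PyObject_Malloc(void *ctx, size_t nbytes)
    {
        void *ptr;
        if (pymalloc_alloc(ctx, &ptr, nbytes)) {
            num_allocated_blocks++;       /* accounting moved to the caller */
            return ptr;
        }
        /* fall back on the raw allocator: pymalloc no longer does this */
        ptr = PyMem_RawMalloc(nbytes);
        if (ptr != NULL) {
            num_allocated_blocks++;
        }
        return ptr;
    }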
_PyMem_DebugRawAlloc() and _PyMem_DebugRawRealloc():
* document the layout of a memory block
* don't increase the serial number if the allocation failed
* check for integer overflow before computing the total size (sketched
  below)
* add a 'data' variable to make the code easier to follow
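A sketch of the overflow guard (DEBUG_OVERHEAD is a hypothetical constant standing in for the combined header and trailer size):

    #include <Python.h>

    #define DEBUG_OVERHEAD (4 * sizeof(size_t))

    static void *
    debug_raw_alloc(size_t nbytes)
    {
        if (nbytes > (size_t)PY_SSIZE_T_MAX - DEBUG_OVERHEAD) {
            return NULL;            /* nbytes + overhead would overflow */
        }
        return PyMem_RawMalloc(nbytes + DEBUG_OVERHEAD);
    }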
test_setallocators() of _testcapimodule.c now also tests the context.
The concrete PyDict_* API is used to interact with PyInterpreterState.modules in a number of places. This isn't compatible with all dict subclasses, nor with other Mapping implementations. This patch switches the concrete API usage to the corresponding abstract API calls.
We also add a PyImport_GetModule() function (and some other helpers) to reduce a bunch of code duplication.
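For illustration, the flavor of substitution involved (a hypothetical helper, not the actual patch):

    #include <Python.h>

    /* Before: PyDict_GetItem(modules, name) -- dict-only, borrowed ref.
       After: the abstract API works for any mapping. */
    static PyObject *
    get_module(PyObject *modules, PyObject *name)
    {
        PyObject *mod = PyObject_GetItem(modules, name);  /* new reference */
        if (mod == NULL && PyErr_ExceptionMatches(PyExc_KeyError)) {
            PyErr_Clear();  /* absent module: return NULL with no error set */
        }
        return mod;
    }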
* Add Py_UNREACHABLE() as an alias to abort().
* Use Py_UNREACHABLE() instead of assert(0)
* Convert more unreachable code to use Py_UNREACHABLE()
* Document Py_UNREACHABLE() and a few other macros.
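A usage sketch (my example, not from the commit): mark a branch the preceding logic makes impossible.

    #include <Python.h>

    static int
    sign(int x)
    {
        if (x > 0) return 1;
        if (x < 0) return -1;
        if (x == 0) return 0;
        Py_UNREACHABLE();   /* previously spelled assert(0) */
    }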
A bunch of code currently uses PyInterpreterState.modules directly instead of PyImport_GetModuleDict(). This complicates efforts to make changes relative to sys.modules. This patch switches to using PyImport_GetModuleDict() uniformly. Also, a number of related uses of sys.modules are updated for uniformity.
Note that this code was already reviewed and merged as part of #1638. I reverted that and am now splitting it up into more focused parts.
PR #1638, for bpo-28411, causes problems in some (very rare) edge cases. Until that gets sorted out, we're reverting the merge. PR #3506, a fix on top of #1638, is also being reverted.
* Drop warnoptions from PyInterpreterState.
* Drop xoptions from PyInterpreterState.
* Don't set warnoptions and _xoptions again.
* Decref after adding to sys.__dict__.
* Drop an unused macro.
* Check sys.xoptions *before* we delete it.
This undoes a853a8ba78 except for the pytime.c
parts. We want to continue to allow IEEE 754 doubles larger than FLT_MAX to be
rounded into finite floats. Tests were added to verify this behavior.
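A small self-contained illustration of the preserved behavior (assumes IEEE 754 round-to-nearest; not code from the commit):

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        /* A double just above FLT_MAX, within half a float ULP:
           round-to-nearest yields FLT_MAX rather than infinity. */
        double d = (double)FLT_MAX * (1.0 + 1e-9);
        float f = (float)d;
        printf("still finite: %s\n", f <= FLT_MAX ? "yes" : "no");
        return 0;
    }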
* group the (stateful) runtime globals into various topical structs
* consolidate the topical structs under a single top-level _PyRuntimeState struct
* add a check-c-globals.py script that helps identify runtime globals
Other globals are excluded (see globals.txt and check-c-globals.py).
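A toy sketch of the shape of the change (struct and field names are illustrative only):

    /* Before: loose file-level globals scattered across the codebase.
       After: grouped into topical structs under one top-level struct. */
    struct gc_runtime_state    { int enabled; };
    struct ceval_runtime_state { int recursion_limit; };

    typedef struct {
        struct gc_runtime_state gc;
        struct ceval_runtime_state ceval;
    } runtime_state;

    static runtime_state runtime;   /* the one remaining top-level global */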
f_trace_lines: enable/disable line trace events
f_trace_opcodes: enable/disable opcode trace events
These are intended primarily for testing of the interpreter
itself, as they make it much easier to emulate signals
arriving at unfortunate times.
* fixed OrderedDict.__init__ docstring re PEP 468
* tightened comment and mirrored to C impl
* added space after period per marco-buttu
* substituted "preserved" for "stable"
* drop references to Python 3.6 and PEP 468
Based on patch by Victor Stinner.
Add private C API function _PyUnicode_AsUnicode() which is similar to
PyUnicode_AsUnicode(), but checks for null characters.
The functions _PyArg_ParseStack() and
_PyArg_UnpackStack() were failing (with the error
"XXX() takes Y argument (Z given)") before
_PyArg_NoStackKeywords() was called.
Thus, the latter did not raise its more meaningful
error: "XXX() takes no keyword arguments".
Error messages for passing keyword arguments to some builtins that
don't support keyword arguments contained doubled parentheses: "()()".
The regression was introduced by bpo-30534.
Fix a MemoryError caused by moving code around in PR #886; nbytes was sometimes used uninitialized (in non-debug builds, when use_calloc was false and elsize was 0).
Make a non-Py_DEBUG, asserts-enabled build of CPython possible. This means
making sure helper functions are defined when NDEBUG is not defined, not
just when Py_DEBUG is defined.
Also fix a division-by-zero in obmalloc.c that went unnoticed because in Py_DEBUG mode, elsize is never zero.
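An illustrative guard only (not the literal patch):

    #include <stddef.h>

    static size_t
    num_elements(size_t nbytes, size_t elsize)
    {
        if (elsize == 0) {
            return 0;      /* zero-sized elements: avoid nbytes / 0 */
        }
        return nbytes / elsize;
    }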
* Add _PyObject_HasFastCall()
* partial_call() now avoids a temporary tuple to pass positional
  arguments if the callable supports the FASTCALL calling convention
  for positional arguments (see the sketch below).
* Also fix a performance regression in partial_call() if the callable
  doesn't support FASTCALL.
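A hedged sketch of the dispatch (error handling trimmed; _PyObject_HasFastCall() is assumed to report FASTCALL support):

    #include <Python.h>

    static PyObject *
    call_positional(PyObject *func, PyObject **stack, Py_ssize_t nargs)
    {
        if (_PyObject_HasFastCall(func)) {
            /* FASTCALL path: pass the C array directly, no temporary tuple */
            return _PyObject_FastCall(func, stack, nargs);
        }
        /* slow path: build the tuple that the legacy protocol requires */
        PyObject *args = PyTuple_New(nargs);
        if (args == NULL) {
            return NULL;
        }
        for (Py_ssize_t i = 0; i < nargs; i++) {
            Py_INCREF(stack[i]);
            PyTuple_SET_ITEM(args, i, stack[i]);
        }
        PyObject *res = PyObject_Call(func, args, NULL);
        Py_DECREF(args);
        return res;
    }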
PyEval_Call* APIs are not documented and they don't respect PY_SSIZE_T_CLEAN.
So add a comment block to ceval.h recommending the PyObject_Call* APIs instead.
This commit also changes the PyEval_CallMethod and PyEval_CallFunction
implementations to match PyObject_CallMethod and PyObject_CallFunction,
to reduce future maintenance cost. The optimization that avoids a
temporary tuple is copied too.
PyEval_CallFunction(callable, "i", (int)i) now calls callable(i) instead of
raising TypeError. Accepting this edge case is backward compatible.
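A quick sketch of the recommended replacement (a hypothetical call site, not code from the commit):

    #include <Python.h>

    static PyObject *
    call_with_int(PyObject *callable, int i)
    {
        /* Builds the argument from the format string and calls callable(i). */
        return PyObject_CallFunction(callable, "i", i);
    }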
On PyMem_Realloc failure, _PyCode_SetExtra should free co_extra if
co_extra->ce_extras could not be allocated.
On PyMem_Realloc success, _PyCode_SetExtra should set all unused slots in
co_extra->ce_extras to NULL.
* Move all functions used to call objects into a new Objects/call.c file.
* Rename fast_function() to _PyFunction_FastCallKeywords().
* Copy null_error() from Objects/abstract.c
* Inline type_error() in call.c so it does not have to be copied; it
  was only called once.
* Export _PyEval_EvalCodeWithName() since it is now called
from call.c.
Issue #29507: Optimize slots calling Python methods. For Python methods, get
the unbound Python function and prepend arguments with self, rather than
calling the descriptor which creates a temporary PyMethodObject.
Add a new _PyObject_FastCall_Prepend() function used to call the unbound Python
method with self. It avoids the creation of a temporary tuple to pass
positional arguments.
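A sketch of the helper's role (the exact prototype is assumed from this description):

    #include <Python.h>

    /* Call the unbound function as func(self, *args) without building a
       temporary tuple or a temporary PyMethodObject. */
    static PyObject *
    call_unbound(PyObject *func, PyObject *self,
                 PyObject **args, Py_ssize_t nargs)
    {
        return _PyObject_FastCall_Prepend(func, self, args, nargs);
    }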
Avoiding temporary PyMethodObject and avoiding temporary tuple makes Python
slots up to 1.46x faster. Microbenchmark on a __getitem__() method implemented
in Python:
Median +- std dev: 121 ns +- 5 ns -> 82.8 ns +- 1.0 ns: 1.46x faster (-31%)
Co-Authored-by: INADA Naoki <songofacandy@gmail.com>
Issue #29259, #29465: PyCFunction_Call() no longer creates a redundant
tuple to pass positional arguments for METH_VARARGS.
Add a new cfunction_call() subfunction.
* Functions matching *PyCFunction_*Call*() now call Py_EnterRecursiveCall().
* PyObject_Call() now calls _PyFunction_FastCallDict() and
  PyCFunction_Call() directly, to avoid calling Py_EnterRecursiveCall()
  twice per function call.
Decreased density gives better collision statistics (an average of 2.5 probes
in a full table versus 3.0 previously) and fewer occurrences of starting a
second, possibly overlapping, sequence of 10 linear probes. Resizes become a
little more frequent, but each one does less work (fewer insertions and fewer
collisions).