Most of this code was old enough to vote. Examples of cleanups:
+ Backslashes were used for line continuation even inside unclosed
  bracket structures, from back in the days when that was still needed.
+ There was no use of % formats; e.g., the old fpformat module was still
  used to format floats "by hand" in conjunction with rjust().
+ There was even use of a do-nothing .ignore() method tacked on to the
  end of a chain of method calls, because way back when Python would
  otherwise print the non-None result (as it does now in an interactive
  session -- it *used* to do that in batch mode too).
+ Perhaps controversial (although I can't imagine why, for real <wink>),
  I used augmented assignment where helpful. Stuff like
      self.total_calls = self.total_calls + other.total_calls
  is just plain harder to follow than
      self.total_calls += other.total_calls

The docs were seriously wrong. This started out by just fixing them, but
then it occurred to me that the doc confusion had propagated into
misleading variable names too, so I also renamed those to match reality.
As a result, IMO the time computations are much easier to understand now
(within the limitations of vast quantities of 3-character names <wink>).

This simplifies the rounding in _PyObject_VAR_SIZE, allows restoring the
pre-rounding calling sequence, and allows some nice little simplifications
in its callers. I'm still making it return a size_t, though.
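
A minimal sketch of the rounding idea, assuming the alignment unit is
sizeof(void *) and that it's a power of two (the standalone helper below
is mine; the real _PyObject_VAR_SIZE is a C macro):

    #include <stddef.h>

    /* Hypothetical helper mirroring what the macro computes: total the
       fixed part plus the variable items, then round up to a multiple
       of the pointer size, returning a size_t. */
    static size_t
    var_size_rounded(size_t basicsize, size_t itemsize, size_t nitems)
    {
        size_t size = basicsize + nitems * itemsize;
        /* Round up; assumes sizeof(void *) is a power of two. */
        return (size + sizeof(void *) - 1) & ~(sizeof(void *) - 1);
    }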

As Guido suggested, this makes the new subclassing code substantially
simpler. But the mechanics of doing it w/ C macro semantics are a mess,
and _PyObject_VAR_SIZE has a new calling sequence now.
Question: The PyObject_NEW_VAR macro appears to be part of the public API.
Regardless of what it expands to, the notion that it has to round up the
memory it allocates is new, and extensions containing the old
_PyObject_VAR_SIZE macro expansion (which was embedded in the
PyObject_NEW_VAR expansion) won't do this rounding. But the rounding
isn't actually *needed* except for new-style instances with dict pointers
after a variable-length blob of embedded data. So my guess is that we do
not need to bump the API version for this (as the rounding isn't needed
for anything an extension can do unless it's recompiled anyway). What's
your guess?
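
To make the "dict pointer after a variable-length blob" case concrete,
here's a small standalone illustration (the 20-byte figure is made up;
real sizes come from the type's tp_basicsize and tp_itemsize):

    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
        /* Suppose the fixed part plus the embedded variable-length data
           ends at byte 20. With 8-byte pointers, a __dict__ pointer
           stored at offset 20 would be misaligned; rounding the size up
           puts it at the next pointer boundary instead. */
        size_t end_of_data = 20;
        size_t align = sizeof(void *);
        size_t dict_offset = (end_of_data + align - 1) & ~(align - 1);
        printf("data ends at %lu, __dict__ slot goes at %lu\n",
               (unsigned long)end_of_data, (unsigned long)dict_offset);
        return 0;
    }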

Pad memory to properly align the __dict__ pointer in all cases.
gcmodule.c/objimpl.h, _PyObject_GC_Malloc:
+ Added a "padding" argument so that this flavor of malloc can allocate
  enough bytes for alignment padding (it can't know this is needed, but
  its callers do); see the sketch after this list.

typeobject.c, PyType_GenericAlloc:
+ Allocated enough bytes to align the __dict__ pointer.
+ Sped and simplified the round-up-to-PTRSIZE logic.
+ Added blank lines so I could parse the if/else blocks <0.7 wink>.
+ Use the _PyObject_VAR_SIZE macro to compute object size.
+ Break the computation into lines convenient for debugger inspection.
+ Speed the round-up-to-pointer-size computation.
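
A minimal sketch of the "padding" idea; the allocator name and signature
below are hypothetical (the entry above doesn't show _PyObject_GC_Malloc's
actual prototype):

    #include <stdlib.h>
    #include <stddef.h>

    /* Hypothetical stand-in for a malloc flavor with a "padding"
       argument: the allocator can't know whether alignment padding is
       needed, so the caller passes the extra byte count explicitly. */
    static void *
    gc_malloc_padded(size_t basicsize, size_t padding)
    {
        return malloc(basicsize + padding);
    }

    /* A caller that will store a __dict__ pointer after nbytes of
       variable-length data requests the round-up difference as padding. */
    static void *
    alloc_with_dict_slot(size_t nbytes)
    {
        size_t rounded = (nbytes + sizeof(void *) - 1)
                         & ~(sizeof(void *) - 1);
        return gc_malloc_padded(nbytes, rounded - nbytes);
    }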

Many types were subclassable but had a xxx_dealloc function that
called PyObject_DEL(self) directly instead of deferring to
self->ob_type->tp_free(self). It is permissible to set tp_free in the
type object directly to _PyObject_Del, for non-GC types, or to
_PyObject_GC_Del, for GC types (a sketch follows this entry). Still,
PyObject_DEL was a tad faster, so I fear that our pystone rating is
going down again. I'm not sure if doing something like
    void xxx_dealloc(PyObject *self)
    {
        if (PyXxxCheckExact(self))
            /* Exact instances can use the fast low-level macro. */
            PyObject_DEL(self);
        else
            /* Subclass instances defer to the type's tp_free. */
            self->ob_type->tp_free(self);
    }
is any faster than always calling the else branch, so I haven't
attempted that -- however those types whose own dealloc is fancier
(int, float, unicode) do use this pattern.
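
And for the simple case, a minimal sketch of wiring tp_free directly, per
the rule above (Xxx_Type and init_xxx_type are hypothetical, the slot
assignment is shown loosely against the 2.2-era headers, and a real
PyTypeObject initializer has many more fields):

    #include "Python.h"

    /* Hypothetical type object; only the slot relevant here is shown. */
    static PyTypeObject Xxx_Type;

    static void
    init_xxx_type(void)
    {
        /* Non-GC type: free instances with the low-level deallocator. */
        Xxx_Type.tp_free = _PyObject_Del;
        /* For a GC type instead: Xxx_Type.tp_free = _PyObject_GC_Del; */
    }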

The text rendered as
    foo\d
when it was clearly intended to render as
    foo$
Fred, is this the right way to fix it? If not, the earlier place in the
same paragraph that does render as
    foo$
is also wrong.