types. The special handling for these can now be removed from save_newobj().
Add some testing for this.
Also add support for setting the 'fast' flag on the Python Pickler class,
which suppresses use of the memo.
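A minimal sketch of the Python-level usage (Python 2 spelling; suppressing the memo is only safe when the pickled data has no shared or self-referential parts):

import pickle
from StringIO import StringIO   # Python 2, matching this era

f = StringIO()
p = pickle.Pickler(f)
p.fast = 1          # suppress the memo: no sharing/cycle tracking
p.dump([1, 2, 3])   # fine -- no shared or recursive substructure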
Changed comments everywhere that bugged me: /* Foo is inlined */ instead of
/* Inline Foo */. Somehow the "Inline Foo" phrase always confused me
for half a second (thinking, "No it isn't" until I added the missing
"here"). The new phrase is hopefully unambiguous.
The staticforward define was needed to support certain broken C
compilers (notably SCO ODT 3.0, perhaps early AIX as well) that botched
the static keyword when it was used with a forward declaration of a static
initialized structure. Standard C allows the forward declaration with
static, and we've decided to stop catering to broken C compilers. (In
fact, we expect that the compilers are all fixed eight years later.)
I'm leaving staticforward and statichere defined in object.h as
static. This is only for backwards compatibility with C extensions
that might still use it.
XXX I haven't updated the documentation.
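For reference, the pattern the macros wrapped, as a tiny standalone C sketch (names invented); standard C is fine with the tentative declaration followed by the initialized definition:

#include <stdio.h>

struct ops { void (*hello)(void); };

static struct ops Foo_ops;              /* forward declaration; was "staticforward" */

static void hello(void) { printf("hello\n"); }

static struct ops Foo_ops = { hello };  /* initialized definition; was "statichere" */

int main(void)
{
    Foo_ops.hello();
    return 0;
}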
helper macros to something saner, and used them appropriately in other
files too, to reduce #ifdef blocks.
classobject.c, instance_dealloc(): One of my worst Python Memories is
trying to fix this routine a few years ago when COUNT_ALLOCS was defined
but Py_TRACE_REFS wasn't. The special-build code here is way too
complicated. Now it's much simpler. Difference: in a Py_TRACE_REFS
build, the instance is no longer in the doubly-linked list of live
objects while its __del__ method is executing, and that may be visible
via sys.getobjects() called from a __del__ method. Tough -- the object
is presumed dead while its __del__ is executing anyway, and not calling
_Py_NewReference() at the start allows enormous code simplification.
typeobject.c, call_finalizer(): The special-build instance_dealloc()
pain apparently spread to here too via cut-'n-paste, and this is much
simpler now too. In addition, I didn't understand why this routine
was calling _PyObject_GC_TRACK() after a resurrection, since there's no
plausible way _PyObject_GC_UNTRACK() could have been called on the
object by this point. I suspect it was left over from pasting the
instance_dealloc() code. Instead, I asserted that the object is still
tracked. Caution: I suspect we don't have a test that actually
exercises the subtype_dealloc() __del__-resurrected-me code.
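A hypothetical test along these lines (not from this checkin) would exercise it:

log = []

class Phoenix(list):        # built-in subtype, so subtype_dealloc runs
    def __del__(self):
        log.append(self)    # resurrect: create a fresh reference

p = Phoenix()
del p                       # __del__ fires and resurrects the object
assert len(log) == 1 and isinstance(log[0], Phoenix)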
[ 400998 ] experimental support for extended slicing on lists
somewhat spruced up and better tested than it was when I wrote it.
Includes docs & tests. The whatsnew section needs expanding, and arrays
should support extended slices -- later.
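For lists that means, e.g. (assuming the patch covers getting, setting and deleting extended slices, which the included tests should pin down):

a = range(10)
assert a[::2] == [0, 2, 4, 6, 8]   # getting an extended slice
a[1::3] = ['x', 'y', 'z']          # setting: 3 targets, 3 values
del a[::4]                         # deleting with a step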
The fix makes it possible to call PyObject_GC_UnTrack() more than once
on the same object, and then moves the PyObject_GC_UnTrack() call to
*before* the trashcan code is invoked.
BUGFIX CANDIDATE!
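The idempotency idea, sketched in standalone C (illustrative, not the actual CPython code):

#include <stddef.h>

typedef struct gc_head {
    struct gc_head *gc_next;   /* NULL means "not tracked" */
    struct gc_head *gc_prev;
} gc_head;

static void
untrack(gc_head *g)
{
    if (g->gc_next != NULL) {  /* already untracked?  then no-op */
        g->gc_prev->gc_next = g->gc_next;
        g->gc_next->gc_prev = g->gc_prev;
        g->gc_next = NULL;     /* remember we're untracked */
    }
}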
out the for loop at the end, intended to zero out new items, wasn't
doing anything, because sv->ob_size was already equal to newsize. The
fix slightly refactors the function, introducing a variable oldsize
and doing away with sizediff (which was used only once), and using
oldsize and newsize consistently. I also added comments explaining
what the two for loops do. (Looking at the CVS annotation of this
function, it's no miracle a bug crept in -- this has been patched by
many different folks! :-)
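The reshaped logic, as a standalone sketch with invented names (not the actual listobject.c function):

#include <stdlib.h>

static int
resize(void ***items, size_t *size, size_t newsize)
{
    size_t oldsize = *size;    /* capture before it is overwritten */
    size_t i;
    void **p;

    p = realloc(*items, newsize * sizeof(void *));
    if (p == NULL && newsize != 0)
        return -1;
    /* zero out the new items: from oldsize up to newsize.  The buggy
       loop started at a size that had already been set to newsize,
       so it never ran. */
    for (i = oldsize; i < newsize; i++)
        p[i] = NULL;
    *items = p;
    *size = newsize;
    return 0;
}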
many types were subclassable but had an xxx_dealloc function that
called PyObject_DEL(self) directly instead of deferring to
self->ob_type->tp_free(self). It is permissible to set tp_free in the
type object directly to _PyObject_Del, for non-GC types, or to
_PyObject_GC_Del, for GC types. Still, PyObject_DEL was a tad faster,
so I'm fearing that our pystone rating is going down again. I'm not
sure if doing something like
void xxx_dealloc(PyObject *self)
{
    if (PyXxxCheckExact(self))
        PyObject_DEL(self);
    else
        self->ob_type->tp_free(self);
}
is any faster than always calling the else branch, so I haven't
attempted that -- however those types whose own dealloc is fancier
(int, float, unicode) do use this pattern.
tupledealloc(): only feed the free list when the type is really a
tuple, not a subtype. Otherwise, use PyObject_GC_Del().
_PyTuple_Resize(): disallow using this for tuple subtypes.
Gave Python linear-time repr() implementations for dicts, lists, strings.
This means, e.g., that repr(range(50000)) is no longer 50x slower than
pprint.pprint() in 2.2 <wink>.
I don't consider this a bugfix candidate, as it's a performance boost.
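The trick, restated in Python (a sketch of the idea, not the C code): build the pieces once and join once, instead of repeatedly growing one string.

def linear_repr(lst):
    # each element's repr is built once; one final join -- O(total size)
    pieces = [repr(x) for x in lst]
    return '[' + ', '.join(pieces) + ']'

assert linear_repr([1, 2]) == '[1, 2]'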
Added _PyString_Join() to the internal string API. If we want that in the
public API, fine, but then it requires runtime error checks instead of
asserts.
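To illustrate the distinction (a hedged sketch; the join worker here is just PyObject_CallMethod, not the real string internals):

#include "Python.h"

/* internal flavor: preconditions are asserts */
static PyObject *
join_internal(PyObject *sep, PyObject *seq)
{
    assert(sep != NULL && PyString_Check(sep));
    return PyObject_CallMethod(sep, "join", "O", seq);
}

/* public flavor: the same precondition becomes a runtime check */
static PyObject *
join_public(PyObject *sep, PyObject *seq)
{
    if (sep == NULL || !PyString_Check(sep)) {
        PyErr_SetString(PyExc_TypeError, "separator must be a string");
        return NULL;
    }
    return PyObject_CallMethod(sep, "join", "O", seq);
}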
1. Omit the early-out EQ/NE "lengths different?" test. Was unable to find
any real code where it triggered, but it always costs. The same is not
true of list richcmps, where different-size lists appeared to get
compared about half the time.
2. Because tuples are immutable, there's no need to refetch the lengths of
both tuples from memory again on each loop trip.
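In C, the loop after both changes looks roughly like this (a sketch, not the actual tuplerichcompare code):

#include "Python.h"

static Py_ssize_t
first_difference(PyObject *v, PyObject *w)
{
    /* sizes fetched once -- tuples are immutable (point 2) */
    Py_ssize_t i, vlen = PyTuple_GET_SIZE(v), wlen = PyTuple_GET_SIZE(w);

    for (i = 0; i < vlen && i < wlen; i++) {
        int eq = PyObject_RichCompareBool(PyTuple_GET_ITEM(v, i),
                                          PyTuple_GET_ITEM(w, i), Py_EQ);
        if (eq < 0)
            return -1;          /* error during comparison */
        if (!eq)
            break;              /* first non-equal pair decides */
    }
    return i;                   /* index of first difference */
}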
BUG ALERT: The tuple (and list) richcmp algorithm is arguably wrong,
because it won't believe there's any difference unless Py_EQ returns false
for some corresponding elements:
>>> class C:
... def __lt__(x, y): return 1
... __eq__ = __lt__
...
>>> C() < C()
1
>>> (C(),) < (C(),)
0
>>>
That doesn't make sense -- provided you believe the defn. of C makes sense.
- tuplecontains(): call RichCompare(Py_EQ).
- Get rid of tuplecompare(), in favor of new tuplerichcompare() (a
clone of list_compare()).
- Aligned the comments for large struct initializers.
This patch modifies the type structures of objects that
participate in GC. The object's tp_basicsize is increased when
GC is enabled. GC information is prefixed to the object to
maintain binary compatibility. GC objects also set the
Py_TPFLAGS_GC flag in tp_flags.
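The layout trick, roughly (macro and field names here are illustrative of the idea, not necessarily the patch's):

#include "Python.h"

typedef struct gc_head {
    struct gc_head *gc_next;
    struct gc_head *gc_prev;
    int gc_refs;
} gc_head;

/* The GC info lives at a negative offset, so a PyObject * keeps
   pointing where it always did -- binary compatibility is kept. */
#define AS_GC(op)   ((gc_head *)(op) - 1)
#define FROM_GC(g)  ((PyObject *)((gc_head *)(g) + 1))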
The following patch adds "sq_contains" support to rangeobject, and enables
the already-written support for sq_contains in listobject and tupleobject.
The rangeobject "contains" code should be a bit more efficient than the
current default "in" implementation ;-) It might not get used much, but it's
not that much to add.
listobject.c and tupleobject.c already had code for sq_contains, and the
proper struct member was set, but the PyType structure was not extended to
include tp_flags, so the object-specific code was not getting called (Go
ahead, test it ;-). I also did this for the immutable_list_type in
listobject.c, even though it is probably never used. Symmetry and all that.
For more comments, read the patches@python.org archives.
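The user-visible effect is just that "in" reaches each type's own slot now, e.g. (Python 2 spelling):

assert 3 in xrange(10)      # range's own sq_contains
assert 3 in [1, 2, 3]       # list's sq_contains, now actually reached
assert 3 in (1, 2, 3)       # tuple's sq_contains, likewise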
For documentation read the comments in mymalloc.h and objimpl.h.
(This is not exactly what Vladimir posted to the patches list; I've
made a few changes, and Vladimir sent me a fix in private email for a
problem that only occurs in debug mode. I'm also holding back on his
change to main.c, which seems unnecessary to me.)
_PyTuple_Resize(). In addition, a change suggested by Jeremy Hylton
to limit the size of the free lists is also merged into this patch.
Charles wrote initially:
"""
Test Case: run the following code:
class Nothing:
    def __len__(self):
        return 5
    def __getitem__(self, i):
        if i < 3:
            return i
        else:
            raise IndexError, i

def g(a,*b,**c):
    return

for x in xrange(1000000):
    g(*Nothing())
and watch Python's memory use go up and up.
Diagnosis:
The analysis begins with the call to PySequence_Tuple at line 1641 in
ceval.c - the argument to g is seen to be a sequence but not a tuple,
so it needs to be converted from an abstract sequence to a concrete
tuple. PySequence_Tuple starts off by creating a new tuple of length
5 (line 1122 in abstract.c). Then at line 1149, since only 3 elements
were assigned, _PyTuple_Resize is called to make the 5-tuple into a
3-tuple. When we're all done the 3-tuple is decrefed, but rather than
being freed it is placed on the free_tuples cache.
The basic problem is that the 3-tuples are being added to the cache
but never picked up again, since _PyTuple_Resize doesn't make use of
the free_tuples cache. If you are resizing a 5-tuple to a 3-tuple and
there is already a 3-tuple in free_tuples[3], instead of using this
tuple, _PyTuple_Resize will realloc the 5-tuple to a 3-tuple. It
would be more efficient to use the existing 3-tuple and cache the
5-tuple.
By making _PyTuple_Resize aware of the free_tuples list (just as
PyTuple_New is), we not only save a few calls to realloc, but also
prevent this misbehavior whereby tuples are being added to the
free_tuples list but never properly "recycled".
"""
And later:
"""
This patch replaces my submission of Sun, 16 Apr and addresses Jeremy
Hylton's suggestions that we also limit the size of the free tuple
list. I chose 2000 as the maximum number of tuples of any particular
size to save.
There was also a problem with the previous version of this patch
causing a core dump if Python was built with Py_TRACE_REFS. This is
fixed in the below version of the patch, which uses tupledealloc
instead of _Py_Dealloc.
"""
Added wrapping macros to dictobject.c, listobject.c, tupleobject.c,
frameobject.c, traceback.c that safely prevent core dumps
on stack overflow. Macros and functions in object.c, object.h.
The method is an "elevator destructor" that turns cascading
deletes into tail-recursive behavior when some limit is hit.
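A sketch of the usage pattern in a dealloc routine (assuming the object.h macro names Py_TRASHCAN_SAFE_BEGIN/END; the body is illustrative):

#include "Python.h"

static void
xxx_dealloc(PyObject *op)
{
    Py_TRASHCAN_SAFE_BEGIN(op)
    /* Normal destruction goes here.  Past the nesting limit,
       BEGIN skips this body and parks op on a "delete later"
       chain instead; END then destroys that chain iteratively
       when the outermost deallocation unwinds, so the C stack
       stays bounded no matter how deep the cascade. */
    PyObject_DEL(op);
    Py_TRASHCAN_SAFE_END(op)
}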