hack: it would resize *interned* strings in-place! This occurred because
their reference counts do not have their expected value -- stringobject.c
hacks them. Mea culpa.
interning were not clear here -- a subclass could be mutable, for
example -- and had bugs. Explicitly interning a subclass of string
via intern() will raise a TypeError. Internal operations that attempt
to intern a string subclass will have no effect.
Added a few tests to test_builtin that include the old buggy code and
verify that calls like PyObject_SetAttr() don't fail. Perhaps these
tests should have gone in test_string.
unicodedata.east_asian_width(). You can still implement your own
simple width() function using it like this:
    def width(u):
        w = 0
        for c in unicodedata.normalize('NFC', u):
            cwidth = unicodedata.east_asian_width(c)
            if cwidth in ('W', 'F'): w += 2
            else: w += 1
        return w
the case of __del__ resurrecting an object.
This makes the apparent reference leaks in test_descr go away (which I
expected) and also kills off those in test_gc (which is more surprising
but less so once you actually think about it a bit).
comma expression in listpop() that was being returned. Still essentially
unused (as it is meant to be), but now the compiler thinks it is worth
*something* by having it incremented.
of no more than 8 elements cannot fail.
listpop(): Take advantage of the fact that its calls to list_resize() and
list_ass_slice() can't fail. This is assert'ed in a debug build now, but
in an icky way. That is, you can't say:
assert(some_call() >= 0);
because then some_call() won't occur at all in a release build. So it
has to be a big pile of #ifdefs on Py_DEBUG (yuck), or the pleasant:
status = some_call();
assert(status >= 0);
But in that case, compilers may whine in a release build, because status
appears unused then. I'm not certain the ugly trick I used here will
convince all compilers to shut up about status (status is always "used" now,
as the first (ignored) clause in a comma expression).
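For illustration, a minimal compilable sketch of the pattern being
described (some_call() and demo() are hypothetical stand-ins, not the
actual listobject.c code):

    #include <assert.h>

    /* stand-in for a call like list_resize() that returns -1 on error
       and >= 0 on success */
    static int some_call(void)
    {
        return 0;
    }

    int demo(void)
    {
        int status;

        status = some_call();  /* the call must happen in release builds */
        assert(status >= 0);   /* too, so it can't live inside assert() */

        /* Keep status "used" in a release build by making it the first
           (ignored) clause of a comma expression.  Some compilers may
           still warn; (void)status; is another common idiom. */
        return (status, 42);
    }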
impossible to remember, so renamed one to something obvious. Headed
off potential signed-vs-unsigned compiler complaints I introduced by
changing the type of a vrbl to unsigned. Removed the need for the
tedious explanation about "backward pointer loops" by looping on an
int instead.
list_resize(): Document the intent. Code is increasingly relying on
subtle aspects of its behavior, and they deserve to be spelled out.
list_ass_slice(): A bit more simplification, by giving it a common
error exit and initializing more values.
Be clearer in comments about what "size" means (# of elements? # of
bytes?).
While the number of elements in a list slice must fit in an int, there's
no guarantee that the number of bytes occupied by the slice will. That
malloc() and memmove() take size_t arguments is a hint about that <wink>.
So changed to use size_t where appropriate.
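As a hedged sketch of the kind of change meant here (copy_items() is a
hypothetical helper, not code lifted from listobject.c):

    #include <stddef.h>
    #include <string.h>

    /* Copy n object pointers.  n itself fits in an int, but the byte
       count n * sizeof(void *) may not, so compute it in size_t -- the
       type malloc() and memmove() expect anyway. */
    static void copy_items(void **dest, void **src, int n)
    {
        memmove(dest, src, (size_t)n * sizeof(*src));
    }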
ihigh - ilow should always be >= 0, but we never asserted that. We do
now.
The loop decref'ing the recycled slice had a subtle insecurity: C doesn't
guarantee that a pointer one slot *before* an array will compare "less
than" to a pointer within the array (it does guarantee that a pointer
one beyond the end of the array compares as expected). This was actually
an issue in KSR's C implementation, so isn't purely theoretical. Python
probably has other "go backwards" loops with a similar glitch.
list_clear() is OK (it marches an integer backwards, not a pointer).
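A sketch of the two loop shapes (clear_backwards() is illustrative, not
the actual decref loop):

    /* Undefined behavior:  once p moves one slot *before* a, both the
     * decrement and the comparison fall outside what C guarantees:
     *
     *     for (p = &a[n-1]; p >= a; --p)
     *         ...
     *
     * Safe: march an integer backwards instead, as list_clear() does. */
    static void clear_backwards(void **a, int n)
    {
        int i;
        for (i = n - 1; i >= 0; --i)
            a[i] = 0;
    }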
though I tried to be very careful. This is a slight simplification, and it
adds a new feature: a small stack-allocated "recycled" array for the cases
when we don't remove too many items.
It allows PyList_SetSlice() to never fail if:
* you are sure that the object is a list; and
* you either do not remove more than 8 items, or clear the list.
This makes a number of other places in the source code correct again -- there
are some places that delete a single item without checking for MemoryErrors
raised by PyList_SetSlice(), or that clear the whole list, and sometimes the
context doesn't allow an error to be propagated.
invariants allows the ob_item != NULL check to be replaced with an
assertion.
* Added assertions to list_init() which document and verify that the
tp_new slot establishes the invariants. This may preclude a future
bug if a custom tp_new slot is written.
to NULL during the lifetime of the object.
* listobject.c nevertheless did not conform to the other invariants,
either; fixed.
* listobject.c now uses list_clear() as the obvious internal way to clear
a list, instead of abusing list_ass_slice() for that. It makes it easier
to enforce the invariant about ob_item == NULL.
* listsort() sets allocated to -1 during sort; any mutation will set it
to a value >= 0, so it is a safe way to detect mutation (a simplified
sketch follows this entry). A negative value for allocated does not
cause a problem elsewhere currently.
test_sort.py has a new test for this fix.
* listsort() leak: if items were added to the list during the sort, AND if
these items had a __del__ that puts still more stuff into the list,
then this more stuff (and the PyObject** array to hold them) were
overridden at the end of listsort() and never released.
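A simplified sketch of that detection scheme (FakeList and the function
names are illustrative, not the real types):

    /* Simplified model: every legitimate resize records a size >= 0. */
    typedef struct {
        void **ob_item;
        int ob_size;
        int allocated;
    } FakeList;

    static void fake_resize(FakeList *self, int newsize)
    {
        /* ... grow or shrink ob_item here ... */
        self->ob_size = newsize;
        self->allocated = newsize;     /* always >= 0 */
    }

    static int sort_detecting_mutation(FakeList *self)
    {
        self->allocated = -1;          /* sentinel no mutation leaves behind */
        /* ... sort; comparison or __del__ code may call fake_resize() ... */
        return self->allocated != -1;  /* nonzero: list mutated during sort */
    }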
mutation during list.sort() used to rely on the fact that listobject.c
always NULL'ed ob_item when ob_size fell to 0. That's no longer true, so
the test for list mutation during a sort is no longer reliable. Changed
the test to rely instead on the fact that listobject.c now never NULLs
out ob_item after (if ever) ob_item gets a non-NULL value. This new
assumption is also documented now, as a required invariant in
listobject.h.
The new assumption allowed some real simplification to some of the
hairier code in listsort(), so is a Good Thing on that count.
__oct__, and __hex__. Raise TypeError if an invalid type is
returned. Note that PyNumber_Int and PyNumber_Long can still
return ints or longs. Fixes SF bug #966618.
- weakref.ref and weakref.ReferenceType will become aliases for each
other
- weakref.ref will be a modern, new-style class with proper __new__
and __init__ methods
- weakref.WeakValueDictionary will have a lighter memory footprint,
using a new weakref.ref subclass to associate the key with the
value, allowing us to have only a single object of overhead for each
dictionary entry (currently, there are 3 objects of overhead per
entry: a weakref to the value, a weakref to the dictionary, and a
function object used as a weakref callback; the weakref to the
dictionary could be avoided without this change)
- a new macro, PyWeakref_CheckRefExact(), will be added
- PyWeakref_CheckRef() will check for subclasses of weakref.ref
This closes SF patch #983019.
The builtin eval() function now accepts any mapping for the locals argument.
Time-sensitive steps are guarded by PyDict_CheckExact() to keep from
slowing down the normal case. My timings show no measurable impact.
tests which nicely highlight weaknesses).
* Initial value is now a large prime.
* Pre-multiply by the set length to add one more basis of differentiation.
* Work a bit harder inside the loop to scatter bits from sources that
may have closely spaced hash values.
All of this is necessary to make up for keeping the hash function commutative.
Fortunately, the hash value is cached so the call to frozenset_hash() will
only occur once per set.
* Non-zero initial value so that hash(frozenset()) != hash(0).
* Final permutation to differentiate nested sets.
* Add logic to make sure that -1 is not a possible hash value.
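Putting the bullets from both notes together, a hedged sketch of such a
hash function (the constants here are illustrative; the real code is
frozenset_hash() in Objects/setobject.c):

    #include <stddef.h>

    static unsigned long
    set_hash_sketch(const unsigned long *elem_hashes, size_t n)
    {
        unsigned long hash = 1927868237UL;    /* large prime initial value */
        size_t i;

        hash *= (unsigned long)n + 1;         /* pre-multiply by the length */
        for (i = 0; i < n; i++) {
            unsigned long h = elem_hashes[i];
            /* xor keeps the result commutative (order-independent); the
               shift and multiply scatter bits from closely spaced hashes */
            hash ^= (h ^ (h << 16) ^ 89869747UL) * 3644798167UL;
        }
        hash = hash * 69069UL + 907133923UL;  /* final permutation */
        if (hash == (unsigned long)-1)        /* -1 is reserved for errors */
            hash = 590923713UL;
        return hash;
    }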
iswide() for East Asian width manipulation. (Inspired by David
Goodger, Reviewed by Martin v. Loewis)
- Move _PyUnicode_TypeRecord.flags to the end of the struct so that
no padding is added for UCS-4 builds. (Suggested by Martin v. Loewis)
- Neatened the braces in PyList_New().
- Made sure "indexerr" was initialized to NULL.
- Factored if blocks in PyList_Append().
- Made sure "allocated" is initialized in list_init().
close() calls would attempt to free() the buffer already free()ed on
the first close(). [bug introduced with patch #788249]
Making sure that the buffer is free()ed in file object deallocation is
a belt-n-braces bit of insurance against a memory leak.
the newly created tuples, but tuples added to the freelist are now cleared in
tupledealloc already (which is very cheap, because we are already
Py_XDECREF'ing all elements anyway).
Python should have a standard Py_ZAP macro like ZAP in pystate.c.
This gives another 30% speedup for operations such as
map(func, d.iteritems()) or list(d.iteritems()) which can both take
advantage of length information when provided.
* Split into three separate types that share everything except the
code for iternext. Saves run time decision making and allows
each iternext function to be specialized.
* Inlined PyDict_Next(). In addition to saving a function call, this
allows a redundant test to be eliminated and further specialization
of the code for the unique needs of each iterator type.
* Created a reusable result tuple for iteritems(). Saves the malloc
time for tuples when the previous result was not kept by client code
(this is the typical use case for iteritems). If the client code
does keep the reference, then a new tuple is created. (A condensed
sketch of this trick follows this entry.)
Results in a 20% to 30% speedup depending on the size and sparsity
of the dictionary.
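A condensed sketch of the reusable-tuple trick (modeled on the
description above, not the exact dictobject.c code):

    #include <Python.h>

    /* result is the tuple handed out last time, still owned by the
       iterator. */
    static PyObject *
    next_pair(PyObject *result, PyObject *key, PyObject *value)
    {
        if (Py_REFCNT(result) == 1) {
            /* Only the iterator still holds it: recycle it in place. */
            Py_INCREF(result);
            Py_DECREF(PyTuple_GET_ITEM(result, 0));
            Py_DECREF(PyTuple_GET_ITEM(result, 1));
        }
        else {
            /* Client code kept a reference, so make a fresh tuple. */
            result = PyTuple_New(2);
            if (result == NULL)
                return NULL;
        }
        Py_INCREF(key);
        Py_INCREF(value);
        PyTuple_SET_ITEM(result, 0, key);
        PyTuple_SET_ITEM(result, 1, value);
        return result;
    }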
* Factored constant structure references out of the inner loops for
PyDict_Next(), dict_keys(), dict_values(), and dict_items().
Gave measurable speedups to each (the improvement varies depending
on the sparseness of the dictionary being measured).
* Added a freelist scheme styled after that for tuples. Saves around
80% of the calls to malloc and free. About 10% of the time, the
previous dictionary was completely empty; in those cases, the
dictionary initialization with memset() can be skipped.
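A generic sketch of such a freelist scheme (the type and sizes are
illustrative; tuples and dicts each keep their own):

    #include <stdlib.h>
    #include <string.h>

    typedef struct { int fill; void *slots[8]; } Obj;  /* illustrative */

    #define MAXFREE 80
    static Obj *free_list[MAXFREE];
    static int numfree = 0;

    static Obj *alloc_obj(void)
    {
        if (numfree > 0) {
            /* Reuse a dead object: no call to malloc().  (The dict
               version can even skip this memset() when the recycled
               dict was already empty.) */
            Obj *op = free_list[--numfree];
            memset(op, 0, sizeof(Obj));
            return op;
        }
        return (Obj *)calloc(1, sizeof(Obj));
    }

    static void dealloc_obj(Obj *op)
    {
        if (numfree < MAXFREE)
            free_list[numfree++] = op;  /* park it for later reuse */
        else
            free(op);
    }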
scheme in situations that likely won't benefit from it. This further
improves memory utilization from Py2.3 which always over-allocates
except for PyList_New().
Situations expected to benefit from over-allocation:
list.insert(), list.pop(), list.append(), and list.extend()
Situations deemed unlikely to benefit:
list_inplace_repeat, list_ass_slice, list_ass_subscript
The most gray area was for listextend_internal() which only runs
when the argument is a list or a tuple. This could be viewed as
a one-time fixed length addition or it could be viewed as wrapping
a series of appends. I left its over-allocation turned on but
could be convinced otherwise.
worth it to in-line the call to PyIter_Next().
Saves another 15% on most list operations that accept a general
iterable argument (such as the list constructor).
avoids creating an intermediate tuple for iterable arguments other than
lists or tuples.
In other words, a+=b no longer requires extra memory when b is not a
list or tuple. The list and tuple cases are unchanged.
for xrange and list objects).
* list.__reversed__ now checks the length of the sequence object before
calling PyList_GET_ITEM() because the mutable sequence could have
changed length.
* all three implementations are now transparent with respect to length and
maintain the invariant len(it) == len(list(it)) even when the underlying
sequence mutates.
* __builtin__.reversed() now frees the underlying sequence as soon
as the iterator is exhausted.
* the code paths were rearranged so that the most common paths
do not require a jump.
* Replace sprintf message with a constant message string -- this error
message ran on every invocation except straight deletions but it was
only needed when the rhs was not iterable. The message was also
out-of-date and did not reflect that iterable arguments were allowed.
* For inner loops that do not make ref count adjustments, use memmove()
for fast copying and better readability.
* For inner loops that do make ref count adjustments, speed them up by
factoring out the constant structure reference and using vitem[] instead.
* Using addition instead of subtraction on array indices allows the
compiler to use a fast addressing mode. Saves about 10%.
* Using PyTuple_GET_ITEM and PyList_SET_ITEM is about 7% faster than
PySequence_Fast_GET_ITEM, which has to make a list check on every pass.
(Championed by Bob Ippolito.)
The update() method for mappings now accepts all the same argument forms
as the dict() constructor. This includes item lists and/or keyword
arguments.
recent gcc on Linux/x86)
[ 899109 ] 1==float('nan')
by implementing rich comparisons for floats.
Seems to make comparisons involving NaNs somewhat less surprising
when the underlying C compiler actually implements C99 semantics.
utilization, and speed:
* Moved the responsibility for emptying the previous list from list_fill
to list_init.
* Replaced the code in list_extend with the superior code from list_fill.
* Eliminated list_fill.
Results:
* list.extend() no longer creates an intermediate tuple except to handle
the special case of x.extend(x). This saves memory and time.
* list.extend(x) runs
5 to 10% faster when x is a list or tuple
15% faster when x is an iterable not defining __len__
twice as fast when x is an iterable defining __len__
* the code is about 15 lines shorter and no longer duplicates
functionality.
The Py2.3 approach overallocated small lists by up to 8 elements.
The last checkin limited this to one but slowed down (by 20 to 30%)
the creation of small lists of between 3 and 8 elements.
This tune-up balances the two, limiting overallocation to 3 elements
(significantly reducing space consumption from Py2.3) and running faster
than the previous checkin.
The first part of the growth pattern (0, 4, 8, 16) neatly meshes with
allocators that trigger data movement only when crossing a power of two
boundary. Also, the even numbers mesh well with common data alignments.
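Taken together with the realloc()-avoidance described in the next note,
the policy can be sketched like this (a simplified model, not the actual
list_resize()):

    #include <stdlib.h>

    typedef struct {      /* simplified model of a list */
        void **ob_item;
        int ob_size;      /* number of elements in use */
        int allocated;    /* number of slots actually allocated */
    } List;

    static int
    list_resize_sketch(List *self, int newsize)
    {
        void **items;
        int new_allocated;

        /* Enough room already, and not wastefully too much: just record
           the new size and skip realloc() entirely. */
        if (self->allocated >= newsize && newsize >= (self->allocated >> 1)) {
            self->ob_size = newsize;
            return 0;
        }
        /* Over-allocate by ~1/8 plus a small constant (3 for small
           lists); repeated appends grow through 0, 4, 8, 16, 25, ... */
        new_allocated = newsize + (newsize >> 3) + (newsize < 9 ? 3 : 6);
        if (newsize == 0)
            new_allocated = 0;
        items = (void **)realloc(self->ob_item,
                                 (size_t)new_allocated * sizeof(void *));
        if (items == NULL && new_allocated > 0)
            return -1;               /* failure: leave the list unchanged */
        self->ob_item = items;
        self->ob_size = newsize;
        self->allocated = new_allocated;
        return 0;
    }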
realloc(). This is achieved by tracking the overallocation size in a new
field and using that information to skip calls to realloc() whenever
possible.
* Simplified and tightened the amount of overallocation. For larger lists,
this overallocates by 1/8th (compared to the previous scheme which ranged
between 1/4th and 1/32nd over-allocation). For smaller lists (n<6), the
maximum overallocation is one element (formerly it could be up to eight
elements). This saves memory in applications with large numbers of small lists.
* Eliminated the NRESIZE macro in favor of a new, static list_resize function
that encapsulates the resizing logic. Converting this back to a macro would
give a small (under 1%) speed-up. This was too small to warrant the loss
of readability, maintainability, and de-coupling.
* Some functions using NRESIZE had grown unnecessarily complex in their
efforts to bend to the macro's calling pattern. With the new list_resize
function in place, those other functions could be simplified. That is
being saved for a separate patch.
* The ob_item==NULL check could be eliminated from the new list_resize
function. This would entail finding each piece of code that sets ob_item
to NULL and adding a new line to invalidate the overallocation tracking
field. Rather than impose a new requirement on other pieces of list code,
it was preferred to leave the NULL check in place and retain the benefits
of decoupling, maintainability and information hiding (only PyList_New()
and list_sort() need to know about the new field). This approach also
reduces the odds of breaking an extension module.
(Collaborative effort by Raymond Hettinger, Hye-Shik Chang, Tim Peters,
and Armin Rigo.)
the same object to be collected by the cyclic GC support if they are
only referenced by a cycle. If the weakref being collected was one of
the weakrefs without callbacks, some local variables for the
constructor became invalid and had to be re-computed.
The test caused a segfault under a debug build without the fix applied.
Formerly, length data was fetched only from sequence objects.
Now, any object that reports its length can benefit from pre-sizing.
On one sample timing, it gave a threefold speedup for list(s) where s
was a set object.
The special-case code that was removed could return a value indicating
success but leave an exception set. test_fileinput failed in a debug
build as a result.
which can be reviewed via
http://coding.derkeiler.com/Archive/Python/comp.lang.python/2003-12/1011.html
Duncan Booth investigated, and discovered that an "optimisation" was
in fact a pessimisation for small numbers of elements in a source list,
compared to not having the optimisation, although with large numbers
of elements in the source list the optimisation was quite beneficial.
He posted his change to comp.lang.python (but not to SF).
Further research has confirmed his assessment that the optimisation only
becomes a net win when the source list has more than 100 elements.
I also found that the optimisation could apply to tuples as well,
but the gains only arrive with source tuples larger than about 320
elements and are nowhere near as significant as the gains with lists,
(~95% gain @ 10000 elements for lists, ~20% gain @ 10000 elements for
tuples) so I haven't proceeded with this.
The code, as it was, applied the optimisation to list subclasses as
well, and this also appears to be a net loss for all reasonable sized
sources (~80-100% for up to 100 elements, ~20% for more than 500
elements; I tested up to 10000 elements).
Duncan also suggested special casing empty lists, which I've extended
to all empty sequences.
On the basis that list_fill() is only ever called with a list for the
result argument, the test for the source being the destination now
happens before testing source types.
bit by checking the value of UCHAR_MAX in Include/Python.h. There was a
check in Objects/stringobject.c. Remove that. (Note that we don't define
UCHAR_MAX if it's not defined as the old test did.)
and left shifts. (Thanks to Kalle Svensson for SF patch 849227.)
This addresses most of the remaining semantic changes promised by
PEP 237, except for repr() of a long, which still shows the trailing
'L'. The PEP appears to promise warnings for operations that
changed semantics compared to Python 2.3, but this is not
implemented; we've suffered through enough warnings related to
hex/oct literals and I think it's best to be silent now.
* Add more tests
* Refactor and neaten the code a bit.
* Rename union_update() to update().
* Improve the algorithms (making them closer to sets.py).
function.
* Add a better test for deepcopying.
* Add tests to show the __init__() function works like it does for list
and tuple. Add related test.
* Have shallow copies of frozensets return self. Add related test.
* Have frozenset(f) return f if f is already a frozenset. Add related test.
* Beefed-up some existing tests.
by the function object or by the method object, the function
object's attribute usually wins. Christian Tismer pointed out that
this is really a mistake, because this only happens for special
methods (like __reduce__) where the method object's version is
really more appropriate than the function's attribute. So from now
on, all method attributes will have precedence over function
attributes with the same name.
* Improve the hash function to increase the chance that distinct sets will
have distinct xor'd hash totals.
* Use PyDict_Merge where possible (it is faster than an equivalent iter/set
pair).
* Don't rebuild dictionaries where the input already has one.
Also SF patch 843455.
This is a critical bugfix.
I'll backport to 2.3 maint, but not beyond that. The bugs this fixes
have been there since weakrefs were introduced.
* Install the unittests, docs, newsitem, include file, and makefile update.
* Exercise the new functions wherever sets.py was being used.
Includes the docs for libfuncs.tex. Separate docs for the types are
forthcoming.
subtype_dealloc(): This left the dying object exposed to gc, so that
if cyclic gc triggered during the weakref callback, gc tried to delete
the dying object a second time. That's a disaster. subtype_dealloc()
had a (I hope!) unique problem here, as every normal dealloc routine
untracks the object (from gc) before fiddling with weakrefs etc. But
subtype_dealloc has obscure technical reasons for re-registering the
dying object with gc (already explained in a large comment block at
the bottom of the function).
The fix amounts to simply refraining from reregistering the dying object
with gc until after the weakref callback (if any) has been called.
This is a critical bug (hard to predict, and causes seemingly random
memory corruption when it occurs). I'll backport it to 2.3 later.
charmaptranslate_makespace() allocated more memory than required for the
next replacement but didn't remember that fact, so memory size was growing
exponentially every time a replacement string was longer than one character.
This fixes SF bug #828737.
key provides C support for the decorate-sort-undecorate pattern.
reverse provides a stable sort of the list with the comparisons reversed.
* Amended the docs to guarantee sort stability.
If a length-1 Unicode string was in the freelist and it was
uninitialized or pointed to a very large (magnitude) negative number,
the check
unicode_latin1[unicode->str[0]] == unicode
could cause a segmentation violation, e.g. unicode->str[0] is 0xcbcbcbcb.
Fix this in two ways:
1. Change the guard before unicode_latin1[] to test against 256U. If I
understand correctly, the unsigned long used to store UCS4 on my
box was getting converted to a signed long to compare with the
signed constant 256.
2. Change _PyUnicode_New() to make sure the first element of str is
always initialized to zero. There are several places in the code
where the caller can exit with an error before initializing any
of str, which would leave junk in str[0].
Also, silence a compiler warning on pointer vs. int arithmetic.
Bug fix candidate.
The unicode_resize() family only returns -1 or 0 so simply checking
for != 0 is sufficient, but somewhat unclear. Many Python API
functions return < 0 on error, reserving the right to return 0 or 1 on
success. Change the call sites for consistency with these calls.
file_truncate(): C doesn't define what fflush(fp) does if fp is open
for update, and the preceding I/O operation on fp was input. On Windows,
fflush() actually changes the current file position then. Because
Windows doesn't support ftruncate() directly, this not only caused
Python's file.truncate() to change the file position (contra our docs),
it also caused the file not to change size.
Repaired by getting the initial file position at the start, restoring
it at the end, and tossing all the complicated micro-efficiency checks
trying to avoid "provably unnecessary" seeks. file.truncate() can't
be a frequent operation, and seeking to the current file position has
got to be cheap anyway.
Bugfix candidate.
[ 784825 ] fix obscure crash in descriptor handling
Should be applied to release23-maint and in all likelihood
release22-maint, too.
Certainly doesn't apply to release21-maint.
number. This accounts for the 2 refcount leaks per test_complex run
Michael Hudson discovered (I figured only I would have the stomach to
look for leaks in floating-point code <wink>).
when an encoding error occurs and the callback name is unknown,
i.e. when the callback has to be called. The problem was that
the fact that the callback has already been looked up was only
recorded in a local variable in charmap_encoding_error(), because
charmap_encoding_error() got its own copy of the errorHandler
pointer instead of a pointer to the pointer in
PyUnicode_EncodeCharmap().
Now test_descr only appears to leak two references & I think these
are in fact illusory (it's to do with things getting resurrected in
__del__ methods & it's easy to believe confusion occurs when that
happens <wink>). Woohoo!
Sure looks like it to me! <wink>
When I run the leak2.py script I posted to python-dev, I only see
three reference leaks in all of test_descr. When I run
test_descr.test_main, I still see 46 leaks. This clearly demands
posting a yelp to python-dev :-)
This certainly should be applied to release23-maint, and in all
likelihood release22-maint as well.
The !PyType_Check(base) check snuck in as part of rev 2.215, but was
unrelated to the SF patch that is mentioned in the checkin comment.
The test is currently unnecessary because base is set to the return
value of best_bases(), which returns a type or NULL.