the same object to be collected by the cyclic GC support if they are
only referenced by a cycle. If the weakref being collected was one of
the weakrefs without callbacks, some local variables for the
constructor become invalid and have to be re-computed.
The test caused a segfault under a debug build without the fix applied.
John J. Lee writes: "the patch makes it possible to implement
functionality like HTTP cookie handling, Refresh handling,
etc. etc. using handler objects. At the moment urllib2's handler
objects aren't quite up to the job, which results in a lot of
cut-n-paste and subclassing. I believe the changes are
backwards-compatible, with the exception of people who've
reimplemented build_opener()'s functionality -- those people would
need to call opener.add_handler(HTTPErrorProcessor).
The main change is allowing handlers to implement
methods like:
http_request(request)
http_response(request, response)
In addition to the usual
http_open(request)
http_error{_*}(...)
"
Note that the change isn't well documented, at least in part because
handlers aren't well documented at all. Need to fix this.
Add a bunch of new tests. It appears that none of these tests
actually use the network, so they don't need to be guarded by a
resource flag.
test_tuple.py and test_list.py. Common tests for tuple, list and UserList
are shared (in seq_tests.py and list_tests.py). Port tests to PyUnit.
(From SF patch #736962)
Original idea by Guido van Rossum.
Idea for skippable inner iterators by Raymond Hettinger.
Idea for argument order and identity function default by Alex Martelli.
Implementation by Hye-Shik Chang (with tweaks by Raymond Hettinger).
for Big String). This should make the tests pass on Win98SE. Note
that the docs only promise lengths up to 2048. Unfortunately this no
longer tests for the segfault I was seeing earlier, but I'm confident
I've nailed that one. :-) Fixes SF 852281. Will backport to 2.3.
unicode filenames"
Reorganize tests into functions so more combinations of
unicode/encoded/ascii can be tested, and while I was at it, upgrade to a
unittest-based test.
and left shifts. (Thanks to Kalle Svensson for SF patch 849227.)
This addresses most of the remaining semantic changes promised by
PEP 237, except for repr() of a long, which still shows the trailing
'L'. The PEP appears to promise warnings for operations that
changed semantics compared to Python 2.3, but this is not
implemented; we've suffered through enough warnings related to
hex/oct literals and I think it's best to be silent now.
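Roughly what the new behaviour looks like at the prompt (assuming a 32-bit
build of 2.4; the exact cut-over point is platform dependent):

    >>> 0xffffffff             # hex constant no longer wraps around to -1
    4294967295L
    >>> 1 << 40                # left shift overflows quietly into a long
    1099511627776L
    >>> repr(10 ** 20)         # repr() of a long still shows the trailing 'L'
    '100000000000000000000L'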
* Add more tests
* Refactor and neaten the code a bit.
* Rename union_update() to update().
* Improve the algorithms (making them closer to sets.py).
function.
* Add a better test for deepcopying.
* Add tests to show the __init__() function works like it does for list
and tuple. Add related test.
* Have shallow copies of frozensets return self. Add related test.
* Have frozenset(f) return f if f is already a frozenset (see the example
  after this list). Add related test.
* Beefed up some existing tests.
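Interactive illustration of the two frozenset items above:

    >>> f = frozenset('abc')
    >>> frozenset(f) is f       # constructing from an exact frozenset is a no-op
    True
    >>> f.copy() is f           # a shallow copy of a frozenset is the set itself
    True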
Also SF patch 843455.
This is a critical bugfix.
I'll backport to 2.3 maint, but not beyond that. The bugs this fixes
have been there since weakrefs were introduced.
* Install the unittests, docs, newsitem, include file, and makefile update.
* Exercise the new functions wherever sets.py was being used.
Includes the docs for libfuncs.tex. Separate docs for the types are
forthcoming.
subtype_dealloc(): This left the dying object exposed to gc, so that
if cyclic gc triggered during the weakref callback, gc tried to delete
the dying object a second time. That's a disaster. subtype_dealloc()
had a (I hope!) unique problem here, as every normal dealloc routine
untracks the object (from gc) before fiddling with weakrefs etc. But
subtype_dealloc has obscure technical reasons for re-registering the
dying object with gc (already explained in a large comment block at
the bottom of the function).
The fix amounts to simply refraining from reregistering the dying object
with gc until after the weakref callback (if any) has been called.
This is a critical bug (hard to predict, and causes seemingly random
memory corruption when it occurs). I'll backport it to 2.3 later.
Formerly, the underlying queue was implemented in terms of two lists. The
new queue is a series of singly-linked, fixed-length lists.
The new implementation runs much faster, supports multi-way tees, and
allows tees of tees without additional memory costs.
The root ideas for this structure were contributed by Andrew Koenig
and Guido van Rossum.
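A rough pure-Python sketch of that linked-block arrangement (the block size
and all names below are illustrative; the real implementation is in C):

    def tee_sketch(iterable, n=2):
        # Each block caches up to BLOCKLEN values plus a link to the next
        # block.  The returned iterators share the blocks, so a value is
        # stored once and a block is freed as soon as the slowest iterator
        # has moved past it.
        BLOCKLEN = 57
        it = iter(iterable)

        def new_block():
            return {'values': [], 'next': None}

        def follower(block, index):
            while True:
                if index == len(block['values']):
                    if index == BLOCKLEN:
                        # block full and consumed: hop to (or create) the next one
                        if block['next'] is None:
                            block['next'] = new_block()
                        block, index = block['next'], 0
                    if index == len(block['values']):
                        # nothing cached here yet: pull from the shared iterator
                        try:
                            block['values'].append(it.next())
                        except StopIteration:
                            return
                yield block['values'][index]
                index += 1

        head = new_block()
        return tuple([follower(head, 0) for dummy in range(n)])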
memory leak that would've occurred for all iterators that were
destroyed before iterating far enough to raise StopIteration.
* Simplify some code.
* Add new test cases to check for the memleak and ensure that mixing
iteration with modification of the values for existing keys works.
* tee object is no longer subclassable
* independent iterators renamed to "itertools.tee_iterator"
* fixed doc string typo and added entry in the module doc string
charmaptranslate_makespace() allocated more memory than required for the
next replacement but didn't remember that fact, so the memory size grew
exponentially every time a replacement string was longer than one character.
This fixes SF bug #828737.
It works like the pure Python version except:
* it stops storing data after one of the iterators gets deallocated
* the data queue is implemented with two stacks instead of one dictionary.
key provides C support for the decorate-sort-undecorate pattern.
reverse provides a stable sort of the list with the comparisons reversed.
* Amended the docs to guarantee sort stability.
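For example, with the new arguments the usual decorate-sort-undecorate
dance collapses to:

    >>> words = ['banana', 'Apple', 'cherry']
    >>> words.sort(key=str.lower)        # case-insensitive sort, decoration done in C
    >>> words
    ['Apple', 'banana', 'cherry']
    >>> scores = [3, 1, 2]
    >>> scores.sort(reverse=True)        # stable sort with the comparisons reversed
    >>> scores
    [3, 2, 1]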
* Added C coded getrandbits(k) method that runs in linear time.
* Call the new method from randrange() for ranges >= 2**53.
* Adds a warning for generators not defining getrandbits() whenever
  randrange() is called with too large a population (see the example
  after this list).
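Small illustration of the new method (results are random, of course):

    >>> import random
    >>> n = random.getrandbits(100)      # 100 random bits as a (long) integer
    >>> 0 <= n < 2 ** 100
    True
    >>> 0 <= random.randrange(10 ** 20) < 10 ** 20   # big ranges now use getrandbits()
    True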
is None, the next row read is used as the fieldnames. In the common case,
this means the programmer doesn't need to know the fieldnames ahead of time.
The first row of the file will be used. In the uncommon case, this means
the programmer can set the reader's fieldnames attribute to None at any time
and have the next row read as the next set of fieldnames, so a csv file can
contain several "sections", each with different fieldnames.
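A short sketch of the common case (the file name and column names are made up):

    import csv

    # scores.csv is assumed to start with a header row such as "name,score"
    reader = csv.DictReader(open('scores.csv', 'rb'))   # fieldnames left as None
    for row in reader:
        # each row is a dict keyed by the names taken from that first row
        print row['name'], row['score']
    # setting reader.fieldnames = None mid-stream would make the next row
    # read serve as a fresh header for the following "section"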
why in a new comment. My home Win98SE box is one of the "real systems"
alluded to (my system "default sound" appears to have vanished sometime
in the last month, that's certainly not a Python bug, and the MS
PlaySound docs are correct in their explanation of what happens then).
Bugfix candidate. If someone can still sneak it into 2.3.1, that would
be good.
test_bad_address(): Recover from the fact that VeriSign thought it would
boost its corporate coffers by resolving http://www.sadflkjsasadf.com/.
Bugfix candidate -- although the bug is more VeriSign's than Python's!