and left shifts. (Thanks to Kalle Svensson for SF patch 849227.)
This addresses most of the remaining semantic changes promised by
PEP 237, except for repr() of a long, which still shows the trailing
'L'. The PEP appears to promise warnings for operations that
changed semantics compared to Python 2.3, but this is not
implemented; we've suffered through enough warnings related to
hex/oct literals and I think it's best to be silent now.
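A rough illustration of the changed shift semantics (output shown for a
Python 2.4-era interpreter; the trailing 'L' noted above is a Python 2.x
detail):

    print(1 << 100)        # 1267650600228229401496703205376, silently a long
    print(repr(1 << 100))  # still ends in 'L': repr() of a long is unchanged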
* Add more tests
* Refactor and neaten the code a bit.
* Rename union_update() to update().
* Improve the algorithms (making them closer to sets.py).
* Add a better test for deepcopying.
* Add tests to show that the __init__() function works like it does for list
  and tuple.
* Have shallow copies of frozensets return self. Add related test.
* Have frozenset(f) return f if f is already a frozenset. Add related test.
* Beefed up some existing tests.
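A small sketch of the behaviors listed above, as observed from Python code
(using the built-in set/frozenset objects):

    import copy

    f = frozenset([1, 2, 3])
    print(frozenset(f) is f)   # True: no needless copy of an already-frozen set
    print(copy.copy(f) is f)   # True: shallow copies of frozensets return self

    s = set([1, 2])
    s.update([2, 3])           # update() is the new name for union_update()
    print(sorted(s))           # [1, 2, 3]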
Also SF patch 843455.
This is a critical bugfix.
I'll backport to 2.3 maint, but not beyond that. The bugs this fixes
have been there since weakrefs were introduced.
* Install the unittests, docs, newsitem, include file, and makefile update.
* Exercise the new functions wherever sets.py was being used.
Includes the docs for libfuncs.tex. Separate docs for the types are
forthcoming.
subtype_dealloc(): This left the dying object exposed to gc, so that
if cyclic gc triggered during the weakref callback, gc tried to delete
the dying object a second time. That's a disaster. subtype_dealloc()
had a (I hope!) unique problem here, as every normal dealloc routine
untracks the object (from gc) before fiddling with weakrefs etc. But
subtype_dealloc has obscure technical reasons for re-registering the
dying object with gc (already explained in a large comment block at
the bottom of the function).
The fix amounts to simply refraining from reregistering the dying object
with gc until after the weakref callback (if any) has been called.
This is a critical bug (hard to predict, and causes seemingly random
memory corruption when it occurs). I'll backport it to 2.3 later.
Formerly, underlying queue was implemented in terms of two lists. The
new queue is a series of singly-linked fixed length lists.
The new implementation runs much faster, supports multi-way tees, and
allows tees of tees without additional memory costs.
The root ideas for this structure were contributed by Andrew Koenig
and Guido van Rossum.
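For example, multi-way tees and tees of tees now compose naturally (a usage
sketch; the memory-cost claim refers to the internal block structure
described above):

    from itertools import count, tee

    a, b = tee(count())        # two independent iterators over one source
    c, d = tee(b)              # a tee of a tee shares the same underlying data
    print(next(a), next(c), next(d), next(a))   # 0 0 0 1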
memory leak that would've occurred for all iterators that were
destroyed before having iterated until they raised StopIteration.
* Simplify some code.
* Add new test cases to check for the memleak and ensure that mixing
iteration with modification of the values for existing keys works.
* tee object is no longer subclassable
* independent iterators renamed to "itertools.tee_iterator"
* fixed doc string typo and added entry in the module doc string
charmaptranslate_makespace() allocated more memory than required for the
next replacement but didn't remember that fact, so the memory size grew
exponentially every time a replacement string was longer than one character.
This fixes SF bug #828737.
It works like the pure Python version except:
* it stops storing data after one of the iterators gets deallocated
* the data queue is implemented with two stacks instead of one dictionary.
key provides C support for the decorate-sort-undecorate pattern.
reverse provides a stable sort of the list with the comparisons reversed.
* Amended the docs to guarantee sort stability.
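A small illustration of the two keyword arguments (the decorate-sort-undecorate
version is shown only for comparison):

    words = ['banana', 'Apple', 'cherry']

    # The old decorate-sort-undecorate idiom:
    decorated = [(w.lower(), i, w) for i, w in enumerate(words)]
    decorated.sort()
    dsu = [w for _, _, w in decorated]

    # The key= argument does the same thing without hand-built tuples:
    words.sort(key=str.lower)
    print(words == dsu)        # True

    nums = [3, 1, 2]
    nums.sort(reverse=True)    # stable descending sort
    print(nums)                # [3, 2, 1]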
* Added C coded getrandbits(k) method that runs in linear time.
* Call the new method from randrange() for ranges >= 2**53.
* Added a warning for generators that do not define getrandbits() whenever
  randrange() is called with too large a population.
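A quick sketch of the new method and its effect on large ranges:

    import random

    print(random.getrandbits(100))    # an integer built from 100 random bits
    # randrange() can now cover huge populations without losing precision to
    # a C double (previously an issue for ranges >= 2**53):
    print(random.randrange(10 ** 30))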
is None, the next row read is used as the fieldnames. In the common case,
this means the programmer doesn't need to know the fieldnames ahead of time.
The first row of the file will be used. In the uncommon case, this means
the programmer can set the reader's fieldnames attribute to None at any time
and have the next row read as the next set of fieldnames, so a csv file can
contain several "sections", each with different fieldnames.
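For example (a minimal sketch; StringIO stands in for a real csv file):

    import csv
    from io import StringIO

    data = StringIO("name,age\r\nalice,30\r\nbob,25\r\n")
    reader = csv.DictReader(data)      # no fieldnames given ...
    for row in reader:                 # ... so the first row supplies them
        print(row["name"], row["age"])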
why in a new comment. My home Win98SE box is one of the "real systems"
alluded to (my system "default sound" appears to have vanished sometime
in the last month, that's certainly not a Python bug, and the MS
PlaySound docs are correct in their explanation of what happens then).
Bugfix candidate. If someone can still sneak it into 2.3.1, that would
be good.
test_bad_address(): Recover from the fact that VeriSign thought it would
boost its corporate coffers by starting to resolve http://www.sadflkjsasadf.com/.
Bugfix candidate -- although the bug is more VeriSign's than Python's!
Add support for the iterator and mapping protocols.
For Py2.3, this was done for shelve, dumbdbm and other mapping objects, but
not for bsddb and dbhash which were inadvertently missed.
file_truncate(): C doesn't define what fflush(fp) does if fp is open
for update, and the preceding I/O operation on fp was input. On Windows,
fflush() actually changes the current file position then. Because
Windows doesn't support ftruncate() directly, this not only caused
Python's file.truncate() to change the file position (contra our docs),
it also caused the file not to change size.
Repaired by getting the initial file position at the start, restoring
it at the end, and tossing all the complicated micro-efficiency checks
trying to avoid "provably unnecessary" seeks. file.truncate() can't
be a frequent operation, and seeking to the current file position has
got to be cheap anyway.
Bugfix candidate.
random.sample() uses one of two algorithms depending on the ratio of the
sample size to the population size. One of the algorithms accepted any
iterable population argument so long as it defined __len__(). The other
had a stronger requirement that the population argument be indexable.
While it met the documentation specifications which insisted that the
population argument be a sequence, it made random.sample() less usable
with sets. So, the second algorithm was modified to coerce non-indexable
iterables and dictionaries into a tuple before proceeding.
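A usage sketch of the relaxed behavior (an explicit list() conversion works on
any version, so the sketch shows that form and notes the shortcut in a comment):

    import random

    colours = set(['red', 'green', 'blue', 'yellow'])
    print(random.sample(list(colours), 2))   # explicit coercion always works
    # With this change, random.sample(colours, 2) also works directly: the
    # non-indexable population is coerced to a tuple before sampling.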
For smaller datasets, it is not always true that increasing the compression
level results in better compression. Removed the test which made this
invalid assumption.
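For example, with zlib (the exact lengths are data- and library-dependent; the
point is only that a higher level is not guaranteed to be smaller on tiny
inputs):

    import zlib

    data = b"a short string that hardly compresses"
    for level in (1, 6, 9):
        print(level, len(zlib.compress(data, level)))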
(Contributed by Walter Dörwald).
* Convert three test modules to unittest format.
* Expanded coverage in test_structseq.py.
* Raymond added a new test in test_sets.py
When the indents were set to longer than the width and long word breaking
was enabled, an infinite loop would result because the inner loop did not
assure that at least one character was stripped off on every pass.
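The failing shape was roughly this (an indent wider than the width plus an
unbreakable word); after the fix each pass chops off at least one character,
so wrap() terminates:

    import textwrap

    print(textwrap.wrap("supercalifragilistic", width=4,
                        initial_indent="      ",
                        subsequent_indent="      "))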
platforms (e.g., Cygwin) that are "particular" about open files, this will
cause other regression tests that use the same temp file to fail:
$ ./python.exe -E -tt Lib/test/regrtest.py -l
test_largefile test_mmap test_mutants
test_largefile
test test_largefile failed -- got -1794967295L, but expected 2500000001L
test_mmap
test test_mmap crashed -- exceptions.IOError: [Errno 13] Permission denied: '@test'
test_mutants
test test_mutants crashed -- exceptions.IOError: [Errno 13] Permission denied: '@test'
This patch solves the problem by adding missing "try/finally" blocks. Note
that the "large" size of this patch is due to many white space changes --
otherwise, the patch is small.
I tested this patch under Red Hat Linux 8.0 too.
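The pattern the patch adds looks roughly like this (the file name and data are
illustrative; the point is that the finally clause runs even when an assertion
in the middle fails, so the temp file is always closed and removable by the
next test):

    import os

    TESTFN = "@test"            # the shared scratch-file name used by the tests
    f = open(TESTFN, "wb")
    try:
        f.write(b"x" * 1024)    # work (and assertions) that might raise mid-way
    finally:
        f.close()               # always runs, so the file can be reused/removed
        os.remove(TESTFN)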
* Relaxed the argument restrictions for non-operator methods. They now
allow any iterable instead of requiring a set. This makes the module
a little easier to use and paves the way for an efficient C
implementation which can take better advantage of iterable arguments
while screening out immutables.
* Deprecated Set.update() because it now duplicates Set.union_update()
* Adapted the tests and docs to include the above changes.
* Added more test coverage including testing identities and checking
to make sure non-restartable generators work as arguments.
Will backport to Py2.3.1 so that the interface remains consistent
across versions. The deprecation of update() will be changed to
a FutureWarning.
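A usage sketch of the relaxed argument handling (Python 2.x only, since the
sets module was later removed; the generator shows that a non-restartable
argument is accepted):

    from sets import Set        # Python 2.x; removed in Python 3

    s = Set([1, 2, 3])
    s.union_update(iter([3, 4, 5]))    # any iterable, not just another Set
    s.intersection_update(x for x in range(10) if x % 2 == 0)
    print(s)                           # Set([2, 4])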
[ 784825 ] fix obscure crash in descriptor handling
Should be applied to release23-maint and in all likelihood
release22-maint, too.
Certainly doesn't apply to release21-maint.
The default seed is time.time().
Multiplied by 256 before truncating so that fractional seconds are used.
This way, two successive calls to random.seed() are much more likely
to produce different sequences.
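The idea, roughly (a sketch of the seeding scheme described above, not the
exact committed code):

    import time

    # Scaling by 256 keeps some bits of the fractional second, so two
    # seedings within the same second still usually produce different seeds.
    a = int(time.time() * 256)
    time.sleep(0.01)
    b = int(time.time() * 256)
    print(a != b)               # almost always True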
Also removed the now-unnecessary property attributes, both for thread safety
(there are no longer any lazy attributes) and for code simplicity.
Timezone storage has been reworked to be simpler and more flexible. All values
in LocaleTime instances are lower-cased. This is all done to simplify the
module.
The module now assumes that nothing beyond the strptime() function will be
exposed for general use.
caught when executing test_strptime, test_logging, and test_time in that order
when the testing of "%c" occurred. Suspect the cache was not being recreated
(the test passed when test_logging was forced to re-establish the locale).
Obtain the original locale in the documented way. This way actually
works for me.
Restore the original locale at the end, instead of forcing to "C".
Move the locale fiddling into the test driver instead of doing it as a
side effect of merely importing the module. I don't know why the test
is mucking with locale (and also added a comment saying so), but it
surely has no justification for doing that as an import side-effect.
Now whenever the locale-changing code executes, the locale-restoring code
will also get run.
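The save/restore pattern now used by the test driver looks roughly like this
(the "documented way" being a setlocale() query with no second argument):

    import locale

    saved = locale.setlocale(locale.LC_TIME)          # query the current locale
    try:
        locale.setlocale(locale.LC_TIME, "C")         # whatever the test needs
        # ... run the locale-sensitive tests ...
    finally:
        locale.setlocale(locale.LC_TIME, saved)       # restore, don't force "C"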
arbitrary bytes before the actual zip compatible archive. Zipfiles
containing comments at the end of the file are still not supported.
Add a testcase to test_zipimport, and update NEWS.
This closes sf #775637 and sf #669036.
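A sketch of what now works (the file name and leading bytes are illustrative;
zipfile opened in append mode writes a valid archive after the existing
prefix, and the import machinery can then use it):

    import sys, zipfile

    name = "prefixed_archive.zip"                    # illustrative file name
    with open(name, "wb") as f:
        f.write(b"arbitrary leading bytes, e.g. an exe stub\n")
    with zipfile.ZipFile(name, "a") as z:            # append a zip after the prefix
        z.writestr("hello.py", "x = 42\n")

    sys.path.insert(0, name)
    import hello
    print(hello.x)                                   # 42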
If this doesn't happen, it leaves the locale in a state that can cause
other tests to fail. For example, running test_strptime,
test_logging, and test_time in that order.
* It ran fine under "python regrtest.py test_warnings" but failed under
"python regrtest.py" presumably because other tests would add to
filtered warnings and not reset them at the end of the test.
* Converted to a unittest format for better control. Renamed
monkey() and unmonkey() to setUp() and tearDown().
* Increased coverage by testing all warnings in __builtin__.
* Increased coverage by testing regex matching of specific messages.
reported consistently with the *nix world. 'Lib/test/test_warnings.py'
came out as 'lib\test\test_warnings.py'. The basename is all we care
about so I used that.
Add API function simplefilter() that does not create or install
regular expressions to match message or module. Extend the filters
data structure to store None as an alternative to re.compile("").
Move the _test() function to test_warnings and add some code to try
and avoid disturbing the global state of the warnings module.
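A quick sketch of the new function next to the old one (simplefilter() takes
no message/module patterns, so nothing gets compiled):

    import warnings

    warnings.filterwarnings("ignore", message=".*deprecated.*",
                            category=DeprecationWarning)    # compiles regexes
    warnings.simplefilter("ignore", DeprecationWarning)     # compiles none
    warnings.warn("old API is deprecated", DeprecationWarning)  # silently dropped

    warnings.resetwarnings()
    warnings.simplefilter("error")     # escalate any warning to an exception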
time.tzname[1] and not time.daylight`` is true when it should be true only
when time.daylight is true. Tests are also fixed.
Closes bug #763047 and its cohort #763052.
because it was still looking in the ossaudiodev module namespace for
this symbol.
As the symbol has already been rebound as a global, use that instead.
Analysis by Bob Halley:
The test seems to expect that if time.daylight is true, then the
is_dst field of the tm structure will be 1 too. But this isn't
the case, since daylight is true if the timezone does DST, *not*
if DST is in effect.
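The distinction in code (values depend on the local timezone; the point is
that the two flags answer different questions):

    import time

    print(time.daylight)              # 1 if the timezone *has* a DST rule at all
    print(time.localtime().tm_isdst)  # 1 only while DST is actually in effect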
SF bug #760703: SocketHandler and LogRecord don't work well together
SF bug #757821: logging module docs
Applied Vinay Sajip's patch with a few minor fixups and a NEWS item.
Patched __init__.py - added new function
makeLogRecord (for bug report 760703).
Patched handlers.py - updated some docstrings and
deleted some old commented-out code.
Patched test_logging.py to make use of makeLogRecord.
Patched liblogging.tex to fill documentation gaps (both
760703 and bug 757821).
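A small sketch of the new function (the dict of attributes is the kind of
thing a SocketHandler peer receives over the wire):

    import logging

    rec = logging.makeLogRecord({'name': 'demo', 'levelno': logging.INFO,
                                 'levelname': 'INFO', 'msg': 'hello %s',
                                 'args': ('world',)})
    print(rec.getMessage())           # hello world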
The interning of short strings violates the refcnt==1 assumption for
_PyString_Resize().
A simple fix is to boost the initial value of "totalnew" by 1.
Combined with a NULL argument to PyString_FromStringAndSize(),
this assures that the resulting format string is not interned.
This will remain true even if the implementation of
PyString_FromStringAndSize() changes, because the only uninitialized
strings that can be interned are those of zero length.
Added a test case.
* Updated comment on design of imap()
* Added untraversed object in izip() structure
* Replaced the pairwise() example with a more general window() example
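The window() example is along these lines (a sketch of such a recipe, not
necessarily the exact text that went into the docs):

    from itertools import islice

    def window(seq, n=3):
        "Sliding window: s -> (s0,s1,s2), (s1,s2,s3), ..."
        it = iter(seq)
        result = tuple(islice(it, n))
        if len(result) == n:
            yield result
        for elem in it:
            result = result[1:] + (elem,)
            yield result

    print(list(window(range(5))))     # [(0, 1, 2), (1, 2, 3), (2, 3, 4)]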
code use proper functions to get paths.
Changed the name of the tar file that is searched for to be absolute (i.e., not
using os.extsep) since the filename is locked in based on the name of the file
in CVS (testtar.tar).
Closes bug #731403.
to test the new setparameters() interface.
Modified play_sound_file() to print the elapsed time taken to play the
test sample (to the nearest 0.1 sec).
Python-Dev. Fixed typos in test comments. Added some trivial new test
guts to show the parallelism (now) among __delitem__, __setitem__ and
__getitem__ wrt error conditions.
Still a bugfix candidate for 2.2.3 final, but waiting for Fred to get a
chance to chime in.
Someone review this, please! Final releases are getting close, Fred
(the weakref guy) won't be around until Tuesday, and the pre-patch
code can indeed raise spurious RuntimeErrors in the presence of
threads or mutating comparison functions.
See the bug report for my confusions: I can't see any reason for why
__delitem__ iterated over the keys. The new one-liner implementation
is much faster, can't raise RuntimeError, and should be better-behaved
in all respects wrt threads.
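The new implementation is essentially a delete keyed on the weak reference,
along these lines (a sketch of the idea; the class and names here are
illustrative, not the real weakref.WeakKeyDictionary):

    from weakref import ref

    class WeakKeyDictionarySketch:
        def __init__(self):
            self.data = {}                  # maps ref(key) -> value
        def __setitem__(self, key, value):
            self.data[ref(key)] = value
        def __delitem__(self, key):
            # Single lookup; no iteration over keys, so no spurious
            # RuntimeError from the dict changing size mid-iteration.
            del self.data[ref(key)]

    class C(object):
        pass

    d = WeakKeyDictionarySketch()
    k = C()
    d[k] = 1
    del d[k]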
New tests test_weak_keyed_bad_delitem and
test_weak_keyed_cascading_deletes fail before this patch.
Bugfix candidate for 2.2.3 too, if someone else agrees with this patch.
float_pow(): Don't let the platform pow() raise -1.0 to an integer power
anymore; at least glibc gets it wrong in some cases. Note that
math.pow() will continue to deliver wrong (but platform-native) results
in such cases.
patterns as floats/doubles results in floating point exceptions.
Fix this by implementing a separate test_byteswap() for the floating
point tests. This new test compares the tostring() values of both arrays
instead of the arrays themselves.
Discovered by Neal Norwitz.
The compiler was resetting the list comprehension tmpname counter for each
function, but the symtable was using the same counter for the entire module.
Repair by moving tmpname into the symtable entry.
Bugfix candidate.
one good use: a subclass adding a method to express the duration as
a number of hours (or minutes, or whatever else you want to add). The
native breakdown into days+seconds+us is often clumsy. Incidentally
moved a large chunk of object-initialization code closer to the top of
the file, to avoid worse forward-reference trickery.
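For instance, the kind of subclass this enables (a hypothetical Duration
class, purely illustrative):

    from datetime import timedelta

    class Duration(timedelta):
        def hours(self):
            return (self.days * 24.0 + self.seconds / 3600.0
                    + self.microseconds / 3600000000.0)

    d = Duration(days=1, hours=6)
    print(d.hours())                  # 30.0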
attributes and methods work, that new arguments can be passed to the
constructor, and that inherited methods and attrs still work. Added
XXX comments about what to do when datetime becomes usably subclassable
too (it's not yet).
This file isn't meant to be executed, it's data input for test_tokenize.py.
The problem with the .py extension is that it uses "non-standard"
indentation, and it's good to test that, but reindent.py keeps wanting
to fix it. But fixing the indentation causes the expected-output file to
change, since exact line and column numbers are part of the
tokenize.tokenize() output getting tested.
Reverted a Py2.3b1 change to the iterators of list and tuple subclasses.
They had been changed to use __getitem__ whenever it had been overridden
in the subclass.
This caused some usability and performance problems. Also, it was
inconsistent with the rest of Python where many container methods
access the underlying object directly without first checking for
an overridden getter. Users needing a change in iterator behavior
should override it directly.
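To get changed iteration behavior, override __iter__ itself; overriding
__getitem__ alone affects indexing but not iteration (a sketch):

    class Tenfold(list):
        def __getitem__(self, i):
            return list.__getitem__(self, i) * 10    # used by indexing only

        def __iter__(self):
            for item in list.__iter__(self):
                yield item * 10                      # used by for-loops etc.

    t = Tenfold([1, 2, 3])
    print(t[0])             # 10  (via __getitem__)
    print([x for x in t])   # [10, 20, 30]  (via __iter__)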
('pgsql', '*', 252, []) and ('postgres', '*', 252, ['skip']),
but grp.getgrgid(252) might return ('pgsql', '', 252, ['skip']).
Drop the test that tried to find a tuple similar to the one
returned from grp.getgrgid() among those for the same gid returned
by grp.getgrall(), as the only working definition of 'similar' seems
to be 'has the same gid'. This check can be done more directly.
This should fix SF bug #732783.
the itertoolsmodule.
* Taught itertools.repeat(obj, n) to treat negative repeat counts as
zero. This behavior matches that for sequences and prevents
infinite loops.
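For example (matching how sequence repetition treats negative counts):

    from itertools import repeat

    print(list(repeat('a', -2)))    # []  -- negative count behaves like zero
    print('a' * -2)                 # ''  -- same rule as sequence repetition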