A new hashlib module to replace the md5 and sha modules. It adds
support for additional secure hashes such as SHA-256 and SHA-512. The
hashlib module uses OpenSSL for fast platform optimized
implementations of algorithms when available. The old md5 and sha
modules still exist as wrappers around hashlib to preserve backwards
compatibility.
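A minimal usage sketch (Python 2.x-era strings; digest values omitted):

    import hashlib

    h = hashlib.sha256()                    # also sha1, sha224, sha384, sha512, md5
    h.update("Nobody inspects the spammish repetition")
    digest = h.hexdigest()                  # 64-character hex string
    other = hashlib.new("sha256")           # name-based construction also works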
Fix over-aggressive PyErr_Clear(). The same code fragment appears in
various guises in list.extend(), map(), filter(), zip(), and internally
in PySequence_Tuple().
a frozenset conversion when the initial search attempt fails with a
TypeError and the key is some type of set. Add a testcase.
* Eliminate a duplicate if-stmt.
s|=s, s&=s, s-=s, or s^=s). Add related tests.
* Improve names for several variables and functions.
* Provide alternate table access functions (next, contains, add, and discard)
that work with an entry argument instead of just a key. This improves
set-vs-set operations because we already have a hash value for each key
and can avoid unnecessary calls to PyObject_Hash(). Provides a 5% to 20%
speed-up for elements that hash quickly, such as strings and integers, and
much more substantial improvements for elements that hash slowly, such as
tuples or objects defining a custom __hash__() function.
* Have difference operations resize() when 1/5 of the elements are dummies.
Formerly, it was 1/6. The new ratio triggers less frequently and only
in cases where it can resize more quickly and with greater benefit. The right
answer is probably either 1/4, 1/5, or 1/6. Picked the middle value for
an even trade-off between resize time and the space/time costs of dummy
entries.
- Handle both frozenset() and frozenset([]).
- Do not use singleton for frozenset subclasses.
- Finalize the singleton.
- Add test cases.
* Factor-out set_update_internal() from set_update(). Simplifies the
code for several internal callers.
* Factor constant expressions out of loop in set_merge_internal().
* Minor comment touch-ups.
[ 1229429 ] missing Py_DECREF in PyObject_CallMethod
Add a test in test_enumerate, which is a bit random, but suffices
(reversed_new calls PyObject_CallMethod under some circumstances).
Should significantly enhance the utility of the module by supporting
the creation of tools that modify the token stream and write back the
modified result.
[ 1180995 ] binary formats for marshalling floats
Adds 2 new type codes for marshal (binary floats and binary complexes), a
new marshal version (2), updates MAGIC and fiddles the de-serializing of
code objects to be less likely to clobber the real reason for failing if
it fails.
[ 1181301 ] make float packing copy bytes when they can
which hasn't been reviewed, despite numerous threats to check it in
anyway if no one reviews it. Please read the diff on the checkin list,
at least!
The basic idea is to examine the bytes of some 'probe values' to see if
the current platform is an IEEE 754-ish platform, and if so
_PyFloat_{Pack,Unpack}{4,8} just copy bytes around.
The rest is hair for testing, and tests.
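A rough Python rendering of the probe idea (the real check is C code in
_PyFloat_Pack/Unpack; this only illustrates what examining the bytes of a
probe value means):

    import struct

    probe = 1.5                                    # distinctive, exactly representable
    native = struct.pack("d", probe)               # platform-native double format
    ieee_le = native == struct.pack("<d", probe)   # IEEE 754 little-endian?
    ieee_be = native == struct.pack(">d", probe)   # IEEE 754 big-endian?
    # On an IEEE-754-ish platform one of these holds, so packing/unpacking
    # can simply copy (and possibly reverse) the eight bytes.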
crashing, and indirectly on the fact that hash codes in
random.randrange(1000000000) were very unlikely to exhibit collisions.
To see the problem, replace this number with 500 and observe the crash on
either del target[key] or del keys[i].
The fix prevents recursive mutation, just as in the key insertion case.
conversion using the proper magic slot (e.g., __int__()). Also move conversion
code out of PyNumber_*() functions in the C API into the nb_* function.
Applied patch #1109424. Thanks, Walter Dörwald.
test_site often failed under "regrtest.py -r", because this xmlrpc test
left sys with a setdefaultencoding attribute, but loading site.py removes
that attribute and test_site.py verifies the attribute is gone. Changed
this test to get rid of sys.setdefaultencoding if it didn't exist when
this test started.
Don't know whether this is a bugfix (backport) candidate.
the last character read is "\r" (and size is None, i.e. we're allowed to
call read() multiple times), so that we can return the correct line ending
(this additional character might be a "\n").
If the stream is temporarily exhausted, we might return the wrong line ending
(if the last character read is "\r" and the next one (after the byte stream
provides more data) is "\n", but at least the atcr member ensures that we
get the correct number of lines (i.e. this "\n" will not be treated as
another line ending).
[ 1165306 ] Property access with decorator makes interpreter crash
Don't allow the creation of unbound methods with NULL im_class, because
attempting to call such a method crashes the interpreter.
Backport candidate.
instead of raising a TypeError. Allows other types to successfully
implement __radd__() style methods.
* Remove future division import from test suite.
* Remove test suite's shadowing of __builtin__.dir().
etc., had comments after the colon, and some other cases. This patch
takes a simpler approach that doesn't rely on looking for a ':'. Thanks,
Simon Percivall!
socket.gethostname() in the check for a valid return.
Also clarified docs (official and docstring) that the value from gethostname()
is returned if gethostbyaddr() doesn't do the job.
* Speed-up "x in y" where x has more than one character.
The existing code made excessive calls to the expensive memcmp() function.
The new code uses memchr() to rapidly find a start point for memcmp().
In addition to knowing that the first character is a match, the new code
also checks that the last character is a match. This significantly reduces
the incidence of false starts (saving memcmp() calls and making quadratic
behavior less likely).
Improves the timings on:
python -m timeit -r7 -s"x='a'*1000" "'ab' in x"
python -m timeit -r7 -s"x='a'*1000" "'bc' in x"
Once this code has proven itself, then string_find_internal() should refer
to it rather than running its own version. Also, something similar may
apply to unicode objects.
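A pure-Python sketch of the strategy (the real code is C built on memchr()
and memcmp(); the function name here is made up):

    def contains(haystack, needle):
        n, m = len(haystack), len(needle)
        if m == 0:
            return True
        first, last = needle[0], needle[-1]
        i = 0
        while True:
            i = haystack.find(first, i, n - m + 1)   # plays the role of memchr()
            if i < 0:
                return False
            # Checking the last character first weeds out most false starts
            # before paying for the full comparison (the memcmp() call in C).
            if haystack[i + m - 1] == last and haystack[i:i + m] == needle:
                return True
            i += 1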
[ 1124295 ] Function's __name__ no longer accessible in restricted mode
which I introduced with a bit of mindless copy-paste when making
__name__ writable. You can't assign to __name__ in restricted mode,
which I'm going to pretend was intentional :)
PyNumber_Check, rather than trying to convert to a float. Reimplemented
writer -- it now raises an exception when it sees a quotechar but neither
doublequote nor escapechar is set. Doublequote results are now more
consistent (e.g., a field consisting of a single quote character should
generate """", rather than "", which is ambiguous).
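For example, with the default dialect a field consisting of one quote
character is now written unambiguously (Python 2-era sketch):

    import csv, StringIO

    buf = StringIO.StringIO()
    csv.writer(buf).writerow(['"'])
    # buf.getvalue() == '""""\r\n': an opening quote, the doubled quote
    # character, and a closing quote, rather than the ambiguous "".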
when this limit is reached. The limit defaults to 128k and can be changed
with the module's set_field_limit() method. Previously, an unmatched quote
character could result in the entire file being read into the field
buffer, potentially exhausting virtual memory.
was done because we were previously performing validation of the dialect
from Python, but this is now done within the C module. Also, the method
we were using to detect classes did not work with new-style classes.
a delimiter. Previously, the 'network location' (<authority> in RFC 2396) would
become 'www.example.com?query=spam', while RFC 2396 does not allow a '?' in
<authority>. See bug #548176 for further discussion.
`glob.glob()` currently calls itself recursively to build a list of matches of
the dirname part of the pattern and then filters by the basename part. This is
effectively BFS. ``glob.glob('*/*/*/*/*/foo')`` will build a huge list of all
directories 5 levels deep even if only a handful of them contain a ``foo``
entry. A generator-based recursion implementing DFS would never have to store
these lists at once. This patch converts the `glob` function to an `iglob`
recursive generator. `glob()` now just returns ``list(iglob(pattern))``.
I also cleaned up the code a bit (reduced duplicate `has_magic()` checks and
created a second `glob0` helper func so that the main loop need not be
duplicated).
Thanks to Cherniavsky Beni for the patch!
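A simplified sketch of the generator approach (relative patterns only; the
real iglob() also handles literal directory names, absolute paths and error
cases that this version ignores):

    import os, fnmatch

    def iglob_sketch(pattern):
        dirname, basename = os.path.split(pattern)
        if not dirname:
            for name in os.listdir(os.curdir):
                if fnmatch.fnmatch(name, basename):
                    yield name
            return
        # Depth-first: each matching directory is explored as soon as it is
        # produced, so the full list of level-N directories is never built.
        for directory in iglob_sketch(dirname):
            if not os.path.isdir(directory):
                continue
            for name in os.listdir(directory):
                if fnmatch.fnmatch(name, basename):
                    yield os.path.join(directory, name)

    def glob_sketch(pattern):
        return list(iglob_sketch(pattern))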
test_threading.test_foreign_thread(): new test does a basic check that
"foreign" threads can use the threading module, and that they create
a _DummyThread instance in at least one use case. This isn't a very
good test, since a thread created by thread.start_new_thread() isn't
particularly "foreign".
trying to return a complete line even if a size parameter was given (see
http://www.python.org/sf/1076985). This leads to buffer overflows with long
source lines under Windows if e.g. cp1252 is used as the source encoding.
This patch reverts the behaviour of readline() to something that behaves more
like Python 2.3: If a size parameter is given, read() is called only once.
As a side effect of this, readline() now supports all types of linebreaks
supported by unicode.splitlines().
Note that the tokenizer is still broken and it's possible to provoke segfaults
(see http://www.python.org/sf/1089395).
more. Thanks to Simon Percivall!
The patch makes changes to inspect.py in two places:
* the pattern to match against functions at line 436 is
modified: lambdas should be matched even if not
preceded by whitespace, as long as "lambda" isn't part
of another word.
* the BlockFinder class is heavily modified. Changes are:
- checking for "def", "class" or "lambda" names
before setting self.started to True. Then checking the
same line for word characters after the colon (if the
colon is on that line). If so, and the line does not
end with a line continuation marker, raise EndOfBlock
immediately.
- adding self.passline to show that the line is to be
included and no more checking is necessary on that
line. Since a NEWLINE token is not generated when a
line continuation marker exists, this allows getsource
to continue with these functions even if the following
line would not be indented.
Also add a bunch of
'quite-unlikely-to-occur-in-real-life-but-working-anyway' tests.
is pointless.
Also add a note to the docs for the 'test' package that test cases should check
first that any conditions needed in the operating system are met before having
a test run.
Closes bug #1077302. Thanks, Ian Holsman.
(http://www.cygwin.com/faq/faq_3.html#SEC41).
Also check whether onerror has actually been called so this test will
fail on assertion instead of on trying to chmod a non-existent file.
__getitem__() methods: compute only the new spellings needed to satisfy
the given indexing object. This is purely an optimization (it should
have no effect on visible semantics).
regrtest.py: skip rgbimg and imageop as they are not built on 64-bit systems.
_tkinter.c: replace %.8x with %p for printing pointers.
setup.py: add lib64 into the library directories.
showing that doctest's pdb.set_trace() support was dramatically broken.
doctest.py _OutputRedirectingPdb.trace_dispatch(): Return a local trace
function instead of (implicitly) None. Else interaction with pdb was
bizarre, noticing only 'call' events. Amazingly, the existing set_trace()
tests didn't care.
reliably on WinME with FAT32.
* Native speaker rewrite of the comment block.
* Removed unnecessary backslashes from the multi-line function definitions.
In cyclic gc, clear weakrefs to unreachable objects before allowing any
Python code (weakref callbacks or __del__ methods) to run.
This is a critical bugfix, affecting all versions of Python since weakrefs
were introduced. I'll backport to 2.3.
exposed in header files. Fixed a few comments in these headers.
As we might have expected, writing down invariants systematically exposed a
(minor) bug. In this case, function objects have a writeable func_code
attribute, which could be set to code objects with the wrong number of
free variables. Calling the resulting function segfaulted the interpreter.
Added a corresponding test.
of the year, and day of the week. The code was not properly handling the
case where %U is used for the week of the year but the year starts on a
Monday.
Closes bug #1045381 again.
The underlying bug still exists, but also existed in 2.3.4:
import.c's load_source_module() returns NULL if
PyOS_GetLastModificationTime() returns -1, but
PyOS_GetLastModificationTime() doesn't set any exception when it returns
-1, and neither does load_source_module() when it gets back -1. This
leads to "SystemError: NULL result without error in PyObject_Call"
on an import that fails in this way.
- Added a chunk of plist data as generated by Cocoa's NSDictionary and
verify we output the same (including formatting)
- Changed the "literal" plist code to match the raw test data
Peepholer could be fooled into misidentifying a tuple_of_constants.
Added code to count consecutive occurrences of LOAD_CONST.
Use the count to weed out the misidentified cases.
Added a unittest.
Turns out the mysterious "expected output" file contained exactly N dots,
because test_poll() has a loop that *usually* went around N times,
printing one dot on each loop trip. But there's no guarantee of that,
because the exact value of N depended on the vagaries of scheduling
time.sleep()s across two different processes. So stopped printing dots,
and got rid of the expected output file. Added a loop counter instead,
and verified that the loop goes around at least a couple of times. Also
cut the minimum time needed for this test from 4 seconds to 1.
tester that a DOS box is expected to flash. Slash the sleep from 2
seconds to a quarter second (why would we want to wait 2 seconds just
to stare at a DOS box?).
what this is trying to do. If it's necessary for it to create > 1000
processes, it should be controlled by a new resource and not run by
default on Windows.
display a test's docstring as "the name" of the test. So changed most
test docstrings to comments, and removed the clearly useless ones. Now
unittest reports the actual names of the test methods.
deque_item(): a performance bug: the linked list of blocks was followed
from the left in most cases, because the test (i < (deque->len >> 1)) was
after "i %= BLOCKLEN".
deque_clear(): replaced a call to deque_len() with deque->len; not sure what
this call was here for, nor if all compilers under the sun would inline it.
deque_traverse(): I believe that it could be called by the GC when the deque
has leftblock==rightblock==NULL, because it is tracked before the first block
is allocated (though only just before). Still, a C extension module subclassing
deque could provide its own tp_alloc that could trigger a GC collection after
the PyObject_GC_Track()...
deque_richcompare(): rewrote to cleanly check for end-of-iterations instead of
relying on deque.__iter__().next() to succeed exactly len(deque) times -- an
assumption which can break if deques are subclassed. Added a test.
I wonder if the length should be explicitly bounded to INT_MAX, with
OverflowErrors, as in listobject.c. On 64-bit machines, adding more than
INT_MAX in the deque will result in trouble. (Note to anyone/me fixing
this: carefully check for overflows if len is close to INT_MAX in the
following functions: deque_rotate(), deque_item(), deque_ass_item())
request. Tim says that "correct 'fuzzy' comparison of floats cannot
be automated." (The motivation behind adding the new option
was verifying interactive examples in Python's LaTeX documentation;
several such examples use numbers that don't print consistently on
different platforms.)
Also, add a testcase.
Formerly, the list_extend() code used several local variables to remember
its state across iterations. Since an iteration could call arbitrary
Python code, it was possible for the list state to be changed. The new
code uses dynamic structure references instead of C locals. So, they
are always up-to-date.
After list_resize() is called, its size has been updated but the new
cells are filled with NULLs. These needed to be filled before arbitrary
iteration code was called; otherwise, that code could attempt to modify
a list that was in a semi-invalid state. The solution was to change
the ob->size field back to a value reflecting the actual number of valid
cells.
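The kind of hazard involved can be reproduced from Python with an iterator
that mutates the list it is being fed into (a contrived sketch; after the
fix this must leave the list in a consistent state rather than crashing):

    class Mutator(object):
        def __init__(self, target):
            self.target = target
            self.remaining = 3
        def __iter__(self):
            return self
        def next(self):                    # __next__ on Python 3
            if self.remaining == 0:
                raise StopIteration
            self.remaining -= 1
            del self.target[:]             # arbitrary Python code runs mid-extend
            return self.remaining
        __next__ = next

    L = list(range(10))
    L.extend(Mutator(L))                   # must not crash or expose invalid cells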
When an integer is compared to a float now, the int isn't coerced to float.
This avoids spurious overflow exceptions and insane results. This should
compute correct results, without raising spurious exceptions, in all cases
now -- although I expect that what happens when an int/long is compared to
a NaN is still a platform accident.
Note that we had potential problems here even with "short" ints, on boxes
where sizeof(long)==8. There's #ifdef'ed code here to handle that, but
I can't test it as intended. I tested it by changing the #ifdef to
trigger on my 32-bit box instead.
I suppose this is a bugfix candidate, but I won't backport it. It's
long-winded (for speed) and messy (because the problem is messy). Note
that this also depends on a previous 2.4 patch that introduced
_Py_SwappedOp[] as an extern.
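Two illustrative comparisons showing the post-patch behavior:

    >>> 10**400 > 1e308        # no spurious OverflowError from coercing to float
    True
    >>> 2**53 + 1 == 2.0**53   # compared exactly; coercion to float would say True
    False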
all examples in a given text file (analogous to "testmod"); see the sketch
after this list.
- Minor docstring fixes.
- Added module_relative parameter to DocTestFile/DocTestSuite, which
controls whether paths are module-relative & os-independent, or
os-specific.
- Fixed bug in handling of absolute paths.
- If run from an interactive session, make paths relative to the
directory containing sys.argv[0] (since __main__ doesn't have
a __file__ attribute).
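Typical use (assuming the doctest.testfile() entry point described here; the
file name is invented, and module_relative=False makes it an ordinary OS path):

    import doctest
    doctest.testfile("example_doctests.txt", module_relative=False)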
* The parameterization of "delimiter" was incomplete.
* safe_substitute's code for braced delimiters should only be executed
when braced is not None.
* Invalid pattern group names now raise a ValueError. Formerly, the
convert code would fall off the end and improperly return None.
Beefed-up tests.
* Test delimiter override for all paths in substitute and safe_substitute.
* Alter unittest invocation to match other modules (now it itemizes the
tests as they are run).
Renamed the new generator at Trevor's recommendation.
The name HardwareRandom suggested a bit more than it
delivered (no radioactive decay detectors or such).
with default False for testmod(). The real point of introducing this was
so that output from doctest.master.summarize() would be the same as in
2.3, and doctest.master in 2.4 is a backward-compatibility hack used only
by testmod().
- Template no longer inherits from unicode.
- SafeTemplate is removed. Now Templates have both a substitute() and a
safe_substitute() method, so we don't need separate classes. No more
__mod__() operator.
- Adopt Tim Peters's idea for giving Template a metaclass, which makes the
delimiter, the identifier pattern, or the entire pattern easy to override
and document, while retaining efficiency of class-time compilation of the
regexp (see the sketch below).
- More informative ValueError messages which will help a user narrow down the
bogus delimiter to the line and column in the original string (helpful for
long triple quoted strings).
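A small sketch of the delimiter override described above (the subclass name
is invented):

    >>> from string import Template
    >>> class PercentTemplate(Template):
    ...     delimiter = '%'
    ...
    >>> PercentTemplate('%who likes %what').substitute(who='tim', what='kung pao')
    'tim likes kung pao'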
decoding incomplete input (when the input stream is temporarily exhausted).
codecs.StreamReader now implements buffering, which enables proper
readline support for the UTF-16 decoders. codecs.StreamReader.read()
has a new argument chars which specifies the number of characters to
return. codecs.StreamReader.readline() and codecs.StreamReader.readlines()
have a new argument keepends. Trailing "\n"s will be stripped from the lines
if keepends is false. Added C APIs PyUnicode_DecodeUTF8Stateful and
PyUnicode_DecodeUTF16Stateful.
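A sketch of the new arguments (Python 2-era API; the sample text is arbitrary):

    import codecs
    from StringIO import StringIO

    stream = StringIO(u"spam\neggs\n".encode("utf-16"))
    reader = codecs.getreader("utf-16")(stream)
    first = reader.readline(keepends=False)   # u'spam', trailing "\n" stripped
    rest = reader.read(chars=4)               # at most four characters of what follows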
SF patch #1015989
The basic idea of this patch is to compute lineno attributes for all AST
nodes. The actual implementation led to a lot of restructuring and code
cleanup. The generated AST nodes now have an optional lineno argument to
the constructor. Remove the top-level asList(), since it didn't seem to
serve any purpose. Add an __iter__ to AST nodes. Use isinstance() instead
of explicit type tests. Change the transformer to use the new lineno
attribute, which replaces three lines of code with one. Use universal
newlines so that we can get rid of special-case code for line endings. Use
lookup_node() in a few more frequently called, but simple, com_xxx()
methods. Change string exceptions to class exceptions.
Several functions adopted the strategy of altering a full lengthed
string copy and resizing afterwards. That would fail if the initial
string was short enough (0 or 1) to be interned. Interning precluded
the subsequent resizing operation.
The solution was to make sure the initial string was at least two
characters long.
Added tests to verify that all binascii functions do not crater when
given an empty string argument.
"all or none" to "all or some".
This provides much greater test coverage without eating much time.
It also makes it more likely that routine regression testing will
unearth bugs.
in the new docs.
DocTestRunner.__run: Separate the determination of the example outcome
from reporting that outcome, to squash brittle code duplication and
excessive nesting.
this module imports itself explicitly from test (so the "file names" that
the current doctest synthesizes for examples don't vary depending on how
test_generators is run).
s.join([t]) is t
for (s, t) in (str, str), (unicode, unicode), and (str, unicode).
For (unicode, str), verify that it's *not* t (the result is promoted
to unicode instead). Also verify that when t is a subclass of str or
unicode that "the right thing" happens.
- Improvements to interactive debugging support:
- Changed the replacement pdb.set_trace to redirect stdout to the
real stdout *only* during interactive debugging; stdout from code
continues to go to the fake stdout.
- When the interactive debugger gets to the end of an example,
automatically continue.
- Use a replacement linecache.getlines that will return source lines
from doctest examples; this makes the source available to the
debugger for interactive debugging.
- In test_doctest, use a specialized _FakeOutput class instead of a
temporary file to fake stdin for the interactive interpreter.
and intervening text strings.
- Removed DocTestParser.get_program(): use script_from_examples()
instead.
- Fixed bug in DocTestParser._INDENT_RE
- Fixed bug in DocTestParser._min_indent
- Moved _want_comment() to the utility function section
actual output into lines created spurious empty lines at the ends of
each. Those matched, but the fancy diffs had surprising line counts (1
larger than expected), and tests kept having to slam <BLANKLINE> into the
expected output to account for this. Using the splitlines() string method
with keepends=True instead accomplishes what was intended directly.
NDIFF_DIFF->REPORT_NDIFF. This establishes the naming convention that
all reporting options should begin with "REPORT_" (since reporting
options are a different class from output comparison options; but they
are both set in optionflags).
to be more consistent with report_failure()
- If `want` or `got` is empty, then print "Expected nothing\n" or
"Got nothing\n" rather than "Expected:\n" or "Got:\n"
- Got rid of _tag_msg
exception message, or None if no exception is expected); and moved
exception parsing from DocTestRunner to DocTestParser. This is
architecturally cleaner, since it moves all parsing work to
DocTestParser; and it should make it easier for code outside
DocTestRunner (notably debugging code) to properly handle expected
exceptions.
a traceback message. I.e., examples that raise exceptions may no
longer generate pre-exception output. This restores the behavior of
doctest in python 2.3. The ability to check pre-exception output is
being removed because it makes the documentation simpler; and because
there are very few use cases for it.
This patch includes test cases and documentation updates, as well as NEWS file
updates.
This patch also updates the sre modules so that they don't import the string
module, breaking direct circular imports.
happen in 2.3, but nobody noticed it was still being generated (the
warning was disabled by default). OverflowWarning and
PyExc_OverflowWarning should be removed for 2.5, and left notes all over
saying so.
* Make a pass to eliminate NOPs. Produce code that is more readable,
more compact, and a tiny bit faster. Makes the peepholer more flexible
in the scope of allowable transformations.
* With Guido's okay, bumped up the magic number so that this patch gets
widely exercised before the alpha goes out.
(Patch contributed by Nick Coghlan.)
Now joining string subtypes will always return a string.
Formerly, if there were only one item, it was returned unchanged.
- Test filenames sometimes had trailing .pyc or .pyo suffixes
(when module __file__ did).
- Trailing spaces in expected output were dropped.
New default failure format:
- Separation of examples from file info makes examples easier to see
- More vertical separation, improving readability
- Emacs-recognized file info (also closer to Python exception format)
are non-obvious either way because the newline character "is invisible",
but it's still there all the same, and it's easier to explain/predict
if that reality is left alone.
truncate() left the stream position unchanged, which meant the
"truncated" data didn't go away:
>>> io.write('abc')
>>> io.truncate(0)
>>> io.write('xyz')
>>> io.getvalue()
'abcxyz'
Patch by Dima Dorfman.
test_queue has failed occasionally for years, and there's more than one
cause.
The primary cause in the SF report appears to be that the test driver
really needs entirely different code for thread tests that expect to
raise exceptions than for thread tests that are testing non-exceptional
blocking semantics. So gave them entirely different code, and added a
ton of explanation.
Another cause is that the blocking thread tests relied in several places
on the difference between sleep(.1) and sleep(.2) being long enough for
the trigger thread to do its stuff so that the blocking thread could make
progress. That's just not reliable on a loaded machine. Boosted the 0.2's
to 10.0's instead, which should be long enough under any non-catastrophic
system conditions. That doesn't make the test take longer to run; the 10.0
is just how long the blocking thread is *willing* to wait for the trigger
thread to do something. But if the Queue module is plain broken, such
tests will indeed take 10 seconds to fail now.
For similar (heavy load) reasons, changed threaded-test termination to
be willing to wait 10 seconds for the signal thread to end too.
appeared at the end of a line. Repaired that. Also noted that it's
too easy to provoke this implementation into requiring exponential
time, and especially when a test fails. I'll replace the implementation
with an always-efficient one later.
error based on decorating with staticmethod too soon for the code to execute.
This meant that if the test didn't pass it just errored out. Now if the test
doesn't pass it leads to a failure instead.
[ 1009560 ] Fix @decorator evaluation order
From the description:
Changes in this patch:
- Change Grammar/Grammar to require
newlines between adjacent decorators.
- Fix order of evaluation of decorators
in the C (compile.c) and python
(Lib/compiler/pycodegen.py) compilers
- Add better order of evaluation check
to test_decorators.py (test_eval_order)
- Update the decorator documentation in
the reference manual (improve description
of evaluation order and update syntax
description)
and the comment:
Used Brett's evaluation order (see
http://mail.python.org/pipermail/python-dev/2004-August/047835.html)
(I'm checking this in for Anthony who was having problems getting SF to
talk to him)
normalize() in Draft 1.06 (9 October 2002):
The normalize operation has been added; it reduces a number to a
canonical form. (This replaces the trim operator, which only
removed trailing fractional zeros.)
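For example (a hedged sketch; values follow the specification's normalize
examples):

    from decimal import Decimal

    Decimal("120.00").normalize()   # -> Decimal("1.2E+2"), the canonical form
    Decimal("0.00").normalize()     # -> Decimal("0")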
version 2.39 of dectest.zip adds some new test files and because
some existing test files were getting skipped).
* Remove two docstrings which cluttered unittest's output.
* Simplify a for-loop with a list comprehension.
path, as normalizing the path may alter the meaning of the path if it contains
symlinks.
Also add tests for infinite symlink loops and parent symlinks that need to be
resolved.
[ 1005248 ] new.code() not cleanly checking its arguments
using the result of new.code() can still destroy the sun, but merely
calling the function shouldn't any more.
I also rewrote the existing tests of new.code() to use vastly less
bogus arguments, and added tests for the previous insane behaviours.
visually distinguish the expected output from the comments (use
"##" to mark expected outputs, and "#" to mark comments).
- If the string given to DocTestParser.get_program() is indented, then
strip its indentation. (In particular, find the min indentation of
non-blank lines, and strip that indentation from all lines.)
- Added comments for some regexps
- If the traceback type/message don't match, then still print full
traceback in report_failure (not just the first & last lines)
- Renamed DocTestRunner.__failure_header -> _failure_header
modify option flags for a single example; they do not turn options
on or off.)
- Added "indent" and "options" attributes for Example
- Got rid of add_newlines param to DocTestParser._parse_example (it's
no longer needed; Example's constructor now takes care of it).
- Added some docstrings
responsible for parsing the string.
- Renamed Parser to DocTestParser
- DocTestParser.get_*() now accept the string & name as
arguments; the parser's constructor is now empty.
- Added DocTestParser.get_doctest() method
- Replaced "doctest_factory" argument to DocTestFinder with a "parser"
argument (takes a DocTestParser).
- Changed _tag_msg to take an indentation string argument.
the set_trace fiddling didn't make sense to me, and I ended up reworking
that part of the code. We really do want to save and restore
pdb.set_trace, so that each dynamically nested level of doctest gets
sys.stdout fiddled to what's appropriate for *it*. The only "trick"
really needed is that these layers of set_trace wrappers each call the
original pdb.set_trace (instead of the current pdb.set_trace).
the string one line at a time. The resulting code is (in my opinion,
anyway) much easier to read. In the process, I found and fixed a
bug in the original parser's line numbering in error messages (it was
inconsistent between 0-based and 1-based). Also, check for missing
blank lines after the prompt on all prompt lines, not just PS1 lines
(test added).
Added XXX comment about why the undocumented PyRange_New() API function
is too broken to be worth the considerable pain of repairing.
Changed range_new() to stop using PyRange_New(). This fixes a variety
of bogus errors. Nothing in the core uses PyRange_New() now.
Documented that xrange() is intended to be simple and fast, and that
CPython restricts its arguments, and length of its result sequence, to
native C longs.
Added some tests that failed before the patch, and repaired a test that
relied on a bogus OverflowError getting raised.
This got slammed in when find() was fixed to stop grabbing doctests
from modules imported *by* the module being tested. Such tests cannot
be expected to succeed, since they'll be run with the current module's
globals. Dozens of Zope3 doctests were failing because of that.
It wasn't clear why ignore_imports got added then. Maybe it's because
some existing tests failed when the change was made. Whatever, it's
a Bad Idea so it's gone now.
The only use of it was exceedingly obscure, in test_doctest's "Duplicate
Removal" test. It was "needed" there because, as an artifact of running
a doctest inside a doctest, the func_globals of functions compiled in
the second-level doctest don't match the module globals, and so the
test-finder believed these functions were from a foreign module and
skipped them. But that took a long time to figure out, and I actually
understand some of this stuff <0.9 wink>.
That problem was resolved by moving the source code for the second-level
doctest into an actual module (test/doctest_aliases.py).
The only remaining difficulty was that the test for the deprecated
Tester.rundict() then failed, because the test finder doesn't take
module=None at face value, trying to guess which module the user really
intended then. Its guess wasn't appropriate for what Tester.rundict
needs when module=None is given to *it*, which is "no, there is no
module here, and I mean it". So now passing module=False means exactly
that. This is hokey, but ignore_imports=False was really a hack to worm
around that there was no way to tell the test-finder that module=None
*sometimes* means what it says. There was no use case for the combination
of passing a real module with ignore_imports=False.
Ripped out the docs for the new DocTestFinder's namefilter argument,
and renamed it to _namefilter; this only existed to support isprivate.
Removed the new DocTestFinder's objfilter argument. No point adding
more cruft to a broken filtering design.
This test is insanely slow, so it requires a resource. On my machine,
it also appears to dump core. I think the problem is a stack
overflow, but haven't been able to confirm.
interning were not clear here -- a subclass could be mutable, for
example -- and had bugs. Explicitly interning a subclass of string
via intern() will raise a TypeError. Internal operations that attempt
to intern a string subclass will have no effect.
Added a few tests to test_builtin that includes the old buggy code and
verifies that calls like PyObject_SetAttr() don't fail. Perhaps these
tests should have gone in test_string.
The change to use the newer httplib interface admitted the possibility
that we'd get an HTTP/1.1 chunked response, but the code didn't handle
it correctly. The raw socket object can't be passed to addinfourl(),
because it would read the undecoded response. Instead, addinfourl()
must call HTTPResponse.read(), which will handle the decoding.
One extra wrinkle is that the HTTPResponse object can't be passed to
addinfourl() either, because it doesn't implement readline() or
readlines(). As a quick hack, use socket._fileobject(), which
implements those methods on top of a read buffer. (suggested by mwh)
Finally, add some tests based on test_urllibnet.
Thanks to Andrew Sawyers for originally reporting the chunked problem.
Specifically, time.strftime() no longer accepts a 0 in the yday position of a
time tuple, since that can crash some platform strftime() implementations.
parsedate_tz(): Change the return value to return 1 in the yday position.
Update tests in test_rfc822.py and test_email.py
Hack httplib to work with broken Akamai proxies.
Make sure that httplib doesn't add extra Accept-Encoding or
Content-Length headers if the client has already set them.
python.org . This way the delay should be great enough for
testConnectTimeout() to pass even when one has a really fast Net connection
that allows connections faster than .001 seconds.
the tim-doctest-merge-24a2 tag on the tim-doctest-branch branch.
We did development on the branch in case it wouldn't land in time for
2.4a2, but the branch looked good: Edward's tests passed there, ditto
Python's tests, and ditto the Zope3 tests. Together, those hit doctest
heavily.
unicodedata.east_asian_width(). You can still implement your own
simple width() function using it like this:
    def width(u):
        w = 0
        for c in unicodedata.normalize('NFC', u):
            cwidth = unicodedata.east_asian_width(c)
            if cwidth in ('W', 'F'):
                w += 2
            else:
                w += 1
        return w
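For example (widths follow the Unicode East Asian Width classifications):

    >>> width(u'abc')            # three narrow characters
    3
    >>> width(u'\u3042\u3044')   # two wide (hiragana) characters
    4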
or broken by basic ctype functions in 4.4BSD descendants. This
will be fixed in their future development branches, but they'll keep
the POSIX incompatibility for backward compatibility in the near
future.
* Fixes an incorrect variable in a PyDict_CheckExact.
* Allow general mapping locals arguments for the execfile() function
and exec statement.
* Add tests.
test_failing_import_sticks -- if an import raises an exception,
ensure that trying to import it again continues raising exceptions
test_failing_reload -- if a module loads OK, but a reload raises an
exception, ensure that the module is still in sys.modules, and
that its __dict__ reflects as much of the reload attempt as
succeeded. That doesn't seem like sane semantics, but it is
backward-compatible semantics <wink>.
This fixes 15 spurious test failures on Windows (probably all due to
the test leaving a wrong path in sys.argv[0], which then prevented
regrtest.py from finding the expected-output files for tests running
after test_optparse).
* add expansion of default values in help text: the string
"%default" in an option's help string is expanded to str() of
that option's default value, or "none" if there is no default value
(see the sketch after this list).
* bug #955889: option default values that happen to be strings are
now processed in the same way as values from the command line; this
allows generation of nicer help when using custom types. Can
be disabled with parser.set_process_default_values(False).
* bug #960515: don't crash when generating help for callback
options that specify 'type', but not 'dest' or 'metavar'.
* feature #815264: change the default help format for short options
that take an argument from e.g. "-oARG" to "-o ARG"; add
set_short_opt_delimiter() and set_long_opt_delimiter() methods to
HelpFormatter to allow (slight) customization of the formatting.
* patch #736940: internationalize Optik: all built-in user-
targeted literal strings are passed through gettext.gettext(). (If
you want translations (.po files), they're not included with Python
-- you'll find them in the Optik source distribution from
http://optik.sourceforge.net/ .)
* bug #878453: respect $COLUMNS environment variable for
wrapping help output.
* feature #988122: expand "%prog" in the 'description' passed
to OptionParser, just like in the 'usage' and 'version' strings.
(This is *not* done in the 'description' passed to OptionGroup.)
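A sketch of the "%default" expansion mentioned in the list above (the option
names are invented):

    from optparse import OptionParser

    parser = OptionParser()
    parser.add_option("-n", "--count", type="int", default=42,
                      help="number of repetitions [default: %default]")
    parser.print_help()   # the help line ends with "[default: 42]"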
to NULL during the lifetime of the object.
* listobject.c nevertheless did not conform to the other invariants,
either; fixed.
* listobject.c now uses list_clear() as the obvious internal way to clear
a list, instead of abusing list_ass_slice() for that. It makes it easier
to enforce the invariant about ob_item == NULL.
* listsort() sets allocated to -1 during sort; any mutation will set it
to a value >= 0, so it is a safe way to detect mutation. A negative
value for allocated does not cause a problem elsewhere currently.
test_sort.py has a new test for this fix.
* listsort() leak: if items were added to the list during the sort, AND if
these items had a __del__ that puts still more stuff into the list,
then this additional stuff (and the PyObject** array to hold it) was
overwritten at the end of listsort() and never released.
__oct__, and __hex__. Raise TypeError if an invalid type is
returned. Note that PyNumber_Int and PyNumber_Long can still
return ints or longs. Fixes SF bug #966618.
and installed layouts to make maintenance simple and easy. And it
also adds four new codecs: big5hkscs, euc-jis-2004, shift-jis-2004
and iso2022-jp-2004.
causing test_pyclbr to fail on all other platforms. Added that routine
to the urllib "ignore" list.
Removed the special case for "g" in the pickle module. types.py deletes
"g" from its namespace; maybe it didn't always. Whatever, the special
case isn't needed today.
by the locals() call in the context constructor.
* Remove unnecessary properties for int, exp, and sign which duplicated
information returned by as_tuple().
the documented behavior: the function passed to the onerror()
handler can now also be os.listdir.
[I could've sworn I checked this in, but apparently I didn't, or it
got lost???]
module that is removed for testing "import" lines. Originally the code
deleted the entry from sys.modules and then just let other code that needed
the module import it again. The problem with this solution is that code
which had already imported the module was left holding its own reference to
a copy that newly imported code couldn't reach. This led to a failure in
test_strptime: it monkey-patched the 'time' module it had a reference to,
while _strptime was using its own reference to another copy of 'time'
(imported by test___all__) for a calculation.
Also moved the testing code out of the PthFile class and into the actual test
class. This was to stop using 'assert', which is useless when running with -O.
a non-standard protocol and on a lower port than the tcp/udp entries,
which breaks the assumption that there will only be one service by a
given name on a given port when no protocol is specified.
Previous versions of this code have had other problems as a result of
different service definitions amongst common platforms. As this platform
has an extra, unexpected, service entry, I've special-cased the platform
rather than re-order the list of services checked to highlight the pitfall.
* Rename "trap_enablers" to just "traps".
* Simplify names of "settraps" and "setflags" to just "traps" and "flags".
* Show "capitals" in the context representation
* Simplify the Context constructor to match its repr form so that only
the set flags and traps need to be listed.
* Representation can now be run through eval().
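A sketch of the eval() round-trip (assumes the rounding and signal names are
in scope, e.g. via "from decimal import *"):

    from decimal import *

    ctx = getcontext()
    ctx2 = eval(repr(ctx))   # works because the repr lists only the set
                             # flags and traps plus the scalar attributes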
Improve the error message when the Decimal constructor is given a float.
The test suite no longer needs a duplicate reset_flags method.