version 2.39 of dectest.zip adds some new test files, and some existing
test files were getting skipped).
* Remove two docstrings which cluttered unittest's output.
* Simplify a for-loop with a list comprehension.
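As a sketch of the kind of rewrite meant here (the data and names are
made up, not the actual test code):

    test_files = ['add.decTest', 'base.decTest', 'abs.decTest']
    executed = ['add.decTest']

    # Before: build the list with an explicit for-loop.
    skipped = []
    for name in test_files:
        if name not in executed:
            skipped.append(name)

    # After: the same result as a single list comprehension.
    skipped = [name for name in test_files if name not in executed]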
path, as normalizing the path may alter the meaning of the path if it contains
symlinks.
Also add tests for infinite symlink loops and parent symlinks that need to be
resolved.
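A hedged illustration of why the path is resolved rather than normalized
(the directory layout is invented for the example):

    import os

    # Suppose /tmp/demo/link is a symlink to /tmp/demo/a/b.  Textually
    # collapsing "link/.." ignores where the symlink points:
    print os.path.normpath('/tmp/demo/link/../c')   # -> /tmp/demo/c
    # realpath() resolves the symlink first, so ".." applies to its target:
    print os.path.realpath('/tmp/demo/link/../c')   # -> /tmp/demo/a/c (if the link exists)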
[ 1005248 ] new.code() not cleanly checking its arguments
using the result of new.code() can still destroy the sun, but merely
calling the function shouldn't any more.
I also rewrote the existing tests of new.code() to use vastly less
bogus arguments, and added tests for the previous insane behaviours.
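For illustration, one way to build a "less bogus" call is to take every
argument from a real code object (a sketch, not the actual test code):

    import new

    def f(x):
        return x + 1

    c = f.func_code
    # Each argument comes from an existing, valid code object, so new.code()
    # is exercised with plausible values instead of garbage.
    clone = new.code(c.co_argcount, c.co_nlocals, c.co_stacksize, c.co_flags,
                     c.co_code, c.co_consts, c.co_names, c.co_varnames,
                     c.co_filename, c.co_name, c.co_firstlineno, c.co_lnotab,
                     c.co_freevars, c.co_cellvars)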
visually distinguish the expected output from the comments (use
"##" to mark expected outputs, and "#" to mark comments).
- If the string given to DocTestParser.get_program() is indented, then
strip its indentation. (In particular, find the min indentation of
non-blank lines, and strip that indentation from all lines; see the
sketch after this list.)
- Added comments for some regexps
- If the traceback type/message don't match, then still print full
traceback in report_failure (not just the first & last lines)
- Renamed DocTestRunner.__failure_header -> _failure_header
modify option flags for a single example; they do not turn options
on or off.)
- Added "indent" and "options" attributes for Example
- Got rid of add_newlines param to DocTestParser._parse_example (it's
no longer needed; Example's constructor now takes care of it).
- Added some docstrings
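The min-indentation stripping mentioned above amounts to something like
this standalone sketch (not the parser's actual code):

    def strip_indentation(text):
        lines = text.split('\n')
        # Smallest indentation over the non-blank lines.
        indents = [len(line) - len(line.lstrip())
                   for line in lines if line.strip()]
        if indents:
            min_indent = min(indents)
        else:
            min_indent = 0
        # Strip that much leading whitespace from every line.
        return '\n'.join([line[min_indent:] for line in lines])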
responsible for parsing the string.
- Renamed Parser to DocTestParser
- DocTestParser.get_*() now accept the string & name as command-line
arguments; the parser's constructor is now empty.
- Added DocTestParser.get_doctest() method
- Replaced "doctest_factory" argument to DocTestFinder with a "parser"
argument (takes a DocTestParser).
- Changed _tag_msg to take an indentation string argument.
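Put together, a hedged sketch of how the reworked pieces fit (the sample
string is made up):

    import doctest

    source = '''
    >>> 2 + 2
    4
    '''

    parser = doctest.DocTestParser()     # the constructor takes no arguments now
    # The string and name go to the get_*() methods instead.
    examples = parser.get_examples(source, name='demo')
    test = parser.get_doctest(source, globs={}, name='demo',
                              filename='<demo>', lineno=0)

    # DocTestFinder takes a parser instead of a doctest factory.
    finder = doctest.DocTestFinder(parser=parser)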
the set_trace fiddling didn't make sense to me, and I ended up reworking
that part of the code. We really do want to save and restore
pdb.set_trace, so that each dynamically nested level of doctest gets
sys.stdout fiddled to what's appropriate for *it*. The only "trick"
really needed is that these layers of set_trace wrappers each call the
original pdb.set_trace (instead of the current pdb.set_trace).
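Roughly, the layering looks like this (a sketch of the idea only; the
helper name is made up and this is not doctest's actual code):

    import pdb
    import sys

    real_pdb_set_trace = pdb.set_trace      # captured once, before any wrapping

    def install_doctest_set_trace(real_stdout):
        # Each nested doctest run installs its own wrapper, but every wrapper
        # calls the *original* pdb.set_trace, not whichever wrapper happens
        # to be installed at the moment.
        def set_trace():
            sys.stdout = real_stdout        # debug against this layer's stdout
            real_pdb_set_trace()
        saved = pdb.set_trace               # restored when this layer finishes
        pdb.set_trace = set_trace
        return saved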
the string one line at a time. The resulting code is (in my opinion,
anyway), much easier to read. In the process, I found and fixed a
bug in the original parser's line numbering in error messages (it was
inconsistent between 0-based and 1-based). Also, check for missing
blank lines after the prompt on all prompt lines, not just PS1 lines
(test added).
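For example, a continuation line that runs straight into the source is now
reported too (hedged; the exact error text may differ):

    import doctest

    bad = '''
    >>> if 1:
    ...print "hi"
    '''
    try:
        doctest.DocTestParser().get_examples(bad, name='demo')
    except ValueError, err:
        print err     # complains that the "..." prompt lacks a blank after it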
Added XXX comment about why the undocumented PyRange_New() API function
is too broken to be worth the considerable pain of repairing.
Changed range_new() to stop using PyRange_New(). This fixes a variety
of bogus errors. Nothing in the core uses PyRange_New() now.
Documented that xrange() is intended to be simple and fast, and that
CPython restricts its arguments, and the length of its result sequence, to
native C longs.
Added some tests that failed before the patch, and repaired a test that
relied on a bogus OverflowError getting raised.
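A quick illustration of the documented restriction (hedged: sys.maxint is
the largest value that fits in the platform's C long):

    import sys

    try:
        xrange(sys.maxint + 1)    # one past the largest native C long
    except OverflowError:
        print 'xrange() arguments must fit in a native C long'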
This got slammed in when find() was fixed to stop grabbing doctests
from modules imported *by* the module being tested. Such tests cannot
be expected to succeed, since they'll be run with the current module's
globals. Dozens of Zope3 doctests were failing because of that.
It wasn't clear why ignore_imports got added then. Maybe it's because
some existing tests failed when the change was made. Whatever, it's
a Bad Idea so it's gone now.
The only use of it was exceedingly obscure, in test_doctest's "Duplicate
Removal" test. It was "needed" there because, as an artifact of running
a doctest inside a doctest, the func_globals of functions compiled in
the second-level doctest don't match the module globals, and so the
test-finder believed these functions were from a foreign module and
skipped them. But that took a long time to figure out, and I actually
understand some of this stuff <0.9 wink>.
That problem was resolved by moving the source code for the second-level
doctest into an actual module (test/doctest_aliases.py).
The only remaining difficulty was that the test for the deprecated
Tester.rundict() then failed, because the test finder doesn't take
module=None at face value, trying to guess which module the user really
intended then. Its guess wasn't appropriate for what Tester.rundict
needs when module=None is given to *it*, which is "no, there is no
module here, and I mean it". So now passing module=False means exactly
that. This is hokey, but ignore_imports=False was really a hack to worm
around that there was no way to tell the test-finder that module=None
*sometimes* means what it says. There was no use case for the combination
of passing a real module with ignore_imports=False.
Ripped out the docs for the new DocTestFinder's namefilter argument,
and renamed it to _namefilter; this only existed to support isprivate.
Removed the new DocTestFinder's objfilter argument. No point adding
more cruft to a broken filtering design.
This test is insanely slow, so it requires a resource. On my machine,
it also appears to dump core. I think the problem is a stack
overflow, but haven't been able to confirm.
interning were not clear here -- a subclass could be mutable, for
example -- and had bugs. Explicitly interning a subclass of string
via intern() will raise a TypeError. Internal operations that attempt
to intern a string subclass will have no effect.
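For example:

    class MyStr(str):
        pass

    try:
        intern(MyStr('example'))      # explicit interning of a str subclass
    except TypeError:
        print 'cannot intern a subclass of string'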
Added a few tests to test_builtin that include the old buggy code and
verifies that calls like PyObject_SetAttr() don't fail. Perhaps these
tests should have gone in test_string.
The change to use the newer httplib interface admitted the possibility
that we'd get an HTTP/1.1 chunked response, but the code didn't handle
it correctly. The raw socket object can't be passed to addinfourl(),
because it would read the undecoded response. Instead, addinfourl()
must call HTTPResponse.read(), which will handle the decoding.
One extra wrinkle is that the HTTPResponse object can't be passed to
addinfourl() either, because it doesn't implement readline() or
readlines(). As a quick hack, use socket._fileobject(), which
implements those methods on top of a read buffer. (suggested by mwh)
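A condensed sketch of the resulting handler tail (h and req stand for the
handler's HTTPConnection and Request; this is not the verbatim patch):

    import socket
    from urllib import addinfourl

    def wrap_response(h, req):
        r = h.getresponse()
        # Let HTTPResponse.read() do the chunked decoding, then wrap the
        # response so addinfourl() gets readline()/readlines();
        # socket._fileobject reads through recv(), so point that at read().
        r.recv = r.read
        fp = socket._fileobject(r)
        return addinfourl(fp, r.msg, req.get_full_url())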
Finally, add some tests based on test_urllibnet.
Thanks to Andrew Sawyers for originally reporting the chunked problem.
Specifically, time.strftime() no longer accepts a 0 in the yday position of a
time tuple, since that can crash some platform strftime() implementations.
parsedate_tz(): Change the return value to return 1 in the yday position.
Update tests in test_rfc822.py and test_email.py
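A hedged illustration of the combined effect (the exact exception message
is not guaranteed):

    import time
    from email.Utils import parsedate_tz

    parts = parsedate_tz('Wed, 01 Sep 2004 10:00:00 -0400')
    print parts[7]                        # the yday slot is now 1, not 0
    print time.strftime('%j', parts[:9])  # so the tuple is safe for strftime()

    # A tuple with 0 in the yday position should now be rejected up front.
    try:
        time.strftime('%j', (2004, 9, 1, 0, 0, 0, 2, 0, -1))
    except ValueError:
        print 'tm_yday of 0 is no longer accepted'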
Hack httplib to work with broken Akamai proxies.
Make sure that httplib doesn't add extra Accept-Encoding or
Content-Length headers if the client has already set them.
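For instance, a caller that sets those headers itself keeps control of them
(hedged example; host and body are made up):

    import httplib

    conn = httplib.HTTPConnection('www.example.com')
    # Because the caller supplies Accept-Encoding and Content-Length
    # explicitly, httplib should not tack on its own copies as well.
    conn.request('POST', '/submit', body='x=1',
                 headers={'Accept-Encoding': 'identity',
                          'Content-Length': '3'})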
python.org. This way the delay should be great enough for
testConnectTimeout() to pass even when one has a really fast Net connection
that allows connections faster than .001 seconds.