imports e.g. test_support must do so using an absolute package name
such as "import test.test_support" or "from test import test_support".
This also updates the README in Lib/test, and gets rid of the
duplicate data directory in Lib/test/data (replaced by
Lib/email/test/data).
Now Tim and Jack can have at it. :)
array. Our samplesort special-cases the snot out of this, running about
12x faster than *sort. The experimental mergesort runs it about 8x
faster than *sort without special-casing, but should really do better
than that (when merging runs of different lengths, right now it only
does something clever about finding where the second run begins in
the first and where the first run ends in the second, and that's more
of a temp-memory optimization).
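As a hedged illustration of that trimming idea (this is my own Python
sketch using bisect, not the actual C implementation): before merging
two adjacent sorted runs, binary-search where the second run's first
element would land in the first run and where the first run's last
element would land in the second; everything outside those bounds is
already in its final position, so only the middle actually needs
merging (and temp memory).

    import bisect

    def trimmed_merge_bounds(a, b):
        """Given adjacent sorted runs a and b, return (lo, hi) such that
        only a[lo:] and b[:hi] actually need to be merged."""
        # Everything in a that is <= b[0] is already in its final place.
        lo = bisect.bisect_right(a, b[0])
        # Everything in b that is >= a[-1] is already in its final place.
        hi = bisect.bisect_left(b, a[-1])
        return lo, hi

    # Example: only a[2:] and b[:3] need merging (and a temp buffer).
    print(trimmed_merge_bounds([1, 3, 5, 7, 9], [4, 6, 8, 20, 30]))  # (2, 3)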
from test.test_support import TestSkipped, run_unittest
to
from test_support import TestSkipped, run_unittest
Otherwise, if the Japanese codecs aren't installed, regrtest doesn't
believe the TestSkipped exception raised by this test matches the
except (ImportError, test_support.TestSkipped), msg:
it's looking for, and reports the skip as a crash failure instead of
as a skipped test.
I suppose this will make it harder to run this test outside of
regrtest, but under the assumption only Barry does that, better to
make it skip cleanly for everyone else.
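For what it's worth, a hedged sketch of why the mismatch happens,
assuming the 2.2-era layout where Lib/test is itself on sys.path: the
same file imported under two names produces two distinct module
objects, hence two distinct TestSkipped classes.

    # Illustrative only; these module names are from the 2.2-era layout.
    import test.test_support as pkg_ts   # as a submodule of the test package
    import test_support as flat_ts       # as a top-level module

    # Two separate module objects, so two separate exception classes:
    print(pkg_ts.TestSkipped is flat_ts.TestSkipped)   # False

    # A test raising the "wrong" TestSkipped falls through regrtest's
    #   except (ImportError, test_support.TestSkipped), msg:
    # clause and is reported as a crash rather than a skip.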
(i.e. email.test), so move the guts of them here from Lib/test. The
latter directory will retain stubs to run the email.test tests using
Python's standard regression test.
test_email_torture.py is a torture tester which will not run under
Python's test suite because I don't want to commit megs of data to
that project (it will fail cleanly there). When run under the mimelib
project it'll stress test the package with megs of message samples
collected from various locations in the wild.
email/test/data is a copy of Lib/test/data. The fate of the latter is
still undecided.
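A minimal sketch of what such a Lib/test stub might look like (the
loader calls here are my own choice, not necessarily the committed
stub):

    # Hypothetical Lib/test/test_email.py stub: the real tests live in
    # email.test; the stub just loads that module's cases and runs them.
    import unittest
    from email.test import test_email

    def test_main():
        suite = unittest.defaultTestLoader.loadTestsFromModule(test_email)
        unittest.TextTestRunner().run(suite)

    if __name__ == '__main__':
        test_main()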
backwards compatibility, we're silently deprecating get_type(),
get_subtype() and get_main_type(). We may eventually noisily
deprecate these. For now, we'll just fix a bug in the splitting of
the main and subtypes.
get_content_type(), get_content_maintype(), get_content_subtype(): New
methods which replace the above. These /always/ return a content type
string and do not take a failobj, because an email message always at
least has a default content type.
set_default_type(): Someday there may be additional default content
types, so don't hard code an assertion about the value of the ctype
argument.
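A hedged usage sketch of the new accessors (module path as in the
2.2-era package; the header value is illustrative):

    from email.Message import Message   # email.message in later Pythons

    msg = Message()
    msg['Content-Type'] = 'TEXT/HTML; charset="us-ascii"'

    print(msg.get_content_type())       # 'text/html' -- always a string
    print(msg.get_content_maintype())   # 'text'
    print(msg.get_content_subtype())    # 'html'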
version of PySlice_GetIndicesEx"):
> OK. Michael, if you want to check in indices(), go ahead.
Then I did what was needed, but didn't check it in. Here it is.
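For reference, the Python-level indices() mirrors
PySlice_GetIndicesEx: given a sequence length, it returns the concrete
(start, stop, step) triple the slice would use, with out-of-range
values clipped. For example:

    print(slice(None, None, -1).indices(10))   # (9, -1, -1)
    print(slice(2, 20).indices(5))             # (2, 5, 1) -- stop clipped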
quoting:
in non-strict mode, messages with a missing end-terminator no longer
require a blank line at the end; a single newline is sufficient now.
Handle trailing whitespace at the end of a boundary. Had to switch
from using string.split() to re.split().
Handle whitespace on the end of a parameter list for Content-type.
Handle whitespace on the end of a plain content-type header.
Specifically,
get_type(): Strip the content type string.
_get_params_preserve(): Strip the parameter names and values on both
sides.
_parsebody(): Lots of changes as described above, with some stylistic
changes by Barry (who hopefully didn't screw things up ;).
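A small sketch of the whitespace issue (the boundary value is
illustrative): an exact string match, or string.split() on the exact
boundary, misses a boundary line with trailing blanks, while a regular
expression can absorb them.

    import re

    boundary = '--BOUNDARY'
    line = '--BOUNDARY  \t'    # boundary line with trailing whitespace

    print(line == boundary)                                        # False
    print(bool(re.match(re.escape(boundary) + r'[ \t]*$', line)))  # True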
the default range to end at 2**20 (machines are much faster now).
Fixed what was quite arguably a bug, explaining an old mystery: the
"!sort" case here constructs what *was* a quadratic-time disaster for
the old quicksort implementation. But under the current samplesort, it
always ran much faster than *sort (the random case). This never made
sense. Turns out it was because !sort was sorting an integer array,
while all the other cases sort floats; and comparing ints goes much
quicker than comparing floats in Python. After changing !sort to chew
on floats instead, it's now slower than the random sort case, which
makes more sense (but is just a few percent slower; samplesort is
massively less sensitive to "bad patterns" than quicksort).
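A rough timing sketch (not part of sortperf.py) of the underlying
point that, on interpreters of that era, comparing ints was noticeably
cheaper than comparing floats, so an all-int case isn't comparable to
the float cases:

    import random, time

    n = 2 ** 16
    ints = [random.randrange(n) for i in range(n)]
    floats = [float(i) for i in ints]

    for label, data in (('ints', ints), ('floats', floats)):
        work = data[:]
        t0 = time.time()
        work.sort()
        print(label, time.time() - t0)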
The implementation now stores all the lines of the request in a buffer
and makes a single send() call when the request is finished,
specifically when endheaders() is called.
This appears to improve performance. The old code called send() for
each line. The sends are all short, so they caused bad interactions
with the Nagle algorithm and delayed acknowledgements. In simple
tests, the second packet was delayed by 100s of ms. The second send was
delayed by the Nagle algorithm, waiting for the ack. The delayed ack
strategy delays the ack in hopes of piggybacking it on a data packet,
but the server won't send any data until it receives the complete
request.
This change minimizes the chance that Nagle + delayed ack will cause
a problem, although a request large enough to be broken into two
packets will still suffer some delay. Luckily the MSS is large enough
to accommodate most requests in a single packet.
XXX Bug fix candidate?
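A minimal sketch of the buffering approach, not the actual httplib
code (the class and method names here are illustrative):

    class BufferedRequest:
        """Collect the request line and headers; send them in one call."""

        def __init__(self, sock):
            self._sock = sock
            self._lines = []

        def putrequest(self, method, path):
            self._lines.append('%s %s HTTP/1.1' % (method, path))

        def putheader(self, name, value):
            self._lines.append('%s: %s' % (name, value))

        def endheaders(self):
            # The blank line ends the header block; a single send()
            # avoids the Nagle/delayed-ack interaction described above.
            data = '\r\n'.join(self._lines) + '\r\n\r\n'
            self._sock.sendall(data.encode('ascii'))
            self._lines = []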
existed at the time atexit first got imported. That's a bug, and this
fixes it.
Also reworked test_atexit.py to test for this too, and to stop using
an "expected output" file, and to test what actually happens at exit
instead of just simulating what it thinks atexit will do at exit.
Bugfix candidate, but it's messy so I'll backport to 2.2 myself.
The test of httplib makes it difficult to maintain httplib. There are
too many idioms that pyclbr doesn't seem to understand, and I don't
understand how to update these tests to make them work.
Also remove commented out test of urllib2.
more spaces only crashed pdb.
While I was at it, cleaned up some style nits (spaces between function
and parenthesis, and redundant parentheses in if statement).
takes much longer to run in the context of the test suite than when run in
isolation. That's because it forces a large number of full collections,
which take time proportional to the total number of gc'ed objects in the
whole system.
But since the dangerous implementation trickery that caused this test to
fail in 2.0, 2.1 and 2.2 doesn't exist in 2.3 anymore (the trashcan
mechanism stopped doing evil things when the possibility for compiling
without cyclic gc was taken away), such an expensive test is no longer
justified. This checkin leaves the test intact, but fiddles the
constants to reduce the runtime by about a factor of 5.
debug-build failure when an instance of a new-style class is resurrected
by a __del__ method -- we simply never had any code that tried this.
This is already fixed in 2.3 CVS. In 2.2.1, it blows up via
Fatal Python error: GC object already in linked list
I'll fix it in 2.2.1 CVS next.
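A hedged repro sketch of the resurrection scenario (the class is
illustrative, not the original test code):

    survivors = []

    class Immortal(object):
        def __del__(self):
            # Resurrection: the dying instance gains a new reference.
            survivors.append(self)

    obj = Immortal()
    del obj                  # __del__ runs; the object lives on in survivors
    print(len(survivors))    # 1 -- this pattern blew up on a 2.2.1 debug build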
The recent SSL changes resulted in important, but subtle changes to
close() semantics. Since builtin socket makefile() is not called for
SSL connections, we don't get separately closeable fds for connection
and response. Comments in the code explain how to restore makefile
semantics.
Bug fix candidate.
.splitlines() on them, since they may be Header instances.
test_multilingual(), test_header_ctor_default_args(): New tests of
make_header() and that Header can take all default arguments.
create a Header instance. Closes feature request #539481.
Header.__init__(): Allow the initial string to be omitted.
__eq__(), __ne__(): Support rich comparisons for equality of Header
instances with Header instances or strings.
Also, update a bunch of docstrings.
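A hedged usage sketch of the API described above (module path as in
the 2.2-era package; the encoded word is just an example):

    from email.Header import Header, make_header, decode_header

    h = Header()               # the initial string may now be omitted
    h.append('Hello')

    print(h == Header('Hello'))   # True: equality against another Header
    print(h == 'Hello')           # True: ... or against a plain string

    # make_header() turns decode_header() output into a Header instance.
    subject = make_header(decode_header('=?iso-8859-1?q?p=F6stal?='))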
argument to the constructor -- defaulting to true -- which is
different than Anthony's approach of using global state.
parse(), parsestr(): Grow a `headersonly' argument which stops parsing
once the header block has been seen, i.e. it does /not/ parse or even
read the body of the message. This is used for parsing message/rfc822
type messages.
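A hedged usage sketch (2.2-era module path; the sample text is
illustrative):

    from email.Parser import Parser   # email.parser in later Pythons

    raw = 'From: person@example.com\nSubject: dingus\n\nbody we skip\n'

    msg = Parser().parsestr(raw, headersonly=True)
    print(msg['Subject'])    # 'dingus' -- the header block is available
    # The body has not been parsed into sub-objects; with parse(fp) it
    # is not even read from the file.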
We need test cases for the non-strict parsing. Anthony will supply
these.
_parsebody(): We can get rid of the isdigest end-of-line kludges,
although we still need to know if we're parsing a multipart/digest so
we can set the default type accordingly.
Normally the default content type is text/plain, but the RFCs state
that inside a multipart/digest, the default type is message/rfc822.
To preserve idempotency, we need a place to define the default type
separate from the Content-Type: header.
get_default_type(), set_default_type(): Accessor and mutator methods
for the default type.
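A hedged sketch of the accessor/mutator pair (2.2-era module path):

    from email.Message import Message   # email.message in later Pythons

    msg = Message()
    print(msg.get_default_type())         # 'text/plain' ordinarily

    # Inside a multipart/digest the parser switches each subpart's default:
    msg.set_default_type('message/rfc822')
    # With no explicit Content-Type: header, the content type accessors
    # now report the digest default instead of text/plain.
    print(msg.get_content_type())         # 'message/rfc822'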
recursive generation).
_dispatch(): If the message object doesn't have a Content-Type:
header, check its default type instead of assuming it's text/plain.
This makes for correct generation of message/rfc822 containers.
_handle_multipart(): We can get rid of the isdigest kludge. Just
print the message as normal and everything will work out correctly.
_handle_multipart_digest(): We don't need this anymore either.
ndiff function, so just alias it to assertEqual in that case.
Various: make sure all openfile()/read()'s are wrapped in
try/finally's so the file gets closed.
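The wrapping referred to is the usual try/finally pattern; a small
sketch (the filename is illustrative):

    fp = open('msg_01.txt')   # the tests use an openfile() helper for this
    try:
        data = fp.read()
    finally:
        fp.close()            # runs even if read() raises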
A bunch of new tests checking the corner cases for multipart/digest
and message/rfc822.
If multiple header fields with the same name occur, they are combined
according to the rules in RFC 2616 sec 4.2:
Appending each subsequent field-value to the first, each separated by
a comma. The order in which header fields with the same field-name are
received is significant to the interpretation of the combined field
value.
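A hedged sketch of that combining rule (header names and values are
illustrative, not the httplib internals):

    received = [
        ('Accept', 'text/html'),
        ('Cache-Control', 'no-cache'),
        ('Accept', 'application/xml'),
    ]

    combined = {}
    for name, value in received:
        key = name.lower()
        if key in combined:
            # Append in arrival order, comma separated (RFC 2616 sec 4.2).
            combined[key] = combined[key] + ', ' + value
        else:
            combined[key] = value

    print(combined['accept'])    # 'text/html, application/xml'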
Section 19.6 of RFC 2616 (HTTP/1.1):
It is beyond the scope of a protocol specification to mandate
compliance with previous versions. HTTP/1.1 was deliberately
designed, however, to make supporting previous versions easy....
And we would expect HTTP/1.1 clients to:
- recognize the format of the Status-Line for HTTP/1.0 and 1.1
responses;
- understand any valid response in the format of HTTP/0.9, 1.0, or
1.1.
The changes to the code do handle responses in the format of HTTP/0.9.
Some users may consider this a bug because all responses with a
sufficiently corrupted status line will look like an HTTP/0.9
response. These users can pass strict=1 to the HTTP constructors to
get a BadStatusLine exception instead.
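A hedged usage sketch (httplib is the 2.x module name; the host is
illustrative):

    import httplib

    conn = httplib.HTTPConnection('www.example.com', strict=1)
    conn.request('GET', '/')
    # With strict=1 a garbled status line raises httplib.BadStatusLine
    # instead of being silently treated as an HTTP/0.9 response.
    response = conn.getresponse()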
While this is a new feature of sorts, it enhances the robustness of
the code (be tolerant in what you accept). Thus, I consider it a bug
fix candidate.
XXX strict needs to be documented.