and this broke a Zope "pipelining" test which read multiple responses
from the same connection (this attaches a new file object to the
socket for each response). Added a test for this too.
(I want to do some code cleanup too, but I thought I'd first fix
the problem with as little code as possible, and add a unit test
for this case. So that's what this checkin is about.)
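For reference, the pattern the Zope test relies on looks roughly like this -- a minimal sketch (not the Zope test or httplib's code, and written against the modern socket.makefile() signature) of reading pipelined responses by attaching a fresh file object to the same socket for each one:

    import socket

    def read_one_response(sock):
        # Attach a fresh file object to the (still-open) socket for this
        # response only; unbuffered, so no bytes belonging to the next
        # response get swallowed by a read-ahead buffer.
        fp = sock.makefile("rb", buffering=0)
        status_line = fp.readline()
        length = 0
        while True:                         # headers end at a blank line
            line = fp.readline()
            if line in (b"\r\n", b"\n", b""):
                break
            name, _, value = line.partition(b":")
            if name.strip().lower() == b"content-length":
                length = int(value.decode())
        body = b""
        while len(body) < length:           # raw reads may come up short
            chunk = fp.read(length - len(body))
            if not chunk:
                break
            body += chunk
        fp.close()                          # closes this file object only;
                                            # the socket remains usable
        return status_line, body

Calling read_one_response() repeatedly on the same connected socket is the "multiple responses from one connection" case that broke.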
on Windows. The test_sequence() ERROR is easily repaired if we're
willing to add an os.unlink() line to mhlib's updateline(). The
test_listfolders FAIL I gave up on -- I don't remember enough about Unix
link esoterica to recall why a link count of 2 is something a well-
written program should be keenly interested in <wink>.
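For reference, the Windows wrinkle is that os.rename() won't overwrite an existing target there (POSIX rename() replaces it silently), so the one-line fix is the classic unlink-then-rename idiom -- sketched here, not mhlib's actual updateline() code:

    import os

    def replace_file(tmp_path, final_path):
        # On Windows, os.rename() raises an error if the target exists,
        # so remove it first; that's the extra os.unlink() line.
        if os.name == "nt" and os.path.exists(final_path):
            os.unlink(final_path)
        os.rename(tmp_path, final_path)

(Today os.replace() overwrites the target on both platforms, but it didn't exist then.)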
Added new heapify() function, which transforms an arbitrary list into a
heap in linear time; that's a fundamental tool for using heaps in real
life <wink>.
Added heapify() test. Added a "less naive" N-best algorithm to the test
suite, and noted that this could actually go much faster (building on
heapify()) if we had max-heaps instead of min-heaps (the iterative method
is appropriate when all the data isn't known in advance, but when it is
known in advance the tradeoffs get murkier).
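The "less naive" N-best idea, roughly (a sketch of the approach, not the test suite's code): heapify() the first N items in linear time, then heapreplace() the smallest only when a later item beats it.

    import heapq
    import random

    def n_best(iterable, n):
        # Keep the n largest items seen so far in a size-n min-heap.
        it = iter(iterable)
        heap = [x for _, x in zip(range(n), it)]
        heapq.heapify(heap)             # the new linear-time transform
        for item in it:
            if heap[0] < item:          # heap[0] is always the smallest
                heapq.heapreplace(heap, item)
        return sorted(heap)

    data = [random.randrange(10 ** 6) for _ in range(10000)]
    assert n_best(data, 10) == sorted(data)[-10:]

With a max-heap you could instead heapify() all the data once and pop the largest N times, which is the faster known-in-advance route alluded to above.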
at random, and replaces the elements at those positions with new random
values. I was pleasantly surprised by how fast this goes! It's hard to
conceive of an algorithm that could special-case for this effectively.
Plus it's exactly what happens if a burst of gamma rays corrupts your
sorted database on disk <wink>.
  i     2**i  *sort  ...  %sort
 15    32768   0.18  ...   0.03
 16    65536   0.24  ...   0.04
 17   131072   0.53  ...   0.08
 18   262144   1.17  ...   0.16
 19   524288   2.56  ...   0.35
 20  1048576   5.54  ...   0.77
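For concreteness, a %sort-style input can be built along these lines (a sketch only; the fraction of disturbed positions, 1% here, is an assumption rather than sortperf.py's exact figure):

    import random

    def percent_sort_input(n, frac=0.01):
        # Start from an already-sorted list, then overwrite a small random
        # fraction of positions with new random values.
        data = list(range(n))
        for _ in range(int(n * frac)):
            data[random.randrange(n)] = random.randrange(n)
        return data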
and age of rampant computer break-ins I imagine there are plenty of systems
with telnet disabled. Successful check of at least one getservbyname() call
is required for success
in the stability tests.
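Roughly the shape of the getservbyname() check (a sketch, not the checked-in test): probe a handful of well-known service names and pass if any one of them resolves.

    import socket

    def probe_getservbyname(names=("telnet", "ssh", "smtp", "daytime")):
        # Don't insist on any particular service (telnet may well be
        # disabled); at least one successful lookup is enough.
        for name in names:
            try:
                socket.getservbyname(name, "tcp")
            except socket.error:
                continue
            return True
        raise RuntimeError("getservbyname() failed for every probed service")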
Bizarre: this takes 11x longer to run if and only if test_longexp is
run before it, on my box. The bigger REPS is in test_longexp, the
slower this gets. What happens on your box? It's not gc on my box
(which is good, because gc isn't a plausible candidate here).
The slowdown is massive in the parts of test_sort that implicitly
invoke a new-style class's __lt__ or __cmp__ methods. If I boost
REPS large enough in test_longexp, even the test_sort tests on an array
of size 64 visibly c-r-a-w-l. The relative slowdown is even worse in
a debug build. And if I reduce REPS in test_longexp, the slowdown in
test_sort goes away.
test_longexp does do horrid things to Win98's management of user
address space, but I thought I had made that a whole lot better a month
or so ago (by overallocating aggressively in the parser).
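For concreteness, the comparisons at issue look like this -- a minimal sketch (not test_sort's actual helper classes) of sorting objects whose ordering goes through a Python-level __lt__ on a new-style class:

    import random

    class Wrapped(object):              # new-style class
        def __init__(self, key):
            self.key = key
        def __lt__(self, other):        # every comparison runs Python code
            return self.key < other.key

    data = [Wrapped(random.random()) for _ in range(64)]
    data.sort()                         # the kind of sort that crawls
    assert all(not b < a for a, b in zip(data, data[1:]))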
If the long is large enough, the return value will be a negative int.
In this case, calling the function a second time won't return the
original value passed in.
imports of test modules now import from the test package. Other
related oddities are also fixed (like DeprecationWarning filters that
weren't specifying the full import path, etc.). Also did a general
code cleanup to remove all "from test.test_support import *"'s. Other
from...import *'s weren't changed.
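The flavor of the change, with a made-up module name for the warning filter (an illustration, not a diff from the checkin; test.test_support was later renamed to test.support):

    import warnings

    # Before: wildcard import of the helpers.
    #   from test.test_support import *
    # After: import the support module itself from the test package.
    try:
        from test import test_support
    except ImportError:                         # the post-rename spelling
        from test import support as test_support

    # Deprecation filters likewise name the full dotted module path;
    # "test.test_spam" is a made-up example.
    warnings.filterwarnings("ignore", category=DeprecationWarning,
                            module="test.test_spam")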
See there for a description.
Added test case.
Bugfix candidate for 2.2.x, not sure about previous versions:
probably low priority, because virtually no one runs debug builds.
imports e.g. test_support must do so using an absolute package name
such as "import test.test_support" or "from test import test_support".
This also updates the README in Lib/test, and gets rid of the
duplicate data directory in Lib/test/data (replaced by
Lib/email/test/data).
Now Tim and Jack can have at it. :)
array. Our samplesort special-cases the snot out of this, running about
12x faster than *sort. The experimental mergesort runs it about 8x
faster than *sort without special-casing, but should really do better
than that (when merging runs of different lengths, right now it only
does something clever about finding where the second run begins in
the first and where the first run ends in the second, and that's more
of a temp-memory optimization).
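The "something clever" about the run endpoints can be illustrated with bisect (a sketch of the idea, not the experimental mergesort's C code):

    import bisect

    def trimmed_merge_bounds(a, b):
        # Given consecutive ascending runs a and b: a[:lo] is already <=
        # b[0] and b[hi:] is already >= a[-1], so those chunks are in their
        # final positions and only a[lo:] and b[:hi] need to be merged --
        # which is also all the temp memory the merge needs.
        lo = bisect.bisect_right(a, b[0])
        hi = bisect.bisect_left(b, a[-1])
        return lo, hi

    a = [1, 2, 3, 10, 20, 30]
    b = [5, 6, 7, 40, 50]
    assert trimmed_merge_bounds(a, b) == (3, 3)   # merge [10, 20, 30] with [5, 6, 7]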