New test "+sort", tacking 10 random floats on to the end of a sorted
array.  Our samplesort special-cases the snot out of this, running about
12x faster than *sort.  The experimental mergesort runs it about 8x
faster than *sort without special-casing, but should really do better
than that (when merging runs of different lengths, right now it only
does something clever about finding where the second run begins in
the first and where the first run ends in the second, and that's more
of a temp-memory optimization).
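
To make that parenthetical concrete: when two adjacent ascending runs are about to be merged, a binary search can find where the second run's first element belongs in the first run, and where the first run's last element belongs in the second; everything outside those two bounds is already in its final position, so only the slices in between need scratch space and an actual merge.  A rough Python sketch of that trimming idea (illustrative only; trim_for_merge and the data below are not part of the real sorting code):

import bisect

def trim_for_merge(a, b):
    # a and b are adjacent ascending runs, with a immediately before b.
    # Elements of a that are <= b[0] are already in their final positions,
    # as are elements of b that are >= a[-1]; only the slices in between
    # need scratch space and an actual merge.
    start = bisect.bisect_right(a, b[0])  # where the second run begins in the first
    end = bisect.bisect_left(b, a[-1])    # where the first run ends in the second
    return a[start:], b[:end]

# Example: a long ascending run followed by a short one.
a = [float(i) for i in range(2 ** 10)]
b = [3.5, 700.25, 123456.0]
left, right = trim_for_merge(a, b)
print("still need to merge %d of %d and %d of %d elements" %
      (len(left), len(a), len(right), len(b)))

Since only the trimmed slices have to be copied aside before merging, the main payoff is the smaller temp buffer -- hence "more of a temp-memory optimization" above.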
Tim Peters 2002-07-21 17:37:03 +00:00
parent 53d019cf5a
commit 7ea39b135a
1 changed file with 9 additions and 6 deletions

@@ -74,13 +74,14 @@ def tabulate(r):
     *sort: random data
     \sort: descending data
     /sort: ascending data
-    3sort: ascending data but with 3 random exchanges
+    3sort: ascending, then 3 random exchanges
+    +sort: ascending, then 10 random at the end
     ~sort: many duplicates
     =sort: all equal
     !sort: worst case scenario
     """
-    cases = ("*sort", "\\sort", "/sort", "3sort", "~sort", "=sort", "!sort")
+    cases = tuple([ch + "sort" for ch in r"*\/3+~=!"])
     fmt = ("%2s %7s" + " %6s"*len(cases))
     print fmt % (("i", "2**i") + cases)
     for i in r:
@@ -100,6 +101,11 @@ def tabulate(r):
             L[i1], L[i2] = L[i2], L[i1]
         doit(L) # 3sort

+        # Replace the last 10 with random floats.
+        if n >= 10:
+            L[-10:] = [random.random() for dummy in range(10)]
+        doit(L) # +sort
+
         # Arrange for lots of duplicates.
         if n > 4:
             del L[4:]
@@ -117,10 +123,7 @@ def tabulate(r):
         # This one looks like [3, 2, 1, 0, 0, 1, 2, 3]. It was a bad case
         # for an older implementation of quicksort, which used the median
-        # of the first, last and middle elements as the pivot. It's still
-        # a worse-than-average case for samplesort, but on the order of a
-        # measly 5% worse, not a quadratic-time disaster as it was with
-        # quicksort.
+        # of the first, last and middle elements as the pivot.
         half = n // 2
         L = range(half - 1, -1, -1)
         L.extend(range(half))
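
To spell out the pattern that last comment describes, the construction at the bottom of the hunk builds a descending half followed by an ascending half.  A quick standalone check (worst_case is just an illustrative name, not something defined in sortperf.py):

def worst_case(n):
    # Descending half, then ascending half:
    # e.g. [3, 2, 1, 0, 0, 1, 2, 3] for n = 8.
    half = n // 2
    L = list(range(half - 1, -1, -1))
    L.extend(range(half))
    return L

assert worst_case(8) == [3, 2, 1, 0, 0, 1, 2, 3]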