ldexp. Both methods are exact and return the same results. It turns out
multiplication is a few (but just a few) percent faster on my box.
They're both significantly faster than using struct with a Q format
to convert bytes to a 64-bit long (struct.unpack() appears to lose due
to the tuple creation/teardown overhead), and calling _hexlify is
significantly faster than doing bytes.encode('hex'). So we appear to
have hit a local minimum (wrt speed) here.
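A minimal sketch (not the module's actual code) of the two exact conversions
being compared; os.urandom() and hexlify() here merely stand in for however
the 53 random bits are really obtained:

    from binascii import hexlify
    from math import ldexp
    import os

    raw = os.urandom(7)                     # 56 random bits from any source
    bits53 = int(hexlify(raw), 16) >> 3     # keep only the top 53 bits
    via_ldexp = ldexp(bits53, -53)          # scale by 2**-53 via ldexp
    via_mult = bits53 * 2.0 ** -53          # scale by multiplication instead
    assert via_ldexp == via_mult            # both are exact, so they agree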
components without division and without roundoff error for properly
sized mantissas (i.e. on systems with 53 or more mantissa bits per
float). Eliminates the previous implementation's rounding bias, as
aptly demonstrated by Tim Peters.
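A rough sketch of that kind of construction, assuming the underlying
generator hands back two 32-bit words (the exact splicing the module uses
may differ):

    from math import ldexp

    def random_from_words(a, b):
        # Keep the top 27 bits of one word and the top 26 of the other,
        # splice them into a 53-bit integer, and scale by 2**-53: every
        # step is exact, and no division is needed.
        return ldexp((a >> 5) * 67108864 + (b >> 6), -53)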
Replaced outcomes from native Tcl/Tk tests. Maybe the diffs are legit,
maybe not. I noticed that the Tcl results I'm replacing here claimed
both that there were no failures, and that one file had tests with
failures, so I wasn't inclined to trust them <wink>.
to unittest, so make it official: new module constants COMPARISON_FLAGS
and REPORTING_FLAGS, which are bitmasks formed by OR'ing together the
relevant individual option flags.
set_unittest_reportflags(): Reworked to use REPORTING_FLAGS, and simplified
the overly complicated flag logic (see the usage sketch below).
class FakeModule: Removed this; neither documented nor used.
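A small usage sketch of the new constants and the reworked function; the
REPORT_* names are doctest's existing individual reporting flags:

    import doctest

    # REPORTING_FLAGS is the OR of the individual REPORT_* option flags:
    assert doctest.REPORTING_FLAGS & doctest.REPORT_UDIFF
    assert doctest.REPORTING_FLAGS & doctest.REPORT_ONLY_FIRST_FAILURE

    # set_unittest_reportflags() accepts only reporting flags and returns
    # the previous setting, so it can be restored afterwards.
    old = doctest.set_unittest_reportflags(doctest.REPORT_ONLY_FIRST_FAILURE)
    doctest.set_unittest_reportflags(old)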
1) When a breakpoint is set via a function name:
- the breakpoint gets the lineno of the def statement
- a new funcname attribute is attached to the breakpoint
2) bdb.effective() calls new function checkfuncname() to handle:
- the def statement itself is executed: don't break.
- the first executable line of a function whose breakpoint was set on the
lineno of the def statement is reached: break (see the sketch below).
This fixes bugs 976878, 926369 and 875404. Thanks Ilya Sandler.
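A simplified sketch of the decision checkfuncname() makes. bp.funcname is
the new attribute described above; bp.func_first_line is only an
illustrative name for remembering where the function body starts, not
necessarily bdb's actual attribute:

    def check_funcname(bp, frame):
        if not bp.funcname:
            # Ordinary breakpoint: break whenever the line number matches.
            return frame.f_lineno == bp.line
        if frame.f_code.co_name != bp.funcname:
            return False                    # not inside the named function
        if bp.func_first_line is None:
            # The def has executed and the function is now being entered for
            # the first time: remember its first executable line ...
            bp.func_first_line = frame.f_lineno
        # ... and break only there, never on the def statement itself.
        return frame.f_lineno == bp.func_first_line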
This checkin is adapted from part 2 (of 3) of Trevor Perrin's patch set.
BACKWARD INCOMPATIBILITY: SHIFT must now be divisible by 5. AFAIK,
nobody will care. long_pow() could be made more complicated to work around
that, if necessary.
long_pow():
- BUGFIX: This leaked the base and power when the power was negative
(and so the computation delegated to float pow).
- Instead of doing right-to-left exponentiation, do left-to-right. This
is more efficient for small bases, which is the common case.
- In addition, if the exponent is large (more than FIVEARY_CUTOFF
digits), precompute [a**i % c for i in range(32)], and process the exponent
left to right, 5 bits at a time (sketched below).
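A minimal Python sketch of that left-to-right, 5-bits-at-a-time scheme, for
non-negative exponents (the C code naturally differs in detail):

    def pow_5ary(a, b, c):
        """Sketch of computing a**b % c left to right, 5 bits at a time."""
        table = [pow(a, i, c) for i in range(32)]   # a**i % c for i in range(32)
        result = 1 % c
        nbits = max(b.bit_length(), 1)
        # Walk the exponent in whole 5-bit groups, most significant first.
        for shift in range((nbits + 4) // 5 * 5 - 5, -1, -5):
            for _ in range(5):
                result = result * result % c        # one squaring per bit
            result = result * table[(b >> shift) & 0x1f] % c
        return result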
l_divmod():
- The signature changed so that callers who don't want the quotient,
or don't want the remainder, can pass NULL in the slot they don't
want. This saves them from having to declare a variable for the unwanted
result, and from having to remember to decref it.
long_mod(), long_div(), long_classic_div():
- Adjusted to the new l_divmod() signature, and simplified as a result.
This checkin is adapted from part 1 (of 3) of Trevor Perrin's patch set.
x_mul():
- sped up a little by optimizing the C
- sped up a lot (~2X) when it's doing a square (see the sketch after this
entry); note that long_pow() squares often
k_mul():
- more cache-friendly now if it's doing a square
KARATSUBA_CUTOFF:
- boosted; grade-school multiplication is quicker now, and it may have been
too low for many platforms anyway
KARATSUBA_SQUARE_CUTOFF:
- new
- since x_mul() is now a lot faster at squaring, the point at which
Karatsuba pays off for squaring is much higher than for general multiplication
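A rough Python rendering (not the C) of why grade-school squaring needs only
about half the digit multiplications of a general product: each off-diagonal
product a[i]*a[j] appears twice in the square, so it is computed once and
doubled. The base of 2**15 is only illustrative:

    def square_digits(a, base=2 ** 15):
        """Square a number given as a list of digits, least significant first."""
        n = len(a)
        out = [0] * (2 * n)
        for i in range(n):
            out[2 * i] += a[i] * a[i]           # the n diagonal terms
            for j in range(i + 1, n):
                out[i + j] += 2 * a[i] * a[j]   # each cross term, computed once
        carry = 0                               # propagate carries afterwards
        for k in range(2 * n):
            carry += out[k]
            out[k] = carry % base
            carry //= base
        return out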
need to convert str objects from the iterable to unicode. So, if
someone set the system default encoding to something nasty enough,
the conversion process could mutate the input iterable as a side
effect, and PySequence_Fast doesn't hide that from us if the input was
a list. In other words, we can't assume the size of PySequence_Fast's
result is invariant across PyUnicode_FromObject() calls.
much to reduce the size of the code, but greatly improves its clarity.
It's also quicker in what's probably the most common case (the argument
iterable is a list). On the downside, if the iterable isn't a list or a
tuple, a temporary tuple is materialized containing the entire input
sequence, and that's a bigger temporary memory burden. Yawn.
this module imports itself explicitly from test (so the "file names"
the current doctest synthesizes for examples don't vary depending on how
test_generators is run).
s.join([t]) is t
for (s, t) in (str, str), (unicode, unicode), and (str, unicode).
For (unicode, str), verify that it's *not* t (the result is promoted
to unicode instead). Also verify that "the right thing" happens when t is
a subclass of str or unicode.
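In Python 2 terms, a rough sketch of the identities those tests check (the
subclass cases are left to the tests themselves):

    t = "abc"
    u = u"abc"
    assert "".join([t]) is t        # (str, str): the very same object comes back
    assert u"".join([u]) is u       # (unicode, unicode): likewise
    assert "".join([u]) is u        # (str, unicode): likewise
    assert u"".join([t]) is not t   # (unicode, str): t is promoted to a new unicode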