1) When a breakpoint is set via a function name:
- the breakpoint gets the lineno of the def statement
- a new funcname attribute is attached to the breakpoint
2) bdb.effective() calls the new function checkfuncname() to handle two cases:
- the def statement itself is executed: don't break.
- the first executable line of a function with a breakpoint on the lineno
of its def statement is reached: break (see the sketch below).
This fixes bugs 976878, 926369 and 875404. Thanks Ilya Sandler.
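The idea, as a rough Python sketch (not bdb's actual code; the breakpoint
object is assumed to carry the funcname and line attributes described
above):

    def checkfuncname(b, frame):
        """Decide whether breakpoint b applies in the current frame."""
        if not b.funcname:
            # Ordinary breakpoint: break only on that exact line number.
            return b.line == frame.f_lineno

        # Breakpoint set via a function name.  The def statement itself
        # executes in the enclosing frame, whose code object has a
        # different co_name, so it never matches here: don't break.
        if frame.f_code.co_name != b.funcname:
            return False

        if getattr(b, "func_first_executable_line", None) is None:
            # First time we are called inside the function: remember where
            # its body starts, i.e. the first executable line after the def.
            b.func_first_executable_line = frame.f_lineno

        # Break only on that first executable line, never anywhere else.
        return b.func_first_executable_line == frame.f_lineno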
This checkin is adapted from part 2 (of 3) of Trevor Perrin's patch set.
BACKWARD INCOMPATIBILITY: SHIFT must now be divisible by 5. AFAIK,
nobody will care. long_pow() could be complicated to worm around that,
if necessary.
long_pow():
- BUGFIX: This leaked the base and power when the power was negative
(and so the computation delegated to float pow).
- Instead of doing right-to-left exponentiation, do left-to-right. This
is more efficient for small bases, which is the common case.
- In addition, if the exponent is large (more than FIVEARY_CUTOFF
digits), precompute [a**i % c for i in range(32)], and go left to
right 5 bits at a time (see the sketch after this list).
l_divmod():
- The signature changed so that callers who don't want the quotient,
or don't want the remainder, can pass NULL in the slot they don't
want. This saves them from having to declare a variable for unwanted
results, and from remembering to decref it.
long_mod(), long_div(), long_classic_div():
- Adjusted to the new l_divmod() signature, and simplified as a result.
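For illustration, a Python model of the left-to-right 5-ary technique
described for long_pow() above -- not the C code, and skipping the
small-exponent and negative-exponent paths:

    def pow_mod_5ary(a, b, c):
        """Left-to-right exponentiation, 5 bits of the exponent at a time."""
        assert b >= 0 and c > 0
        # Precompute a**i % c for i in range(32), the 32 possible 5-bit digits.
        table = [1 % c]
        for _ in range(31):
            table.append(table[-1] * a % c)

        result = 1 % c
        nbits = b.bit_length()
        # Walk the exponent from its most significant end in 5-bit groups.
        for shift in range((nbits + 4) // 5 * 5 - 5, -1, -5):
            for _ in range(5):              # result = result**32 % c
                result = result * result % c
            result = result * table[(b >> shift) & 0x1F] % c
        return result

    # Sanity check against the builtin:
    # pow_mod_5ary(3, 1000003, 10**9 + 7) == pow(3, 1000003, 10**9 + 7)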
This checkin is adapted from part 1 (of 3) of Trevor Perrin's patch set.
x_mul()
- sped a little by optimizing the C
- sped a lot (~2X) if it's doing a square; note that long_pow() squares
often (see the sketch after this list)
k_mul()
- more cache-friendly now if it's doing a square
KARATSUBA_CUTOFF
- boosted; grade-school mult is quicker now, and it may have been too low
for many platforms anyway
KARATSUBA_SQUARE_CUTOFF
- new
- since x_mul is a lot faster at squaring now, the point at which
Karatsuba pays for squaring is much higher than for general mult
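Why squaring is so much cheaper in grade-school multiplication, as a
Python model (not x_mul()'s C code): each cross product x[i]*x[j] with
i < j occurs twice in the full product, so it can be computed once and
doubled.

    def square_digits(x, base=2**15):
        """Square a number given as a little-endian list of digits in `base`."""
        n = len(x)
        prod = [0] * (2 * n)
        for i in range(n):
            # The diagonal term x[i]**2 appears once.
            prod[2 * i] += x[i] * x[i]
            # Each off-diagonal term appears twice: add it once, doubled.
            for j in range(i + 1, n):
                prod[i + j] += 2 * x[i] * x[j]
        # Propagate carries so every digit ends up < base.
        carry = 0
        for k in range(2 * n):
            carry += prod[k]
            prod[k] = carry % base
            carry //= base
        assert carry == 0
        return prod

    # e.g. square_digits([3, 1], base=10) == [9, 6, 1, 0], i.e. 13**2 == 169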
unicode.join() may need to convert str objects from the iterable to
unicode. So, if someone set the system default encoding to something
nasty enough, the conversion process could mutate the input iterable as
a side effect, and PySequence_Fast doesn't hide that from us if the
input was a list. IOW, we can't assume the size of PySequence_Fast's
result is invariant across PyUnicode_FromObject() calls.
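A Python-level illustration of the hazard, with a made-up Sneaky class
standing in for a codec with side effects:

    class Sneaky:
        """An item whose conversion to a string mutates the list it lives in."""
        def __init__(self, victims):
            self.victims = victims
        def __str__(self):
            self.victims.append("surprise")   # side effect during conversion
            return "sneaky"

    items = ["a", "b"]
    items.append(Sneaky(items))

    # Converting the items one by one can change the list's size under us,
    # so a size measured before the loop is no longer trustworthy after it.
    size_before = len(items)
    parts = [str(obj) for obj in items]
    print(size_before, len(items))   # 3 4
    print("-".join(parts))           # a-b-sneaky-surprise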
This doesn't do much to reduce the size of the code, but it greatly
improves its clarity.
It's also quicker in what's probably the most common case (the argument
iterable is a list). Against it, if the iterable isn't a list or a tuple,
a temp tuple is materialized containing the entire input sequence, and
that's a bigger temp memory burden. Yawn.
This module imports itself explicitly from test (so the "file names"
the current doctest synthesizes for examples don't vary depending on
how test_generators is run).
s.join([t]) is t
for (s, t) in (str, str), (unicode, unicode), and (str, unicode).
For (unicode, str), verify that it's *not* t (the result is promoted
to unicode instead). Also verify that when t is a subclass of str or
unicode, "the right thing" happens.
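Roughly what the checks look like; in Python 3 terms there is only str,
and the single-item identity result is a CPython implementation detail
rather than a language guarantee:

    class MyStr(str):
        """A str subclass, to check that join() doesn't hand back a subclass."""

    t = "hello"
    # CPython's join takes a fast path for a one-element sequence holding an
    # exact str and can return the element itself rather than a copy.
    print("".join([t]) is t)

    sub = MyStr("hello")
    joined = "".join([sub])
    # For a subclass, "the right thing" is an exact str result with the
    # same contents, not the subclass instance.
    print(type(joined) is str, joined == sub)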
- Improvements to interactive debugging support:
- Changed the replacement pdb.set_trace to redirect stdout to the
real stdout *only* during interactive debugging; stdout from code
continues to go to the fake stdout.
- When the interactive debugger gets to the end of an example,
automatically continue.
- Use a replacement linecache.getlines that returns source lines from
doctest examples; this makes the source available to the debugger for
interactive debugging (see the sketch after this list).
- In test_doctest, use a specialized _FakeOutput class instead of a
temporary file to fake stdin for the interactive interpreter.
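A sketch of the linecache.getlines replacement idea; the _example_source
registry here is hypothetical, doctest keeps the equivalent information
internally:

    import linecache

    # Maps the synthesized example "file names" to their source text.
    _example_source = {}

    _real_getlines = linecache.getlines

    def getlines(filename, module_globals=None):
        """Serve doctest example source when we have it, else defer to
        the real linecache.getlines."""
        source = _example_source.get(filename)
        if source is not None:
            return source.splitlines(keepends=True)
        return _real_getlines(filename, module_globals)

    # Installing the replacement is what lets the debugger list source
    # while stepping through an example:
    # linecache.getlines = getlines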
1. u1.join([u2]) is u2
2. Be more careful about C-level int overflow.
Since PySequence_Fast() isn't needed to achieve #1, it's not used -- but
the code could sure be simpler if it were.
and intervening text strings.
- Removed DocTestParser.get_program(): use script_from_examples()
instead (see the example after this list).
- Fixed bug in DocTestParser._INDENT_RE
- Fixed bug in DocTestParser._min_indent
- Moved _want_comment() to the utility function section
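script_from_examples() is a public doctest function; a small usage
example:

    import doctest

    text = """
    Some explanatory prose around an example.

        >>> print(2 + 2)
        4
    """

    # Examples become code; the prose and expected output become comments.
    print(doctest.script_from_examples(text))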
Splitting the expected and actual output into lines created spurious
empty lines at the ends of each. Those matched, but the fancy diffs had
surprising line counts (1 larger than expected), and tests kept having
to slam <BLANKLINE> into the expected output to account for this. Using
the splitlines() string method with keepends=True instead accomplishes
what was intended directly.
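The difference in a nutshell (assuming the old code split on newlines):

    text = "line 1\nline 2\n"

    # Splitting on newlines leaves a spurious empty string at the end ...
    print(text.split("\n"))                # ['line 1', 'line 2', '']

    # ... while splitlines(keepends=True) yields exactly the lines, with
    # their line endings preserved.
    print(text.splitlines(keepends=True))  # ['line 1\n', 'line 2\n']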
While a fancy diff can be confusing in the presence of ellipses, so far
I'm finding (2-0-0) that it's much more of a major aid in narrowing down
the possibilities when an ellipsis-slinging test fails. So we no longer
refuse to do a fancy diff just because of ellipses.
This isn't ideal; it's just better.
OutputChecker.output_difference() now takes an Example rather than an
expected output string. This gives the method access to more
information, such as the indentation of the example, which might be
useful.
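With the current doctest API, where output_difference() receives the
Example, a small illustration (the Example here is made up):

    import doctest

    example = doctest.Example(source="print(2 + 2)\n", want="4\n")
    checker = doctest.OutputChecker()

    # Because the checker receives the whole Example (not just example.want),
    # it can consult fields like example.indent when formatting its report.
    print(checker.output_difference(example, "5\n", doctest.REPORT_NDIFF))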