exceptions that can't be raised any further, because (for instance) they
occur in __del__ methods. The coroutine tests in test_generators were
triggering this leak. Remove the leakers' testcase, and add a simpler
testcase that explicitly tests this leak to test_generators.
test_generators now no longer leaks at all, on my machine. This fix may also
solve other leaks, but my full refleak-hunting run is still busy, so who
knows?
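A sketch of the kind of explicit test this describes (hypothetical code; the real test lives in test_generators), using a class whose __del__ raises, so the exception can only be reported as unraisable (Python 2-era spelling):

    import sys, StringIO

    class Leaker:
        def __del__(self):
            # This exception can't propagate any further; CPython reports
            # it via PyErr_WriteUnraisable(), the code path that leaked.
            raise RuntimeError

    old_stderr = sys.stderr
    try:
        sys.stderr = StringIO.StringIO()  # swallow the "... ignored" message
        Leaker()                          # created and destroyed at once
    finally:
        sys.stderr = old_stderr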
not be tracked by GC. This fixes 254 of test_generators' refleaks on my
machine, but I'm sure something else will make them come back :>
Not adding a separate test for this kind of cycle, since the existing
fib/m235 already test them in more extensive ways than any 'minimal' test
has been able to manage.
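For illustration, one way a generator winds up in this kind of cycle (a minimal sketch, not the fib/m235 tests themselves; needs generator.send(), i.e. 2.5):

    def selfref():
        me = yield          # will be bound to the generator itself
        while True:
            yield me

    g = selfref()
    g.next()                # advance to the first yield
    g.send(g)               # the frame now holds a reference back to g
    del g                   # only cycle GC can reclaim the pair now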
examples no longer require any explicit closing to avoid
leaking.
That the tee-based examples still do is (I think) a
mystery. Part of the mystery is that gc.garbage remains
empty: if it were the case that some generator in a trash
cycle said it needed finalization, suppressing collection
of that cycle, that generator _would_ show up in gc.garbage.
So this is acting more like, e.g., some tp_traverse slot
isn't visiting all the pointers it should (in which case
the skipped pointer(s) would act like an external root,
silently suppressing collection of everything reachable
from it (or them)).
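The check behind that reasoning is simple (a sketch; the empty list is exactly the puzzle):

    import gc

    gc.collect()
    # Objects in unreachable cycles whose collection is suppressed by a
    # finalizer are supposed to be parked in this list.  If it stays
    # empty, the cycles aren't even being *found*, which points at a
    # tp_traverse problem rather than a finalizer problem.
    print gc.garbage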
problems: first, PyGen_NeedsFinalizing() had an off-by-one bug that
prevented it from ever saying a generator didn't need finalizing, and
second, frame objects cleared themselves in a way that caused their
owning generator to think they were still executable, causing a double
deallocation of objects on the value stack if there was still a loop
on the block stack. This revision also removes some close() operations
from test_generators that are no longer necessary, now that the cycle
collector handles these cases.
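A sketch of the behavior the fix restores (hypothetical snippet; the real coverage is in test_generators): a generator suspended outside any try block should now be collectable without an explicit close():

    import gc

    def g():
        yield 1             # no try/finally pending at this yield

    it = g()
    it.next()               # suspend the generator mid-flight
    del it
    # With the off-by-one fixed, PyGen_NeedsFinalizing() can answer "no"
    # here, so a cycle containing such a generator is simply collected.
    gc.collect()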
we are about to leave behind. An example of the cause of this leak can be
found in the leakers directory, in case we ever want to tackle the
underlying problem.
itertools.tee->instance->attribute->itertools.tee and
itertools.tee->teedataobject->itertools.tee cycles, which can be found now
that itertools.tee and its teedataobject participate in GC, remain findable
and cleanable. The test won't fail when they aren't, but at least the
frequent hunt-refleaks runs would spot the rise in refleaks.
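A sketch of the first kind of cycle (modeled on the old leakers test; the names are illustrative):

    import itertools

    class Holder:
        def __iter__(self):
            return self
        def next(self):
            raise StopIteration

    h = Holder()
    a, b = itertools.tee(h)
    h.attr = a   # tee -> teedataobject -> h -> tee: only GC can break this
    del h, a, b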
According to Jeremy, the comment only made sense when
the yield was disallowed. Now it's testing that the yield
is allowed, so the yield is not a problem and the outer finally is irrelevant.
This change implements a new bytecode compiler, based on a
transformation of the parse tree to an abstract syntax defined in
Parser/Python.asdl.
The compiler implementation is not complete, but it is in stable
enough shape to run the entire test suite excepting two disabled
tests.
this module imports itself explicitly from test (so the "file names"
doctest currently synthesizes for examples don't vary depending on how
test_generators is run).
imports e.g. test_support must do so using an absolute package name
such as "import test.test_support" or "from test import test_support".
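For example (the first spelling is the one that no longer works):

    # import test_support          # broken: relative import
    import test.test_support       # fine
    from test import test_support  # also fine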
This also updates the README in Lib/test, and gets rid of the
duplicate data directory in Lib/test/data (replaced by
Lib/email/test/data).
Now Tim and Jack can have at it. :)
This was a simple typo. Strange that the compiler didn't catch it!
Instead of WHY_CONTINUE, two tests used CONTINUE_LOOP, which isn't a
why_code at all, but an opcode; but even though 'why' is declared as
an enum, comparing it to an int is apparently not even worth a
warning -- not in gcc, and not in VC++. :-(
Will fix in 2.2 too.
PEP 285. Everything described in the PEP is here, and there is even
some documentation. I had to fix 12 unit tests; all but one of these
were printing Boolean outcomes that changed from 0/1 to False/True.
(The exception is test_unicode.py, which did a type(x) == type(y)
style comparison. I could've fixed that with a single line using
issubtype(x, type(y)), but instead chose to be explicit about those
places where a bool is expected.)
Still to do: perhaps more documentation; change standard library
modules to return False/True from predicates.
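The visible change in printed outcomes, for instance (a post-change session):

>>> 2 > 1
True
>>> isinstance(True, int)
True
>>> True + True     # bool subclasses int, so existing arithmetic still works
2

Formerly the first comparison printed 1.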
Big Hammer to implement -Qnew as PEP 238 says it should work (a global
option affecting all instances of "/").
pydebug.h, main.c, pythonrun.c: define a private _Py_QnewFlag flag, true
iff -Qnew is passed on the command line. This should go away (as the
comments say) when true division becomes The Rule. This is
deliberately not exposed to runtime inspection or modification: it's
a one-way one-shot switch to pretend you're using Python 3.
ceval.c: when _Py_QnewFlag is set, treat BINARY_DIVIDE as
BINARY_TRUE_DIVIDE.
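Quick demonstration, assuming a build with this change:

    $ python -c "print 1/2"
    0
    $ python -Qnew -c "print 1/2"
    0.5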
test_{descr, generators, zipfile}.py: fiddle so these pass under
-Qnew too. This was just a matter of s!/!//! in test_generators and
test_zipfile. test_descr was trickier, as testbinop() builds in the
assumption that "/" is the same as calling a "__div__" method; put
a temporary hack there to call "__truediv__" instead when the method
name is "__div__" and 1/2 evaluates to 0.5.
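In other words, integer divisions in those tests were rewritten with the explicit floor-division operator, so they mean the same thing under either mode:

    half = n / 2     # classic: floor; under -Qnew: true division -- ambiguous
    half = n // 2    # explicit floor division; identical under both modes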
Three standard tests still fail under -Qnew (on Windows; somebody
please try the Linux tests with -Qnew too! Linux runs a whole bunch
of tests Windows doesn't):
test_augassign
test_class
test_coercion
I can't stay awake longer to stare at this (be my guest). Offhand
cures weren't obvious, nor was it even obvious that cures are possible
without major hackery.
Question: when -Qnew is in effect, should calls to __div__ magically
change into calls to __truediv__? See "major hackery" at tail end of
last paragraph <wink>.
horridly inefficient hack in regrtest's Compare class, but it's about as
clean as can be: regrtest has to set up the Compare instance before
importing a test module, and by the time the module *is* imported it's too
late to change that decision. The good news is that the more tests we
convert to unittest and doctest, the less the inefficiency here matters.
Even now there are few tests with large expected-output files (the new
cost here is a Python-level call per .write() when there's an expected-
output file).
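The rough shape of that hack (a sketch with hypothetical details; the real class lives in regrtest.py):

    class Compare:
        """File-like stand-in for stdout that checks each write() against
        the test's expected-output file."""
        def __init__(self, expected_path):
            self.expected = open(expected_path).read()
            self.pos = 0
        def write(self, data):
            # This Python-level call per .write() is the new cost
            # mentioned above.
            chunk = self.expected[self.pos:self.pos + len(data)]
            if data != chunk:
                raise AssertionError('output mismatch at byte %d' % self.pos)
            self.pos += len(data)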
bag. It's clearly wrong for classic classes, at heart because a classic
class doesn't have a __class__ attribute, and I'm unclear on whether
that's feature or bug. I'll repair this once I find out (in the
meantime, dir() applied to classic classes won't find the base classes,
while dir() applied to a classic-class instance *will* find the base
classes but not *their* base classes).
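A hypothetical session matching that description (illustrative only, not verified against this interim build):

>>> class A: apple = 1
...
>>> class B(A): banana = 2
...
>>> class C(B): cherry = 3
...
>>> 'banana' in dir(C)   # the class: its bases aren't searched
False
>>> c = C()
>>> 'banana' in dir(c)   # the instance: the class's bases are searched...
True
>>> 'apple' in dir(c)    # ...but not *their* bases
False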
Please give the new dir() a try and see whether you love it or hate it.
The new dir([]) behavior is something I could come to love. Here's
something to hate:
>>> class C:
... pass
...
>>> c = C()
>>> dir(c)
['__doc__', '__module__']
>>>
The idea that an instance has a __doc__ attribute is jarring (of course
it's really c.__class__.__doc__ == C.__doc__; likewise for __module__).
OTOH, the code already has too many special cases, and dir(x) doesn't
have a compelling or clear purpose when x isn't a module.