getting Infs, NaNs, or nonsense in 2.1 and before; in yesterday's CVS we
were getting OverflowError; but these functions always make good sense
for positive arguments, no matter how large).
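For illustration (assuming the functions at issue are math.log and math.log10,
as in current CPython, which special-case large integers):

    import math

    # Both accept arbitrarily large positive integers instead of
    # overflowing or returning garbage.
    huge = 10 ** 10000
    print(math.log10(huge))   # approximately 10000.0
    print(math.log(huge))     # approximately 23025.85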
"/" and "//", and doesn't really care what they *mean*, just that both
are tried (and that, whatever they mean, they act similarly for int and
long arguments).
the fiddling is simply because no caller of PyLong_AsDouble ever
checked for failure (so that's fixing old bugs). PyLong_AsDouble is much
faster for big inputs now too, but that's more of a happy consequence
than a design goal.
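At the Python level, the failure case now surfaces cleanly (a sketch of
current behavior):

    # An integer too large for a C double raises OverflowError instead of
    # silently producing a bogus result.
    try:
        float(1 << 2000)
    except OverflowError as exc:
        print(exc)   # int too large to convert to float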
of PyMapping_Keys because we know we have a real dict. Tolerate that
objects may have an attr named "__dict__" that's not a dict (Py_None
popped up during testing).
test_descr.py, test_dir(): Test the new classic-class behavior; beef up
the new-style class test similarly.
test_pyclbr.py, checkModule(): dir(C) is no longer a synonym for
C.__dict__.keys() when C is a classic class (looks like the same thing
that burned distutils! -- should it be *made* a synonym again? Then it
would be inconsistent with new-style class behavior.).
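In current Python the mismatch looks like this (classic classes are gone,
but the dir()-vs-__dict__ gap remains):

    # dir(C) also walks the bases, so it's not interchangeable with
    # C.__dict__.keys().
    class Base:
        def inherited(self): pass

    class C(Base):
        def own(self): pass

    print('inherited' in C.__dict__)   # False
    print('inherited' in dir(C))       # True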
bag. It's clearly wrong for classic classes, at heart because a classic
class doesn't have a __class__ attribute, and I'm unclear on whether
that's a feature or a bug. I'll repair this once I find out (in the
meantime, dir() applied to classic classes won't find the base classes,
while dir() applied to a classic-class instance *will* find the base
classes but not *their* base classes).
Please give the new dir() a try and see whether you love it or hate it.
The new dir([]) behavior is something I could come to love. Here's
something to hate:
>>> class C:
... pass
...
>>> c = C()
>>> dir(c)
['__doc__', '__module__']
>>>
The idea that an instance has a __doc__ attribute is jarring (of course
it's really c.__class__.__doc__ == C.__doc__; likewise for __module__).
OTOH, the code already has too many special cases, and dir(x) doesn't
have a compelling or clear purpose when x isn't a module.
mapping object", in the same sense dict.update(x) requires of x (that x
has a keys() method and supports __getitem__).
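For concreteness, a minimal "mapping object" in this sense (current
spelling; the type was later renamed dict):

    # Anything with a keys() method and __getitem__ can seed a dictionary.
    class PairMap:
        def keys(self):
            return ['a', 'b']
        def __getitem__(self, key):
            return key.upper()

    print(dict(PairMap()))   # {'a': 'A', 'b': 'B'}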
Questionable: The other type constructors accept a keyword argument, so I
did that here too (e.g., dictionary(mapping={1:2}) works). But type_call
doesn't pass the keyword args to the tp_new slot (it passes NULL), it only
passes them to the tp_init slot, so getting at them required adding a
tp_init slot to dicts. Looks like that makes the normal case (i.e., no
args at all) a little slower (the time it takes to call dict.tp_init and
have it figure out there's nothing to do).
Fix list comp code generation -- emit GET_ITER instead of Const(0)
after the list.
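To see the opcode in context (the exact surrounding bytecode varies by
version):

    import dis

    # GET_ITER is emitted before iterating over the source of the listcomp.
    dis.dis(compile("[x for x in y]", "<example>", "eval"))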
Add CO_GENERATOR flag to generators.
Get CO_xxx flags from the new module
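The flag is visible from Python; current versions expose the constant via
inspect:

    import inspect

    def gen():
        yield 1

    # CO_GENERATOR is set in the generator function's co_flags.
    print(bool(gen.__code__.co_flags & inspect.CO_GENERATOR))   # True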
try/except or try/finally.
Previous versions only tracked SETUP_LOOP blocks and ignored the
exception part. This meant that it allowed continue inside a
try/except but generated buggy code. Now it does the right thing.
As the doc string for _lookupName() explains:
This routine uses a list instead of a dictionary, because a
dictionary can't store two different keys if the keys have the
same value but different types, e.g. 2 and 2L. The compiler
must treat these two separately, so it does an explicit type
comparison before comparing the values.
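A quick demonstration of the problem (using 2 and 2.0, since 2 and 2L are
the same thing in current Python):

    # Equal-but-differently-typed keys collapse into a single dict slot.
    consts = {}
    consts[2] = 0
    consts[2.0] = 1
    print(consts)                           # {2: 1}
    print(2 == 2.0, hash(2) == hash(2.0))   # True True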
Avoid if/elif/elif/else tests where the final else is supposed to
handle exactly one case instead of all other cases. When the list of
operators is extended, the catchall else treats all new operators as
the last operator in the set of tests. Instead, raise an exception if
an unexpected operator occurs.
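A sketch of the pattern (the operators here are only illustrative):

    # Name every operator you handle and raise on anything else, instead of
    # letting a catch-all else quietly treat a new operator like the last one.
    def fold(op, left, right):
        if op == '+':
            return left + right
        elif op == '-':
            return left - right
        elif op == '//':
            return left // right
        else:
            raise ValueError('unexpected operator: %r' % (op,))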
Use a dictionary instead of a list to map objects to their offsets in
a const/name tuple of a code object.
XXX The conversion is perhaps incomplete, in that we shouldn't have to
do the list2dict to start.
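One way to key such a dict without losing the type distinction (whether the
checkin keys it exactly this way is an assumption):

    # Key on (value, type) so 2, 2.0 and True each keep their own offset.
    constants = [2, 2.0, True, 'spam']
    offsets = {(c, type(c)): i for i, c in enumerate(constants)}
    print(offsets[(2, int)], offsets[(2.0, float)])   # 0 1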
Add support for floor division (// and //=)
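For reference:

    # Floor division rounds toward negative infinity; //= is its augmented form.
    print(7 // 2)    # 3
    print(-7 // 2)   # -4
    x = 7.0
    x //= 2
    print(x)         # 3.0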
The implementation of getChildren() and getChildNodes() is intended to
be faster, because it avoids calling flatten() on every return value.
But it's not clear that it is a lot faster, because constructing a
tuple with just the right values ends up being slow. (Too many
attribute lookups probably.)
The ast.txt file is much more complicated, with funny characters at
the ends of names (*, &, !) to indicate the types of each child node.
The astgen script is also much more complex, making me wonder if it's
still useful.
64-bit INTs on 32-bit boxes (where they become longs). Also exploit that
int(str) and long(str) will ignore a trailing newline (saves creating a
new string at the Python level).
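For example (current behavior):

    # int() strips surrounding whitespace, including a trailing newline, so
    # the raw line can be passed straight through.
    print(int("12345\n"))   # 12345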
pickletester.py: Simulate reading a pickle produced by a 64-bit box.
varnames should list all the local variables (with arguments first).
The XXX_NAME ops typically occur at the module level and assignment
ops should create locals.
(Hard to believe these were never handled before)
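The invariant, as seen from Python:

    # co_varnames lists the arguments first, then the remaining locals.
    def f(a, b):
        total = a + b
        return total

    print(f.__code__.co_varnames)   # ('a', 'b', 'total')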
Add misc.mangle() that mangles based on the rules in compile.c.
XXX Need to test the corner cases
Update CodeGenerator with a class_name attribute bound to None. If a
particular instance is created within a class scope, the instance's
class_name is bound to that class's name.
Add mangle() method to CodeGenerator that mangles only when the
instance has a class_name.
Modify the FunctionCodeGenerator family to handle an extra argument--
the class_name.
Wrap all name ops and attrnames in calls to self.mangle()
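The rule being mirrored is the same one the built-in compiler applies to
private names:

    # Inside class Widget, "__secret" (no trailing underscores) is stored
    # as "_Widget__secret".
    class Widget:
        def __init__(self):
            self.__secret = 42

    w = Widget()
    print('_Widget__secret' in vars(w))   # True
    print(w._Widget__secret)              # 42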
Make nested scopes enabled by default
Add is_constant_false() helper so that compiled code and symbols are
consistent with the builtin compiler's handling of "if 0:"
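What that handling looks like (exact bytecode varies across versions):

    import dis

    # The built-in compiler drops the body of "if 0:" entirely; only the
    # code for "y = 2" remains.
    dis.dis(compile("if 0:\n    x = 1\ny = 2\n", "<example>", "exec"))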
Fix doc string handling to be consistent with recent change that
eliminates the doc string from the Module's node attribute.
Add fix to print handling from Evan & Shane.
Track change to visitor api by making "verbose" explicit.
Comment out setting CO_NESTED flag (it's unnecessary in 2.2).
Evan Simpson's fix. And his explanation:
If you defined two nested functions in a row that refer to the
same non-global variable, the second one will be generated as
though the variable were global.
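The shape of code that triggered the bug (a minimal sketch):

    def outer():
        x = 1
        def first():
            return x
        def second():
            return x   # this reference used to be compiled as a global load
        return first() + second()

    print(outer())   # 2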
The use of com_node() introduces a lot of extra stack frames, enough
to cause a stack overflow compiling test.test_parser with the standard
interpreter recursionlimit. The com_node() is a convenience function
that hides the dispatch details, but comes at a very high cost. It is
more efficient to dispatch directly in the callers. In these cases,
use lookup_node() and call the dispatched node directly.
Also handle yield_stmt in a way that will work with Python 2.1
(suggested by Shane Hathaway)
Remove _preorder as alias for dispatch and call dispatch directly.
Add an extra optional argument to walk()
XXX Also comment out some code that does debugging prints.
Modify rfc822.formatdate() to always generate English names,
regardless of locale. This is required by RFC 1123.
In open_local_file() of urllib and urllib2, use new formatdate() from
rfc822.
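In current Python the same function lives in email.utils; the names are
English no matter what the locale says:

    from email.utils import formatdate

    print(formatdate(0, usegmt=True))   # Thu, 01 Jan 1970 00:00:00 GMT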
recent classobject.c change. When calling an unbound method with no
instance as the first argument, the error message has changed. The
message now contains the class name, but the expected output it's
compared against is too generic, so skip printing it.
lambda (anonymous functions?), function, xrange, buffer, cell (need to
fill in), and (some) descriptor types.
Also added a new test case for testing repr truncation fixes.