NEEDS DOC CHANGES
A few more AttributeErrors turned into TypeErrors, but in test_contains
this time.
The full story for instance objects is pretty much unexplainable, because
instance_contains() tries its own flavor of iteration-based containment
testing first, and PySequence_Contains doesn't get a chance at it unless
instance_contains() blows up. A consequence is that
some_complex_number in some_instance
dies with a TypeError unless some_instance.__class__ defines __iter__ but
does not define __getitem__.
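A minimal C sketch of how that failure surfaces through the abstract API
(the helper name is illustrative; this is not the object.c code itself):

    #include <Python.h>

    /* Containment goes through PySequence_Contains(); a negative return
     * means the check itself failed -- here, the TypeError raised by
     * instance_contains()'s own iteration-based test. */
    static int
    contains_or_error(PyObject *container, PyObject *needle)
    {
        int r = PySequence_Contains(container, needle);
        if (r < 0)
            return -1;   /* exception (e.g. that TypeError) is left set */
        return r;        /* 1 if found, 0 if not */
    }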
The code necessary to accomplish this is simpler and faster if confined to
the object implementations, so we only do this there.
This causes no behavioral changes beyond a (very slight) speedup.
When an object's type didn't define tp_print, there were still cases where the
full "print uses str() which falls back to repr()" semantics weren't
honored. This resulted in
>>> print None
<None object at 0x80bd674>
>>> print type(u'')
<type object at 0x80c0a80>
Fixed this by always using the appropriate PyObject_Repr() or
PyObject_Str() call, rather than trying to emulate what they would do.
Also simplified PyObject_Str() to always fall back on PyObject_Repr()
when tp_str is not defined (rather than making an extra check for
instances with a __str__ method). And got rid of the special case for
strings.
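A minimal sketch of the simplified rule, using the Python 2.x-era API; this
mirrors the behavior rather than the exact PyObject_Str() source:

    #include <Python.h>

    /* str(v) falls back to repr(v) whenever the type has no tp_str slot;
     * no special-casing of instances with __str__, no special case for
     * strings. */
    static PyObject *
    str_with_repr_fallback(PyObject *v)
    {
        if (v == NULL)
            return PyString_FromString("<NULL>");
        if (v->ob_type->tp_str == NULL)
            return PyObject_Repr(v);
        return (*v->ob_type->tp_str)(v);
    }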
Fix a very old flaw in PyObject_Print(). Amazing! When an object
type defines tp_str but not tp_repr, 'print x' to a real file
object would not call the tp_str slot, but rather print a default-style
representation: <foo object at 0x....>. This even though 'print x' to
a file-like object would correctly call the tp_str slot.
and the test for errors, so that an error in the default compare
doesn't go undetected. This fixes SF Bug #132933 (submitted by
effbot) -- list.sort doesn't detect comparison errors.
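A sketch of the error-checking pattern the fix enforces (names are
illustrative; the actual change sits in list.sort's compare helper):

    #include <Python.h>

    /* After a default 3-way compare, consult PyErr_Occurred() so a failing
     * comparison is reported instead of being silently used as a result;
     * -2 serves as an error return distinct from the legal -1/0/1. */
    static int
    checked_3way_compare(PyObject *v, PyObject *w)
    {
        int c = PyObject_Compare(v, w);
        if (c < 0 && PyErr_Occurred())
            return -2;   /* comparison raised; propagate the error */
        return c;
    }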
PyObject_Dump(): New function that is useful when debugging Python's C
runtime. In something like gdb it can be a pain to get some useful
information out of PyObject*'s. This function prints the str() of the
object to stderr, along with the object's refcount and hex address.
PyGC_Dump(): Similar to PyObject_Dump() but knows how to cast from the
garbage collector prefix back to the PyObject* structure.
[See Misc/gdbinit for some useful gdb hooks]
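A tiny usage sketch; the prototype below is assumed from the description
above, not taken from a header, and in gdb the equivalent is an interactive
call:

    #include <Python.h>

    /* Assumed prototype, per the description above. */
    extern void PyObject_Dump(PyObject *op);

    /* From C (or via "call PyObject_Dump(op)" at a gdb breakpoint) this
     * prints str(op), the refcount and the hex address to stderr. */
    static void
    inspect(PyObject *op)
    {
        PyObject_Dump(op);
    }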
none_dealloc(): Rather than SEGV if we accidentally decref None out of
existence, we assign None's and NotImplemented's destructor slot to
this function, which just calls abort().
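A sketch of that guard, following the description above (the actual slot
wiring lives in the None/NotImplemented type objects):

    #include <Python.h>
    #include <stdlib.h>

    /* Installed as the tp_dealloc slot of None and NotImplemented: these
     * objects must never be deallocated, so a stray decref that gets here
     * aborts loudly instead of SEGVing somewhere far away later. */
    static void
    none_dealloc(PyObject *ignore)
    {
        abort();
    }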
Barry, that comment belongs in the code, not in the checkin msg.
The code *used* to do this correctly (as you well know, since you
& I went thru considerable pain to fix this the first time).
However, because the *reason* for the convolution wasn't recorded
in the code as a comment, somebody threw it all away the first
time it got reworked.
c-code-isn't-often-self-explanatory-ly y'rs - tim
default_3way_compare(): Stick the checkin message from 2.110 in a
comment.
Cast the pointers being compared to integer types (i.e. Py_uintptr_t, our
spelling of C9X's uintptr_t).
ANSI specifies that pointer compares other than == and != to
non-related structures are undefined. This quiets an Insure
portability warning.
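The cast in a nutshell (a sketch of ordering two arbitrary object pointers
by address; the function name is illustrative):

    #include <Python.h>

    /* Ordering two unrelated pointers directly is undefined in ANSI C
     * (only == and != are defined), so compare them as Py_uintptr_t
     * integers instead. */
    static int
    compare_addresses(PyObject *v, PyObject *w)
    {
        Py_uintptr_t vv = (Py_uintptr_t)v;
        Py_uintptr_t ww = (Py_uintptr_t)w;
        return (vv < ww) ? -1 : (vv > ww) ? 1 : 0;
    }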
I found where rich comparison of unequal recursive objects gave
unintuitive results. In a discussion with Tim, where we discovered
that our intuition on when a<=b should be true was failing, we decided
to outlaw ordering comparisons on recursive objects. (Once we have
fixed our intuition and designed a matching algorithm that's practical
and reasonable to implement, we can allow such orderings again.)
- Refactored the recursive-object comparison framework; more is now
done in the support routines so less needs to be done in the calling
routines (even at the expense of slowing it down a bit -- this
should normally never be invoked, it's mostly just there to avoid
blowing up the interpreter).
- Changed the framework so that the comparison operator used is also
stored. (The dictionary now stores triples (v, w, op) instead of
pairs (v, w).)
- Changed the nesting limit to a more reasonable small 20; this only
slows down comparisons of very deeply nested objects (unlikely to
occur in practice), while speeding up comparisons of recursive
objects (previously, this would first waste time and space on 500
nested comparisons before it would start detecting recursion).
- Changed rich comparisons for recursive objects to raise a ValueError
exception when recursion is detected for ordering operators (<, <=,
>, >=); a small sketch of the resulting behavior follows the note below.
Unrelated change:
- Moved PyObject_Unicode() to just under PyObject_Str(), where it
belongs. MAL's patch must've inserted it at a random spot between two
functions in the file -- between two helpers for rich comparison...
- Use the compare nesting level and in-progress dictionary properly in
PyObject_RichCompare().
- Change the in-progress code to use static variables instead of
globals (both the nesting level and the key for the thread dict were
globals but have no reason to be globals; the key can even be a
function-static variable in get_inprogress_dict()).
- Rewrote try_rich_to_3way_compare() to benefit from the similarity of
the three cases, making it table-driven.
- In try_rich_to_3way_compare(), test for EQ before LT and GT. This
turns out essential when comparing recursive UserList instances;
with the old code, these would recurse into rich comparison three
times for each nesting level up to NESTING_LIMIT/2, making the total
number of calls in the order of 3**(NESTING_LIMIT/2)!
NOTE: I'm not 100% comfortable with this. It works for the standard
test suite (which compares a few trivial recursive data structures
only), but I'm not sure that the in-progress dictionary is used
properly by the rich comparison code. Jeremy suggested that maybe the
operation should be included in the dict. Currently I presume that
objects in the dict are equal unless proven otherwise, and I set the
outcome for the rich comparison accordingly: true for operators EQ,
LE, GE, and false for the other three. But Jeremy seems to think that
there may be counter-examples where this doesn't do the right thing.
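A small embedding sketch of the new behavior described above; the expected
outcome follows the description (ValueError for ordering comparisons on
recursive objects) and is not part of the patch itself:

    #include <Python.h>
    #include <stdio.h>

    int
    main(void)
    {
        PyObject *a, *b, *lt;

        Py_Initialize();

        a = PyList_New(0);
        b = PyList_New(0);
        PyList_Append(a, a);   /* a = [a] -- recursive */
        PyList_Append(b, b);   /* b = [b] -- recursive */

        /* Once the nesting limit trips and the in-progress dict detects
         * the recursion, Py_LT is expected to fail with ValueError, while
         * Py_EQ would still produce a result. */
        lt = PyObject_RichCompare(a, b, Py_LT);
        if (lt == NULL && PyErr_ExceptionMatches(PyExc_ValueError))
            printf("ordering a recursive object raised ValueError\n");
        PyErr_Clear();
        Py_XDECREF(lt);

        Py_DECREF(a);
        Py_DECREF(b);
        Py_Finalize();
        return 0;
    }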
Added a unicode() builtin that works like str(), except that it always
returns Unicode objects.
A new C API PyObject_Unicode() is also provided.
This closes patch #101664.
Written by Marc-Andre Lemburg. Copyright assigned to Guido van Rossum.
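A one-call usage sketch of the new C API (the wrapper name is illustrative):

    #include <Python.h>

    /* Like PyObject_Str(), but the result is always a Unicode object;
     * returns a new reference, or NULL with an exception set on failure. */
    static PyObject *
    as_unicode(PyObject *obj)
    {
        return PyObject_Unicode(obj);
    }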
Added PyObject_RichCompare() and PyObject_RichCompareBool().
XXX Note: the code that checks for deeply nested rich comparisons is
bogus -- it assumes the two objects are always identical, rather than
using the same logic as PyObject_Compare(). I'll fix that later.
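Usage sketch of the two entry points (op is one of Py_LT, Py_LE, Py_EQ,
Py_NE, Py_GT, Py_GE; the wrapper name is illustrative):

    #include <Python.h>

    /* PyObject_RichCompare() returns the result as a new object reference
     * (or NULL on error); PyObject_RichCompareBool() condenses it to an
     * int: 1 for true, 0 for false, -1 with an exception set on error. */
    static int
    less_than(PyObject *v, PyObject *w)
    {
        return PyObject_RichCompareBool(v, w, Py_LT);
    }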
Add definitions of INT_MAX and LONG_MAX to pyport.h.
Remove includes of limits.h and conditional definitions of INT_MAX
and LONG_MAX elsewhere.
This closes SourceForge patch #101659 and bug #115323.
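Roughly the kind of centralized fallback this puts in pyport.h (the values
shown are the usual 32-bit ones; the actual conditions in pyport.h may
differ):

    #include <limits.h>

    #ifndef INT_MAX
    #define INT_MAX 2147483647
    #endif

    #ifndef LONG_MAX
    #define LONG_MAX 0x7FFFFFFFL
    #endif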
Accept Unicode objects for the attribute name. Unicode objects are converted to
a string using the default encoding before trying the lookup.
Note that previously it was allowed to pass arbitrary objects as the
attribute name when the tp_getattro/setattro slots were defined.
This patch fixes that by applying an explicit string check first:
all uses of these slots expect string objects and do not check the
type, which can result in a core dump. The tp_getattro/setattro slots
are still useful as an optimization for lookups using interned string
objects, though.
This patch fixes bug #113829.
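A sketch of the check described above, assuming the Python 2.x string and
Unicode APIs; the real code applies it before dispatching to the
tp_getattro/tp_setattro slots (the wrapper name is illustrative):

    #include <Python.h>

    /* Attribute names must be plain strings by the time a tp_getattro slot
     * runs; Unicode names are coerced via the default encoding first, and
     * anything else is rejected instead of crashing inside the slot. */
    static PyObject *
    getattr_with_name_check(PyObject *v, PyObject *name)
    {
        PyObject *result;

        if (PyUnicode_Check(name)) {
            name = PyUnicode_AsEncodedString(name, NULL, NULL);
            if (name == NULL)
                return NULL;
        }
        else if (PyString_Check(name)) {
            Py_INCREF(name);
        }
        else {
            PyErr_SetString(PyExc_TypeError,
                            "attribute name must be string");
            return NULL;
        }

        if (v->ob_type->tp_getattro != NULL)
            result = (*v->ob_type->tp_getattro)(v, name);
        else
            result = PyObject_GetAttrString(v, PyString_AS_STRING(name));

        Py_DECREF(name);
        return result;
    }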
Cast the pointers being compared to integer types (i.e. Py_uintptr_t, our
spelling of C9X's uintptr_t). ANSI
specifies that pointer compares other than == and != to non-related
structures are undefined. This quiets an Insure portability warning.
This was a misleading bug -- the true "bug" was that hash(x) gave an error
return when x is an infinity. Fixed that. Added new Py_IS_INFINITY macro to
pyport.h. Rearranged code to reduce growing duplication in hashing of float and
complex numbers, pushing Trent's earlier stab at that to a logical conclusion.
Fixed exceedingly rare bug where hashing of floats could return -1 even if there
wasn't an error (didn't waste time trying to construct a test case, it was simply
obvious from the code that it *could* happen). Improved complex hash so that
hash(complex(x, y)) doesn't systematically equal hash(complex(y, x)) anymore.
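A sketch of the infinity handling and the -1 reservation, assuming the new
Py_IS_INFINITY macro; the constants and the fractional-part handling are
simplified here, and the complex case is only described in the comment:

    #include <Python.h>

    /* An infinite float hashes to a fixed, valid value instead of causing
     * an error return, and -1 is never handed back as a real hash (it is
     * the error indicator).  The real float hash also folds in fractional
     * bits, and the complex hash combines hash(real) and hash(imag)
     * asymmetrically so hash(complex(x, y)) no longer systematically
     * equals hash(complex(y, x)). */
    static long
    float_hash_sketch(double v)
    {
        long x;

        if (Py_IS_INFINITY(v))
            x = v < 0 ? -271828 : 314159;   /* fixed values for -inf/+inf */
        else
            x = (long)v;                    /* simplified; whole part only */
        if (x == -1)
            x = -2;                         /* -1 is reserved for errors */
        return x;
    }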
This patch fixes typos in comments, docstrings or error messages. I fixed two minor things in
test_winreg.py ("didn't" -> "Didn't" and "Didnt" -> "Didn't").
There is a minor style issue involved: Guido seems to have preferred British
spellings (behaviour, honour) in a couple of places. This patch changes those to
American spellings, which are the more prominent style in the source. I prefer the
British spellings myself, so if those are preferred, I'd be happy to supply a patch ;)