don't use getattr, but only look in the dict of the type and base types.
This prevents picking up all sorts of weird stuff, including things defined
by the metaclass when the object is a class (type).
For this purpose, a helper function lookup_method() was added. One or two
other places also use this.
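For illustration, a dict-only lookup of this kind might look roughly like the sketch below (the name and details are illustrative, not the exact helper):

    /* Sketch: find a name in a type and its bases only, ignoring the
       instance dict and anything the metaclass would contribute. */
    static PyObject *
    lookup_in_type(PyTypeObject *type, char *name)
    {
        int i, n;
        PyObject *mro = type->tp_mro;   /* tuple: the type, then its bases */

        if (mro == NULL)
            return NULL;
        n = PyTuple_GET_SIZE(mro);
        for (i = 0; i < n; i++) {
            PyTypeObject *base = (PyTypeObject *)PyTuple_GET_ITEM(mro, i);
            PyObject *res = PyDict_GetItemString(base->tp_dict, name);
            if (res != NULL)
                return res;             /* borrowed reference */
        }
        return NULL;
    }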
PyString_FromFormatV(): In the final resize at the end, we can use
PyString_AS_STRING() since we know the object is a string and can
avoid the typechecking.
PyString_FromFormat(): GS sez: "For safety/propriety, you should call
va_end() on the vargs variable."
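Roughly, the varargs wrapper then follows the standard C idiom (a sketch, not the exact source):

    #include <stdarg.h>

    PyObject *
    PyString_FromFormat(const char *format, ...)
    {
        PyObject *ret;
        va_list vargs;

        va_start(vargs, format);
        ret = PyString_FromFormatV(format, vargs);
        va_end(vargs);          /* the cleanup suggested above */
        return ret;
    }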
at least in the first two characters. %p is ill-defined, and people will
forever commit bad tests otherwise ("bad" in the sense that they fall
over (at least on Windows) for lack of a leading '0x'; 5 of the 7 tests
in test_repr.py failed on Windows for that reason this time around).
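One way to normalize the output, sketched under the assumption that sprintf's %p result sits in buf and the buffer has two spare bytes:

    #include <stdio.h>
    #include <string.h>

    /* Sketch: force a leading "0x" onto %p output, since C leaves the
       format of %p up to the platform. */
    static void
    format_pointer(char *buf, void *p)
    {
        sprintf(buf, "%p", p);
        if (buf[1] == 'X')
            buf[1] = 'x';                        /* "0X..." -> "0x..." */
        else if (buf[1] != 'x') {                /* e.g. "0012FF7C" on Windows */
            memmove(buf + 2, buf, strlen(buf) + 1);
            buf[0] = '0';
            buf[1] = 'x';
        }
    }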
an inappropriate first argument. Now that there are more ways for
this to fail, make sure to report the name of the class of the
expected instance and of the actual instance.
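An error message along these lines (illustrative names, not the exact text) shows the idea:

    /* Sketch: name both the expected and the actual class on failure. */
    static PyObject *
    bad_first_arg(const char *methname, PyTypeObject *expected, PyObject *got)
    {
        PyErr_Format(PyExc_TypeError,
                     "%s() requires a '%s' object but received a '%s'",
                     methname, expected->tp_name, got->ob_type->tp_name);
        return NULL;
    }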
PyErr_Format() these new C API methods can be used instead of
sprintf()'s into hardcoded char* buffers. This allows us to fix
many situations where long package, module, or class names get
truncated in reprs.
PyString_FromFormat() is the varargs variety.
PyString_FromFormatV() is the va_list variety.
Original PyErr_Format() code was modified to allow %p and %ld
expansions.
Many reprs were converted to this, checkins coming soon. Not
changed: complex_repr(), float_repr(), float_print(), float_str(),
int_repr(). There may be other candidates not yet converted.
Closes patch #454743.
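The typical conversion looks like this (a sketch with a made-up type; the old version truncated anything past the fixed buffer):

    /* Before: sprintf into a fixed buffer, truncating long names.
     *     char buf[140];
     *     sprintf(buf, "<%.80s object at %p>", self->ob_type->tp_name, self);
     *     return PyString_FromString(buf);
     * After: no fixed-size buffer, so nothing gets truncated. */
    static PyObject *
    foo_repr(PyObject *self)
    {
        return PyString_FromFormat("<%s object at %p>",
                                   self->ob_type->tp_name, self);
    }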
super(type) -> unbound super object
super(type, obj) -> bound super object; requires isinstance(obj, type)
Typical use to call a cooperative superclass method:
class C(B):
    def meth(self, arg):
        super(C, self).meth(arg)
the delete function. (Question: should the attribute name also be
recorded in the getset object? That makes the protocol more work, but
may give us better error messages.)
cases.
powu: Deleted.
This started with a nonsensical error msg:
>>> x = -1.
>>> import sys
>>> x**(-sys.maxint-1L)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
ValueError: negative number cannot be raised to a fractional power
>>>
The special-casing in float_pow was simply wrong in this case (there's
not even anything peculiar about these inputs), and I don't see any point
to it in *any* case: a decent libm pow should have worst-case error under
1 ULP, so in particular should deliver the exact result whenever the exact
result is representable (else its error is at least 1 ULP). Thus our
special fiddling for integral values "shouldn't" buy anything in accuracy,
and, to the contrary, repeated multiplication is less accurate than a
decent pow when the true result isn't exactly representable. So just
letting pow() do its job here (we may not be able to trust libm x-platform
in exceptional cases, but these are normal cases).
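A toy demonstration of the accuracy point (not from the source; any C compiler with a decent libm will do):

    #include <stdio.h>
    #include <math.h>

    /* Repeated multiplication can round on every step; a good pow()
       rounds once. Compare the two for a large integral exponent. */
    int main(void)
    {
        double x = 1.0000001, byloop = 1.0, bypow;
        int i;

        for (i = 0; i < 1000000; i++)
            byloop *= x;                  /* up to 10**6 rounding errors */
        bypow = pow(x, 1000000.0);        /* ideally within 1 ULP */
        printf("loop: %.17g\npow:  %.17g\n", byloop, bypow);
        return 0;
    }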
interpretation of negative indices, since neither the sq_*item slots
nor the slot_ wrappers do this. (Slices are a different story, there
the size wrapping is done too early.)
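For reference, the usual wrapping pattern, done just before dispatching to the slot (a sketch, with 2.2-era int indices):

    /* Sketch: interpret a negative index relative to the length,
       then call the sq_item slot directly. */
    static PyObject *
    item_with_wrapping(PyObject *o, int i)
    {
        PySequenceMethods *m = o->ob_type->tp_as_sequence;

        if (i < 0) {
            int n = PyObject_Length(o);
            if (n < 0)
                return NULL;              /* len() failed */
            i += n;
        }
        return m->sq_item(o, i);
    }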
Classes that don't use __slots__ have a __weakref__ member added in
the same way as __dict__ is added (i.e. only if the base didn't
already have one). Classes using __slots__ can enable weak
referenceability by adding '__weakref__' to the __slots__ list.
Renamed the __weaklistoffset__ class member to __weakrefoffset__ --
it's not always a list, it seems. (Is tp_weaklistoffset a historical
misnomer, or do I misunderstand this?)
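For comparison, a C extension type opts into weak referenceability by reserving a slot and setting tp_weaklistoffset; a minimal sketch with a hypothetical Foo type:

    #include <Python.h>
    #include <stddef.h>                   /* offsetof */

    typedef struct {
        PyObject_HEAD
        PyObject *weakreflist;            /* initialize to NULL */
    } FooObject;

    /* In the static type definition:
     *     FooType.tp_weaklistoffset = offsetof(FooObject, weakreflist);
     * and the deallocator must clear outstanding references first:
     *     if (self->weakreflist != NULL)
     *         PyObject_ClearWeakRefs((PyObject *)self);
     */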
- Do not compile unicodeobject, unicodectype, and unicodedata if Unicode is disabled
- check for Py_USING_UNICODE in all places that use Unicode functions (the guard pattern is sketched after this list)
- disables unicode literals and the Unicode-related builtin functions
- add the types.StringTypes list
- remove Unicode literals from most tests.
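The guard pattern that the Py_USING_UNICODE check implies, sketched:

    #include <Python.h>

    /* Sketch: Unicode-touching code compiles away in a no-Unicode build. */
    static int
    is_text(PyObject *op)
    {
        if (PyString_Check(op))
            return 1;
    #ifdef Py_USING_UNICODE
        if (PyUnicode_Check(op))
            return 1;
    #endif
        return 0;
    }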
the class dict). Anything but a nonnegative int in either place is
*ignored* (before, a non-Boolean was an error). The default is still
static -- in a comparative test, Jeremy's Tools/compiler package ran
twice as slow (compiling itself) using dynamic as the default. (The
static version, which requires a few tweaks to avoid modifying class
variables, runs at about the same speed as the classic version.)
slot_tp_descr_get(): this also needed fallback behavior.
slot_tp_getattro(): remove a debug fprintf() call.
the metatype passed in as an argument. This prevents infinite
recursion when a metatype written in Python calls type.__new__() as a
"super" call.
Also tweaked some comments.
calculate it on the fly. This way even modules with long package
names get an accurate repr instead of a truncated one. The extra
malloc/free cost shouldn't be a problem in a repr function.
Closes SF bug #437984
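Sketched, with the module repr built dynamically (error handling simplified):

    /* Sketch: no fixed buffer, so a long package name survives intact. */
    static PyObject *
    module_repr_sketch(PyObject *m)
    {
        char *name = PyModule_GetName(m);
        char *filename = PyModule_GetFilename(m);

        if (name == NULL || filename == NULL)
            return NULL;
        return PyString_FromFormat("<module '%s' from '%s'>", name, filename);
    }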
- type_module(), type_name(): if tp_name contains one or more periods,
the part before the last period is __module__, the part after that
is __name__. Otherwise, for non-heap types, __module__ is
"__builtin__". For heap types, __module__ is looked up in
tp_defined. (The split is sketched after this list.)
- type_new(): heap types have their __module__ set from
globals().__name__; a pre-existing __module__ in their dict is not
overridden. This is not inherited.
- type_repr(): if __module__ exists and is not "__builtin__", it is
included in the string representation (just as it already is for
classes). For example <type '__main__.C'>.
- descrobject.c:descr_check(): only believe None means the same as
NULL if the type given is None's type.
- typeobject.c:wrap_descr_get(): don't "conveniently" default an
absent type to the type of the object argument. Let the called
function figure it out.
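The tp_name split mentioned above, sketched (illustrative helper, not the exact code):

    #include <string.h>

    /* Sketch: everything before the last period is __module__,
       the rest is __name__. */
    static void
    split_tp_name(const char *tp_name,
                  const char **module, int *module_len, const char **name)
    {
        const char *dot = strrchr(tp_name, '.');

        if (dot != NULL) {
            *module = tp_name;
            *module_len = (int)(dot - tp_name);
            *name = dot + 1;
        }
        else {
            *module = NULL;               /* non-heap: "__builtin__" */
            *module_len = 0;
            *name = tp_name;
        }
    }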
types -- currently Type, List, None and NotImplemented. To be called
from Py_Initialize() instead of accumulating calls there.
Also rename type(None) to NoneType and type(NotImplemented) to
NotImplementedType -- naming the type identical to the object was
confusing.
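A minimal sketch of such a one-shot initializer (the function name here is illustrative):

    /* Sketch: ready the static types in one place instead of scattering
       PyType_Ready() calls through Py_Initialize(). */
    static void
    ready_builtin_types(void)
    {
        if (PyType_Ready(&PyType_Type) < 0)
            Py_FatalError("Can't initialize 'type'");
        if (PyType_Ready(&PyList_Type) < 0)
            Py_FatalError("Can't initialize 'list'");
        /* ...and likewise for the NoneType and NotImplementedType objects. */
    }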
returns that. (This fix is also by MvL; checking it in because I want
to make more changes here. I'm still not 100% satisfied -- see
comments attached to the patch.)
operators for which a default implementation exists now work, both in
dynamic classes and in static classes, overridden or not. This
affects __repr__, __str__, __hash__, __contains__, __nonzero__,
__cmp__, and the rich comparisons (__lt__ etc.). For dynamic
classes, this meant copying a lot of code from classobject! (XXX
There are still some holes, because the comparison code in object.c
uses PyInstance_Check(), meaning new-style classes don't get the
same dispensation. This needs more thinking.)
- Add object.__hash__, object.__repr__, object.__str__. The __str__
dispatcher now calls the __repr__ dispatcher, as it should.
- For static classes, the tp_compare, tp_richcompare and tp_hash slots
are now inherited together, or not at all. (XXX I fear there are
still some situations where you can inherit __hash__ when you
shouldn't, but mostly it's OK now, and I think there's no way we can
get that 100% right.)
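Sketch of the together-or-not-at-all rule when inheriting slots from a base:

    /* Sketch: copy the three comparison-related slots as a unit; if the
       subtype defines any one of them, it gets none of the others. */
    static void
    inherit_comparison(PyTypeObject *type, PyTypeObject *base)
    {
        if (type->tp_compare == NULL &&
            type->tp_richcompare == NULL &&
            type->tp_hash == NULL) {
            type->tp_compare = base->tp_compare;
            type->tp_richcompare = base->tp_richcompare;
            type->tp_hash = base->tp_hash;
        }
    }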
setting and deleting a function's __dict__ attribute. Deleting
it, or setting it to a non-dictionary, results in a TypeError. Note
that getting it the first time magically initializes it to an
empty dict so that func.__dict__ will always appear to be a
dictionary (never None).
Closes SF bug #446645.
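The lazy initialization, sketched (simplified, not the exact code):

    /* Sketch: the first read of func.__dict__ creates an empty dict,
       so the attribute never appears as None. */
    static PyObject *
    func_get_dict(PyFunctionObject *op)
    {
        if (op->func_dict == NULL) {
            op->func_dict = PyDict_New();
            if (op->func_dict == NULL)
                return NULL;
        }
        Py_INCREF(op->func_dict);
        return op->func_dict;
    }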
The descr changes moved the dispatch for calling objects from
call_object() in ceval.c to PyObject_Call() in abstract.c.
call_object() and the many functions it used in ceval.c were no longer
used, but were not removed.
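The central dispatch that replaced it looks roughly like this (a sketch of the tp_call funnel, not the exact source):

    /* Sketch: every call now goes through the tp_call slot. */
    static PyObject *
    call_sketch(PyObject *func, PyObject *args, PyObject *kwds)
    {
        ternaryfunc call = func->ob_type->tp_call;

        if (call == NULL) {
            PyErr_Format(PyExc_TypeError, "'%s' object is not callable",
                         func->ob_type->tp_name);
            return NULL;
        }
        return call(func, args, kwds);
    }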
Rename meth_call() as PyCFunction_Call() so that it can be called by
the CALL_FUNCTION opcode in ceval.c.
Also, fix error message that referred to PyEval_EvalCodeEx() by its
old name eval_code2(). (I'll probably refer to it by its old name,
too.)