object.c, PyObject_Str: Don't try to optimize anything except exact
string objects here; in particular, let str subclasses go through tp_str,
same as non-str objects. This allows overrides of tp_str to take
effect.
stringobject.c:
+ string_print (str's tp_print): If the argument isn't an exact string
object, get one from PyObject_Str.
+ string_str (str's tp_str): Make a genuine-string copy of the object if
it's of a proper str subclass type. str() applied to a str subclass
that doesn't override __str__ ends up here.
test_descr.py: New str_of_str_subclass() test.
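By way of illustration (a sketch; the class names are invented, not the
actual test's):

    class S(str):
        def __str__(self):
            return "overridden"

    class T(str):
        pass

    print str(S("abc"))        # overridden -- tp_str override now honored
    print type(str(T("abc")))  # <type 'str'> -- a genuine str, not a T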
efficient:
- recurse down subclasses only once rather than for each affected
slot;
- short-circuit recursing down subclasses when a subclass has its own
definition of the name that caused the update_slot() calls in the
first place;
- inline collect_ptrs().
using the same algorithm as the slot updates. The slotdefs array is
now sorted by slot offset and has an interned string object corresponding
to the name added to each item. More can be done but I need to commit
this first as a working intermediate stage.
The problem is that if fread() returns a short count, we attempt
another fread() the next time through the loop, and apparently glibc
clears or ignores the eof condition so the second fread() requires
another ^D to make it see the eof condition.
According to the man page (and the C std, I hope) fread() can only
return a short count on error or eof. I'm using that in the band-aid
solution to avoid calling fread() a second time after a short read.
Note that xreadlines() still has this problem: it calls
readlines(sizehint) until it gets a zero-length return. Since
xreadlines() is mostly used for reading real files, I won't worry
about this until we get a bug report.
inherit_slots(): tp_as_buffer was getting inherited as if it were a
method pointer, rather than a pointer to a vector of method pointers. As
a result, inheriting from a type that implemented buffer methods was
ineffective, leaving all the tp_as_buffer slots NULL in the subclass.
corresponding to a dispatch slot (e.g. __getitem__ or __add__) is set,
calculate the proper dispatch slot and propagate the change to all
subclasses. Because of multiple inheritance, there's no easy way to
avoid always recursing down the tree of subclasses. Who cares?
(There's more to do, but this works. There's also a test for this now.)
lseek(fp, 0L, SEEK_CUR) can make a file descriptor unusable.
This workaround is expected to last only a few weeks (until GUSI
is fixed), but without it test_email fails.
the problem that slots weren't inherited properly. override_slots()
no longer exists; in its place comes fixup_slot_dispatchers() which
does more and different work and is table-based. (Eventually I want
this table also to replace all the little tab_foo tables.)
Also add a wrapper for __delslice__; this required a change in
test_descrtut.py.
without the Py_TPFLAGS_CHECKTYPES flag) in the wrappers. This
required a few changes in test_descr.py to cope with the fact that the
complex type has __int__, __long__ and __float__ methods that always
raise an exception.
is a list of weak references to types (new-style classes). Make this
accessible to Python as the function __subclasses__ which returns a
list of types -- we don't want Python programmers to be able to
manipulate the raw list.
In order to make this possible, I also had to add weak reference
support to type objects.
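For example (a minimal sketch; class names invented):

    class A(object):
        pass

    class B(A):
        pass

    class C(A):
        pass

    print A.__subclasses__()   # a fresh list containing B and C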
This will eventually be used together with a trap on attribute
assignment for dynamic classes for a major speed-up without losing the
dynamic properties of types: when a __foo__ method is added to a
class, the class and all its subclasses will get an appropriate tp_foo
slot function.
This simplifies the rounding in _PyObject_VAR_SIZE, allows restoring the
pre-rounding calling sequence, and allows some nice little simplifications
in its callers. I'm still making it return a size_t, though.
As Guido suggested, this makes the new subclassing code substantially
simpler. But the mechanics of doing it w/ C macro semantics are a mess,
and _PyObject_VAR_SIZE has a new calling sequence now.
Question: The PyObject_NEW_VAR macro appears to be part of the public API.
Regardless of what it expands to, the notion that it has to round up the
memory it allocates is new, and extensions containing the old
_PyObject_VAR_SIZE macro expansion (which was embedded in the
PyObject_NEW_VAR expansion) won't do this rounding. But the rounding
isn't actually *needed* except for new-style instances with dict pointers
after a variable-length blob of embedded data. So my guess is that we do
not need to bump the API version for this (as the rounding isn't needed
for anything an extension can do unless it's recompiled anyway). What's
your guess?
pad memory to properly align the __dict__ pointer in all cases.
gcmodule.c/objimpl.h, _PyObject_GC_Malloc:
+ Added a "padding" argument so that this flavor of malloc can allocate
enough bytes for alignment padding (it can't know this is needed, but
its callers do).
typeobject.c, PyType_GenericAlloc:
+ Allocated enough bytes to align the __dict__ pointer.
+ Sped and simplified the round-up-to-PTRSIZE logic.
+ Added blank lines so I could parse the if/else blocks <0.7 wink>.
+ Use the _PyObject_VAR_SIZE macro to compute object size.
+ Break the computation into lines convenient for debugger inspection.
+ Speed the round-up-to-pointer-size computation.
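The round-up logic is the standard mask trick; here's a sketch in Python,
for illustration only (the pointer size is an assumption):

    PTRSIZE = 8   # assumed pointer size, illustration only

    def round_up(nbytes):
        # Round nbytes up to a multiple of PTRSIZE, so that a __dict__
        # pointer stored after variable-length data stays aligned.
        return (nbytes + PTRSIZE - 1) & ~(PTRSIZE - 1)

    print round_up(1), round_up(8), round_up(9)   # 8 8 16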
many types were subclassable but had a xxx_dealloc function that
called PyObject_DEL(self) directly instead of deferring to
self->ob_type->tp_free(self). It is permissible to set tp_free in the
type object directly to _PyObject_Del, for non-GC types, or to
_PyObject_GC_Del, for GC types. Still, PyObject_DEL was a tad faster,
so I'm fearing that our pystone rating is going down again. I'm not
sure if doing something like
void xxx_dealloc(PyObject *self)
{
        if (PyXxxCheckExact(self))
                PyObject_DEL(self);     /* fast path for the base type */
        else
                self->ob_type->tp_free(self);   /* honor the subclass's tp_free */
}
is any faster than always calling the else branch, so I haven't
attempted that -- however those types whose own dealloc is fancier
(int, float, unicode) do use this pattern.
For a dynamically constructed type object, fill in the tp_doc slot with
a copy of the argument dict's "__doc__" value, provided the latter exists
and is a string.
NOTE: I don't know what to do if it's a Unicode string, so in that case
tp_doc is left NULL (which shows up as Py_None if you do Class.__doc__).
Note that tp_doc holds a char*, not a general PyObject*.
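For example (a sketch; the class names are invented):

    C = type("C", (object,), {"__doc__": "A documented class."})
    print C.__doc__    # A documented class.

    D = type("D", (object,), {})
    print D.__doc__    # None -- tp_doc was left NULL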
test for getattribute==NULL was bogus because it always found
object.__getattribute__. Pick it apart using the trick we learned
from slot_sq_item, and if it's just a wrapper around
PyObject_GenericGetAttr, zap it. Also added a long XXX comment
explaining the consequences.
test dramatically:
class T(tuple): __dynamic__ = 1
t = T(range(1000))
for i in range(1000): tt = tuple(t)
The speedup was about 5x compared to the previous state of CVS (1.7
vs. 8.8, in arbitrary time units). But it's still more than twice as
slow as the same test with __dynamic__ = 0 (0.8).
I'm not sure that I really want to go through the trouble of this kind
of speedup for every slot. Even doing it just for the most popular
slots will be a major effort (the new slot_sq_item is 40+ lines, while
the old one was one line with a powerful macro -- unfortunately the
speedup comes from expanding the macro and doing things in a way
specific to the slot signature).
An alternative that I'm currently considering is sketched in PLAN.txt:
trap setattr on type objects. But this will require keeping track of
all derived types using weak references.
pointing to a static variable to hold the object form of the string
was never used, causing endless calls to PyString_InternFromString().
One particular test (with lots of __getitem__ calls) became a third
faster with this!
Unknown whether this fixes it.
- stringobject.c, PyString_FromFormatV: don't assume that va_list is of
a type that can be copied via an initializer.
- errors.c, PyErr_Format: add a va_end() to balance the va_start().
instances).
Also added GC support to various auxiliary types: super, property,
descriptors, wrappers, dictproxy. (Only type objects have a tp_clear
field; the other types don't need one.)
One change was necessary to the GC infrastructure. We have statically
allocated type objects that don't have a GC header (and can't easily
be given one) and heap-allocated type objects that do have a GC
header. Giving these different metatypes would be really ugly: I
tried, and I had to modify pickle.py, cPickle.c, copy.py,
invent a new name for the new metatype and make it a built-in, change
affected tests... In short, a mess. So instead, we add a new type
slot tp_is_gc, which is a simple Boolean function that determines
whether a particular instance has GC headers or not. This slot is
only relevant for types that have the (new) GC flag bit set. If the
tp_is_gc slot is NULL (by far the most common case), all instances of
the type are deemed to have GC headers. This slot is called by the
PyObject_IS_GC() macro (which is only used twice, both times in
gcmodule.c).
I also changed the extern declarations for a bunch of GC-related
functions (_PyObject_GC_Del etc.): these always exist but objimpl.h
only declared them when WITH_CYCLE_GC was defined, but I needed to be
able to reference them without #ifdefs. (When WITH_CYCLE_GC is not
defined, they do the same as their non-GC counterparts anyway.)
- SLOT1BINFULL() macro: changed this to check for __rop__ overriding
__op__, like binary_op1() in abstract.c -- the latter only calls the
slot function once if both types use the same slot function, so the
slot function must make both calls -- which it already did for the
__op__, __rop__ order, but not yet for the __rop__, __op__ order
when B.__class__ is a subclass of A.__class__.
- slot_sq_contains(), slot_nb_nonzero(): use lookup_maybe() rather
than lookup_method() which sets an exception which we then clear.
- slot_nb_coerce(): don't give up when left argument's __coerce__
returns NotImplemented, but give the right argument a chance.
Generalize PyLong_AsLongLong to accept int arguments too. The real point
is so that PyArg_ParseTuple's 'L' code does too. That code was
undocumented (AFAICT), so I documented it.
__rop__ now takes precedence over __op__ under the following
circumstances (all must hold):
- Both arguments are new-style classes
- Both arguments are new-style numbers
- Their implementation slots for tp_op differ
- Their types differ
- The right argument's type is a subtype of the left argument's type
Also did this for the ternary operator (pow) -- only the binary case
is dealt with properly though, since __rpow__ is not supported anyway.
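A minimal sketch of the resulting behavior in the binary case (class
names invented):

    class A(object):
        def __add__(self, other):
            return "A.__add__"

    class B(A):
        def __radd__(self, other):
            return "B.__radd__"

    # B is a proper subclass of A overriding __radd__, so the reflected
    # method is tried first even though A() is on the left:
    print A() + B()    # B.__radd__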
their 'i' and 'r' variants) were not being generated if the
corresponding nb_ slots were present in the type object. I bet this
is because floor and true division were introduced after I last
looked at that part of the code.
- Made cls.__module__ writable.
- Ensure that obj.__dict__ is returned as {}, not None, even upon first
reference; it simply springs into life when you ask for it.
(*) The pickling support is provisional for the following reasons:
- It doesn't support classes with __slots__.
- It relies on additional support in copy_reg.py: the C method
__reduce__, defined in the object class, really calls
copy_reg._reduce(obj). Eventually the Python code in copy_reg.py
needs to be migrated to C, but I'd like to experiment with the
Python implementation first. The _reduce() code also relies on an
additional helper function, _reconstructor(), defined in
copy_reg.py; this should also be reimplemented in C.
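A sketch of the __module__ and __dict__ changes above (names invented):

    class C(object):
        pass

    obj = C()
    print obj.__dict__    # {} -- springs into life on first reference

    C.__module__ = "elsewhere"
    print C.__module__    # elsewhere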
than <type 'ClassName'>. Exception: if it's a built-in type or an
extension type, continue to call it <type 'ClassName'>. Call me a
wimp, but I don't want to break more user code than necessary.
same. I hope the test for structural equivalence is stringent enough.
It only allows the assignment if the old and new types:
- have the same basic size
- have the same item size
- have the same dict offset
- have the same weaklist offset
- have the same GC flag bit
- have a common base that is the same except for maybe the dict and
weaklist (which may have been added separately at the same offsets
in both types)
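For example (a sketch; class names invented):

    class A(object):
        pass

    class B(object):
        pass

    a = A()
    a.__class__ = B    # allowed: A and B are structurally equivalent
    print type(a)      # now reports B

    try:
        a.__class__ = int   # different basic size
    except TypeError:
        print "refused"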
- property() now takes 4 keyword arguments: fget, fset, fdel, doc.
Note that the real purpose of the 'f' prefix is to make fdel fit in
('del' is a keyword, so it can't be used as a keyword argument name).
- These map to visible readonly attributes 'fget', 'fset', 'fdel',
and '__doc__' in the property object.
- fget/fset/fdel weren't discoverable from Python before.
- __doc__ is new, and allows associating a docstring with a property.
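A sketch (names invented):

    class Rect(object):
        def __init__(self, w):
            self._w = w
        def _getw(self):
            return self._w
        def _setw(self, v):
            self._w = v
        def _delw(self):
            del self._w
        width = property(fget=_getw, fset=_setw, fdel=_delw,
                         doc="width of the rectangle")

    print Rect.width.__doc__   # width of the rectangle
    print Rect.width.fget      # the raw getter, now discoverable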
- if __getattribute__ exists, it is called first;
if it doesn't exist, PyObject_GenericGetAttr is called first.
- if the above raises AttributeError, and __getattr__ exists,
it is called.
classes to __getattribute__, to make it crystal-clear that it doesn't
have the same semantics as overriding __getattr__ on classic classes.
This is a halfway checkin -- I'll proceed to add a __getattr__ hook
that works the way it works in classic classes.
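A sketch of the intended semantics (class name invented):

    class C(object):
        def __getattr__(self, name):
            # Reached only after normal lookup raises AttributeError.
            return "fallback:" + name

    c = C()
    c.x = 1
    print c.x    # 1 -- found by PyObject_GenericGetAttr, hook not called
    print c.y    # fallback:y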
no backwards compatibility to worry about, so I just pushed the
'closure' struct member to the back -- it's never used in the current
code base (I may eliminate it, but that's more work because the getter
and setter signatures would have to change.)
As examples, I added actual docstrings to the getset attributes of a
few types: file.closed, xxsubtype.spamdict.state.
compatibility, this required all places where an array of "struct
memberlist" structures was declared that is referenced from a type's
tp_members slot to change the type of the structure to PyMemberDef;
"struct memberlist" is now only used by old code that still calls
PyMember_Get/Set. The code in PyObject_GenericGetAttr/SetAttr now
calls the new APIs PyMember_GetOne/SetOne, which take a PyMemberDef
argument.
As examples, I added actual docstrings to the attributes of a few
types: file, complex, instance method, super, and xxsubtype.spamlist.
Also converted the symtable to new style getattr.
elements which are not Unicode objects or strings. (This matches
the string.join() behaviour.)
Fix a memory leak in the .join() method which occurs in case
the Unicode resize fails.
Restore the test_unicode output.
complex_coerce() would never be called with a complex argument,
because PyNumber_Coerce[Ex] doesn't bother calling the type's coercion
method if the values already have the same type. But now, of course,
it's possible to pass an instance of a complex *subtype*, and those
must be accepted.
hack, and it's even more disgusting than a PyInstance_Check() call.
If the tp_compare slot is the slot used for overrides in Python,
it's always called.
Add some tests that show what should work too.
only safely call a type's tp_compare slot if the second argument is
also an instance of the same type. I hate to think what
e.g. int_compare() would do with a second argument that's a float!
descriptors for each attribute. The getattr() implementation is
similar to PyObject_GenericGetAttr(), but delegates to im_self instead
of looking in __dict__; I couldn't do this as a wrapper around
PyObject_GenericGetAttr().
XXX A problem here is that this is a case of *delegation*. dir()
doesn't see exactly the same attributes that are actually defined;
e.g. if the delegate is a Python function object, it supports
attributes like func_code etc., but these are not visible to dir(); on
the other hand, dynamic function attributes (stored in the function's
__dict__) *are* visible to dir(). Maybe we need a mechanism to tell
dir() about the delegation mechanism? I vaguely recall seeing a
request in the newsgroup for a more formal definition of attribute
delegation too. Sigh, time for a new PEP.
and are lists, and then just the string elements (if any)).
There are good and bad reasons for this. The good reason is to support
dir() "like before" on objects of extension types that haven't migrated
to the class introspection API yet. The bad reason is that Python's own
method objects are such a type, and this is the quickest way to get their
im_self etc attrs to "show up" via dir(). It looks much messier to move
them to the new scheme, as their current getattr implementation presents
a view of their attrs that's a union of their own attrs plus their
im_func's attrs. In particular, methodobject.__dict__ actually returns
methodobject.im_func.__dict__, and if that's important to preserve it
doesn't seem to fit the class introspection model at all.
Both int and long multiplication are changed to be more careful in
their assumptions about when one of the arguments is a sequence: the
assumption that at least one of the arguments must be an int (or long,
respectively) is still held, but the assumption that these don't smell
like sequences is no longer true: a subtype of int or long may well
have a sequence-repeat thingie!
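A sketch of the kind of subtype at issue (SeqInt is invented):

    class SeqInt(int):
        # An int subtype that also smells like a sequence.
        def __getitem__(self, i):
            return i % 2

    x = SeqInt(3)
    print x * 4    # 12 -- still integer multiplication, not repetition
    print x[5]     # 1  -- yet the instance supports indexing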
NotImplemented when the lookup fails, and use this for binary
operators. Also lookup_maybe() which doesn't raise an exception when
the lookup fails (still returning NULL).
- Don't turn a non-tuple argument into a one-tuple. Rather, the
caller must pass a format that causes Py_VaBuildValue() to return a
tuple.
- Speed things up by calling PyObject_Call (which is fairly low-level
and straightforward) rather than PyObject_CallObject (which calls
PyEval_CallObjectWithKeywords which calls PyObject_Call, and nothing
is really done in the mean time except some tests for NULL args and
valid types, which are already guaranteed).
- Cosmetics.
Other places:
- Make sure that the format argument to call_method() is surrounded by
parentheses, so it will cause a tuple to be created.
- Replace a few calls to PyEval_CallObject() with a surefire tuple for
args to calls to PyObject_Call(). (A few calls to
PyEval_CallObject() remain that have NULL for args.)
directly, as the only thing done here (replace NULL args with an empty
tuple) is also done there.
XXX Maybe we should take one step further and equate the two at the
macro level? That's harder though because PyEval_Call* is declared in
a header that's not included standard. But it is silly that
PyObject_CallObject calls PyEval_CallObject which calls back to
PyObject_Call. Maybe PyEval_CallObject should be moved into this file
instead? All I know is that there are too many call APIs! The
difference between PyObject_Call and PyEval_CallObjectWithKeywords is
that the latter allows args to be NULL, and does explicit type checks
for args and kwds.
A surprising number of changes to split tp_new into tp_new and tp_init.
Turned out the older PyFile_FromFile() didn't initialize the memory it
allocated in all (error) cases, which caused new sanity asserts
elsewhere to fail left & right (and could have, e.g., caused file_dealloc
to try decrefing random addresses).
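At the Python level the split shows up as __new__ vs. __init__; a sketch
(Point is invented):

    class Point(object):
        def __new__(cls, x, y):
            # tp_new: allocate the instance
            return object.__new__(cls)
        def __init__(self, x, y):
            # tp_init: initialize the allocated instance
            self.x = x
            self.y = y

    p = Point(2, 3)
    print p.x, p.y    # 2 3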
keys are true strings -- no subclasses need apply. This may be debatable.
The problem is that a str subclass may very well want to override __eq__
and/or __hash__ (see the new example of case-insensitive strings in
test_descr), but go-fast shortcuts for strings are ubiquitous in our dicts
(and subclass overrides aren't even looked for then). Another go-fast
reason for the change is that PyString_CheckExact() is a quicker test
than PyString_Check(), and we make such a test on virtually every access
to every dict.
OTOH, a str subclass may also be perfectly happy using the base str eq
and hash, and this change slows them a lot. But those cases are still
hypothetical, while Python's own reliance on true-string dicts is not.
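A sketch of the kind of subclass at stake, modeled loosely on the
test_descr example (CIStr is my name, not the test's):

    class CIStr(str):
        def __eq__(self, other):
            return self.lower() == str(other).lower()
        def __hash__(self):
            return hash(self.lower())

    d = {}
    d[CIStr("Key")] = 1
    print d[CIStr("KEY")]   # 1 -- works only if dict calls the overrides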
just by doing type(f) where f is any file object. This left a hole in
restricted execution mode that rexec.py can't plug by itself (although it
can plug part of it; the rest is plugged in fileobject.c now).
on to the tp_new slot (if non-NULL), as well as to the tp_init slot (if
any). A sane type implementing both tp_new and tp_init should probably
pay attention to the arguments in only one of them.
with the same value instead. This ensures that a string (or string
subclass) object's ob_sinterned pointer is always a str (or NULL), and
that the dict of interned strings only has strs as keys.
+ These were leaving the hash fields at 0, which all string and unicode
routines believe is a legitimate hash code. As a result, hash() applied
to str and unicode subclass instances always returned 0, which in turn
confused dict operations, etc.
+ Changed local names "new"; no point in antagonizing C++ compilers.
subclasses, all "the usual" ones (slicing etc), plus replace, translate,
ljust, rjust, center and strip. I don't know how to be sure they've all
been caught.
Question: Should we complain if someone tries to intern an instance of
a string subclass? I hate to slow any code on those paths.