trampolining going on with the tp_new descriptor, where the inherited
PyType_GenericNew was overwritten with the much slower slot_tp_new,
which would end up calling tp_new_wrapper, which would eventually call
PyType_GenericNew. Add a special case for this to update_one_slot().
XXX Hope there isn't a loophole in this. I'll buy the first person to
point out a bug in the reasoning a beer.
Backport candidate (but I won't do it).
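A rough way to watch the overhead this targets from Python (just an
illustrative micro-benchmark, not part of the patch; the class name
Plain and the iteration count are made up):

    import timeit

    class Plain(object):
        pass          # inherits __new__ unchanged -- the trampolined case

    # Before the fix, Plain() paid for slot_tp_new -> tp_new_wrapper ->
    # PyType_GenericNew; with the special case it should go straight to
    # PyType_GenericNew, so the two timings should be much closer.
    print("object():", timeit.timeit("object()", number=1000000))
    print("Plain(): ", timeit.timeit("Plain()", globals=globals(), number=1000000))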
the framework, the MacOSX apps and the unix tools.
Most of the hard work is done by Mac/OSX/Makefile.
Also, it should now be possible to install in a different directory,
such as /tmp/dist/Library/Frameworks, for building binary installers.
The fink crowd wanted this.
Intern the string "__new__" so we can call PyObject_GetAttr() rather
than PyObject_GetAttrString(). (Though it's a mystery why slot_tp_new
is being called when a class doesn't define __new__. I'll look into
that tomorrow.)
2.2 backport candidate (but I won't do it).
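The same idea, sketched at the Python level purely as an analogy
(sys.intern and the helper lookup_new() are illustrative, not the C
code): build the attribute-name string once and reuse it, instead of
re-creating it on every lookup the way PyObject_GetAttrString() has to.

    import sys

    _NEW = sys.intern("__new__")      # created and interned once

    def lookup_new(cls):
        # reusing the pre-built (interned) name mirrors calling
        # PyObject_GetAttr() with an interned string object
        return getattr(cls, _NEW)

    print(lookup_new(dict))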
a lot of work: it had to save and restore the current exception around
a call to lookup_maybe(), because that could fail in rare cases, and
most objects don't have a __del__ method, so the whole exercise was
usually a waste of time. Changed this to cache the __del__ method in
the type object just like all other special methods, in a new slot
tp_del. So now subtype_dealloc() can test whether tp_del is NULL and
skip the whole exercise if it is. The new slot doesn't need a new
flag bit: subtype_dealloc() is only called if the type was dynamically
allocated by type_new(), so it's guaranteed to have all current slots.
Types defined in C cannot fill in tp_del with a function of their own,
so there's no corresponding "wrapper". (That functionality is already
available through tp_dealloc.)
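Caching __del__ in the type is consistent with how the special method is
looked up anyway; a small Python-level illustration of that semantics
(CPython-specific behaviour, class names made up; the fix itself is in C):

    class WithDel(object):
        def __del__(self):
            print("WithDel.__del__ ran")

    class WithoutDel(object):
        pass

    w = WithoutDel()
    w.__del__ = lambda: print("never called")   # instance attr is ignored
    del w                                       # no finalizer work at all

    x = WithDel()
    del x                                       # prints "WithDel.__del__ ran"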
value; others were inconsistent in what to name the argument or return
value; a few module-global functions had "socket." in front of their
name, against convention.
least on OS/2 (see note on SF patch 555085 by A I MacIntyre), but it
looks like the test *could* fail on any other platform too -- there's
no guarantee that recv() reads all data.
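The portable pattern implied here, sketched in Python (recv_exactly()
and the 8192 chunk size are just for illustration): loop until the
expected number of bytes has actually arrived rather than trusting a
single recv().

    def recv_exactly(conn, n):
        chunks = []
        while n > 0:
            data = conn.recv(min(n, 8192))
            if not data:                 # peer closed before n bytes arrived
                raise EOFError("connection closed early")
            chunks.append(data)
            n -= len(data)
        return b"".join(chunks)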
to delete the reference to self._sock, and the regular destructor will
do that just fine. This made some hacks in close() unnecessary.
The _fileobject class still has a __del__ method, because it must flush.
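Roughly why that __del__ has to stay, in sketch form (BufferedWriter is
an illustrative stand-in, not the real _fileobject):

    class BufferedWriter(object):
        def __init__(self, sock):
            self._sock = sock
            self._wbuf = []

        def write(self, data):
            self._wbuf.append(data)

        def flush(self):
            if self._wbuf:
                self._sock.sendall(b"".join(self._wbuf))
                self._wbuf = []

        def __del__(self):
            # kept only for the flush; merely dropping self._sock would
            # not need a __del__ at all
            self.flush()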
observation that _rbuf could never have more than one string in it.
So make _rbuf a string. The code branches for size<0 and size>=0
are completely separate now, both in read() and in readline().
I checked for tabs this time. :-)
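A sketch of what the string-buffer version of read() looks like (Reader,
_recv and the buffer size are illustrative, not the actual socket.py
code); note that the size < 0 and size >= 0 paths share nothing:

    class Reader(object):
        def __init__(self, recv, bufsize=8192):
            self._recv = recv            # callable returning up to N bytes
            self._rbufsize = bufsize
            self._rbuf = b""             # one plain string, never a list

        def read(self, size=-1):
            if size < 0:
                # read until EOF
                chunks = [self._rbuf]
                self._rbuf = b""
                while True:
                    data = self._recv(self._rbufsize)
                    if not data:
                        break
                    chunks.append(data)
                return b"".join(chunks)
            # size >= 0: return at most `size` bytes
            while len(self._rbuf) < size:
                data = self._recv(self._rbufsize)
                if not data:
                    break
                self._rbuf += data
            result, self._rbuf = self._rbuf[:size], self._rbuf[size:]
            return result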
to being a new-style class, to be more similar to the socket class
in the _socket module; it is now the same as the _socketobject class.
Added __slots__. Added docstrings, copied from the real socket class
where possible.
The _fileobject class is now also a new-style class with __slots__
(though without docstrings). The mode, name, softspace, bufsize and
closed attributes are properly supported (closed as a property; name
as a class attribute; softspace, mode and bufsize as slots).
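The resulting attribute layout, sketched (FileLike is an illustrative
stand-in for _fileobject, and the default values are made up):

    class FileLike(object):
        __slots__ = ("_sock", "mode", "bufsize", "softspace")

        name = "<socket>"                  # class attribute

        def __init__(self, sock, mode="rb", bufsize=8192):
            self._sock = sock
            self.mode = mode
            self.bufsize = bufsize
            self.softspace = 0

        @property
        def closed(self):                  # computed, read-only
            return self._sock is None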
correctly (the tests at least succeed, but they don't test everything yet).
Also fix a performance problem in read(-1): in unbuffered mode, this would
read 1 byte at a time. Since we're reading until EOF, that doesn't make
sense. Use the default buffer size if _rbufsize is <= 1.
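In sketch form (read_until_eof() and DEFAULT_BUFSIZE are illustrative
names, not the actual socket.py code):

    DEFAULT_BUFSIZE = 8192

    def read_until_eof(recv, rbufsize):
        # unbuffered (0) or line-buffered (1) sizes make no sense when
        # draining to EOF, so fall back to a real chunk size
        chunk = rbufsize if rbufsize > 1 else DEFAULT_BUFSIZE
        parts = []
        while True:
            data = recv(chunk)
            if not data:
                break
            parts.append(data)
        return b"".join(parts)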
prints function and module names, which is more informative now that
we repeat some tests in slightly modified subclasses.
Add a test for read() until EOF.
Add test suites for line-buffered (bufsize==1) and a small custom
buffer size (bufsize==2).
Restructure testUnbufferedRead() somewhat to avoid a potentially
infinite loop.
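Two of those ideas in sketch form (illustrative code, not the actual
test_socket.py; io.BytesIO stands in for a socket's file object):
repeat a suite in subclasses that only override bufsize, and put a hard
bound on the byte-at-a-time loop so it cannot spin forever.

    import io
    import unittest

    class FileObjectTests(unittest.TestCase):
        bufsize = -1                               # default buffering

        def test_unbuffered_style_read(self):
            f = io.BytesIO(b"spam\neggs\n")
            data = b""
            for _ in range(100):                   # bounded, not "while True"
                chunk = f.read(1)
                if not chunk:
                    break
                data += chunk
            else:
                self.fail("read loop did not terminate")
            self.assertEqual(data, b"spam\neggs\n")

    class LineBuffered(FileObjectTests):
        bufsize = 1

    class SmallBuffer(FileObjectTests):
        bufsize = 2

    if __name__ == "__main__":
        unittest.main()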
subtype_dealloc().
When call_finalizer() failed, it would return without going through
the trashcan end macro, thereby unbalancing the trashcan nesting level
counter and defeating the test case (slottrash() in
test_descr.py). This in turn meant that the assert in the GC_UNTRACK
macro wasn't triggered by the slottrash() test despite a bug in the
code: _PyTrash_destroy_chain() calls the dealloc routine with an
object that's untracked, and the assert in the GC_UNTRACK macro would
fail on this; but because an earlier test resurrects an object, causing
call_finalizer() to fail and the trashcan nesting level to become
unbalanced, _PyTrash_destroy_chain() was never called.
Calling the slottrash() test in isolation *did* trigger the assert,
however.
So the fix is twofold: (1) call the GC_UnTrack() function instead of
the GC_UNTRACK macro, because the function is safe when the object is
already untracked; (2) when call_finalizer() fails, jump to a label
that exits through the trashcan end macro, keeping the trashcan
nesting balanced.
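For reference, the Python-level shape of the scenario that tripped this
(a resurrection sketch only; the actual fix is in the C code of
subtype_dealloc(), and the list name is made up):

    survivors = []

    class Resurrecting(object):
        def __del__(self):
            survivors.append(self)    # resurrect: this is the case where
                                      # call_finalizer() has to bail out

    r = Resurrecting()
    del r
    print(len(survivors))             # 1 -- the object outlived its del statement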