bit. For one, this class:
    class C(object):
        def __new__(myclass, ...): ...
would have no way to call the __new__ method of its base class, and
the workaround (to create an intermediate base class whose __new__ you
can call) is ugly.
So, I've come up with a better solution that restores object.__new__,
but still solves the original problem, which is that built-in and
extension types shouldn't inherit object.__new__. The solution is
simple: only "heap types" inherit tp_new. Simpler, less code,
perfect!
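By way of illustration (this snippet is mine, not part of the checkin),
the kind of __new__ upcall that motivated restoring object.__new__ is now
straightforward; the Singleton class is just a made-up example:

    class Singleton(object):
        # Made-up example: override __new__ and delegate to the base class.
        # With object.__new__ restored, the upcall below just works; no
        # intermediate helper base class is needed.
        _instance = None

        def __new__(cls):
            if cls._instance is None:
                cls._instance = object.__new__(cls)  # call the base __new__
            return cls._instance

    a = Singleton()
    b = Singleton()
    assert a is b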
Previously, f.read() and f.readlines() checked for
errors on their file object and possibly raised an
IOError, but f.readline() didn't. This patch makes
f.readline() behave like the others.
Note that I've added a call to clearerr() since the other calls to
ferror() include that too.
I have no way to test this code. :-)
division. The basic binary operators now all correctly call the
__rxxx__ variant when they should.
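Here's a tiny illustration (mine, not from the patch) of the reflected-operand
dispatch for a new-style class; Meters is a made-up name:

    class Meters(object):
        # Made-up example class; only the reflected method matters here.
        def __init__(self, n):
            self.n = n

        def __radd__(self, other):
            # Invoked because the left operand (a plain int) doesn't know
            # how to add a Meters instance.
            return Meters(other + self.n)

    total = 5 + Meters(3)   # int.__add__ gives up, so Meters.__radd__ runs
    assert total.n == 8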
In type_new(), I now make the new type a new-style number unless it
inherits from an old-style number that has numeric methods.
By way of cosmetics, I've changed the signatures of the SLOT<i> macros
to take actual function names and operator names as strings, rather
than rely on C preprocessor symbol manipulations. This makes the
calls slightly more verbose, but greatly helps simple searches through
the file: you can now find out where "__radd__" is used or where the
function slot_nb_power() is defined and where it is used.
This introduces:
- A new operator // that means floor division (the kind of division
where 1/2 is 0).
- The "future division" statement ("from __future__ import division)
which changes the meaning of the / operator to implement "true
division" (where 1/2 is 0.5).
- New overloadable operators __truediv__ and __floordiv__.
- New slots in the PyNumberMethods struct for true and floor division,
new abstract APIs for them, new opcodes, and so on.
I emphasize that without the future division statement, the semantics
of / will remain unchanged until Python 3.0.
Not yet implemented are warnings (default off) when / is used with int
or long arguments.
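A quick interactive sketch (mine, not part of the patch) of the new semantics:

    >>> 1 / 2        # classic division: unchanged without the future statement
    0
    >>> 1 // 2       # the new floor division operator
    0
    >>> from __future__ import division
    >>> 1 / 2        # true division once the future statement is in effect
    0.5
    >>> 1 // 2       # floor division is unaffected
    0
    >>> 7.0 // 2     # // floors for floats too
    3.0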
This has been on display since 7/31 as SF patch #443474.
Flames to /dev/null.
- Add an explicit call to PyType_Ready(&PyList_Type) to pythonrun.c
(just for the heck of it, really -- we should either explicitly
ready all types, or none).
- Add comment blocks explaining add_operators() and override_slots().
(This file could use some more explaining, but this is all I had
breath for today. :)
- Renamed the argument 'base' of add_wrappers() to 'wraps' because
it's not a base class (which is what the 'base' identifier is used
for elsewhere).
Small nits:
- Fix add_tp_new_wrapper() to avoid overwriting an existing __new__
descriptor in tp_defined.
- In add_operators(), check the return value of add_tp_new_wrapper().
Functional change:
- Remove the tp_new functionality from PyBaseObject_Type; this means
you can no longer instantiate the 'object' type. It's only useful
as a base class.
- To make up for the above loss, add tp_new to dynamic types. This
has to be done in a hackish way (after override_slots() has been
called, with an explicit call to add_tp_new_wrapper() at the very
end) because otherwise I ran into recursive calls of slot_tp_new().
Sigh.
And remove all the extern decls in the middle of .c files.
Apparently, it was excluded from the header file because it is
intended for internal use by the interpreter. It's still intended for
internal use and documented as such in the header file.
#caused warnings with the VMS C compiler. (SF bug #442998, in part.)
On a narrow system the current code should never be executed since ch
will always be < 0x10000.
Marc-Andre: you may end up fixing this a different way, since I
believe you have plans to generate \U for surrogate pairs. I'll leave
that to you.
particular, str(long) and repr(long) use base 10, and that gets a factor
of 4 speedup). Another factor of 2 can be gotten by refactoring divrem1 to
support in-place division, but that started getting messy so I'm leaving
that out.
raising an error. This was one of the two issues that the VPython
folks found particularly problematic for their students. (The other
one was integer division...) This implements (my) SF patch #440487.
Implement sys.maxunicode.
Explicitly wrap around upper/lower computations for wide Py_UNICODE.
When decoding large characters with UTF-8, represent expected test
results using the \U notation.
Add configure option --enable-unicode.
Add config.h macros Py_USING_UNICODE, PY_UNICODE_TYPE, Py_UNICODE_SIZE,
SIZEOF_WCHAR_T.
Define Py_UCS2.
Encode and decode large UTF-8 characters into single Py_UNICODE values
for wide Unicode types; likewise for UTF-16.
Remove the test of whether sizeof(Py_UNICODE) is two.
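For illustration only (not part of the checkin), a script can use the new
sys.maxunicode to tell the two build flavors apart:

    import sys

    if sys.maxunicode == 0xFFFF:
        # Narrow (UCS-2) build: a non-BMP character is stored as a
        # surrogate pair, so this string has two elements.
        assert len(u"\U00010000") == 2
    else:
        # Wide (UCS-4) build: one Py_UNICODE value per character.
        assert len(u"\U00010000") == 1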
"mapping" object, specifically one that supports PyMapping_Keys() and
PyObject_GetItem(). This allows you to say e.g. {}.update(UserDict()).
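For example (my sketch, not from the patch), any object supplying those two
hooks now works; TinyMap here is made up:

    class TinyMap(object):
        # Minimal made-up mapping: keys() plus __getitem__ is all the new
        # update() code path needs.
        def keys(self):
            return ["a", "b"]

        def __getitem__(self, key):
            return key.upper()

    d = {}
    d.update(TinyMap())
    assert d == {"a": "A", "b": "B"}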
We keep the special case for concrete dict objects, although that
seems moderately questionable. OTOH, the code exists and works, so
why change that?
.update()'s docstring already claims that D.update(E) implies calling
E.keys() so it's appropriate not to transform AttributeErrors in
PyMapping_Keys() to TypeErrors.
Patch eyeballed by Tim.
unicodeobject.h, which forces sizeof(Py_UNICODE) == sizeof(Py_UCS4).
(This may be good enough for platforms that don't have a 16-bit
type; the UTF-16 codecs don't work, though.)
the next free valuestack slot, not to the base (in America, stacks push
and pop at the top -- they mutate at the bottom in Australia <wink>).
eval_frame(): assert that f_stacktop isn't NULL upon entry.
frame_dealloc(): avoid ordered pointer comparisons involving f_stacktop
when f_stacktop is NULL.
i_divmod: New and simpler algorithm. Old one returned gibberish on most
boxes when the numerator was -sys.maxint-1. Oddly enough, it worked in the
release (not debug) build on Windows, because the compiler optimized away
some tricky sign manipulations that were incorrect in this case.
Makes you wonder <wink> ...
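To pin down the property the new algorithm has to preserve (this check is
mine, not part of the patch; Python 2's sys.maxint is assumed):

    import sys

    a = -sys.maxint - 1          # the boundary case the old code mishandled
    for b in (1, 2, 3, 7, -1, -7, sys.maxint):
        q, r = divmod(a, b)
        # Floor-division invariant: a == q*b + r, with r zero or sharing
        # the sign of b -- it must hold even at this extreme value.
        assert a == q * b + r
        assert r == 0 or (r > 0) == (b > 0)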
Bugfix candidate.
Gave Python linear-time repr() implementations for dicts, lists, strings.
This means, e.g., that repr(range(50000)) is no longer 50x slower than
pprint.pprint() in 2.2 <wink>.
I don't consider this a bugfix candidate, as it's a performance boost.
Added _PyString_Join() to the internal string API. If we want that in the
public API, fine, but then it requires runtime error checks instead of
asserts.