tuple(i) repaired to return a true tuple when i is an instance of a
tuple subclass.
Added PyTuple_CheckExact macro.
PySequence_Tuple(): if a tuple-like object isn't exactly a tuple, it's
not safe to return the object as-is -- make a new tuple of it instead.
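A minimal sketch of the repaired behavior, seen from Python (T is a
hypothetical subclass name):

    class T(tuple):
        pass

    i = T([1, 2, 3])
    t = tuple(i)
    assert type(t) is tuple    # a true tuple now, not a T
    assert t == (1, 2, 3)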
Given an immutable type M, and an instance I of a subclass of M, the
constructor call M(I) was just returning I as-is; but it should return a
new instance of M. This fixes it for M in {int, long}. Strings, floats
and tuples remain to be done.
Added new macros PyInt_CheckExact and PyLong_CheckExact, to more easily
distinguish between "is" and "is a" (i.e., only an int passes
PyInt_CheckExact, while any subclass of int passes PyInt_Check).
Added private API function _PyLong_Copy.
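Seen from Python, the distinction and the constructor fix look like
this (MyInt is a hypothetical subclass name):

    class MyInt(int):
        pass

    i = MyInt(42)
    assert isinstance(i, int)     # "is a": any subclass passes PyInt_Check
    assert type(i) is not int     # "is": only an exact int passes PyInt_CheckExact
    assert type(int(i)) is int    # int(I) now returns a new exact int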
Subtlety on Windows: if we change test_largefile.py to use a file
> 4GB, it still fails. A debug session suggests this is because
fseek(fp, 0, 2) refuses to seek to the end of the file when the file
is > 4GB, because it uses the SetFilePointer() in 32-bit mode.
But it only fails when we seek relative to the end of the file,
because in the other seek modes only calls to fgetpos() and fsetpos()
are made, which use Get/SetFilePointer() in 64-bit mode. Solution:
#ifdef MS_WINDOWS, replace the call to fseek(fp, ...) with a call to
_lseeki64(fileno(fp), ...). Make sure to call fflush(fp) first.
(XXX Could also replace the entire branch with a call to _lseeki64().
Would that be more efficient? Certainly less generated code.)
(XXX This needs more testing. I can't actually test that it works for
files >4GB on my Win98 machine, because the filesystem here won't let
me create files >=4GB at all. Tim should test this on his Win2K
machine.)
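A rough Python-level analogue of the fseek replacement above (the file
name is a placeholder): flush the stdio side first, then seek on the
underlying descriptor, which takes the 64-bit path:

    import os

    f = open('big.bin', 'rb')
    f.flush()                          # like fflush(fp)
    end = os.lseek(f.fileno(), 0, 2)   # like _lseeki64(fileno(fp), 0, SEEK_END)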
- use PyModule_Check() instead of PyObject_TypeCheck(), now we can.
- don't assert that the __dict__ gotten out of a module is always
a dictionary; check its type, and raise an exception if it's not.
iterable object. I'm not sure how that got overlooked before!
Got rid of the internal _PySequence_IterContains, introduced a new
internal _PySequence_IterSearch, and rewrote all the iteration-based
"count of", "index of", and "is the object in it or not?" routines to
just call the new function. I suppose it's slower this way, but the
code duplication was getting depressing.
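A hypothetical Python sketch of the one routine serving all three
operations (the real code is C; the names here are made up):

    def iter_search(seq, obj, operation):
        # operation is one of 'count', 'index', 'contains'
        i = count = 0
        for item in seq:
            if item == obj:
                if operation == 'contains':
                    return 1
                if operation == 'index':
                    return i
                count = count + 1
            i = i + 1
        if operation == 'contains':
            return 0
        if operation == 'index':
            raise ValueError('sequence.index(x): x not in sequence')
        return count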
the base classes is not a classic class, and its class (the metaclass)
is callable, call the metaclass to do the deed.
One effect of this is that, when mixing classic and new-style classes
amongst the bases of a class, it doesn't matter whether the first base
class is a classic class or not: you will always get the error
"TypeError: metatype conflict among bases". (Formerly, with a classic
class first, you'd get "TypeError: PyClass_New: base must be a class".)
Another effect is that multiple inheritance from ExtensionClass.Base,
with a classic class as the first class, transfers control to the
ExtensionClass.Base class. This is what we need for SF #443239 (and
also for running Zope under 2.2a4, before ExtensionClass is replaced).
corresponding "getitem" operation (sq_item or mp_subscript) is
implemented. I realize that "sequence-ness" and "mapping-ness" are
poorly defined (and the tests may still be wrong for user-defined
instances, which always have both slots filled), but I believe that a
sequence that doesn't support its getitem operation should not be
considered a sequence. All other operations are optional though.
For example, the ZODB BTree tests crashed because PySequence_Check()
returned true for a dictionary! (In 2.2, the dictionary type has a
tp_as_sequence pointer, but the only field filled is sq_contains, so
you can write "if key in dict".) With this fix, all standalone ZODB
tests succeed.
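The Python-visible symptom, roughly (operator.isSequenceType() exposed
PySequence_Check() at the Python level in 2.x):

    import operator

    d = {1: 'one'}
    assert 1 in d                          # sq_contains: "key in dict" still works
    assert not operator.isSequenceType(d)  # but a dict is not a sequence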
a->tp_mro. If a doesn't have a class, it's considered a subclass only
of itself or of 'object'.
This one fix is enough to prevent the ExtensionClass test suite from
dumping core, but that doesn't say much (it's a rather small test
suite). Also note that for ExtensionClass-defined types, a different
subclass test may be needed. But I haven't checked whether
PyType_IsSubtype() is actually used in situations where this matters
-- probably it doesn't, since we also don't check for classic classes.
Curious: the MS docs say stati64 etc are supported even on Win95, but
Win95 doesn't support a filesystem that allows partitions > 2 Gb.
test_largefile: This was opening its test file in text mode. I have no
idea how that worked under Win64, but it sure needs binary mode on Win98.
BTW, on Win98 test_largefile runs quickly (under a second).
While not even documented, they were clearly part of the C API,
there's no great difficulty to support them, and it has the cool
effect of not requiring any changes to ExtensionClass.c.
requires that errno ever get set, and it looks like glibc is already
playing that game. New rules:
+ Never use HUGE_VAL. Use the new Py_HUGE_VAL instead.
+ Never believe errno. If overflow is the only thing you're interested in,
use the new Py_OVERFLOWED(x) macro. If you're interested in any libm
errors, use the new Py_SET_ERANGE_IF_OVERFLOW(x) macro, which attempts
to set errno the way C89 said it worked.
Unfortunately, none of these are reliable, but they work on Windows and I
*expect* under glibc too.
I believe this works on Linux (tested both on a system with large file
support and one without it), and it may work on Solaris 2.7.
The changes are twofold:
(1) The configure script now boldly tries to set the two symbols that
are recommended (for Solaris and Linux), and then tries a test
script that does some simple seeking without writing.
(2) The _portable_{fseek,ftell} functions are a little more systematic
in how they try the different large file support options: first
try fseeko/ftello, but only if off_t is large; then try
fseek64/ftell64; then try hacking with fgetpos/fsetpos.
I'm keeping my fingers crossed. The meaning of the
HAVE_LARGEFILE_SUPPORT macro is not at all clear.
I'll see if I can get it to work on Windows as well.
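A minimal Python-level smoke test of the seek path (assumes a
filesystem that allows sparse files; the file name is a placeholder):

    f = open('sparse.bin', 'wb')
    f.seek(5L * 1024 * 1024 * 1024)   # past 4GB: needs the 64-bit path
    f.write('x')
    f.close()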
the fiddling is simply because no caller of PyLong_AsDouble ever
checked for failure (so that's fixing old bugs). PyLong_AsDouble is much
faster for big inputs now too, but that's more of a happy consequence
than a design goal.
but will be the foundation for Good Things:
+ Speed PyLong_AsDouble.
+ Give PyLong_AsDouble the ability to detect overflow.
+ Make true division of long/long nearly as accurate as possible (no
spurious infinities or NaNs).
+ Return non-insane results from math.log and math.log10 when passing a
long that can't be approximated by a double better than HUGE_VAL.
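At the Python level the payoff looks roughly like this (a sketch, with
2.x long literals):

    from __future__ import division  # needed for long/long true division

    huge = 10L ** 1000               # far too big for a double
    try:
        float(huge)                  # overflow is now detectable...
    except OverflowError:
        pass                         # ...instead of returning nonsense
    assert (10L ** 40) / (10L ** 20) == 1e20   # exactly representable, exact result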
mapping object", in the same sense dict.update(x) requires of x (that x
has a keys() method and a getitem).
Questionable: The other type constructors accept a keyword argument, so I
did that here too (e.g., dictionary(mapping={1:2}) works). But type_call
doesn't pass the keyword args to the tp_new slot (it passes NULL), it only
passes them to the tp_init slot, so getting at them required adding a
tp_init slot to dicts. Looks like that makes the normal case (i.e., no
args at all) a little slower (the time it takes to call dict.tp_init and
have it figure out there's nothing to do).
PEP 238. Changes:
- add a new flag variable Py_DivisionWarningFlag, declared in
pydebug.h, defined in object.c, set in main.c, and used in
{int,long,float,complex}object.c. When this flag is set, the
classic division operator issues a DeprecationWarning message.
- add a new API PyRun_SimpleStringFlags() to match
PyRun_SimpleString(). The main() function calls this so that
commands run with -c can also benefit from -Dnew.
- While I was at it, I changed the usage message in main() somewhat:
alphabetized the options, split it into *four* parts to fit in under
512 bytes (not that I still believe this is necessary -- doc strings
elsewhere are much longer), and perhaps most visibly, don't display
the full list of options on each command line error. Instead, the
full list is only displayed when -h is used, and otherwise a brief
reminder of -h is displayed. When -h is used, write to stdout so
that you can do `python -h | more'.
Notes:
- I don't want to use the -W option to control whether the classic
division warning is issued or not, because the machinery to decide
whether to display the warning or not is very expensive (it involves
calling into the warnings.py module). You can use -Werror to turn
the warnings into exceptions though.
- The -Dnew option doesn't select future division for all of the
program -- only for the __main__ module. I don't know if I'll ever
change this -- it would require changes to the .pyc file magic
number to do it right, and a more global notion of compiler flags.
- You can usefully combine -Dwarn and -Dnew: this gives the __main__
module new division, and warns about classic division everywhere
else.
__dict__ slot for string subtypes.
subtype_dealloc(): properly use _PyObject_GetDictPtr() to get the
(potentially negative) dict offset. Don't copy things into local
variables that are used only once.
type_new(): properly calculate a negative dict offset when tp_itemsize
is nonzero. The __dict__ attribute, if present, is now a calculated
attribute rather than a structure member.
tupledealloc(): only feed the free list when the type is really a
tuple, not a subtype. Otherwise, use PyObject_GC_Del().
_PyTuple_Resize(): disallow using this for tuple subtypes.
don't use getattr, but only look in the dict of the type and base types.
This prevents picking up all sorts of weird stuff, including things defined
by the metaclass when the object is a class (type).
For this purpose, a helper function lookup_method() was added. One or two
other places also use this.
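The visible effect: implicit special-method lookup bypasses the
instance. A minimal example (class name made up):

    class C(object):
        def __len__(self):
            return 1

    c = C()
    c.__len__ = lambda: 99   # ignored; only the type (and bases) are searched
    assert len(c) == 1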
PyString_FromFormatV(): In the final resize at the end, we can use
PyString_AS_STRING() since we know the object is a string and can
avoid the typechecking.
PyString_FromFormat(): GS sez: "For safety/propriety, you should call
va_end() on the vargs variable."
at least in the first two characters. %p is ill-defined, and people will
forever commit bad tests otherwise ("bad" in the sense that they fall
over (at least on Windows) for lack of a leading '0x'; 5 of the 7 tests
in test_repr.py failed on Windows for that reason this time around).
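The practical upshot for tests that match default reprs, as a sketch:

    class C(object):
        pass

    assert '0x' in repr(C())   # the address part now reliably starts with '0x'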
an inappropriate first argument. Now that there are more ways for
this to fail, make sure to report the name of the class of the
expected instance and of the actual instance.
Like PyErr_Format(), these new C API methods can be used instead of
sprintf()'s into hardcoded char* buffers. This allows us to fix
many situations where long package, module, or class names get
truncated in reprs.
PyString_FromFormat() is the varargs variety.
PyString_FromFormatV() is the va_list variety.
Original PyErr_Format() code was modified to allow %p and %ld
expansions.
Many reprs were converted to this, checkins coming soon. Not
changed: complex_repr(), float_repr(), float_print(), float_str(),
int_repr(). There may be other candidates not yet converted.
Closes patch #454743.
super(type) -> unbound super object
super(type, obj) -> bound super object; requires isinstance(obj, type)
Typical use to call a cooperative superclass method:
    class C(B):
        def meth(self, arg):
            super(C, self).meth(arg)
the delete function. (Question: should the attribute name also be
recorded in the getset object? That makes the protocol more work, but
may give us better error messages.)
cases.
powu: Deleted.
This started with a nonsensical error msg:
>>> x = -1.
>>> import sys
>>> x**(-sys.maxint-1L)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
ValueError: negative number cannot be raised to a fractional power
>>>
The special-casing in float_pow was simply wrong in this case (there's
not even anything peculiar about these inputs), and I don't see any point
to it in *any* case: a decent libm pow should have worst-case error under
1 ULP, so in particular should deliver the exact result whenever the exact
result is representable (else its error is at least 1 ULP). Thus our
special fiddling for integral values "shouldn't" buy anything in accuracy,
and, to the contrary, repeated multiplication is less accurate than a
decent pow when the true result isn't exactly representable. So just
let pow() do its job here (we may not be able to trust libm x-platform
in exceptional cases, but these are normal cases).
interpretation of negative indices, since neither the sq_*item slots
nor the slot_ wrappers do this. (Slices are a different story, there
the size wrapping is done too early.)
Classes that don't use __slots__ have a __weakref__ member added in
the same way as __dict__ is added (i.e. only if the base didn't
already have one). Classes using __slots__ can enable weak
referenceability by adding '__weakref__' to the __slots__ list.
Renamed the __weaklistoffset__ class member to __weakrefoffset__ --
it's not always a list, it seems. (Is tp_weaklistoffset a historical
misnomer, or do I misunderstand this?)
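For example, under the new rules (class names hypothetical):

    import weakref

    class Plain(object):
        pass                              # gets __weakref__ automatically

    class Slotted(object):
        __slots__ = ['x']                 # no __weakref__

    class SlottedWeak(object):
        __slots__ = ['x', '__weakref__']  # opts back in

    weakref.ref(Plain())
    weakref.ref(SlottedWeak())
    try:
        weakref.ref(Slotted())
    except TypeError:
        pass                              # not weakly referenceable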
- Do not compile unicodeobject, unicodectype, and unicodedata if Unicode is disabled
- check for Py_USING_UNICODE in all places that use Unicode functions
- disable Unicode literals and the Unicode builtin functions
- add the types.StringTypes list
- remove Unicode literals from most tests.
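types.StringTypes lets code stay agnostic about whether the build
includes Unicode; a typical (hypothetical) helper:

    import types

    def is_string(x):
        # (str,) on a no-Unicode build, (str, unicode) otherwise
        return isinstance(x, types.StringTypes)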
the class dict). Anything but a nonnegative int in either place is
*ignored* (before, a non-Boolean was an error). The default is still
static -- in a comparative test, Jeremy's Tools/compiler package ran
twice as slow (compiling itself) using dynamic as the default. (The
static version, which requires a few tweaks to avoid modifying class
variables, runs at about the same speed as the classic version.)
slot_tp_descr_get(): this also needed fallback behavior.
slot_tp_getattro(): remove a debug fprintf() call.
the metatype passed in as an argument. This prevents infinite
recursion when a metatype written in Python calls type.__new__() as a
"super" call.
Also tweaked some comments.
calculate it on the fly. This way even modules with long package
names get an accurate repr instead of a truncated one. The extra
malloc/free cost shouldn't be a problem in a repr function.
Closes SF bug #437984
- type_module(), type_name(): if tp_name contains one or more periods,
the part before the last period is __module__, the part after that
is __name__. Otherwise, for non-heap types, __module__ is
"__builtin__". For heap types, __module__ is looked up in
tp_defined.
- type_new(): heap types have their __module__ set from
globals().__name__; a pre-existing __module__ in their dict is not
overridden. This is not inherited.
- type_repr(): if __module__ exists and is not "__builtin__", it is
included in the string representation (just as it already is for
classes). For example <type '__main__.C'>.
- descrobject.c:descr_check(): only believe None means the same as
NULL if the type given is None's type.
- typeobject.c:wrap_descr_get(): don't "conveniently" default an
absent type to the type of the object argument. Let the called
function figure it out.
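For example, under the __module__/__name__ rules above (a 2.2-era
sketch, run as a script):

    class C(object):
        pass

    assert C.__module__ == '__main__'        # heap type: from globals()
    assert int.__module__ == '__builtin__'   # non-heap, no period in tp_name
    # repr(C) is now "<type '__main__.C'>"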
types -- currently Type, List, None and NotImplemented. To be called
from Py_Initialize() instead of accumulating calls there.
Also rename type(None) to NoneType and type(NotImplemented) to
NotImplementedType -- naming the type identical to the object was
confusing.
returns that. (This fix is also by MvL; checking it in because I want
to make more changes here. I'm still not 100% satisfied -- see
comments attached to the patch.)
operators for which a default implementation exists now work, both in
dynamic classes and in static classes, overridden or not. This
affects __repr__, __str__, __hash__, __contains__, __nonzero__,
__cmp__, and the rich comparisons (__lt__ etc.). For dynamic
classes, this meant copying a lot of code from classobject! (XXX
There are still some holes, because the comparison code in object.c
uses PyInstance_Check(), meaning new-style classes don't get the
same dispensation. This needs more thinking.)
- Add object.__hash__, object.__repr__, object.__str__. The __str__
dispatcher now calls the __repr__ dispatcher, as it should.
- For static classes, the tp_compare, tp_richcompare and tp_hash slots
are now inherited together, or not at all. (XXX I fear there are
still some situations where you can inherit __hash__ when you
shouldn't, but mostly it's OK now, and I think there's no way we can
get that 100% right.)
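Seen from Python (a sketch; class names made up):

    class C(object):
        pass

    c = C()
    repr(c); str(c); hash(c)     # all inherited from object now

    class D(object):
        def __repr__(self):
            return 'D()'

    assert str(D()) == 'D()'     # __str__ dispatcher falls back to __repr__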
setting and deleting a function's __dict__ attribute. Deleting
it, or setting it to a non-dictionary, results in a TypeError. Note
that getting it the first time magically initializes it to an
empty dict so that func.__dict__ will always appear to be a
dictionary (never None).
Closes SF bug #446645.
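A sketch of the behavior:

    def f():
        pass

    assert f.__dict__ == {}      # first access creates an empty dict
    f.__dict__ = {'a': 1}        # assigning a dictionary is fine
    try:
        f.__dict__ = 42          # non-dictionary
    except TypeError:
        pass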
The descr changes moved the dispatch for calling objects from
call_object() in ceval.c to PyObject_Call() in abstract.c.
call_object() and the many functions it used in ceval.c were no longer
used, but were not removed.
Rename meth_call() as PyCFunction_Call() so that it can be called by
the CALL_FUNCTION opcode in ceval.c.
Also, fix error message that referred to PyEval_EvalCodeEx() by its
old name eval_code2(). (I'll probably refer to it by its old name,
too.)
XXX There are still some loose ends: repr(), str(), hash() and
comparisons don't inherit a default implementation from object. This
must be resolved similarly to the way it's resolved for classic
instances.
XXX This is not sufficient: if a dynamic class has no __repr__ method
(for instance), but later one is added, that doesn't add a tp_repr
slot, so repr() doesn't call the __repr__ method. To make this work,
I'll have to add default implementations of several slots to 'object'.
XXX Also, dynamic types currently only inherit slots from their
dominant base.
problem). inherit_slots() is split in two parts: inherit_special()
which inherits the flags and a few very special members from the
dominant base; inherit_slots() which inherits only regular slots,
and is now called for each base in the MRO in turn. These are now
both void functions since they don't have error returns.
- Added object.__setitem__() back -- for the same reason as
object.__new__(): a subclass of object should be able to call
object.__setitem__().
- add_wrappers() was moved around to be closer to where it is used (it
was defined together with add_methods() etc., but has nothing to do
with these).
Removed all instances of Py_UCS2 from the codebase, and so also (I hope)
the last remaining reliance on the platform having an integral type
with exactly 16 bits.
PyUnicode_DecodeUTF16() and PyUnicode_EncodeUTF16() now read and write
one byte at a time.
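A quick round-trip check for a character outside the BMP (the code
point is an arbitrary example):

    s = u'\U00010384'
    assert s.encode('utf-16-be').decode('utf-16-be') == s
    assert s.encode('utf-8').decode('utf-8') == s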
bit. For one, this class:
    class C(object):
        def __new__(myclass, ...): ...
would have no way to call the __new__ method of its base class, and
the workaround (to create an intermediate base class whose __new__ you
can call) is ugly.
So, I've come up with a better solution that restores object.__new__,
but still solves the original problem, which is that built-in and
extension types shouldn't inherit object.__new__. The solution is
simple: only "heap types" inherit tp_new. Simpler, less code,
perfect!
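So something like this works as intended (a sketch):

    class C(object):
        def __new__(cls):
            self = object.__new__(cls)   # restored; heap types inherit tp_new
            self.tag = 'from C.__new__'
            return self

    assert C().tag == 'from C.__new__'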
Previously, f.read() and f.readlines() checked for
errors on their file object and possibly raised an
IOError, but f.readline() didn't. This patch makes
f.readline() behave like the others.
Note that I've added a call to clearerr() since the other calls to
ferror() include that too.
I have no way to test this code. :-)
division. The basic binary operators now all correctly call the
__rxxx__ variant when they should.
In type_new(), I now make the new type a new-style number unless it
inherits from an old-style number that has numeric methods.
By way of cosmetics, I've changed the signatures of the SLOT<i> macros
to take actual function names and operator names as strings, rather
than rely on C preprocessor symbol manipulations. This makes the
calls slightly more verbose, but greatly helps simple searches through
the file: you can now find out where "__radd__" is used or where the
function slot_nb_power() is defined and where it is used.
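A small sketch of the reflected-operand fix (class name made up):

    class Peculiar(object):
        def __radd__(self, other):
            return (other, 'radd')

    assert 1 + Peculiar() == (1, 'radd')   # __radd__ is now consulted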
This introduces:
- A new operator // that means floor division (the kind of division
where 1/2 is 0).
- The "future division" statement ("from __future__ import division)
which changes the meaning of the / operator to implement "true
division" (where 1/2 is 0.5).
- New overloadable operators __truediv__ and __floordiv__.
- New slots in the PyNumberMethods struct for true and floor division,
new abstract APIs for them, new opcodes, and so on.
I emphasize that without the future division statement, the semantics
of / will remain unchanged until Python 3.0.
Not yet implemented are warnings (default off) when / is used with int
or long arguments.
This has been on display since 7/31 as SF patch #443474.
Flames to /dev/null.
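A sketch of the semantics (the future statement is per-module):

    from __future__ import division

    assert 1 / 2 == 0.5    # '/' is true division here
    assert 1 // 2 == 0     # '//' is floor division, future statement or not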
- Add an explicit call to PyType_Ready(&PyList_Type) to pythonrun.c
(just for the heck of it, really -- we should either explicitly
ready all types, or none).
- Add comment blocks explaining add_operators() and override_slots().
(This file could use some more explaining, but this is all I had
breath for today. :)
- Renamed the argument 'base' of add_wrappers() to 'wraps' because
it's not a base class (which is what the 'base' identifier is used
for elsewhere).
Small nits:
- Fix add_tp_new_wrapper() to avoid overwriting an existing __new__
descriptor in tp_defined.
- In add_operators(), check the return value of add_tp_new_wrapper().
Functional change:
- Remove the tp_new functionality from PyBaseObject_Type; this means
you can no longer instantiate the 'object' type. It's only useful
as a base class.
- To make up for the above loss, add tp_new to dynamic types. This
has to be done in a hackish way (after override_slots() has been
called, with an explicit call to add_tp_new_wrapper() at the very
end) because otherwise I ran into recursive calls of slot_tp_new().
Sigh.
And remove all the extern decls in the middle of .c files.
Apparently, it was excluded from the header file because it is
intended for internal use by the interpreter. It's still intended for
internal use and documented as such in the header file.
caused warnings with the VMS C compiler. (SF bug #442998, in part.)
On a narrow system the current code should never be executed since ch
will always be < 0x10000.
Marc-Andre: you may end up fixing this a different way, since I
believe you have plans to generate \U for surrogate pairs. I'll leave
that to you.
particular, str(long) and repr(long) use base 10, and that gets a factor
of 4 speedup). Another factor of 2 can be gotten by refactoring divrem1 to
support in-place division, but that started getting messy so I'm leaving
that out.
raising an error. This was one of the two issues that the VPython
folks found particularly problematic for their students. (The other
one was integer division...) This implements (my) SF patch #440487.
Implement sys.maxunicode.
Explicitly wrap around upper/lower computations for wide Py_UNICODE.
When decoding large characters with UTF-8, represent expected test
results using the \U notation.
Add configure option --enable-unicode.
Add config.h macros Py_USING_UNICODE, PY_UNICODE_TYPE, Py_UNICODE_SIZE,
SIZEOF_WCHAR_T.
Define Py_UCS2.
Encode and decode large UTF-8 characters into single Py_UNICODE values
for wide Unicode types; likewise for UTF-16.
Remove test whether sizeof Py_UNICODE is two.
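sys.maxunicode reports which build you have; a sketch:

    import sys
    # 0xFFFF on a narrow build, 0x10FFFF on a wide (UCS-4) build
    assert sys.maxunicode in (0xFFFF, 0x10FFFF)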
"mapping" object, specifically one that supports PyMapping_Keys() and
PyObject_GetItem(). This allows you to say e.g. {}.update(UserDict())
We keep the special case for concrete dict objects, although that
seems moderately questionable. OTOH, the code exists and works, so
why change that?
.update()'s docstring already claims that D.update(E) implies calling
E.keys() so it's appropriate not to transform AttributeErrors in
PyMapping_Keys() to TypeErrors.
Patch eyeballed by Tim.
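For example (a hypothetical minimal mapping):

    class M:
        def keys(self):
            return ['a', 'b']
        def __getitem__(self, key):
            return key.upper()

    d = {}
    d.update(M())
    assert d == {'a': 'A', 'b': 'B'}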
unicodeobject.h, which forces sizeof(Py_UNICODE) == sizeof(Py_UCS4).
(this may be good enough for platforms that don't have a 16-bit
type. The UTF-16 codecs don't work, though)
the next free valuestack slot, not to the base (in America, stacks push
and pop at the top -- they mutate at the bottom in Australia <wink>).
eval_frame(): assert that f_stacktop isn't NULL upon entry.
frame_dealloc(): avoid ordered pointer comparisons involving f_stacktop
when f_stacktop is NULL.