tuple(i) repaired to return a true tuple when i is an instance of a
tuple subclass.
Added PyTuple_CheckExact macro.
PySequence_Tuple(): if a tuple-like object isn't exactly a tuple, it's
not safe to return the object as-is -- make a new tuple of it instead.
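To illustrate the pattern these fixes share, here is a minimal sketch (hypothetical helper name; relies on the repaired PySequence_Tuple() behavior described above) of handing back a true tuple only when that is safe:

    #include <Python.h>

    /* Sketch: always hand back a true tuple. */
    static PyObject *
    as_true_tuple(PyObject *o)
    {
        if (PyTuple_CheckExact(o)) {   /* exactly a tuple: safe as-is */
            Py_INCREF(o);
            return o;
        }
        /* a tuple subclass, or any other sequence: build a fresh tuple */
        return PySequence_Tuple(o);
    }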
Given an immutable type M, and an instance I of a subclass of M, the
constructor call M(I) was just returning I as-is; but it should return a
new instance of M. This fixes it for M in {int, long}. Strings, floats
and tuples remain to be done.
Added new macros PyInt_CheckExact and PyLong_CheckExact, to more easily
distinguish between "is" and "is a" (i.e., only an int passes
PyInt_CheckExact, while any subclass of int passes PyInt_Check).
Added private API function _PyLong_Copy.
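The same "is" vs. "is a" distinction, sketched for ints (hypothetical function name; 2.x C API):

    #include <Python.h>

    /* Sketch: M(I) should return a new instance of M, not I itself,
       when I is an instance of a subclass of M. */
    static PyObject *
    as_true_int(PyObject *v)
    {
        if (PyInt_CheckExact(v)) {      /* "is" an int: return as-is */
            Py_INCREF(v);
            return v;
        }
        if (PyInt_Check(v))             /* "is a(n)" int: a subclass */
            return PyInt_FromLong(PyInt_AS_LONG(v));
        PyErr_SetString(PyExc_TypeError, "an int is required");
        return NULL;
    }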
Subtlety on Windows: if we change test_largefile.py to use a file
> 4GB, it still fails. A debug session suggests this is because
fseek(fp, 0, 2) refuses to seek to the end of the file when the file
is > 4GB, because it uses the SetFilePointer() in 32-bit mode.
But it only fails when we seek relative to the end of the file,
because in the other seek modes only calls to fgetpos() and fsetpos()
are made, which use Get/SetFilePointer() in 64-bit mode. Solution:
#ifdef MS_WINDOWS, replace the call to fseek(fp, ...) with a call to
_lseeki64(fileno(fp), ...). Make sure to call fflush(fp) first.
(XXX Could also replace the entire branch with a call to _lseeki64().
Would that be more efficient? Certainly less generated code.)
(XXX This needs more testing. I can't actually test that it works for
files >4GB on my Win98 machine, because the filesystem here won't let
me create files >=4GB at all. Tim should test this on his Win2K
machine.)
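A sketch of the workaround (the helper name and the long long parameter are illustrative; the real change is in the file object's seek code):

    #include <stdio.h>
    #ifdef MS_WINDOWS
    #include <io.h>                     /* _lseeki64 */
    #endif

    /* Seek that stays correct for files > 4GB on Windows by bypassing
       fseek's 32-bit SetFilePointer path. */
    static int
    large_seek(FILE *fp, long long offset, int whence)
    {
    #ifdef MS_WINDOWS
        if (fflush(fp) != 0)            /* sync stdio before bypassing it */
            return -1;
        return _lseeki64(fileno(fp), (__int64)offset, whence) == -1 ? -1 : 0;
    #else
        return fseek(fp, (long)offset, whence);
    #endif
    }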
- use PyModule_Check() instead of PyObject_TypeCheck(), now that we can.
- don't assert that the __dict__ gotten out of a module is always
a dictionary; check its type, and raise an exception if it's not.
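Sketch of the defensive check (the surrounding context and the error message wording are assumptions):

    #include <Python.h>

    /* Check, don't assert: a module's __dict__ may have been replaced. */
    static PyObject *
    get_module_dict_checked(PyObject *mod)
    {
        PyObject *dict = PyObject_GetAttrString(mod, "__dict__");
        if (dict == NULL)
            return NULL;
        if (!PyDict_Check(dict)) {
            PyErr_SetString(PyExc_TypeError,
                            "module __dict__ is not a dictionary");
            Py_DECREF(dict);
            return NULL;
        }
        return dict;
    }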
Generalized operator.indexOf (PySequence_Index) to work with any
iterable object. I'm not sure how that got overlooked before!
Got rid of the internal _PySequence_IterContains, introduced a new
internal _PySequence_IterSearch, and rewrote all the iteration-based
"count of", "index of", and "is the object in it or not?" routines to
just call the new function. I suppose it's slower this way, but the
code duplication was getting depressing.
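A sketch of how a single iteration loop can serve all three operations (the function name and operation codes are assumptions, not the real internal API):

    #include <Python.h>

    #define SEARCH_COUNT    1           /* "count of" */
    #define SEARCH_INDEX    2           /* "index of" */
    #define SEARCH_CONTAINS 3           /* "is it in there?" */

    /* One loop for count/index/contains over any iterable. */
    static int
    iter_search(PyObject *seq, PyObject *obj, int operation)
    {
        int count = 0, i = 0, cmp;
        PyObject *item, *it = PyObject_GetIter(seq);

        if (it == NULL)
            return -1;
        while ((item = PyIter_Next(it)) != NULL) {
            cmp = PyObject_RichCompareBool(obj, item, Py_EQ);
            Py_DECREF(item);
            if (cmp < 0) {
                Py_DECREF(it);
                return -1;
            }
            if (cmp > 0) {
                if (operation == SEARCH_CONTAINS) {
                    Py_DECREF(it);
                    return 1;
                }
                if (operation == SEARCH_INDEX) {
                    Py_DECREF(it);
                    return i;
                }
                count++;                /* SEARCH_COUNT keeps going */
            }
            i++;
        }
        Py_DECREF(it);
        if (PyErr_Occurred())
            return -1;
        if (operation == SEARCH_INDEX) {
            PyErr_SetString(PyExc_ValueError, "x not in sequence");
            return -1;
        }
        return operation == SEARCH_CONTAINS ? 0 : count;
    }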
build_class(): when at least one of the base classes is not a classic
class, and its class (the metaclass) is callable, call the metaclass to
do the deed.
One effect of this is that, when mixing classic and new-style classes
amongst the bases of a class, it doesn't matter whether the first base
class is a classic class or not: you will always get the error
"TypeError: metatype conflict among bases". (Formerly, with a classic
class first, you'd get "TypeError: PyClass_New: base must be a class".)
Another effect is that multiple inheritance from ExtensionClass.Base,
with a classic class as the first class, transfers control to the
ExtensionClass.Base class. This is what we need for SF #443239 (and
also for running Zope under 2.2a4, before ExtensionClass is replaced).
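A sketch of that dispatch (simplified; the argument plumbing and the first-match rule are illustrative, not the literal ceval.c code):

    #include <Python.h>

    /* Pick the first non-classic base; if its metaclass is callable,
       let the metaclass build the class. */
    static PyObject *
    build_class_sketch(PyObject *name, PyObject *bases, PyObject *dict)
    {
        int i, n = PyTuple_GET_SIZE(bases);

        for (i = 0; i < n; i++) {
            PyObject *base = PyTuple_GET_ITEM(bases, i);
            if (!PyClass_Check(base)) {
                PyObject *metaclass = (PyObject *)base->ob_type;
                if (PyCallable_Check(metaclass))
                    /* e.g. type(name, bases, dict), or ExtensionClass */
                    return PyObject_CallFunction(metaclass, "OOO",
                                                 name, bases, dict);
            }
        }
        return PyClass_New(bases, dict, name);  /* all-classic bases */
    }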
corresponding "getitem" operation (sq_item or mp_subscript) is
implemented. I realize that "sequence-ness" and "mapping-ness" are
poorly defined (and the tests may still be wrong for user-defined
instances, which always have both slots filled), but I believe that a
sequence that doesn't support its getitem operation should not be
considered a sequence. All other operations are optional though.
For example, the ZODB BTree tests crashed because PySequence_Check()
returned true for a dictionary! (In 2.2, the dictionary type has a
tp_as_sequence pointer, but the only field filled is sq_contains, so
you can write "if key in dict".) With this fix, all standalone ZODB
tests succeed.
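Sketched, the tightened tests read roughly like this (hypothetical mirrors of the real checks):

    #include <Python.h>

    /* "Sequence-ness" requires sq_item ... */
    static int
    sequence_check_sketch(PyObject *o)
    {
        return o != NULL &&
               o->ob_type->tp_as_sequence != NULL &&
               o->ob_type->tp_as_sequence->sq_item != NULL;
    }

    /* ... and "mapping-ness" requires mp_subscript. */
    static int
    mapping_check_sketch(PyObject *o)
    {
        return o != NULL &&
               o->ob_type->tp_as_mapping != NULL &&
               o->ob_type->tp_as_mapping->mp_subscript != NULL;
    }

Under this test a 2.2 dictionary, whose tp_as_sequence fills only sq_contains, is no longer "a sequence".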
PyType_IsSubtype(a, b): test whether b occurs in a->tp_mro. If a
doesn't have a tp_mro, it's considered a subclass only of itself or of
'object'.
This one fix is enough to prevent the ExtensionClass test suite from
dumping core, but that doesn't say much (it's a rather small test
suite). Also note that for ExtensionClass-defined types, a different
subclass test may be needed. But I haven't checked whether
PyType_IsSubtype() is actually used in situations where this matters
-- probably it doesn't, since we also don't check for classic classes.
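Sketch of the test as described (a hypothetical mirror of the typeobject.c function):

    #include <Python.h>

    static int
    is_subtype_sketch(PyTypeObject *a, PyTypeObject *b)
    {
        PyObject *mro = a->tp_mro;

        if (mro != NULL) {              /* mro is a tuple of types */
            int i, n = PyTuple_GET_SIZE(mro);
            for (i = 0; i < n; i++)
                if ((PyObject *)b == PyTuple_GET_ITEM(mro, i))
                    return 1;
            return 0;
        }
        /* no mro set: a subtype only of itself or of 'object' */
        return a == b || b == &PyBaseObject_Type;
    }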
Curious: the MS docs say stati64 etc are supported even on Win95, but
Win95 doesn't support a filesystem that allows partitions > 2 GB.
test_largefile: This was opening its test file in text mode. I have no
idea how that worked under Win64, but it sure needs binary mode on Win98.
BTW, on Win98 test_largefile runs quickly (under a second).
While not even documented, they were clearly part of the C API;
there's no great difficulty in supporting them, and doing so has the
cool effect of not requiring any changes to ExtensionClass.c.
Nothing in C89 requires that errno ever get set, and it looks like
glibc is already playing that game. New rules:
+ Never use HUGE_VAL. Use the new Py_HUGE_VAL instead.
+ Never believe errno. If overflow is the only thing you're interested in,
use the new Py_OVERFLOWED(x) macro. If you're interested in any libm
errors, use the new Py_SET_ERANGE_IF_OVERFLOW(x) macro, which attempts
to set errno the way C89 said it worked.
Unfortunately, none of these are reliable, but they work on Windows and I
*expect* under glibc too.
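Applied to a libm call, the new rules look roughly like this (the wrapper function is hypothetical; the macros are the ones introduced above):

    #include <Python.h>
    #include <math.h>
    #include <errno.h>

    static PyObject *
    checked_exp(double y)
    {
        double x;

        errno = 0;
        x = exp(y);
        /* fake C89 errno-on-overflow behavior if libm didn't provide it */
        Py_SET_ERANGE_IF_OVERFLOW(x);
        if (errno == ERANGE && Py_OVERFLOWED(x)) {
            PyErr_SetString(PyExc_OverflowError, "math range error");
            return NULL;
        }
        return PyFloat_FromDouble(x);
    }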
I believe this works on Linux (tested both on a system with large file
support and one without it), and it may work on Solaris 2.7.
The changes are twofold:
(1) The configure script now boldly tries to set the two symbols that
are recommended (for Solaris and Linux), and then tries a test
script that does some simple seeking without writing.
(2) The _portable_{fseek,ftell} functions are a little more systematic
in how they try the different large file support options: first
try fseeko/ftello, but only if off_t is large; then try
fseek64/ftell64; then try hacking with fgetpos/fsetpos.
I'm keeping my fingers crossed. The meaning of the
HAVE_LARGEFILE_SUPPORT macro is not at all clear.
I'll see if I can get it to work on Windows as well.
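A sketch of the cascade in (2) (the configure symbols are assumptions about the era's names, and the last branch stands in for the fgetpos/fsetpos hack):

    #include <stdio.h>
    #include <sys/types.h>              /* off_t */

    static int
    portable_fseek_sketch(FILE *fp, off_t offset, int whence)
    {
    #if defined(HAVE_FSEEKO) && SIZEOF_OFF_T >= 8
        return fseeko(fp, offset, whence);   /* off_t is large enough */
    #elif defined(HAVE_FSEEK64)
        return fseek64(fp, offset, whence);
    #else
        /* the real code hacks with fgetpos/fsetpos here */
        return fseek(fp, (long)offset, whence);
    #endif
    }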
Reimplemented PyLong_AsDouble on top of _PyLong_AsScaledDouble; the
fiddling is simply because no caller of PyLong_AsDouble ever
checked for failure (so that's fixing old bugs). PyLong_AsDouble is much
faster for big inputs now too, but that's more of a happy consequence
than a design goal.
Added new private API function _PyLong_AsScaledDouble. It's not used
yet, but will be the foundation for Good Things:
+ Speed PyLong_AsDouble.
+ Give PyLong_AsDouble the ability to detect overflow.
+ Make true division of long/long nearly as accurate as possible (no
spurious infinities or NaNs).
+ Return non-insane results from math.log and math.log10 when passing a
long that can't be approximated by a double better than HUGE_VAL.
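For instance (sketch): with v ~ x * 2**(SHIFT*e) from a scaled conversion, math.log can avoid ever materializing an overflowing double. The exact signature and error convention of _PyLong_AsScaledDouble are assumptions here; SHIFT is the internal long-digit shift from longintrepr.h.

    #include <Python.h>
    #include "longintrepr.h"            /* SHIFT */
    #include <math.h>

    /* log of an arbitrarily huge long, without spurious HUGE_VAL */
    static double
    log_of_long(PyObject *v)
    {
        int e;
        double x = _PyLong_AsScaledDouble(v, &e);

        if (x == -1.0 && PyErr_Occurred())
            return -1.0;                /* error convention assumed */
        return log(x) + (double)e * SHIFT * log(2.0);
    }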
mapping object", in the same sense dict.update(x) requires of x (that x
has a keys() method and a getitem).
Questionable: The other type constructors accept a keyword argument, so I
did that here too (e.g., dictionary(mapping={1:2}) works). But type_call
doesn't pass the keyword args to the tp_new slot (it passes NULL), it only
passes them to the tp_init slot, so getting at them required adding a
tp_init slot to dicts. Looks like that makes the normal case (i.e., no
args at all) a little slower (the time it takes to call dict.tp_init and
have it figure out there's nothing to do).
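Sketch of such a tp_init (not the literal dictobject.c code; PyDict_Update does the update-style work, so it enforces the keys()/getitem requirement):

    #include <Python.h>

    static int
    dict_init_sketch(PyObject *self, PyObject *args, PyObject *kwds)
    {
        static char *kwlist[] = {"mapping", 0};
        PyObject *mapping = NULL;

        if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O:dictionary",
                                         kwlist, &mapping))
            return -1;
        if (mapping != NULL && PyDict_Update(self, mapping) < 0)
            return -1;
        return 0;                       /* no args: nothing to do */
    }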
Implemented the classic-division warning machinery for PEP 238. Changes:
- add a new flag variable Py_DivisionWarningFlag, declared in
pydebug.h, defined in object.c, set in main.c, and used in
{int,long,float,complex}object.c. When this flag is set, the
classic division operator issues a DeprecationWarning message.
- add a new API PyRun_SimpleStringFlags() to match
PyRun_SimpleString(). The main() function calls this so that
commands run with -c can also benefit from -Dnew (a sketch of
calling this API follows the notes below).
- While I was at it, I changed the usage message in main() somewhat:
alphabetized the options, split it into *four* parts to fit in under
512 bytes (not that I still believe this is necessary -- doc strings
elsewhere are much longer), and perhaps most visibly, don't display
the full list of options on each command line error. Instead, the
full list is only displayed when -h is used, and otherwise a brief
reminder of -h is displayed. When -h is used, write to stdout so
that you can do `python -h | more'.
Notes:
- I don't want to use the -W option to control whether the classic
division warning is issued or not, because the machinery to decide
whether to display the warning or not is very expensive (it involves
calling into the warnings.py module). You can use -Werror to turn
the warnings into exceptions though.
- The -Dnew option doesn't select future division for all of the
program -- only for the __main__ module. I don't know if I'll ever
change this -- it would require changes to the .pyc file magic
number to do it right, and a more global notion of compiler flags.
- You can usefully combine -Dwarn and -Dnew: this gives the __main__
module new division, and warns about classic division everywhere
else.
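The sketch promised above: exercising the new flags API from embedding code (CO_FUTURE_DIVISION is assumed to be the compiler flag behind -Dnew):

    #include <Python.h>

    int
    main(void)
    {
        PyCompilerFlags cf;

        Py_Initialize();
        cf.cf_flags = CO_FUTURE_DIVISION;   /* compile with new division */
        /* prints 0.5 under new division, 0 under classic division */
        PyRun_SimpleStringFlags("print 1/2", &cf);
        Py_Finalize();
        return 0;
    }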
Added support for a __dict__ slot for string subtypes.
subtype_dealloc(): properly use _PyObject_GetDictPtr() to get the
(potentially negative) dict offset. Don't copy things into local
variables that are used only once.
type_new(): properly calculate a negative dict offset when tp_itemsize
is nonzero. The __dict__ attribute, if present, is now a calculated
attribute rather than a structure member.
tupledealloc(): only feed the free list when the type is really a
tuple, not a subtype. Otherwise, use PyObject_GC_Del().
_PyTuple_Resize(): disallow using this for tuple subtypes.
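Sketch of the dealloc guard (MAXSAVESIZE and the elided free list are stand-ins for the real tupleobject.c bookkeeping):

    #include <Python.h>

    #define MAXSAVESIZE 20              /* assumed free-list cutoff */

    static void
    tuple_dealloc_sketch(PyTupleObject *op)
    {
        int i, len = PyTuple_GET_SIZE(op);

        PyObject_GC_UnTrack((PyObject *)op);
        for (i = 0; i < len; i++)
            Py_XDECREF(op->ob_item[i]);
        if (PyTuple_CheckExact(op) && 0 < len && len < MAXSAVESIZE) {
            /* push op onto the per-size free list (elided) */
        }
        else {
            /* a subtype, or not cacheable: release memory directly */
            PyObject_GC_Del((PyObject *)op);
        }
    }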