The MS compiler doesn't call it 'long long', it uses __int64,
so a new #define, LONG_LONG, has been added and all occurrences
of 'long long' are replaced with it.
Previously, this said "unsubscriptable object"; in 1.5.1, the reverse
problem existed, where None[''] would complain about a non-integer
index. This fix does the right thing in all cases (for get, set and
del item).
before calling it. This check was there when the objects were of the
same type *before* coercion, but not if they initially differed but
became the same *after* coercion.
Sparc Solaris 2.6 (fully patched!) that I don't want to dig into, but
which I suspect is a bug in the multithreaded malloc library that only
shows up when run on a multiprocessor. (The program wasn't using
threads, it was just using the multithreaded C library.)
faster (using PyList_GetSlice()). Also added a test for a NULL
argument, as with PySequence_Tuple(). (Hmm... Better names for these
two would be PyList_FromSequence() and PyTuple_FromSequence(). Oh well.)
"indefinite length" sequences. These should still have a length, but
the length is only used as a hint -- the actual length of the sequence
is determined by the item that raises IndexError, which may be either
smaller or larger than what len() returns. (This is a novelty; map(),
filter() and reduce() only allow the actual length to be larger than
what len() returns, not shorter. I'll fix that shortly.)
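For example, the "length is only a hint" rule means something like this
now works (the class name here is made up for illustration):

    class Countdown:
        def __len__(self):
            return 100              # only a hint
        def __getitem__(self, i):
            if i >= 3:
                raise IndexError    # the real end of the sequence
            return 3 - i

    tuple(Countdown())              # -> (3, 2, 1), even though len() says 100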
conversions. Formerly, for example, int('-') would return 0 instead
of raising ValueError, and int(' 0') would raise ValueError
(complaining about a null byte!) instead of returning 0.
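A quick sketch of the corrected behavior (exact messages may differ):

    int(' 0')    # now returns 0 instead of raising ValueError
    int('-')     # now raises ValueError instead of quietly returning 0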
+ Took the "list" argument out of the other functions that no longer need
it. This speeds things up a little more.
+ Small comment changes in accord with that.
+ Exploited the now-safe ability to cache values in the partitioning loop.
Makes no timing difference on my flavor of Pentium, but this machine ran out
of registers 12 iterations ago. It should yield a small speedup on a RISC
machine, and not hurt in any case.
instead of testing whether the list changed size after each
comparison, temporarily set the type of the list to an immutable list
type. This should allow continued use of the list for legitimate
purposes but disallows all operations that can change it in any way.
(Changes to the internals of list items are not caught, of course;
that's not possible to detect, and it's not necessary to protect the
sort code, either.)
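At the Python level the visible effect is roughly this (names made up;
the mutation now fails instead of corrupting the sort):

    L = [3, 1, 2]
    def evil_cmp(a, b):
        del L[0]            # any attempt to change L during the sort is rejected
        return cmp(a, b)
    L.sort(evil_cmp)        # the del above raises an error instead of succeeding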
not in restricted mode.
__dict__ can be set to any dictionary; the cl_getattr, cl_setattr and
cl_delattr slots are refreshed.
__name__ can be set to any string.
__bases__ can be set to a tuple of classes, provided they are not
subclasses of the class whose attribute is being assigned.
__getattr__, __setattr__ and __delattr__ can be set to anything, or
deleted; the appropriate slot (cl_getattr, cl_setattr, cl_delattr) is
refreshed.
(Note: __name__ really doesn't need to be a special attribute, but
that would be more work.)
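A short sketch of the newly assignable class attributes (classic
classes; the class names are made up):

    class Base: pass
    class C: pass

    C.__name__  = 'Renamed'     # any string
    C.__bases__ = (Base,)       # a tuple of classes, none a subclass of C
    C.__dict__  = {'x': 1}      # any dictionary; the cl_*attr slots are refreshed
    C.__getattr__ = lambda self, name: name   # refreshes the cl_getattr slot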
From: "Tim Peters" <tim_one@email.msn.com>
To: "Guido van Rossum" <guido@CNRI.Reston.VA.US>
Date: Sat, 23 May 1998 21:45:53 -0400
Guido, the overflow checking in PyLong_AsLong is off a little:
1) If the C in use sign-extends right shifts on signed longs, there's a
spurious overflow error when converting the most-negative int:
Python 1.5.1 (#0, Apr 13 1998, 20:22:04) [MSC 32 bit (Intel)] on win32
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> x = -1L << 31
>>> x
-2147483648L
>>> int(x)
Traceback (innermost last):
File "<stdin>", line 1, in ?
OverflowError: long int too long to convert
>>>
2) If C does not sign-extend, some genuine overflows won't be caught.
The attached should repair both, and, because I installed a new disk and a C
compiler today, it's even been compiled this time <wink>.
Python 1.5.1 (#0, May 23 1998, 20:24:58) [MSC 32 bit (Intel)] on win32
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> x = -1L << 31
>>> x
-2147483648L
>>> int(x)
-2147483648
>>> int(-x)
Traceback (innermost last):
File "<stdin>", line 1, in ?
OverflowError: long int too long to convert
>>> int(-x-1)
2147483647
>>> int(x-1)
Traceback (innermost last):
File "<stdin>", line 1, in ?
OverflowError: long int too long to convert
>>>
end-casing-ly y'rs - tim
Make sure that no tp_as_number->nb_<whatever> function is called
without checking for a NULL pointer. Marc-Andre Lemburg will love it!
(Except that he's just rewritten all this code for a different
approach to coercions ;-( )
programming style.
Recoded many routines to incorporate better error checking, and/or
better versions of the same function found elsewhere
(e.g. bltinmodule.c or ceval.c). In particular,
PyNumber_{Int,Long,Float}() now convert from strings, just like the
built-in functions int(), long() and float().
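In other words, the abstract layer now mirrors what the built-ins
already did at the Python level:

    int('42')                      # -> 42
    long('12345678901234567890')   # -> 12345678901234567890L
    float('1.5')                   # -> 1.5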
Sequences and mappings are now safe to have NULL function pointers
anywhere in their tp_as_sequence or tp_as_mapping fields. (A few
places in other files need to be checked in too.)
Renamed PySequence_In() to PySequence_Contains().
clear_carefully() used to do in import.c. Differences: leave only
__builtins__ alone in the 2nd pass; and don't clear the dictionary (on
the theory that as long as there are references left to the
dictionary, those might be destructors that might expect __builtins__
to be alive when they run; and __builtins__ can't normally be part of
a cycle).
PyNumber_Coerce() except that when the coercion can't be done and no
other exceptions happen, it returns 1 instead of raising an
exception.
Use this function in PyObject_Compare() to avoid raising an exception
simply because two objects with numeric behavior can't be coerced to a
common type; instead, proceed with the non-numeric default comparison.
Note that this is a somewhat questionable practice -- comparisons for
numeric objects shouldn't default to random behavior like this, but it
is required for backward compatibility. (Case in point, it broke
comparison of kjDict objects to integers in Aaron Watters' kjbuckets
extension.) A correct fix (for Python 2.0) should involve a different
definition of comparison altogether.
sys.stdin.readline(), you get a fatal error (no current thread). This
is because there was a call to PyErr_CheckSignals() while there was no
current thread. I wonder how many more of these we'll find... I'd better
go hunting for PyErr_CheckSignals() now...
in libmath.a so they are available to mathmodule.so (in case it is
shared). While this still gets triggered on Solaris 2.x, this appears
to be harmless there.
__getitem__(). This method never raises an exception; if the key is
not in the dictionary, the second (optional) argument is returned. If
the second argument is not provided and the key is missing, None is
returned.
mapp_methods: added "get" method.
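For example:

    d = {'spam': 1}
    d.get('spam')       # -> 1
    d.get('eggs')       # -> None  (no KeyError)
    d.get('eggs', 0)    # -> 0     (explicit default)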
arbitrary nested parens in a %(...)X style format.
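For example, a mapping key may now itself contain balanced parens (the
key name below is made up):

    '%(thing(s))s' % {'thing(s)': 'works'}    # -> 'works'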
Also folded two lines and added more detail to the error message for
an unsupported format character.
former lets you give an instance a set of new instance vars. The
latter lets you give it a new class. Both are typechecked and
disallowed in restricted mode.
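A minimal sketch (class and variable names are illustrative):

    class A: pass
    class B: pass

    a = A()
    a.__dict__  = {'x': 1}    # give the instance a fresh set of instance vars
    a.__class__ = B           # give it a new class; must really be a class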
For classes, the check for read-only special attributes is tightened
so that only assignments to __dict__, __bases__, __name__,
__getattr__, __setattr__, and __delattr__ are disallowed (these could
be made to work as well, but I don't know if that's useful -- let's
see first whether mucking with instances will help).
from the interned table. There are references in hard-to-find static
variables all over the interpreter, and it's not worth trying to get
rid of all those; but "uninterning" isn't fair either and may cause
subtle failures later -- so we have to keep them in the interned
table.
Also get rid of no-longer-needed insert of None in interned dict.
no valid directory is passed in. This prevents __del__ from failing
when it is invoked after __builtins__ has already been discarded.
Also add PyFrame_Fini() to discard the cache of frames.
In _Py_PrintReferences(), no longer suppress once-referenced string.
Add Py_Malloc and friends and PyMem_Malloc and friends (malloc
wrappers for third parties).