and left shifts. (Thanks to Kalle Svensson for SF patch 849227.)
This addresses most of the remaining semantic changes promised by
PEP 237, except for repr() of a long, which still shows the trailing
'L'. The PEP appears to promise warnings for operations that
changed semantics compared to Python 2.3, but this is not
implemented; we've suffered through enough warnings related to
hex/oct literals and I think it's best to be silent now.
* Add more tests
* Refactor and neaten the code a bit.
* Rename union_update() to update().
* Improve the algorithms (making them closer to sets.py).
* Add a better test for deepcopying.
* Add tests to show the __init__() function works like it does for list
  and tuple.
* Have shallow copies of frozensets return self. Add related test.
* Have frozenset(f) return f if f is already a frozenset (see the example
  below). Add related test.
* Beefed up some existing tests.
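A quick illustration of the copy semantics above (a sketch; the identity
checks follow directly from the entries):

    f = frozenset('abc')
    f.copy() is f        # True: shallow copies return self
    frozenset(f) is f    # True: frozenset() passes a frozenset through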
by the function object or by the method object, the function
object's attribute usually wins. Christian Tismer pointed out that
this is really a mistake, because this only happens for special
methods (like __reduce__) where the method object's version is
really more appropriate than the function's attribute. So from now
on, all method attributes will have precedence over function
attributes with the same name.
* Improve the hash function to increase the chance that distinct sets will
have distinct xor'd hash totals.
* Use PyDict_Merge where possible (it is faster than an equivalent iter/set
pair).
* Don't rebuild dictionaries where the input already has one.
Also SF patch 843455.
This is a critical bugfix.
I'll backport to 2.3 maint, but not beyond that. The bugs this fixes
have been there since weakrefs were introduced.
* Install the unittests, docs, newsitem, include file, and makefile update.
* Exercise the new functions wherever sets.py was being used.
Includes the docs for libfuncs.tex. Separate docs for the types are
forthcoming.
subtype_dealloc(): This left the dying object exposed to gc, so that
if cyclic gc triggered during the weakref callback, gc tried to delete
the dying object a second time. That's a disaster. subtype_dealloc()
had a (I hope!) unique problem here, as every normal dealloc routine
untracks the object (from gc) before fiddling with weakrefs etc. But
subtype_dealloc has obscure technical reasons for re-registering the
dying object with gc (already explained in a large comment block at
the bottom of the function).
The fix amounts to simply refraining from re-registering the dying object
with gc until after the weakref callback (if any) has been called.
This is a critical bug (hard to predict, and causes seemingly random
memory corruption when it occurs). I'll backport it to 2.3 later.
charmaptranslate_makespace() allocated more memory than required for the
next replacement but didn't remember that fact, so the memory size grew
exponentially every time a replacement string was longer than one character.
This fixes SF bug #828737.
key provides C support for the decorate-sort-undecorate pattern.
reverse provides a stable sort of the list with the comparisons reversed
(see the example below).
* Amended the docs to guarantee sort stability.
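For example, both features used together (values are illustrative):

    words = ['banana', 'Apple', 'cherry']
    words.sort(key=str.lower)                 # decorate-sort-undecorate in C
    words.sort(key=str.lower, reverse=True)   # stable, comparisons reversed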
If a length-1 Unicode string was in the freelist and it was
uninitialized or pointed to a very large (magnitude) negative number,
the check
unicode_latin1[unicode->str[0]] == unicode
could cause a segmentation violation, e.g. unicode->str[0] is 0xcbcbcbcb.
Fix this in two ways:
1. Change the guard before unicode_latin1[] to test against 256U. If I
understand correctly, the unsigned long used to store UCS4 on my
box was getting converted to a signed long to compare with the
signed constant 256.
2. Change _PyUnicode_New() to make sure the first element of str is
always initialized to zero. There are several places in the code
where the caller can exit with an error before initializing any
of str, which would leave junk in str[0].
Also, silence a compiler warning on pointer vs. int arithmetic.
Bug fix candidate.
The unicode_resize() family only returns -1 or 0 so simply checking
for != 0 is sufficient, but somewhat unclear. Many Python API
functions return < 0 on error, reserving the right to return 0 or 1 on
success. Change the call sites for consistency with these calls.
file_truncate(): C doesn't define what fflush(fp) does if fp is open
for update, and the preceding I/O operation on fp was input. On Windows,
fflush() actually changes the current file position then. Because
Windows doesn't support ftruncate() directly, this not only caused
Python's file.truncate() to change the file position (contra our docs),
it also caused the file not to change size.
Repaired by getting the initial file position at the start, restoring
it at the end, and tossing all the complicated micro-efficiency checks
trying to avoid "provably unnecessary" seeks. file.truncate() can't
be a frequent operation, and seeking to the current file position has
got to be cheap anyway.
Bugfix candidate.
[ 784825 ] fix obscure crash in descriptor handling
Should be applied to release23-maint and in all likelihood
release22-maint, too.
Certainly doesn't apply to release21-maint.
number. This accounts for the 2 refcount leaks per test_complex run
Michael Hudson discovered (I figured only I would have the stomach to
look for leaks in floating-point code <wink>).
when an encoding error occurs and the callback name is unknown,
i.e. when the callback has to be called. The problem was that
the fact that the callback has already been looked up was only
recorded in a local variable in charmap_encoding_error(), because
charmap_encoding_error() got its own copy of the errorHandler
pointer instead of a pointer to the pointer in
PyUnicode_EncodeCharmap().
Now test_descr only appears to leak two references & I think these
are in fact illusory (it's to do with things getting resurrected in
__del__ methods & it's easy to believe confusion occurs when that
happens <wink>). Woohoo!
Sure looks like it to me! <wink>
When I run the leak2.py script I posted to python-dev, I only see
three reference leaks in all of test_descr. When I run
test_descr.test_main, I still see 46 leaks. This clearly demands
posting a yelp to python-dev :-)
This certainly should be applied to release23-maint, and in all
likelihood release22-maint as well.
The !PyType_Check(base) check snuck in as part of rev 2.215, but was
unrelated to the SF patch that is mentioned in the checkin comment.
The test is currently unnecessary because base is set to the return
value of best_bases(), which returns a type or NULL.
float_pow(): Don't let the platform pow() raise -1.0 to an integer power
anymore; at least glibc gets it wrong in some cases. Note that
math.pow() will continue to deliver wrong (but platform-native) results
in such cases.
tp_free is NULL or PyObject_Del at the end. Because it's a base type
it must call tp_free in its dealloc function, and because it's gc'able
it must not call PyObject_Del.
inherit_slots(): Don't inherit tp_free unless the type and its base
agree about whether they're gc'able. If the type is gc'able and the
base is not, and the base uses the default PyObject_Del for its
tp_free, give the type PyObject_GC_Del for its tp_free (the appropriate
default for a gc'able type).
cPickle.c: The Pickler and Unpickler types claim to be base classes
and gc'able, but their dealloc functions didn't call tp_free.
Repaired that. Also call PyType_Ready() on these typeobjects, so
that the correct (PyObject_GC_Del) default memory-freeing function
gets plugged into these types' tp_free slots.
Reverted a Py2.3b1 change to iterator in subclasses of list and tuple.
They had been changed to use __getitem__ whenever it had been overridden
in the subclass.
This caused some usability and performance problems. Also, it was
inconsistent with the rest of Python where many container methods
access the underlying object directly without first checking for
an overridden getter. Users needing a change in iterator behavior
should override it directly.
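For instance, under the restored behavior, iteration over a list subclass
ignores an overridden __getitem__ (Masked is a made-up class):

    class Masked(list):
        def __getitem__(self, i):
            return 'spam'

    m = Masked([1, 2, 3])
    m[0]           # 'spam': indexing honors the override
    list(iter(m))  # [1, 2, 3]: iteration reads the underlying list directly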
* Increase dictionary growth rate resulting in more sparse dictionaries,
fewer lookup collisions, increased memory use, and better cache
performance. For dicts with over 50k entries, keep the current
growth rate in case an application is suffering from tight memory
constraints.
* Set the most common case (no resize) to fall-through the test.
Some version of gcc in the "RTEMS port running on the Coldfire (m5200)
processor" generates bad code for a loop in long_from_binary_base(),
comparing the wrong half of an int to a short. The patch changes the
decl of the short temp to be an int temp instead. This "simplifies"
the code enough that gcc no longer blows it.
As a side issue on this bug, it was noted that list and tuple iterators
used macros to directly access containers and would not recognize
__getitem__ overrides. If the method is overridden, the patch returns
a generic sequence iterator which calls the __getitem__ method; otherwise,
it returns a high-speed custom iterator with direct access to container elements.
raising an exception. This is consistent with calling the
constructors for the other builtin types -- called without argument
they all return the false value of that type. (SF patch #724135)
Thanks to Alex Martelli.
I'm finding some pretty baffling output, like reprs consisting entirely
of three left parens. At least this will let us know what type the object
is (it's not str -- there's no quote character in the repr).
New tool combinerefs.py, to combine the two output blocks produced via
PYTHONDUMPREFS.
New pvt API function _Py_PrintReferenceAddresses(): Prints only the
addresses and refcnts of the live objects. This is always safe to call,
because it has no dependence on Python's C API.
Py_Finalize(): If envar PYTHONDUMPREFS is set, call (the new)
_Py_PrintReferenceAddresses() right before dumping final pymalloc stats.
We can't print the reprs of the objects here because too much of the
interpreter has been shut down. You need to correlate the addresses
displayed here with the object reprs printed by the earlier
PYTHONDUMPREFS call to _Py_PrintReferences().
New functions:
unsigned long PyInt_AsUnsignedLongMask(PyObject *);
unsigned PY_LONG_LONG PyInt_AsUnsignedLongLongMask(PyObject *);
unsigned long PyLong_AsUnsignedLongMask(PyObject *);
unsigned PY_LONG_LONG PyLong_AsUnsignedLongLongMask(PyObject *);
New and changed format codes:
b unsigned char 0..UCHAR_MAX
B unsigned char none **
h unsigned short 0..USHRT_MAX
H unsigned short none **
i int INT_MIN..INT_MAX
I * unsigned int 0..UINT_MAX
l long LONG_MIN..LONG_MAX
k * unsigned long none
L long long LLONG_MIN..LLONG_MAX
K * unsigned long long none
Notes:
* New format codes.
** Changed from previous "range-and-a-half" to "none"; the
range-and-a-half checking wasn't particularly useful.
New test test_getargs2.py, to verify all this.
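As the "Mask" in the names suggests, these conversions do no range
checking; the value is reduced modulo 2**(8 * sizeof(type)). A
pure-Python illustration, assuming a 32-bit unsigned long:

    (-1) & 0xFFFFFFFFL   # 4294967295L: what PyInt_AsUnsignedLongMask yields for -1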
even farther down, to just before the call to
_PyObject_DebugMallocStats(). This required the following changes:
- pystate.c, PyThreadState_GetDict(): changed so that it no longer
  raises an exception or issues a fatal error when no current thread
  state is available; it simply returns NULL (and never raises an
  exception).
- object.c, Py_ReprEnter(): when PyThreadState_GetDict() returns NULL,
don't raise an exception but return 0. This means that when
printing a container that's recursive, printing will go on and on
and on. But that shouldn't happen in the case we care about (see
first bullet).
- Updated Misc/NEWS and Doc/api/init.tex to reflect changes to
PyThreadState_GetDict() definition.
interpreted by slicing, so negative values count from the end of the
list. This was the only place where such an interpretation was not
placed on a list index.
* Doc - add documentation noting when the functions were added
* UserString
* string object methods
* string module functions
'chars' is used for the last parameter everywhere.
These changes will be backported, since part of the changes
have already been made, but they were inconsistent.
If a class was defined inside a function, used a static or class
method, and used super() inside the method body, it would be caught in
an uncollectable cycle. (Simplified version: The static/class method
object would point to a function object with a closure that referred
to the class.)
Bugfix candidate.
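A sketch of the pattern that used to end up in an uncollectable cycle
(names are made up):

    def make_class():
        class C(object):
            def whoami(cls):
                return super(C, cls)   # the closure cell refers back to C
            whoami = classmethod(whoami)
        return C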
Arranged that all the objects exposed by __builtin__ appear in the list
of all objects. I basically peed away two days tracking down a mystery
leak in sys.gettotalrefcount() in a ZODB app (== tons of code), because
the object leaking the references didn't appear in the sys.getobjects(0)
list. The object happened to be False. Now False is in the list, along
with other popular & previously missing leak candidates (like None).
Alas, we still don't have a choke point covering *all* Python objects,
so the list of all objects may still be incomplete.
Added a new private function _Py_AddToAllObjects() that simply inserts
an object at the front of the doubly-linked list of all objects.
Changed PyType_Ready() (the
closest thing we've got to a choke point for type objects) to call
that.
a doubly-linked list, exposed by sys.getobjects(). Unfortunately, it's not
really all live objects, and it seems my fate to bump into programs where
sys.gettotalrefcount() keeps going up but where the reference leaks aren't
accounted for by anything in the list of all objects.
This patch helps a little: if COUNT_ALLOCS is also defined, from now on
type objects will also appear in this list, provided at least one object
of a type has been allocated.
constructor, when passed a single complex argument, returns the
argument unchanged. This should be done only for the complex base
class; a complex subclass should of course cast the value to the
subclass in this case.
The fix also revealed a segfault in complex_getnewargs(): the argument
for the Py_BuildValue() format code "D" is the *address* of a
Py_complex struct, not the value. (This is corroborated by the API
documentation.)
I expect this needs to be backported to 2.2.3.
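The constructor behavior described at the top of this entry, sketched
(MyComplex is a made-up subclass):

    class MyComplex(complex):
        pass

    c = 1 + 2j
    complex(c) is c                  # True: the base class passes it through
    type(MyComplex(c)) is MyComplex  # True: the subclass casts the value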
This still falls back to helpers in copy_reg for:
- pickle protocols < 2
- calculating the list of slot names (done only once per class)
- the __newobj__ function (which is used as a token but never called)
the PyInt_AsLong function, and this returns a long, the value is first
retrieved with PyLong_AsLong, but afterwards overwritten by a call to
PyInt_AS_LONG.
Fixes SF #690253.
Don't access tp_descr_{get,set} of a descriptor without checking the
flag bits of the descriptor's type. While we know that the main type
(the type of the object whose attribute is being accessed) has all the
right flag bits (or else PyObject_Generic{Get,Set}Attr wouldn't be
called), we don't know that for its class attributes!
Will backport to 2.2.
using super() for an instance in a metaclass situation. Because the
class was a metaclass, the instance was a class, and hence the
PyType_Check() branch was taken. But this branch didn't apply. Make
it so that if this branch doesn't apply, the other branch is still
tried. All tests pass.
the optional proto 2 slot state.
pickle.py, load_build(): CAUTION: Noted that cPickle's
load_build and pickle's load_build really don't do the same
things with the state, and didn't before this patch either.
cPickle never tries to do .update(), and has no backoff if
instance.__dict__ can't be retrieved. There are no tests
that can tell the difference, and part of what cPickle's
load_build() did looked accidental to me, so I don't know
what the true intent is here.
pickletester.py, test_pickle.py: Got rid of the hack for
exempting cPickle from running some of the proto 2 tests.
dictobject.c, PyDict_Next(): documented intended use.
This changes the default __new__ to refuse arguments iff tp_init is the
default __init__ implementation -- thus making it a TypeError when you
try to pass arguments to a constructor if the class doesn't override at
least __init__ or __new__.
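In effect (hypothetical classes):

    class Plain(object):
        pass

    Plain(42)       # now a TypeError: neither __init__ nor __new__ overridden

    class Takes(object):
        def __init__(self, x):
            self.x = x

    Takes(42)       # still fine: __init__ is overridden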
folded; this will change in Python 2.4. On a 32-bit machine, this
happens for 0x80000000 through 0xffffffff, and for octal constants in
the same value range. No warning is issued if an explicit base is
given, *or* if the string contains a sign (since in those cases no
sign folding ever happens).
descr_check(); it wasn't useful. Change the type argument of the
various _get() methods to PyObject * because the call signature of
tp_descr_get doesn't guarantee its type.
when Python code calls a descriptor's __get__ method. It should
translate None to NULL in both argument positions, and insist that at
least one of the argument positions is not NULL after this
transformation.
For the case where the current globals match the previous frame's
globals, eliminates three tests in two if statements. For the case
where we just get __builtins__ from a module, eliminate a couple of
tests.
wasn't used outside the assert (and hence caused a compiler warning
about an unused variable in NDEBUG mode). The assert wasn't very
useful any more.
_PyLong_NumBits(): moved the calculation of ndigits after asserting
that v != NULL.
Assorted code cleanups; e.g., sizeof(char) is 1 by definition, so there's
no need to do things like multiply by sizeof(char) in hairy malloc
arguments. Fixed an undetected-overflow bug in readline_file().
longobject.c: Fixed a really stupid bug in the new _PyLong_NumBits.
pickle.py: Fixed stupid bug in save_long(): When proto is 2, it
wrote LONG1 or LONG4, but forgot to return then -- it went on to
append the proto 1 LONG opcode too.
Fixed equally stupid cancelling bugs in load_long1() and
load_long4(): they *returned* the unpickled long instead of pushing
it on the stack. The return values were ignored. Tests passed
before only because save_long() pickled the long twice.
Fixed bugs in encode_long().
Noted that decode_long() is quadratic-time despite our hopes,
because long(string, 16) is still quadratic-time in len(string).
It's hex() that's linear-time. I don't know a way to make decode_long()
linear-time in Python, short of maybe transforming the 256's-complement
bytes into marshal's funky internal format, and letting marshal decode
that. It would be more valuable to make long(string, 16) linear time.
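For reference, the helpers use a little-endian 256's-complement byte
string (examples assume the Python 2 pickle module):

    from pickle import encode_long, decode_long
    encode_long(255)         # '\xff\x00'
    encode_long(-1)          # '\xff'
    decode_long('\xff\x00')  # 255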
pickletester.py: Added a global "protocols" vector so tests can try
all the protocols in a sane way. Changed test_ints() and test_unicode()
to do so. Added a new test_long(), but the tail end of it is disabled
because it "takes forever" under pickle.py (but runs very quickly under
cPickle: cPickle proto 2 for longs is linear-time).
__module__ is the string name of the module the function was defined
in, just like __module__ of classes. In some cases, particularly for
C functions, the __module__ may be None.
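For example:

    def f():
        pass
    f.__module__    # '__main__' when defined in the main script
    # For a function written in C, __module__ may be None if the
    # defining module isn't recorded.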
Change PyCFunction_New() from a function to a macro, but keep an
unused copy of the function around so that we don't change the binary
API.
Change pickle's save_global() to use whichmodule() if __module__ is
None, but add the __module__ logic to whichmodule() since it might be
used outside of pickle.
error handlers in the Unicode codecs: Negative
positions are treated as being relative to the end of
the input and out of bounds positions result in an
IndexError.
Also update the PEP and include an explanation of
this in the documentation for codecs.register_error.
Fixes a small bug in iconv_codecs: if the position
from the callback is negative *add* it to the size
instead of subtracting it.
From SF patch #677429.
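A minimal encoding error handler showing the contract described above
(the handler name is made up):

    import codecs

    def replace_and_continue(exc):
        # Return the replacement text and the position at which to resume.
        # A negative position now counts from the end of the input;
        # out-of-bounds positions raise IndexError.
        return (u'?', exc.end)

    codecs.register_error('replace-and-continue', replace_and_continue)
    u'ab\xffcd'.encode('ascii', 'replace-and-continue')   # 'ab?cd'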
needs of pickling longs. Backed off to a definition that's much easier
to understand. The pickler will have to work a little harder, but other
uses are more likely to be correct <0.5 wink>.
_PyLong_Sign(): New teensy function to characterize a long, as to <0, ==0,
or >0.
types. The special handling for these can now be removed from save_newobj().
Add some testing for this.
Also add support for setting the 'fast' flag on the Python Pickler class,
which suppresses use of the memo.
start for the C implementation of new pickle LONG1 and LONG4 opcodes (the
linear-time way to pickle a long is to call _PyLong_AsByteArray, but
the caller has no idea how big an array to allocate, and correct
calculation is a bit subtle).
was broken because new-in-2.3 code added a tp_as_mapping slot to tuples.
Repaired that.
Added basic docs to check_recursion().
The code that intended to exempt tuples and strings was also broken here,
and in 2.2: these should use PyXYZ_CheckExact(), not PyXYZ_Check() -- we
can't know whether subclass instances are immutable. This part (and this
part alone) is a bugfix candidate.
Christian Tismer pointed out the high cost of the loop overhead and
function call overhead for 'c' * n where n is large. Accordingly,
the new code only makes lg2(n) loops.
Interestingly, 'c' * 1000 * 1000 ran a bit faster with old code. At some
point, the loop and function call overhead became cheaper than invalidating
the cache with lengthy memcpys. But for more typical sizes of n, the new
code runs much faster and for larger values of n it runs only a bit slower.
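The idea in Python (the real code is C; this is only a sketch, for n >= 1):

    def repeat_char(c, n):
        # Double the block until another doubling would overshoot n,
        # then top it up: about lg2(n) concatenations in total.
        block, filled = c, 1
        while filled * 2 <= n:
            block += block
            filled *= 2
        return block + block[:n - filled]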
Refactor code in PyCFunction_Call giving a modest (tiny) speed boost,
a slight improvement in semantics (now detects invalid flag combinations),
and (arguably) improved clarity (making it blindingly clear which flag
combinations are allowed). All this comes at a cost of a few lines of
code duplication.
* Folded test for METH_KEYWORDS into the switch/case.
* Deferred testing for an empty dictionary until when and where needed.
* Make a similar deferral for filling the "size" variable.
* Inverted the dictionary test so that the common case falls through
instead of making a jump.
645404). I'm not 100% sure this is the right fix, so I'll keep the
bug report open for Samuele, but this fixes the index error and passes
the test suite (and I can't see why it *shouldn't* be the right fix
:-).
Initialize the small integers and __builtins__ in startup.
This removes some if conditions.
Change XDECREF to DECREF for values which shouldn't be NULL.
and sq_inplace_repeat. This fixes a number of corner case bugs (see #624807).
Consolidate the int and long sequence repeat code. Before the change, integers
checked for integer overflow but longs did not.
Obtain cleaner coding and a system-wide performance boost by using
the fast, pre-parsed PyArg_UnpackTuple function instead of the
PyArg_ParseTuple function, which is driven by a format string.
[ 643835 ] Set Next Statement for Python debuggers
with a few tweaks by me: adding an unsigned or two, mentioning that
not all jumps are allowed in the doc for pdb, adding a NEWS item and
a note to whatsnew, and AuCTeX doing something cosmetic to libpdb.tex.
[#521782] unreliable file.read() error handling
* Objects/fileobject.c
(file_read): Clear errors before leaving the loop in all situations,
and also check if some data was read before exiting the loop with an
EWOULDBLOCK exception.
* Doc/lib/libstdtypes.tex
* Objects/fileobject.c
Document that a read() operation can sometimes return less data than
the user asked for, when running in non-blocking mode.
* Misc/NEWS
Document the fix.
containing class objects) are allowed as the second argument.
This makes issubclass() more similar to isinstance() where recursive
tuples are allowed too.
supported as the second argument. This has the same meaning as
for isinstance(), i.e. issubclass(X, (A, B)) is equivalent
to issubclass(X, A) or issubclass(X, B). Compared to isinstance(),
this patch does not search the tuple recursively for classes, i.e.
any entry in the tuple that is not a class will result in a
TypeError.
This closes SF patch #649608.
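Behavior illustrated (class names are made up):

    class A(object): pass
    class B(object): pass
    class X(A): pass

    issubclass(X, (A, B))       # True: issubclass(X, A) or issubclass(X, B)
    issubclass(X, (B, 'oops'))  # TypeError: non-class entry in the tuple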