-- replace them with slightly faster PyObject_Call(o,a,NULL). (The
difference is that the latter requires a to be a tuple; the former
allows other values and wraps them in a tuple if necessary; it
involves two more levels of C function calls to accomplish all that.)
rigorous instead of hoping for testing not to turn up counterexamples.
Call me heretical, but despite that I'm wholly confident in the proof,
and have done it two different ways now, I still put more faith in
testing ...
[ 587993 ] SET_LINENO killer
Remove SET_LINENO. Tracing is now supported by inspecting co_lnotab.
Many sundry changes to document and adapt to this change.
ah*bh and al*bl. This is much easier than explaining why that's true
for (ah+al)*(bh+bl), and follows directly from the simple part of the
(ah+al)*(bh+bl) explanation.
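For reference, here's a toy Python sketch of the splitting these notes
talk about (k_mul() does the analogous thing on long digit vectors in C;
the function names, the cutoff, and the nbits() helper are made up for
illustration, and negative inputs aren't handled):

    def nbits(n):
        b = 0
        while n:
            n >>= 1
            b += 1
        return b

    def kmul(a, b):
        CUTOFF = 1 << 64                   # arbitrary for the sketch
        if a < CUTOFF or b < CUTOFF:
            return a * b                   # "grade school" fallback
        shift = max(nbits(a), nbits(b)) // 2
        ah, al = a >> shift, a & ((1 << shift) - 1)   # a == ah*2**shift + al
        bh, bl = b >> shift, b & ((1 << shift) - 1)   # b == bh*2**shift + bl
        t1 = kmul(ah, bh)                  # ah*bh
        t2 = kmul(al, bl)                  # al*bl
        t3 = kmul(ah + al, bh + bl)        # (ah+al)*(bh+bl)
        # t3 - t1 - t2 is the cross term ah*bl + al*bh
        return (t1 << (2*shift)) + ((t3 - t1 - t2) << shift) + t2

    assert kmul(2**70 + 12345, 2**71 + 67) == (2**70 + 12345) * (2**71 + 67)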
space is no longer needed, so removed the code. It was only possible when
a degenerate (ah->ob_size == 0) split happened, but after that fix went
in I added k_lopsided_mul(), which saves the body of k_mul() from seeing
a degenerate split. So this removes code, and adds a honking long comment
block explaining why spilling out of bounds isn't possible anymore. Note:
if we end up spilling out of bounds anyway <wink>, an assert in v_iadd()
is certain to trigger.
(rev. 2.86). The other type is only disqualified from sq_repeat when
it has the CHECKTYPES flag. This means that for extension types that
only support "old-style" numeric ops, such as Zope 2's ExtensionClass,
sq_repeat still trumps nb_multiply.
k_mul() when inputs have vastly different sizes, and a little more
efficient when they're close to a factor of 2 out of whack.
I consider this done now, although I'll set up some more correctness
tests to run overnight.
cases, overflow the allocated result object by 1 bit. In such cases,
it would have been brought back into range if we subtracted al*bl and
ah*bh from it first, but I don't want to do that because it hurts cache
behavior. Instead we just ignore the excess bit when it appears -- in
effect, this is forcing unsigned mod BASE**(asize + bsize) arithmetic
in a case where that doesn't happen all by itself.
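A toy illustration of that modular-arithmetic argument (BASE and the
sizes are made up for the toy; the real BASE was 2**15 at the time):

    BASE = 10
    asize = bsize = 2
    limit = BASE ** (asize + bsize)    # the result buffer holds values < limit
    true_product = 9876                # the final answer is guaranteed < limit
    interim = true_product + limit     # pretend an interim sum spilled over
    assert interim % limit == true_product   # dropping the excess == mod reduction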
1. You can now have __dict__ and/or __weakref__ in your __slots__
(before only __weakref__ was supported). This is treated
differently than before: it merely sets a flag that the object
should support the corresponding magic (see the sketch after this list).
2. Dynamic types now always have descriptors __dict__ and __weakref__
thrust upon them. If the type in fact does not support one or the
other, that descriptor's __get__ method will raise AttributeError.
3. (This is the reason for all this; it fixes SF bug 575229, reported
by Cesar Douady.) Given this code:
class A(object): __slots__ = []
class B(object): pass
class C(A, B): __slots__ = []
the class object for C was broken; its size was less than that of
B, and some descriptors on B could cause a segfault. C now
correctly inherits __weakref__ and __dict__ from B, even though A
is the "primary" base (C.__base__ is A).
4. Some code cleanup, and a few comments added.
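A minimal sketch of point 1 (the class and attribute names are made up):

    class C(object):
        __slots__ = ['a', '__dict__']   # '__dict__' here just requests the magic

    c = C()
    c.a = 1          # the declared slot
    c.b = 2          # works because the instance now has a real __dict__
    assert c.__dict__ == {'b': 2}       # slot values don't live in the dict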
algorithm. MSVC 6 wasn't impressed <wink>.
Something odd: the x_mul algorithm appears to get substantially worse
than quadratic time as the inputs grow larger:
bits in each input x_mul time k_mul time
------------------ ---------- ----------
15360 0.01 0.00
30720 0.04 0.01
61440 0.16 0.04
122880 0.64 0.14
245760 2.56 0.40
491520 10.76 1.23
983040 71.28 3.69
1966080 459.31 11.07
That is, x_mul is perfectly quadratic-time until a little burp at
2.56->10.76, and after that goes to hell in a hurry. Under Karatsuba,
doubling the input size "should take" 3 times longer instead of 4, and
that remains the case throughout this range. I conclude that my "be nice
to the cache" reworkings of k_mul() are paying off.
correct now, so added some final comments, did some cleanup, and enabled
it for all long-int multiplies. The KARAT envar no longer matters,
although I left some #if 0'ed code in there for my own use (temporary).
k_mul() is still much slower than x_mul() if the inputs have very
different sizes, and that still needs to be addressed.
(it's possible, but should be harmless -- this requires more thought,
and allocating enough space in advance to prevent it requires exactly
as much thought, to know exactly how much that is -- the end result
certainly fits in the allocated space -- hmm, but that's really all
the thought it needs! borrows/carries out of the high digits really
are harmless).
k_mul(): This didn't allocate enough result space when one input had
more than twice as many bits as the other. This was partly hidden by the
fact that x_mul() didn't normalize its result.
The Karatsuba recurrence is pretty much hosed if the inputs aren't
roughly the same size. If one has at least twice as many bits as the
other, we get a degenerate case where the "high half" of the smaller
input is 0. Added a special case for that, for speed; but even with that
help, this can still be much slower than the "grade school" method.
It seems to take a really wild imbalance to trigger that; e.g., a
2**22-bit input times a 1000-bit input on my box runs about twice as
slowly under k_mul as under x_mul. This still needs to be addressed.
I'm also not sure that allocating a->ob_size + b->ob_size digits is
enough, given that this is computing k = (ah+al)*(bh+bl) instead of
k = (ah-al)*(bl-bh); i.e., it's certainly enough for the final result,
but it's vaguely possible that adding in the "artificially" large k may
overflow that temporarily. If so, an assert will trigger in the debug
build, but we'll probably compute the right result anyway(!).
addition and subtraction. Reworked the tail end of k_mul() to use them.
This saves oodles of one-shot longobject allocations (this is a triply-
recursive routine, so saving one allocation in the body saves 3**n
allocations at depth n; we actually save 2 allocations in the body).
SF 560379: Karatsuba multiplication.
Lots of things were changed from that. This needs a lot more testing,
for correctness and speed, the latter especially when bit lengths are
unbalanced. For now, the Karatsuba code gets invoked if and only if
envar KARAT exists.
currently return inconsistent results for ints and longs; in
particular: hex/oct/%u/%o/%x/%X of negative short ints, and x<<n that
either loses bits or changes sign. (No warnings for repr() of a long,
though that will also change to lose the trailing 'L' eventually.)
This introduces some warnings in the test suite; I'll take care of
those later.
This is friendlier for caches.
2. Cut MIN_GALLOP to 7, but added a per-sort min_gallop vrbl that adapts
the "get into galloping mode" threshold higher when galloping isn't
paying, and lower when it is. There's no known case where this hurts.
It's (of course) neutral for /sort, \sort and =sort. It also happens
to be neutral for !sort. It cuts a tiny # of compares in 3sort and +sort.
For *sort, it reduces the # of compares to better than what this used to
do when MIN_GALLOP was hardcoded to 10 (it did about 0.1% more *sort
compares before, but given how close we are to the limit, this is "a
lot"!). %sort used to do about 1.5% more compares, and ~sort about
3.6% more. Here are exact counts:
 i     *sort    3sort    +sort    %sort    ~sort    !sort
15    449235    33019    33016    51328   188720    65534  before
      448885    33016    33007    50426   182083    65534  after
       0.08%    0.01%    0.03%    1.79%    3.65%    0.00%  %ch from after
16    963714    65824    65809   103409   377634   131070
      962991    65821    65808   101667   364341   131070
       0.08%    0.00%    0.00%    1.71%    3.65%    0.00%
17   2059092   131413   131362   209130   755476   262142
     2057533   131410   131361   206193   728871   262142
       0.08%    0.00%    0.00%    1.42%    3.65%    0.00%
18   4380687   262440   262460   421998  1511174   524286
     4377402   262437   262459   416347  1457945   524286
       0.08%    0.00%    0.00%    1.36%    3.65%    0.00%
19   9285709   524581   524634   848590  3022584  1048574
     9278734   524580   524633   837947  2916107  1048574
       0.08%    0.00%    0.00%    1.27%    3.65%    0.00%
20  19621118  1048960  1048942  1715806  6045418  2097150
    19606028  1048958  1048941  1694896  5832445  2097150
       0.08%    0.00%    0.00%    1.23%    3.65%    0.00%
3. Added some key asserts I overlooked before.
4. Updated the doc file.
before %sort was introduced. Redid them (the numbers change, but the
conclusions don't). Also did the samplesort counts with the released
2.2.1, as they're slightly different under the last CVS 2.3 samplesort
(some higher, some lower -- CVS had been changed to stop doing the
special-case business on recursive samplesort calls).
example of where this changes behavior is when a new-style instance
defines '__mul__' and '__rmul__' and is multiplied by an int. Before the
change, the '__rmul__' method was never called, even if the int was the
left operand.
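For instance (the class here is made up):

    class Scaled(object):
        def __init__(self, x):
            self.x = x
        def __mul__(self, other):
            return Scaled(self.x * other)
        def __rmul__(self, other):
            # now reached for  3 * Scaled(2)  too
            return Scaled(other * self.x)

    assert (Scaled(2) * 3).x == 6
    assert (3 * Scaled(2)).x == 6    # __rmul__ is called with the int on the left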
trampolining going on with the tp_new descriptor, where the inherited
PyType_GenericNew was overwritten with the much slower slot_tp_new
which would end up calling tp_new_wrapper which would eventually call
PyType_GenericNew. Add a special case for this to update_one_slot().
XXX Hope there isn't a loophole in this. I'll buy the first person to
point out a bug in the reasoning a beer.
Backport candidate (but I won't do it).
intern the string "__new__" so we can call PyObject_GetAttr() rather
than PyObject_GetAttrString(). (Though it's a mystery why slot_tp_new
is being called when a class doesn't define __new__. I'll look into
that tomorrow.)
2.2 backport candidate (but I won't do it).
a lot of work: it had to save and restore the current exception around
a call to lookup_maybe(), because that could fail in rare cases, and
most objects don't have a __del__ method, so the whole exercise was
usually a waste of time. Changed this to cache the __del__ method in
the type object just like all other special methods, in a new slot
tp_del. So now subtype_dealloc() can test whether tp_del is NULL and
skip the whole exercise if it is. The new slot doesn't need a new
flag bit: subtype_dealloc() is only called if the type was dynamically
allocated by type_new(), so it's guaranteed to have all current slots.
Types defined in C cannot fill in tp_del with a function of their own,
so there's no corresponding "wrapper". (That functionality is already
available through tp_dealloc.)
subtype_dealloc().
When call_finalizer() failed, it would return without going through
the trashcan end macro, thereby unbalancing the trashcan nesting level
counter, and thereby defeating the test case (slottrash() in
test_descr.py). This in turn meant that the assert in the GC_UNTRACK
macro wasn't triggered by the slottrash() test despite a bug in the
code: _PyTrash_destroy_chain() calls the dealloc routine with an
object that's untracked, and the assert in the GC_UNTRACK macro would
fail on this; but because an earlier test resurrects an object,
causing call_finalizer() to fail and the trashcan nesting level to
become unbalanced, _PyTrash_destroy_chain() was never called.
Calling the slottrash() test in isolation *did* trigger the assert,
however.
So the fix is twofold: (1) call the GC_UnTrack() function instead of
the GC_UNTRACK macro, because the function is safe when the object is
already untracked; (2) when call_finalizer() fails, jump to a label
that exits through the trashcan end macro, keeping the trashcan
nesting balanced.
This is inspired by SF patch 581742 (by Jonathan Hogg, who also
submitted the bug report, and two other suggested patches), but
separates the non-GC case from the GC case to avoid testing for GC
several times.
Had to fix an assert() from call_finalizer() that asserted that the
object wasn't untracked, because it's possible that the object isn't
GC'ed!
For a file f, iter(f) now returns f (unless f is closed), and f.next()
is similar to f.readline() when EOF is not reached; however, f.next()
uses a readahead buffer that messes up the file position, so mixing
f.next() and f.readline() (or other methods) doesn't work right.
Calling f.seek() drops the readahead buffer, but other operations
don't.
The real purpose of this change is to reduce the confusion between
objects and their iterators. By making a file its own iterator, it's
made clearer that using the iterator modifies the file object's state
(in particular the current position).
A nice side effect is that this speeds up "for line in f:" by not
having to use the xreadlines module. The f.xreadlines() method is
still supported for backwards compatibility, though it is the same as
iter(f) now.
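Illustrative (the file name is made up):

    f = open("data.txt", "w")
    f.write("one\ntwo\n")
    f.close()

    f = open("data.txt")
    assert iter(f) is f      # the file is its own iterator now
    for line in f:           # no xreadlines module needed
        pass
    f.close()
    # But beware: f.next() reads ahead, so mixing it with f.readline()
    # or f.tell() can be surprising; f.seek() discards the readahead.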
(I made some cosmetic changes to Oren's code, and added a test for
"file closed" to file_iternext() and file_iter().)
directly when no comparison function is specified. This saves a layer
of function call on every compare then. Measured speedups:
 i     2**i   *sort   \sort   /sort   3sort   +sort   %sort   ~sort   =sort   !sort
15    32768   12.5%    0.0%    0.0%  100.0%    0.0%   50.0%  100.0%  100.0%  -50.0%
16    65536    8.7%    0.0%    0.0%    0.0%    0.0%    0.0%   12.5%    0.0%    0.0%
17   131072    8.0%   25.0%    0.0%   25.0%    0.0%   14.3%    5.9%    0.0%    0.0%
18   262144    6.3%  -10.0%   12.5%   11.1%    0.0%    6.3%    5.6%   12.5%    0.0%
19   524288    5.3%    5.9%    0.0%    5.6%    0.0%    5.9%    5.4%    0.0%    2.9%
20  1048576    5.3%    2.9%    2.9%    5.1%    2.8%    1.3%    5.9%    2.9%    4.2%
The best indicators are those that take significant time (larger i), and
where sort doesn't do very few compares (so *sort and ~sort benefit most
reliably). The large numbers are due to roundoff noise combined with
platform variability; e.g., the 14.3% speedup for %sort at i=17 reflects
a printed elapsed time of 0.18 seconds falling to 0.17, but a change in
the last digit isn't really meaningful (indeed, if it really took 0.175
seconds, one electron having a lazy nanosecond could shift it to either
value <wink>). Similarly the 25% at 3sort i=17 was a meaningless change
from 0.05 to 0.04. However, almost all the "meaningless changes" were
in the same direction, which is good. The before-and-after times for
*sort are clearest:
before after
0.18 0.16
0.25 0.23
0.54 0.50
1.18 1.11
2.57 2.44
5.58 5.30
longer to run than normal. A profiler run showed that this was due to
PyFrame_New() taking up an unreasonable amount of time. A little
thinking showed that this was due to the while loop clearing the space
available for the stack. The solution is to only clear the local
variables (and cells and free variables), not the space available for
the stack, since anything beyond the stack top is considered to be
garbage anyway. Also, use memset() instead of a while loop counting
backwards. This should be a time savings for normal code too! (By a
probably unmeasurable amount. :-)
version of PySlice_GetIndicesEx"):
> OK. Michael, if you want to check in indices(), go ahead.
Then I did what was needed, but didn't check it in. Here it is.
listsort. If the former calls itself recursively, they're a waste of
time, since it's called on a random permutation of a random subset of
elements. OTOH, for exactly the same reason, they're an immeasurably
small waste of time (the odds of finding exploitable order in a random
permutation are ~= 0, so the special-case loops looking for order give
up quickly). The point is more for conceptual clarity.
Also changed some "assert comments" into real asserts; when this code
was first written, Python.h didn't supply assert.h.
introduced, list.sort() was rewritten to use only the "< or not <?"
distinction. After rich comparisons were introduced, docompare() was
fiddled to translate a Py_LT Boolean result into the old "-1 for <,
0 for ==, 1 for >" flavor of outcome, and the sorting code was left
alone. This left things more obscure than they should be, and it turns
out it also cost measurable cycles.
So: The old CMPERROR novelty is gone. docompare() is renamed to islt(),
and now has the same return conditions as PyObject_RichCompareBool. The
SETK macro is renamed to ISLT, and is even weirder than before (don't
complain unless you want to maintain the sort code <wink>).
Overall, this yields a 1-2% speedup in the usual (no explicit function
passed to list.sort()) case when sorting arrays of floats (as sortperf.py
does). The boost is higher for arrays of ints.
The staticforward define was needed to support certain broken C
compilers (notably SCO ODT 3.0, perhaps early AIX as well) that botched the
static keyword when it was used with a forward declaration of a static
initialized structure. Standard C allows the forward declaration with
static, and we've decided to stop catering to broken C compilers. (In
fact, we expect that the compilers are all fixed eight years later.)
I'm leaving staticforward and statichere defined in object.h as
static. This is only for backwards compatibility with C extensions
that might still use it.
XXX I haven't updated the documentation.
PyType_Ready() because the tp_iternext slot is set (fortunately,
because using the tp_iternext implementation for the next()
implementation is buggy). Also changed the allocation order in
enum_next() so that the underlying iterator is only moved ahead when
we have successfully allocated the result tuple and index.
di_dict field when the end of the list is reached. Also make the
error ("dictionary changed size during iteration") a sticky state.
Also remove the next() method -- one is supplied automatically by
PyType_Ready() because the tp_iternext slot is set. That's a good
thing, because the implementation given here was buggy (it never
raised StopIteration).
object references (it_seq for seqiterobject, it_callable and
it_sentinel for calliterobject) when the end of the list is reached.
Also remove the next() methods -- one is supplied automatically by
PyType_Ready() because the tp_iternext slot is set. That's a good
thing, because the implementation given here was buggy (it never
raised StopIteration).
it_seq field when the end of the list is reached.
Also remove the next() method -- one is supplied automatically by
PyType_Ready() because the tp_iternext slot is set. That's a good
thing, because the implementation given here was buggy (it never
raised StopIteration).
If the object is an ExtensionClass, for example, the slot is not even
defined. So we must check that the type has the slot (implied by
HAVE_CLASS) before calling tp_init().
explicit comparison function case: use PyObject_Call instead of
PyEval_CallObject. Same thing in context, but gives a 2.4% overall
speedup when sorting a list of ints via list.sort(__builtin__.cmp).
MSDN sample programs use it, apparently in error. The correct name
is WIN32_LEAN_AND_MEAN. After switching to the correct name, in two
cases more was needed because the code actually relied on things that
disappear when WIN32_LEAN_AND_MEAN is defined.
arg tuple. This was suggested on c.l.py but I'm afraid I can't find the msg
again for proper attribution. For
list.sort(cmp)
where list is a list of random ints, and cmp is __builtin__.cmp, this
yields an overall 50-60% speedup on my Win2K box. Of course this is a
best case, because the overhead of calling cmp relative to the cost of
actually comparing two ints is at an extreme. Nevertheless it's huge
bang for the buck. An additional 20-30% can be bought by making the arg
tuple an immortal static (avoiding all but "the first" PyTuple_New), but
that's tricky to make correct since docompare needs to be reentrant. So
this picks the cherry and leaves the pits for Fred <wink>.
Note that this makes no difference to the
list.sort()
case; an arg tuple gets built only if the user specifies an explicit
sort function.
helper macros to something saner, and used them appropriately in other
files too, to reduce #ifdef blocks.
classobject.c, instance_dealloc(): One of my worst Python Memories is
trying to fix this routine a few years ago when COUNT_ALLOCS was defined
but Py_TRACE_REFS wasn't. The special-build code here is way too
complicated. Now it's much simpler. Difference: in a Py_TRACE_REFS
build, the instance is no longer in the doubly-linked list of live
objects while its __del__ method is executing, and that may be visible
via sys.getobjects() called from a __del__ method. Tough -- the object
is presumed dead while its __del__ is executing anyway, and not calling
_Py_NewReference() at the start allows enormous code simplification.
typeobject.c, call_finalizer(): The special-build instance_dealloc()
pain apparently spread to here too via cut-'n-paste, and this is much
simpler now too. In addition, I didn't understand why this routine
was calling _PyObject_GC_TRACK() after a resurrection, since there's no
plausible way _PyObject_GC_UNTRACK() could have been called on the
object by this point. I suspect it was left over from pasting the
instance_dealloc() code. Instead asserted that the object is still
tracked. Caution: I suspect we don't have a test that actually
exercises the subtype_dealloc() __del__-resurrected-me code.
more trivial lexical helper macros so that uses of these guys expand
to nothing at all when they're not enabled. This should help sub-
standard compilers that can't do a good job of optimizing away the
previous "(void)0" expressions.
Py_DECREF: There's only one definition of this now. Yay! That
was that last one in the family defined multiple times in an #ifdef
maze.
Py_FatalError(): Changed the char* signature to const char*.
_Py_NegativeRefcount(): New helper function for the Py_REF_DEBUG
expansion of Py_DECREF. Calling an external function cuts down on
the volume of generated code. The previous inline expansion of abort()
didn't work as intended on Windows (the program often kept going, and
the error msg scrolled off the screen unseen). _Py_NegativeRefcount
calls Py_FatalError instead, which captures our best knowledge of
how to abort effectively across platforms.
Repair segfaults and infinite loops in COUNT_ALLOCS builds in the
presence of new-style (heap-allocated) classes/types.
Bugfix candidate. I'll backport this to 2.2. It's irrelevant in 2.1.
that have taken me "too long" to reverse-engineer over the years.
Vastly reduced the nesting level and redundancy of #ifdef-ery.
Took a light stab at repairing comments that are no longer true.
sys_gettotalrefcount(): Changed to enable under Py_REF_DEBUG.
It was enabled under Py_TRACE_REFS, which was much heavier than
necessary. sys.gettotalrefcount() is now available in a
Py_REF_DEBUG-only build.
mechanism is no longer evil: it no longer plays dangerous games with
the type pointer or refcounts, and objects in extension modules can play
along too without needing to edit the core first.
Rewrote all the comments to explain this, and (I hope) give clear
guidance to extension authors who do want to play along. Documented
all the functions. Added more asserts (it may no longer be evil, but
it's still dangerous <0.9 wink>). Rearranged the generated code to
make it clearer, and to tolerate either the presence or absence of a
semicolon after the macros. Rewrote _PyTrash_destroy_chain() to call
tp_dealloc directly; it was doing a Py_DECREF again, and that has all
sorts of obscure distorting effects in non-release builds (Py_DECREF
was already called on the object!). Removed Christian's little "embedded
change log" comments -- that's what checkin messages are for, and since
it was impossible to correlate the comments with the code that changed,
I found them merely distracting.
In a fresh interpreter, type.mro(tuple) would segfault, because
PyType_Ready() isn't called for tuple yet. To fix, call
PyType_Ready(type) if type->tp_dict is NULL.
These built-in functions are replaced by their (now callable) type:
slice()
buffer()
and these types can also be called (but have no built-in function
named after them):
classobj (type name used to be "class")
code
function
instance
instancemethod (type name used to be "instance method")
The module "new" has been replaced with a small backward compatibility
placeholder in Python.
A large portion of the patch simply removes the new module from
various platform-specific build recipes. The following binary Mac
project files still have references to it:
Mac/Build/PythonCore.mcp
Mac/Build/PythonStandSmall.mcp
Mac/Build/PythonStandalone.mcp
[I've tweaked the code layout and the doc strings here and there, and
added a comment to types.py about StringTypes vs. basestring. --Guido]
gotten from a weak reference to NULL instead of to None. This caused
the following assert() to fail (but only in 2.2 in the debug build --
I have to find a better test case). Will backport.
optional attribute, only clear the exception when the internal getattr
operation raised AttributeError. Many places in this file already had
that policy; but just as many didn't, and there didn't seem to be any
rhyme or reason to it. Be consistently cautious.
Question: should I backport this? On the one hand it's a bugfix. On
the other hand it's a change in behavior. Certain forms of buggy or
just weird code would work in the past but raise an exception under
the new rules; e.g. if you define a __getattr__ method that raises a
non-AttributeError exception.
473985. Through a subtle rearrangement of some members in the etype
struct (!), mapping methods are now preferred over sequence methods,
which is necessary to support str.__getitem__("hello", slice(4)) etc.
[ 400998 ] experimental support for extended slicing on lists
somewhat spruced up and better tested than it was when I wrote it.
Includes docs & tests. The whatsnew section needs expanding, and arrays
should support extended slices -- later.
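For example (extended slices on plain lists):

    a = list(range(10))
    assert a[::2] == [0, 2, 4, 6, 8]
    assert a[7:2:-2] == [7, 5, 3]
    a[::2] = ['x'] * 5     # assignment needs a matching number of items
    del a[::3]             # deletion with a step works too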
discovered that subtype_traverse must traverse the type if it is a
heap type, because otherwise some cycles involving a type and its
instance would not be collected. Simplest example:
    while 1:
        class C(object): pass
        C.ref = C()
This program grows without bounds before this fix. (It grows ever
slower since it spends ever more time in the collector.)
Simply adding the right visit() call to subtype_traverse() revealed
other problems. With MvL's help we re-learned that type_clear()
doesn't have to clear *all* references, only the ones that may not be
cleared by other means. Careful analysis (see comments in the code)
revealed that only tp_mro needs to be cleared. (The previous checkin
to this file adds a test for tp_mro==NULL to _PyType_Lookup() that's
essential to prevent crashes due to tp_mro being NULL when
subtype_dealloc() tries to look for a __del__ method.) The same kind
of analysis also revealed that subtype_clear() doesn't need to clear
the instance dict.
With this fix, a useful property of the collector is once again
guaranteed: a single gc.collect() call will clear out all garbage.
(It didn't always before, which put us on the track of this bug.)
Will backport to 2.2.
about the test case, slot_nb_power gets called on behalf of its second
argument, but with a non-None modulus it wouldn't check this, and
believed it was called on behalf of its first argument. Fix this
properly, and get rid of the code in _PyType_Lookup() that tries to
call _PyType_Ready(). But do leave a check for a NULL tp_mro there,
because this can still legitimately occur.
I'll fix this in 2.2.x too.
While I was at it, I added a tp_clear handler and changed the
tp_dealloc handler to use the clear_slots helper for the tp_clear
handler.
Also tightened the rules for slot names: they must now be proper
identifiers (ignoring the dirty little fact that <ctype.h> is locale
sensitive).
Also set mp->flags = READONLY for the __weakref__ pseudo-slot.
Most of this is a 2.2 bugfix candidate; I'll apply it there myself.
Change the module constructor (module_init) to have the signature
__init__(name:str, doc=None); this prevents the call from type_new()
from succeeding. While we're at it, prevent repeated calling of
module_init for the same module from leaking the dict, changing the
semantics so that __dict__ is only initialized if NULL.
Also adding a unittest, test_module.py.
This is an incompatibility with 2.2, if anybody was instantiating the
module class before, their argument list was probably empty; so this
can't be backported to 2.2.x.
In the past, an object's tp_compare could return any value. In 2.2
the docs were tightened to require it to return -1, 0 or 1; and -1 for
an error.
We now issue a warning if the value is not in this range. When an
exception is raised, we allow -1 or -2 as return value, since -2 will
be the recommended return value for errors in the future. (Eventually
tp_compare will also be allowed to return +2, to indicate
NotImplemented; but that can only be implemented once we know all
extensions return a value in [-2...1]. Or perhaps it will require the
type to set a flag bit.)
I haven't decided yet whether to backport this to 2.2.x. The patch
applies fine. But is it fair to start warning in 2.2.2 about code
that worked flawlessly in 2.2.1?
for 'str' and 'unicode', and can be used instead of
types.StringTypes, e.g. to test whether something is "a string":
isinstance(x, basestring) is True for Unicode and 8-bit strings. This
is an abstract base class and cannot be instantiated directly.
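For example (Python 2 spellings; basestring doesn't exist in Python 3):

    assert isinstance("abc", basestring)
    assert isinstance(u"abc", basestring)
    assert not isinstance(123, basestring)
    try:
        basestring("x")
    except TypeError:
        pass               # abstract: can't be instantiated directly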
A MemoryError is now raised when the list cannot be created.
There is a test, but as the comment says, it really only
works for 32 bit systems. I don't know how to improve
the test for other systems (ie, 64 bit or systems
where the data size != addressable size,
e.g. 64 bit data, but 48 bit addressable memory)
returned a proxy for __class__ whose __bases__ was also a proxy. The
merge_class_dict() helper for dir() assumed incorrectly that __bases__
would always be a tuple and used the in-line tuple API on the proxy.
I will backport this to 2.2 as well.
handlers were both set, but were not compatible. This change uses only the
tp_getattro handler with a more "modern" approach.
This fixes SF bug #551285.
don't understand how this function works, also beefed up the docs. The
most common usage error is of this form (often spread out across gotos):
    if (_PyString_Resize(&s, n) < 0) {
        Py_DECREF(s);
        s = NULL;
        goto outtahere;
    }
The error is that if _PyString_Resize runs out of memory, it automatically
decrefs the input string object s (which also deallocates it, since its
refcount must be 1 upon entry), and sets s to NULL. So if the "if"
branch ever triggers, it's an error to call Py_DECREF(s): s is already
NULL! A correct way to write the above is the simpler (and intended)
    if (_PyString_Resize(&s, n) < 0)
        goto outtahere;
Bugfix candidate.
This implements ideas from Marc-Andre, Martin, Guido and me on Python-Dev.
"Short" Unicode strings are encoded into a "big enough" stack buffer,
then exactly as much string space as they turn out to need is allocated
at the end. This should have speed benefits akin to Martin's "measure
once, allocate once" strategy, but without needing a distinct measuring
pass.
"Long" Unicode strings allocate as much heap space as they could possibly
need (4 x # Unicode chars), and do a realloc at the end to return the
untouched excess. Since the overallocation is likely to be substantial,
this shouldn't burden the platform realloc with unusably small excess
blocks.
Also simplified uses of the PyString_xyz functions. Also added a release-
build check that 4*size doesn't overflow a C int. Sooner or later, that's
going to happen.
left and right type were of the same type and not classic instances.
This shortcut is dangerous for proxy types, because it means that
coerce(Proxy(1), Proxy(2.1)) leaves Proxy(1) unchanged rather than
turning it into Proxy(1.0).
In an ever-so-slight change of semantics, I now only take the shortcut
when the left and right types are of the same type and don't have the
CHECKTYPES feature. It so happens that classic instances have this
flag, so the shortcut is still skipped in this case (i.e. nothing
changes for classic instances). Proxies also have this flag set
(otherwise implementing numeric operations on proxies would become
nightmarish) and this means that the shortcut is also skipped there,
as desired. It so happens that int, long and float also have this
flag set; that means that e.g. coerce(1, 1) will now invoke
int_coerce(). This is fine: int_coerce() can deal with this, and I'm
not worried about the performance; int_coerce() is only invoked when
the user explicitly calls coerce(), which should be rarer than rare.
This fixes the problem that Barry reported on python-dev:
>>> 23000 .__class__ = bool
crashes in the deallocator. This was because int inherited tp_free
from object, which uses the default allocator.
2.2. Bugfix candidate.
states can be for this function, and ensure that only AttributeErrors
are masked. Any other exception raised via the equivalent of
getattr(cls, '__bases__') should be propagated up.
abstract_issubclass(): If abstract_get_bases() returns NULL, we must
call PyErr_Occurred() to see if an exception is being propagated, and
return -1 or 0 as appropriate. This is the specific fix for a problem
whereby if getattr(derived, '__bases__') raised an exception, an
"undetected error" would occur (under a debug build). This nasty
situation was uncovered when writing a security proxy extension type
for the Zope3 project, where the security proxy raised a Forbidden
exception on getattr of __bases__.
PyObject_IsInstance(), PyObject_IsSubclass(): After both calls to
abstract_get_bases(), where we're setting the TypeError if the return
value is NULL, we must first check to see if an exception occurred,
and /not/ mask an existing exception.
Neil Schemenauer should double check that these changes don't break
his ExtensionClass examples (there aren't any test cases for those
examples and abstract_get_bases() was added by him in response to
problems with ExtensionClass). Neil, please add test cases if
possible!
I believe this is a bug fix candidate for Python 2.2.2.
http://www.python.org/sf/444708
This adds the optional argument for str.strip
to unicode.strip too and makes it possible
to call str.strip with a unicode argument
and unicode.strip with a str argument.
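For example (Python 2 spellings):

    assert "xxhixx".strip("x") == "hi"
    assert u"--hi--".strip(u"-") == u"hi"
    u"xxhixx".strip("x")     # unicode.strip with a str argument now works
    "xxhixx".strip(u"x")     # ... and str.strip with a unicode argument
    assert "  hi  ".strip() == "hi"   # no argument: whitespace, as before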
+ Continued looping until n bytes in the buffer have been filled, not
just when n bytes have been read from the file. This repairs the
bug that f.readlines() only sucked up the first 8192 bytes of the file
on Windows when universal newlines was enabled and f was opened in
U mode (see Python-Dev -- this was the ultimate cause of the
test_inspect.py failure).
+ Changed prototype to take a char* buffer (void* doesn't make much sense).
+ Squashed size_t vs int mismatches (in particular, besides the unsigned
vs signed distinction, size_t may be larger than int).
+ Gets out under all error conditions now (it's possible for fread() to
suffer an error even if it returns a number larger than 0 -- any
"short read" is an error or EOF condition).
+ Rearranged and simplified declarations.
pointers is a signed type. Changing "allocated" to a signed int makes
undetected overflow more likely, but there was no overflow detection
before either.
PyFrame_FastToLocals() and PyFrame_LocalsToFast() had a return if
f_nlocals was 0. I think this was a holdover from the pre 2.1 days
when regular locals were the only kind of local variables.
The change makes it possible to use a free variable in eval or exec if
the variable is also used elsewhere in the same block, which is
what the documentation says.
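A sketch of what this enables (function names made up):

    def f():
        x = 1
        def g():
            x                      # x is used in the block, so it's a free variable
            return eval("x + 1")   # ... and eval() can now see it
        return g()

    assert f() == 2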
consistency checks, enabled only in a debug (Py_DEBUG) build. Note that
this never gets called automatically unless PYMALLOC_DEBUG is #define'd
too, and the envar PYTHONMALLOCSTATS exists.
Change type_get_doc (the get function for __doc__) to look in tp_dict
more often, and if it finds a descriptor in tp_dict, to call it (with
a NULL instance). This means you can add a __doc__ descriptor to a
new-style class that returns instance docs when called on an instance,
and class docs when called on a class -- or the same docs in either
case, but lazily computed.
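A sketch of that trick (the descriptor class name is made up):

    class DocDescr(object):
        def __get__(self, obj, cls):
            if obj is None:
                return "class docs for %s" % cls.__name__
            return "instance docs for %r" % obj

    class C(object):
        __doc__ = DocDescr()

    assert C.__doc__ == "class docs for C"
    assert C().__doc__.startswith("instance docs")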
I'll also check this into the 2.2 maintenance branch.
PyNumber_InPlaceMultiply insisted on calling sq_inplace_repeat if it
existed, even if nb_inplace_multiply also existed and the arguments
weren't right for sq_inplace_repeat. Change this to only use
sq_inplace_repeat if nb_inplace_multiply isn't defined.
Bugfix candidate.
Add a method zfill to str, unicode and UserString and change
Lib/string.py accordingly.
This activates the zfill version in unicodeobject.c that was
commented out and implements the same in stringobject.c. It also
adds the test for unicode support in Lib/string.py back in and
uses repr() instead of str() (as it was before Lib/string.py 1.62)
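For example:

    assert "42".zfill(5) == "00042"
    assert "-3".zfill(5) == "-0003"    # the sign stays in front of the padding
    assert u"7".zfill(3) == u"007"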
Complex numbers implement divmod() and //, neither of which makes one
lick of sense. Unfortunately this is documented, so I'm adding a
deprecation warning now, so we can delete this silliness, oh, around
2005 or so.
Bugfix candidate (At least for 2.2.2, I think.)
Highlights: import and friends will understand any of \r, \n and \r\n
as end of line. Python file input will do the same if you use mode 'U'.
Everything can be disabled by configuring with --without-universal-newlines.
See PEP278 for details.
Added code to call this when PYMALLOC_DEBUG is enabled, and envar
PYTHONMALLOCSTATS is set, whenever a new arena is obtained and once
late in the Python shutdown process.
Put a bound on the number of frameobjects that can live in the
frameobject free_list.
Am also backporting to 2.2. I don't intend to backport to 2.1 (too
much work -- lots of cyclic structures leak there, and the GC API).
Add optional arg to string methods strip(), lstrip(), rstrip().
The optional arg specifies characters to delete.
Also for UserString.
Still to do:
- Misc/NEWS
- LaTeX docs (I did the docstrings though)
- Unicode methods, and Unicode support in the string methods.
_PyObject_DebugMalloc: explicitly cast PyObject_Malloc's result to the
target pointer type.
_PyObject_DebugDumpStats: change decl of arena_alignment from unsigned
int to unsigned long.
This is for the 2.3 release only (it's new code).
most of the work. In particular, if the underlying realloc is able to
grow the memory block in place, great (this routine used to do a fresh
malloc + memcpy every time a block grew). BTW, I'm not so keen here on
avoiding possible quadratic-time realloc patterns as I am on making
the debug pymalloc more invisible (the more it uses memory "just like"
the underlying allocator, the better the chance that a suspected memory
corruption bug won't vanish when the debug malloc is turned on).
PyMem_{Del, DEL} doesn't work yet (compilation problems).
pyport.h: _PyMem_EXTRA is gone.
pymem.h: Repaired comments. PyMem_{Malloc, MALLOC} and
PyMem_{Realloc, REALLOC} now make the same x-platform guarantees when
asking for 0 bytes, and when passing a NULL pointer to the latter.
object.c: PyMem_{Malloc, Realloc} just call their macro versions
now, since the latter take care of the x-platform 0 and NULL stuff
by themselves now.
pypcre.c, grow_stack(): So sue me. On two lines, this called
PyMem_RESIZE to grow a "const" area. It's not legit to realloc a
const area, so the compiler warned given the new expansion of
PyMem_RESIZE. It would have gotten the same warning before if it
had used PyMem_Resize() instead; the older macro version, but not the
function version, silently cast away the constness. IMO that was a wrong
thing to do, and the docs say the macro versions of PyMem_xyz are
deprecated anyway. If somebody else is resizing const areas with the
macro spelling, they'll get a warning when they recompile now too.
The bug report pointed out a bogosity in the comment block explaining
thread safety for arena management. Repaired that comment, repaired a
couple others while I was at it, and added an assert.
_PyMalloc_DebugRealloc: If this needed to get more memory, but couldn't,
it erroneously freed the original memory. Repaired that.
This is for 2.3 only (unless we decide to backport the new pymalloc).
open_the_file: Some (not all) flavors of Windows set errno to EINVAL
when passed a syntactically invalid filename. Python turned that into an
incomprehensible complaint about the mode string. Fixed by special-casing
MSVC.
when PyType_Ready() was called, if ob_type was found to be NULL, it
was always set to &PyType_Type; now it is set to base->ob_type,
where base is tp_base, defaulting to &PyBaseObject_Type.
- PyType_Ready() accidentally did not inherit tp_is_gc; now it does.
Bugfix candidate.
runtime multiplications and divisions, via the scheme developed with
Vladimir Marangozov on Python-Dev. The pool_header struct loses its
capacity member, but gains nextoffset and maxnextoffset members; this
still leaves it at 32 bytes on a 32-bit box (it has to be padded to a
multiple of 8 bytes).
speeds up __getitem__ and __setitem__ in subclasses of built-in
sequences.
It's much revised because I took the opportunity to refactor the code
somewhat (moving a large section of duplicated code to a helper
function) and added comments to a series of functions.
what these do given a 0 size argument. This is so that when pymalloc
is enabled, we don't need to wrap pymalloc calls in goofy little
routines special-casing 0. Note that it's virtually impossible to meet
the doc's promise that malloc(0) will never return NULL; this makes a
best effort, but not an insane effort. The code does promise that
realloc(not-NULL, 0) will never return NULL (malloc(0) is much harder).
_PyMalloc_Realloc: Changed to take over all requests for 0 bytes, and
rearranged to be a little quicker in expected cases.
All over the place: when resorting to the platform allocator, call
free/malloc/realloc directly, without indirecting thru macros. This
should avoid needing a nightmarish pile of #ifdef-ery if PYMALLOC_DEBUG
is changed so that pymalloc takes over all Py{Mem, Object} memory
operations (which would add useful debugging info to PyMem_xyz
allocations too).
PEP 285. Everything described in the PEP is here, and there is even
some documentation. I had to fix 12 unit tests; all but one of these
were printing Boolean outcomes that changed from 0/1 to False/True.
(The exception is test_unicode.py, which did a type(x) == type(y)
style comparison. I could've fixed that with a single line using
isinstance(x, type(y)), but instead chose to be explicit about those
places where a bool is expected.)
Still to do: perhaps more documentation; change standard library
modules to return False/True from predicates.
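For the record, the user-visible basics:

    assert isinstance(True, int) and True == 1 and False == 0
    assert str(True) == "True"
    assert (3 > 2) is True     # comparisons now return bools
    assert True + True == 2    # arithmetic still treats them as ints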
possible pool states. I think it's much clearer now.
Added a new long overdue block-management overview comment block.
I believe the comments are in good shape now.
Added two comments about possible small optimizations (one getting rid
of runtime multiplications at the cost of a new pool_header member; the
other getting rid of runtime divisions and the pool_header capacity
member, at the cost of a static const vector of 32 uints).
This displays stats about the # of arenas, pools, blocks and bytes, to
stderr, both used and reserved but unused.
CAUTION: Because PYMALLOC_DEBUG is on, the debug malloc routine adds
16 bytes to each request. This makes each block appear two size classes
higher than it would be if PYMALLOC_DEBUG weren't on.
So far, playing with this confirms the obvious: there's a lot of activity
in the "small dict" size class, but nothing in the core makes any use of
the 8-byte or 16-byte classes.
the code so that the most frequent cases come first. Added comments.
Found a hidden assumption that a pool contains room for at least two
blocks, and added an assert to catch a violation if it ever happens in
a place where that matters. Gave the normal "I allocated this block"
case a longer basic block to work with before it has to do its first
branch (via breaking apart an embedded assignment in an "if", and
hoisting common code out of both branches).
address obtained from system malloc/realloc without holding the GIL.
When the vector of arena base addresses has to grow, the old vector is
deliberately leaked. This makes "stale" x-thread references safe.
arenas and narenas are also declared volatile, and changed in an order
that prevents a thread from picking up a value of narenas too large
for the value of arenas it sees.
Added more asserts.
Fixed an old inaccurate comment.
Added a comment explaining why it's safe to call pymalloc free/realloc
with an address obtained from system malloc/realloc even when arenas is
still NULL (this is obscure, since the ADDRESS_IN_RANGE macro
appears <wink> to index into arenas).
this. But added an overflow check just in case there is.
Got rid of the ushort macro. It wasn't used anymore (it was only used
in the no-longer-exists off_t macro), and there's no plausible use for it.
waste the first pool if malloc happens to return a pool-aligned address.
This means the number of pools per arena can now vary by 1. Unfortunately,
the code counted up from 0 to a presumed constant number of pools. So
changed the increasing "watermark" counter to a decreasing "nfreepools"
counter instead, and fiddled various stuff accordingly. This also allowed
getting rid of two more macros.
Also changed the code to align the first address to a pool boundary
instead of a page boundary. These are two parallel sets of macro #defines
that happen to be identical now, but the page macros are in theory more
restrictive (bigger), and there's simply no reason I can see that it
wasn't aligning to the less restrictive pool size all along (the code
only relies on pool alignment).
Hmm. The "page size" macros aren't used for anything *except* defining
the pool size macros, and the comments claim the latter isn't necessary.
So this has the feel of a layer of indirection that doesn't serve a
purpose; should probably get rid of the page macros now.
are called without the GIL. It's incredibly unlikely to fail, but I can't
make this bulletproof without either adding a lock for exclusion, or
giving up on growing the arena base-address vector (it would be safe if
this were a static array).
+ A new scheme for determining whether an address belongs to a pymalloc
arena. This should be 100% reliable. The poolp->pooladdr and
poolp->magic members are gone. A new poolp->arenaindex member takes
their place. Note that the pool header overhead doesn't actually
shrink, though, since the header is padded to a multiple of 8 bytes.
+ _PyMalloc_Free and _PyMalloc_Realloc should now be safe to call for
any legit address, whether obtained from a _PyMalloc function or from
the system malloc/realloc. It should even be safe to call
_PyMalloc_Free when *not* holding the GIL, provided that the passed-in
address was obtained from system malloc/realloc. Since this is
accomplished without any locks, you better believe the code is subtle.
I hope it's sufficiently commented.
+ The above implies we don't need the new PyMalloc_{New, NewVar, Del}
API anymore, and could switch back to PyObject_XXX without breaking
existing code mixing PyObject_XXX with PyMem_{Del, DEL, Free, FREE}.
Nothing is done here about that yet, and I'd like to see this new
code exercised more first.
+ The small object threshold is boosted to 256 (the max). We should
play with that some more, but the old 64 was way too small for 2.3.
+ Getting a new arena is now done via new function new_arena().
+ Removed some unused macros, and squashed out some macros that were
used only once to define other macros.
+ Arenas are no longer linked together. A new vector of arena base
addresses had to be created anyway to make address classification
bulletproof.
+ A lot of the patch size is an illusion: given the way address
classification works now, it was more convenient to switch the
sense of the prime "if" tests in the realloc and free functions,
so the "if" and "else" blocks got swapped.
+ Assorted minor code, comment and whitespace cleanup.
Back to the Windows installer <wink>.
The fix makes it possible to call PyObject_GC_UnTrack() more than once
on the same object, and then move the PyObject_GC_UnTrack() call to
*before* the trashcan code is invoked.
BUGFIX CANDIDATE!
descriptor, as used for the tp_methods slot of a type. These new flag
bits are both optional, and mutually exclusive. Most methods will not
use either. These flags are used to create special method types which
exist in the same namespace as normal methods without having to use
tedious construction code to insert the new special method objects in
the type's tp_dict after PyType_Ready() has been called.
If METH_CLASS is specified, the method will represent a class method
like that returned by the classmethod() built-in.
If METH_STATIC is specified, the method will represent a static method
like that returned by the staticmethod() built-in.
These flags may not be used in the PyMethodDef table for modules since
these special method types are not meaningful in that case; a
ValueError will be raised if these flags are found in that context.
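At the Python level the two flags correspond to what the built-ins
already give you (illustrative only; the flags themselves go in a C
PyMethodDef table, and the names below are made up):

    class C(object):
        def from_flag(cls):
            return cls.__name__
        from_flag = classmethod(from_flag)     # METH_CLASS behaves like this

        def answer():
            return 42
        answer = staticmethod(answer)          # METH_STATIC behaves like this

    assert C.from_flag() == "C" and C().from_flag() == "C"
    assert C.answer() == 42 and C().answer() == 42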
Assorted: bump the serial number via a trivial new bumpserialno()
function. The point is to give a single place to set a breakpoint when
waiting for a specific serial number.
of get_line. This makes test_bufio finish in 1.7 seconds instead of 57
seconds on my machine (with Py_DEBUG defined).
Also, rename the local variables n1 and n2 to used_v_size and
total_v_size.
When WITH_PYMALLOC is defined, define PYMALLOC_DEBUG to enable the debug
allocator. This can be done independent of build type (release or debug).
A debug build automatically defines PYMALLOC_DEBUG when pymalloc is
enabled. It's a detected error to define PYMALLOC_DEBUG when pymalloc
isn't enabled.
Two debugging entry points defined only under PYMALLOC_DEBUG:
+ _PyMalloc_DebugCheckAddress(const void *p) can be used (e.g., from gdb)
to sanity-check a memory block obtained from pymalloc. It sprays
info to stderr (see next) and dies via Py_FatalError if the block is
detectably damaged.
+ _PyMalloc_DebugDumpAddress(const void *p) can be used to spray info
about a debug memory block to stderr.
A tiny start at implementing "API family" checks isn't good for
anything yet.
_PyMalloc_DebugRealloc() has been optimized to do little when the new
size is <= old size. However, if the new size is larger, it really
can't call the underlying realloc() routine without either violating its
contract, or knowing something non-trivial about how the underlying
realloc() works. A memcpy is always done in this case.
This was a disaster for (and only) one of the std tests: test_bufio
creates single text file lines up to a million characters long. On
Windows, fileobject.c's get_line() uses the horridly funky
getline_via_fgets(), which keeps growing and growing a string object
hoping to find a newline. It grew the string object 1000 bytes each
time, so for a million-character string it took approximately forever
(I gave up after a few minutes).
So, also:
fileobject.c, getline_via_fgets(): When a single line is outrageously
long, grow the string object at a mildly exponential rate, instead of
just 1000 bytes at a time.
That's enough so that a debug-build test_bufio finishes in about 5 seconds
on my Win98SE box. I'm curious to try this on Win2K, because it has very
different memory behavior than Win9X, and test_bufio always took a factor
of 10 longer to complete on Win2K. It *could* be that the endless
reallocs were simply killing it on Win2K even in the release build.
Also move all _PyMalloc_XXX entry points into obmalloc.c.
The Windows build works fine.
The Unix build is changed here (Makefile.pre.in), but not tested.
No other platform's build process has been fiddled.
Konrad was too kind. Not only did it raise an exception, the specific
exception it raised made no sense. These are old bugs in complex_pow()
and friends:
1. Raising 0 to a negative power isn't a range error, it's a domain
error, so changed c_pow() to set errno to EDOM in that case instead
of ERANGE.
2. Changed complex_pow() to:
A. Used the Py_ADJUST_ERANGE2 macro to try to clear errno of a spurious
ERANGE error due to underflow in the libm pow() called by c_pow().
B. Produced different exceptions depending on the errno value:
i) For errno==EDOM, raise ZeroDivisionError instead of ValueError.
This is for consistency with the non-complex cases 0.0**-2 and
0**-2 and 0L**-2.
ii) For errno==ERANGE, raise OverflowError.
Bugfix candidate.
The proper fix is not quite what was submitted; it's really better to
take the class of the object passed rather than calling PyMethod_New
with NULL pointer args, because that can then cause other core dumps
later.
I also added a testcase for the fix to classmethods() in test_descr.py.
I've already applied this to the 2.2 branch.
As promised in my response to the bug report, I'm not really fixing
it; in fact, one could argue over what the proper fix should do.
Instead, I'm adding a little magic that raises TypeError if you try to
pickle an instance of a class that has __slots__ but doesn't define or
override __getstate__. This is done by adding a bozo __getstate__
that always raises TypeError.
There were several places that assumed the md_dict field was always
set, but it needn't be. Fixed these to be more careful.
I changed PyModule_GetDict() to initialize md_dict to a new dictionary
if it's NULL.
Bugfix candidate.
and (b) stop trying to prevent file growth.
Beef up the file.truncate() docs.
Change test_largefile.py to stop assuming that f.truncate() moves the
file pointer to the truncation point, and to verify instead that it leaves
the file position alone. Remove the test for what happens when a
specified size exceeds the original file size (it's ill-defined, according
to the Single Unix Spec).