more trivial lexical helper macros so that uses of these guys expand
to nothing at all when they're not enabled. This should help sub-
standard compilers that can't do a good job of optimizing away the
previous "(void)0" expressions.
Py_DECREF: There's only one definition of this now. Yay! That
was the last one in the family defined multiple times in an #ifdef
maze.
Py_FatalError(): Changed the char* signature to const char*.
_Py_NegativeRefcount(): New helper function for the Py_REF_DEBUG
expansion of Py_DECREF. Calling an external function cuts down on
the volume of generated code. The previous inline expansion of abort()
didn't work as intended on Windows (the program often kept going, and
the error msg scrolled off the screen unseen). _Py_NegativeRefcount
calls Py_FatalError instead, which captures our best knowledge of
how to abort effectively across platforms.
Repair segfaults and infinite loops in COUNT_ALLOCS builds in the
presence of new-style (heap-allocated) classes/types.
Bugfix candidate. I'll backport this to 2.2. It's irrelevant in 2.1.
that have taken me "too long" to reverse-engineer over the years.
Vastly reduced the nesting level and redundancy of #ifdef-ery.
Took a light stab at repairing comments that are no longer true.
sys_gettotalrefcount(): Changed to be enabled under Py_REF_DEBUG.
It was enabled under Py_TRACE_REFS, which was much heavier than
necessary. sys.gettotalrefcount() is now available in a
Py_REF_DEBUG-only build.
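For instance, in an interpreter built with Py_REF_DEBUG (a rough
sketch; the variable names are just illustrative):

    import sys
    if hasattr(sys, "gettotalrefcount"):   # only present in Py_REF_DEBUG builds
        before = sys.gettotalrefcount()
        junk = [None] * 10
        after = sys.gettotalrefcount()     # larger than 'before': the new objects count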
The trashcan mechanism is no longer evil: it no longer plays dangerous
games with the type pointer or refcounts, and objects in extension
modules can play along too without needing to edit the core first.
Rewrote all the comments to explain this, and (I hope) give clear
guidance to extension authors who do want to play along. Documented
all the functions. Added more asserts (it may no longer be evil, but
it's still dangerous <0.9 wink>). Rearranged the generated code to
make it clearer, and to tolerate either the presence or absence of a
semicolon after the macros. Rewrote _PyTrash_destroy_chain() to call
tp_dealloc directly; it was doing a Py_DECREF again, and that has all
sorts of obscure distorting effects in non-release builds (Py_DECREF
was already called on the object!). Removed Christian's little "embedded
change log" comments -- that's what checkin messages are for, and since
it was impossible to correlate the comments with the code that changed,
I found them merely distracting.
In a fresh interpreter, type.mro(tuple) would segfault, because
PyType_Ready() isn't called for tuple yet. To fix, call
PyType_Ready(type) if type->tp_dict is NULL.
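A minimal reproducer, as described above -- run as the very first
statement in a fresh interpreter:

    type.mro(tuple)    # used to segfault; now PyType_Ready() runs on demand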
These built-in functions are replaced by their (now callable) type:
slice()
buffer()
and these types can also be called (but have no built-in function
named after them; see the sketch after this list):
classobj (type name used to be "class")
code
function
instance
instancemethod (type name used to be "instance method")
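A quick sketch of what this looks like from Python (names such as C
and f are just for illustration; buffer and classic classes are
2.x-only):

    s = slice(1, 10, 2)
    assert type(s) is slice             # slice is the type itself, not a factory
    assert type(buffer("x")) is buffer  # likewise for buffer
    import types
    C = types.ClassType("C", (), {})    # classobj: callable, but no built-in name
    f = types.FunctionType(compile("0", "<string>", "eval"), {})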
The module "new" has been replaced with a small backward compatibility
placeholder in Python.
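Assuming the placeholder simply re-exports the corresponding types
from the types module under their old names, old code keeps working,
e.g.:

    import new, types
    assert new.module is types.ModuleType           # compat shim: old name, same type
    assert new.instancemethod is types.MethodType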
A large portion of the patch simply removes the new module from
various platform-specific build recipes. The following binary Mac
project files still have references to it:
Mac/Build/PythonCore.mcp
Mac/Build/PythonStandSmall.mcp
Mac/Build/PythonStandalone.mcp
[I've tweaked the code layout and the doc strings here and there, and
added a comment to types.py about StringTypes vs. basestring. --Guido]
gotten from a weak reference to NULL instead of to None. This caused
the following assert() to fail (but only in 2.2 in the debug build --
I have to find a better test case). Will backport.
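The Python-level invariant at stake is roughly this (a sketch; C, o
and r are made-up names):

    import weakref
    class C(object):
        pass
    o = C()
    r = weakref.ref(o)
    del o                  # the referent dies
    assert r() is None     # a dead reference reports None, never a bogus object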
optional attribute, only clear the exception when the internal getattr
operation raised AttributeError. Many places in this file already had
that policy; but just as many didn't, and there didn't seem to be any
rhyme or reason to it. Be consistently cautious.
Question: should I backport this? On the one hand it's a bugfix. On
the other hand it's a change in behavior. Certain forms of buggy or
just weird code would work in the past but raise an exception under
the new rules; e.g. if you define a __getattr__ method that raises a
non-AttributeError exception.
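A hypothetical illustration of the policy at the Python level:

    class Weird(object):
        def __getattr__(self, name):
            raise ValueError("broken __getattr__")   # not AttributeError

    w = Weird()
    getattr(w, "spam", "default")   # probing an optional attribute: the
                                    # ValueError now propagates instead of
                                    # being silently treated as "missing"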
473985. Through a subtle rearrangement of some members in the etype
struct (!), mapping methods are now preferred over sequence methods,
which is necessary to support str.__getitem__("hello", slice(4)) etc.
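For instance, both spellings now work and return 'hell':

    str.__getitem__("hello", slice(4))   # handled by the mapping slot
    "hello"[slice(4)]                    # equivalent spelling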
[ 400998 ] experimental support for extended slicing on lists
somewhat spruced up and better tested than it was when I wrote it.
Includes docs & tests. The whatsnew section needs expanding, and arrays
should support extended slices -- later.
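Roughly what the patch enables:

    L = range(10)
    L[::2]                   # [0, 2, 4, 6, 8]
    L[7:2:-2]                # [7, 5, 3]
    L[::2] = [0] * 5         # extended-slice assignment (lengths must match)
    del L[::3]               # extended-slice deletion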
discovered that subtype_traverse must traverse the type if it is a
heap type, because otherwise some cycles involving a type and its
instance would not be collected. Simplest example:
    while 1:
        class C(object): pass
        C.ref = C()
This program grows without bounds before this fix. (It grows ever
slower since it spends ever more time in the collector.)
Simply adding the right visit() call to subtype_traverse() revealed
other problems. With MvL's help we re-learned that type_clear()
doesn't have to clear *all* references, only the ones that may not be
cleared by other means. Careful analysis (see comments in the code)
revealed that only tp_mro needs to be cleared. (The previous checkin
to this file adds a test for tp_mro==NULL to _PyType_Lookup() that's
essential to prevent crashes due to tp_mro being NULL when
subtype_dealloc() tries to look for a __del__ method.) The same kind
of analysis also revealed that subtype_clear() doesn't need to clear
the instance dict.
With this fix, a useful property of the collector is once again
guaranteed: a single gc.collect() call will clear out all garbage.
(It didn't always before, which put us on the track of this bug.)
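A sketch, reusing the cycle from the example above:

    import gc
    class C(object):
        pass
    C.ref = C()        # type <-> instance cycle, as above
    del C              # the cycle is now unreachable
    gc.collect()       # a single pass reclaims it (it used to survive forever)
    gc.collect()       # and a second pass finds nothing new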
Will backport to 2.2.
about the test case, slot_nb_power gets called on behalf of its second
argument, but with a non-None modulus it wouldn't check this, and
believed it was called on behalf of its first argument. Fix this
properly, and get rid of the code in _PyType_Lookup() that tries to
call _PyType_Ready(). But do leave a check for a NULL tp_mro there,
because this can still legitimately occur.
I'll fix this in 2.2.x too.
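Roughly, at the Python level (class P is made up for illustration):

    class P(object):
        def __pow__(self, exp, mod=None):
            return "P.__pow__"

    pow(P(), 3, 7)   # slot runs on behalf of the base: calls P.__pow__
    pow(3, P(), 7)   # slot runs on behalf of the *second* argument; with a
                     # non-None modulus it must not pretend P() is the base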
While I was at it, I added a tp_clear handler and changed the
tp_dealloc handler to share the clear_slots helper used by the
tp_clear handler.
Also tightened the rules for slot names: they must now be proper
identifiers (ignoring the dirty little fact that <ctype.h> is locale
sensitive).
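For example (class names made up):

    class A(object):
        __slots__ = ['width', 'height']     # fine: proper identifiers

    class B(object):
        __slots__ = ['1st', 'not-valid']    # now rejected with a TypeError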
Also set mp->flags = READONLY for the __weakref__ pseudo-slot.
Most of this is a 2.2 bugfix candidate; I'll apply it there myself.
Change the module constructor (module_init) to have the signature
__init__(name:str, doc=None); this prevents the call from type_new()
from succeeding. While we're at it, prevent repeated calls to
module_init on the same module from leaking the dict, by changing the
semantics so that __dict__ is only initialized if it is NULL.
Also adding a unittest, test_module.py.
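With the new signature, instantiating the module type directly looks
roughly like this:

    import types
    m = types.ModuleType("example")                  # __doc__ defaults to None
    m2 = types.ModuleType("example", "a docstring")
    assert m2.__name__ == "example" and m2.__doc__ == "a docstring"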
This is an incompatibility with 2.2: if anybody was instantiating the
module class before, their argument list was probably empty; so this
can't be backported to 2.2.x.
In the past, an object's tp_compare could return any value. In 2.2
the docs were tightened to require it to return -1, 0 or 1; and -1 for
an error.
We now issue a warning if the value is not in this range. When an
exception is raised, we allow -1 or -2 as the return value, since -2
will be the recommended return value for errors in the future. (Eventually
tp_compare will also be allowed to return +2, to indicate
NotImplemented; but that can only be implemented once we know all
extensions return a value in [-2...1]. Or perhaps it will require the
type to set a flag bit.)
I haven't decided yet whether to backport this to 2.2.x. The patch
applies fine. But is it fair to start warning in 2.2.2 about code
that worked flawlessly in 2.2.1?