diff --git a/Objects/typeobject.c b/Objects/typeobject.c
index e88f5f59248..d2e9b87be72 100644
--- a/Objects/typeobject.c
+++ b/Objects/typeobject.c
@@ -640,8 +640,11 @@ subtype_dealloc(PyObject *self)
     /* We get here only if the type has GC */
 
     /* UnTrack and re-Track around the trashcan macro, alas */
+    /* See explanation at end of function for full disclosure */
     PyObject_GC_UnTrack(self);
+    ++_PyTrash_delete_nesting;
     Py_TRASHCAN_SAFE_BEGIN(self);
+    --_PyTrash_delete_nesting;
     _PyObject_GC_TRACK(self); /* We'll untrack for real later */
 
     /* Maybe call finalizer; exit early if resurrected */
@@ -689,7 +692,97 @@ subtype_dealloc(PyObject *self)
     Py_DECREF(type);
 
   endlabel:
+    ++_PyTrash_delete_nesting;
     Py_TRASHCAN_SAFE_END(self);
+    --_PyTrash_delete_nesting;
+
+    /* Explanation of the weirdness around the trashcan macros:
+
+       Q. What do the trashcan macros do?
+
+       A. Read the comment titled "Trashcan mechanism" in object.h.
+          For one, this explains why there must be a call to GC-untrack
+          before the trashcan begin macro.  Without understanding the
+          trashcan code, the answers to the following questions don't
+          make sense.
+
+       Q. Why do we GC-untrack before the trashcan and then immediately
+          GC-track again afterward?
+
+       A. In the case that the base class is GC-aware, the base class
+          probably GC-untracks the object.  If it does that using the
+          UNTRACK macro, this will crash when the object is already
+          untracked.  Because we don't know what the base class does,
+          the only safe thing is to make sure the object is tracked
+          when we call the base class dealloc.  But...  The trashcan
+          begin macro requires that the object is *untracked* before it
+          is called.  So the dance becomes:
+
+             GC untrack
+             trashcan begin
+             GC track
+
+       Q. Why the bizarre (net-zero) manipulation of
+          _PyTrash_delete_nesting around the trashcan macros?
+
+       A. Some base classes (e.g. list) also use the trashcan mechanism.
+          The following scenario used to be possible:
+
+          - suppose the trashcan level is one below the trashcan limit
+
+          - subtype_dealloc() is called
+
+          - the trashcan limit is not yet reached, so the trashcan level
+            is incremented and the code between trashcan begin and end
+            is executed
+
+          - this destroys much of the object's contents, including its
+            slots and __dict__
+
+          - basedealloc() is called; this is really list_dealloc(), or
+            some other type which also uses the trashcan macros
+
+          - the trashcan limit is now reached, so the object is put on
+            the trashcan's to-be-deleted-later list
+
+          - basedealloc() returns
+
+          - subtype_dealloc() decrefs the object's type
+
+          - subtype_dealloc() returns
+
+          - later, the trashcan code starts deleting the objects from
+            its to-be-deleted-later list
+
+          - subtype_dealloc() is called *AGAIN* for the same object
+
+          - at the very least (if the destroyed slots and __dict__
+            don't cause problems) the object's type gets decref'ed a
+            second time, which is *BAD*!!!
+
+          The remedy is to make sure that if the code between trashcan
+          begin and end in subtype_dealloc() is called, the code
+          between trashcan begin and end in basedealloc() will also be
+          called.  This is done by decrementing the level after passing
+          into the trashcan block, and incrementing it just before
+          leaving the block.
+
+          But now it's possible that a chain of objects consisting solely
+          of objects whose deallocator is subtype_dealloc() will defeat
+          the trashcan mechanism completely: the decremented level means
+          that the effective level never reaches the limit.  Therefore, we
+          *increment* the level *before* entering the trashcan block, and
+          matchingly decrement it after leaving.  This means the trashcan
+          code will trigger a little early, but that's no big deal.
+
+       Q. Are there any live examples of code in need of all this
+          complexity?
+
+       A. Yes.  See SF bug 668433 for code that crashed (when Python
+          was compiled in debug mode) before the trashcan level
+          manipulations were added.  For more discussion, see SF
+          patches 581742, 575073 and bug 574207.
+    */
 }
 
 static PyTypeObject *solid_base(PyTypeObject *type);
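
The nesting arithmetic described in the comment is easy to lose track of, so here is a minimal, self-contained C sketch of the idea. It is not CPython code and not part of the patch: every toy_* / TOY_* name is made up for illustration and stands in for _PyTrash_delete_nesting, PyTrash_UNWIND_LEVEL, and the Py_TRASHCAN_SAFE_BEGIN/END macros. It only models the counter checks, not the real deposit-object / destroy-chain machinery.

/* toy_trashcan.c -- toy model of the net-zero nesting adjustments.
 * Build and run with:  cc toy_trashcan.c && ./a.out
 */
#include <stdio.h>

#define TOY_LIMIT 3           /* stands in for PyTrash_UNWIND_LEVEL */

static int nesting = 0;       /* stands in for _PyTrash_delete_nesting */
static int deferred = 0;      /* objects parked on the to-be-deleted-later list */

/* Simplified trashcan begin: returns 1 if the dealloc body may run now,
 * 0 if the object must be deferred for later deletion. */
static int trash_begin(void)
{
    if (nesting >= TOY_LIMIT) {
        ++deferred;
        return 0;
    }
    ++nesting;
    return 1;
}

/* Simplified trashcan end. */
static void trash_end(void)
{
    --nesting;
}

/* A GC-aware base dealloc (think list_dealloc) that also uses the trashcan. */
static void base_dealloc(void)
{
    if (!trash_begin())
        return;               /* deferred: dealloc would run AGAIN later */
    /* ... free the base class's own storage ... */
    trash_end();
}

/* subtype_dealloc with the patch's adjustments. */
static void subtype_dealloc(int chain_remaining)
{
    ++nesting;                         /* trigger the trashcan one level early */
    int run = trash_begin();
    --nesting;                         /* base_dealloc sees the level our check used */
    if (!run)
        return;

    /* ... clear __dict__ and slots; may recurse into another wrapper ... */
    if (chain_remaining > 0)
        subtype_dealloc(chain_remaining - 1);

    base_dealloc();                    /* its block runs iff ours did */

    ++nesting;                         /* restore what trash_end() expects */
    trash_end();
    --nesting;                         /* net change: zero */
}

int main(void)
{
    subtype_dealloc(10);               /* a chain of nested subclass wrappers */
    printf("deferred objects: %d, final nesting: %d\n", deferred, nesting);
    return 0;
}

With TOY_LIMIT set to 3, the chain is cut off once the counter reaches the limit (one object deferred), the base dealloc's block runs exactly when the subtype's did, and the counter ends back at zero, which is the "trigger a little early, net zero" behavior the comment describes. In the real interpreter the deferred object is later destroyed from the to-be-deleted list; the toy model merely counts it.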