have differing refcount semantics. If anyone sees a prettier way to
achieve the same ends, then please go for it.
I think this is the first time I've ever used Py_XINCREF.
* Fixes an incorrect variable in a PyDict_CheckExact() call.
* Allow general mapping locals arguments for the execfile() function
and exec statement (see the sketch below).
* Add tests.
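A minimal sketch of what this allows, in Python 2 syntax (the Namespace
class and the strings being executed are made up for illustration):

    class Namespace:
        """A non-dict mapping: just enough protocol for name binding."""
        def __init__(self):
            self.data = {}
        def __getitem__(self, key):
            return self.data[key]
        def __setitem__(self, key, value):
            self.data[key] = value
        def __delitem__(self, key):
            del self.data[key]

    locs = Namespace()
    exec "x = 1; y = x + 1" in {}, locs    # exec statement with mapping locals
    print locs["y"]                        # -> 2
    # execfile("some_script.py", {}, locs) accepts the same mapping.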
test_failing_import_sticks -- if an import raises an exception,
ensure that trying to import it again continues raising exceptions
test_failing_reload -- if a module loads OK, but a reload raises an
exception, ensure that the module is still in sys.modules, and
that its __dict__ reflects as much of the reload attempt as
succeeded. That doesn't seem like sane semantics, but it is
backward-compatible semantics <wink>.
PyImport_ReloadModule(): restore the module to sys.modules in error cases.
load_package(): semantically neutral refactoring from an earlier stab at
this patch; giving it a common error exit made the code
easier to follow, so that part is retained.
_RemoveModule(): new little utility to delete a key from sys.modules.
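A rough sketch of the reload behavior described above (the module name is
made up, the current directory is assumed to be on sys.path, and a real
test also has to worry about stale .pyc files):

    import os, sys

    open("brokenmod.py", "w").write("a = 1\n")
    import brokenmod                       # imports cleanly the first time
    assert brokenmod.a == 1

    # Rewrite the module so that a reload fails partway through.
    open("brokenmod.py", "w").write("a = 10\nb = 1 / 0\n")
    if os.path.exists("brokenmod.pyc"):
        os.remove("brokenmod.pyc")         # defeat bytecode caching for the sketch
    try:
        reload(brokenmod)
    except ZeroDivisionError:
        pass

    assert "brokenmod" in sys.modules      # still present after the failed reload
    assert brokenmod.a == 10               # the part of the reload that ran stuck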
of no more than 8 elements cannot fail.
listpop(): Take advantage of the fact that its calls to list_resize() and
list_ass_slice() can't fail. This is assert'ed in a debug build now, but
in an icky way. That is, you can't say:
assert(some_call() >= 0);
because then some_call() won't occur at all in a release build. So it
has to be a big pile of #ifdefs on Py_DEBUG (yuck), or the more pleasant:
status = some_call();
assert(status >= 0);
But in that case, compilers may whine in a release build, because status
appears unused then. I'm not certain the ugly trick I used here will
convince all compilers to shut up about status (status is always "used" now,
as the first (ignored) clause in a comma expression).
This fixes 15 spurious test failures on Windows (probably all due to
the test leaving a wrong path in sys.argv[0], which then prevented
regrtest.py from finding the expected-output files for tests running
after test_optparse).
* add expansion of default values in help text: the string
"%default" in an option's help string is expanded to str() of
that option's default value, or "none" if the option has no default
value (see the combined sketch after this list).
* bug #955889: option default values that happen to be strings are
now processed in the same way as values from the command line; this
allows generation of nicer help when using custom types. Can
be disabled with parser.set_process_default_values(False).
* bug #960515: don't crash when generating help for callback
options that specify 'type', but not 'dest' or 'metavar'.
* feature #815264: change the default help format for short options
that take an argument from e.g. "-oARG" to "-o ARG"; add
set_short_opt_delimiter() and set_long_opt_delimiter() methods to
HelpFormatter to allow (slight) customization of the formatting.
* patch #736940: internationalize Optik: all built-in user-
targeted literal strings are passed through gettext.gettext(). (If
you want translations (.po files), they're not included with Python
-- you'll find them in the Optik source distribution from
http://optik.sourceforge.net/ .)
* bug #878453: respect $COLUMNS environment variable for
wrapping help output.
* feature #988122: expand "%prog" in the 'description' passed
to OptionParser, just like in the 'usage' and 'version' strings.
(This is *not* done in the 'description' passed to OptionGroup.)
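A small usage sketch pulling together the "%default" expansion, the "%prog"
expansion in the description, and the new delimiter hooks (the option names,
description text, and help strings are made up):

    from optparse import OptionParser, IndentedHelpFormatter

    formatter = IndentedHelpFormatter()
    formatter.set_short_opt_delimiter(" ")    # show "-q QUALITY", not "-qQUALITY"
    formatter.set_long_opt_delimiter("=")     # "--quality=QUALITY" (the default)

    parser = OptionParser(
        description="%prog crunches image files.",   # %prog expands here too now
        formatter=formatter)
    parser.add_option("-q", "--quality", type="int", default=85,
                      help="JPEG quality [default: %default]")
    parser.add_option("-o", "--out",
                      help="output file [default: %default]")  # shows "none"
    parser.print_help()

    # To leave string defaults untouched rather than re-processing them:
    # parser.set_process_default_values(False)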
impossible to remember, so renamed one to something obvious. Headed
off potential signed-vs-unsigned compiler complaints I introduced by
changing the type of a variable to unsigned. Removed the need for the
tedious explanation about "backward pointer loops" by looping on an
int instead.
result.
list_resize(): Document the intent. Code is increasingly relying on
subtle aspects of its behavior, and they deserve to be spelled out.
list_ass_slice(): A bit more simplification, by giving it a common
error exit and initializing more values.
Be clearer in comments about what "size" means (# of elements? # of
bytes?).
While the number of elements in a list slice must fit in an int, there's
no guarantee that the number of bytes occupied by the slice will. That
malloc() and memmove() take size_t arguments is a hint about that <wink>.
So changed to use size_t where appropriate.
ihigh - ilow should always be >= 0, but we never asserted that. We do
now.
The loop decref'ing the recycled slice had a subtle insecurity: C doesn't
guarantee that a pointer one slot *before* an array will compare "less
than" to a pointer within the array (it does guarantee that a pointer
one beyond the end of the array compares as expected). This was actually
an issue in KSR's C implementation, so isn't purely theoretical. Python
probably has other "go backwards" loops with a similar glitch.
list_clear() is OK (it marches an integer backwards, not a pointer).
though I tried to be very careful. This is a slight simplification, and it
adds a new feature: a small stack-allocated "recycled" array for the cases
when we don't remove too many items.
It allows PyList_SetSlice() to never fail if:
* you are sure that the object is a list; and
* you either do not remove more than 8 items, or clear the list.
This makes a number of other places in the source code correct again -- there
are some places that delete a single item without checking for MemoryErrors
raised by PyList_SetSlice(), or that clear the whole list, and sometimes the
context doesn't allow an error to be propagated.
invariants allows the ob_item != NULL check to be replaced with an
assertion.
* Added assertions to list_init() which document and verify that the
tp_new slot establishes the invariants. This may preclude a future
bug if a custom tp_new slot is written.
to NULL during the lifetime of the object.
* listobject.c nevertheless did not conform to the other invariants,
either; fixed.
* listobject.c now uses list_clear() as the obvious internal way to clear
a list, instead of abusing list_ass_slice() for that. It makes it easier
to enforce the invariant about ob_item == NULL.
* listsort() sets allocated to -1 during sort; any mutation will set it
to a value >= 0, so it is a safe way to detect mutation. A negative
value for allocated does not cause a problem elsewhere currently.
test_sort.py has a new test for this fix (sketched after this list).
* listsort() leak: if items were added to the list during the sort, AND if
these items had a __del__ that put still more stuff into the list,
then that extra stuff (and the PyObject** array holding it) was
overwritten at the end of listsort() and never released.
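A rough sketch of the kind of mutation the new test_sort.py test exercises:
a comparison that appends to the list being sorted, which the allocated == -1
sentinel lets listsort() detect (the class name is made up, and the exact
message text may differ):

    class Mutator(object):
        def __init__(self):
            self.victim = None
        def __lt__(self, other):
            self.victim.append("surprise")   # mutate the list mid-sort
            return id(self) < id(other)

    data = [Mutator() for i in range(8)]
    for item in data:
        item.victim = data

    try:
        data.sort()
    except ValueError, err:
        print err                            # e.g. "list modified during sort"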