Commit Graph

229 Commits

Author SHA1 Message Date
Raymond Hettinger fb09f0e85c Finalize the freelist of list objects. 2004-10-07 03:58:07 +00:00
Raymond Hettinger aa241e0149 Check in Tim's fix for an error discussed on python-dev.
Also, add a test case.

Formerly, the list_extend() code used several local variables to remember
its state across iterations.  Since an iteration could call arbitrary
Python code, it was possible for the list state to be changed.  The new
code uses dynamic structure references instead of C locals, so they
are always up-to-date.

After list_resize() is called, the list's size has been updated but the new
cells are filled with NULLs.  These needed to be filled before arbitrary
iteration code was called; otherwise, that code could attempt to modify
a list that was in a semi-invalid state.  The solution was to change
the ob_size field back to a value reflecting the actual number of valid
cells.
2004-09-26 19:24:20 +00:00
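
A minimal sketch of the hazard described above, with an illustrative FakeList
standing in for the real PyListObject and next_item() standing in for
PyIter_Next(), which can run arbitrary Python code:

    #include <stdio.h>

    typedef struct {
        int ob_size;      /* number of valid cells */
        void **ob_item;   /* element array */
    } FakeList;

    /* Stand-in for PyIter_Next(): in real code, arbitrary Python code
       could run here and resize `self`, invalidating cached pointers. */
    static void *next_item(FakeList *self) {
        (void)self;
        return NULL;      /* pretend the iterator is already exhausted */
    }

    static void extend_unsafe(FakeList *self) {
        void **items = self->ob_item;   /* BAD: cached across iterations */
        int i = self->ob_size;
        void *v;
        while ((v = next_item(self)) != NULL)
            items[i++] = v;             /* may write through a stale pointer */
        self->ob_size = i;
    }

    static void extend_safe(FakeList *self) {
        void *v;
        while ((v = next_item(self)) != NULL) {
            /* GOOD: re-derive state from the structure on every pass */
            self->ob_item[self->ob_size] = v;
            self->ob_size++;
        }
    }

    int main(void) {
        FakeList list = {0, NULL};
        extend_unsafe(&list);
        extend_safe(&list);
        printf("size: %d\n", list.ob_size);
        return 0;
    }
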
Raymond Hettinger a84f3abb9e SF #1022910: Conserve memory with list.pop()
The list resizing scheme only downsized when more than 16 elements were
removed in a single step:  del a[100:120].   As a result, the list would
never shrink when popping elements off one at a time.

This patch makes it shrink whenever more than half of the space is unused.

Also, at Tim's suggestion, renamed _new_size to new_allocated.  This makes
the code easier to understand.
2004-09-12 19:53:07 +00:00
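
The downsizing rule above, in a small illustrative sketch (not the actual
list_resize() source):

    #include <stdio.h>

    static int needs_resize(int allocated, int newsize) {
        /* Grow if we ran out of room; shrink once more than half is unused. */
        return newsize > allocated || newsize < allocated / 2;
    }

    int main(void) {
        printf("%d\n", needs_resize(16, 20)); /* 1: must grow */
        printf("%d\n", needs_resize(16, 7));  /* 1: over half unused -> shrink */
        printf("%d\n", needs_resize(16, 9));  /* 0: keep the slack */
        return 0;
    }
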
Andrew M. Kuchling 55be9eab38 Typo fix: 'comparisions' is not a word 2004-09-10 12:59:54 +00:00
Neal Norwitz f076953eb1 SF patch #1005778, Fix seg fault if list object is modified during list.index()
Backport candidate
2004-08-13 03:18:29 +00:00
Brett Cannon 651dd52b3a Previous commit was viewed as "perverse". Changed to just cast the unused
variable to void.

Thanks to Sjoerd Mullender for the suggested change.
2004-08-08 21:21:18 +00:00
Brett Cannon 5ad28e14b6 Tweak previous patch to silence a warning about the unused left value in the
comma expression in listpop() that was being returned.  Still essentially
unused (as it is meant to be), but now the compiler thinks it is worth
*something* by having it incremented.
2004-08-03 04:53:29 +00:00
Tim Peters 8fc4a91665 list_ass_slice(): Document the obscure new intent that deleting a slice
of no more than 8 elements cannot fail.

listpop():  Take advantage of the fact that its calls to list_resize() and
list_ass_slice() can't fail.  This is assert'ed in a debug build now, but
in an icky way.  That is, you can't say:

	assert(some_call() >= 0);

because then some_call() won't occur at all in a release build.  So it
has to be a big pile of #ifdefs on Py_DEBUG (yuck), or the pleasant:

        status = some_call();
        assert(status >= 0);

But in that case, compilers may whine in a release build, because status
appears unused then.  I'm not certain the ugly trick I used here will
convince all compilers to shut up about status (status is always "used" now,
as the first (ignored) clause in a comma expression).
2004-07-31 21:53:19 +00:00
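
The problem and its eventual resolution, in one small program; some_call() is
a stand-in, and the (void) cast is the form the code finally settled on (per
the Brett Cannon checkins further up):

    #include <assert.h>

    static int some_call(void) { return 0; }

    int main(void) {
        int status;

        /* Wrong: under NDEBUG the assert vanishes, so some_call()
           would never run at all in a release build. */
        /* assert(some_call() >= 0); */

        status = some_call();
        assert(status >= 0);

        /* In a release build status now looks unused; casting it to
           void keeps compilers quiet without comma-expression trickery. */
        (void)status;
        return 0;
    }
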
Tim Peters 7357222d0e list_ass_slice(): The difference between "recycle" and "recycled" was
impossible to remember, so renamed one to something obvious.  Headed
off potential signed-vs-unsigned compiler complaints I introduced by
changing the type of a variable to unsigned.  Removed the need for the
tedious explanation about "backward pointer loops" by looping on an
int instead.
2004-07-31 02:54:42 +00:00
Tim Peters 8d9eb10c29 Armin asked for a list_ass_slice review in his checkin, so here's the
result.

list_resize():  Document the intent.  Code is increasingly relying on
subtle aspects of its behavior, and they deserve to be spelled out.

list_ass_slice():  A bit more simplification, by giving it a common
error exit and initializing more values.

Be clearer in comments about what "size" means (# of elements?  # of
bytes?).

While the number of elements in a list slice must fit in an int, there's
no guarantee that the number of bytes occupied by the slice will.  That
malloc() and memmove() take size_t arguments is a hint about that <wink>.
So changed to use size_t where appropriate.

ihigh - ilow should always be >= 0, but we never asserted that.  We do
now.

The loop decref'ing the recycled slice had a subtle insecurity:  C doesn't
guarantee that a pointer one slot *before* an array will compare "less
than" to a pointer within the array (it does guarantee that a pointer
one beyond the end of the array compares as expected).  This was actually
an issue in KSR's C implementation, so isn't purely theoretical.  Python
probably has other "go backwards" loops with a similar glitch.
list_clear() is OK (it marches an integer backwards, not a pointer).
2004-07-31 02:24:20 +00:00
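
The backward-loop pitfall in isolation; the commented-out loop is the
undefined pattern, and the integer loop is the safe form list_clear() uses:

    #include <stdio.h>

    int main(void) {
        int a[4] = {1, 2, 3, 4};
        int *p;
        int i;

        /* Legal: a pointer one past the end may be formed and compared. */
        for (p = a; p < a + 4; p++)
            printf("%d ", *p);

        /* Undefined: this loop terminates only after forming a - 1 and
           comparing it to a.  It "works" on most machines, but C gives
           no guarantee (KSR's C was a real-world exception).
        for (p = a + 3; p >= a; p--)
            printf("%d ", *p);
        */

        /* Safe alternative, as in list_clear(): march an integer backwards. */
        for (i = 3; i >= 0; i--)
            printf("%d ", a[i]);
        printf("\n");
        return 0;
    }
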
Armin Rigo 1dd04a02e0 This is a reorganization of list_ass_slice(). It should probably be reviewed,
though I tried to be very careful.  This is a slight simplification, and it
adds a new feature: a small stack-allocated "recycled" array for the cases
when we don't remove too many items.

It allows PyList_SetSlice() to never fail if:
* you are sure that the object is a list; and
* you either do not remove more than 8 items, or clear the list.

This makes a number of other places in the source code correct again -- there
are some places that delete a single item without checking for MemoryErrors
raised by PyList_SetSlice(), or that clear the whole list, and sometimes the
context doesn't allow an error to be propagated.
2004-07-30 11:38:22 +00:00
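
An illustrative sketch of the stack-buffer technique (delete_range and
RECYCLE_ON_STACK are names made up for the example; the real code lives in
list_ass_slice()):

    #include <stdlib.h>
    #include <string.h>

    #define RECYCLE_ON_STACK 8

    static int delete_range(void **items, size_t n, size_t lo, size_t hi) {
        void *recycle_on_stack[RECYCLE_ON_STACK];
        void **recycled = recycle_on_stack;
        size_t d = hi - lo;

        if (d > RECYCLE_ON_STACK) {
            /* Only large deletions can fail: they need a heap buffer. */
            recycled = malloc(d * sizeof(void *));
            if (recycled == NULL)
                return -1;
        }
        memcpy(recycled, items + lo, d * sizeof(void *));  /* save for decref */
        memmove(items + lo, items + hi, (n - hi) * sizeof(void *));
        /* ... Py_DECREF each pointer in recycled[0..d) here ... */
        if (recycled != recycle_on_stack)
            free(recycled);
        return 0;
    }

    int main(void) {
        void *items[5] = {0};
        return delete_range(items, 5, 1, 3);  /* <= 8 items: cannot fail */
    }
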
Armin Rigo a37bbf2e5b What if you call lst.__init__() while it is being sorted? :-)
The invariant checks would break.
2004-07-30 11:20:18 +00:00
Raymond Hettinger c0aaa2db4f * Simplify and speed up list_resize().  Relying on the newly documented
  invariants allows the ob_item != NULL check to be replaced with an
  assertion.

* Added assertions to list_init() which document and verify that the
  tp_new slot establishes the invariants.  This may preclude a future
  bug if a custom tp_new slot is written.
2004-07-29 23:31:29 +00:00
Armin Rigo 93677f075d * drop the unreasonable list invariant that ob_item should never come back
to NULL during the lifetime of the object.

* listobject.c nevertheless did not conform to the other invariants
  either; fixed.

* listobject.c now uses list_clear() as the obvious internal way to clear
  a list, instead of abusing list_ass_slice() for that.  It makes it easier
  to enforce the invariant about ob_item == NULL.

* listsort() sets allocated to -1 during sort; any mutation will set it
  to a value >= 0, so it is a safe way to detect mutation.  A negative
  value for allocated does not cause a problem elsewhere currently.
  test_sort.py has a new test for this fix.

* listsort() leak: if items were added to the list during the sort, AND if
  these items had a __del__ that put still more stuff into the list,
  then that extra stuff (and the PyObject** array holding it) was
  overwritten at the end of listsort() and never released.
2004-07-29 12:40:23 +00:00
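
A toy model of the allocated = -1 sentinel described in the listsort()
bullet above (FakeList and mutate() are illustrative only):

    #include <stdio.h>

    typedef struct { int ob_size; int allocated; } FakeList;

    static void mutate(FakeList *l) {
        l->allocated = l->ob_size;   /* every resize path writes a value >= 0 */
    }

    int main(void) {
        FakeList l = {3, 4};
        l.allocated = -1;            /* sentinel parked during the sort */
        mutate(&l);                  /* e.g. a comparison or __del__ mutates us */
        if (l.allocated != -1)
            printf("list modified during sort\n");
        return 0;
    }
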
Armin Rigo f414fc4004 Minor memory leak. 2004-07-29 10:56:55 +00:00
Tim Peters 51b4ade306 Fix obscure breakage (relative to 2.3) in listsort: the test for list
mutation during list.sort() used to rely on the fact that listobject.c
always NULL'ed ob_item when ob_size fell to 0.  That's no longer true, so
the test for list mutation during a sort is no longer reliable.  Changed the
test to rely instead on the fact that listobject.c now never NULLs out
ob_item after (if ever) it gets a non-NULL value.  This new assumption is
also documented now, as a required invariant in listobject.h.

The new assumption allowed some real simplification to some of the
hairier code in listsort(), so is a Good Thing on that count.
2004-07-29 04:07:15 +00:00
Tim Peters b38e2b61b3 Trimmed trailing whitespace. 2004-07-29 02:29:26 +00:00
Tim Peters 3986d4e660 PyList_New(): we went to all the trouble of computing and bounds-checking
the size_t nbytes, and passed nbytes to malloc, so it was confusing to
effectively recompute the same thing from scratch in the memset call.
2004-07-29 02:28:42 +00:00
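
The same cleanup in miniature: compute the byte count once and reuse it for
both malloc() and memset():

    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        size_t size = 10;
        size_t nbytes = size * sizeof(void *);  /* computed (and checkable) once */
        void **items = malloc(nbytes);
        if (items == NULL)
            return 1;
        memset(items, 0, nbytes);   /* not: size * sizeof(void *) all over again */
        free(items);
        return 0;
    }
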
Nicholas Bastin 9ba301e589 Moved SunPro warning suppression into pyport.h and out of individual
modules and objects.
2004-07-15 15:54:05 +00:00
Nicholas Bastin 1ce9e4cfc1 Fixed end-of-loop code not reached warning when using SunPro C 2004-06-17 18:27:18 +00:00
Raymond Hettinger fdfe618228 Nits:
- Neatened the braces in PyList_New().
- Made sure "indexerr" was initialized to NULL.
- Factored if blocks in PyList_Append().
- Made sure "allocated" is initialized in list_init().
2004-05-05 06:28:16 +00:00
Raymond Hettinger 0468e416c1 SF patch #947476: Apply freelist technique to lists
Re-use list object bodies.  Saves calls to malloc() and free() for
faster list instantiation and deallocation.
2004-05-05 05:37:53 +00:00
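
A rough sketch of the freelist idea (the FakeList type and the limit of 80
bodies are illustrative, not the patch's actual details):

    #include <stdio.h>
    #include <stdlib.h>

    #define MAXFREELISTS 80

    typedef struct { int ob_size; void **ob_item; } FakeList;

    static FakeList *free_lists[MAXFREELISTS];
    static int num_free_lists = 0;

    static FakeList *list_new(void) {
        if (num_free_lists > 0)
            return free_lists[--num_free_lists];  /* reuse a dead body */
        return malloc(sizeof(FakeList));
    }

    static void list_dealloc(FakeList *op) {
        if (num_free_lists < MAXFREELISTS)
            free_lists[num_free_lists++] = op;    /* park it for reuse */
        else
            free(op);
    }

    int main(void) {
        FakeList *a = list_new();
        list_dealloc(a);
        FakeList *b = list_new();                 /* same body, no malloc */
        printf("reused: %s\n", a == b ? "yes" : "no");
        free(b);
        return 0;
    }
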
Raymond Hettinger 45d0b5cc44 Use Py_RETURN_NONE macro where applicable. 2004-04-12 17:21:03 +00:00
Raymond Hettinger 501f02cd02 Small refactoring saving one function and eliminating some indirection.
* Applied app1() to listappend().
* Inlined ins() into its one remaining caller.
2004-04-12 14:01:16 +00:00
Raymond Hettinger 40a03821ae * Specialize ins1() into app1() for appends. Saves several unnecessary
steps and further improves the speed of list append.

* Add guards to the list iterator length method to handle corner cases.
2004-04-12 13:05:09 +00:00
Armin Rigo 70d172dda4 Get rid of listextend_internal() and explain why the special case
'a.extend(a)' isn't so special anyway.
2004-03-20 22:19:23 +00:00
Raymond Hettinger 435bf58b7b Make iterators length transparent where possible. 2004-03-18 22:43:10 +00:00
Raymond Hettinger d4ff741e78 Revert last change. Found an application that was worse off with resize
exact turned on.  The tiny space savings weren't worth the additional time
and code.
2004-03-15 09:01:31 +00:00
Raymond Hettinger 0e91643bd2 list_resize() now has an "exact" option for bypassing the overallocation
scheme in situations that likely won't benefit from it.  This further
improves memory utilization relative to Py2.3, which always over-allocates
except for PyList_New().

Situations expected to benefit from over-allocation:
    list.insert(), list.pop(), list.append(), and list.extend()

Situations deemed unlikely to benefit:
    list_inplace_repeat, list_ass_slice, list_ass_subscript

The grayest area was listextend_internal(), which only runs
when the argument is a list or a tuple.  This could be viewed as
a one-time fixed length addition or it could be viewed as wrapping
a series of appends.  I left its over-allocation turned on but
could be convinced otherwise.
2004-03-14 06:42:23 +00:00
Raymond Hettinger 42bec93e5c Make PySequence_Fast_ITEMS public. (Thanks Skip.) 2004-03-12 16:38:17 +00:00
Raymond Hettinger 6e058d70ef * Eliminate duplicate call to PyObject_Size().
(Spotted by Michael Hudson.)

* Now that "selflen" is no longer inside a loop, it should not be a
  register variable.
2004-03-12 15:30:38 +00:00
Raymond Hettinger c1e4f9dd92 Use a new macro, PySequence_Fast_ITEMS to factor out code common to
three recent optimizations.  Aside from reducing code volume, it
increases readability.
2004-03-12 08:04:00 +00:00
Raymond Hettinger 57c4542bcd Now that list.extend() is at the root of many list operations, it becomes
worthwhile to inline the call to PyIter_Next().

Saves another 15% on most list operations that accept a general
iterable argument (such as the list constructor).
2004-03-11 09:48:18 +00:00
Raymond Hettinger 8ca92ae54c Eliminate a big block of duplicate code in PySequence_List() by
exposing _PyList_Extend().
2004-03-11 09:13:12 +00:00
Raymond Hettinger 97bc618229 list_inplace_concat() is now expressed in terms of list_extend() which
avoids creating an intermediate tuple for iterable arguments other than
lists or tuples.

In other words, a+=b no longer requires extra memory when b is not a
list or tuple.  The list and tuple cases are unchanged.
2004-03-11 07:34:19 +00:00
Raymond Hettinger 66d31f8f38 Use memcpy() instead of memmove() when the buffers are known to be distinct. 2004-03-10 11:44:04 +00:00
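
The distinction in one small program: memmove() tolerates overlapping
buffers, memcpy() does not, but memcpy() can be faster when the buffers are
known to be distinct:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char a[8] = "abcdefg";
        char b[8];
        memcpy(b, a, 8);          /* distinct buffers: memcpy is fine */
        memmove(a + 1, a, 6);     /* overlapping: memmove is required */
        a[7] = '\0';
        printf("%s %s\n", a, b);  /* prints: aabcdef abcdefg */
        return 0;
    }
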
Raymond Hettinger ef9bf4031a Tidied up the implementations of reversed (including the custom ones
for xrange and list objects).

* list.__reversed__ now checks the length of the sequence object before
  calling PyList_GET_ITEM() because the mutable sequence could have changed
  length (see the sketch below).

* all three implementations are now transparent with respect to length and
  maintain the invariant len(it) == len(list(it)) even when the underlying
  sequence mutates.

* __builtin__.reversed() now frees the underlying sequence as soon
  as the iterator is exhausted.

* the code paths were rearranged so that the most common paths
  do not require a jump.
2004-03-10 10:10:42 +00:00
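
A sketch of the length check from the first bullet above, with an
illustrative RevIter standing in for the real reversed-iterator object:

    #include <stdio.h>

    typedef struct { int index; } RevIter;

    static int rev_next(RevIter *it, const int *seq, int current_len) {
        /* The underlying list may have shrunk since the iterator was made,
           so re-check the saved index against the current length. */
        if (it->index >= 0 && it->index < current_len) {
            int v = seq[it->index];
            it->index--;
            return v;
        }
        it->index = -1;   /* exhausted */
        return -1;
    }

    int main(void) {
        int data[5] = {10, 20, 30, 40, 50};
        RevIter it = {4};
        int len = 5;
        int v;
        while ((v = rev_next(&it, data, len)) != -1) {
            printf("%d ", v);
            len = 3;      /* simulate the list shrinking mid-iteration */
        }
        printf("\n");     /* prints 50, then stops: index 3 is now invalid */
        return 0;
    }
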
Raymond Hettinger a6366fe085 Optimize inner loops for subscript, repeat, and concat. 2004-03-09 13:05:22 +00:00
Raymond Hettinger f889e10c19 Optimize slice assignments.
* Replace sprintf message with a constant message string -- this error
  message ran on every invocation except straight deletions but it was
  only needed when the rhs was not iterable.  The message was also
  out-of-date and did not reflect that iterable arguments were allowed.

* For inner loops that do not make ref count adjustments, use memmove()
  for fast copying and better readability.

* For inner loops that do make ref count adjustments, speed them up by
  factoring out the constant structure reference and using vitem[] instead.
2004-03-09 08:04:33 +00:00
Raymond Hettinger b7d05db0be Optimize tuple_slice() and make further improvements to list_slice()
and list.extend().  Factoring the inner loops to remove the constant
structure references and fixed offsets gives speedups ranging from
20% to 30%.
2004-03-08 07:25:05 +00:00
Raymond Hettinger 99842b6534 Small optimizations for list_slice() and list_extend_internal().
* Using addition instead of subtraction on array indices allows the
  compiler to use a fast addressing mode; see the sketch below.  Saves about 10%.

* Using PyTuple_GET_ITEM and PyList_SET_ITEM is about 7% faster than
  PySequence_Fast_GET_ITEM, which has to make a list check on every pass.
2004-03-08 05:56:15 +00:00
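
The indexing tweak from the first bullet above, shown on plain arrays
(before: dest[i - ilow] = src[i], subtracting every pass; after: one rising
index over both arrays):

    #include <stdio.h>

    int main(void) {
        int src[8] = {0, 1, 2, 3, 4, 5, 6, 7};
        int dest[4];
        int ilow = 2, ihigh = 6;
        int i, n = ihigh - ilow;

        /* A single forward index; src[i + ilow] maps onto a base+index
           addressing mode on most targets. */
        for (i = 0; i < n; i++)
            dest[i] = src[i + ilow];

        for (i = 0; i < n; i++)
            printf("%d ", dest[i]);   /* prints: 2 3 4 5 */
        printf("\n");
        return 0;
    }
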
Raymond Hettinger fa6c6f8a73 Keep the list.pop() optimization while restoring the possibility
of types other than PyInt being accepted for the optional argument.
(Spotted by Neal Norwitz.)
2004-02-19 06:12:06 +00:00
Raymond Hettinger 9eb86b3c7c Double the speed of list.pop() which was spending most of its time parsing
arguments.
2004-02-17 11:36:16 +00:00
Raymond Hettinger 90a39bf12c Refactor list_extend() and list_fill() for gains in code size, memory
utilization, and speed:

* Moved the responsibility for emptying the previous list from list_fill
  to list_init.

* Replaced the code in list_extend with the superior code from list_fill.

* Eliminated list_fill.

Results:

* list.extend() no longer creates an intermediate tuple except to handle
  the special case of x.extend(x).  This saves memory and time.

* list.extend(x) runs
    5 to 10% faster when x is a list or tuple
    15% faster when x is an iterable not defining __len__
    twice as fast when x is an iterable defining __len__

* the code is about 15 lines shorter and no longer duplicates
  functionality.
2004-02-15 03:57:00 +00:00
Raymond Hettinger ab517d2eac Fine-tune the speed/space trade-off for overallocating small lists.
The Py2.3 approach overallocated small lists by up to 8 elements.
The last checkin limited this to one but slowed down (by 20 to 30%)
the creation of small lists with between 3 and 8 elements.

This tune-up balances the two, limiting overallocation to 3 elements
(significantly reducing space consumption from Py2.3) and running faster
than the previous checkin.

The first part of the growth pattern (0, 4, 8, 16) neatly meshes with
allocators that trigger data movement only when crossing a power of two
boundary.  Also, the even numbers mesh well with common data alignments.
2004-02-14 18:34:46 +00:00
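
The growth rule behind that pattern, as best it can be reconstructed from
the listobject.c of this era (over-allocate by about 1/8th plus a small
constant):

    #include <stdio.h>

    /* Repeated growth from empty yields 0, 4, 8, 16, 25, 35, 46, 58, ... */
    static int new_allocated(int newsize) {
        return newsize + (newsize >> 3) + (newsize < 9 ? 3 : 6);
    }

    int main(void) {
        int sizes[] = {1, 5, 9, 17, 33};
        int i;
        for (i = 0; i < 5; i++)
            printf("size %2d -> allocated %2d\n",
                   sizes[i], new_allocated(sizes[i]));
        return 0;
    }
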
Raymond Hettinger 2731ae4d6d Fix missing return value. Spotted by Neal Norwitz 2004-02-14 03:07:21 +00:00
Raymond Hettinger cb3e580ebc Optimize list.pop() for the common special case of popping off the end.
More than doubles its speed.
2004-02-13 18:36:31 +00:00
Raymond Hettinger 4bb9540dd6 * Optimized list appends and pops by making fewer calls to the underlying system
realloc().  This is achieved by tracking the overallocation size in a new
  field and using that information to skip calls to realloc() whenever
  possible.

* Simplified and tightened the amount of overallocation.  For larger lists,
  this overallocates by 1/8th (compared to the previous scheme, which ranged
  from 1/4th to 1/32nd over-allocation).  For smaller lists (n<6), the
  maximum overallocation is one element (formerly it could be up to eight).
  This saves memory in applications with large numbers of small lists.

* Eliminated the NRESIZE macro in favor of a new, static list_resize function
  that encapsulates the resizing logic.  Converting this back to a macro would
  give a small (under 1%) speed-up.  This was too small to warrant the loss
  of readability, maintainability, and de-coupling.

* Some functions using NRESIZE had grown unnecessarily complex in their
  efforts to bend to the macro's calling pattern.  With the new list_resize
  function in place, those other functions could be simplified.  That is
  being saved for a separate patch.

* The ob_item==NULL check could be eliminated from the new list_resize
  function.  This would entail finding each piece of code that sets ob_item
  to NULL and adding a new line to invalidate the overallocation tracking
  field.  Rather than impose a new requirement on other pieces of list code,
  it was preferred to leave the NULL check in place and retain the benefits
  of decoupling, maintainability and information hiding (only PyList_New()
  and listsort() need to know about the new field).  This approach also
  reduces the odds of breaking an extension module.

(Collaborative effort by Raymond Hettinger, Hye-Shik Chang, Tim Peters,
 and Armin Rigo.)
2004-02-13 11:36:39 +00:00
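
An illustrative reconstruction of the technique in the checkin above
(FakeList and the simplified growth factor are stand-ins, not the real
list_resize):

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        void **ob_item;
        int ob_size;
        int allocated;    /* the new overallocation-tracking field */
    } FakeList;

    static int list_resize(FakeList *self, int newsize) {
        int new_allocated;
        void **items;
        /* Fast path: still inside the slack we already own, and not so
           small that we should give memory back -- skip realloc(). */
        if (self->ob_item != NULL &&
            newsize <= self->allocated && newsize >= self->allocated / 2) {
            self->ob_size = newsize;
            return 0;
        }
        new_allocated = newsize + (newsize >> 3) + 1;   /* ~1/8th extra */
        items = realloc(self->ob_item, new_allocated * sizeof(void *));
        if (items == NULL)
            return -1;
        self->ob_item = items;
        self->ob_size = newsize;
        self->allocated = new_allocated;
        return 0;
    }

    int main(void) {
        FakeList l = {NULL, 0, 0};
        int i;
        for (i = 1; i <= 10; i++)           /* ten appends, few reallocs */
            if (list_resize(&l, i) < 0)
                return 1;
        printf("size=%d allocated=%d\n", l.ob_size, l.allocated);
        free(l.ob_item);
        return 0;
    }
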
Tim Peters 7049d816fb Revert change accidentally checked in as part of a whitespace normalization
patch.
2004-01-18 20:31:02 +00:00
Tim Peters 58eb11cf62 Whitespace normalization. 2004-01-18 20:29:55 +00:00