cpython/Objects/tupleobject.c

/***********************************************************
Copyright 1991-1995 by Stichting Mathematisch Centrum, Amsterdam,
The Netherlands.
All Rights Reserved
Permission to use, copy, modify, and distribute this software and its
documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that
both that copyright notice and this permission notice appear in
supporting documentation, and that the names of Stichting Mathematisch
Centrum or CWI or Corporation for National Research Initiatives or
CNRI not be used in advertising or publicity pertaining to
distribution of the software without specific, written prior
permission.
While CWI is the initial source for this software, a modified version
is made available by the Corporation for National Research Initiatives
(CNRI) at the Internet address ftp://ftp.python.org.
STICHTING MATHEMATISCH CENTRUM AND CNRI DISCLAIM ALL WARRANTIES WITH
REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH
CENTRUM OR CNRI BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL
DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.
******************************************************************/
/* Tuple object implementation */
#include "Python.h"
/* Speed optimization to avoid frequent malloc/free of small tuples */
#ifndef MAXSAVESIZE
#define MAXSAVESIZE 20 /* Largest tuple to save on free list */
#endif
#ifndef MAXSAVEDTUPLES
#define MAXSAVEDTUPLES 2000 /* Maximum number of tuples of each size to save */
#endif
#if MAXSAVESIZE > 0
/* Entries 1 up to MAXSAVESIZE-1 are per-size free lists, linked
   through ob_item[0] of the cached tuples; entry 0 is the empty
   tuple () of which at most one instance will be allocated.
*/
static PyTupleObject *free_tuples[MAXSAVESIZE];
static int num_free_tuples[MAXSAVESIZE];
#endif
#ifdef COUNT_ALLOCS
int fast_tuple_allocs;
int tuple_zero_allocs;
#endif
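
/* The free lists above are linked through ob_item[0] of each cached
   tuple, so no separate bookkeeping structure is needed: popping a
   cached tuple of a given size amounts to

        op = free_tuples[size];
        free_tuples[size] = (PyTupleObject *) op->ob_item[0];
        num_free_tuples[size]--;

   and tupledealloc() pushes a tuple back by storing the old list head
   in ob_item[0].  (Illustrative summary of the code below, not a
   separate API.) */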
PyObject *
PyTuple_New(size)
register int size;
{
register int i;
register PyTupleObject *op;
if (size < 0) {
PyErr_BadInternalCall();
return NULL;
}
#if MAXSAVESIZE > 0
if (size == 0 && free_tuples[0]) {
op = free_tuples[0];
Py_INCREF(op);
#ifdef COUNT_ALLOCS
tuple_zero_allocs++;
#endif
return (PyObject *) op;
}
if (0 < size && size < MAXSAVESIZE &&
(op = free_tuples[size]) != NULL)
{
free_tuples[size] = (PyTupleObject *) op->ob_item[0];
num_free_tuples[size]--;
#ifdef COUNT_ALLOCS
fast_tuple_allocs++;
#endif
/* PyObject_InitVar is inlined */
#ifdef Py_TRACE_REFS
op->ob_size = size;
op->ob_type = &PyTuple_Type;
#endif
_Py_NewReference((PyObject *)op);
}
else
#endif
{
int nbytes = size * sizeof(PyObject *);
/* Check for overflow */
if (nbytes / sizeof(PyObject *) != (size_t)size ||
(nbytes += sizeof(PyTupleObject) - sizeof(PyObject *))
<= 0)
{
return PyErr_NoMemory();
}
/* PyObject_NewVar is inlined */
op = (PyTupleObject *) PyObject_MALLOC(nbytes);
if (op == NULL)
return PyErr_NoMemory();
PyObject_INIT_VAR(op, &PyTuple_Type, size);
}
for (i = 0; i < size; i++)
op->ob_item[i] = NULL;
#if MAXSAVESIZE > 0
if (size == 0) {
free_tuples[0] = op;
++num_free_tuples[0];
Py_INCREF(op); /* extra INCREF so that this is never freed */
}
#endif
return (PyObject *) op;
}
int
PyTuple_Size(op)
register PyObject *op;
{
if (!PyTuple_Check(op)) {
PyErr_BadInternalCall();
return -1;
}
else
return ((PyTupleObject *)op)->ob_size;
}
PyObject *
PyTuple_GetItem(op, i)
register PyObject *op;
register int i;
{
if (!PyTuple_Check(op)) {
PyErr_BadInternalCall();
return NULL;
}
if (i < 0 || i >= ((PyTupleObject *)op) -> ob_size) {
PyErr_SetString(PyExc_IndexError, "tuple index out of range");
return NULL;
}
return ((PyTupleObject *)op) -> ob_item[i];
}
int
PyTuple_SetItem(op, i, newitem)
register PyObject *op;
register int i;
PyObject *newitem;
{
register PyObject *olditem;
register PyObject **p;
if (!PyTuple_Check(op) || op->ob_refcnt != 1) {
Py_XDECREF(newitem);
PyErr_BadInternalCall();
return -1;
}
if (i < 0 || i >= ((PyTupleObject *)op) -> ob_size) {
Py_XDECREF(newitem);
PyErr_SetString(PyExc_IndexError,
"tuple assignment index out of range");
return -1;
}
p = ((PyTupleObject *)op) -> ob_item + i;
olditem = *p;
*p = newitem;
Py_XDECREF(olditem);
return 0;
}
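
/* Minimal usage sketch for PyTuple_New()/PyTuple_SetItem(), kept out
   of the build with #if 0: a new tuple comes back with all slots set
   to NULL, and PyTuple_SetItem() steals a reference to the item it
   stores.  make_pair() is a hypothetical helper, not part of this
   file. */
#if 0
static PyObject *
make_pair(a, b)
	PyObject *a, *b;
{
	PyObject *t = PyTuple_New(2);
	if (t == NULL)
		return NULL;
	Py_INCREF(a);
	PyTuple_SetItem(t, 0, a);	/* steals the reference just added */
	Py_INCREF(b);
	PyTuple_SetItem(t, 1, b);
	return t;
}
#endif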
/* Methods */
static void
tupledealloc(op)
register PyTupleObject *op;
{
register int i;
register int len = op->ob_size;
Py_TRASHCAN_SAFE_BEGIN(op)
if (len > 0) {
i = len;
while (--i >= 0)
Py_XDECREF(op->ob_item[i]);
#if MAXSAVESIZE > 0
if (len < MAXSAVESIZE && num_free_tuples[len] < MAXSAVEDTUPLES) {
op->ob_item[0] = (PyObject *) free_tuples[len];
num_free_tuples[len]++;
free_tuples[len] = op;
goto done; /* return */
}
#endif
}
PyObject_DEL(op);
done:
Py_TRASHCAN_SAFE_END(op)
}
static int
tupleprint(op, fp, flags)
PyTupleObject *op;
FILE *fp;
int flags;
{
int i;
fprintf(fp, "(");
for (i = 0; i < op->ob_size; i++) {
if (i > 0)
fprintf(fp, ", ");
if (PyObject_Print(op->ob_item[i], fp, 0) != 0)
return -1;
}
if (op->ob_size == 1)
fprintf(fp, ",");
fprintf(fp, ")");
return 0;
}
static PyObject *
tuplerepr(v)
PyTupleObject *v;
{
PyObject *s, *comma;
int i;
s = PyString_FromString("(");
comma = PyString_FromString(", ");
for (i = 0; i < v->ob_size && s != NULL; i++) {
if (i > 0)
PyString_Concat(&s, comma);
PyString_ConcatAndDel(&s, PyObject_Repr(v->ob_item[i]));
}
Py_DECREF(comma);
if (v->ob_size == 1)
PyString_ConcatAndDel(&s, PyString_FromString(","));
PyString_ConcatAndDel(&s, PyString_FromString(")"));
return s;
}
static int
tuplecompare(v, w)
register PyTupleObject *v, *w;
{
register int len =
(v->ob_size < w->ob_size) ? v->ob_size : w->ob_size;
register int i;
for (i = 0; i < len; i++) {
int cmp = PyObject_Compare(v->ob_item[i], w->ob_item[i]);
if (cmp != 0)
return cmp;
}
return v->ob_size - w->ob_size;
}
static long
tuplehash(v)
PyTupleObject *v;
{
register long x, y;
register int len = v->ob_size;
register PyObject **p;
x = 0x345678L;
p = v->ob_item;
while (--len >= 0) {
y = PyObject_Hash(*p++);
if (y == -1)
return -1;
x = (1000003*x) ^ y;
}
x ^= v->ob_size;
if (x == -1)
x = -2;
return x;
}
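
/* Note on tuplehash() above: the hash is accumulated as

        x = 0x345678
        x = (1000003*x) ^ hash(item)    repeated for each item
        x ^= len;  if (x == -1) x = -2;

   so the order of the items affects the result, and -1 is avoided
   because it is reserved as the error return value of hash functions.
   (Illustrative restatement of the loop above.) */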
static int
tuplelength(a)
PyTupleObject *a;
{
return a->ob_size;
}
static int
tuplecontains(a, el)
PyTupleObject *a;
PyObject *el;
{
int i, cmp;
for (i = 0; i < a->ob_size; ++i) {
cmp = PyObject_Compare(el, PyTuple_GET_ITEM(a, i));
if (cmp == 0)
return 1;
if (PyErr_Occurred())
return -1;
}
return 0;
}
static PyObject *
tupleitem(a, i)
register PyTupleObject *a;
register int i;
{
if (i < 0 || i >= a->ob_size) {
PyErr_SetString(PyExc_IndexError, "tuple index out of range");
return NULL;
}
Py_INCREF(a->ob_item[i]);
return a->ob_item[i];
}
static PyObject *
tupleslice(a, ilow, ihigh)
register PyTupleObject *a;
register int ilow, ihigh;
{
register PyTupleObject *np;
register int i;
if (ilow < 0)
ilow = 0;
if (ihigh > a->ob_size)
ihigh = a->ob_size;
if (ihigh < ilow)
ihigh = ilow;
if (ilow == 0 && ihigh == a->ob_size) {
/* XXX can only do this if tuples are immutable! */
Py_INCREF(a);
return (PyObject *)a;
}
np = (PyTupleObject *)PyTuple_New(ihigh - ilow);
if (np == NULL)
return NULL;
for (i = ilow; i < ihigh; i++) {
PyObject *v = a->ob_item[i];
Py_INCREF(v);
np->ob_item[i - ilow] = v;
}
return (PyObject *)np;
}
PyObject *
PyTuple_GetSlice(op, i, j)
PyObject *op;
int i, j;
{
if (op == NULL || !PyTuple_Check(op)) {
PyErr_BadInternalCall();
return NULL;
}
return tupleslice((PyTupleObject *)op, i, j);
}
static PyObject *
tupleconcat(a, bb)
register PyTupleObject *a;
register PyObject *bb;
{
register int size;
register int i;
PyTupleObject *np;
if (!PyTuple_Check(bb)) {
PyErr_Format(PyExc_TypeError,
"can only append tuple (not \"%.200s\") to tuple",
bb->ob_type->tp_name);
return NULL;
}
#define b ((PyTupleObject *)bb)
size = a->ob_size + b->ob_size;
np = (PyTupleObject *) PyTuple_New(size);
if (np == NULL) {
return NULL;
}
for (i = 0; i < a->ob_size; i++) {
PyObject *v = a->ob_item[i];
Py_INCREF(v);
np->ob_item[i] = v;
}
for (i = 0; i < b->ob_size; i++) {
PyObject *v = b->ob_item[i];
Py_INCREF(v);
np->ob_item[i + a->ob_size] = v;
}
return (PyObject *)np;
#undef b
}
static PyObject *
tuplerepeat(a, n)
PyTupleObject *a;
int n;
{
int i, j;
int size;
PyTupleObject *np;
PyObject **p;
if (n < 0)
n = 0;
if (a->ob_size == 0 || n == 1) {
/* Since tuples are immutable, we can return a shared
copy in this case */
Py_INCREF(a);
return (PyObject *)a;
}
size = a->ob_size * n;
if (size/a->ob_size != n)
return PyErr_NoMemory();
np = (PyTupleObject *) PyTuple_New(size);
if (np == NULL)
return NULL;
p = np->ob_item;
for (i = 0; i < n; i++) {
for (j = 0; j < a->ob_size; j++) {
*p = a->ob_item[j];
Py_INCREF(*p);
p++;
}
}
return (PyObject *) np;
}
static PySequenceMethods tuple_as_sequence = {
(inquiry)tuplelength, /*sq_length*/
(binaryfunc)tupleconcat, /*sq_concat*/
(intargfunc)tuplerepeat, /*sq_repeat*/
(intargfunc)tupleitem, /*sq_item*/
(intintargfunc)tupleslice, /*sq_slice*/
0, /*sq_ass_item*/
0, /*sq_ass_slice*/
(objobjproc)tuplecontains, /*sq_contains*/
};
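
/* The slots above are what the abstract sequence API dispatches to for
   tuples: len(t) uses tuplelength(), t+u uses tupleconcat(), t*n uses
   tuplerepeat(), t[i] and t[i:j] use tupleitem() and tupleslice(), and
   "x in t" uses tuplecontains().  From C, for example, a call such as
   PySequence_Repeat(t, 3) on a tuple t ends up in tuplerepeat(t, 3).
   (Illustrative note on how these slots are reached.) */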
PyTypeObject PyTuple_Type = {
PyObject_HEAD_INIT(&PyType_Type)
0,
"tuple",
sizeof(PyTupleObject) - sizeof(PyObject *),
sizeof(PyObject *),
(destructor)tupledealloc, /*tp_dealloc*/
(printfunc)tupleprint, /*tp_print*/
0, /*tp_getattr*/
0, /*tp_setattr*/
(cmpfunc)tuplecompare, /*tp_compare*/
(reprfunc)tuplerepr, /*tp_repr*/
0, /*tp_as_number*/
&tuple_as_sequence, /*tp_as_sequence*/
0, /*tp_as_mapping*/
(hashfunc)tuplehash, /*tp_hash*/
};
/* The following function breaks the notion that tuples are immutable:
   it changes the size of a tuple.  We get away with this only if there
   is only one reference to the object, which is what the refcount
   check below enforces.  You can also think of it as creating a new
   tuple object and destroying the old one, only more efficiently.  In
   any case, don't use this if the tuple may already be known to some
   other part of the code.
   If last_is_sticky is set, the tuple will grow or shrink at the
   front, otherwise it will grow or shrink at the end. */
int
_PyTuple_Resize(pv, newsize, last_is_sticky)
PyObject **pv;
int newsize;
int last_is_sticky;
{
register PyTupleObject *v;
register PyTupleObject *sv;
int i;
int sizediff;
v = (PyTupleObject *) *pv;
if (v == NULL || !PyTuple_Check(v) || v->ob_refcnt != 1) {
*pv = 0;
Py_DECREF(v);
PyErr_BadInternalCall();
return -1;
}
sizediff = newsize - v->ob_size;
if (sizediff == 0)
return 0;
/* XXX UNREF/NEWREF interface should be more symmetrical */
#ifdef Py_REF_DEBUG
--_Py_RefTotal;
#endif
_Py_ForgetReference((PyObject *)v);
if (last_is_sticky && sizediff < 0) {
/* shrinking:
move entries to the front and zero moved entries */
for (i = 0; i < newsize; i++) {
Py_XDECREF(v->ob_item[i]);
v->ob_item[i] = v->ob_item[i - sizediff];
v->ob_item[i - sizediff] = NULL;
}
}
for (i = newsize; i < v->ob_size; i++) {
Py_XDECREF(v->ob_item[i]);
v->ob_item[i] = NULL;
}
#if MAXSAVESIZE > 0
if (newsize == 0 && free_tuples[0]) {
num_free_tuples[0]--;
sv = free_tuples[0];
sv->ob_size = 0;
Py_INCREF(sv);
#ifdef COUNT_ALLOCS
tuple_zero_allocs++;
#endif
tupledealloc(v);
*pv = (PyObject*) sv;
return 0;
}
if (0 < newsize && newsize < MAXSAVESIZE &&
(sv = free_tuples[newsize]) != NULL)
{
free_tuples[newsize] = (PyTupleObject *) sv->ob_item[0];
num_free_tuples[newsize]--;
#ifdef COUNT_ALLOCS
fast_tuple_allocs++;
#endif
#ifdef Py_TRACE_REFS
sv->ob_type = &PyTuple_Type;
#endif
for (i = 0; i < newsize; ++i){
sv->ob_item[i] = v->ob_item[i];
v->ob_item[i] = NULL;
}
sv->ob_size = v->ob_size;
tupledealloc(v);
*pv = (PyObject *) sv;
} else
#endif
{
sv = (PyTupleObject *)
PyObject_REALLOC((char *)v,
sizeof(PyTupleObject) + newsize * sizeof(PyObject *));
*pv = (PyObject *) sv;
if (sv == NULL) {
PyObject_DEL(v);
PyErr_NoMemory();
return -1;
}
}
_Py_NewReference((PyObject *)sv);
for (i = sv->ob_size; i < newsize; i++)
sv->ob_item[i] = NULL;
if (last_is_sticky && sizediff > 0) {
/* growing: move entries to the end and zero moved entries */
for (i = newsize - 1; i >= sizediff; i--) {
sv->ob_item[i] = sv->ob_item[i - sizediff];
sv->ob_item[i - sizediff] = NULL;
}
}
sv->ob_size = newsize;
return 0;
}
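
/* Minimal usage sketch for _PyTuple_Resize(), kept out of the build
   with #if 0: the caller holds the only reference, fills a prefix of
   an oversized tuple, then shrinks it in place.  On failure the old
   tuple has been released and *pv is set to NULL.  collect_prefix()
   is a hypothetical helper, not part of the real API. */
#if 0
static PyObject *
collect_prefix()
{
	PyObject *t = PyTuple_New(5);
	int i;
	if (t == NULL)
		return NULL;
	for (i = 0; i < 3; i++)
		PyTuple_SetItem(t, i, PyInt_FromLong((long)i));
	if (_PyTuple_Resize(&t, 3, 0) != 0)
		return NULL;	/* t has already been freed */
	return t;
}
#endif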
void
PyTuple_Fini()
{
#if MAXSAVESIZE > 0
int i;
Py_XDECREF(free_tuples[0]);
free_tuples[0] = NULL;
for (i = 1; i < MAXSAVESIZE; i++) {
PyTupleObject *p, *q;
p = free_tuples[i];
free_tuples[i] = NULL;
while (p) {
q = p;
p = (PyTupleObject *)(p->ob_item[0]);
PyObject_DEL(q);
}
}
#endif
}