Include/*.h should be the "portable Python API", whereas
Include/cpython/*.h should be the "CPython API": CPython
implementation details.
Changes:
* Create Include/cpython/ subdirectory
* "make install" now creates $prefix/include/cpython and copy
Include/cpython/* to $prefix/include/cpython
* Create Include/cpython/objimpl.h: move objimpl.h code
surrounded by "#ifndef Py_LIMITED_API" to cpython/objimpl.h.
* objimpl.h now includes cpython/objimpl.h (see the sketch after this list)
* Windows installer (MSI) now also installs the Include/ subdirectories
Include/cpython/ and Include/internal/.
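The resulting layout can be sketched as follows (the exact guard macro names
are an assumption, not a quote from the header):

    /* Include/objimpl.h: the limited-API-safe part stays here, the CPython
       implementation details move to Include/cpython/objimpl.h. */
    #ifndef Py_LIMITED_API
    #  define Py_CPYTHON_OBJIMPL_H
    #  include "cpython/objimpl.h"
    #  undef Py_CPYTHON_OBJIMPL_H
    #endif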
Partially revert commit 1a6be91e6f and move the PyGC API back from the
internal API to the C API (sketched after this list):
* _PyGCHead_NEXT(g), _PyGCHead_SET_NEXT(g, p)
* _PyGCHead_PREV(g), _PyGCHead_SET_PREV(g, p)
* _PyGCHead_FINALIZED(g), _PyGCHead_SET_FINALIZED(g)
* _PyGC_FINALIZED(o), _PyGC_SET_FINALIZED(o)
* _PyGC_PREV_MASK_FINALIZED
* _PyGC_PREV_MASK_COLLECTING
* _PyGC_PREV_SHIFT
* _PyGC_PREV_MASK
_PyObject_GC_TRACK(o) and _PyObject_GC_UNTRACK(o) remain in the
internal API.
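For context, a sketch of how these accessors pack a pointer and two flag bits
into the _gc_prev field of PyGC_Head (based on the description above; the
exact definitions in the header may differ slightly):

    /* _gc_prev stores the previous-node pointer in its high bits and two
       flag bits in its low bits, so the accessors mask rather than
       dereference a plain pointer. */
    #define _PyGC_PREV_MASK_FINALIZED  (1)   /* object was finalized */
    #define _PyGC_PREV_MASK_COLLECTING (2)   /* object is being collected */
    #define _PyGC_PREV_SHIFT           (2)
    #define _PyGC_PREV_MASK  (((uintptr_t) -1) << _PyGC_PREV_SHIFT)

    #define _PyGCHead_NEXT(g)        ((PyGC_Head*)(g)->_gc_next)
    #define _PyGCHead_SET_NEXT(g, p) ((g)->_gc_next = (uintptr_t)(p))

    #define _PyGCHead_PREV(g) ((PyGC_Head*)((g)->_gc_prev & _PyGC_PREV_MASK))
    #define _PyGCHead_FINALIZED(g) \
        (((g)->_gc_prev & _PyGC_PREV_MASK_FINALIZED) != 0)
    #define _PyGCHead_SET_FINALIZED(g) \
        ((g)->_gc_prev |= _PyGC_PREV_MASK_FINALIZED)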
* Convert PyObject_INIT() and PyObject_INIT_VAR() macros to static
inline functions.
* Fix usage of these functions: cast to PyObject* or PyVarObject*.
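A minimal sketch of the call-site fix (MyObject and MyObject_Type are
hypothetical; the real changes touch existing object implementations):

    #include <Python.h>

    typedef struct {
        PyObject_HEAD
        int value;
    } MyObject;

    static PyTypeObject MyObject_Type;   /* placeholder type object */

    static MyObject *
    myobject_alloc(void)
    {
        MyObject *op = PyObject_Malloc(sizeof(MyObject));
        if (op == NULL) {
            return NULL;
        }
        /* The macro accepted any pointer type; the static inline function
           takes and returns PyObject*, so the casts are now explicit. */
        return (MyObject *)PyObject_INIT((PyObject *)op, &MyObject_Type);
    }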
During development of the limited API support for PySide,
we saw an error in a macro that accessed a type field.
This patch fixes the 7 such errors in the Python headers.
Macros whose names are not written in all capitals were implemented
as functions.
To make the analysis repeatable, a script is included that
parses all headers and looks for "->tp_" in sections which can
be reached with the limited API active.
The script can easily be run as a test.
Error listing:
../../Include/objimpl.h:243
#define PyObject_IS_GC(o) (PyType_IS_GC(Py_TYPE(o)) && \
(Py_TYPE(o)->tp_is_gc == NULL || Py_TYPE(o)->tp_is_gc(o)))
Action: commented only
../../Include/objimpl.h:362
#define PyType_SUPPORTS_WEAKREFS(t) ((t)->tp_weaklistoffset > 0)
Action: commented only
../../Include/objimpl.h:364
#define PyObject_GET_WEAKREFS_LISTPTR(o) \
((PyObject **) (((char *) (o)) + Py_TYPE(o)->tp_weaklistoffset))
Action: commented only
../../Include/pyerrors.h:143
#define PyExceptionClass_Name(x) \
((char *)(((PyTypeObject*)(x))->tp_name))
Action: implemented function
../../Include/abstract.h:593
#define PyIter_Check(obj) \
((obj)->ob_type->tp_iternext != NULL && \
(obj)->ob_type->tp_iternext != &_PyObject_NextNotImplemented)
Action: implemented function
../../Include/abstract.h:713
#define PyIndex_Check(obj) \
((obj)->ob_type->tp_as_number != NULL && \
(obj)->ob_type->tp_as_number->nb_index != NULL)
Action: implemented function
../../Include/abstract.h:924
#define PySequence_ITEM(o, i)\
( Py_TYPE(o)->tp_as_sequence->sq_item(o, i) )
Action: commented only
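For the entries marked "implemented function", the check moves behind a real
function exported by the library, so limited-API users no longer touch the
type struct directly. A hedged sketch for PyIter_Check() (the exact CPython
definition may differ; the sketch uses its own name to stay self-contained):

    #include <Python.h>

    static int
    pyiter_check_sketch(PyObject *obj)
    {
        /* Same test as the old macro, but compiled into the library. */
        return obj->ob_type->tp_iternext != NULL &&
               obj->ob_type->tp_iternext != &_PyObject_NextNotImplemented;
    }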
* Py_Main() now starts by reading the Py_xxx configuration variables so
that it only works on its own private structure, and then later writes
the configuration back into these variables.
* Replace Py_GETENV() with pymain_get_env_var(), which ignores empty
variables (see the sketch after this list).
* Add _PyCoreConfig.dump_refs
* Add _PyCoreConfig.malloc_stats
* _PyObject_DebugMallocStats() is now responsible for checking whether
the debug hooks are installed. The function returns 1 if stats were
written, or 0 if the hooks are disabled. Mark _PyMem_PymallocEnabled()
as static.
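A hedged sketch of the environment-variable helper (pymain_get_env_var() is
internal to Modules/main.c; this is an approximation, not the verbatim code):

    #include <stdlib.h>
    #include <Python.h>

    static const char *
    get_env_var_sketch(const char *name)
    {
        const char *var;

        if (Py_IgnoreEnvironmentFlag) {
            return NULL;          /* -E: ignore the environment entirely */
        }
        var = getenv(name);
        if (var != NULL && var[0] != '\0') {
            return var;           /* set and non-empty */
        }
        return NULL;              /* unset or empty: treated the same */
    }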
Add PyObject_Calloc() and _PyObject_GC_Calloc(). bytes(int) and
bytearray(int) now use ``calloc()`` instead of ``malloc()`` for large
objects, which is faster and uses less memory (until the bytearray buffer
is filled with data).
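A small illustration (not part of the patch itself) of why calloc() helps for
large zero-filled requests: the OS can hand out zero pages lazily instead of
the allocator memset()'ing the whole buffer up front.

    #include <Python.h>

    static void *
    alloc_zeroed_buffer(void)
    {
        /* Roughly what a large bytes(n) or bytearray(n) needs: n zero bytes. */
        return PyObject_Calloc(10 * 1024 * 1024, 1);   /* 10 MiB, zero-filled */
    }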
Add new enum:
* PyMemAllocatorDomain
Add new structures:
* PyMemAllocator
* PyObjectArenaAllocator
Add new functions:
* PyMem_RawMalloc(), PyMem_RawRealloc(), PyMem_RawFree()
* PyMem_GetAllocator(), PyMem_SetAllocator()
* PyObject_GetArenaAllocator(), PyObject_SetArenaAllocator()
* PyMem_SetupDebugHooks()
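A hedged usage sketch of this API (field order follows the PyMemAllocator
structure described here; later Python versions rename it to PyMemAllocatorEx
and add a calloc slot):

    #include <Python.h>

    static PyMemAllocator orig_raw;       /* the allocator being wrapped */
    static size_t raw_malloc_calls = 0;   /* simple statistic */

    static void *
    counting_malloc(void *ctx, size_t size)
    {
        raw_malloc_calls++;
        return orig_raw.malloc(orig_raw.ctx, size);
    }

    static void *
    counting_realloc(void *ctx, void *ptr, size_t new_size)
    {
        return orig_raw.realloc(orig_raw.ctx, ptr, new_size);
    }

    static void
    counting_free(void *ctx, void *ptr)
    {
        orig_raw.free(orig_raw.ctx, ptr);
    }

    void
    install_counting_allocator(void)
    {
        PyMemAllocator alloc;

        alloc.ctx = NULL;
        alloc.malloc = counting_malloc;
        alloc.realloc = counting_realloc;
        alloc.free = counting_free;

        PyMem_GetAllocator(PYMEM_DOMAIN_RAW, &orig_raw);  /* save original */
        PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &alloc);     /* install wrapper */
        PyMem_SetupDebugHooks();   /* reinstall debug hooks on top if wanted */
    }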
Changes:
* PyMem_Malloc()/PyMem_Realloc() now always call malloc()/realloc(), instead
of calling PyObject_Malloc()/PyObject_Realloc() in debug mode.
* PyObject_Malloc()/PyObject_Realloc() now fall back to
PyMem_Malloc()/PyMem_Realloc() for allocations larger than 512 bytes.
* Redesign debug checks on memory block allocators as hooks, instead of using C
macros
* Add a new PyMemAllocators structure
* New functions:
- PyMem_RawMalloc(), PyMem_RawRealloc(), PyMem_RawFree(): GIL-free memory
allocator functions (see the sketch after this list)
- PyMem_GetRawAllocators(), PyMem_SetRawAllocators()
- PyMem_GetAllocators(), PyMem_SetAllocators()
- PyMem_SetupDebugHooks()
- _PyObject_GetArenaAllocators(), _PyObject_SetArenaAllocators()
* Add unit test for PyMem_Malloc(0) and PyObject_Malloc(0)
* Add unit test for new get/set allocators functions
* PyObject_Malloc() now falls back on PyMem_Malloc() instead of malloc() if
size is bigger than SMALL_REQUEST_THRESHOLD, and PyObject_Realloc() falls
back on PyMem_Realloc() instead of realloc()
* PyMem_Malloc() and PyMem_Realloc() now always call malloc() and realloc(),
instead of calling PyObject_Malloc() and PyObject_Realloc() in debug mode
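A hedged sketch (not part of the patch) of why the GIL-free raw allocator
matters: unlike PyMem_Malloc(), it may be called from code that has released
the GIL.

    #include <Python.h>

    static void
    work_without_gil(void)
    {
        Py_BEGIN_ALLOW_THREADS          /* release the GIL */
        {
            char *scratch = PyMem_RawMalloc(4096);  /* allowed without the GIL */
            if (scratch != NULL) {
                /* ... blocking I/O or CPU work using the scratch buffer ... */
                PyMem_RawFree(scratch);
            }
        }
        Py_END_ALLOW_THREADS            /* re-acquire the GIL */
    }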
The 'long double' member of the gc_head dummy union is no longer required
as of Python 2.5+, when gc_refs changed from an int (4 bytes) to a
Py_ssize_t (8 bytes), making the minimum structure size 16 bytes.
The use of a 'long double' triggered a warning by Clang trunk's
Undefined-Behavior Sanitizer as on many platforms a long double requires
16-byte alignment but the Python memory allocator only guarantees 8 byte
alignment.
So our code would allocate and use these structures with technically improper
alignment. Though it didn't matter since the 'dummy' field is never used.
This silences that warning.
Spelunking into code history, the double was added in 2001 to force better
alignment on some platforms and changed to a long double in 2002 to appease
Tru64. That issue should no longer be present since the upgrade from int to
Py_ssize_t, where the minimum structure size increased to 16 (unless anyone
knows of a platform where ssize_t is 4 bytes?) or 24 bytes depending on
whether the build uses 4 or 8 byte pointers.
We can probably get rid of the double and this union hack altogether today.
That is a slightly more invasive change that can be left for later.
A more correct non-hacky alternative if any alignment issues are still found
would be to use a compiler specific alignment declaration on the structure and
determine which value to use at configure time.
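For reference, a hedged sketch of the union in question (reconstructed from
the description above, not copied from the header):

    #include <Python.h>    /* for Py_ssize_t */

    typedef union _gc_head_sketch {
        struct {
            union _gc_head_sketch *gc_next;
            union _gc_head_sketch *gc_prev;
            Py_ssize_t gc_refs;
        } gc;
        double dummy;      /* was 'long double'; only forces alignment,
                              never read or written */
    } gc_head_sketch;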
........
r70546 | antoine.pitrou | 2009-03-23 19:41:45 +0100 (Mon, 23 Mar 2009) | 9 lines
Issue #4688: Add a heuristic so that tuples and dicts containing only
untrackable objects are not tracked by the garbage collector. This can
reduce the size of collections and therefore the garbage collection overhead
on long-running programs, depending on their particular use of datatypes.
(trivia: this makes the "binary_trees" benchmark from the Computer Language
Shootout 40% faster)
........
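A hedged sketch of the heuristic (helper names are made up; the real code
lives in gcmodule.c, tupleobject.c and dictobject.c and differs in detail):

    #include <Python.h>

    /* Can this object ever participate in a reference cycle? */
    static int
    may_be_tracked(PyObject *obj)
    {
        if (!PyObject_IS_GC(obj)) {
            return 0;      /* e.g. ints and strings are never tracked */
        }
        if (PyTuple_CheckExact(obj) && !_PyObject_GC_IS_TRACKED(obj)) {
            return 0;      /* an already-untracked tuple is safe too */
        }
        return 1;
    }

    /* Untrack a tuple whose items are all untrackable. */
    static void
    maybe_untrack_tuple(PyObject *t)
    {
        Py_ssize_t i, n = PyTuple_GET_SIZE(t);
        for (i = 0; i < n; i++) {
            if (may_be_tracked(PyTuple_GET_ITEM(t, i))) {
                return;    /* keep tracking: an item could form a cycle */
            }
        }
        PyObject_GC_UnTrack(t);
    }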
variations of the type struct and its attachments. In Py3k, all type
structs have to have all fields -- no binary backwards compatibility.
Had to change the complex object to a new-style number!
number of tests, all because of the codecs/_multibytecodecs issue described
here (it's not a Py3K issue, just something Py3K discovers):
http://mail.python.org/pipermail/python-dev/2006-April/064051.html
Hye-Shik Chang promised to look for a fix, so no need to fix it here. The
tests that are expected to break are:
test_codecencodings_cn
test_codecencodings_hk
test_codecencodings_jp
test_codecencodings_kr
test_codecencodings_tw
test_codecs
test_multibytecodec
This merge fixes an actual test failure (test_weakref) in this branch,
though, so I believe merging is the right thing to do anyway.
This was mostly a matter of adding comments and light code rearrangement.
Upon untracking, gc_next is still set to NULL. It's a cheap way to
provoke memory faults if calling code is insane. It's also used in some
way by the trashcan mechanism.
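A hedged sketch of the untracking step described here (macro and field names
follow the gc_head layout of that era; not the verbatim source):

    #include <Python.h>

    /* Unlink the object from its gc list and deliberately leave gc_next
       NULL: buggy callers fault quickly, and the trashcan code also relies
       on the NULL in some way. */
    #define SKETCH_GC_UNTRACK(o) do {                       \
            PyGC_Head *g = _Py_AS_GC(o);                    \
            g->gc.gc_prev->gc.gc_next = g->gc.gc_next;      \
            g->gc.gc_next->gc.gc_prev = g->gc.gc_prev;      \
            g->gc.gc_next = NULL;                           \
        } while (0)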