* Can now test for basic blocks.
* Optimize inverted comparisons.
* Optimize UNARY_NOT followed by a conditional jump.
* Added a new opcode, NOP, to keep code size constant.
* Applied NOP to previous transformations where appropriate.
Note, the NOP would not be necessary if other functions were
added to re-target jump addresses and update the co_lnotab mapping.
That would yield slightly faster and cleaner bytecode at the
expense of optimizer simplicity and of keeping it decoupled
from the line-numbering structure.
New private API function _Py_PrintReferenceAddresses(): Prints only the
addresses and refcnts of the live objects. This is always safe to call,
because it has no dependence on Python's C API.
Py_Finalize(): If envar PYTHONDUMPREFS is set, call (the new)
_Py_PrintReferenceAddresses() right before dumping final pymalloc stats.
We can't print the reprs of the objects here because too much of the
interpreter has been shut down. You need to correlate the addresses
displayed here with the object reprs printed by the earlier
PYTHONDUMPREFS call to _Py_PrintReferences().
New functions:
unsigned long PyInt_AsUnsignedLongMask(PyObject *);
unsigned PY_LONG_LONG PyInt_AsUnsignedLongLongMask(PyObject *);
unsigned long PyLong_AsUnsignedLongMask(PyObject *);
unsigned PY_LONG_LONG PyLong_AsUnsignedLongLongMask(PyObject *);
New and changed format codes:
b    unsigned char       0..UCHAR_MAX
B    unsigned char       none **
h    unsigned short      0..USHRT_MAX
H    unsigned short      none **
i    int                 INT_MIN..INT_MAX
I *  unsigned int        0..UINT_MAX
l    long                LONG_MIN..LONG_MAX
k *  unsigned long       none
L    long long           LLONG_MIN..LLONG_MAX
K *  unsigned long long  none
Notes:
* New format codes.
** Changed from previous "range-and-a-half" to "none"; the
range-and-a-half checking wasn't particularly useful.
New test test_getargs2.py, to verify all this.
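To illustrate, a minimal sketch of an extension function using the new
'k' code (the function and variable names here are hypothetical):

    #include "Python.h"

    /* Parse an unsigned long with the new 'k' code: any int or long
       is accepted and masked to the width of C unsigned long, with
       no range checking. */
    static PyObject *
    set_flags(PyObject *self, PyObject *args)
    {
        unsigned long flags;
        if (!PyArg_ParseTuple(args, "k", &flags))
            return NULL;
        /* ... use flags ... */
        Py_INCREF(Py_None);
        return Py_None;
    }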
Arranged that all the objects exposed by __builtin__ appear in the list
of all objects. I basically peed away two days tracking down a mystery
leak in sys.gettotalrefcount() in a ZODB app (== tons of code), because
the object leaking the references didn't appear in the sys.getobjects(0)
list. The object happened to be False. Now False is in the list, along
with other popular & previously missing leak candidates (like None).
Alas, we still don't have a choke point covering *all* Python objects,
so the list of all objects may still be incomplete.
New internal function _Py_AddToAllObjects() simply inserts an object at the front of
the doubly-linked list of all objects. Changed PyType_Ready() (the
closest thing we've got to a choke point for type objects) to call
that.
The Unicode implementation used to store internal data in static
variables. As a result, any attempts to use the unicode system with
multiple active interpreters, or successive interpreter executions,
would fail.
Now that information is stored into members of the PyInterpreterState
structure.
A source file ending with an indented code block but no trailing newline would raise SyntaxError.
This would have been a four-line change in parsetok.c... Except
codeop.py depends on this behavior, so a compilation flag had to be
invented that causes the tokenizer to revert to the old behavior;
this required extra changes to 2 .h files, 2 .c files, and 2 .py
files. (Fixes SF bug #501622.)
-DCALL_PROFILE: Count the number of function calls executed.
When this symbol is defined, the ceval mainloop and helper functions
count the number of function calls made. It keeps detailed statistics
about what kind of object was called and whether the call hit any of
the special fast paths in the code.
Optimization:
When we take the fast_function() path, which seems to be taken for
most function calls, and there is minimal frame setup to do, avoid
calling PyEval_EvalCodeEx(). PyEval_EvalCodeEx() does a lot of work
to handle keyword args and star args, free variables, generators,
etc. The inlined version simply allocates the frame and copies the
argument values into the frame.
The optimization gets a little help from compile.c which adds a
CO_NOFREE flag to code objects that don't have free variables or cell
variables. This change allows fast_function() to get into the fast
path with fewer tests.
I measure a couple of percent speedup in pystone with this change, but
there's surely more that can be done.
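A rough, hedged sketch of the shape of the inlined path (simplified
and hypothetical, not the actual ceval code; PyEval_EvalFrame stands
in for ceval's internal frame interpreter):

    #include "Python.h"
    #include "frameobject.h"

    /* Sketch only: call a plain function (CO_NOFREE set, no default
       or keyword arguments) by building the frame directly instead
       of going through PyEval_EvalCodeEx(). */
    static PyObject *
    fast_call_sketch(PyThreadState *tstate, PyCodeObject *co,
                     PyObject *globals, PyObject **args, int n)
    {
        PyFrameObject *f;
        PyObject *retval;
        int i;

        f = PyFrame_New(tstate, co, globals, NULL);
        if (f == NULL)
            return NULL;
        for (i = 0; i < n; i++) {
            Py_INCREF(args[i]);
            f->f_localsplus[i] = args[i];  /* args are the first locals */
        }
        retval = PyEval_EvalFrame(f);
        Py_DECREF(f);
        return retval;
    }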
__module__ is the string name of the module the function was defined
in, just like __module__ of classes. In some cases, particularly for
C functions, the __module__ may be None.
Change PyCFunction_New() from a function to a macro, but keep an
unused copy of the function around so that we don't change the binary
API.
Change pickle's save_global() to use whichmodule() if __module__ is
None, but add the __module__ logic to whichmodule() since it might be
used outside of pickle.
_PyLong_NumBits(): its definition was too specific to the
needs of pickling longs. Backed off to a definition that's much easier
to understand. The pickler will have to work a little harder, but other
uses are more likely to be correct <0.5 wink>.
_PyLong_Sign(): New teensy function to characterize a long, as to <0, ==0,
or >0.
This is a start for the C implementation of the new pickle LONG1 and LONG4 opcodes (the
linear-time way to pickle a long is to call _PyLong_AsByteArray, but
the caller has no idea how big an array to allocate, and correct
calculation is a bit subtle).
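For example, a hedged sketch of the sizing dance a pickler might do,
assuming the simplified definition (bits needed for abs(v), 0 when
v == 0); one extra bit covers the sign, so nbits/8 + 1 bytes always
suffice for a signed little-endian dump:

    #include "Python.h"

    /* Sketch: size and fill a buffer via the private long APIs. */
    static int
    long_to_bytes(PyLongObject *v, unsigned char **out, size_t *nbytes)
    {
        size_t nbits = _PyLong_NumBits((PyObject *)v);
        if (nbits == (size_t)-1 && PyErr_Occurred())
            return -1;
        *nbytes = nbits / 8 + 1;
        *out = (unsigned char *)PyMem_Malloc(*nbytes);
        if (*out == NULL) {
            PyErr_NoMemory();
            return -1;
        }
        return _PyLong_AsByteArray(v, *out, *nbytes,
                                   1 /* little-endian */, 1 /* signed */);
    }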
Added "borrowed" AEDesc objects, where the AEDesc data shouldn't be disposed when the Python object is.
Added a C call AEDesc_NewBorrowed() to create these objects and a Python
method old=AEDesc.AutoDispose(onoff) to change auto-dispose state.
hoped it would be, but not too bad. A test had to change:
time.__setstate__() can no longer add a non-None tzinfo member to a time
object that didn't already have one, since storage for a tzinfo member
doesn't exist in that case.
The attached patch enables shared extension
modules to build cleanly under Cygwin without
moving the static initialization of certain function
pointers (i.e., ones exported from the Python
DLL core) to a module initialization function.
Additionally, this patch fixes the modules that
have been changed in the past to accommodate
Cygwin.
Initialize the small integers and __builtins__ in startup.
This removes some if conditions.
Change XDECREF to DECREF for values which shouldn't be NULL.
- new import hooks in import.c, exposed in the sys module
- new module called 'zipimport'
- various changes to allow bootstrapping from zip files
I hope I didn't break the Windows build (or anything else for that
matter), but then again, it's been sitting on sf long enough...
Regarding the latest discussions on python-dev: zipimport sets
pkg.__path__ as specified in PEP 273, and likewise, sys.path items such as
/path/to/Archive.zip/subdir/ are supported again.
This can cause core dumps when Python runs. Python relies on the 754-
(and C99-) mandated default "non-stop" mode for FP exceptions. This
patch from Ben Laurie disables at least one FP exception on FreeBSD at
Python startup time.
Most of these patches are from Thomas Heller, with long lines folded
by Tim. The change to test_descr.py is from Guido. See the bug report.
Not a bugfix candidate -- METH_CLASS is new in 2.3.
system headers have two declarations for wchar_t, with different guard macros.
Not sure whether this is a bugfix candidate, that depends on what changed in the
curses module.
Py_Init crash". refchain cannot be cleared because objects can live across
Py_Finalize() and Py_Initialize() if they are kept alive by circular
references.
the end, in the hope of saving some bytes on 64-bit machines. (Too
bad n_nchildren can't be made an unsigned short, but
test/test_longexp.py specifically tests for more than 2**16 subtrees
at one level.)
I don't expect any binary compatibility issues here, unless someone
has an old binary of parsermodule.so saved away.
Turned the ceval ticker into two globals, _Py_Ticker and _Py_CheckInterval. This also implements Jeremy's
shortcut in Py_AddPendingCall that zeroes out _Py_Ticker. This allows the
test in the main loop to only test a single value.
The gory details are at
http://python.org/sf/602191
Use a slightly different strategy to determine when not to call the line
trace function. This removes the need for the RETURN_NONE opcode, so
that's gone again. Update docs and comments to match.
Thanks to Neal and Armin!
Also add a test suite. This should have come with the original patch...
Alternative implementation of string interning, from a patch by Oren
Tirosh. I modified Oren's patch significantly, but the basic idea
and most of the implementation is unchanged. Interned strings created
with PyString_InternInPlace() are now mortal, and you must keep a
reference to the resulting string around; use the new function
PyString_InternImmortal() to create immortal interned strings.
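For example (a minimal sketch):

    #include "Python.h"

    /* Mortal interning: the interned string stays alive only while
       its users hold references. */
    static PyObject *
    interned_name(void)
    {
        PyObject *s = PyString_FromString("attribute_name");
        if (s == NULL)
            return NULL;
        PyString_InternInPlace(&s);  /* may swap in the interned copy */
        return s;                    /* caller owns this reference */
    }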
[ 587993 ] SET_LINENO killer
Remove SET_LINENO. Tracing is now supported by inspecting co_lnotab.
Many sundry changes to document and adapt to this change.
subtype_dealloc() used to do a lot of work: it had to save and restore the current exception around
a call to lookup_maybe(), because that could fail in rare cases, and
most objects don't have a __del__ method, so the whole exercise was
usually a waste of time. Changed this to cache the __del__ method in
the type object just like all other special methods, in a new slot
tp_del. So now subtype_dealloc() can test whether tp_del is NULL and
skip the whole exercise if it is. The new slot doesn't need a new
flag bit: subtype_dealloc() is only called if the type was dynamically
allocated by type_new(), so it's guaranteed to have all current slots.
Types defined in C cannot fill in tp_del with a function of their own,
so there's no corresponding "wrapper". (That functionality is already
available through tp_dealloc.)
For a file f, iter(f) now returns f (unless f is closed), and f.next()
is similar to f.readline() when EOF is not reached; however, f.next()
uses a readahead buffer that messes up the file position, so mixing
f.next() and f.readline() (or other methods) doesn't work right.
Calling f.seek() drops the readahead buffer, but other operations
don't.
The real purpose of this change is to reduce the confusion between
objects and their iterators. By making a file its own iterator, it's
made clearer that using the iterator modifies the file object's state
(in particular the current position).
A nice side effect is that this speeds up "for line in f:" by not
having to use the xreadlines module. The f.xreadlines() method is
still supported for backwards compatibility, though it is the same as
iter(f) now.
(I made some cosmetic changes to Oren's code, and added a test for
"file closed" to file_iternext() and file_iter().)
actual script to run in case we are running from an applet. If we are indeed
running an applet we skip the normal option processing leaving it all to the
applet code.
This allows us to use the normal python binary in the Python.app bundle,
giving us all the normal command line options through PythonLauncher while
still allowing Python.app to be used as the template for building applets.
Consequently, pythonforbundle is gone, and Mac/Python/macmain.c isn't used
on OSX anymore.
PyErr_SetExcFromWindowsErr(), PyErr_SetExcFromWindowsErrWithFilename().
Similar to PyErr_SetFromWindowsErrWithFilename() and
PyErr_SetFromWindowsErr(), but they allow the caller to specify
the exception type to raise. Available on Windows.
See SF patch #576458.
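A hedged sketch of the intended use (DeleteFileA and the function
name are illustrative only; passing 0 as the error code means "use
GetLastError()"):

    #include "Python.h"
    #ifdef MS_WINDOWS
    #include <windows.h>

    /* Raise a chosen exception type (here PyExc_OSError) instead of
       the fixed type used by PyErr_SetFromWindowsErr(). */
    static PyObject *
    remove_file(PyObject *self, PyObject *args)
    {
        const char *path;
        if (!PyArg_ParseTuple(args, "s", &path))
            return NULL;
        if (!DeleteFileA(path))
            return PyErr_SetExcFromWindowsErrWithFilename(PyExc_OSError,
                                                          0, path);
        Py_INCREF(Py_None);
        return Py_None;
    }
    #endif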
This gets compilation of posixmodule.c to succeed on Tru64 and does no
harm on Linux. We may need to undefine it on some platforms, but
let's wait and see.
Martin says:
> I think it is generally the right thing to define _XOPEN_SOURCE on
> Unix, providing a negative list of systems that cannot support this
> setting (or preferably solving whatever problems remain).
>
> I'd put an (unconditional) AC_DEFINE into configure.in early on; it
> *should* go into confdefs.h as configure proceeds, and thus be active
> when other tests are performed.
The staticforward define was needed to support certain broken C
compilers (notably SCO ODT 3.0, perhaps early AIX as well) that botched the
static keyword when it was used with a forward declaration of a static
initialized structure. Standard C allows the forward declaration with
static, and we've decided to stop catering to broken C compilers. (In
fact, we expect that the compilers are all fixed eight years later.)
I'm leaving staticforward and statichere defined in object.h as
static. This is only for backwards compatibility with C extensions
that might still use it.
XXX I haven't updated the documentation.
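In other words, plain standard C is now assumed to work (the type
name below is hypothetical):

    #include "Python.h"

    /* A forward declaration of a static, later-initialized structure;
       this is what the broken compilers used to botch. */
    static PyTypeObject Spam_Type;        /* instead of staticforward */

    static PyObject *
    spam_alloc(void)
    {
        return PyObject_New(PyObject, &Spam_Type);
    }

    static PyTypeObject Spam_Type = {     /* instead of statichere */
        PyObject_HEAD_INIT(NULL)
        0,                                /* ob_size */
        "Spam",                           /* tp_name */
        sizeof(PyObject),                 /* tp_basicsize */
    };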
Renamed the special-build helper macros to something saner, and used them appropriately in other
files too, to reduce #ifdef blocks.
classobject.c, instance_dealloc(): One of my worst Python Memories is
trying to fix this routine a few years ago when COUNT_ALLOCS was defined
but Py_TRACE_REFS wasn't. The special-build code here is way too
complicated. Now it's much simpler. Difference: in a Py_TRACE_REFS
build, the instance is no longer in the doubly-linked list of live
objects while its __del__ method is executing, and that may be visible
via sys.getobjects() called from a __del__ method. Tough -- the object
is presumed dead while its __del__ is executing anyway, and not calling
_Py_NewReference() at the start allows enormous code simplification.
typeobject.c, call_finalizer(): The special-build instance_dealloc()
pain apparently spread to here too via cut-'n-paste, and this is much
simpler now too. In addition, I didn't understand why this routine
was calling _PyObject_GC_TRACK() after a resurrection, since there's no
plausible way _PyObject_GC_UNTRACK() could have been called on the
object by this point. I suspect it was left over from pasting the
instance_dealloc() code. Instead asserted that the object is still
tracked. Caution: I suspect we don't have a test that actually
exercises the subtype_dealloc() __del__-resurrected-me code.
Reworked the special-build helper macros so that their uses can be prettier. I've come to despise the names I picked
for these things, though, and expect to change all of them -- I changed
a bunch of other files to use them (replacing #ifdef blocks), but the
names were so obscure out of context that I backed that all out again.
Added more trivial lexical helper macros so that uses of these guys expand
to nothing at all when they're not enabled. This should help sub-
standard compilers that can't do a good job of optimizing away the
previous "(void)0" expressions.
Py_DECREF: There's only one definition of this now. Yay! That
was that last one in the family defined multiple times in an #ifdef
maze.
Py_FatalError(): Changed the char* signature to const char*.
_Py_NegativeRefcount(): New helper function for the Py_REF_DEBUG
expansion of Py_DECREF. Calling an external function cuts down on
the volume of generated code. The previous inline expansion of abort()
didn't work as intended on Windows (the program often kept going, and
the error msg scrolled off the screen unseen). _Py_NegativeRefcount
calls Py_FatalError instead, which captures our best knowledge of
how to abort effectively across platforms.
Documented the special-build gimmicks that have taken me "too long" to reverse-engineer over the years.
Vastly reduced the nesting level and redundancy of #ifdef-ery.
Took a light stab at repairing comments that are no longer true.
sys_gettotalrefcount(): Changed to enable under Py_REF_DEBUG.
It was enabled under Py_TRACE_REFS, which was much heavier than
necessary. sys.gettotalrefcount() is now available in a
Py_REF_DEBUG-only build.
The trashcan mechanism is no longer evil: it no longer plays dangerous games with
the type pointer or refcounts, and objects in extension modules can play
along too without needing to edit the core first.
Rewrote all the comments to explain this, and (I hope) give clear
guidance to extension authors who do want to play along. Documented
all the functions. Added more asserts (it may no longer be evil, but
it's still dangerous <0.9 wink>). Rearranged the generated code to
make it clearer, and to tolerate either the presence or absence of a
semicolon after the macros. Rewrote _PyTrash_destroy_chain() to call
tp_dealloc directly; it was doing a Py_DECREF again, and that has all
sorts of obscure distorting effects in non-release builds (Py_DECREF
was already called on the object!). Removed Christian's little "embedded
change log" comments -- that's what checkin messages are for, and since
it was impossible to correlate the comments with the code that changed,
I found them merely distracting.
This was mostly a matter of adding comments and light code rearrangement.
Upon untracking, gc_next is still set to NULL. It's a cheap way to
provoke memory faults if calling code is insane. It's also used in some
way by the trashcan mechanism.
Every object should now have a well-defined gc_refs value, with clear transitions
among gc_refs states. As a result, none of the visit_XYZ traversal
callbacks need to check IS_TRACKED() anymore, and those tests were removed.
(They were already looking for objects with specific gc_refs states, and
the gc_refs state of an untracked object can no longer match any other
gc_refs state by accident.)
Added more asserts.
I expect that the gc_next == NULL indicator for an untracked object is
now redundant and can also be removed, but I ran out of time for this.
[ 400998 ] experimental support for extended slicing on lists
It's somewhat spruced up and better tested than it was when I wrote it.
Includes docs & tests. The whatsnew section needs expanding, and arrays
should support extended slices -- later.
This patch complies with the following request found
near the top of configure.in:
# This is for stuff that absolutely must end up in pyconfig.h.
# Please use pyport.h instead, if possible.
I tested this patch under Cygwin, Win32, and Red
Hat Linux. Python built and ran successfully on
each of these platforms.
Added a new built-in type, basestring, the base class for 'str' and
'unicode'; it can be used instead of types.StringTypes, e.g. to test
whether something is "a string": isinstance(x, basestring) is True
for Unicode and 8-bit strings. This
is an abstract base class and cannot be instantiated directly.
The old syntax suggested that a trailing comma was OK inside backticks,
but in fact (due to idiosyncrasies of pgen) it was not. Fix the grammar
to avoid the ambiguity. Fred: you may want to update the refman.
As threatened, PyMem_{Free, FREE} also invoke the object deallocator now
when pymalloc is enabled (well, it does when pymalloc isn't enabled too,
but in that case "the object deallocator" is plain free()).
This is maximally backward-compatible, but it leaves a bitter aftertaste.
Also massive reworking of comments.
http://www.python.org/sf/444708
This adds the optional argument for str.strip
to unicode.strip too and makes it possible
to call str.strip with a unicode argument
and unicode.strip with a str argument.
+ Redirect PyMem_{Del, DEL} to the object allocator's free() when
pymalloc is enabled. Needed so old extensions can continue to
mix PyObject_New with PyMem_DEL.
+ This implies that pgen needs to be able to see the PyObject_XYZ
declarations too. pgenheaders.h now includes Python.h. An
implication is that I expect obmalloc.o needs to get linked into
pgen on non-Windows boxes.
+ When PYMALLOC_DEBUG is defined, *all* Py memory API functions
now funnel through the debug allocator wrapper around pymalloc.
This is the default in a debug build.
+ That caused compile.c to fail: it indirectly mixed PyMem_Malloc
with raw platform free() in one place. This is verboten.
+ Continued looping until n bytes in the buffer have been filled, not
just when n bytes have been read from the file. This repairs the
bug that f.readlines() only sucked up the first 8192 bytes of the file
on Windows when universal newlines was enabled and f was opened in
U mode (see Python-Dev -- this was the ultimate cause of the
test_inspect.py failure).
+ Changed prototype to take a char* buffer (void* doesn't make much sense).
+ Squashed size_t vs int mismatches (in particular, besides the unsigned
vs signed distinction, size_t may be larger than int).
+ Gets out under all error conditions now (it's possible for fread() to
suffer an error even if it returns a number larger than 0 -- any
"short read" is an error or EOF condition).
+ Rearranged and simplified declarations.
Highlights: import and friends will understand any of \r, \n and \r\n
as end of line. Python file input will do the same if you use mode 'U'.
Everything can be disabled by configuring with --without-universal-newlines.
See PEP 278 for details.
Added code to call this when PYMALLOC_DEBUG is enabled, and envar
PYTHONMALLOCSTATS is set, whenever a new arena is obtained and once
late in the Python shutdown process.
PyMem_{Del, DEL} doesn't work yet (compilation problems).
pyport.h: _PyMem_EXTRA is gone.
pymem.h: Repaired comments. PyMem_{Malloc, MALLOC} and
PyMem_{Realloc, REALLOC} now make the same x-platform guarantees when
asking for 0 bytes, and when passing a NULL pointer to the latter.
object.c: PyMem_{Malloc, Realloc} just call their macro versions
now, since the latter take care of the x-platform 0 and NULL stuff
by themselves now.
pypcre.c, grow_stack(): So sue me. On two lines, this called
PyMem_RESIZE to grow a "const" area. It's not legit to realloc a
const area, so the compiler warned given the new expansion of
PyMem_RESIZE. It would have gotten the same warning before if it
had used PyMem_Resize() instead; the older macro version, but not the
function version, silently cast away the constness. IMO that was a wrong
thing to do, and the docs say the macro versions of PyMem_xyz are
deprecated anyway. If somebody else is resizing const areas with the
macro spelling, they'll get a warning when they recompile now too.
it's enabled.
Allow PyObject_Del, PyObject_Free, and PyObject_GC_Del to be used as
function designators. Provide source compatibility macros.
Make PyObject_GC_Track and PyObject_GC_UnTrack functions instead of
trivial macros wrapping functions.
PEP 285. Everything described in the PEP is here, and there is even
some documentation. I had to fix 12 unit tests; all but one of these
were printing Boolean outcomes that changed from 0/1 to False/True.
(The exception is test_unicode.py, which did a type(x) == type(y)
style comparison. I could've fixed that with a single line using
issubtype(x, type(y)), but instead chose to be explicit about those
places where a bool is expected.)
Still to do: perhaps more documentation; change standard library
modules to return False/True from predicates.
This displays stats about the # of arenas, pools, blocks and bytes, to
stderr, both used and reserved but unused.
CAUTION: Because PYMALLOC_DEBUG is on, the debug malloc routine adds
16 bytes to each request. This makes each block appear two size classes
higher than it would be if PYMALLOC_DEBUG weren't on.
So far, playing with this confirms the obvious: there's a lot of activity
in the "small dict" size class, but nothing in the core makes any use of
the 8-byte or 16-byte classes.
Added two new method flags, METH_CLASS and METH_STATIC, for use in a PyMethodDef
descriptor, as used for the tp_methods slot of a type. These new flag
bits are both optional, and mutually exclusive. Most methods will not
use either. These flags are used to create special method types which
exist in the same namespace as normal methods without having to use
tedious construction code to insert the new special method objects in
the type's tp_dict after PyType_Ready() has been called.
If METH_CLASS is specified, the method will represent a class method
like that returned by the classmethod() built-in.
If METH_STATIC is specified, the method will represent a static method
like that returned by the staticmethod() built-in.
These flags may not be used in the PyMethodDef table for modules since
these special method types are not meaningful in that case; a
ValueError will be raised if these flags are found in that context.
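A hedged sketch of what a method table using the new flags might look
like (all names hypothetical):

    #include "Python.h"

    static PyObject *
    spam_identify(PyObject *cls, PyObject *args)   /* METH_CLASS: gets the type */
    {
        return PyObject_Str(cls);
    }

    static PyObject *
    spam_version(PyObject *unused, PyObject *args) /* METH_STATIC: gets NULL */
    {
        return PyString_FromString("1.0");
    }

    static PyMethodDef Spam_methods[] = {
        {"identify", spam_identify, METH_VARARGS | METH_CLASS,
         "Return a string identifying the class."},
        {"version", spam_version, METH_VARARGS | METH_STATIC,
         "Return a (hypothetical) version string."},
        {NULL, NULL, 0, NULL}
    };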
When WITH_PYMALLOC is defined, define PYMALLOC_DEBUG to enable the debug
allocator. This can be done independent of build type (release or debug).
A debug build automatically defines PYMALLOC_DEBUG when pymalloc is
enabled. It's a detected error to define PYMALLOC_DEBUG when pymalloc
isn't enabled.
Two debugging entry points defined only under PYMALLOC_DEBUG:
+ _PyMalloc_DebugCheckAddress(const void *p) can be used (e.g., from gdb)
to sanity-check a memory block obtained from pymalloc. It sprays
info to stderr (see next) and dies via Py_FatalError if the block is
detectably damaged.
+ _PyMalloc_DebugDumpAddress(const void *p) can be used to spray info
about a debug memory block to stderr.
A tiny start at implementing "API family" checks; it isn't good for
anything yet.
_PyMalloc_DebugRealloc() has been optimized to do little when the new
size is <= old size. However, if the new size is larger, it really
can't call the underlying realloc() routine without either violating its
contract, or knowing something non-trivial about how the underlying
realloc() works. A memcpy is always done in this case.
This was a disaster for (and only) one of the std tests: test_bufio
creates single text file lines up to a million characters long. On
Windows, fileobject.c's get_line() uses the horridly funky
getline_via_fgets(), which keeps growing and growing a string object
hoping to find a newline. It grew the string object 1000 bytes each
time, so for a million-character string it took approximately forever
(I gave up after a few minutes).
So, also:
fileobject.c, getline_via_fgets(): When a single line is outrageously
long, grow the string object at a mildly exponential rate, instead of
just 1000 bytes at a time.
That's enough so that a debug-build test_bufio finishes in about 5 seconds
on my Win98SE box. I'm curious to try this on Win2K, because it has very
different memory behavior than Win9X, and test_bufio always took a factor
of 10 longer to complete on Win2K. It *could* be that the endless
reallocs were simply killing it on Win2K even in the release build.
and not pymalloc. Add the functions PyMalloc_New, PyMalloc_NewVar, and
PyMalloc_Del that will use pymalloc if it's enabled. If pymalloc is
not enabled then they use the standard malloc (PyMem_*).
On Windows, some modules are considered (by me, and I don't care what anyone
else thinks about this <wink>) to be part of "the core" despite that they
happen to be compiled into separate DLLs (the "to DLL or not to DLL?"
question on Windows is nearly arbitrary). Making the pymalloc entry
points available to them allows the Windows build to complete without
incident when WITH_PYMALLOC is #define'd.
Note that this isn't unprecedented. Other "private API" functions we
export include _PySequence_IterSearch, _PyEval_SliceIndex, _PyCodec_Lookup,
_Py_ZeroStruct, _Py_TrueStruct, _PyLong_New and _PyModule_Clear.
Another year in the quest to out-guess random C behavior.
Added macros Py_ADJUST_ERANGE1(X) and Py_ADJUST_ERANGE2(X, Y). The latter
is useful for functions with complex results. Two corrections to errno-
after-libm-call are attempted:
1. If the platform set errno to ERANGE due to underflow, clear errno.
Some unknown subset of libm versions and link options do this. It's
allowed by C89, but I never figured anyone would do it.
2. If the platform did not set errno but overflow occurred, force
errno to ERANGE. C89 required setting errno to ERANGE, but C99
doesn't. Some unknown subset of libm versions and link options do
it the C99 way now.
Bugfix candidate, but hold off until some Linux people actually try it,
with and without -lieee. I'll send a help plea to Python-Dev.
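A hedged sketch of the intended pattern in a libm wrapper (function
name hypothetical):

    #include "Python.h"
    #include <errno.h>
    #include <math.h>

    /* After Py_ADJUST_ERANGE1, errno == ERANGE iff genuine overflow. */
    static double
    checked_exp(double x)
    {
        double r;
        errno = 0;
        r = exp(x);
        Py_ADJUST_ERANGE1(r);
        return r;  /* caller checks errno and raises OverflowError */
    }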
On some platforms realloc(p, 0) returns NULL, so MALLOC_ZERO_RETURNS_NULL can
be correctly undefined yet realloc(p, 0) can return NULL anyway.
Prevent realloc(p, 0) doing free(p) and returning NULL via a different
hack. Would probably be better to get rid of MALLOC_ZERO_RETURNS_NULL
entirely.
Bugfix candidate.
alignment gimmick. David Abrahams notes that the standard "long double"
actually requires stricter alignment than "double" on some Tru64 box.
On my box and yours <wink>, it's the same, so no harm done on most
boxes.
the first 3 characters of this string in several places, so for as long
as they remain "2.2" it confuses the heck out of attempts to build 2.3
stuff using distutils.
Removed the ancient "#define ANY void".
Bugfix candidate? Hard call. The bug report claims the existence of
this #define creates conflicts with other packages, which is easy to
believe. OTOH, some extension authors may still be relying on its
presence. I'm afraid you can't win on this one.
PyDict_UpdateFromSeq2(): removed it.
PyDict_MergeFromSeq2(): made it public and documented it.
PyDict_Merge() docs: updated to reveal <wink> that the second
argument can be any mapping object.
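For example, a minimal sketch of building a dict from any iterable of
2-sequences, with the last value winning for duplicate keys
(override != 0):

    #include "Python.h"

    static PyObject *
    dict_from_pairs(PyObject *pairs)
    {
        PyObject *d = PyDict_New();
        if (d == NULL)
            return NULL;
        if (PyDict_MergeFromSeq2(d, pairs, 1) < 0) {
            Py_DECREF(d);
            return NULL;
        }
        return d;
    }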
type.__module__ behavior.
This adds the module name and a dot in front of the type name in every
type object initializer, except for built-in types (and those that
already had this). Note that it touches lots of Mac modules -- I have
no way to test these but the changes look right. Apologies if they're
not. This also touches the weakref docs, which contains a sample type
object initializer. It also touches the mmap test output, because the
mmap type's repr is included in that output. It touches object.h to
put the correct description in a comment.
Big Hammer to implement -Qnew as PEP 238 says it should work (a global
option affecting all instances of "/").
pydebug.h, main.c, pythonrun.c: define a private _Py_QnewFlag flag, true
iff -Qnew is passed on the command line. This should go away (as the
comments say) when true division becomes The Rule. This is
deliberately not exposed to runtime inspection or modification: it's
a one-way one-shot switch to pretend you're using Python 3.
ceval.c: when _Py_QnewFlag is set, treat BINARY_DIVIDE as
BINARY_TRUE_DIVIDE.
test_{descr, generators, zipfile}.py: fiddle so these pass under
-Qnew too. This was just a matter of s!/!//! in test_generators and
test_zipfile. test_descr was trickier, as testbinop() embeds
assumptions that "/" is the same as calling a "__div__" method; I put
a temporary hack there to call "__truediv__" instead when the method
name is "__div__" and 1/2 evaluates to 0.5.
Three standard tests still fail under -Qnew (on Windows; somebody
please try the Linux tests with -Qnew too! Linux runs a whole bunch
of tests Windows doesn't):
test_augassign
test_class
test_coercion
I can't stay awake longer to stare at this (be my guest). Offhand
cures weren't obvious, nor was it even obvious that cures are possible
without major hackery.
Question: when -Qnew is in effect, should calls to __div__ magically
change into calls to __truediv__? See "major hackery" at tail end of
last paragraph <wink>.
There's now a new structmember code, T_OBJECT_EX, which is used for
all __slots__ variables (except __weakref__, which has special behavior
anyway). This new code raises AttributeError when the variable is
NULL rather than converting NULL to None.
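A hedged sketch (hypothetical names) of a member defined with the new
code:

    #include "Python.h"
    #include "structmember.h"
    #include <stddef.h>

    /* Object with a slot-like member that may legitimately be NULL. */
    typedef struct {
        PyObject_HEAD
        PyObject *payload;                /* NULL until first assignment */
    } SpamObject;

    static PyMemberDef Spam_members[] = {
        /* T_OBJECT_EX: reading while NULL raises AttributeError;
           T_OBJECT would have returned None instead. */
        {"payload", T_OBJECT_EX, offsetof(SpamObject, payload), 0,
         "the attached payload object"},
        {NULL, 0, 0, 0, NULL}             /* sentinel */
    };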
PyOS_snprintf() and PyOS_vsnprintf() now use wrappers on all platforms, to make this as consistent as possible x-
platform (in particular, make sure there's at least one \0 byte in
the output buffer). Also document more of the truth about what these do.
getargs.c, seterror(): Three computations of remaining buffer size were
backwards, thus telling PyOS_snprintf the buffer is larger than it
actually is. This matters a lot now that PyOS_snprintf ensures there's a
trailing \0 byte (because it didn't get the truth about the buffer size,
it was storing \0 beyond the true end of the buffer).
sysmodule.c, mywrite(): Simplify, now that PyOS_vsnprintf guarantees to
produce a \0 byte.
const char* instead of char*. The change is conceptually correct, and
indirectly fixes a compiler warning introduced when somebody else innocently
passed a const char* to this function.
RISCOS/Makefile:
include structseq and weakrefobject;
changes to keep command line length below 2048
RISCOS/Modules/riscosmodule.c:
typos from the stat structseq patch
Include/pyport.h:
don't re-#define __attribute__(__x) on RISC OS, as it is already defined in the C library
object.h: Added PyType_CheckExact macro.
typeobject.c, type_new():
+ Use the new macro.
+ Assert that the arguments have the right types rather than do incomplete
runtime checks "sometimes".
+ If this isn't the 1-argument flavor of type, and there aren't 3 args
total, produce a "type() takes 1 or 3 args" msg before
PyArg_ParseTupleAndKeywords produces a "takes exactly 3" msg.
PyObject_CallFunctionObArgs() and PyObject_CallMethodObArgs() have the
advantage that no format strings need to be parsed. The CallMethod
variant also avoids creating a new string object in order to retrieve
a method from an object.
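For example (a sketch; the spelling that eventually stuck in the
public API is PyObject_CallFunctionObjArgs):

    #include "Python.h"

    /* The argument list is a NULL-terminated series of PyObject*s --
       no format string to parse. */
    static PyObject *
    call_two(PyObject *callable, PyObject *a, PyObject *b)
    {
        return PyObject_CallFunctionObjArgs(callable, a, b, NULL);
    }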
At the outer level, the iterator protocol is used for memory-efficiency (the
outer sequence may be very large if fully materialized); at the inner
level, PySequence_Fast() is used for time-efficiency (these should
always be sequences of length 2).
dictobject.c, new functions PyDict_{Merge,Update}FromSeq2. These are
wholly analogous to PyDict_{Merge,Update}, but process a sequence-of-2-
sequences argument instead of a mapping object. For now, I left these
functions file static, so no corresponding doc changes. It's tempting
to change dict.update() to allow a sequence-of-2-seqs argument too.
Also changed the name of dictionary's keyword argument from "mapping"
to "x". Got a better name? "mapping_or_sequence_of_pairs" isn't
attractive, although more so than "mosop" <wink>.
abstract.h, abstract.tex: Added new PySequence_Fast_GET_SIZE function,
much faster than going thru the all-purpose PySequence_Size.
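A hedged sketch of the inner-level pattern (function name
hypothetical):

    #include "Python.h"

    /* PySequence_Fast gives O(1) indexing whether the item was a
       list, tuple, or other sequence; GET_ITEM returns borrowed
       references. */
    static int
    get_pair(PyObject *item, PyObject **key, PyObject **value)
    {
        PyObject *fast = PySequence_Fast(item, "pair must be a 2-sequence");
        if (fast == NULL)
            return -1;
        if (PySequence_Fast_GET_SIZE(fast) != 2) {
            PyErr_SetString(PyExc_ValueError, "pair must have length 2");
            Py_DECREF(fast);
            return -1;
        }
        *key = PySequence_Fast_GET_ITEM(fast, 0);
        *value = PySequence_Fast_GET_ITEM(fast, 1);
        Py_INCREF(*key);
        Py_INCREF(*value);
        Py_DECREF(fast);
        return 0;
    }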
libfuncs.tex:
- Document dictionary().
- Fiddle tuple() and list() to admit that their argument is optional.
- The long-winded repetitions of "a sequence, a container that supports
iteration, or an iterator object" are getting to be a PITA. Many
months ago I suggested factoring this out into "iterable object",
where the definition of that could include being explicit about
generators too (as is, I'm not sure a reader outside of PythonLabs
could guess that "an iterator object" includes a generator call).
- Please check my curly braces -- I'm going blind <0.9 wink>.
abstract.c, PySequence_Tuple(): When PyObject_GetIter() fails, leave
its error msg alone now (the msg it produces has improved since
PySequence_Tuple was generalized to accept iterable objects, and
PySequence_Tuple was also stomping on the msg in cases it shouldn't
have even before PyObject_GetIter grew a better msg).
Define STRICT_SYSV_CURSES when compiling the curses module on HP/UX. Generalize
access to _flags on systems where WINDOW is opaque. Fixes bugs
#432497, #422265, and the curses parts of #467145 and #473150.
Unified the 'slotdef' structure typedef and 'struct wrapperbase'. By adding the
wrapper docstrings to the slotdef structure, the slotdefs array can
serve as the data structure that drives add_operators(); the wrapper
descriptor contains a pointer to slotdef structure. This replaces
lots of custom code from add_operators() by a loop over the slotdefs
array, and does away with all the tab_xxx tables.
This patch implements what we have discussed on python-dev late in
September: str(obj) and unicode(obj) should behave similarly, while
the old behaviour is retained for unicode(obj, encoding, errors).
The patch also adds a new feature with which objects can provide
unicode(obj) with input data: the __unicode__ method. Currently no
new tp_unicode slot is implemented; this is left as option for the
future.
Note that PyUnicode_FromEncodedObject() no longer accepts Unicode
objects as input. The API name already suggests that Unicode
objects do not belong in the list of acceptable objects and the
functionality was only needed because
PyUnicode_FromEncodedObject() was being used directly by
unicode(). The latter was changed in the discussed way:
* unicode(obj) calls PyObject_Unicode()
* unicode(obj, encoding, errors) calls PyUnicode_FromEncodedObject()
One thing left open to discussion is whether to leave the
PyUnicode_FromObject() API as a thin API extension on top of
PyUnicode_FromEncodedObject() or to turn it into a (macro) alias
for PyObject_Unicode() and deprecate it. Doing so would have some
surprising consequences though, e.g. u"abc" + 123 would turn out
as u"abc123"...
[Marc-Andre didn't have time to check this in before the deadline. I
hope this is OK, Marc-Andre! You can still make changes and commit
them on the trunk after the branch has been made, but then please mail
Barry a context diff if you want the change to be merged into the
2.2b1 release branch. GvR]
This is a big one, touching lots of files. Some of the platforms
aren't tested yet. Briefly, this changes the return value of the
os/posix functions stat(), fstat(), statvfs(), fstatvfs(), and the
time functions localtime(), gmtime(), and strptime() from tuples into
pseudo-sequences. When accessed as a sequence, they behave exactly as
before. But they also have attributes like st_mtime or tm_year. The
stat return value, moreover, has a few platform-specific attributes
that are not available through the sequence interface (because
everybody expects the sequence to have a fixed length, these couldn't
be added there). If your platform's struct stat doesn't define
st_blksize, st_blocks or st_rdev, they won't be accessible from Python
either.
(Still missing is a documentation update.)
This changes Pythread_start_thread() to return the thread ID, or -1
for an error. (It's technically an incompatible API change, but I
doubt anyone calls it.)
"for <var> in <testlist> may no longer be a single test followed by
a comma. This solves SF bug #431886. Note that if the testlist
contains more than one test, a trailing comma is still allowed, for
maximum backward compatibility; but this example is not:
    [(x, y) for x in range(10), for y in range(10)]
                              ^
The fix involved creating a new nonterminal 'testlist_safe' whose
definition doesn't allow the trailing comma if there's only one test:
testlist_safe: test [(',' test)+ [',']]
The platform requires 8-byte alignment for doubles, but the GC header
was 12 bytes and that threw off the natural alignment of the double
members of a subtype of complex. The fix puts the GC header into a
union with a double as the other member, to force no-looser-than
double alignment of GC headers. On boxes that require 8-byte alignment
for doubles, this may add pad bytes to the GC header accordingly; ditto
for platforms that *prefer* 8-byte alignment for doubles. On platforms
that don't care, it shouldn't change the memory layout (because the
size of the old GC header is certainly greater than the size of a double
on all platforms, so unioning with a double shouldn't change size or
alignment on such boxes).
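A hedged sketch of the union trick (the real PyGC_Head may differ in
detail):

    /* Overlaying a double forces double-strength alignment of the
       whole header, hence of whatever follows it. */
    union gc_head_sketch {
        struct {
            union gc_head_sketch *gc_next;
            union gc_head_sketch *gc_prev;
            int gc_refs;
        } gc;
        double dummy;   /* force worst-case (double) alignment */
    };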
Each type now keeps a list of weak references to types (its new-style subclasses). Make this
accessible to Python as the function __subclasses__ which returns a
list of types -- we don't want Python programmers to be able to
manipulate the raw list.
In order to make this possible, I also had to add weak reference
support to type objects.
This will eventually be used together with a trap on attribute
assignment for dynamic classes for a major speed-up without losing the
dynamic properties of types: when a __foo__ method is added to a
class, the class and all its subclasses will get an appropriate tp_foo
slot function.
This simplifies the rounding in _PyObject_VAR_SIZE, allows restoring the
pre-rounding calling sequence, and allows some nice little simplifications
in its callers. I'm still making it return a size_t, though.
As Guido suggested, this makes the new subclassing code substantially
simpler. But the mechanics of doing it w/ C macro semantics are a mess,
and _PyObject_VAR_SIZE has a new calling sequence now.
Question: The PyObject_NEW_VAR macro appears to be part of the public API.
Regardless of what it expands to, the notion that it has to round up the
memory it allocates is new, and extensions containing the old
PyObject_NEW_VAR macro expansion (which was embedded in the
PyObject_NEW_VAR expansion) won't do this rounding. But the rounding
isn't actually *needed* except for new-style instances with dict pointers
after a variable-length blob of embedded data. So my guess is that we do
not need to bump the API version for this (as the rounding isn't needed
for anything an extension can do unless it's recompiled anyway). What's
your guess?
Pad memory to properly align the __dict__ pointer in all cases.
gcmodule.c/objimpl.h, _PyObject_GC_Malloc:
+ Added a "padding" argument so that this flavor of malloc can allocate
enough bytes for alignment padding (it can't know this is needed, but
its callers do).
typeobject.c, PyType_GenericAlloc:
+ Allocated enough bytes to align the __dict__ pointer.
+ Sped and simplified the round-up-to-PTRSIZE logic.
+ Added blank lines so I could parse the if/else blocks <0.7 wink>.
Sped up this test dramatically:
class T(tuple): __dynamic__ = 1
t = T(range(1000))
for i in range(1000): tt = tuple(t)
The speedup was about 5x compared to the previous state of CVS (1.7
vs. 8.8, in arbitrary time units). But it's still more than twice as
slow as the same test with __dynamic__ = 0 (0.8).
I'm not sure that I really want to go through the trouble of this kind
of speedup for every slot. Even doing it just for the most popular
slots will be a major effort (the new slot_sq_item is 40+ lines, while
the old one was one line with a powerful macro -- unfortunately the
speedup comes from expanding the macro and doing things in a way
specific to the slot signature).
An alternative that I'm currently considering is sketched in PLAN.txt:
trap setattr on type objects. But this will require keeping track of
all derived types using weak references.
instances).
Also added GC support to various auxiliary types: super, property,
descriptors, wrappers, dictproxy. (Only type objects have a tp_clear
field; the other types do not.)
One change was necessary to the GC infrastructure. We have statically
allocated type objects that don't have a GC header (and can't easily
be given one) and heap-allocated type objects that do have a GC
header. Giving these different metatypes would be really ugly: I
tried, and I had to modify pickle.py, cPickle.c, copy.py, invent a
new name for the new metatype and make it a built-in, change
affected tests... In short, a mess. So instead, we add a new type
slot tp_is_gc, which is a simple Boolean function that determines
whether a particular instance has GC headers or not. This slot is
only relevant for types that have the (new) GC flag bit set. If the
tp_is_gc slot is NULL (by far the most common case), all instances of
the type are deemed to have GC headers. This slot is called by the
PyObject_IS_GC() macro (which is only used twice, both times in
gcmodule.c).
I also changed the extern declarations for a bunch of GC-related
functions (_PyObject_GC_Del etc.): these always exist but objimpl.h
only declared them when WITH_CYCLE_GC was defined, but I needed to be
able to reference them without #ifdefs. (When WITH_CYCLE_GC is not
defined, they do the same as their non-GC counterparts anyway.)
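A hedged sketch of such a predicate, mirroring what typeobject.c
plausibly does for type objects (only heap-allocated types carry a
GC header):

    #include "Python.h"

    /* tp_is_gc: nonzero iff this particular instance has a GC
       header.  Statically allocated types do not. */
    static int
    type_is_gc(PyTypeObject *type)
    {
        return type->tp_flags & Py_TPFLAGS_HEAPTYPE;
    }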
The patch repaired internal gcc compiler errors on BeOS.
This checkin repairs them in a simpler way, by explicitly casting the
platform INFINITY to double.
There's no backwards compatibility to worry about, so I just pushed the
'closure' struct member to the back -- it's never used in the current
code base (I may eliminate it, but that's more work because the getter
and setter signatures would have to change.)
As examples, I added actual docstrings to the getset attributes of a
few types: file.closed, xxsubtype.spamdict.state.
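A hedged sketch (hypothetical names) of a getset entry carrying a
docstring:

    #include "Python.h"

    typedef struct {
        PyObject_HEAD
        int closed;
    } MyFileObject;

    static PyObject *
    myfile_get_closed(MyFileObject *self, void *closure)
    {
        return PyInt_FromLong(self->closed);
    }

    static PyGetSetDef MyFile_getsets[] = {
        {"closed", (getter)myfile_get_closed, NULL,
         "True if the file has been closed",  /* the docstring slot */
         NULL},
        {NULL, NULL, NULL, NULL, NULL}        /* sentinel */
    };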
For backwards compatibility, this required all places where an array of "struct
memberlist" structures was declared that is referenced from a type's
tp_members slot to change the type of the structure to PyMemberDef;
"struct memberlist" is now only used by old code that still calls
PyMember_Get/Set. The code in PyObject_GenericGetAttr/SetAttr now
calls the new APIs PyMember_GetOne/SetOne, which take a PyMemberDef
argument.
As examples, I added actual docstrings to the attributes of a few
types: file, complex, instance method, super, and xxsubtype.spamlist.
Also converted the symtable to new style getattr.
hack, and it's even more disgusting than a PyInstance_Check() call.
If the tp_compare slot is the slot used for overrides in Python,
it's always called.
Add some tests that show what should work too.
Renamed the 'readonly' field to 'flags' and defined some new flag
bits: READ_RESTRICTED and WRITE_RESTRICTED, as well as a shortcut
RESTRICTED that means both.
tuple(i) repaired to return a true tuple when i is an instance of a
tuple subclass.
Added PyTuple_CheckExact macro.
PySequence_Tuple(): if a tuple-like object isn't exactly a tuple, it's
not safe to return the object as-is -- make a new tuple of it instead.
Given an immutable type M, and an instance I of a subclass of M, the
constructor call M(I) was just returning I as-is; but it should return a
new instance of M. This fixes it for M in {int, long}. Strings, floats
and tuples remain to be done.
Added new macros PyInt_CheckExact and PyLong_CheckExact, to more easily
distinguish between "is" and "is a" (i.e., only an int passes
PyInt_CheckExact, while any subclass of int passes PyInt_Check).
Added private API function _PyLong_Copy.
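The distinction in a nutshell (sketch):

    #include "Python.h"

    static const char *
    classify(PyObject *op)
    {
        if (PyInt_CheckExact(op))
            return "exactly an int";
        if (PyInt_Check(op))
            return "an instance of an int subclass";
        return "not an int";
    }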
iterable object. I'm not sure how that got overlooked before!
Got rid of the internal _PySequence_IterContains, introduced a new
internal _PySequence_IterSearch, and rewrote all the iteration-based
"count of", "index of", and "is the object in it or not?" routines to
just call the new function. I suppose it's slower this way, but the
code duplication was getting depressing.
While not even documented, they were clearly part of the C API,
there's no great difficulty in supporting them, and it has the cool
effect of not requiring any changes to ExtensionClass.c.