The embed2.diff patch solves the user's problem by exporting the missing
symbols from the Python core so Python can be embedded in another Cygwin
application (well, at least vim).
Added a new API (only accessible from C) to interrupt a thread by
sending it an exception. This is not always effective, but might help
some people.
Requested by Just van Rossum and Alex Martelli. It is intentional
that you have to write your own C extension to call it from Python.
Docs will have to wait.
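For illustration, a minimal extension along these lines might look like
the sketch below. The new call is assumed here to be
PyThreadState_SetAsyncExc(long, PyObject *), returning the number of
thread states modified; the module and function names are hypothetical.

    #include "Python.h"

    /* Schedule an exception to be raised in the thread with the
       given id.  Returns the number of thread states modified
       (0 if the id was not found). */
    static PyObject *
    async_raise(PyObject *self, PyObject *args)
    {
        long id;
        PyObject *exc;
        if (!PyArg_ParseTuple(args, "lO:async_raise", &id, &exc))
            return NULL;
        return PyInt_FromLong(PyThreadState_SetAsyncExc(id, exc));
    }

    static PyMethodDef methods[] = {
        {"async_raise", async_raise, METH_VARARGS},
        {NULL, NULL}
    };

    void
    initasyncexc(void)
    {
        Py_InitModule("asyncexc", methods);
    }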
the purpose. Increased my claim to two bits, hoping that nobody
will complain about it. I'm taking the highest two bits, whatever
the integer word size may be.
The compiler was resetting the list comprehension tmpname counter for
each function, but the symtable was using the same counter for the
entire module. Repaired by moving tmpname into the symtable entry.
Bugfix candidate.
* Can now test for basic blocks.
* Optimize inverted comparisons (sketched below).
* Optimize UNARY_NOT followed by a conditional jump.
* Added a new opcode, NOP, to keep code size constant.
* Applied NOP to previous transformations where appropriate.
Note, the NOP would not be necessary if other functions were
added to re-target jump addresses and update the co_lnotab mapping.
That would yield slightly faster and cleaner bytecode at the
expense of optimizer simplicity and of keeping it decoupled
from the line-numbering structure.
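For example, the inverted-comparison case might look roughly like this
inside the optimizer's opcode loop (the loop and the GETARG/SETARG
helpers are assumed context; COMPARE_OP, UNARY_NOT, NOP and the
PyCmp_* values are the real opcode-level names):

    case COMPARE_OP:
        /* Turn "not (a is b)" into "a is not b".  The now-redundant
           UNARY_NOT becomes a NOP, so code size, jump targets, and
           co_lnotab all remain valid. */
        j = GETARG(codestr, i);
        if (j == PyCmp_IS && codestr[i+3] == UNARY_NOT) {
            SETARG(codestr, i, PyCmp_IS_NOT);
            codestr[i+3] = NOP;
        }
        break;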
New private API function _Py_PrintReferenceAddresses(): Prints only the
addresses and refcnts of the live objects. This is always safe to call,
because it has no dependence on Python's C API.
Py_Finalize(): If envar PYTHONDUMPREFS is set, call (the new)
_Py_PrintReferenceAddresses() right before dumping final pymalloc stats.
We can't print the reprs of the objects here because too much of the
interpreter has been shut down. You need to correlate the addresses
displayed here with the object reprs printed by the earlier
PYTHONDUMPREFS call to _Py_PrintReferences().
New functions:
unsigned long PyInt_AsUnsignedLongMask(PyObject *);
unsigned PY_LONG_LONG PyInt_AsUnsignedLongLongMask(PyObject *);
unsigned long PyLong_AsUnsignedLongMask(PyObject *);
unsigned PY_LONG_LONG PyLong_AsUnsignedLongLongMask(PyObject *);
New and changed format codes:
b     unsigned char        0..UCHAR_MAX
B     unsigned char        none **
h     unsigned short       0..USHRT_MAX
H     unsigned short       none **
i     int                  INT_MIN..INT_MAX
I *   unsigned int         0..UINT_MAX
l     long                 LONG_MIN..LONG_MAX
k *   unsigned long        none
L     long long            LLONG_MIN..LLONG_MAX
K *   unsigned long long   none
Notes:
* New format codes.
** Changed from previous "range-and-a-half" to "none"; the
range-and-a-half checking wasn't particularly useful.
New test test_getargs2.py, to verify all this.
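A sketch of an extension function using the new codes (the function is
hypothetical; the checking behavior in the comments follows the table
above):

    #include "Python.h"

    static PyObject *
    take_unsigned(PyObject *self, PyObject *args)
    {
        unsigned int u;
        unsigned long ul;
        unsigned PY_LONG_LONG ull;

        /* Per the table: "I" range-checks 0..UINT_MAX, while "k"
           and "K" do no range checking and just keep the bits that
           fit (the *Mask functions above supply that behavior). */
        if (!PyArg_ParseTuple(args, "IkK", &u, &ul, &ull))
            return NULL;
        return PyLong_FromUnsignedLongLong(ull);
    }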
Arranged that all the objects exposed by __builtin__ appear in the list
of all objects. I basically peed away two days tracking down a mystery
leak in sys.gettotalrefcount() in a ZODB app (== tons of code), because
the object leaking the references didn't appear in the sys.getobjects(0)
list. The object happened to be False. Now False is in the list, along
with other popular & previously missing leak candidates (like None).
Alas, we still don't have a choke point covering *all* Python objects,
so the list of all objects may still be incomplete.
Added _Py_AddToAllObjects(), which simply inserts an object at the front of
the doubly-linked list of all objects. Changed PyType_Ready() (the
closest thing we've got to a choke point for type objects) to call
that.
The Unicode implementation stored internal data in global variables.
As a result, any attempts to use the
unicode system with multiple active interpreters, or successive
interpreter executions, would fail.
Now that information is stored in members of the PyInterpreterState
structure.
A program ending with an indented code block but no newline would raise
SyntaxError.
This would have been a four-line change in parsetok.c... Except
codeop.py depends on this behavior, so a compilation flag had to be
invented that causes the tokenizer to revert to the old behavior;
this required extra changes to 2 .h files, 2 .c files, and 2 .py
files. (Fixes SF bug #501622.)
-DCALL_PROFILE: Count the number of function calls executed.
When this symbol is defined, the ceval mainloop and helper functions
count the number of function calls made. It keeps detailed statistics
about what kind of object was called and whether the call hit any of
the special fast paths in the code.
Optimization:
When we take the fast_function() path, which seems to be taken for
most function calls, and there is minimal frame setup to do, avoid
calling PyEval_EvalCodeEx(). That function does a lot of work to
handle keyword args and star args, free variables, generators, etc.
The inlined version simply allocates the frame and copies the
argument values into the frame.
The optimization gets a little help from compile.c which adds a
CO_NOFREE flag to code objects that don't have free variables or cell
variables. This change allows fast_function() to get into the fast
path with fewer tests.
I measure a couple of percent speedup in pystone with this change, but
there's surely more that can be done.
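A sketch (local names assumed) of the test that lets fast_function()
skip PyEval_EvalCodeEx() entirely:

    #include "Python.h"
    #include "compile.h"    /* PyCodeObject and the CO_* flags */

    static int
    can_inline_call(PyCodeObject *co, PyObject *argdefs, int n, int nk)
    {
        return (argdefs == NULL            /* no default arguments */
                && nk == 0                 /* no keyword arguments */
                && co->co_argcount == n    /* exact positional match */
                /* CO_NOFREE: no free or cell variables, so no
                   closure setup is needed */
                && co->co_flags ==
                       (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE));
    }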
__module__ is the string name of the module the function was defined
in, just like __module__ of classes. In some cases, particularly for
C functions, the __module__ may be None.
Change PyCFunction_New() from a function to a macro, but keep an
unused copy of the function around so that we don't change the binary
API.
Change pickle's save_global() to use whichmodule() if __module__ is
None, but add the __module__ logic to whichmodule() since it might be
used outside of pickle.
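The binary-compatibility trick, sketched (the three-argument spelling
PyCFunction_NewEx() is an assumption here):

    /* In the header, the old name becomes a macro... */
    #define PyCFunction_New(ML, SELF) \
        PyCFunction_NewEx((ML), (SELF), NULL)

    /* ...while the old entry point is still compiled, so binaries
       linked against the function keep working. */
    #undef PyCFunction_New
    PyObject *
    PyCFunction_New(PyMethodDef *ml, PyObject *self)
    {
        return PyCFunction_NewEx(ml, self, NULL);
    }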
needs of pickling longs. Backed off to a definition that's much easier
to understand. The pickler will have to work a little harder, but other
uses are more likely to be correct <0.5 wink>.
_PyLong_Sign(): New teensy function to characterize a long, as to <0, ==0,
or >0.
This is a start for the C implementation of the new pickle LONG1 and
LONG4 opcodes (the
linear-time way to pickle a long is to call _PyLong_AsByteArray, but
the caller has no idea how big an array to allocate, and correct
calculation is a bit subtle).
Added support for borrowed AEDesc objects: the AEDesc data shouldn't be
disposed when the Python object is.
Added a C call AEDesc_NewBorrowed() to create these objects and a Python
method old=AEDesc.AutoDispose(onoff) to change auto-dispose state.
hoped it would be, but not too bad. A test had to change:
time.__setstate__() can no longer add a non-None tzinfo member to a time
object that didn't already have one, since storage for a tzinfo member
doesn't exist in that case.
The attached patch enables shared extension
modules to build cleanly under Cygwin without
moving the static initialization of certain function
pointers (i.e., ones exported from the Python
DLL core) to a module initialization function.
Additionally, this patch fixes the modules that
have been changed in the past to accommodate
Cygwin.
Initialize the small integers and __builtins__ in startup.
This removes some if conditions.
Change XDECREF to DECREF for values which shouldn't be NULL.
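The difference, in a nutshell:

    Py_XDECREF(op);   /* safe even if op == NULL */
    Py_DECREF(op);    /* op must not be NULL; skips the NULL test */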
- new import hooks in import.c, exposed in the sys module
- new module called 'zipimport'
- various changes to allow bootstrapping from zip files
I hope I didn't break the Windows build (or anything else for that
matter), but then again, it's been sitting on sf long enough...
Regarding the latest discussions on python-dev: zipimport sets
pkg.__path__ as specified in PEP 273, and likewise, sys.path items such as
/path/to/Archive.zip/subdir/ are supported again.
FreeBSD enables some FP exceptions by default. This can cause core
dumps when Python runs. Python relies on the 754-
(and C99-) mandated default "non-stop" mode for FP exceptions. This
patch from Ben Laurie disables at least one FP exception on FreeBSD at
Python startup time.
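The idea, sketched (FreeBSD-specific; the exact exception mask the
patch clears is an assumption):

    #include <ieeefp.h>

    static void
    disable_fpe_traps(void)
    {
        /* Stop trapping on overflow, restoring the 754 "non-stop"
           behavior that Python expects. */
        fpsetmask(fpgetmask() & ~FP_X_OFL);
    }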
Most of these patches are from Thomas Heller, with long lines folded
by Tim. The change to test_descr.py is from Guido. See the bug report.
Not a bugfix candidate -- METH_CLASS is new in 2.3.
On some systems, the system headers have two declarations for wchar_t,
with different guard macros.
Not sure whether this is a bugfix candidate; that depends on what changed
in the curses module.
Fix for the "Py_Init crash" bug report: refchain cannot be cleared because
objects can live across
Py_Finalize() and Py_Initialize() if they are kept alive by circular
references.
the end, in the hope of saving some bytes on 64-bit machines. (Too
bad n_nchildren can't be made an unsigned short, but
test/test_longexp.py specifically tests for more than 2**16 subtrees
at one level.)
I don't expect any binary compatibility issues here, unless someone
has an old binary of parsermodule.so saved away.
The check interval is now tracked by two globals, _Py_Ticker and
_Py_CheckInterval. This also implements Jeremy's
shortcut in Py_AddPendingCall that zeroes out _Py_Ticker. This allows the
test in the main loop to only test a single value.
The gory details are at
http://python.org/sf/602191
Use a slightly different strategy to determine when not to call the line
trace function. This removes the need for the RETURN_NONE opcode, so
that's gone again. Update docs and comments to match.
Thanks to Neal and Armin!
Also add a test suite. This should have come with the original patch...
A new implementation of string interning. I modified Oren's patch
significantly, but the basic idea
and most of the implementation is unchanged. Interned strings created
with PyString_InternInPlace() are now mortal, and you must keep a
reference to the resulting string around; use the new function
PyString_InternImmortal() to create immortal interned strings.
[ 587993 ] SET_LINENO killer
Remove SET_LINENO. Tracing is now supported by inspecting co_lnotab.
Many sundry changes to document and adapt to this change.
subtype_dealloc() used to do a lot of work: it had to save and restore
the current exception around
a call to lookup_maybe(), because that could fail in rare cases, and
most objects don't have a __del__ method, so the whole exercise was
usually a waste of time. Changed this to cache the __del__ method in
the type object just like all other special methods, in a new slot
tp_del. So now subtype_dealloc() can test whether tp_del is NULL and
skip the whole exercise if it is. The new slot doesn't need a new
flag bit: subtype_dealloc() is only called if the type was dynamically
allocated by type_new(), so it's guaranteed to have all current slots.
Types defined in C cannot fill in tp_del with a function of their own,
so there's no corresponding "wrapper". (That functionality is already
available through tp_dealloc.)
For a file f, iter(f) now returns f (unless f is closed), and f.next()
is similar to f.readline() when EOF is not reached; however, f.next()
uses a readahead buffer that messes up the file position, so mixing
f.next() and f.readline() (or other methods) doesn't work right.
Calling f.seek() drops the readahead buffer, but other operations
don't.
The real purpose of this change is to reduce the confusion between
objects and their iterators. By making a file its own iterator, it's
made clearer that using the iterator modifies the file object's state
(in particular the current position).
A nice side effect is that this speeds up "for line in f:" by not
having to use the xreadlines module. The f.xreadlines() method is
still supported for backwards compatibility, though it is the same as
iter(f) now.
(I made some cosmetic changes to Oren's code, and added a test for
"file closed" to file_iternext() and file_iter().)
Find the actual script to run in case we are running from an applet. If
we are indeed running an applet, we skip the normal option processing,
leaving it all to the applet code.
This allows us to use the normal python binary in the Python.app bundle,
giving us all the normal command line options through PythonLauncher while
still allowing Python.app to be used as the template for building applets.
Consequently, pythonforbundle is gone, and Mac/Python/macmain.c isn't used
on OSX anymore.
New functions PyErr_SetExcFromWindowsErr() and
PyErr_SetExcFromWindowsErrWithFilename(). Similar to
PyErr_SetFromWindowsErrWithFilename() and PyErr_SetFromWindowsErr(),
but they allow specifying the exception type to raise. Available on
Windows.
See SF patch #576458.
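A sketch of the intended use (the wrapper is hypothetical; an error
code of 0 means "use GetLastError()"):

    #include "Python.h"
    #include <windows.h>

    static PyObject *
    copy_file(PyObject *self, PyObject *args)
    {
        char *src, *dst;
        if (!PyArg_ParseTuple(args, "ss", &src, &dst))
            return NULL;
        if (!CopyFileA(src, dst, FALSE))
            /* Raise IOError (caller's choice) instead of the
               generic WindowsError. */
            return PyErr_SetExcFromWindowsErrWithFilename(
                PyExc_IOError, 0, dst);
        Py_INCREF(Py_None);
        return Py_None;
    }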
This gets compilation of posixmodule.c to succeed on Tru64 and does no
harm on Linux. We may need to undefine it on some platforms, but
let's wait and see.
Martin says:
> I think it is generally the right thing to define _XOPEN_SOURCE on
> Unix, providing a negative list of systems that cannot support this
> setting (or preferably solving whatever problems remain).
>
> I'd put an (unconditional) AC_DEFINE into configure.in early on; it
> *should* go into confdefs.h as configure proceeds, and thus be active
> when other tests are performed.
The staticforward define was needed to support certain broken C
compilers (notably SCO ODT 3.0, perhaps early AIX as well) that botched
the static keyword when it was used with a forward declaration of a
static initialized structure. Standard C allows the forward declaration
with static, and we've decided to stop catering to broken C compilers.
(In fact, we expect that the compilers have all been fixed eight years
later.)
I'm leaving staticforward and statichere defined in object.h as
static. This is only for backwards compatibility with C extensions
that might still use it.
XXX I haven't updated the documentation.
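Before and after, for a typical type object (initializers elided):

    #include "Python.h"

    /* Old spelling, catering to the broken compilers: */
    staticforward PyTypeObject Old_Type;
    statichere PyTypeObject Old_Type = { 0 /* ... */ };

    /* Standard C, fine all along: */
    static PyTypeObject New_Type;
    static PyTypeObject New_Type = { 0 /* ... */ };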
Renamed some helper macros to something saner, and used them appropriately
in other
files too, to reduce #ifdef blocks.
classobject.c, instance_dealloc(): One of my worst Python Memories is
trying to fix this routine a few years ago when COUNT_ALLOCS was defined
but Py_TRACE_REFS wasn't. The special-build code here is way too
complicated. Now it's much simpler. Difference: in a Py_TRACE_REFS
build, the instance is no longer in the doubly-linked list of live
objects while its __del__ method is executing, and that may be visible
via sys.getobjects() called from a __del__ method. Tough -- the object
is presumed dead while its __del__ is executing anyway, and not calling
_Py_NewReference() at the start allows enormous code simplification.
typeobject.c, call_finalizer(): The special-build instance_dealloc()
pain apparently spread to here too via cut-'n-paste, and this is much
simpler now too. In addition, I didn't understand why this routine
was calling _PyObject_GC_TRACK() after a resurrection, since there's no
plausible way _PyObject_GC_UNTRACK() could have been called on the
object by this point. I suspect it was left over from pasting the
instance_dealloc() code. Instead asserted that the object is still
tracked. Caution: I suspect we don't have a test that actually
exercises the subtype_dealloc() __del__-resurrected-me code.
Reworked the special-build macros so that their uses can be prettier.
I've come to despise the names I picked
for these things, though, and expect to change all of them -- I changed
a bunch of other files to use them (replacing #ifdef blocks), but the
names were so obscure out of context that I backed that all out again.
Added more trivial lexical helper macros so that uses of these guys expand
to nothing at all when they're not enabled. This should help sub-
standard compilers that can't do a good job of optimizing away the
previous "(void)0" expressions.
Py_DECREF: There's only one definition of this now. Yay! That
was the last one in the family defined multiple times in an #ifdef
maze.
Py_FatalError(): Changed the char* signature to const char*.
_Py_NegativeRefcount(): New helper function for the Py_REF_DEBUG
expansion of Py_DECREF. Calling an external function cuts down on
the volume of generated code. The previous inline expansion of abort()
didn't work as intended on Windows (the program often kept going, and
the error msg scrolled off the screen unseen). _Py_NegativeRefcount
calls Py_FatalError instead, which captures our best knowledge of
how to abort effectively across platforms.
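For reference, the surviving definition has roughly this shape (the
helper-macro spellings are assumed; they expand to nothing in a
release build):

    #define Py_DECREF(op)                               \
        if (_Py_DEC_REFTOTAL  _Py_REF_DEBUG_COMMA       \
            --(op)->ob_refcnt != 0)                     \
            _Py_CHECK_REFCNT(op)                        \
        else                                            \
            _Py_Dealloc((PyObject *)(op))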
Documented the things that have taken me "too long" to reverse-engineer
over the years.
Vastly reduced the nesting level and redundancy of #ifdef-ery.
Took a light stab at repairing comments that are no longer true.
sys_gettotalrefcount(): Changed to enable under Py_REF_DEBUG.
It was enabled under Py_TRACE_REFS, which was much heavier than
necessary. sys.gettotalrefcount() is now available in a
Py_REF_DEBUG-only build.
The trashcan mechanism is no longer evil: it no longer plays dangerous
games with
the type pointer or refcounts, and objects in extension modules can play
along too without needing to edit the core first.
Rewrote all the comments to explain this, and (I hope) give clear
guidance to extension authors who do want to play along. Documented
all the functions. Added more asserts (it may no longer be evil, but
it's still dangerous <0.9 wink>). Rearranged the generated code to
make it clearer, and to tolerate either the presence or absence of a
semicolon after the macros. Rewrote _PyTrash_destroy_chain() to call
tp_dealloc directly; it was doing a Py_DECREF again, and that has all
sorts of obscure distorting effects in non-release builds (Py_DECREF
was already called on the object!). Removed Christian's little "embedded
change log" comments -- that's what checkin messages are for, and since
it was impossible to correlate the comments with the code that changed,
I found them merely distracting.
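How a (hypothetical) extension type can now play along, using the
documented macros:

    #include "Python.h"

    typedef struct {
        PyObject_HEAD
        PyObject *contained;   /* may head an arbitrarily deep chain */
    } MyObject;

    static void
    mytype_dealloc(MyObject *op)
    {
        PyObject_GC_UnTrack(op);
        Py_TRASHCAN_SAFE_BEGIN(op)
        /* If the nesting is too deep, this body is skipped and op is
           queued for later destruction instead of recursing down the
           C stack. */
        Py_XDECREF(op->contained);
        PyObject_GC_Del(op);
        Py_TRASHCAN_SAFE_END(op)
    }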
This was mostly a matter of adding comments and light code rearrangement.
Upon untracking, gc_next is still set to NULL. It's a cheap way to
provoke memory faults if calling code is insane. It's also used in some
way by the trashcan mechanism.
Every object should now have a well-defined gc_refs value, with clear
transitions
among gc_refs states. As a result, none of the visit_XYZ traversal
callbacks need to check IS_TRACKED() anymore, and those tests were removed.
(They were already looking for objects with specific gc_refs states, and
the gc_refs state of an untracked object can no longer match any other
gc_refs state by accident.)
Added more asserts.
I expect that the gc_next == NULL indicator for an untracked object is
now redundant and can also be removed, but I ran out of time for this.
[ 400998 ] experimental support for extended slicing on lists
This is the patch, somewhat spruced up and better tested than it was when
I wrote it.
Includes docs & tests. The whatsnew section needs expanding, and arrays
should support extended slices -- later.
This patch complies with the following request found
near the top of configure.in:
# This is for stuff that absolutely must end up in pyconfig.h.
# Please use pyport.h instead, if possible.
I tested this patch under Cygwin, Win32, and Red
Hat Linux. Python built and ran successfully on
each of these platforms.
Added a common abstract base class for 'str' and 'unicode'; it can be
used instead of types.StringTypes, e.g. to test whether something is
"a string":
isinstance(x, string) is True for Unicode and 8-bit strings. This
is an abstract base class and cannot be instantiated directly.
The old syntax suggested that a trailing comma was OK inside backticks,
but in fact (due to idiosyncrasies of pgen) it was not. Fix the grammar
to avoid the ambiguity. Fred: you may want to update the refman.
As threatened, PyMem_{Free, FREE} also invoke the object deallocator now
when pymalloc is enabled (well, it does when pymalloc isn't enabled too,
but in that case "the object deallocator" is plain free()).
This is maximally backward-compatible, but it leaves a bitter aftertaste.
Also massive reworking of comments.
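The backward-compatible pattern this preserves (type names are
hypothetical):

    MyObject *op = PyObject_New(MyObject, &MyObject_Type);
    if (op == NULL)
        return NULL;
    /* ... */
    PyMem_FREE(op);   /* now reaches the object deallocator, so this
                         mismatch keeps working with pymalloc on */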
http://www.python.org/sf/444708
This adds the optional argument for str.strip
to unicode.strip too and makes it possible
to call str.strip with a unicode argument
and unicode.strip with a str argument.
+ Redirect PyMem_{Del, DEL} to the object allocator's free() when
pymalloc is enabled. Needed so old extensions can continue to
mix PyObject_New with PyMem_DEL.
+ This implies that pgen needs to be able to see the PyObject_XYZ
declarations too. pgenheaders.h now includes Python.h. An
implication is that I expect obmalloc.o needs to get linked into
pgen on non-Windows boxes.
+ When PYMALLOC_DEBUG is defined, *all* Py memory API functions
now funnel through the debug allocator wrapper around pymalloc.
This is the default in a debug build.
+ That caused compile.c to fail: it indirectly mixed PyMem_Malloc
with raw platform free() in one place. This is verboten.
+ Continued looping until n bytes in the buffer have been filled, not
just when n bytes have been read from the file. This repairs the
bug that f.readlines() only sucked up the first 8192 bytes of the file
on Windows when universal newlines was enabled and f was opened in
U mode (see Python-Dev -- this was the ultimate cause of the
test_inspect.py failure).
+ Changed the prototype to take a char* buffer (void* doesn't make much sense).
+ Squashed size_t vs int mismatches (in particular, besides the unsigned
vs signed distinction, size_t may be larger than int).
+ Gets out under all error conditions now (it's possible for fread() to
suffer an error even if it returns a number larger than 0 -- any
"short read" is an error or EOF condition).
+ Rearranged and simplified declarations.
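A sketch of the keep-filling loop from the first item (variable names
assumed):

    size_t ntodo = n;
    char *p = buffer;
    while (ntodo > 0) {
        size_t nread = fread(p, 1, ntodo, fp);
        if (nread == 0) {
            if (ferror(fp)) {
                /* error: report it to the caller */
            }
            break;          /* EOF (or error): stop looping */
        }
        p += nread;         /* a short read just means "go again" */
        ntodo -= nread;
    }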
Highlights: import and friends will understand any of \r, \n and \r\n
as end of line. Python file input will do the same if you use mode 'U'.
Everything can be disabled by configuring with --without-universal-newlines.
See PEP 278 for details.
Added code to call this when PYMALLOC_DEBUG is enabled, and envar
PYTHONMALLOCSTATS is set, whenever a new arena is obtained and once
late in the Python shutdown process.
PyMem_{Del, DEL} doesn't work yet (compilation problems).
pyport.h: _PyMem_EXTRA is gone.
pymem.h: Repaired comments. PyMem_{Malloc, MALLOC} and
PyMem_{Realloc, REALLOC} now make the same x-platform guarantees when
asking for 0 bytes, and when passing a NULL pointer to the latter.
object.c: PyMem_{Malloc, Realloc} just call their macro versions
now, since the latter take care of the x-platform 0 and NULL stuff
by themselves.
pypcre.c, grow_stack(): So sue me. On two lines, this called
PyMem_RESIZE to grow a "const" area. It's not legit to realloc a
const area, so the compiler warned given the new expansion of
PyMem_RESIZE. It would have gotten the same warning before if it
had used PyMem_Resize() instead; the older macro version, but not the
function version, silently cast away the constness. IMO that was a wrong
thing to do, and the docs say the macro versions of PyMem_xyz are
deprecated anyway. If somebody else is resizing const areas with the
macro spelling, they'll get a warning when they recompile now too.
it's enabled.
Allow PyObject_Del, PyObject_Free, and PyObject_GC_Del to be used as
function designators. Provide source compatibility macros.
Make PyObject_GC_Track and PyObject_GC_UnTrack functions instead of
trivial macros wrapping functions.
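So things like this are legal now:

    #include "Python.h"

    /* Taking the address of a real function; a macro couldn't be
       used this way (e.g. to fill a type's tp_free slot). */
    static freefunc release    = PyObject_Free;
    static freefunc release_gc = PyObject_GC_Del;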
PEP 285. Everything described in the PEP is here, and there is even
some documentation. I had to fix 12 unit tests; all but one of these
were printing Boolean outcomes that changed from 0/1 to False/True.
(The exception is test_unicode.py, which did a type(x) == type(y)
style comparison. I could've fixed that with a single line using
issubtype(x, type(y)), but instead chose to be explicit about those
places where a bool is expected.)
Still to do: perhaps more documentation; change standard library
modules to return False/True from predicates.
This displays stats about the # of arenas, pools, blocks and bytes, to
stderr, both used and reserved but unused.
CAUTION: Because PYMALLOC_DEBUG is on, the debug malloc routine adds
16 bytes to each request. This makes each block appear two size classes
higher than it would be if PYMALLOC_DEBUG weren't on.
So far, playing with this confirms the obvious: there's a lot of activity
in the "small dict" size class, but nothing in the core makes any use of
the 8-byte or 16-byte classes.
Added two new flags, METH_CLASS and METH_STATIC, to the PyMethodDef
descriptor, as used for the tp_methods slot of a type. These new flag
bits are both optional, and mutually exclusive. Most methods will not
use either. These flags are used to create special method types which
exist in the same namespace as normal methods without having to use
tedious construction code to insert the new special method objects in
the type's tp_dict after PyType_Ready() has been called.
If METH_CLASS is specified, the method will represent a class method
like that returned by the classmethod() built-in.
If METH_STATIC is specified, the method will represent a static method
like that returned by the staticmethod() built-in.
These flags may not be used in the PyMethodDef table for modules since
these special method types are not meaningful in that case; a
ValueError will be raised if these flags are found in that context.
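A sketch of both flags in use (the type and method names are
hypothetical):

    static PyObject *
    from_count(PyObject *cls, PyObject *args)
    {
        /* METH_CLASS: the first argument is the type object, as
           with the classmethod() built-in. */
        long n;
        if (!PyArg_ParseTuple(args, "l", &n))
            return NULL;
        return PyObject_CallFunction(cls, "(l)", n);
    }

    static PyObject *
    helper(PyObject *unused, PyObject *args)
    {
        /* METH_STATIC: neither instance nor type is passed. */
        Py_INCREF(Py_None);
        return Py_None;
    }

    static PyMethodDef Thing_methods[] = {
        {"from_count", from_count, METH_VARARGS | METH_CLASS,
         "Alternate constructor (a class method)."},
        {"helper", helper, METH_VARARGS | METH_STATIC,
         "A static method."},
        {NULL, NULL}
    };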
When WITH_PYMALLOC is defined, define PYMALLOC_DEBUG to enable the debug
allocator. This can be done independent of build type (release or debug).
A debug build automatically defines PYMALLOC_DEBUG when pymalloc is
enabled. It's a detected error to define PYMALLOC_DEBUG when pymalloc
isn't enabled.
Two debugging entry points defined only under PYMALLOC_DEBUG:
+ _PyMalloc_DebugCheckAddress(const void *p) can be used (e.g., from gdb)
to sanity-check a memory block obtained from pymalloc. It sprays
info to stderr (see next) and dies via Py_FatalError if the block is
detectably damaged.
+ _PyMalloc_DebugDumpAddress(const void *p) can be used to spray info
about a debug memory block to stderr.
A tiny start at implementing "API family" checks; it isn't good for
anything yet.
_PyMalloc_DebugRealloc() has been optimized to do little when the new
size is <= old size. However, if the new size is larger, it really
can't call the underlying realloc() routine without either violating its
contract, or knowing something non-trivial about how the underlying
realloc() works. A memcpy is always done in this case.
This was a disaster for (and only) one of the std tests: test_bufio
creates single text file lines up to a million characters long. On
Windows, fileobject.c's get_line() uses the horridly funky
getline_via_fgets(), which keeps growing and growing a string object
hoping to find a newline. It grew the string object 1000 bytes each
time, so for a million-character string it took approximately forever
(I gave up after a few minutes).
So, also:
fileobject.c, getline_via_fgets(): When a single line is outrageously
long, grow the string object at a mildly exponential rate, instead of
just 1000 bytes at a time.
That's enough so that a debug-build test_bufio finishes in about 5 seconds
on my Win98SE box. I'm curious to try this on Win2K, because it has very
different memory behavior than Win9X, and test_bufio always took a factor
of 10 longer to complete on Win2K. It *could* be that the endless
reallocs were simply killing it on Win2K even in the release build.
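The growth rule might look something like this sketch (exact constants
are assumed; v is the string object being grown):

    size_t incr = total_size >> 2;   /* grow by ~25%... */
    if (incr < 1000)
        incr = 1000;                 /* ...but never less than before */
    total_size += incr;
    if (_PyString_Resize(&v, (int)total_size) < 0)
        return NULL;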
and not pymalloc. Add the functions PyMalloc_New, PyMalloc_NewVar, and
PyMalloc_Del that will use pymalloc if it's enabled. If pymalloc is
not enabled then they use the standard malloc (PyMem_*).
On Windows some modules are considered (by me, and I don't care what anyone
else thinks about this <wink>) to be part of "the core" despite that they
happen to be compiled into separate DLLs (the "to DLL or not to DLL?"
question on Windows is nearly arbitrary). Making the pymalloc entry
points available to them allows the Windows build to complete without
incident when WITH_PYMALLOC is #define'd.
Note that this isn't unprecedented. Other "private API" functions we
export include _PySequence_IterSearch, _PyEval_SliceIndex, _PyCodec_Lookup,
_Py_ZeroStruct, _Py_TrueStruct, _PyLong_New and _PyModule_Clear.
Another year in the quest to out-guess random C behavior.
Added macros Py_ADJUST_ERANGE1(X) and Py_ADJUST_ERANGE2(X, Y). The latter
is useful for functions with complex results. Two corrections to errno-
after-libm-call are attempted:
1. If the platform set errno to ERANGE due to underflow, clear errno.
Some unknown subset of libm versions and link options do this. It's
allowed by C89, but I never figured anyone would do it.
2. If the platform did not set errno but overflow occurred, force
errno to ERANGE. C89 required setting errno to ERANGE, but C99
doesn't. Some unknown subset of libm versions and link options do
it the C99 way now.
Bugfix candidate, but hold off until some Linux people actually try it,
with and without -lieee. I'll send a help plea to Python-Dev.
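The intended call pattern in a libm wrapper looks like this sketch
(the surrounding function is assumed):

    errno = 0;
    result = exp(x);
    Py_ADJUST_ERANGE1(result);       /* repair errno per 1. and 2. */
    if (errno == ERANGE)
        return PyErr_SetFromErrno(PyExc_OverflowError);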
On some platforms realloc(p, 0) returns NULL, so MALLOC_ZERO_RETURNS_NULL can
be correctly undefined yet realloc(p, 0) can return NULL anyway.
Prevent realloc(p, 0) doing free(p) and returning NULL via a different
hack. Would probably be better to get rid of MALLOC_ZERO_RETURNS_NULL
entirely.
Bugfix candidate.
alignment gimmick. David Abrahams notes that the standard "long double"
actually requires stricter alignment than "double" on some Tru64 box.
On my box and yours <wink>, it's the same, so no harm done on most
boxes.
distutils looks at the first 3 characters of this string in several
places, so for as long
as they remain "2.2" it confuses the heck out of attempts to build 2.3
stuff using distutils.