This can cause core dumps when Python runs. Python relies on the IEEE-754-
(and C99-) mandated default "non-stop" mode for FP exceptions. This
patch from Ben Laurie disables at least one FP exception on FreeBSD at
Python startup time.
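A hedged sketch of what such a startup-time fix might look like; the exact
exception masked and where the call lives are assumptions, not the verbatim
patch:

    /* Illustrative only: restore the IEEE-754 "non-stop" default by
       masking (here) the FP overflow trap early in startup. */
    #ifdef __FreeBSD__
    #include <ieeefp.h>

    static void
    disable_fpe_traps(void)               /* hypothetical helper name */
    {
        fpsetmask(fpgetmask() & ~FP_X_OFL);
    }
    #endif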
This patch complies with the following request found
near the top of configure.in:
# This is for stuff that absolutely must end up in pyconfig.h.
# Please use pyport.h instead, if possible.
I tested this patch under Cygwin, Win32, and Red
Hat Linux. Python built and ran successfully on
each of these platforms.
PyMem_{Del, DEL} doesn't work yet (compilation problems).
pyport.h: _PyMem_EXTRA is gone.
pymem.h: Repaired comments. PyMem_{Malloc, MALLOC} and
PyMem_{Realloc, REALLOC} now make the same x-platform guarantees when
asking for 0 bytes, and when passing a NULL pointer to the latter.
object.c: PyMem_{Malloc, Realloc} just call their macro versions
now, since the latter take care of the x-platform 0 and NULL stuff
by themselves now.
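A minimal sketch of the cross-platform guarantee described above; the macro
names are the real ones, but these expansions are illustrative rather than
the actual pymem.h text:

    #include <stdlib.h>

    /* Never hand the platform a 0-byte request, and treat a NULL pointer
       to realloc as a fresh allocation (illustrative expansions only). */
    #define PyMem_MALLOC(n)      malloc((n) ? (n) : 1)
    #define PyMem_REALLOC(p, n)  ((p) == NULL ? PyMem_MALLOC(n) \
                                              : realloc((p), (n) ? (n) : 1))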
pypcre.c, grow_stack(): So sue me. On two lines, this called
PyMem_RESIZE to grow a "const" area. It's not legit to realloc a
const area, so the compiler warned given the new expansion of
PyMem_RESIZE. It would have gotten the same warning before if it
had used PyMem_Resize() instead; the older macro version, but not the
function version, silently cast away the constness. IMO that was a wrong
thing to do, and the docs say the macro versions of PyMem_xyz are
deprecated anyway. If somebody else is resizing const areas with the
macro spelling, they'll get a warning when they recompile now too.
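For the curious, the diagnostic in question is of this kind (the fragment is
illustrative, not the pypcre.c code):

    #include <stdlib.h>

    static void
    grow(const unsigned char **pp, size_t n)
    {
        /* warning: passing a pointer-to-const to realloc() discards the
           qualifier -- resizing a const area isn't legit. */
        *pp = realloc(*pp, n);
    }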
Another year in the quest to out-guess random C behavior.
Added macros Py_ADJUST_ERANGE1(X) and Py_ADJUST_ERANGE2(X, Y). The latter
is useful for functions with complex results. Two corrections to errno-
after-libm-call are attempted:
1. If the platform set errno to ERANGE due to underflow, clear errno.
Some unknown subset of libm versions and link options do this. It's
allowed by C89, but I never figured anyone would do it.
2. If the platform did not set errno but overflow occurred, force
errno to ERANGE. C89 required setting errno to ERANGE, but C99
doesn't. Some unknown subset of libm versions and link options do
it the C99 way now.
Bugfix candidate, but hold off until some Linux people actually try it,
with and without -lieee. I'll send a help plea to Python-Dev.
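A hedged usage sketch of the corrections above, as a caller might apply the
new macro after a libm call (the wrapper function is invented for
illustration):

    #include <errno.h>
    #include <math.h>
    #include "Python.h"

    static double
    checked_cosh(double x)             /* hypothetical wrapper */
    {
        double r;
        errno = 0;
        r = cosh(x);
        Py_ADJUST_ERANGE1(r);          /* clear a bogus underflow ERANGE,
                                          force ERANGE on a silent overflow */
        return r;                      /* caller then inspects errno */
    }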
Even when the platform malloc(0) returns a non-NULL pointer (so
MALLOC_ZERO_RETURNS_NULL is correctly undefined), the platform realloc(p, 0)
can return NULL anyway.
Prevent realloc(p, 0) doing free(p) and returning NULL via a different
hack. Would probably be better to get rid of MALLOC_ZERO_RETURNS_NULL
entirely.
Bugfix candidate.
Removed the ancient "#define ANY void".
Bugfix candidate? Hard call. The bug report claims the existence of
this #define creates conflicts with other packages, which is easy to
believe. OTOH, some extension authors may still be relying on its
presence. I'm afraid you can't win on this one.
RISCOS/Makefile:
include structseq and weakrefobject;
changes to keep command line length below 2048
RISCOS/Modules/riscosmodule.c:
typos from the stat structseq patch
Include/pyport.h:
don't re-#define __attribute__(__x) on RISC OS as it is already defined in c library
The patch repaired internal gcc compiler errors on BeOS.
This checkin repairs them in a simpler way, by explicitly casting the
platform INFINITY to double.
C99 no longer requires that errno ever get set after a libm call, and it
looks like glibc is already playing that game. New rules:
+ Never use HUGE_VAL. Use the new Py_HUGE_VAL instead.
+ Never believe errno. If overflow is the only thing you're interested in,
use the new Py_OVERFLOWED(x) macro. If you're interested in any libm
errors, use the new Py_SET_ERANGE_IF_OVERFLOW(x) macro, which attempts
to set errno the way C89 said it worked.
Unfortunately, none of these are reliable, but they work on Windows and I
*expect* under glibc too.
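A hedged sketch of the new rules in action; the surrounding function is
invented for illustration:

    #include <errno.h>
    #include <math.h>
    #include "Python.h"

    static double
    careful_exp(double x)              /* hypothetical wrapper */
    {
        double r = exp(x);
        if (Py_OVERFLOWED(r)) {
            /* overflow is all that matters here; errno is not consulted */
        }
        Py_SET_ERANGE_IF_OVERFLOW(r);  /* or: any libm error matters, so
                                          make errno look the C89 way */
        return r;
    }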
pyport.h: typedef a new Py_intptr_t type.
DELICATE ASSUMPTION: That HAVE_UINTPTR_T implies intptr_t is
available as well as uintptr_t. If that turns out not to be
true, things must get uglier (C99 wants both, so I think it's
an assumption we're *likely* to get away with).
thread_nt.h, PyThread_start_new_thread: MS _beginthread is documented
as returning unsigned long; no idea why uintptr_t was being used.
Others: Always use Py_[u]intptr_t, never [u]intptr_t directly.
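A tiny sketch of the property the new typedef is meant to guarantee, namely
that a void* survives a round trip through the integral type:

    #include "Python.h"

    static int
    round_trips(void *p)
    {
        Py_uintptr_t bits = (Py_uintptr_t)p;
        return (void *)bits == p;      /* expected to be true everywhere */
    }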
It is supposed to be declared in system include files (with a proper
prototype). It should be moved to a platform-specific block if anyone finds
out which broken platforms need it :-)
CHAR_BIT is sometimes #define'd to an unreasonable value (several recent gcc
systems have misdefined it, causing bogus overflows in integer
multiplication). Nuke CHAR_BIT entirely.
Add definitions of INT_MAX and LONG_MAX to pyport.h.
Remove includes of limits.h and conditional definitions of INT_MAX
and LONG_MAX elsewhere.
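A hedged sketch of the kind of definitions meant; the actual pyport.h wording
and values may differ:

    /* Illustrative fallbacks only -- a real <limits.h> is still preferred. */
    #ifndef INT_MAX
    #define INT_MAX  2147483647
    #endif
    #ifndef LONG_MAX
    #define LONG_MAX 0x7FFFFFFFL       /* assumes a 32-bit long */
    #endif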
This closes SourceForge patch #101659 and bug #115323.
It's hard to sort out what the bug was, exactly. So, Big Hammer:
1. Python shouldn't be in the business of #define'ing NULL, period.
2. Users of the Python C API shouldn't be in the business of not including
Python.h, period.
Hence:
1. Removed all #define's of NULL in Python source code (pyport.h and
object.h).
2. Since we're *relying* on stdio.h defining NULL, put an #error in
Python.h after its #include of stdio.h if NULL isn't defined then.
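A short sketch of the guard in point 2; the actual message in Python.h may be
worded differently:

    #include <stdio.h>
    #ifndef NULL
    #   error "Python.h relies on stdio.h defining NULL, but it didn't"
    #endif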
by the following.
typedef in a portable way the Python name for the C9X uintptr_t type.
This latter is the most portable way to spell an integral type to
which a void* can be cast to and back again without losing
information. Parallel checkin hacks configure to check if the
platform/compiler supports the C9X name.
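A hedged sketch of how the typedef could key off the configure result;
HAVE_UINTPTR_T is the symbol discussed above, while the fallback branch is an
assumption (the real code may instead pick a type by pointer size):

    #ifdef HAVE_UINTPTR_T
    typedef uintptr_t     Py_uintptr_t;
    #else
    typedef unsigned long Py_uintptr_t;    /* assumed fallback */
    #endif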
This was a misleading bug -- the true "bug" was that hash(x) gave an error
return when x is an infinity. Fixed that. Added new Py_IS_INFINITY macro to
pyport.h. Rearranged code to reduce growing duplication in hashing of float and
complex numbers, pushing Trent's earlier stab at that to a logical conclusion.
Fixed exceedingly rare bug where hashing of floats could return -1 even if there
wasn't an error (didn't waste time trying to construct a test case, it was simply
obvious from the code that it *could* happen). Improved complex hash so that
hash(complex(x, y)) doesn't systematically equal hash(complex(y, x)) anymore.
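A tiny hedged sketch of how the new macro lets hashing handle infinities
without an error return; the sentinel values are illustrative, not the ones
CPython uses:

    #include "Python.h"

    static long
    hash_inf_aware(double v)               /* hypothetical helper */
    {
        if (Py_IS_INFINITY(v))
            return v > 0 ? 314159 : -314159;   /* fixed values, never -1 */
        return (long)v;                        /* stand-in for the real hash */
    }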
Added a new pyport.h section of function declarations for systems that are
missing those declarations from their system include files.
Start by moving a few pointy-haired ones from their previous locations to the
new section.
(The gethostname() one, for instance, breaks on several systems, because
some define it as (char *, size_t) and some as (char *, int).)
I purposely decided not to include the summary of used #defines like Tim did
in the first section of pyport.h. In my opinion, the number of #defines
likely to be used by this section would make such an overview unwieldy. I
would suggest documenting the non-obvious ones, though.
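A hedged sketch of what one entry in such a section might look like, and of
why the gethostname() case bites; the guard symbol here is invented:

    #include <stddef.h>

    /* Guess the prototype only on platforms known to omit it -- guessing
       wrong is exactly how gethostname() breaks, since systems disagree
       on size_t versus int for the second argument. */
    #ifdef PLATFORM_MISSING_GETHOSTNAME_DECL    /* hypothetical symbol */
    extern int gethostname(char *name, size_t namelen);
    #endif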
handlers "return void", according to ANSI C.
Removed the new Py_RETURN_FROM_SIGNAL_HANDLER macro.
Left RETSIGTYPE in the config stuff, because it's not clear to
me that others aren't relying on it (e.g., extension modules).
good C practice hasn't been available to everything all along.
Added Py_SAFE_DOWNCAST(VALUE, WIDE, NARROW) macro to pyport.h; this
just casts VALUE from type WIDE to type NARROW, but assert-fails if
Py_DEBUG is defined and info is lost due to casting.
Replaced a line in Fredrik's fix to marshal.c to use the new macro.
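A hedged usage sketch of the new macro; the function is invented for
illustration:

    #include "Python.h"

    static int
    shrink(long wide)
    {
        /* With Py_DEBUG defined, this assert-fails if the value does not
           survive the narrowing cast; otherwise it's just a cast. */
        return Py_SAFE_DOWNCAST(wide, long, int);
    }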
#if RETSIGTYPE != void
That isn't C, and MSVC properly refuses to compile it.
Introduced new Py_RETURN_FROM_SIGNAL_HANDLER macro in pyport.h
to expand to the correct thing based on RETSIGTYPE. However,
only void is ANSI! Do we still have platforms that return int?
The Unix config mess appears to #define RETSIGTYPE by magic
without being asked to, so I assume it's "a problem" across
Unices still.
This was a convenient excuse to create the pyport.h file recently
discussed!
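Since the preprocessor cannot compare RETSIGTYPE against a type, a sketch of
the only preprocessor-legal shape such a macro can take is shown below; the
config symbol is invented, and (as a checkin above notes) the macro was later
removed:

    /* Illustrative only: key the expansion off a separate config symbol
       rather than off RETSIGTYPE itself. */
    #ifdef SIGNAL_HANDLERS_RETURN_INT            /* hypothetical symbol */
    #define Py_RETURN_FROM_SIGNAL_HANDLER(VALUE) return (VALUE)
    #else
    #define Py_RETURN_FROM_SIGNAL_HANDLER(VALUE) return
    #endif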
Please use new Py_ARITHMETIC_RIGHT_SHIFT when right-shifting a
signed int and you *need* sign-extension. This is #define'd in
pyport.h, keying off new config symbol SIGNED_RIGHT_SHIFT_ZERO_FILLS.
If you're running on a platform that needs that symbol #define'd,
the std tests never would have worked for you (in particular,
at least test_long would have failed).
The autoconfig stuff got added to Python after my Unix days, so
I don't know how that works. Would someone please look into doing
& testing an auto-config of the SIGNED_RIGHT_SHIFT_ZERO_FILLS
symbol? It needs to be defined if & only if, e.g., (-1) >> 3 is
not -1.
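Usage is presumably along the lines of Py_ARITHMETIC_RIGHT_SHIFT(long, x, 3);
the exact signature is an assumption, so check pyport.h. As for
auto-configuring the symbol, a tiny run-time probe of the kind configure could
compile is sketched below:

    /* Hypothetical configure-time probe: exits 0 when the platform
       sign-extends (the common case) and 1 when it zero-fills, i.e. when
       SIGNED_RIGHT_SHIFT_ZERO_FILLS needs to be #define'd. */
    int main(void)
    {
        return ((-1) >> 3) != -1;
    }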