round included:
* Revert round to its 2.6 behavior (half away from 0); a rounding sketch follows this entry.
* Because round, floor, and ceil always return float again, it's no
longer necessary to have them delegate to __xxx__, so I've ripped
that out of their implementations and the Real ABC. This also helps
in implementing types that work in both 2.6 and 3.0: you return int
from the __xxx__ methods, and let it get enabled by the version
upgrade.
* Make pow(-1, .5) raise a ValueError again.
the complex_pow part), r56649, r56652, r56715, r57296, r57302, r57359, r57361,
r57372, r57738, r57739, r58017, r58039, r58040, and r59390, and new
documentation. The only significant difference is that round(x) returns a float
to preserve backward-compatibility. See http://bugs.python.org/issue1689.
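A minimal sketch of the restored half-away-from-zero rule from the first bullet above; this is an illustration of the behavior, not code from the merge itself:

#include <math.h>

/* Halves move away from zero: 0.5 -> 1.0, -0.5 -> -1.0
   (banker's rounding would give 0.0 for both). */
static double round_half_away_from_zero(double x)
{
    return copysign(floor(fabs(x) + 0.5), x);
}
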
Issue #1580: New free format floating point representation based on "Floating-Point Printer Sample Code", by Robert G. Burger. For example repr(11./5) now returns '2.2' instead of '2.2000000000000002'.
Thanks to noam for the patch! I had to modify doubledigits.c slightly to support X64 and IA64 machines on Windows. I also added the new file to the three project files.
Added PyFloat_GetMax(), PyFloat_GetMin() and PyFloat_GetInfo() to the float API.
Added a dictionary, sys.float_info, with information about the internal floating point type.
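A hedged sketch of the new calls (the wrapper function and the printing are illustrative, and an initialized interpreter is assumed):

#include <Python.h>
#include <stdio.h>

static void show_float_limits(void)
{
    double mx = PyFloat_GetMax();        /* largest finite double, DBL_MAX */
    double mn = PyFloat_GetMin();        /* smallest positive normalized double, DBL_MIN */
    PyObject *info = PyFloat_GetInfo();  /* the same data sys.float_info exposes */

    printf("max=%g min=%g\n", mx, mn);
    Py_XDECREF(info);
}
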
OverflowError while x*x succeeds and produces infinity; apparently
these inconsistencies cannot be fixed across ``all'' platforms and
there's a widespread feeling that therefore ``every'' platform
should keep suffering forevermore. Ah well.
inf) but didn't; added a test to test_float to verify that, and ignored the
ERANGE value for errno in the pow operation to make the new test pass (with
help from Marilyn Davis at the Google Python Sprint -- thanks!).
PyTypeObject structures, I had to make prototypes for the functions, and
move the structure definition ahead of the functions. I'd dearly like a better
way to do this, but changing it would mean a massive set of changes to
the codebase.
There are still some warnings; this pass is purely to get rid of errors first.
In C++, it's an error to pass a string literal to a char* function
without a const_cast(). Rather than require every C++ extension
module to put a cast around string literals, fix the API to state the
const-ness.
I focused on parts of the API where people usually pass literals:
PyArg_ParseTuple() and friends, Py_BuildValue(), PyMethodDef, the type
slots, etc. Predictably, a large set of functions needed to be fixed as
a result of these changes. The most pervasive change was to make the
keyword args list passed to PyArg_ParseTupleAndKeywords() a
const char *kwlist[].
One cast was required as a result of the changes: A type object
mallocs the memory for its tp_doc slot and later frees it.
PyTypeObject says that tp_doc is const char *; but if the type was
created by type_new(), we know it is safe to cast to char *.
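A sketch of an extension function written against the const-corrected API; example_scale and its arguments are made up, and the cast-free kwlist declaration assumes the const-qualified parameter described above:

#include <Python.h>

static PyObject *
example_scale(PyObject *self, PyObject *args, PyObject *kwds)
{
    /* String literals can populate the list directly, with no const_cast
       or (char *) casts needed. */
    static const char *kwlist[] = {"value", "factor", NULL};
    double value, factor = 1.0;

    if (!PyArg_ParseTupleAndKeywords(args, kwds, "d|d", kwlist,
                                     &value, &factor))
        return NULL;
    return PyFloat_FromDouble(value * factor);
}
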
[ 1346144 ] Segfaults from unaligned loads in floatobject.c
by using memcpy and not just blindly casting char* to double*.
Thanks to Rune Holm for the report.
[ 1181301 ] make float packing copy bytes when they can
which hasn't been reviewed, despite numerous threats to check it in
anyway if no one reviews it. Please read the diff on the checkin list,
at least!
The basic idea is to examine the bytes of some 'probe values' to see if
the current platform is an IEEE 754-ish platform, and if so
_PyFloat_{Pack,Unpack}{4,8} just copy bytes around.
The rest is hair for testing, and tests.
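The core trick, sketched in isolation: copy the bytes with memcpy instead of dereferencing a possibly misaligned pointer (read_double is illustrative, not the patched code):

#include <string.h>

static double read_double(const char *p)
{
    double x;
    /* memcpy has no alignment requirement, so this is safe even when p
       is not 8-byte aligned; a straight *(double *)p cast can fault. */
    memcpy(&x, p, sizeof(double));
    return x;
}
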
conversion using the proper magic slot (e.g., __int__()). Also move conversion
code out of PyNumber_*() functions in the C API into the nb_* function.
Applied patch #1109424. Thanks, Walter Dörwald.
When an integer is compared to a float now, the int isn't coerced to float.
This avoids spurious overflow exceptions and insane results. This should
compute correct results, without raising spurious exceptions, in all cases
now -- although I expect that what happens when an int/long is compared to
a NaN is still a platform accident.
Note that we had potential problems here even with "short" ints, on boxes
where sizeof(long)==8. There's #ifdef'ed code here to handle that, but
I can't test it as intended. I tested it by changing the #ifdef to
trigger on my 32-bit box instead.
I suppose this is a bugfix candidate, but I won't backport it. It's
long-winded (for speed) and messy (because the problem is messy). Note
that this also depends on a previous 2.4 patch that introduced
_Py_SwappedOp[] as an extern.
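A small illustration (not CPython code) of why the coercion was lossy on such boxes: a double has only 53 bits of precision, so distinct 64-bit longs can coerce to the same double:

#include <assert.h>

static void precision_loss_demo(void)
{
    /* assumes sizeof(long) == 8 */
    long a = 0x7fffffffffffffffL;   /* 2**63 - 1 */
    long b = a - 1;                 /* 2**63 - 2 */
    /* Both round to 2**63 as doubles, so naive coercion would
       report the two distinct longs as equal to the same float. */
    assert((double)a == (double)b);
}
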
recent gcc on Linux/x86)
[ 899109 ] 1==float('nan')
by implementing rich comparisons for floats.
Seems to make comparisons involving NaNs somewhat less surprising
when the underlying C compiler actually implements C99 semantics.
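For reference, the C99 semantics the rich comparisons now mirror, sketched as plain C (nan() is the C99 way to get a quiet NaN):

#include <math.h>

static void nan_compare_demo(void)
{
    double not_a_number = nan("");
    int eq = (1.0 == not_a_number);  /* 0: every equality/ordering test with a NaN is false */
    int lt = (1.0 <  not_a_number);  /* 0 */
    int ne = (1.0 != not_a_number);  /* 1: the only comparison that is true */
    (void)eq; (void)lt; (void)ne;
}
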
float_pow(): Don't let the platform pow() raise -1.0 to an integer power
anymore; at least glibc gets it wrong in some cases. Note that
math.pow() will continue to deliver wrong (but platform-native) results
in such cases.
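The parity rule such a special case has to implement, sketched; the real float_pow logic covers more than this one helper:

#include <math.h>

/* (-1.0) ** iw for finite, integral iw: +1.0 for even exponents,
   -1.0 for odd ones, with no call into the platform pow(). */
static double minus_one_to_integral(double iw)
{
    return fmod(fabs(iw), 2.0) == 1.0 ? -1.0 : 1.0;
}
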
types. The special handling for these can now be removed from save_newobj().
Add some testing for this.
Also add support for setting the 'fast' flag on the Python Pickler class,
which suppresses use of the memo.
long but the double is too big to fit in a long. Prevent that. This
closes some recent bug or patch on SF, but SF is down now so I can't
say which.
Bugfix candidate.
comments everywhere that bugged me: /* Foo is inlined */ instead of
/* Inline Foo */. Somehow the "is inlined" phrase always confused me
for half a second (thinking, "No it isn't" until I added the missing
"here"). The new phrase is hopefully unambiguous.
The staticforward define was needed to support certain broken C
compilers (notably SCO ODT 3.0, perhaps early AIX as well) that botched
the static keyword when it was used with a forward declaration of a static
initialized structure. Standard C allows the forward declaration with
static, and we've decided to stop catering to broken C compilers. (In
fact, we expect that the compilers are all fixed eight years later.)
I'm leaving staticforward and statichere defined in object.h as
static. This is only for backwards compatibility with C extensions
that might still use it.
XXX I haven't updated the documentation.
Another year in the quest to out-guess random C behavior.
Added macros Py_ADJUST_ERANGE1(X) and Py_ADJUST_ERANGE2(X, Y). The latter
is useful for functions with complex results. Two corrections to errno-
after-libm-call are attempted:
1. If the platform set errno to ERANGE due to underflow, clear errno.
Some unknown subset of libm versions and link options do this. It's
allowed by C89, but I never figured anyone would do it.
2. If the platform did not set errno but overflow occurred, force
errno to ERANGE. C89 required setting errno to ERANGE, but C99
doesn't. Some unknown subset of libm versions and link options do
it the C99 way now.
Bugfix candidate, but hold off until some Linux people actually try it,
with and without -lieee. I'll send a help plea to Python-Dev.
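A sketch of the intended call pattern around a libm call; only the macro name comes from this change, checked_exp is illustrative:

#include <Python.h>
#include <errno.h>
#include <math.h>

static PyObject *checked_exp(double x)
{
    double r;

    errno = 0;
    r = exp(x);
    Py_ADJUST_ERANGE1(r);   /* apply corrections 1 and 2 above */
    if (errno == ERANGE) {
        PyErr_SetString(PyExc_OverflowError, "exp result out of range");
        return NULL;
    }
    return PyFloat_FromDouble(r);
}
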
delivered bizarre results. Check float_divmod for a Py_NotImplemented
return and pass it along (instead of treating Py_NotImplemented as a
2-tuple).
CONVERT_TO_DOUBLE: Added comments; this macro is obscure.
pass the buffer length. Stop using it. It should be deprecated, but too
late in the release cycle to do that now.
New static format_float() does the same thing but requires passing the
buffer length too. Use it instead.
Try to ensure that divmod(-0.0, 1.0) -> (-0.0, +0.0) across platforms.
It always did on Windows, and still does. It didn't on Linux. Alas,
there's no platform-independent way to write a test case for this.
Bugfix candidate.
presence of NaNs. So pass the issue on to the platform libm fabs();
after all, fabs() is a std C function because you can't implement it
correctly in portable C89.
many types were subclassable but had an xxx_dealloc function that
called PyObject_DEL(self) directly instead of deferring to
self->ob_type->tp_free(self). It is permissible to set tp_free in the
type object directly to _PyObject_Del, for non-GC types, or to
_PyObject_GC_Del, for GC types. Still, PyObject_DEL was a tad faster,
so I'm fearing that our pystone rating is going down again. I'm not
sure if doing something like
void xxx_dealloc(PyObject *self)
{
    if (PyXxxCheckExact(self))
        PyObject_DEL(self);
    else
        self->ob_type->tp_free(self);
}
is any faster than always calling the else branch, so I haven't
attempted that -- however those types whose own dealloc is fancier
(int, float, unicode) do use this pattern.
requires that errno ever get set, and it looks like glibc is already
playing that game. New rules:
+ Never use HUGE_VAL. Use the new Py_HUGE_VAL instead.
+ Never believe errno. If overflow is the only thing you're interested in,
use the new Py_OVERFLOWED(x) macro. If you're interested in any libm
errors, use the new Py_SET_ERANGE_IF_OVERFLOW(x) macro, which attempts
to set errno the way C89 said it worked.
Unfortunately, none of these are reliable, but they work on Windows and I
*expect* under glibc too.
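The new rules in use, sketched; the macro names are the ones introduced here and checked_cosh is made up for illustration:

#include <Python.h>
#include <errno.h>
#include <math.h>

static PyObject *checked_cosh(double x)
{
    double r;

    errno = 0;              /* defensive: the overflow check may consult errno */
    r = cosh(x);
    if (Py_OVERFLOWED(r)) { /* overflow test that doesn't rely on errno alone */
        PyErr_SetString(PyExc_OverflowError, "cosh result too large");
        return NULL;
    }
    return PyFloat_FromDouble(r);
}
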
the fiddling is simply because no caller of PyLong_AsDouble ever
checked for failure (so that's fixing old bugs). PyLong_AsDouble is much
faster for big inputs now too, but that's more of a happy consequence
than a design goal.
PEP 238. Changes:
- add a new flag variable Py_DivisionWarningFlag, declared in
pydebug.h, defined in object.c, set in main.c, and used in
{int,long,float,complex}object.c. When this flag is set, the
classic division operator issues a DeprecationWarning message.
- add a new API PyRun_SimpleStringFlags() to match
PyRun_SimpleString(). The main() function calls this so that
commands run with -c can also benefit from -Dnew.
- While I was at it, I changed the usage message in main() somewhat:
alphabetized the options, split it into *four* parts to fit in under
512 bytes (not that I still believe this is necessary -- doc strings
elsewhere are much longer), and, perhaps most visibly, stopped displaying
the full list of options on each command-line error. Instead, the
full list is only displayed when -h is used; otherwise a brief
reminder of -h is displayed. When -h is used, the message is written to
stdout so that you can do `python -h | more'.
Notes:
- I don't want to use the -W option to control whether the classic
division warning is issued or not, because the machinery to decide
whether to display the warning or not is very expensive (it involves
calling into the warnings.py module). You can use -Werror to turn
the warnings into exceptions though.
- The -Dnew option doesn't select future division for all of the
program -- only for the __main__ module. I don't know if I'll ever
change this -- it would require changes to the .pyc file magic
number to do it right, and a more global notion of compiler flags.
- You can usefully combine -Dwarn and -Dnew: this gives the __main__
module new division, and warns about classic division everywhere
else.
cases.
powu: Deleted.
This started with a nonsensical error msg:
>>> x = -1.
>>> import sys
>>> x**(-sys.maxint-1L)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
ValueError: negative number cannot be raised to a fractional power
>>>
The special-casing in float_pow was simply wrong in this case (there's
not even anything peculiar about these inputs), and I don't see any point
to it in *any* case: a decent libm pow should have worst-case error under
1 ULP, so in particular should deliver the exact result whenever the exact
result is representable (else its error is at least 1 ULP). Thus our
special fiddling for integral values "shouldn't" buy anything in accuracy,
and, to the contrary, repeated multiplication is less accurate than a
decent pow when the true result isn't exactly representable. So we just
let pow() do its job here (we may not be able to trust libm x-platform
in exceptional cases, but these are normal cases).
- Do not compile unicodeobject, unicodectype, and unicodedata if Unicode is disabled
- check for Py_USING_UNICODE in all places that use Unicode functions
- disable unicode literals, and the builtin functions
- add the types.StringTypes list
- remove Unicode literals from most tests.
This introduces:
- A new operator // that means floor division (the kind of division
where 1/2 is 0).
- The "future division" statement ("from __future__ import division)
which changes the meaning of the / operator to implement "true
division" (where 1/2 is 0.5).
- New overloadable operators __truediv__ and __floordiv__.
- New slots in the PyNumberMethods struct for true and floor division,
new abstract APIs for them, new opcodes, and so on (a sketch of the
abstract APIs follows this entry).
I emphasize that without the future division statement, the semantics
of / will remain unchanged until Python 3.0.
Not yet implemented are warnings (default off) when / is used with int
or long arguments.
This has been on display since 7/31 as SF patch #443474.
Flames to /dev/null.
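A hedged sketch of the new abstract APIs at the C level; division_demo is made up, and the expected results follow the semantics described above:

#include <Python.h>

static void division_demo(void)
{
    PyObject *one = PyLong_FromLong(1);
    PyObject *two = PyLong_FromLong(2);

    PyObject *floor_q = PyNumber_FloorDivide(one, two);  /* 0   -- the new // operator */
    PyObject *true_q  = PyNumber_TrueDivide(one, two);   /* 0.5 -- / under future division */

    Py_XDECREF(floor_q);
    Py_XDECREF(true_q);
    Py_DECREF(one);
    Py_DECREF(two);
}
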
Store floats and doubles to full precision in marshal.
Test that floats read from .pyc/.pyo closely match those read from .py.
Declare PyFloat_AsString() in floatobject header file.
Add new PyFloat_AsReprString() API function.
Document the functions declared in floatobject.h.
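A sketch of the new call; the 100-byte buffer size is an assumption about what the documented contract requires, and repr_demo itself is illustrative:

#include <Python.h>

static void repr_demo(PyObject *float_obj)
{
    char buf[100];   /* assumed large enough per the new documentation */

    /* Fills buf with a repr-style, full-precision rendering of the float. */
    PyFloat_AsReprString(buf, (PyFloatObject *)float_obj);
}
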
raise ValueError. Checked in the patch as far as it went, but also changed
all of ints, longs and floats to raise ZeroDivisionError instead when raising
0 to a negative number. This is what 754-inspired stds require, as the "true
result" is an infinity obtained from finite operands, i.e. it's a singularity.
Also changed float pow to not be so timid about using its square-and-multiply
algorithm. Note that what math.pow does is unrelated to what builtin pow
does, and will still vary by platform.
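A sketch of the 0 ** negative check described above (illustrative, not the exact float_pow code):

#include <Python.h>
#include <math.h>

static PyObject *pow_with_zero_check(double iv, double iw)
{
    /* A singularity: finite operands whose "true result" is infinite. */
    if (iv == 0.0 && iw < 0.0) {
        PyErr_SetString(PyExc_ZeroDivisionError,
                        "0.0 cannot be raised to a negative power");
        return NULL;
    }
    return PyFloat_FromDouble(pow(iv, iw));
}
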
Add definitions of INT_MAX and LONG_MAX to pyport.h.
Remove includes of limits.h and conditional definitions of INT_MAX
and LONG_MAX elsewhere.
This closes SourceForge patch #101659 and bug #115323.
I fixed the specific complaint but left the (many) large issues untouched.
See the (very long) bug report discussion for why:
http://sourceforge.net/bugs/?func=detailbug&group_id=5470&bug_id=110624
Note that while I left the interface to the undocumented public API function
PyFloat_FromString alone, its 2nd argument is useless. From a comment block
in the code:
RED_FLAG 22-Sep-2000 tim
PyFloat_FromString's pend argument is braindead. Prior to this RED_FLAG,
1. If v was a regular string, *pend was set to point to its terminating
null byte. That's useless (the caller can find that without any
help from this function!).
2. If v was a Unicode string, or an object convertible to a character
buffer, *pend was set to point into stack trash (the auto temp
vector holding the character buffer). That was downright dangerous.
Since we can't change the interface of a public API function, pend is
still supported but now *officially* useless: if pend is not NULL,
*pend is set to NULL.
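In practice callers should simply pass NULL for pend; a sketch under the rules above (parse_float is illustrative):

#include <Python.h>

static PyObject *parse_float(PyObject *str_obj)
{
    char *pend = NULL;

    /* pend is officially useless now: if non-NULL, it just receives NULL. */
    PyObject *f = PyFloat_FromString(str_obj, &pend);
    return f;   /* NULL with an exception set if str_obj isn't a valid float */
}
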
scope. Previously, s_buffer[] was defined inside the
PyUnicode_Check() scope, but referred to in the outer scope via
assignment to s. This quiets an Insure portability warning.
This was a misleading bug -- the true "bug" was that hash(x) gave an error
return when x is an infinity. Fixed that. Added new Py_IS_INFINITY macro to
pyport.h. Rearranged code to reduce growing duplication in hashing of float and
complex numbers, pushing Trent's earlier stab at that to a logical conclusion.
Fixed exceedingly rare bug where hashing of floats could return -1 even if there
wasn't an error (didn't waste time trying to construct a test case, it was simply
obvious from the code that it *could* happen). Improved complex hash so that
hash(complex(x, y)) doesn't systematically equal hash(complex(y, x)) anymore.
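A sketch of the new macro in use; the constants and the fallthrough are placeholders, not the real hashing code:

#include <Python.h>

static long hash_double_sketch(double v)
{
    if (Py_IS_INFINITY(v))
        return v > 0 ? 314159 : -314159;   /* any fixed values other than -1 would do */
    /* ... normal float hashing continues here ... */
    return (long)v;
}
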