No more memory is allocated than needed (it used to allocate 80 bytes of
digit space no matter how small the long input). This also runs faster, at
least on 32-bit boxes.
The new PyLong_{As,From}{Unsigned,}LongLong tests share most of their logic,
so the bulk of the code is in the new #include file testcapi_long.h, which
generates different code depending on how macros are set. This sucks, but I
couldn't think of anything that sucked less.
Possible UNIX headache: if we still maintain dependencies by hand, someone who
knows what they're doing should teach whatever needs it that
_testcapimodule.c includes testcapi_long.h.
Unfortunately, the std-mode bBhHIL codes don't do any range-checking; if
and when some of those get fixed, remove their letters from the
IntTester.BUGGY_RANGE_CHECK string. In the meantime, a message saying that
range-tests are getting skipped is printed to stdout whenever one is
skipped.
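As a hedged sketch of that skip pattern (the helper and the module-level
constant below are illustrative stand-ins, not test_struct.py's actual code):

    import struct

    BUGGY_RANGE_CHECK = "bBhHIL"   # std codes that don't range-check yet

    def check_out_of_range(code, value):
        """Expect a range error from std-mode pack(), unless the code is buggy."""
        if code in BUGGY_RANGE_CHECK:
            print("Skipping buggy range check for std format code", code)
            return
        try:
            struct.pack(">" + code, value)
        except (struct.error, OverflowError):
            pass                      # the expected complaint
        else:
            raise AssertionError("expected a range error for %r" % code)

    check_out_of_range("q", 1 << 64)  # 'q' range-checks, so this must complain
    check_out_of_range("h", 1 << 64)  # 'h' is listed as buggy, so it's skipped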
Replaced PyLong_{As,From}{Unsigned,}LongLong guts with calls
to _PyLong_{As,From}ByteArray.
_testcapimodule.c:
Added strong tests of PyLong_{As,From}{Unsigned,}LongLong.
Fixes SF bug #432552 PyLong_AsLongLong() problems.
Possible bugfix candidate, but the fix relies on code added to longobject
to support the new q/Q structmodule format codes.
functions. I intend to replace their guts with calls to the new
_PyLong_{As,From}ByteArray() functions, but AFAICT there are no tests for
them at all now; I also suspect PyLong_AsLongLong() isn't catching all
overflow cases, but without a std test to demonstrate that, why should you
believe me <wink>.
Also added a raiseTestError() utility function.
Add a -F option similar to "cvs commit -F <file>".
Add a -t option to allow specifying the prefix to the directory into which
the docs should be unpacked (useful when I start trying out new styles for
the presentation).
This completes the q/Q project.
longobject.c _PyLong_AsByteArray: The original code had a gross bug:
the most-significant Python digit doesn't necessarily have SHIFT
significant bits, and you really need to count how many copies of the sign
bit it has, or else spurious overflow errors result.
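A small Python illustration of why that matters, assuming the 15-bit digit
size longobject.c used at the time (the helper below is invented for this
note, not code from longobject.c):

    SHIFT = 15   # bits per long digit (assumed, matching longobject.c then)

    def top_digit_bits(value):
        """Return (top digit, its significant bit count) for a positive long."""
        assert value > 0
        ndigits = (value.bit_length() + SHIFT - 1) // SHIFT
        top = value >> (SHIFT * (ndigits - 1))
        return top, top.bit_length()

    print(top_digit_bits(2 ** 16))      # top digit 2: only 2 significant bits
    print(top_digit_bits(2 ** 30 - 1))  # top digit 0x7fff: all 15 bits in use

For a positive value, the unused high bits of the top digit are all copies of
the (zero) sign bit, which is exactly what the corrected code has to count.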
test_struct.py: This now does exhaustive std q/Q testing at, and on both
sides of, all relevant power-of-2 boundaries, both positive and negative.
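The shape of that boundary testing, as a hedged sketch rather than the real
IntTester code:

    import struct

    def check_q_boundaries():
        # Probe just below, at, and just above the signed 8-byte boundary.
        for delta in (-2, -1, 0, 1, 2):
            for value in (2 ** 63 + delta, -(2 ** 63 + delta)):
                try:
                    packed = struct.pack(">q", value)
                except (struct.error, OverflowError):
                    # Only out-of-range values may be rejected.
                    assert not (-2 ** 63 <= value < 2 ** 63), value
                else:
                    assert struct.unpack(">q", packed)[0] == value

    check_q_boundaries()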
NEWS: Added brief dict news while I was at it.
_PyLong_FromByteArray
_PyLong_AsByteArray
Untested and probably buggy -- they compile OK, but nothing calls them
yet. Will soon be called by the struct module, to implement x-platform
'q' and 'Q'.
If other people have uses for them, we could move them into the public API.
See longobject.h for usage details.
functions -- these are not available on traditional Mac OS platforms.
Corrected the version annotations for the spawn*() functions and related
constants; these were added in Python 1.6, not 1.5.2.
The new 'q' and 'Q' format codes work only in native mode for now, and only
when config #defines HAVE_LONG_LONG. Standard mode
will eventually treat them as 8-byte ints across all platforms, but that
likely requires a new set of routines in longobject.c first (while
sizeof(long) >= 4 is guaranteed by C, there's nothing in C we can rely
on x-platform to hold 8 bytes of int, so we'll have to roll our own;
I'm thinking of a simple pair of conversion functions, Python long
to/from sized vector of unsigned bytes; that may be useful for GMP
conversions too; std q/Q would call them with size fixed at 8).
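A minimal Python sketch of that proposed pair, just to pin down the idea (the
names here are invented; the C routines that grew out of this plan became
_PyLong_{As,From}ByteArray):

    def long_to_bytes(value, size):
        """Two's-complement, little-endian, fixed size; raise on overflow."""
        if not -(1 << (8 * size - 1)) <= value < (1 << (8 * size - 1)):
            raise OverflowError("value doesn't fit in %d bytes" % size)
        value &= (1 << (8 * size)) - 1            # two's-complement reduction
        return bytes((value >> (8 * i)) & 0xFF for i in range(size))

    def long_from_bytes(data):
        value = sum(b << (8 * i) for i, b in enumerate(data))
        if data and data[-1] & 0x80:              # sign bit set: negative
            value -= 1 << (8 * len(data))
        return value

    assert long_from_bytes(long_to_bytes(-1, 8)) == -1
    assert long_from_bytes(long_to_bytes(2 ** 63 - 1, 8)) == 2 ** 63 - 1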
test_struct.py: In addition to adding some native-mode 'q' and 'Q' tests,
got rid of unused code, and repaired a non-portable assumption about
native sizeof(short) (it isn't 2 on some Cray boxes).
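One portable alternative, sketched here rather than quoted from the test: ask
the struct module for the native sizes instead of hard-coding them.

    import struct

    # Native sizes come from the platform's C compiler, not from the standard.
    print(struct.calcsize("h"))   # native sizeof(short); usually 2, not guaranteed
    print(struct.calcsize("l"))   # native sizeof(long); 4 or 8 depending on platform
    print(struct.calcsize("q"))   # native sizeof(long long), where supported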
libstruct.tex: In addition to adding a bit of 'q'/'Q' docs (more needed
later), removed an erroneous footnote about 'I' behavior.
Patch from Michael Hudson.
format_exception_only() blew up when trying to report a SyntaxError
from a string input (line is None in this case, but it assumed a string).
Bugfix candidate.
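A hedged sketch of the defensive shape of the fix; it mirrors the structure of
traceback.format_exception_only without reproducing its code, and checks for
None before treating the source line as a string:

    def format_syntax_error(exc):
        lines = ["  File %r, line %s\n" % (exc.filename, exc.lineno)]
        if exc.text is not None:          # line can be None for string input
            lines.append("    %s\n" % exc.text.strip())
        lines.append("%s: %s\n" % (type(exc).__name__, exc.msg))
        return lines

    # A SyntaxError whose source line is unavailable (text is None):
    err = SyntaxError("invalid syntax", ("<string>", 1, 1, None))
    print("".join(format_syntax_error(err)))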
Armin Rigo pointed out that the way the line-# table got built didn't work
for lines generating more than 255 bytes of bytecode. Fixed as he
suggested, plus corresponding changes to pyassem.py, plus added some
long overdue docs about this subtle table to compile.c.
Bugfix candidate (line numbers may be off in tracebacks under -O).
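For reference, a sketch of the rule the fix enforces in the line-number table
(co_lnotab): entries are (bytecode delta, line delta) byte pairs, so any delta
over 255 has to be split across several pairs. The helper below is
illustrative, not the pyassem.py/compile.c code.

    def lnotab_pairs(byte_delta, line_delta):
        """Split a (bytecode, line) increment into byte-sized table entries."""
        pairs = []
        while byte_delta > 255:
            pairs.append((255, 0))
            byte_delta -= 255
        while line_delta > 255:
            pairs.append((byte_delta, 255))
            byte_delta = 0
            line_delta -= 255
        pairs.append((byte_delta, line_delta))
        return pairs

    print(lnotab_pairs(300, 1))   # [(255, 0), (45, 1)]
    print(lnotab_pairs(10, 2))    # [(10, 2)]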
case of objects with equal types which support tp_compare. Give
type objects a tp_compare function.
Also add c<0 tests before a few PyErr_Occurred tests.
Explain what people need to understand about binary & decimal fp, so that
representation weirdness is documented somewhere. This makes it easier to
respond to "bug" reports caused by user confusion & ignorance of the issues.
This closes SF patch #426208.
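The flavor of the confusion being documented, in a few lines of Python; the
decimal module is used here purely to display the stored binary value
exactly, not as part of the docs being described.

    from decimal import Decimal

    print(0.1 + 0.2 == 0.3)    # False: neither side is the decimal it looks like
    print(Decimal(0.1))        # the binary value actually stored for 0.1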
Add information about setting up the dispatch table, and update the OldProfile and
HotProfile classes to the current implementations, showing the adjusted
construction for the dispatch table.
call_trace() now takes an additional parameter that should be used to cache
an interned version of the event string passed to the profile/trace function.
call_trace() will create interned strings and cache them in the storage
specified by this additional parameter, avoiding a lot of string
object creation at runtime when using the profiling or tracing
functions.
All call sites are modified to pass the additional parameter, and four
static PyObject* variables are allocated to cache the interned string
objects.
This closes SF patch #431257.
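The change itself is in C, but the caching idea reads roughly like this Python
analogue (the names are invented for the sketch; the real cache is four static
PyObject* slots filled by call_trace()):

    import sys

    _event_cache = {}          # stands in for the static PyObject* slots

    def interned_event(event):
        cached = _event_cache.get(event)
        if cached is None:
            cached = sys.intern(event)   # build the interned string once
            _event_cache[event] = cached
        return cached

    assert interned_event("call") is interned_event("call")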
Ensure that all the default timers are called as plain functions, not through
an expensive method wrapper around a variety of different functions.
Aggressively avoid dictionary lookups.
Modify the dispatch scheme (Profile.trace_dispatch_*(), where * is not
'call', 'exception' or 'return') so that the callables dispatched to
are simple functions and not bound methods -- this reduces the number
of layers of Python call machinery that gets touched.
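A minimal sketch of that dispatch shape, with invented handler names: the
table maps event strings to plain functions, and the dispatcher passes the
profiler instance explicitly, so no bound-method objects are created per
event.

    def handle_call(prof, frame, t):       # a plain function, not a method
        prof.ncalls += 1

    def handle_return(prof, frame, t):
        pass

    class TinyProfiler:
        ncalls = 0
        # Event names map to plain functions; looking them up never creates
        # a bound-method object.
        dispatch = {"call": handle_call, "return": handle_return}

        def dispatcher(self, frame, event, arg):
            handler = self.dispatch.get(event)
            if handler is not None:
                handler(self, frame, 0)    # the instance is passed explicitly

    p = TinyProfiler()
    p.dispatcher(None, "call", None)
    print(p.ncalls)                        # 1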
Remove a couple of duplicate imports from the "if __name__ == ..."
section.
This closes SF patch #430948.