In C++, it's an error to pass a string literal to a char* function
without a const_cast(). Rather than require every C++ extension
module to put a cast around string literals, fix the API to state the
const-ness.
I focused on parts of the API where people usually pass literals:
PyArg_ParseTuple() and friends, Py_BuildValue(), PyMethodDef, the type
slots, etc. Predictably, there were a large set of functions that
needed to be fixed as a result of these changes. The most pervasive
change was to make the keyword args list passed to
PyArg_ParseTupleAndKeywords() a const char *kwlist[].
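A hedged sketch of what an extension function looks like once the keyword
list can be declared const (the function and its "name" keyword are made up,
and this assumes the const-qualified signature described above):

    static PyObject *
    greet(PyObject *self, PyObject *args, PyObject *kwds)
    {
        static const char *kwlist[] = {"name", NULL};
        char *name;

        /* No char* casts are needed around the literals in kwlist or
           the format string any more. */
        if (!PyArg_ParseTupleAndKeywords(args, kwds, "s", kwlist, &name))
            return NULL;
        return PyString_FromString(name);
    }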
One cast was required as a result of the changes: A type object
mallocs the memory for its tp_doc slot and later frees it.
PyTypeObject says that tp_doc is const char *; but if the type was
created by type_new(), we know it is safe to cast to char *.
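The cast in question looks roughly like this (a sketch; "type" is the heap
PyTypeObject being torn down, and the free call must match however the doc
string was originally allocated):

    /* tp_doc is declared const char *, but type_new() malloc'ed it,
       so casting the const away in order to free it is safe here. */
    PyObject_FREE((char *)type->tp_doc);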
In addition, long_pow() skipped a necessary (albeit extremely unlikely
to trigger) error check when converting an int modulus to long.
Alas, I was unable to write a test case that crashed due to either
cause.
Bugfix candidate.
conversion using the proper magic slot (e.g., __int__()). Also move conversion
code out of the PyNumber_*() functions in the C API into the nb_* functions.
Applied patch #1109424. Thanks Walter Dörwald.
This checkin is adapted from part 2 (of 3) of Trevor Perrin's patch set.
BACKWARD INCOMPATIBILITY: SHIFT must now be divisible by 5. AFAIK,
nobody will care. long_pow() could be complicated to worm around that,
if necessary.
long_pow():
- BUGFIX: This leaked the base and power when the power was negative
(and so the computation delegated to float pow).
- Instead of doing right-to-left exponentiation, do left-to-right. This
is more efficient for small bases, which is the common case.
- In addition, if the exponent is large (more than FIVEARY_CUTOFF
digits), precompute [a**i % c for i in range(32)], and go left to
right 5 bits at a time.
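The 5-ary windowing idea is easier to see on plain C unsigned longs than on
multi-digit longs; a rough illustration only (the names are made up, overflow
of the intermediate C products is ignored, and c must be nonzero):

    #include <limits.h>

    static unsigned long
    pow5ary(unsigned long a, unsigned long b, unsigned long c)
    {
        unsigned long table[32], result = 1 % c;
        int i, j;

        /* Precompute a**i % c for i in range(32). */
        table[0] = 1 % c;
        for (i = 1; i < 32; ++i)
            table[i] = (table[i - 1] * a) % c;

        /* Consume the exponent 5 bits at a time, most significant group
           first: square 5 times, then multiply by the table entry the
           group selects. */
        for (j = (int)(sizeof(b) * CHAR_BIT / 5) * 5; j >= 0; j -= 5) {
            unsigned long group = (b >> j) & 0x1f;
            for (i = 0; i < 5; ++i)
                result = (result * result) % c;
            result = (result * table[group]) % c;
        }
        return result;
    }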
l_divmod():
- The signature changed so that callers who don't want the quotient,
or don't want the remainder, can pass NULL in the slot they don't
want. This saves them from having to declare a variable for unwanted
stuff, and remembering to decref it.
long_mod(), long_div(), long_classic_div():
- Adjust to new l_divmod() signature, and simplified as a result.
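A rough sketch of the new convention, assuming the internal l_divmod()
signature described above (a and b are the PyLongObject * operands; error
handling abbreviated):

    /* A caller that only needs the remainder passes NULL for the
       quotient slot: no throwaway variable, no extra decref. */
    PyLongObject *rem = NULL;
    if (l_divmod(a, b, NULL, &rem) < 0)
        return NULL;               /* error already set */
    /* ... use rem ... */
    Py_DECREF(rem);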
This checkin is adapted from part 1 (of 3) of Trevor Perrin's patch set.
x_mul()
- sped a little by optimizing the C
- sped a lot (~2X) if it's doing a square; note that long_pow() squares
often
k_mul()
- more cache-friendly now if it's doing a square
KARATSUBA_CUTOFF
- boosted; gradeschool mult is quicker now, and it may have been too low
for many platforms anyway
KARATSUBA_SQUARE_CUTOFF
- new
- since x_mul is a lot faster at squaring now, the point at which
Karatsuba pays for squaring is much higher than for general mult
Some version of gcc in the "RTEMS port running on the Coldfire (m5200)
processor" generates bad code for a loop in long_from_binary_base(),
comparing the wrong half of an int to a short. The patch changes the
decl of the short temp to be an int temp instead. This "simplifies"
the code enough that gcc no longer blows it.
New functions:
unsigned long PyInt_AsUnsignedLongMask(PyObject *);
unsigned PY_LONG_LONG PyInt_AsUnsignedLongLongMask(PyObject *);
unsigned long PyLong_AsUnsignedLongMask(PyObject *);
unsigned PY_LONG_LONG PyLong_AsUnsignedLongLongMask(PyObject *);
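For example (a hedged sketch; the point is that the Mask variants never
raise OverflowError, they simply reduce the value modulo 2**(8*sizeof(type))):

    PyObject *v = PyLong_FromString("-1", NULL, 10);
    unsigned long bits = PyLong_AsUnsignedLongMask(v);   /* == ULONG_MAX */
    Py_DECREF(v);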
New and changed format codes:
b unsigned char 0..UCHAR_MAX
B unsigned char none **
h unsigned short 0..USHRT_MAX
H unsigned short none **
i int INT_MIN..INT_MAX
I * unsigned int 0..UINT_MAX
l long LONG_MIN..LONG_MAX
k * unsigned long none
L long long LLONG_MIN..LLONG_MAX
K * unsigned long long none
Notes:
* New format codes.
** Changed from previous "range-and-a-half" to "none"; the
range-and-a-half checking wasn't particularly useful.
New test test_getargs2.py, to verify all this.
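A small sketch of the starred codes in use (function context omitted):

    unsigned int n;
    unsigned PY_LONG_LONG big;

    /* 'I' -> unsigned int, 'K' -> unsigned long long; per the table
       above, neither does any range checking. */
    if (!PyArg_ParseTuple(args, "IK", &n, &big))
        return NULL;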
wasn't used outside the assert (and hence caused a compiler warning
about an unused variable in NDEBUG mode). The assert wasn't very
useful any more.
_PyLong_NumBits(): moved the calculation of ndigits after asserting
that v != NULL.
Assorted code cleanups; e.g., sizeof(char) is 1 by definition, so there's
no need to do things like multiply by sizeof(char) in hairy malloc
arguments. Fixed an undetected-overflow bug in readline_file().
longobject.c: Fixed a really stupid bug in the new _PyLong_NumBits.
pickle.py: Fixed stupid bug in save_long(): When proto is 2, it
wrote LONG1 or LONG4, but forgot to return then -- it went on to
append the proto 1 LONG opcode too.
Fixed equally stupid cancelling bugs in load_long1() and
load_long4(): they *returned* the unpickled long instead of pushing
it on the stack. The return values were ignored. Tests passed
before only because save_long() pickled the long twice.
Fixed bugs in encode_long().
Noted that decode_long() is quadratic-time despite our hopes,
because long(string, 16) is still quadratic-time in len(string).
It's hex() that's linear-time. I don't know a way to make decode_long()
linear-time in Python, short of maybe transforming the 256's-complement
bytes into marshal's funky internal format, and letting marshal decode
that. It would be more valuable to make long(string, 16) linear time.
pickletester.py: Added a global "protocols" vector so tests can try
all the protocols in a sane way. Changed test_ints() and test_unicode()
to do so. Added a new test_long(), but the tail end of it is disabled
because it "takes forever" under pickle.py (but runs very quickly under
cPickle: cPickle proto 2 for longs is linear-time).
needs of pickling longs. Backed off to a definition that's much easier
to understand. The pickler will have to work a little harder, but other
uses are more likely to be correct <0.5 wink>.
_PyLong_Sign(): New teensy function to characterize a long, as to <0, ==0,
or >0.
types. The special handling for these can now be removed from save_newobj().
Add some testing for this.
Also add support for setting the 'fast' flag on the Python Pickler class,
which suppresses use of the memo.
start for the C implementation of new pickle LONG1 and LONG4 opcodes (the
linear-time way to pickle a long is to call _PyLong_AsByteArray, but
the caller has no idea how big an array to allocate, and correct
calculation is a bit subtle).
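A hedged sketch of the intended pairing, roughly what the C pickling code
has to do (obj is the long being pickled; error handling abbreviated):

    size_t nbits = _PyLong_NumBits(obj);
    size_t nbytes = nbits / 8 + 1;        /* +1 leaves room for the sign bit */
    unsigned char *buf = (unsigned char *)PyMem_Malloc(nbytes);
    if (buf == NULL)
        return PyErr_NoMemory();
    if (_PyLong_AsByteArray((PyLongObject *)obj, buf, nbytes,
                            1 /* little-endian */, 1 /* signed */) < 0) {
        PyMem_Free(buf);
        return NULL;
    }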
globals, _Py_Ticker and _Py_CheckInterval. This also implements Jeremy's
shortcut in Py_AddPendingCall that zeroes out _Py_Ticker. This allows the
test in the main loop to only test a single value.
The gory details are at
http://python.org/sf/602191
SHIFT and MASK, and widen digit. One problem is that code of the form
digit << small_integer
implicitly assumes that the result fits in an int or unsigned int
(platform-dependent, but "int sized" in any case), since digit is
promoted "just" to int or unsigned via the usual integer promotions.
But if digit is typedef'ed as unsigned int, this loses information.
The cure for this is just to cast digit to twodigits first.
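In code the cure looks like this (digit, twodigits and SHIFT as in
longintrepr.h; a is a PyLongObject *):

    /* Without the cast, the shift would be performed at (unsigned) int
       width and the high bits of the product would be lost. */
    twodigits t = (twodigits)a->ob_digit[i] << SHIFT;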
rigorous instead of hoping for testing not to turn up counterexamples.
Call me heretical, but despite that I'm wholly confident in the proof,
and have done it two different ways now, I still put more faith in
testing ...
ah*bh and al*bl. This is much easier than explaining why that's true
for (ah+al)*(bh+bl), and follows directly from the simple part of the
(ah+al)*(bh+bl) explanation.
space is no longer needed, so removed the code. It was only possible when
a degenerate (ah->ob_size == 0) split happened, but after that fix went
in I added k_lopsided_mul(), which saves the body of k_mul() from seeing
a degenerate split. So this removes code, and adds a honking long comment
block explaining why spilling out of bounds isn't possible anymore. Note:
if we end up spilling out of bounds anyway <wink>, an assert in v_iadd()
is certain to trigger.
k_mul() when inputs have vastly different sizes, and a little more
efficient when they're close to a factor of 2 out of whack.
I consider this done now, although I'll set up some more correctness
tests to run overnight.
cases, overflow the allocated result object by 1 bit. In such cases,
it would have been brought back into range if we subtracted al*bl and
ah*bh from it first, but I don't want to do that because it hurts cache
behavior. Instead we just ignore the excess bit when it appears -- in
effect, this is forcing unsigned mod BASE**(asize + bsize) arithmetic
in a case where that doesn't happen all by itself.
algorithm. MSVC 6 wasn't impressed <wink>.
Something odd: the x_mul algorithm appears to get substantially worse
than quadratic time as the inputs grow larger:
bits in each input x_mul time k_mul time
------------------ ---------- ----------
15360 0.01 0.00
30720 0.04 0.01
61440 0.16 0.04
122880 0.64 0.14
245760 2.56 0.40
491520 10.76 1.23
983040 71.28 3.69
1966080 459.31 11.07
That is, x_mul is perfectly quadratic-time until a little burp at
2.56->10.76, and after that goes to hell in a hurry. Under Karatsuba,
doubling the input size "should take" 3 times longer instead of 4, and
that remains the case throughout this range. I conclude that my "be nice
to the cache" reworkings of k_mul() are paying.
correct now, so added some final comments, did some cleanup, and enabled
it for all long-int multiplies. The KARAT envar no longer matters,
although I left some #if 0'ed code in there for my own use (temporary).
k_mul() is still much slower than x_mul() if the inputs have very
different sizes, and that still needs to be addressed.
(it's possible, but should be harmless -- this requires more thought,
and allocating enough space in advance to prevent it requires exactly
as much thought, to know exactly how much that is -- the end result
certainly fits in the allocated space -- hmm, but that's really all
the thought it needs! borrows/carries out of the high digits really
are harmless).
k_mul(): This didn't allocate enough result space when one input had
more than twice as many bits as the other. This was partly hidden by
the fact that x_mul() didn't normalize its result.
The Karatsuba recurrence is pretty much hosed if the inputs aren't
roughly the same size. If one has at least twice as many bits as the
other, we get a degenerate case where the "high half" of the smaller
input is 0. Added a special case for that, for speed, but despite that
it helped, this can still be much slower than the "grade school" method.
It seems to take a really wild imbalance to trigger that; e.g., a
2**22-bit input times a 1000-bit input on my box runs about twice as slowly
under k_mul as under x_mul. This still needs to be addressed.
I'm also not sure that allocating a->ob_size + b->ob_size digits is
enough, given that this is computing k = (ah+al)*(bh+bl) instead of
k = (ah-al)*(bl-bh); i.e., it's certainly enough for the final result,
but it's vaguely possible that adding in the "artificially" large k may
overflow that temporarily. If so, an assert will trigger in the debug
build, but we'll probably compute the right result anyway(!).
addition and subtraction. Reworked the tail end of k_mul() to use them.
This saves oodles of one-shot longobject allocations (this is a triply-
recursive routine, so saving one allocation in the body saves 3**n
allocations at depth n; we actually save 2 allocations in the body).
SF 560379: Karatsuba multiplication.
Lots of things were changed from that. This needs a lot more testing,
for correctness and speed, the latter especially when bit lengths are
unbalanced. For now, the Karatsuba code gets invoked if and only if
envar KARAT exists.
The staticforward define was needed to support certain broken C
compilers (notably SCO ODT 3.0, perhaps early AIX as well) that botched the
static keyword when it was used with a forward declaration of a static
initialized structure. Standard C allows the forward declaration with
static, and we've decided to stop catering to broken C compilers. (In
fact, we expect that the compilers are all fixed eight years later.)
I'm leaving staticforward and statichere defined in object.h as
static. This is only for backwards compatibility with C extensions
that might still use it.
XXX I haven't updated the documentation.
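For reference, the pattern the macros supported, written directly in
standard C (a sketch; Foo_Type and foo_new() are hypothetical):

    static PyTypeObject Foo_Type;          /* forward declaration: legal C */

    static PyObject *
    foo_new(void)
    {
        return PyObject_New(PyObject, &Foo_Type);
    }

    static PyTypeObject Foo_Type = {
        PyObject_HEAD_INIT(NULL)
        /* remaining fields left zero */
    };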
many types were subclassable but had a xxx_dealloc function that
called PyObject_DEL(self) directly instead of deferring to
self->ob_type->tp_free(self). It is permissible to set tp_free in the
type object directly to _PyObject_Del, for non-GC types, or to
_PyObject_GC_Del, for GC types. Still, PyObject_DEL was a tad faster,
so I'm fearing that our pystone rating is going down again. I'm not
sure if doing something like
    void xxx_dealloc(PyObject *self)
    {
        if (PyXxxCheckExact(self))
            PyObject_DEL(self);
        else
            self->ob_type->tp_free(self);
    }
is any faster than always calling the else branch, so I haven't
attempted that -- however those types whose own dealloc is fancier
(int, float, unicode) do use this pattern.
Generalize PyLong_AsLongLong to accept int arguments too. The real point
is so that PyArg_ParseTuple's 'L' code does too. That code was
undocumented (AFAICT), so documented it.
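A tiny sketch of the (now documented) 'L' code:

    PY_LONG_LONG value;

    /* 'L' accepts a Python int or a long and yields a C long long. */
    if (!PyArg_ParseTuple(args, "L", &value))
        return NULL;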
Both int and long multiplication are changed to be more careful in
their assumptions about when one of the arguments is a sequence: the
assumption that at least one of the arguments must be an int (or long,
respectively) is still held, but the assumption that these don't smell
like sequences is no longer true: a subtype of int or long may well
have a sequence-repeat thingie!
Given an immutable type M, and an instance I of a subclass of M, the
constructor call M(I) was just returning I as-is; but it should return a
new instance of M. This fixes it for M in {int, long}. Strings, floats
and tuples remain to be done.
Added new macros PyInt_CheckExact and PyLong_CheckExact, to more easily
distinguish between "is" and "is a" (i.e., only an int passes
PyInt_CheckExact, while any subclass of int passes PyInt_Check).
Added private API function _PyLong_Copy.
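A rough sketch of the distinction as it applies to the M(I) fix above
(ob is any PyObject *):

    if (PyInt_CheckExact(ob)) {            /* "is" an int */
        Py_INCREF(ob);
        return ob;
    }
    if (PyInt_Check(ob)) {                 /* "is a" (subclass of) int */
        /* Return a fresh true int rather than the subclass instance. */
        return PyInt_FromLong(PyInt_AS_LONG(ob));
    }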
the fiddling is simply due to the fact that no caller of PyLong_AsDouble ever
checked for failure (so that's fixing old bugs). PyLong_AsDouble is much
faster for big inputs now too, but that's more of a happy consequence
than a design goal.
but will be the foundation for Good Things:
+ Speed PyLong_AsDouble.
+ Give PyLong_AsDouble the ability to detect overflow.
+ Make true division of long/long nearly as accurate as possible (no
spurious infinities or NaNs).
+ Return non-insane results from math.log and math.log10 when passing a
long that can't be approximated by a double better than HUGE_VAL.
PEP 238. Changes:
- add a new flag variable Py_DivisionWarningFlag, declared in
pydebug.h, defined in object.c, set in main.c, and used in
{int,long,float,complex}object.c. When this flag is set, the
classic division operator issues a DeprecationWarning message.
- add a new API PyRun_SimpleStringFlags() to match
PyRun_SimpleString(). The main() function calls this so that
commands run with -c can also benefit from -Dnew.
- While I was at it, I changed the usage message in main() somewhat:
alphabetized the options, split it in *four* parts to fit in under
512 bytes (not that I still believe this is necessary -- doc strings
elsewhere are much longer), and perhaps most visibly, don't display
the full list of options on each command line error. Instead, the
full list is only displayed when -h is used, and otherwise a brief
reminder of -h is displayed. When -h is used, write to stdout so
that you can do `python -h | more'.
Notes:
- I don't want to use the -W option to control whether the classic
division warning is issued or not, because the machinery to decide
whether to display the warning or not is very expensive (it involves
calling into the warnings.py module). You can use -Werror to turn
the warnings into exceptions though.
- The -Dnew option doesn't select future division for all of the
program -- only for the __main__ module. I don't know if I'll ever
change this -- it would require changes to the .pyc file magic
number to do it right, and a more global notion of compiler flags.
- You can usefully combine -Dwarn and -Dnew: this gives the __main__
module new division, and warns about classic division everywhere
else.
- Do not compile unicodeobject, unicodectype, and unicodedata if Unicode is disabled
- check for Py_USING_UNICODE in all places that use Unicode functions
- disables unicode literals, and the builtin functions
- add the types.StringTypes list
- remove Unicode literals from most tests.
This introduces:
- A new operator // that means floor division (the kind of division
where 1/2 is 0).
- The "future division" statement ("from __future__ import division)
which changes the meaning of the / operator to implement "true
division" (where 1/2 is 0.5).
- New overloadable operators __truediv__ and __floordiv__.
- New slots in the PyNumberMethods struct for true and floor division,
new abstract APIs for them, new opcodes, and so on.
I emphasize that without the future division statement, the semantics
of / will remain unchanged until Python 3.0.
Not yet implemented are warnings (default off) when / is used with int
or long arguments.
This has been on display since 7/31 as SF patch #443474.
Flames to /dev/null.
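At the C level, the new abstract APIs mirror the two operators (a minimal
sketch; a and b are arbitrary number objects):

    PyObject *q = PyNumber_FloorDivide(a, b);   /* a // b */
    PyObject *t = PyNumber_TrueDivide(a, b);    /* a / b under future division */
    if (q == NULL || t == NULL) {
        Py_XDECREF(q);
        Py_XDECREF(t);
        return NULL;
    }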
particular, str(long) and repr(long) use base 10, and that gets a factor
of 4 speedup). Another factor of 2 can be gotten by refactoring divrem1 to
support in-place division, but that started getting messy so I'm leaving
that out.
raising an error. This was one of the two issues that the VPython
folks found particularly problematic for their students. (The other
one was integer division...) This implements (my) SF patch #440487.
is allocated than needed (used to allocate 80 bytes of digit space no
matter how small the long input). This also runs faster, at least on 32-
bit boxes.
Replaced PyLong_{As,From}{Unsigned,}LongLong guts with calls
to _PyLong_{As,From}ByteArray.
_testcapimodule.c:
Added strong tests of PyLong_{As,From}{Unsigned,}LongLong.
Fixes SF bug #432552 PyLong_AsLongLong() problems.
Possible bugfix candidate, but the fix relies on code added to longobject
to support the new q/Q structmodule format codes.
This completes the q/Q project.
longobject.c _PyLong_AsByteArray: The original code had a gross bug:
the most-significant Python digit doesn't necessarily have SHIFT
significant bits, and you really need to count how many copies of the sign
bit it has else spurious overflow errors result.
test_struct.py: This now does exhaustive std q/Q testing at, and on both
sides of, all relevant power-of-2 boundaries, both positive and negative.
NEWS: Added brief dict news while I was at it.
_PyLong_FromByteArray
_PyLong_AsByteArray
Untested and probably buggy -- they compile OK, but nothing calls them
yet. Will soon be called by the struct module, to implement x-platform
'q' and 'Q'.
If other people have uses for them, we could move them into the public API.
See longobject.h for usage details.
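A hedged usage sketch, the direction the struct module's 'q' unpacking
needs (the 8-byte buffer here is illustrative):

    unsigned char buf[8] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff};

    /* Little-endian, signed: these eight bytes decode to -1L. */
    PyObject *v = _PyLong_FromByteArray(buf, sizeof(buf),
                                        1 /* little-endian */,
                                        1 /* signed */);
    if (v == NULL)
        return NULL;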
raise ValueError. Checked in the patch as far as it went, but also changed
all of ints, longs and floats to raise ZeroDivisionError instead when raising
0 to a negative number. This is what 754-inspired stds require, as the "true
result" is an infinity obtained from finite operands, i.e. it's a singularity.
Also changed float pow to not be so timid about using its square-and-multiply
algorithm. Note that what math.pow does is unrelated to what builtin pow
does, and will still vary by platform.
This was a misleading bug -- the true "bug" was that hash(x) gave an error
return when x is an infinity. Fixed that. Added new Py_IS_INFINITY macro to
pyport.h. Rearranged code to reduce growing duplication in hashing of float and
complex numbers, pushing Trent's earlier stab at that to a logical conclusion.
Fixed exceedingly rare bug where hashing of floats could return -1 even if there
wasn't an error (didn't waste time trying to construct a test case, it was simply
obvious from the code that it *could* happen). Improved complex hash so that
hash(complex(x, y)) doesn't systematically equal hash(complex(y, x)) anymore.
This was a convenient excuse to create the pyport.h file recently
discussed!
Please use new Py_ARITHMETIC_RIGHT_SHIFT when right-shifting a
signed int and you *need* sign-extension. This is #define'd in
pyport.h, keying off new config symbol SIGNED_RIGHT_SHIFT_ZERO_FILLS.
If you're running on a platform that needs that symbol #define'd,
the std tests never would have worked for you (in particular,
at least test_long would have failed).
The autoconfig stuff got added to Python after my Unix days, so
I don't know how that works. Would someone please look into doing
& testing an auto-config of the SIGNED_RIGHT_SHIFT_ZERO_FILLS
symbol? It needs to be defined if & only if, e.g., (-1) >> 3 is
not -1.
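Usage sketch (the macro takes the type, the value and the shift count):

    long x = -17;
    /* Sign-extending by construction: yields -3 even on a platform
       whose signed >> zero-fills. */
    long y = Py_ARITHMETIC_RIGHT_SHIFT(long, x, 3);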
Stein -- thanks!). Incidentally removed all the Py_PROTO macros
from object.h, as they prevented my editor from magically finding
the definitions of the "coercion", "cmpfunc" and "reprfunc"
typedefs that were being redundantly applied in longobject.c.
This patch corrects bounds checking in PyLong_FromLongLong. Currently, it does
not check properly for negative values when checking to see if the incoming
value fits in a long or unsigned long. This results in possible silent
truncation of the value for very large negative values.
For more comments, read the patches@python.org archives.
For documentation read the comments in mymalloc.h and objimpl.h.
(This is not exactly what Vladimir posted to the patches list; I've
made a few changes, and Vladimir sent me a fix in private email for a
problem that only occurs in debug mode. I'm also holding back on his
change to main.c, which seems unnecessary to me.)
his copy of test_contains.py seems to be broken -- the lines he
deleted were already absent). Checkin messages:
New Unicode support for int(), float(), complex() and long().
- new APIs PyInt_FromUnicode() and PyLong_FromUnicode()
- added support for Unicode to PyFloat_FromString()
- new encoding API PyUnicode_EncodeDecimal() which converts
Unicode to a decimal char* string (used in the above new
APIs)
- shortcuts for calls like int(<int object>) and float(<float obj>)
- tests for all of the above
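A small sketch of the new PyLong_FromUnicode() constructor mentioned above
(the buffer and length here are illustrative):

    Py_UNICODE digits[3] = {'1', '2', '3'};

    /* Parse the Unicode buffer in base 10; yields the long 123L. */
    PyObject *n = PyLong_FromUnicode(digits, 3, 10);
    if (n == NULL)
        return NULL;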
Unicode compares and contains checks:
- comparing Unicode and non-string types now works; TypeErrors
are masked, all other errors such as ValueError during
Unicode coercion are passed through (note that PyUnicode_Compare
does not implement the masking -- PyObject_Compare does this)
- contains now works for non-string types too; TypeErrors are
masked and 0 returned; all other errors are passed through
Better testing support for the standard codecs.
Misc minor enhancements, such as an alias dbcs for the mbcs codec.
Changes:
- PyLong_FromString() now applies the same error checks as
does PyInt_FromString(): trailing garbage is reported
as an error and no longer silently ignored. The only characters
which may be trailing the digits are 'L' and 'l' -- these
are still silently ignored.
- string.ato?() now directly interface to int(), long() and
float(). The error strings are now a little different, but
the type still remains the same. These functions are now
ready to get declared obsolete ;-)
- PyNumber_Int() now also does a check for embedded NULL chars
in the input string; PyNumber_Long() already did this (and
still does)
Followed by:
Looks like I've gone a step too far there... (and test_contains.py
seems to have a bug too).
I've changed back to reporting all errors in PyUnicode_Contains()
and added a few more test cases to test_contains.py (plus corrected
the join() NameError).
trailing 'L' is appended to the representation,
otherwise not.
All existing call sites are modified to pass true for
addL.
Remove incorrect statement about external use of this
function from elsewhere; it's static!
long_str(): Handler for the tp_str slot in the type object.
Identical to long_repr(), but passes false as the addL
parameter of long_format().
The MS compiler doesn't call it 'long long', it uses __int64,
so a new #define, LONG_LONG, has been added and all occurrences
of 'long long' are replaced with it.
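A hedged sketch of the idea (the real define lives in the config headers;
the guard shown here is illustrative):

    #ifdef _MSC_VER
    #define LONG_LONG __int64        /* MS compiler spelling */
    #else
    #define LONG_LONG long long
    #endif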
From: "Tim Peters" <tim_one@email.msn.com>
To: "Guido van Rossum" <guido@CNRI.Reston.VA.US>
Date: Sat, 23 May 1998 21:45:53 -0400
Guido, the overflow checking in PyLong_AsLong is off a little:
1) If the C in use sign-extends right shifts on signed longs, there's a
spurious overflow error when converting the most-negative int:
Python 1.5.1 (#0, Apr 13 1998, 20:22:04) [MSC 32 bit (Intel)] on win32
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> x = -1L << 31
>>> x
-2147483648L
>>> int(x)
Traceback (innermost last):
File "<stdin>", line 1, in ?
OverflowError: long int too long to convert
>>>
2) If C does not sign-extend, some genuine overflows won't be caught.
The attached should repair both, and, because I installed a new disk and a C
compiler today, it's even been compiled this time <wink>.
Python 1.5.1 (#0, May 23 1998, 20:24:58) [MSC 32 bit (Intel)] on win32
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> x = -1L << 31
>>> x
-2147483648L
>>> int(x)
-2147483648
>>> int(-x)
Traceback (innermost last):
File "<stdin>", line 1, in ?
OverflowError: long int too long to convert
>>> int(-x-1)
2147483647
>>> int(x-1)
Traceback (innermost last):
File "<stdin>", line 1, in ?
OverflowError: long int too long to convert
>>>
end-casing-ly y'rs - tim
entirely redone operator overloading. The rules for class
instances are now much more relaxed than for other built-in types
(whose coerce must still return two objects of the same type)
* Objects/floatobject.c: add overflow check when converting float
to int and implement truncation towards zero using ceil/floor
* Objects/longobject.c: change ValueError to OverflowError when
converting to int
* Objects/rangeobject.c: modernized
* Objects/stringobject.c: use HAVE_LIMITS instead of __STDC__
* Objects/xxobject.c: changed to use new style (not finished?)
Added $(SYSDEF) to its build rule in Makefile.
* cgensupport.[ch], modsupport.[ch]: removed some old stuff. Also
changed files that still used it... And made several things static
that weren't but should have been... And other minor cleanups...
* listobject.[ch]: add external interfaces {set,get}listslice
* socketmodule.c: fix bugs in new send() argument parsing.
* sunaudiodevmodule.c: added flush() and close().
* Stubs for faster implementation of local variables (not yet finished)
* Added function name to code object. Print it for code and function
objects. THIS MAKES THE .PYC FILE FORMAT INCOMPATIBLE (the version
number has changed accordingly)
* Print address of self for built-in methods
* New internal functions getattro and setattro (getattr/setattr with
string object arg)
* Replaced "dictobject" with more powerful "mappingobject"
* New per-type function tp_hash to implement arbitrary object hashing,
and hashobject() to interface to it
* Added built-in functions hash(v) and hasattr(v, 'name')
* classobject: made some functions static that accidentally weren't;
added __hash__ special instance method to implement hash()
* Added proper comparison for built-in methods and functions
* Fixcprt.py: added [-y file] option, do only files younger than file.
* modsupport.[ch]: added vmkvalue().
* intobject.c: use mkvalue().
* stringobject.c: added "formatstring"; renamed string* to string_*;
ceval.c: call formatstring for string % value.
* longobject.c: close memory leak in divmod.
* parsetok.c: set result node to NULL when returning an error.
* various modules: added 1993 to copyright.
* thread.c: added copyright notice.
* ceval.c: minor change to error message for "+"
* stdwinmodule.c: check for error from wfetchcolor
* config.c: MS-DOS fixes (define PYTHONPATH, use DELIM, use osdefs.h)
* Add declaration of inittab to import.h
* sysmodule.c: added sys.builtin_module_names
* xxmodule.c, xxobject.c: fix minor errors
stdwinmodule.c: wsetfont can now return an error
Makefile: add CL_USE and CL_LIB*S; config.c: move CL part around
New things in imgfile; also in Makefile.
longobject.c: fix comparison of negative long ints... [REAL BUG!]
marshal.c: add dumps() and loads() to read/write strings
timemodule.c: make sure there's always a floatsleep()
posixmodule.c: rationalize struct returned by times()
Makefile: add test target, disable imgfile by default
thread.c: Improved coexistance with dl module (sjoerd)
stdwinmodule.c: Change include stdwin.h if macintosh
rotormodule.c: added missing last argument to RTR_?_region calls
config.c: merged with configmac.c, added 1993 to copyright message
fileobject.c: int compared to NULL in writestring(); change fopenRF ifdef
timemodule.c: simplify times() using mkvalue; include myselect.h
earlier (for sequent).
posixmodule.c: for sequent, include unistd.h instead of explicit
extern definitions and don't define rename()
Makefile: change misleading/wrong MD5 comments
* flmodule.c: added some missing functions; changed readonly flags of
some data members based upon FORMS documentation.
* listobject.c: fixed int/long arg lint bug (bites PC compilers).
* several: removed redundant print methods (repr is good enough).
* posixmodule.c: added (still experimental) process group functions.
coercion is now completely generic.
* ceval.c: for instances, don't coerce for + and *; * reverses
arguments if left one is non-instance numeric and right one sequence.