This simplifies the rounding in _PyObject_VAR_SIZE, allows restoring the
pre-rounding calling sequence, and allows some nice little simplifications
in its callers. I'm still making it return a size_t, though.
As Guido suggested, this makes the new subclassing code substantially
simpler. But the mechanics of doing it w/ C macro semantics are a mess,
and _PyObject_VAR_SIZE has a new calling sequence now.
Question: The PyObject_NEW_VAR macro appears to be part of the public API.
Regardless of what it expands to, the notion that it has to round up the
memory it allocates is new, and extensions containing the old
_PyObject_VAR_SIZE expansion (which was embedded in the
PyObject_NEW_VAR expansion) won't do this rounding. But the rounding
isn't actually *needed* except for new-style instances with dict pointers
after a variable-length blob of embedded data. So my guess is that we do
not need to bump the API version for this (as the rounding isn't needed
for anything an extension can do unless it's recompiled anyway). What's
your guess?
pad memory to properly align the __dict__ pointer in all cases.
gcmodule.c/objimpl.h, _PyObject_GC_Malloc:
+ Added a "padding" argument so that this flavor of malloc can allocate
enough bytes for alignment padding (it can't know this is needed, but
its callers do).
typeobject.c, PyType_GenericAlloc:
+ Allocated enough bytes to align the __dict__ pointer.
+ Sped and simplified the round-up-to-PTRSIZE logic.
+ Added blank lines so I could parse the if/else blocks <0.7 wink>.
+ Use the _PyObject_VAR_SIZE macro to compute object size.
+ Break the computation into lines convenient for debugger inspection.
+ Speed the round-up-to-pointer-size computation.
many types were subclassable but had a xxx_dealloc function that
called PyObject_DEL(self) directly instead of deferring to
self->ob_type->tp_free(self). It is permissible to set tp_free in the
type object directly to _PyObject_Del, for non-GC types, or to
_PyObject_GC_Del, for GC types. Still, PyObject_DEL was a tad faster,
so I fear that our pystone rating is going down again. I'm not
sure if doing something like
    void xxx_dealloc(PyObject *self)
    {
        if (PyXxxCheckExact(self))
            PyObject_DEL(self);
        else
            self->ob_type->tp_free(self);
    }
is any faster than always calling the else branch, so I haven't
attempted that -- however those types whose own dealloc is fancier
(int, float, unicode) do use this pattern.
For a dynamically constructed type object, fill in the tp_doc slot with
a copy of the argument dict's "__doc__" value, provided the latter exists
and is a string.
NOTE: I don't know what to do if it's a Unicode string, so in that case
tp_doc is left NULL (which shows up as Py_None if you do Class.__doc__).
Note that tp_doc holds a char*, not a general PyObject*.
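A minimal Python-level sketch of the behavior described (the class name and
docstring are made up):

    # The "__doc__" entry of the dict passed to the dynamic type constructor
    # becomes the class docstring, provided it is a plain string.
    C = type('C', (object,), {'__doc__': "A dynamically constructed class."})
    print(C.__doc__)    # -> A dynamically constructed class.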
test for getattribute==NULL was bogus because it always found
object.__getattribute__. Pick it apart using the trick we learned
from slot_sq_item, and if it's just a wrapper around
PyObject_GenericGetAttr, zap it. Also added a long XXX comment
explaining the consequences.
test dramatically:
class T(tuple): __dynamic__ = 1
t = T(range(1000))
for i in range(1000): tt = tuple(t)
The speedup was about 5x compared to the previous state of CVS (1.7
vs. 8.8, in arbitrary time units). But it's still more than twice as
slow as the same test with __dynamic__ = 0 (0.8).
I'm not sure that I really want to go through the trouble of this kind
of speedup for every slot. Even doing it just for the most popular
slots will be a major effort (the new slot_sq_item is 40+ lines, while
the old one was one line with a powerful macro -- unfortunately the
speedup comes from expanding the macro and doing things in a way
specific to the slot signature).
An alternative that I'm currently considering is sketched in PLAN.txt:
trap setattr on type objects. But this will require keeping track of
all derived types using weak references.
pointing to a static variable to hold the object form of the string
was never used, causing endless calls to PyString_InternFromString().
One particular test (with lots of __getitem__ calls) became a third
faster with this!
Unknown whether this fixes it.
- stringobject.c, PyString_FromFormatV: don't assume that va_list is of
a type that can be copied via an initializer.
- errors.c, PyErr_Format: add a va_end() to balance the va_start().
instances).
Also added GC support to various auxiliary types: super, property,
descriptors, wrappers, dictproxy. (Only type objects have a tp_clear
field; the other types don't.)
One change was necessary to the GC infrastructure. We have statically
allocated type objects that don't have a GC header (and can't easily
be given one) and heap-allocated type objects that do have a GC
header. Giving these different metatypes would be really ugly: I
tried, and I had to modify pickle.py, cPickle.c, copy.py, invent a
new name for the new metatype and make it a built-in, change
affected tests... In short, a mess. So instead, we add a new type
slot tp_is_gc, which is a simple Boolean function that determines
whether a particular instance has GC headers or not. This slot is
only relevant for types that have the (new) GC flag bit set. If the
tp_is_gc slot is NULL (by far the most common case), all instances of
the type are deemed to have GC headers. This slot is called by the
PyObject_IS_GC() macro (which is only used twice, both times in
gcmodule.c).
I also changed the extern declarations for a bunch of GC-related
functions (_PyObject_GC_Del etc.): these always exist but objimpl.h
only declared them when WITH_CYCLE_GC was defined, but I needed to be
able to reference them without #ifdefs. (When WITH_CYCLE_GC is not
defined, they do the same as their non-GC counterparts anyway.)
- SLOT1BINFULL() macro: changed this to check for __rop__ overriding
__op__, like binary_op1() in abstract.c -- the latter only calls the
slot function once if both types use the same slot function, so the
slot function must make both calls -- which it already did for the
__op__, __rop__ order, but not yet for the __rop__, __op__ order
when B.__class__ is a subclass of A.__class__.
- slot_sq_contains(), slot_nb_nonzero(): use lookup_maybe() rather
than lookup_method() which sets an exception which we then clear.
- slot_nb_coerce(): don't give up when left argument's __coerce__
returns NotImplemented, but give the right argument a chance.
Generalize PyLong_AsLongLong to accept int arguments too. The real point
is so that PyArg_ParseTuple's 'L' code does too. That code was
undocumented (AFAICT), so I documented it.
__rop__ now takes precedence over __op__. Those circumstances are:
- Both arguments are new-style classes
- Both arguments are new-style numbers
- Their implementation slots for tp_op differ
- Their types differ
- The right argument's type is a subtype of the left argument's type
Also did this for the ternary operator (pow) -- only the binary case
is dealt with properly though, since __rpow__ is not supported anyway.
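A hedged Python sketch of the precedence rule above (class names are
illustrative):

    class A(object):
        def __add__(self, other):
            return "A.__add__"
        def __radd__(self, other):
            return "A.__radd__"

    class B(A):
        def __radd__(self, other):
            return "B.__radd__"

    # B is a subclass of A and overrides __radd__, so the reflected method
    # wins even though an A instance is on the left.
    print(A() + B())    # -> B.__radd__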
their 'i' and 'r' variants) were not being generated if the
corresponding nb_ slots were present in the type object. I bet this
is because floor and true division were introduced after I last
looked at that part of the code.
- Made cls.__module__ writable.
- Ensure that obj.__dict__ is returned as {}, not None, even upon first
reference; it simply springs into life when you ask for it.
(*) The pickling support is provisional for the following reasons:
- It doesn't support classes with __slots__.
- It relies on additional support in copy_reg.py: the C method
    __reduce__, defined in the object class, really calls
copy_reg._reduce(obj). Eventually the Python code in copy_reg.py
needs to be migrated to C, but I'd like to experiment with the
Python implementation first. The _reduce() code also relies on an
additional helper function, _reconstructor(), defined in
copy_reg.py; this should also be reimplemented in C.
than <type 'ClassName'>. Exception: if it's a built-in type or an
extension type, continue to call it <type 'ClassName'>. Call me a
wimp, but I don't want to break more user code than necessary.
same. I hope the test for structural equivalence is stringent enough.
It only allows the assignment if the old and new types:
- have the same basic size
- have the same item size
- have the same dict offset
- have the same weaklist offset
- have the same GC flag bit
- have a common base that is the same except for maybe the dict and
weaklist (which may have been added separately at the same offsets
in both types)
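A small Python sketch of the kind of assignment this permits (class names
are made up); structurally incompatible targets are still rejected:

    class A(object):
        pass

    class B(object):
        pass

    a = A()
    a.__class__ = B           # same layout: allowed
    print(type(a) is B)       # -> True

    try:
        a.__class__ = int     # different layout: raises TypeError
    except TypeError as exc:
        print("rejected:", exc)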
- property() now takes 4 keyword arguments: fget, fset, fdel, doc.
  Note that the real purpose of the 'f' prefix is to make fdel fit in
  ('del' is a keyword, so it can't be used as a keyword argument name).
- These map to visible readonly attributes 'fget', 'fset', 'fdel',
and '__doc__' in the property object.
- fget/fset/fdel weren't discoverable from Python before.
- __doc__ is new, and makes it possible to associate a docstring with a property.
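A quick illustration of the four keyword arguments and the attributes they
become:

    class C(object):
        def _get_x(self):
            return self._x
        def _set_x(self, value):
            self._x = value
        def _del_x(self):
            del self._x
        x = property(fget=_get_x, fset=_set_x, fdel=_del_x,
                     doc="the x coordinate")

    print(C.x.fget, C.x.fset, C.x.fdel)   # the wrapped functions are visible
    print(C.x.__doc__)                    # -> the x coordinate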
- if __getattribute__ exists, it is called first;
  if it doesn't exist, PyObject_GenericGetAttr is called first.
- if the above raises AttributeError, and __getattr__ exists,
it is called.
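A small sketch of that order:

    class C(object):
        def __getattribute__(self, name):
            print("__getattribute__:", name)
            return object.__getattribute__(self, name)
        def __getattr__(self, name):
            # Only reached after __getattribute__ raised AttributeError.
            print("__getattr__ fallback:", name)
            return 42

    c = C()
    print(c.missing)    # runs __getattribute__ first, then the fallback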
classes to __getattribute__, to make it crystal-clear that it doesn't
have the same semantics as overriding __getattr__ on classic classes.
This is a halfway checkin -- I'll proceed to add a __getattr__ hook
that works the way it works in classic classes.
no backwards compatibility to worry about, so I just pushed the
'closure' struct member to the back -- it's never used in the current
code base (I may eliminate it, but that's more work because the getter
and setter signatures would have to change.)
As examples, I added actual docstrings to the getset attributes of a
few types: file.closed, xxsubtype.spamdict.state.
compatibility, this required all places where an array of "struct
memberlist" structures was declared that is referenced from a type's
tp_members slot to change the type of the structure to PyMemberDef;
"struct memberlist" is now only used by old code that still calls
PyMember_Get/Set. The code in PyObject_GenericGetAttr/SetAttr now
calls the new APIs PyMember_GetOne/SetOne, which take a PyMemberDef
argument.
As examples, I added actual docstrings to the attributes of a few
types: file, complex, instance method, super, and xxsubtype.spamlist.
Also converted the symtable to new style getattr.
elements which are not Unicode objects or strings. (This matches
the string.join() behaviour.)
Fix a memory leak in the .join() method which occurs in case
the Unicode resize fails.
Restore the test_unicode output.
complex_coerce() would never be called with a complex argument,
because PyNumber_Coerce[Ex] doesn't bother calling the type's coercion
method if the values already have the same type. But now, of course,
it's possible to pass an instance of a complex *subtype*, and those
must be accepted.
hack, and it's even more disgusting than a PyInstance_Check() call.
If the tp_compare slot is the slot used for overrides in Python,
it's always called.
Add some tests that show what should work too.
only safely call a type's tp_compare slot if the second argument is
also an instance of the same type. I hate to think what
e.g. int_compare() would do with a second argument that's a float!
descriptors for each attribute. The getattr() implementation is
similar to PyObject_GenericGetAttr(), but delegates to im_self instead
of looking in __dict__; I couldn't do this as a wrapper around
PyObject_GenericGetAttr().
XXX A problem here is that this is a case of *delegation*. dir()
doesn't see exactly the same attributes that are actually defined;
e.g. if the delegate is a Python function object, it supports
attributes like func_code etc., but these are not visible to dir(); on
the other hand, dynamic function attributes (stored in the function's
__dict__) *are* visible to dir(). Maybe we need a mechanism to tell
dir() about the delegation mechanism? I vaguely recall seeing a
request in the newsgroup for a more formal definition of attribute
delegation too. Sigh, time for a new PEP.
and are lists, and then just the string elements (if any)).
There are good and bad reasons for this. The good reason is to support
dir() "like before" on objects of extension types that haven't migrated
to the class introspection API yet. The bad reason is that Python's own
method objects are such a type, and this is the quickest way to get their
im_self etc attrs to "show up" via dir(). It looks much messier to move
them to the new scheme, as their current getattr implementation presents
a view of their attrs that's a union of their own attrs plus their
im_func's attrs. In particular, methodobject.__dict__ actually returns
methodobject.im_func.__dict__, and if that's important to preserve it
doesn't seem to fit the class introspection model at all.
Both int and long multiplication are changed to be more careful in
their assumptions about when one of the arguments is a sequence: the
assumption that at least one of the arguments must be an int (or long,
respectively) is still held, but the assumption that these don't smell
like sequences is no longer true: a subtype of int or long may well
have a sequence-repeat thingie!
NotImplemented when the lookup fails, and use this for binary
operators. Also lookup_maybe() which doesn't raise an exception when
the lookup fails (still returning NULL).
- Don't turn a non-tuple argument into a one-tuple. Rather, the
caller must pass a format that causes Py_VaBuildValue() to return a
tuple.
- Speed things up by calling PyObject_Call (which is fairly low-level
and straightforward) rather than PyObject_CallObject (which calls
PyEval_CallObjectWithKeywords which calls PyObject_Call, and nothing
  is really done in the meantime except some tests for NULL args and
valid types, which are already guaranteed).
- Cosmetics.
Other places:
- Make sure that the format argument to call_method() is surrounded by
parentheses, so it will cause a tuple to be created.
- Replace a few calls to PyEval_CallObject() with a surefire tuple for
args to calls to PyObject_Call(). (A few calls to
PyEval_CallObject() remain that have NULL for args.)
directly, as the only thing done here (replace NULL args with an empty
tuple) is also done there.
XXX Maybe we should take one step further and equate the two at the
macro level? That's harder though because PyEval_Call* is declared in
a header that's not included standard. But it is silly that
PyObject_CallObject calls PyEval_CallObject which calls back to
PyObject_Call. Maybe PyEval_CallObject should be moved into this file
instead? All I know is that there are too many call APIs! The
difference between PyObject_Call and PyEval_CallObjectWithKeywords is
that the latter allows args to be NULL, and does explicit type checks
for args and kwds.
A surprising number of changes to split tp_new into tp_new and tp_init.
Turned out the older PyFile_FromFile() didn't initialize the memory it
allocated in all (error) cases, which caused new sanity asserts
elsewhere to fail left & right (and could have, e.g., caused file_dealloc
to try decrefing random addresses).
keys are true strings -- no subclasses need apply. This may be debatable.
The problem is that a str subclass may very well want to override __eq__
and/or __hash__ (see the new example of case-insensitive strings in
test_descr), but go-fast shortcuts for strings are ubiquitous in our dicts
(and subclass overrides aren't even looked for then). Another go-fast
reason for the change is that PyString_CheckExact() is a quicker test
than PyString_Check(), and we make such a test on virtually every access
to every dict.
OTOH, a str subclass may also be perfectly happy using the base str eq
and hash, and this change slows them a lot. But those cases are still
hypothetical, while Python's own reliance on true-string dicts is not.
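For reference, the kind of subclass at issue, sketched loosely after the
case-insensitive string example mentioned above; with this change such keys
fall off the string fast path, so their __eq__/__hash__ overrides are
honored (at some speed cost):

    class CIStr(str):
        """A str subclass that compares and hashes case-insensitively."""
        def __eq__(self, other):
            return self.lower() == str(other).lower()
        def __hash__(self):
            return hash(self.lower())

    print(CIStr("Spam") == "sPAM")    # -> True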
just by doing type(f) where f is any file object. This left a hole in
restricted execution mode that rexec.py can't plug by itself (although it
can plug part of it; the rest is plugged in fileobject.c now).
on to the tp_new slot (if non-NULL), as well as to the tp_init slot (if
any). A sane type implementing both tp_new and tp_init should probably
pay attention to the arguments in only one of them.
with the same value instead. This ensures that a string (or string
subclass) object's ob_sinterned pointer is always a str (or NULL), and
that the dict of interned strings only has strs as keys.
+ These were leaving the hash fields at 0, which all string and unicode
routines believe is a legitimate hash code. As a result, hash() applied
to str and unicode subclass instances always returned 0, which in turn
confused dict operations, etc.
+ Changed local names "new"; no point in antagonizing C++ compilers.
subclasses, all "the usual" ones (slicing etc), plus replace, translate,
ljust, rjust, center and strip. I don't know how to be sure they've all
been caught.
Question: Should we complain if someone tries to intern an instance of
a string subclass? I hate to slow any code on those paths.
tuple(i) repaired to return a true tuple when i is an instance of a
tuple subclass.
Added PyTuple_CheckExact macro.
PySequence_Tuple(): if a tuple-like object isn't exactly a tuple, it's
not safe to return the object as-is -- make a new tuple of it instead.
Given an immutable type M, and an instance I of a subclass of M, the
constructor call M(I) was just returning I as-is; but it should return a
new instance of M. This fixes it for M in {int, long}. Strings, floats
and tuples remain to be done.
Added new macros PyInt_CheckExact and PyLong_CheckExact, to more easily
distinguish between "is" and "is a" (i.e., only an int passes
PyInt_CheckExact, while any subclass of int passes PyInt_Check).
Added private API function _PyLong_Copy.
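A Python-level sketch of the guarantee being fixed here (the subclass name
is made up):

    class MyInt(int):
        pass

    x = MyInt(7)
    y = int(x)                # must be a true int, not the MyInt instance
    print(type(y) is int)     # -> True
    print(type(x) is int)     # -> False: an int, but not *exactly* an int,
                              #    which is the distinction CheckExact draws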
Subtlety on Windows: if we change test_largefile.py to use a file
> 4GB, it still fails. A debug session suggests this is because
fseek(fp, 0, 2) refuses to seek to the end of the file when the file
is > 4GB, because it uses the SetFilePointer() in 32-bit mode.
But it only fails when we seek relative to the end of the file,
because in the other seek modes only calls to fgetpos() and fsetpos()
are made, which use Get/SetFilePointer() in 64-bit mode. Solution:
#ifdef MS_WINDOWS, replace the call to fseek(fp, ...) with a call to
_lseeki64(fileno(fp), ...). Make sure to call fflush(fp) first.
(XXX Could also replace the entire branch with a call to _lseeki64().
Would that be more efficient? Certainly less generated code.)
(XXX This needs more testing. I can't actually test that it works for
files >4GB on my Win98 machine, because the filesystem here won't let
me create files >=4GB at all. Tim should test this on his Win2K
machine.)
- use PyModule_Check() instead of PyObject_TypeCheck(), now we can.
- don't assert that the __dict__ gotten out of a module is always
a dictionary; check its type, and raise an exception if it's not.
iterable object. I'm not sure how that got overlooked before!
Got rid of the internal _PySequence_IterContains, introduced a new
internal _PySequence_IterSearch, and rewrote all the iteration-based
"count of", "index of", and "is the object in it or not?" routines to
just call the new function. I suppose it's slower this way, but the
code duplication was getting depressing.
the base classes is not a classic class, and its class (the metaclass)
is callable, call the metaclass to do the deed.
One effect of this is that, when mixing classic and new-style classes
amongst the bases of a class, it doesn't matter whether the first base
class is a classic class or not: you will always get the error
"TypeError: metatype conflict among bases". (Formerly, with a classic
class first, you'd get "TypeError: PyClass_New: base must be a class".)
Another effect is that multiple inheritance from ExtensionClass.Base,
with a classic class as the first class, transfers control to the
ExtensionClass.Base class. This is what we need for SF #443239 (and
also for running Zope under 2.2a4, before ExtensionClass is replaced).
corresponding "getitem" operation (sq_item or mp_subscript) is
implemented. I realize that "sequence-ness" and "mapping-ness" are
poorly defined (and the tests may still be wrong for user-defined
instances, which always have both slots filled), but I believe that a
sequence that doesn't support its getitem operation should not be
considered a sequence. All other operations are optional though.
For example, the ZODB BTree tests crashed because PySequence_Check()
returned true for a dictionary! (In 2.2, the dictionary type has a
tp_as_sequence pointer, but the only field filled is sq_contains, so
you can write "if key in dict".) With this fix, all standalone ZODB
tests succeed.
a->tp_mro. If a doesn't have class, it's considered a subclass only
of itself or of 'object'.
This one fix is enough to prevent the ExtensionClass test suite from
dumping core, but that doesn't say much (it's a rather small test
suite). Also note that for ExtensionClass-defined types, a different
subclass test may be needed. But I haven't checked whether
PyType_IsSubtype() is actually used in situations where this matters
-- probably it doesn't, since we also don't check for classic classes.
Curious: the MS docs say stati64 etc are supported even on Win95, but
Win95 doesn't support a filesystem that allows partitions > 2 Gb.
test_largefile: This was opening its test file in text mode. I have no
idea how that worked under Win64, but it sure needs binary mode on Win98.
BTW, on Win98 test_largefile runs quickly (under a second).
While not even documented, they were clearly part of the C API;
there's no great difficulty in supporting them, and it has the cool
effect of not requiring any changes to ExtensionClass.c.
requires that errno ever get set, and it looks like glibc is already
playing that game. New rules:
+ Never use HUGE_VAL. Use the new Py_HUGE_VAL instead.
+ Never believe errno. If overflow is the only thing you're interested in,
use the new Py_OVERFLOWED(x) macro. If you're interested in any libm
errors, use the new Py_SET_ERANGE_IF_OVERFLOW(x) macro, which attempts
to set errno the way C89 said it worked.
Unfortunately, none of these are reliable, but they work on Windows and I
*expect* under glibc too.
I believe this works on Linux (tested both on a system with large file
support and one without it), and it may work on Solaris 2.7.
The changes are twofold:
(1) The configure script now boldly tries to set the two symbols that
are recommended (for Solaris and Linux), and then tries a test
script that does some simple seeking without writing.
(2) The _portable_{fseek,ftell} functions are a little more systematic
in how they try the different large file support options: first
try fseeko/ftello, but only if off_t is large; then try
fseek64/ftell64; then try hacking with fgetpos/fsetpos.
I'm keeping my fingers crossed. The meaning of the
HAVE_LARGEFILE_SUPPORT macro is not at all clear.
I'll see if I can get it to work on Windows as well.
the fiddling is simply because no caller of PyLong_AsDouble ever
checked for failure (so that's fixing old bugs). PyLong_AsDouble is much
faster for big inputs now too, but that's more of a happy consequence
than a design goal.
but will be the foundation for Good Things:
+ Speed PyLong_AsDouble.
+ Give PyLong_AsDouble the ability to detect overflow.
+ Make true division of long/long nearly as accurate as possible (no
spurious infinities or NaNs).
+ Return non-insane results from math.log and math.log10 when passing a
long that can't be approximated by a double better than HUGE_VAL.
mapping object", in the same sense dict.update(x) requires of x (that x
has a keys() method and a getitem).
Questionable: The other type constructors accept a keyword argument, so I
did that here too (e.g., dictionary(mapping={1:2}) works). But type_call
doesn't pass the keyword args to the tp_new slot (it passes NULL), it only
passes them to the tp_init slot, so getting at them required adding a
tp_init slot to dicts. Looks like that makes the normal case (i.e., no
args at all) a little slower (the time it takes to call dict.tp_init and
have it figure out there's nothing to do).
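A sketch of the mapping protocol the constructor requires (using the modern
spelling dict(); the class is made up):

    class Pairs(object):
        """Not a dict, but has keys() and __getitem__(), which is enough."""
        def keys(self):
            return ["a", "b"]
        def __getitem__(self, key):
            return key.upper()

    print(dict(Pairs()))    # -> {'a': 'A', 'b': 'B'}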
PEP 238. Changes:
- add a new flag variable Py_DivisionWarningFlag, declared in
pydebug.h, defined in object.c, set in main.c, and used in
{int,long,float,complex}object.c. When this flag is set, the
classic division operator issues a DeprecationWarning message.
- add a new API PyRun_SimpleStringFlags() to match
PyRun_SimpleString(). The main() function calls this so that
commands run with -c can also benefit from -Dnew.
- While I was at it, I changed the usage message in main() somewhat:
alphabetized the options, split it in *four* parts to fit in under
512 bytes (not that I still believe this is necessary -- doc strings
elsewhere are much longer), and perhaps most visibly, don't display
the full list of options on each command line error. Instead, the
full list is only displayed when -h is used, and otherwise a brief
reminder of -h is displayed. When -h is used, write to stdout so
that you can do `python -h | more'.
Notes:
- I don't want to use the -W option to control whether the classic
division warning is issued or not, because the machinery to decide
whether to display the warning or not is very expensive (it involves
calling into the warnings.py module). You can use -Werror to turn
the warnings into exceptions though.
- The -Dnew option doesn't select future division for all of the
program -- only for the __main__ module. I don't know if I'll ever
change this -- it would require changes to the .pyc file magic
number to do it right, and a more global notion of compiler flags.
- You can usefully combine -Dwarn and -Dnew: this gives the __main__
module new division, and warns about classic division everywhere
else.
__dict__ slot for string subtypes.
subtype_dealloc(): properly use _PyObject_GetDictPtr() to get the
(potentially negative) dict offset. Don't copy things into local
variables that are used only once.
type_new(): properly calculate a negative dict offset when tp_itemsize
is nonzero. The __dict__ attribute, if present, is now a calculated
attribute rather than a structure member.
tupledealloc(): only feed the free list when the type is really a
tuple, not a subtype. Otherwise, use PyObject_GC_Del().
_PyTuple_Resize(): disallow using this for tuple subtypes.
don't use getattr, but only look in the dict of the type and base types.
This prevents picking up all sorts of weird stuff, including things defined
by the metaclass when the object is a class (type).
For this purpose, a helper function lookup_method() was added. One or two
other places also use this.
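A visible consequence, sketched in Python: special methods are found on the
type (and its bases), not on the instance:

    class C(object):
        def __len__(self):
            return 3

    c = C()
    c.__len__ = lambda: 99    # per-instance attribute is ignored by len()
    print(len(c))             # -> 3, looked up on type(c), not via getattr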
PyString_FromFormatV(): In the final resize at the end, we can use
PyString_AS_STRING() since we know the object is a string and can
avoid the typechecking.
PyString_FromFormat(): GS sez: "For safety/propriety, you should call
va_end() on the vargs variable."
at least in the first two characters. %p is ill-defined, and people will
forever commit bad tests otherwise ("bad" in the sense that they fall
over (at least on Windows) for lack of a leading '0x'; 5 of the 7 tests
in test_repr.py failed on Windows for that reason this time around).
an inappropriate first argument. Now that there are more ways for
this to fail, make sure to report the name of the class of the
expected instance and of the actual instance.
PyErr_Format() these new C API methods can be used instead of
sprintf()'s into hardcoded char* buffers. This allows us to fix
many situation where long package, module, or class names get
truncated in reprs.
PyString_FromFormat() is the varargs variety.
PyString_FromFormatV() is the va_list variety
Original PyErr_Format() code was modified to allow %p and %ld
expansions.
Many reprs were converted to this, checkins coming soon. Not
changed: complex_repr(), float_repr(), float_print(), float_str(),
int_repr(). There may be other candidates not yet converted.
Closes patch #454743.
super(type) -> unbound super object
super(type, obj) -> bound super object; requires isinstance(obj, type)
Typical use to call a cooperative superclass method:
    class C(B):
        def meth(self, arg):
            super(C, self).meth(arg)
the delete function. (Question: should the attribute name also be
recorded in the getset object? That makes the protocol more work, but
may give us better error messages.)
cases.
powu: Deleted.
This started with a nonsensical error msg:
>>> x = -1.
>>> import sys
>>> x**(-sys.maxint-1L)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
ValueError: negative number cannot be raised to a fractional power
>>>
The special-casing in float_pow was simply wrong in this case (there's
not even anything peculiar about these inputs), and I don't see any point
to it in *any* case: a decent libm pow should have worst-case error under
1 ULP, so in particular should deliver the exact result whenever the exact
result is representable (else its error is at least 1 ULP). Thus our
special fiddling for integral values "shouldn't" buy anything in accuracy,
and, to the contrary, repeated multiplication is less accurate than a
decent pow when the true result isn't exactly representable. So we just
let pow() do its job here (we may not be able to trust libm x-platform
in exceptional cases, but these are normal cases).
interpretation of negative indices, since neither the sq_*item slots
nor the slot_ wrappers do this. (Slices are a different story, there
the size wrapping is done too early.)
Classes that don't use __slots__ have a __weakref__ member added in
the same way as __dict__ is added (i.e. only if the base didn't
already have one). Classes using __slots__ can enable weak
referenceability by adding '__weakref__' to the __slots__ list.
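A quick sketch of enabling weak references on a __slots__ class (class names
are made up):

    import weakref

    class NoRefs(object):
        __slots__ = ("x",)                  # no __weakref__: not referenceable

    class WithRefs(object):
        __slots__ = ("x", "__weakref__")

    try:
        weakref.ref(NoRefs())
    except TypeError as exc:
        print("no weakref support:", exc)

    obj = WithRefs()
    w = weakref.ref(obj)                    # works once __weakref__ is listed
    print(w() is obj)                       # -> True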
Renamed the __weaklistoffset__ class member to __weakrefoffset__ --
it's not always a list, it seems. (Is tp_weaklistoffset a historical
misnomer, or do I misunderstand this?)
- Do not compile unicodeobject, unicodectype, and unicodedata if Unicode is disabled
- check for Py_USING_UNICODE in all places that use Unicode functions
- disables unicode literals, and the builtin functions
- add the types.StringTypes list
- remove Unicode literals from most tests.
the class dict). Anything but a nonnegative int in either place is
*ignored* (before, a non-Boolean was an error). The default is still
static -- in a comparative test, Jeremy's Tools/compiler package ran
twice as slow (compiling itself) using dynamic as the default. (The
static version, which requires a few tweaks to avoid modifying class
variables, runs at about the same speed as the classic version.)
slot_tp_descr_get(): this also needed fallback behavior.
slot_tp_getattro(): remove a debug fprintf() call.
the metatype passed in as an argument. This prevents infinite
recursion when a metatype written in Python calls type.__new__() as a
"super" call.
Also tweaked some comments.
calculate it on the fly. This way even modules with long package
names get an accurate repr instead of a truncated one. The extra
malloc/free cost shouldn't be a problem in a repr function.
Closes SF bug #437984
- type_module(), type_name(): if tp_name contains one or more periods,
the part before the last period is __module__, the part after that
is __name__. Otherwise, for non-heap types, __module__ is
"__builtin__". For heap types, __module__ is looked up in
tp_defined.
- type_new(): heap types have their __module__ set from
globals().__name__; a pre-existing __module__ in their dict is not
overridden. This is not inherited.
- type_repr(): if __module__ exists and is not "__builtin__", it is
included in the string representation (just as it already is for
classes). For example <type '__main__.C'>.
- descrobject.c:descr_check(): only believe None means the same as
NULL if the type given is None's type.
- typeobject.c:wrap_descr_get(): don't "conveniently" default an
absent type to the type of the object argument. Let the called
function figure it out.
types -- currently Type, List, None and NotImplemented. To be called
from Py_Initialize() instead of accumulating calls there.
Also rename type(None) to NoneType and type(NotImplemented) to
NotImplementedType -- naming the type identical to the object was
confusing.
returns that. (This fix is also by MvL; checking it in because I want
to make more changes here. I'm still not 100% satisfied -- see
comments attached to the patch.)
operators for which a default implementation exist now work, both in
dynamic classes and in static classes, overridden or not. This
affects __repr__, __str__, __hash__, __contains__, __nonzero__,
__cmp__, and the rich comparisons (__lt__ etc.). For dynamic
classes, this meant copying a lot of code from classobject! (XXX
There are still some holes, because the comparison code in object.c
uses PyInstance_Check(), meaning new-style classes don't get the
same dispensation. This needs more thinking.)
- Add object.__hash__, object.__repr__, object.__str__. The __str__
dispatcher now calls the __repr__ dispatcher, as it should.
- For static classes, the tp_compare, tp_richcompare and tp_hash slots
are now inherited together, or not at all. (XXX I fear there are
still some situations where you can inherit __hash__ when you
shouldn't, but mostly it's OK now, and I think there's no way we can
get that 100% right.)
setting and deleting a function's __dict__ attribute. Deleting
it, or setting it to a non-dictionary results in a TypeError. Note
that getting it the first time magically initializes it to an
empty dict so that func.__dict__ will always appear to be a
dictionary (never None).
Closes SF bug #446645.
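A sketch of the rules spelled out above:

    def f():
        pass

    print(f.__dict__)              # springs into life as {}, never None
    f.__dict__ = {"answer": 42}
    print(f.answer)                # -> 42

    try:
        del f.__dict__             # deleting it is a TypeError
    except TypeError as exc:
        print("refused:", exc)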
The descr changes moved the dispatch for calling objects from
call_object() in ceval.c to PyObject_Call() in abstract.c.
call_object() and the many functions it used in ceval.c were no longer
used, but were not removed.
Rename meth_call() as PyCFunction_Call() so that it can be called by
the CALL_FUNCTION opcode in ceval.c.
Also, fix error message that referred to PyEval_EvalCodeEx() by its
old name eval_code2(). (I'll probably refer to it by its old name,
too.)
XXX There are still some loose ends: repr(), str(), hash() and
comparisons don't inherit a default implementation from object. This
must be resolved similarly to the way it's resolved for classic
instances.
XXX This is not sufficient: if a dynamic class has no __repr__ method
(for instance), but later one is added, that doesn't add a tp_repr
slot, so repr() doesn't call the __repr__ method. To make this work,
I'll have to add default implementations of several slots to 'object'.
XXX Also, dynamic types currently only inherit slots from their
dominant base.
problem). inherit_slots() is split in two parts: inherit_special()
which inherits the flags and a few very special members from the
dominant base; inherit_slots() which inherits only regular slots,
and is now called for each base in the MRO in turn. These are now
both void functions since they don't have error returns.
- Added object.__setitem__() back -- for the same reason as
object.__new__(): a subclass of object should be able to call
object.__new__().
- add_wrappers() was moved around to be closer to where it is used (it
was defined together with add_methods() etc., but has nothing to do
with these).
Removed all instances of Py_UCS2 from the codebase, and so also (I hope)
the last remaining reliance on the platform having an integral type
with exactly 16 bits.
PyUnicode_DecodeUTF16() and PyUnicode_EncodeUTF16() now read and write
one byte at a time.
bit. For one, this class:
    class C(object):
        def __new__(myclass, ...): ...
would have no way to call the __new__ method of its base class, and
the workaround (to create an intermediate base class whose __new__ you
can call) is ugly.
So, I've come up with a better solution that restores object.__new__,
but still solves the original problem, which is that built-in and
extension types shouldn't inherit object.__new__. The solution is
simple: only "heap types" inherit tp_new. Simpler, less code,
perfect!
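The idiom this restores, sketched (the class name is made up):

    class C(object):
        def __new__(cls):
            # A subclass __new__ can once again defer to object.__new__.
            self = object.__new__(cls)
            self.created_by_new = True
            return self

    print(C().created_by_new)    # -> True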
Previously, f.read() and f.readlines() checked for
errors on their file object and possibly raised an
IOError, but f.readline() didn't. This patch makes
f.readline() behave like the others.
Note that I've added a call to clearerr() since the other calls to
ferror() include that too.
I have no way to test this code. :-)
division. The basic binary operators now all correctly call the
__rxxx__ variant when they should.
In type_new(), I now make the new type a new-style number unless it
inherits from an old-style number that has numeric methods.
By way of cosmetics, I've changed the signatures of the SLOT<i> macros
to take actual function names and operator names as strings, rather
than rely on C preprocessor symbol manipulations. This makes the
calls slightly more verbose, but greatly helps simple searches through
the file: you can now find out where "__radd__" is used or where the
function slot_nb_power() is defined and where it is used.
This introduces:
- A new operator // that means floor division (the kind of division
where 1/2 is 0).
- The "future division" statement ("from __future__ import division)
which changes the meaning of the / operator to implement "true
division" (where 1/2 is 0.5).
- New overloadable operators __truediv__ and __floordiv__.
- New slots in the PyNumberMethods struct for true and floor division,
new abstract APIs for them, new opcodes, and so on.
I emphasize that without the future division statement, the semantics
of / will remain unchanged until Python 3.0.
Not yet implemented are warnings (default off) when / is used with int
or long arguments.
This has been on display since 7/31 as SF patch #443474.
Flames to /dev/null.
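A short sketch of the two operators:

    from __future__ import division    # the "future division" statement

    print(7 // 2)    # floor division -> 3
    print(7 / 2)     # true division  -> 3.5
    print(1 // 2)    # -> 0
    print(1 / 2)     # -> 0.5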
- Add an explicit call to PyType_Ready(&PyList_Type) to pythonrun.c
(just for the heck of it, really -- we should either explicitly
ready all types, or none).
- Add comment blocks explaining add_operators() and override_slots().
(This file could use some more explaining, but this is all I had
breath for today. :)
- Renamed the argument 'base' of add_wrappers() to 'wraps' because
it's not a base class (which is what the 'base' identifier is used
for elsewhere).
Small nits:
- Fix add_tp_new_wrapper() to avoid overwriting an existing __new__
descriptor in tp_defined.
- In add_operators(), check the return value of add_tp_new_wrapper().
Functional change:
- Remove the tp_new functionality from PyBaseObject_Type; this means
you can no longer instantiate the 'object' type. It's only useful
as a base class.
- To make up for the above loss, add tp_new to dynamic types. This
has to be done in a hackish way (after override_slots() has been
called, with an explicit call to add_tp_new_wrapper() at the very
end) because otherwise I ran into recursive calls of slot_tp_new().
Sigh.
And remove all the extern decls in the middle of .c files.
Apparently, it was excluded from the header file because it is
intended for internal use by the interpreter. It's still intended for
internal use and documented as such in the header file.
caused warnings with the VMS C compiler. (SF bug #442998, in part.)
On a narrow system the current code should never be executed since ch
will always be < 0x10000.
Marc-Andre: you may end up fixing this a different way, since I
believe you have plans to generate \U for surrogate pairs. I'll leave
that to you.
particular, str(long) and repr(long) use base 10, and that gets a factor
of 4 speedup). Another factor of 2 can be gotten by refactoring divrem1 to
support in-place division, but that started getting messy so I'm leaving
that out.
raising an error. This was one of the two issues that the VPython
folks found particularly problematic for their students. (The other
one was integer division...) This implements (my) SF patch #440487.
Implement sys.maxunicode.
Explicitly wrap around upper/lower computations for wide Py_UNICODE.
When decoding large characters with UTF-8, represent expected test
results using the \U notation.
Add configure option --enable-unicode.
Add config.h macros Py_USING_UNICODE, PY_UNICODE_TYPE, Py_UNICODE_SIZE,
SIZEOF_WCHAR_T.
Define Py_UCS2.
Encode and decode large UTF-8 characters into single Py_UNICODE values
for wide Unicode types; likewise for UTF-16.
Remove test whether sizeof Py_UNICODE is two.
"mapping" object, specifically one that supports PyMapping_Keys() and
PyObject_GetItem(). This allows you to say e.g. {}.update(UserDict())
We keep the special case for concrete dict objects, although that
seems moderately questionable. OTOH, the code exists and works, so
why change that?
.update()'s docstring already claims that D.update(E) implies calling
E.keys() so it's appropriate not to transform AttributeErrors in
PyMapping_Keys() to TypeErrors.
Patch eyeballed by Tim.
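For instance (UserDict lives in the collections module in current Python):

    from collections import UserDict

    d = {}
    d.update(UserDict(a=1, b=2))    # any object with keys() and getitem will do
    print(d)                        # -> {'a': 1, 'b': 2}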
unicodeobject.h, which forces sizeof(Py_UNICODE) == sizeof(Py_UCS4).
(this may be good enough for platforms that don't have a 16-bit
type. the UTF-16 codecs don't work, though)
the next free valuestack slot, not to the base (in America, stacks push
and pop at the top -- they mutate at the bottom in Australia <wink>).
eval_frame(): assert that f_stacktop isn't NULL upon entry.
frame_dealloc(): avoid ordered pointer comparisons involving f_stacktop
when f_stacktop is NULL.
i_divmod: New and simpler algorithm. Old one returned gibberish on most
boxes when the numerator was -sys.maxint-1. Oddly enough, it worked in the
release (not debug) build on Windows, because the compiler optimized away
some tricky sign manipulations that were incorrect in this case.
Makes you wonder <wink> ...
Bugfix candidate.
Gave Python linear-time repr() implementations for dicts, lists, strings.
This means, e.g., that repr(range(50000)) is no longer 50x slower than
pprint.pprint() in 2.2 <wink>.
I don't consider this a bugfix candidate, as it's a performance boost.
Added _PyString_Join() to the internal string API. If we want that in the
public API, fine, but then it requires runtime error checks instead of
asserts.
is allocated than needed (used to allocate 80 bytes of digit space no
matter how small the long input). This also runs faster, at least on 32-
bit boxes.
Replaced PyLong_{As,From}{Unsigned,}LongLong guts with calls
to _PyLong_{As,From}ByteArray.
_testcapimodule.c:
Added strong tests of PyLong_{As,From}{Unsigned,}LongLong.
Fixes SF bug #432552 PyLong_AsLongLong() problems.
Possible bugfix candidate, but the fix relies on code added to longobject
to support the new q/Q structmodule format codes.
This completes the q/Q project.
longobject.c _PyLong_AsByteArray: The original code had a gross bug:
the most-significant Python digit doesn't necessarily have SHIFT
significant bits, and you really need to count how many copies of the sign
bit it has else spurious overflow errors result.
test_struct.py: This now does exhaustive std q/Q testing at, and on both
sides of, all relevant power-of-2 boundaries, both positive and negative.
NEWS: Added brief dict news while I was at it.
_PyLong_FromByteArray
_PyLong_AsByteArray
Untested and probably buggy -- they compile OK, but nothing calls them
yet. Will soon be called by the struct module, to implement x-platform
'q' and 'Q'.
If other people have uses for them, we could move them into the public API.
See longobject.h for usage details.
case of objects with equal types which support tp_compare. Give
type objects a tp_compare function.
Also add c<0 tests before a few PyErr_Occurred tests.
frequently used, and in particular this allows dropping the last
remaining obvious time-waster in the crucial lookdict() and
lookdict_string() functions. Other changes consist mostly of changing
"i < ma_size" to "i <= ma_mask" everywhere.
be possible to provoke unbounded recursion now, but leaving that to someone
else to provoke and repair.
Bugfix candidate -- although this is getting harder to backstitch, and the
cases it's protecting against are mondo contrived.
code, less memory. Tests have uncovered no drawbacks. Christian and
Vladimir are the other two people who have burned many brain cells on the
dict code in recent years, and they like the approach too, so I'm checking
it in without further ado.
instead of multiplication to generate the probe sequence. The idea is
recorded in Python-Dev for Dec 2000, but that version is prone to rare
infinite loops.
The value is in getting *all* the bits of the hash code to participate;
and, e.g., this speeds up querying every key in a dict with keys
[i << 16 for i in range(20000)] by a factor of 500. Should be equally
valuable in any bad case where the high-order hash bits were getting
ignored.
Also wrote up some of the motivations behind Python's ever-more-subtle
hash table strategy.
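A rough Python sketch of a probe sequence in this style; the constants and
the exact recurrence are illustrative only (see the comments in dictobject.c
for the real thing):

    from itertools import islice

    def probe_sequence(h, mask, perturb_shift=5):
        """Yield candidate slots; all bits of the hash eventually take part."""
        perturb = h
        i = h & mask
        while True:
            yield i
            perturb >>= perturb_shift
            i = (5 * i + perturb + 1) & mask

    # Keys like i << 16 all hash to the same low bits, but the perturb term
    # still spreads them over different slots:
    print(list(islice(probe_sequence(1 << 16, mask=7), 6)))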
resizing.
Accurate timings are impossible on my Win98SE box, but this is obviously
faster even on this box for reasonable list.append() cases. I give
credit for this not to the resizing strategy but to getting rid of integer
multiplication and division (in favor of shifting) when computing the
rounded-up size.
For unreasonable list.append() cases, Win98SE now displays linear behavior
for one-at-time appends up to a list with about 35 million elements. Then
it dies with a MemoryError, due to fatally fragmented *address space*
(there's plenty of VM available, but by this point Win9X has broken user
space into many distinct heaps none of which has enough contiguous space
left to resize the list, and for whatever reason Win9x isn't coalescing
the dead heaps). Before the patch it got a MemoryError for the same
reason, but once the list reached about 2 million elements.
Haven't yet tried on Win2K but have high hopes extreme list.append()
will be much better behaved now (NT & Win2K didn't fragment address space,
but suffered obvious quadratic-time behavior before as lists got large).
For other systems I'm relying on common sense: replacing integer * and /
by << and >> can't plausibly hurt, the number of function calls hasn't
changed, and the total operation count for reasonably small lists is about
the same (while the operations are cheaper now).
dictresize() was too aggressive about never ever resizing small dicts.
If a small dict is entirely full, it needs to rebuild it despite that
it won't actually resize it, in order to purge old dummy entries thus
creating at least one virgin slot (lookdict assumes at least one such
exists).
Also took the opportunity to add some high-level comments to dictresize.
The idea is Marc-Andre Lemburg's, the implementation is Tim's.
Add a new ma_smalltable member to dictobjects, an embedded vector of
MINSIZE (8) dictentry structs. Short course is that this lets us avoid
additional malloc(s) for dicts with no more than 5 entries.
The changes are widespread but mostly small.
Long course: WRT speed, all scalar operations (getitem, setitem, delitem)
on non-empty dicts benefit from no longer needing NULL-pointer checks
(ma_table is never NULL anymore). Bulk operations (copy, update, resize,
clearing slots during dealloc) benefit in some cases from now looping
on the ma_fill count rather than on ma_size, but that was an unexpected
benefit: the original reason to loop on ma_fill was to let bulk
operations on empty dicts end quickly (since the NULL-pointer checks
went away, empty dicts aren't special-cased any more).
Special considerations:
For dicts that remain empty, this change is a lose on two counts:
the dict object contains 8 new dictentry slots now that weren't
needed before, and dict object creation also spends time memset'ing
these doomed-to-be-unused slots to NULLs.
For dicts with one or two entries that never get larger than 2, it's
a mix: a malloc()/free() pair is no longer needed, and the 2-entry case
gets to use 8 slots (instead of 4) thus decreasing the chance of
collision. Against that, dict object creation spends time memset'ing
4 slots that aren't strictly needed in this case.
For dicts with 3 through 5 entries that never get larger than 5, it's a
pure win: the dict is created with all the space they need, and they
never need to resize. Before they suffered two malloc()/free() calls,
plus 1 dict resize, to get enough space. In addition, the 8-slot
table they ended with consumed more memory overall, because of the
hidden overhead due to the additional malloc.
For dicts with 6 or more entries, the ma_smalltable member is wasted
space, but then these are large(r) dicts so 8 slots more or less doesn't
make much difference. They still benefit all the time from removing
ubiquitous dynamic null-pointer checks, and get a small benefit (but
relatively smaller the larger the dict) from not having to do two
mallocs, two frees, and a resize on the way *to* getting their sixth
entry.
All in all it appears a small but definite general win, with larger
benefits in specific cases. It's especially nice that it made it possible
to get rid of several branches, gotos and labels, and overall made the
code smaller.
This should be faster.
This means:
(1) "for line in file:" won't work if the xreadlines module can't be
imported.
(2) The body of "for line in file:" shouldn't use the file directly;
the effects (e.g. of file.readline(), file.seek() or even
file.tell()) would be undefined because of the buffering that goes
on in the xreadlines module.
UTF-16 codec will now interpret and remove a *leading* BOM mark.
Subsequent BOM characters are no longer interpreted and removed.
UTF-16-LE and -BE pass through all BOM mark characters.
These changes should get the UTF-16 codec more in line with what
the Unicode FAQ recommends w/r to BOM marks.
Two exceedingly unlikely errors in dictresize():
1. The loop for finding the new size had an off-by-one error at the
end (could over-index the polys[] vector).
2. The polys[] vector ended with a 0, apparently intended as a sentinel
value but never used as such; i.e., it was never checked, so 0 could
have been used *as* a polynomial.
Neither bug could trigger unless a dict grew to 2**30 slots; since that
would consume at least 12GB of memory just to hold the dict pointers,
I'm betting it's not the cause of the bug Fred's tracking down <wink>.
in the comments for using two passes was bogus, as the only object that
can get decref'ed due to the copy is the dummy key, and decref'ing dummy
can't have side effects (for one thing, dummy is immortal! for another,
it's a string object, not a potentially dangerous user-defined object).
1. Omit the early-out EQ/NE "lengths different?" test. Was unable to find
any real code where it triggered, but it always costs. The same is not
true of list richcmps, where different-size lists appeared to get
compared about half the time.
2. Because tuples are immutable, there's no need to refetch the lengths of
both tuples from memory again on each loop trip.
BUG ALERT: The tuple (and list) richcmp algorithm is arguably wrong,
because it won't believe there's any difference unless Py_EQ returns false
for some corresponding elements:
>>> class C:
...     def __lt__(x, y): return 1
...     __eq__ = __lt__
...
>>> C() < C()
1
>>> (C(),) < (C(),)
0
>>>
That doesn't make sense -- provided you believe the defn. of C makes sense.
and introduces a new method .decode().
The major change is that strg.encode() will no longer try to convert
Unicode returns from the codec into a string, but instead pass along
the Unicode object as-is. The same is now true for all other codec
return types. The underlying C APIs were changed accordingly.
Note that even though this does have the potential of breaking
existing code, the chances are low since conversion from Unicode
previously took place using the default encoding which is normally
set to ASCII, rendering this auto-conversion mechanism useless for
most Unicode encodings.
The good news is that you can now use .encode() and .decode() with
much greater ease and that the door was opened for better accessibility
of the builtin codecs.
As demonstration of the new feature, the patch includes a few new
codecs which allow string to string encoding and decoding (rot13,
hex, zip, uu, base64).
Written by Marc-Andre Lemburg. Copyright assigned to the PSF.
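The same codecs exercised through the codecs module (in 2.2 they were also
reachable directly via str.encode()/.decode(), as described above):

    import codecs

    print(codecs.encode("abc", "rot_13"))    # -> nop
    print(codecs.decode("nop", "rot_13"))    # -> abc
    print(codecs.encode(b"abc", "hex"))      # -> b'616263'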
to reason that me_key is much more likely to match the key we're looking
for than to match dummy, and if the key is absent me_key is much more
likely to be NULL than dummy: most dicts don't even have a dummy entry.
Running instrumented dict code over the test suite and some apps confirmed
that matching dummy was 200-300x less frequent than matching key in
practice. So this reorders the tests to try the common case first.
It can lose if a large dict with many collisions is mostly deleted, not
resized, and then frequently searched, but that's hardly a case we
should be favoring.
The comment following used to say:
/* We use ~hash instead of hash, as degenerate hash functions, such
as for ints <sigh>, can have lots of leading zeros. It's not
really a performance risk, but better safe than sorry.
12-Dec-00 tim: so ~hash produces lots of leading ones instead --
what's the gain? */
That is, there was never a good reason for doing it. And to the contrary,
as explained on Python-Dev last December, it tended to make the *sum*
(i + incr) & mask (which is the first table index examined in case of
(i + incr) & mask (which is the first table index examined in case of
collision) the same "too often" across distinct hashes.
Changing to the simpler "i = hash & mask" reduced the number of string-dict
collisions (== # number of times we go around the lookup for-loop) from about
6 million to 5 million during a full run of the test suite (these are
approximate because the test suite does some random stuff from run to run).
The number of collisions in non-string dicts also decreased, but not as
dramatically.
Note that this may, for a given dict, change the order (wrt previous
releases) of entries exposed by .keys(), .values() and .items(). A number
of std tests suffered bogus failures as a result. For dicts keyed by
small ints, or (less so) by characters, the order is much more likely to be
in increasing order of key now; e.g.,
>>> d = {}
>>> for i in range(10):
...     d[i] = i
...
>>> d
{0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9}
>>>
Unfortunately, people may latch on to that in small examples and draw a
bogus conclusion.
test_support.py
Moved test_extcall's sortdict() into test_support, made it stronger,
and imported sortdict into other std tests that needed it.
test_unicode.py
    Excluded cp875 from the "roundtrip over range(128)" test, because
cp875 doesn't have a well-defined inverse for unicode("?", "cp875").
See Python-Dev for excruciating details.
Cookie.py
    Changed various output functions to sort dicts before building
strings from them.
test_extcall
Fiddled the expected-result file. This remains sensitive to native
dict ordering, because, e.g., if there are multiple errors in a
keyword-arg dict (and test_extcall sets up many cases like that), the
specific error Python complains about first depends on native dict
ordering.
Allow module getattr and setattr to exploit string interning, via the
previously null module object tp_getattro and tp_setattro slots. Yields
a very nice speedup for things like random.random and os.path etc.
For rich comparisons, use instance_getattr2() when possible to avoid
the expense of setting an AttributeError. Also intern the name_op[]
table and use the interned strings rather than creating a new string
and interning it each time through.
doesn't know how to do LE, LT, GE, GT. dict_richcompare can't do the
latter any faster than dict_compare can. More importantly, for
cmp(dict1, dict2), Python *first* tries rich compares with EQ, LT, and
GT one at a time, even if the tp_compare slot is defined, and
dict_richcompare called dict_compare for the latter two because
it couldn't do them itself. The result was a lot of wasted calls to
dict_compare. Now dict_richcompare gives up at once when Python
calls it with LT or GT from try_rich_to_3way_compare(), and dict_compare
is called only once (when Python gets around to trying the tp_compare
slot).
Continued mystery: despite that this cut the number of calls to
dict_compare approximately in half in test_mutants.py, the latter still
runs amazingly slowly. Running under the debugger doesn't show excessive
activity in the dict comparison code anymore, so I'm guessing the culprit
is somewhere else -- but where? Perhaps in the element (key/value)
comparison code? We clearly spend a lot of time figuring out how to
compare things.
Fixed a half dozen ways in which general dict comparison could crash
Python (even cause Win98SE to reboot) in the presence of key and/or
value comparison routines that mutate the dict during dict comparison.
Bugfix candidate.
interned when created, so the cached versions generally aren't ever
interned. With the patch, the
Py_INCREF(t);
*p = t;
Py_DECREF(s);
return;
indirection block in PyString_InternInPlace() is never executed during a
full run of the test suite, but was executed very many times before. So
I'm trading more work when creating one-character strings for doing less
work later. Note that the "more work" here can happen at most 256 times
per program run, so it's trivial. The same reasoning accounts for the
patch's simplification of string_item (the new version can call
PyString_FromStringAndSize() no more than 256 times per run, so there's
no point to inlining that stuff -- if we were serious about saving time
here, we'd pre-initialize the characters vector so that no runtime testing
at all was needed!).
Store floats and doubles to full precision in marshal.
Test that floats read from .pyc/.pyo closely match those read from .py.
Declare PyFloat_AsString() in floatobject header file.
Add new PyFloat_AsReprString() API function.
Document the functions declared in floatobject.h.
d1 == d2 and d1 != d2 now work even if the keys and values in d1 and d2
don't support comparisons other than ==, and testing dicts for equality
is faster now (especially when inequality obtains).