The dictionary size comparison was using ma_size, the hash table size,
which is always a power of two, rather than ma_used, which changes on
each insertion or deletion. Fixed this.
Generalized list() to no longer insist that len(seq) be defined.
NEEDS DOC CHANGES.
This is meant to be a model for how other functions of this ilk (max,
filter, etc) can be generalized similarly. Feel encouraged to grab your
favorite and convert it!
Note some cute consequences:
list(file) == file.readlines() == list(file.xreadlines())
list(dict) == dict.keys()
list(dict.iteritems()) == dict.items()
list(xrange(i, j, k)) == range(i, j, k)
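For instance, list() now accepts anything supporting the iterator
protocol; a minimal sketch (Counter is a made-up class):

>>> class Counter:
...     def __init__(self, n):
...         self.i, self.n = 0, n
...     def __iter__(self):
...         return self
...     def next(self):
...         if self.i >= self.n:
...             raise StopIteration
...         self.i = self.i + 1
...         return self.i
...
>>> list(Counter(3))        # no __len__ anywhere in sight
[1, 2, 3]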
Even when an object's type didn't define tp_print, there were still
cases where the full "print uses str() which falls back to repr()"
semantics weren't honored. This resulted in
>>> print None
<None object at 0x80bd674>
>>> print type(u'')
<type object at 0x80c0a80>
Fixed this by always using the appropriate PyObject_Repr() or
PyObject_Str() call, rather than trying to emulate what they would do.
Also simplified PyObject_Str() to always fall back on PyObject_Repr()
when tp_str is not defined (rather than making an extra check for
instances with a __str__ method). And got rid of the special case for
strings.
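The net effect at the Python level, as a minimal sketch (Plain is a
made-up class defining only __repr__):

>>> class Plain:
...     def __repr__(self):
...         return '<Plain repr>'
...
>>> print Plain()           # no __str__, so print falls back on repr()
<Plain repr>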
Given a directory containing
Spam.py
spam/__init__.py
Then "import Spam" caused a SystemError, because code checking for
the existence of "Spam/__init__.py" finds it on a case-insensitive
filesystem, but then bails because the directory it finds it in
doesn't match case, and then old code assumed that was still an error
even though it isn't anymore. Changed the code to just continue
looking in this case (instead of calling it an error). So
import Spam
and
import spam
both work now.
Also a 2.1 bugfix candidate (am I supposed to do something with those?).
Took away map()'s insistence that sequences support __len__, and cleaned
up the convoluted code that made it *look* like it really cared about
__len__ (in fact the old ->len field was only *used* as a flag bit, as
the main loop only looked at its sign bit, setting the field to -1 when
IndexError got raised; renamed the field to ->saw_IndexError instead).
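For example, a minimal sketch (Squares is a made-up sequence class with
no __len__):

>>> class Squares:
...     def __getitem__(self, i):
...         if i >= 3:
...             raise IndexError
...         return i * i
...
>>> map(None, Squares())    # the old map() would have demanded __len__
[0, 1, 4]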
Patch #419651: Metrowerks on Mac adds 0x itself
C std says %#x and %#X conversion of 0 do not add the 0x/0X base marker.
Metrowerks apparently does. Mark Favas reported the same bug under a
Compaq compiler on Tru64 Unix, but no other libc broken in this respect
is known (known to be OK under MSVC and gcc).
So just try the damn thing at runtime and see what the platform does.
Note that we've always had bugs here, but never knew it before because
a relevant test case didn't exist before 2.1.
Fix a very old flaw in PyObject_Print(). Amazing! When an object
type defines tp_str but not tp_repr, 'print x' to a real file
object would not call the tp_str slot but rather print a default-style
representation: <foo object at 0x....>, even though 'print x' to a
file-like object would correctly call the tp_str slot.
The new test case demonstrates the bug. Be more careful in
symtable_resolve_free() to add a var to cells or frees only if it
won't be added under some other rule.
XXX Add new assertion that will catch this bug.
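For context, the cells/frees distinction in Python terms (a hedged
sketch; nested_scopes is a future feature in 2.1):

>>> from __future__ import nested_scopes
>>> def f():
...     x = 1               # x is a cell variable in f ...
...     def g():
...         return x        # ... and a free variable in g
...     return g()
...
>>> f()
1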
Added tests for the namespace_separator argument of ParserCreate().
Added assignment tests for the ordered_attributes and specified_attributes
values, similar to the checks for the returns_unicode attribute.
Allowing an empty namespace_separator, while not generally a good idea,
is used by RDF users, and works to implement RDF-style
namespace+localname concatenation as defined in the RDF specifications.
(This also corrects a backwards-compatibility bug.)
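A minimal sketch of that RDF-style trick (the element name and URI are
made up):

>>> import xml.parsers.expat
>>> def start(name, attrs):
...     print name
...
>>> p = xml.parsers.expat.ParserCreate(namespace_separator='')
>>> p.StartElementHandler = start
>>> p.Parse('<doc xmlns="http://example.org/ns#"/>', 1)
http://example.org/ns#doc
1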
Be more conservative while clearing out handlers; set the slot in the
self->handlers array to NULL before DECREFing the callback, since the
DECREF can run arbitrary destructor code that could otherwise see a
stale pointer in the array.
Still more adjustments to make the code style internally consistent.
Fixed obviously bad regexps, spotted by Jeffery Collins.
HELP! I can't run this on Windows, and the module test() function
probably doesn't work on anyone's box. Could a Unixoid please write
an at least minimal working test and add it to the std test suite?
Applied a patch for sharing single character Unicode objects.
Martin's patch had to be reworked in a number of ways to take Unicode
resizing into consideration as well. Here's what the updated patch
implements:
* Single character Unicode strings in the Latin-1 range are shared
(not only ASCII chars as in Martin's original patch).
* The ASCII and Latin-1 codecs make use of this optimization,
providing a noticeable speedup for single character strings. Most
Unicode methods can use the optimization as well (by virtue
of using PyUnicode_FromUnicode()).
* Some code cleanup was done (replacing memcpy with Py_UNICODE_COPY).
* PyUnicode_Resize() can now also handle resizing unicode_empty,
which previously resulted in an error.
* Modified the internal API _PyUnicode_Resize() and
the public PyUnicode_Resize() API to handle references to
shared objects correctly. The _PyUnicode_Resize() signature
changed due to this.
* Callers of PyUnicode_FromUnicode() may now only modify the contents
of the returned object if they called the API with NULL as the
content template.
Note that even though this patch passes the regression tests, there
may still be subtle bugs in the sharing code.
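A quick way to observe the sharing from Python (assuming a build with
this patch applied):

>>> unichr(200) is unichr(200)      # Latin-1 range: shared object
1
>>> unichr(2000) is unichr(2000)    # outside Latin-1: fresh object each time
0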
Mondo changes to the iterator stuff, without changing how Python code
sees it (test_iter.py is unchanged).
- Added a tp_iternext slot, which calls the iterator's next() method;
this is much faster for built-in iterators over built-in types
such as lists and dicts, speeding up pybench's ForLoop by about
25% compared to Python 2.1. (Now there's a good argument for
iterators. ;-)
- Renamed the built-in sequence iterator SeqIter, affecting the C API
functions for it. (This frees up the PyIter prefix for generic
iterator operations.)
- Added PyIter_Check(obj), which checks that obj's type has a
tp_iternext slot and that the proper feature flag is set.
- Added PyIter_Next(obj) which calls the tp_iternext slot. It has a
somewhat complex return condition due to the need for speed: when it
returns NULL, it may not have set an exception condition, meaning
the iterator is exhausted; when the exception StopIteration is set
(or a derived exception class), it means the same thing; any other
exception means some other error occurred.
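At the Python level, the protocol that tp_iternext wraps is unchanged;
for reference:

>>> it = iter([1, 2])
>>> it.next()
1
>>> it.next()
2
>>> it.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
StopIteration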