del'ing func.func_dict. I took the opportunity to also clean up some
other nits with the code, namely core dumps when del'ing func_defaults
and KeyError instead of AttributeError when del'ing a non-existent
function attribute.
Specifically,
func_memberlist: Move func_dict and __dict__ into here instead of
special casing them in the setattro and getattro methods. I don't
remember why I took them out of here before I first uploaded the PEP
232 patch. :/
func_getattro(): No need to special case __dict__/func_dict since
they're now in the func_memberlist and PyMember_Get() should Do The
Right Thing (i.e. transforms NULL values into Py_None).
func_setattro(): Document the intended behavior of del'ing or setting
to None one of the special func_* attributes. I.e.:
func_code - can only be set to a code object. It can't be del'd
or set to None.
func_defaults - can be del'd. Can only be set to None or a tuple.
func_dict - can be del'd. Can only be set to None or a
dictionary.
Fix core dumps and incorrect exceptions as described above. Also, if
we're del'ing an arbitrary function attribute but func_dict is NULL,
don't create func_dict before discovering that we'll get an
AttributeError anyway.
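A rough Python 2.1-era sketch of the intended behavior (the attribute
name no_such_attr and the TypeError are assumptions for illustration):
    def f(a=1):
        pass
    del f.func_defaults        # allowed; func_defaults is now None
    f.func_dict = {'note': 1}  # only None or a dictionary is accepted
    try:
        f.func_code = None     # must be a code object; can't be None or del'd
    except TypeError:
        pass
    try:
        del f.no_such_attr     # raises AttributeError now, not KeyError
    except AttributeError:
        pass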
implementation details inside the ucnhash module.
also cleaned up the unicode copyright blurb a little; Secret Labs'
internal revision history isn't that interesting...
Also fixes two long-standing bugs (present in 2.0):
1. .join() didn't check that the result size fit in an int.
2. string.join(s) when len(s)==1 returned s[0] regardless of s[0]'s
type; e.g., "".join([3]) returned 3 (overly optimistic optimization).
I resisted a keen temptation to make .join() apply str() automagically.
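A quick Python sketch of the fixed behavior (the exact exception,
TypeError, is my assumption):
    "".join(["a", "b"])   # -> "ab", as before
    try:
        "".join([3])      # now raises instead of returning 3
    except TypeError:
        pass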
I found where rich comparison of unequal recursive objects gave
unintuitive results. In a discussion with Tim, where we discovered
that our intuition on when a<=b should be true was failing, we decided
to outlaw ordering comparisons on recursive objects. (Once we have
fixed our intuition and designed a matching algorithm that's practical
and reasonable to implement, we can allow such orderings again.)
- Refactored the recursive-object comparison framework; more is now
done in the support routines so less needs to be done in the calling
routines (even at the expense of slowing it down a bit -- this
should normally never be invoked, it's mostly just there to avoid
blowing up the interpreter).
- Changed the framework so that the comparison operator used is also
stored. (The dictionary now stores triples (v, w, op) instead of
pairs (v, w).)
- Changed the nesting limit to a smaller, more reasonable 20; this only
slows down comparisons of very deeply nested objects (unlikely to
occur in practice), while speeding up comparisons of recursive
objects (previously, this would first waste time and space on 500
nested comparisons before it would start detecting recursion).
- Changed rich comparisons for recursive objects to raise a ValueError
exception when recursion is detected for ordering operators (<, <=,
>, >=).
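A minimal Python sketch of the new rule (2.1-era behavior as described
above):
    a = []; a.append(a)     # a recursive list that contains itself
    b = []; b.append(b)
    a == b                  # still allowed; presumed equal once recursion is detected
    try:
        a < b               # ordering of recursive objects now raises ValueError
    except ValueError:
        pass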
Unrelated change:
- Moved PyObject_Unicode() to just under PyObject_Str(), where it
belongs. MAL's patch must've inserted it at a random spot between two
functions in the file -- between two helpers for rich comparison...
exceptions when compared using <, <=, > or >=.
NOTE: This is a tentative change: this means that cmp() involving
complex numbers will raise an exception when the numbers differ, and
that in turn means that e.g. dictionaries and certain other compounds
(e.g. UserLists) containing complex numbers can't be compared either.
So we'll have to decide whether this is acceptable. The alpha test
cycle is a good time to keep an eye on this!
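For illustration (Python 2.x; the exact exception type isn't pinned
down here):
    1j == 2j                # equality comparisons still work
    try:
        cmp(1j, 2j)         # raises once the numbers turn out to differ
    except Exception:       # exception type per the change described above
        pass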
- Use PyObject_RichCompareBool() when comparing keys; this makes the
error handling cleaner.
- There were two implementations for dictionary comparison, an old one
(#ifdef'ed out) and a new one. Got rid of the old one, which was
abandoned years ago.
- In the characterize() function, part of dictionary comparison, use
  PyObject_RichCompareBool() instead of PyObject_Compare() to compare
  keys and values. But continue to use PyObject_Compare() for comparing the final
(deciding) elements.
- Align the comments in the type struct initializer.
Note: I don't implement rich comparison for dictionaries -- there
doesn't seem to be much to be gained. (The existing comparison
already decides that shorter dicts are always smaller than longer
dicts.)
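E.g., with the existing 3-way comparison (Python 2.x):
    cmp({}, {1: 1})            # -> -1; the shorter dict is smaller
    cmp({1: 1, 2: 2}, {1: 1})  # -> 1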
- tuplecontains(): call RichCompare(Py_EQ).
- Get rid of tuplecompare(), in favor of new tuplerichcompare() (a
clone of list_compare()).
- Aligned the comments for large struct initializers.
earlier coercion changes, not by rich comparisons. When a coercion
function returns 1 (meaning it cannot do it), it should not INCREF the
arguments. When no __coerce__() method was found, instance_coerce()
originally returned 0, pretending it did it. Neil changed the return
value to 1, more accurately reflecting that it didn't do anything, but
forgot to take out the two INCREF calls.
- sort's docompare() calls RichCompare(Py_LT).
- list_contains(), list_index(), listcount(), listremove() call
RichCompare(Py_EQ).
- Get rid of list_compare(), in favor of new list_richcompare(). The
latter does some nice shortcuts, like when == or != is requested, it
first compares the lengths for trivial accept/reject. Then it goes
over the items until it finds an index where the items differ; then
it does more shortcut magic to minimize the number of additional
comparisons (see the sketch after this list).
- Aligned the comments for large struct initializers.
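Roughly, the shortcut logic amounts to this Python sketch (names
invented, not the actual C code):
    import operator
    _OPS = {'<': operator.lt, '<=': operator.le, '==': operator.eq,
            '!=': operator.ne, '>': operator.gt, '>=': operator.ge}
    def list_richcompare_sketch(a, b, op):
        # trivial accept/reject for == and != based on the lengths alone
        if op in ('==', '!=') and len(a) != len(b):
            return op == '!='
        # scan for the first index where the items differ
        i = 0
        while i < len(a) and i < len(b) and a[i] == b[i]:
            i = i + 1
        if i >= len(a) or i >= len(b):
            # one list is a prefix of the other: the lengths decide
            return _OPS[op](len(a), len(b))
        if op == '==':
            return False
        if op == '!=':
            return True
        # for ordering, the first differing pair decides
        return _OPS[op](a[i], b[i])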
- Use the compare nesting level and in-progress dictionary properly in
PyObject_RichCompare().
- Change the in-progress code to use static variables instead of
globals (both the nesting level and the key for the thread dict were
globals but have no reason to be globals; the key can even be a
function-static variable in get_inprogress_dict()).
- Rewrote try_rich_to_3way_compare() to benefit from the similarity of
the three cases, making it table-driven.
- In try_rich_to_3way_compare(), test for EQ before LT and GT. This
turns out essential when comparing recursive UserList instances;
with the old code, these would recurse into rich comparison three
times for each nesting level up to NESTING_LIMIT/2, making the total
number of calls on the order of 3**(NESTING_LIMIT/2)!
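Conceptually, the table-driven fallback now does something like this
Python sketch (not the C code; the fallback exception is simplified):
    import operator
    # try EQ first, then LT, then GT; the first true result decides the 3-way outcome
    _TRIES = ((operator.eq, 0), (operator.lt, -1), (operator.gt, 1))
    def three_way_from_rich_sketch(v, w):
        for rich_op, outcome in _TRIES:
            if rich_op(v, w):
                return outcome
        raise TypeError("objects are not comparable")  # simplified fallback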
NOTE: I'm not 100% comfortable with this. It works for the standard
test suite (which compares a few trivial recursive data structures
only), but I'm not sure that the in-progress dictionary is used
properly by the rich comparison code. Jeremy suggested that maybe the
operation should be included in the dict. Currently I presume that
objects in the dict are equal unless proven otherwise, and I set the
outcome for the rich comparison accordingly: true for operators EQ,
LE, GE, and false for the other three. But Jeremy seems to think that
there may be counter-examples where this doesn't do the right thing.
except that it always returns Unicode objects.
A new C API PyObject_Unicode() is also provided.
This closes patch #101664.
Written by Marc-Andre Lemburg. Copyright assigned to Guido van Rossum.
- Got rid of instance_cmp(); refactored instance_compare().
- Added instance_richcompare() which calls __lt__() etc.
Some unrelated stuff mixed in:
- Aligned comments in various large struct initializers.
- Better test to avoid recursion if __coerce__ returns self as the
first argument (this is an unrelated fix by Neil Schemenauer!).
- Style nit: don't use Py_DECREF(Py_NotImplemented); use
Py_DECREF(result) -- it just looks better. :-)
PyObject_RichCompare() and PyObject_RichCompareBool().
XXX Note: the code that checks for deeply nested rich comparisons is
bogus -- it assumes the two objects are always identical, rather than
using the same logic as PyObject_Compare(). I'll fix that later.
simpler if we use fgetpos and fsetpos, rather than trying to mess with
platform-specific TELL64 alternatives.
Of course, this hasn't been tested on a 64-bit platform, so I may have
to withdraw this -- but I'm hopeful, and Trent Mick supports this
patch!
in case the parameters are out of bounds and fixes error handling
for .count(), .startswith() and .endswith() for the case of
mixed string/Unicode objects.
This patch adds Python-style index semantics to PyUnicode_Count()
indices (including the special handling of negative indices).
The patch is an extended version of patch #103249 submitted
by Michael Hudson (mwh) on SF. It also includes new test cases.
Closes SF patch #103123.
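At the Python level these are the usual slicing semantics, e.g.:
    u"banana".count(u"a", -3)      # only the last three characters are searched -> 2
    u"banana".count(u"a", 0, 100)  # out-of-bounds indices are clipped -> 3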
funcobject.h:
PyFunctionObject: add the func_dict slot.
funcobject.c:
PyFunction_New(): Initialize the func_dict slot to NULL.
func_getattr(): Rename to func_getattro() and change the
signature. It's more efficient to use attro methods and dig the C
string out than it is to re-convert a C string to a PyString.
Also, add support for getting the __dict__ (a.k.a. func_dict)
attribute, and for getting an arbitrary function attribute.
func_setattr(): Rename to func_setattro() and change the signature
for the same reason. Also add support for setting __dict__
(a.k.a. func_dict) and any arbitrary function attribute.
func_dealloc(): Be sure to DECREF the func_dict slot.
func_traverse(): Be sure to traverse func_dict too.
PyFunction_Type: make the necessary func_?etattro() changes.
classobject.c:
instancemethod_memberlist: Add __dict__
instancemethod_setattro(): New method to set arbitrary attributes
on methods (really the underlying im_func). Raise TypeError when
the method is bound or when you're trying to set one of the
reserved im_* attributes.
instancemethod_getattr(): Renamed to instancemethod_getattro()
since that's what it really is. Also, added support for getting
arbitrary attributes through the im_func.
PyMethod_Type: Do the ?etattr{,o} dance.
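A sketch of the resulting behavior (Python 2.1-era semantics as
described above; the class and attribute names are made up):
    class C:
        def m(self): pass
    C.m.spam = 1                 # unbound method: stored on the underlying im_func
    assert C.m.im_func.spam == 1
    try:
        C().m.spam = 2           # bound method: TypeError per the above
    except TypeError:
        pass
    try:
        C.m.im_self = None       # the reserved im_* attributes can't be set either
    except TypeError:
        pass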
object.
This fixes potential overflows in xrange()'s internal calculations on
64-bit platforms. The fix is complicated because the sq_length slot
function can only return an int; we want to support
xrange(sys.maxint), which is a 64-bit quantity on most 64-bit
platforms (except Win64). The solution is hacky but the best
possible: when the range is that long, we can use it in a for loop but
we can't ask for its length (nor can we actually iterate beyond
2**31-1, because the sq_item slot function has the same restrictions
on its arguments). Fixing those restrictions is a project for another
day...
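In other words, something like this should now work on a 64-bit
platform (a sketch; the exact exception raised by len() isn't pinned
down here):
    import sys
    r = xrange(sys.maxint)   # construction no longer overflows
    for i in r:
        break                # usable in a for loop ...
    try:
        len(r)               # ... but the length may not fit in an int
    except Exception:        # exact exception type not stated above
        pass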
faster than the other. Should be faster for Mark Favas's 254-character
mail log lines, and *is* 3-4% quicker for my test case with much shorter
lines (but they're typical of *my* text files, and I'm tired of optimizing
for everyone else at my expense <wink> -- in fact, the only one who loses
here is Guido ...).
Tim discovered another "bug" in my get_line() code: while the comments
said that n<0 was invalid, it was in fact still called with n<0 (when
PyFile_GetLine() was called with n<0). In that case it fortunately
executed the same code as for n==0.
Changed the comment to admit this fact, and changed Tim's MS speed
hack code to use 'n <= 0' as the criterion for the speed hack.
code duplication is to let us get away without a realloc whenever possible;
boosted the init buf size (the cutoff at which we *can* get away without
a realloc) from 100 to 200 so that more files can enjoy this boost; and
allowed other threads to run in all cases. The last two cost something,
but not significantly: in my fat test case, less than a 1% slowdown total.
Since my test case has a great many short lines, that's probably the worst
slowdown, too. While the logic barely changed, there were lots of edits.
This also gets rid of the reference to fp->_cnt, so the last platform
assumption being made here is that fgets doesn't overwrite bytes
capriciously (== beyond the terminating null byte it must write).
variant that never needs to "search from the right".
Also fixed an unlikely memory leak in get_line, if string size overflows INT_MAX.
Also new std test test_bufio to make sure .readline() works.
realized that this behavior is already present in PyFile_GetLine(),
which is the only place that needs it. A little refactoring of that
function made get_line_raw() redundant.
The mapping dictionaries can now contain 1-n mappings, meaning
that character ordinals may be mapped to strings or Unicode objects,
e.g. 0x0078 ('x') -> u"abc", causing the ordinal to be replaced by
the complete string or Unicode object instead of just one character.
Another feature introduced by the patch is that of mapping ordinals to
the empty string. This allows removing characters.
The patch is different from patch #103100 in that it does not cause a
performance hit for the normal use case of 1-1 mappings.
Written by Marc-Andre Lemburg, copyright assigned to Guido van Rossum.
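For example, assuming unicode.translate() is among the consumers of
these mapping dictionaries (a Python 2.x sketch; the mapping itself is
made up):
    # 'x' expands to three characters; mapping 'y' to an empty string removes it
    u"xyz".translate({ord(u'x'): u"abc", ord(u'y'): u""})   # -> u"abcz"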
- The raw_input() functionality is moved to a separate function.
- Drop GNU getline() in favor of getc_unlocked(), which exists on more
platforms (and is even a tad faster on my system).
codec to not apply Latin-1 mappings for keys which are not found
in the mapping dictionaries, but instead treat them as undefined
mappings.
The patch was originally written by Martin v. Loewis with some
additional (cosmetic) changes and an updated test script
by Marc-Andre Lemburg.
The standard codecs were recreated from the most current files
available at the Unicode.org site using the Tools/scripts/gencodec.py
tool.
This patch closes the bugs #116285 and #119960.
raise ValueError. Checked in the patch as far as it went, but also changed
all of ints, longs and floats to raise ZeroDivisionError instead when raising
0 to a negative number. This is what 754-inspired stds require, as the "true
result" is an infinity obtained from finite operands, i.e. it's a singularity.
Also changed float pow to not be so timid about using its square-and-multiply
algorithm. Note that what math.pow does is unrelated to what builtin pow
does, and will still vary by platform.
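E.g.:
    try:
        0 ** -1            # ints, longs and floats now all raise ZeroDivisionError here
    except ZeroDivisionError:
        pass
    try:
        0.0 ** -2          # ditto: the "true result" would be an infinity
    except ZeroDivisionError:
        pass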
result-object-pointer that is passed in, when an exception occurs during
coercion. The pointer has to be explicitly initialized in the caller to avoid
putting trash on the Python stack.
#define'd to an unreasonable value (several recent gcc systems have
misdefined it, causing bogus overflows in integer multiplication). Nuke
CHAR_BIT entirely.
after unicode_empty has been freed, otherwise it might not point to
the real start of the unicode_freelist. Final closure for SF bug
#110681, Jitterbug PR#398.
Add definitions of INT_MAX and LONG_MAX to pyport.h.
Remove includes of limits.h and conditional definitions of INT_MAX
and LONG_MAX elsewhere.
This closes SourceForge patch #101659 and bug #115323.
- use unidb compression for the unicodectype module. smaller, faster,
and slightly more portable...
(note: this commit doesn't include the unicodectype.c file itself; I'm
still waiting for the reviewers...)
I fixed the specific complaint but left the (many) large issues untouched.
See the (very long) bug report discussion for why:
http://sourceforge.net/bugs/?func=detailbug&group_id=5470&bug_id=110624
Note that while I left the interface to the undocumented public API function
PyFloat_FromString alone, its 2nd argument is useless. From a comment block
in the code:
RED_FLAG 22-Sep-2000 tim
PyFloat_FromString's pend argument is braindead. Prior to this RED_FLAG,
1. If v was a regular string, *pend was set to point to its terminating
null byte. That's useless (the caller can find that without any
help from this function!).
2. If v was a Unicode string, or an object convertible to a character
buffer, *pend was set to point into stack trash (the auto temp
vector holding the character buffer). That was downright dangerous.
Since we can't change the interface of a public API function, pend is
still supported but now *officially* useless: if pend is not NULL,
*pend is set to NULL.
Note a curious extension to the std C rules: x, X and o formatting can never produce
a sign character in C, so the '+' and ' ' flags are meaningless for them. But
unbounded ints *can* produce a sign character under these conversions (no fixed-
width bitstring is wide enough to hold all negative values in 2's-comp form). So
these flags become meaningful in Python when formatting a Python long which is too
big to fit in a C long. This required shuffling around existing code, which hacked
x and X conversions to death when both the '#' and '0' flags were specified: the
hacks weren't strong enough to deal with the simultaneous possibility of the ' ' or
'+' flags too, since signs were always meaningless before for x and X conversions.
Isomorphic shuffling was required in unicodeobject.c.
Also added dozens of non-trivial new unbounded-int test cases to test_format.py.
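For example (Python 2.x, using a value too big for any C long so the
long formatting path is taken; output not shown):
    print "%+x" % (1L << 70)       # the '+' flag now yields a leading sign
    print "% o" % (1L << 70)       # so does the ' ' flag, here for octal
    print "%#x" % -(1L << 70)      # '#' combines sanely with a negative long's sign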
which implements the automatic conversion from Unicode to a string
object using the default encoding.
The new API is then put to use to have eval() and exec accept
Unicode objects as code parameter. This closes bugs #110924
and #113890.
As side-effect, the traditional C APIs PyString_Size() and
PyString_AsString() will also accept Unicode objects as
parameters.
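For instance (Python 2.x):
    eval(u"1 + 1")         # -> 2; the Unicode source is converted via the default encoding
    exec u"x = 1"          # the exec statement accepts Unicode code, too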
objects for the attribute name. Unicode objects are converted to
a string using the default encoding before trying the lookup.
Note that previously it was allowed to pass arbitrary objects as
attribute name in case the tp_getattro/setattro slots were defined.
This patch fixes this by applying an explicit string check first:
all uses of these slots expect string objects and do not check
the type, which could result in a core dump. The tp_getattro/setattro
slots are still useful as an optimization for lookups using interned
string objects, though.
This patch fixes bug #113829.
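E.g. (Python 2.x):
    class C: pass
    o = C()
    o.spam = 1
    getattr(o, u"spam")    # -> 1; the Unicode name is converted using the default encoding
    setattr(o, u"eggs", 2) # likewise for setattr()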
that Py_INCREF boosts global _Py_RefTotal when Py_REF_DEBUG is defined
but Py_TRACE_REFS isn't.
There are, IMO, way too many preprocessor gimmicks in use for refcount
debugging (at least 3 distinct true/false symbols, but not all 8 combos
are supported by the code, etc etc), and no coherent documentation of
this stuff -- 'twas too painful to track this one down.
all, either to see whether the # of chars fit in an int, or whether the
amount of memory needed fit in a size_t. Checking these is expensive, but
the alternative is silently wrong answers (as in the bug report) or
core dumps (which were easy to provoke using Unicode strings).