Commit Graph

Author SHA1 Message Date
Tim Peters 8c96369513 PyFrameObject: rename f_stackbottom to f_stacktop, since it points to
the next free valuestack slot, not to the base (in America, stacks push
and pop at the top -- they mutate at the bottom in Australia <wink>).
eval_frame():  assert that f_stacktop isn't NULL upon entry.
frame_dealloc():  avoid ordered pointer comparisons involving f_stacktop
when f_stacktop is NULL.
2001-06-23 05:26:56 +00:00
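
A minimal Python sketch of the convention the rename codifies (names are
illustrative; the real code is C macros): the stacktop always names the next
free slot, so a push writes and then advances, and a pop retreats and then reads.

stack = [None] * 8   # pre-allocated value-stack slots
top = 0              # "f_stacktop": index of the next FREE slot, not the base

def push(v):
    global top
    stack[top] = v   # write into the free slot...
    top += 1         # ...then advance past it

def pop():
    global top
    top -= 1         # step back onto the last live value
    return stack[top]

push(42)
assert pop() == 42 and top == 0
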
Tim Peters 5ca576ed0a Merging the gen-branch into the main line, at Guido's direction. Yay!
Bugfix candidate in inspect.py:  it was referencing "self" outside of
a method.
2001-06-18 22:08:13 +00:00
Tim Peters 1dad6a86de SF bug 434186: 0x80000000/2 != 0x80000000>>1
i_divmod:  New and simpler algorithm.  Old one returned gibberish on most
boxes when the numerator was -sys.maxint-1.  Oddly enough, it worked in the
release (not debug) build on Windows, because the compiler optimized away
some tricky sign manipulations that were incorrect in this case.
Makes you wonder <wink> ...
Bugfix candidate.
2001-06-18 19:21:11 +00:00
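
For reference, Python's integer division rounds toward negative infinity, so
the remainder takes the sign of the denominator. A Python sketch of that
invariant, derived from C-style truncating division (the actual algorithm in
i_divmod may differ):

def floor_divmod(x, y):
    # Hypothetical sketch: start from truncating (C-style) division,
    # then correct toward negative infinity when the signs disagree.
    q = abs(x) // abs(y)
    if (x < 0) != (y < 0):
        q = -q
    r = x - q * y
    if r != 0 and (r < 0) != (y < 0):
        q -= 1              # nudge the quotient toward -infinity
        r += y              # remainder now has the denominator's sign
    return q, r

assert floor_divmod(-7, 2) == divmod(-7, 2) == (-4, 1)
# The bug's numerator was the most negative machine int (-sys.maxint-1,
# i.e. -2**31 on a 32-bit box); the old code returned gibberish for it:
assert floor_divmod(-2**31, 2) == (-2**30, 0)
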
Tim Peters 70128a1ba6 PyLong_{As, From}VoidPtr: cleanup, replacing assumptions in comments with
#if/#error constructs.
2001-06-16 08:48:40 +00:00
Tim Peters c605784174 dict_repr: Reuse one of the int vars (minor code simplification). 2001-06-16 07:52:53 +00:00
Tim Peters 52e155e31b Reformat decl of new _PyString_Join. Add NEWS blurb about repr() speedup. 2001-06-16 05:42:57 +00:00
Tim Peters a7259597f1 SF bug 433228: repr(list) woes when len(list) big.
Gave Python linear-time repr() implementations for dicts, lists, strings.
This means, e.g., that repr(range(50000)) is no longer 50x slower than
pprint.pprint() in 2.2 <wink>.

I don't consider this a bugfix candidate, as it's a performance boost.

Added _PyString_Join() to the internal string API.  If we want that in the
public API, fine, but then it requires runtime error checks instead of
asserts.
2001-06-16 05:11:17 +00:00
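
The linear-time trick is the standard one: build one repr per element and
join the pieces once, instead of growing a single string quadratically. A
Python sketch of the shape of it (the C code glues the pieces with the new
_PyString_Join):

def list_repr(lst):
    # One repr per item, one join at the end: O(total output size).
    return '[' + ', '.join([repr(x) for x in lst]) + ']'

assert list_repr([1, 2, 3]) == repr([1, 2, 3])
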
Tim Peters cf37dfc3db Change IS_LITTLE_ENDIAN macro -- a little faster now. 2001-06-14 18:42:50 +00:00
Guido van Rossum ad98db1d9e Fix a mis-indentation in _PyUnicode_New() that caused me to stare at
some code for longer than needed.
2001-06-14 17:52:02 +00:00
Tim Peters ede0509111 _PyLong_AsByteArray: simplify the logic for dealing with the most-
significant digit's sign bits.  Again no change in semantics.
2001-06-14 08:53:38 +00:00
Tim Peters ce9de2f79a PyLong_From{Unsigned,}Long: count the # of digits first, so no more space
is allocated than needed (used to allocate 80 bytes of digit space no
matter how small the long input).  This also runs faster, at least on 32-
bit boxes.
2001-06-14 04:56:19 +00:00
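
A sketch of the counting pass, assuming the 2.2-era 15-bit digit size
(SHIFT); illustrative only:

SHIFT = 15   # bits per long "digit" in 2.2-era longobject.c

def ndigits_needed(n):
    # Count SHIFT-bit digits before allocating, instead of always
    # reserving a fixed worst-case amount of digit space.
    n = abs(n)
    count = 0
    while n:
        n >>= SHIFT
        count += 1
    return count

assert ndigits_needed(1) == 1
assert ndigits_needed(2**15) == 2   # needs a second 15-bit digit
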
Tim Peters f251d06a66 _PyLong_FromByteArray: changed decl of "carry" to match "thisbyte". No
semantic change, but a bit clearer and may help a really stupid compiler
avoid pointless runtime length conversions.
2001-06-13 21:09:15 +00:00
Tim Peters 05607ad4fd _PyLong_AsByteArray: Don't do the "delicate overflow" check unless it's
truly needed; usually saves a little time, but no change in semantics.
2001-06-13 21:01:27 +00:00
Tim Peters 898cf85c25 _PyLong_AsByteArray: added assert that the input is normalized. This is
outside the function's control, but is crucial to correct operation.
2001-06-13 20:50:08 +00:00
Tim Peters 9cb0c38fff PyLong_As{Unsigned,}LongLong: fiddled final result casting. 2001-06-13 20:45:17 +00:00
Tim Peters d1a7da6c0d longobject.c:
    Replaced PyLong_{As,From}{Unsigned,}LongLong guts with calls
    to _PyLong_{As,From}ByteArray.
_testcapimodule.c:
    Added strong tests of PyLong_{As,From}{Unsigned,}LongLong.

Fixes SF bug #432552 PyLong_AsLongLong() problems.
Possible bugfix candidate, but the fix relies on code added to longobject
to support the new q/Q structmodule format codes.
2001-06-13 00:35:57 +00:00
Tim Peters 8bc84b4391 _PyLong_{As,From}ByteArray: Minor code rearrangement aimed at improving
clarity.  Should have no effect visible to callers.
2001-06-12 19:17:03 +00:00
Marc-André Lemburg 8c2133da7b Fix for bug #432384: Recursion in PyString_AsEncodedString? 2001-06-12 13:14:10 +00:00
Tim Peters 7a3bfc3a47 Added q/Q standard (x-platform 8-byte ints) mode in struct module.
This completes the q/Q project.

longobject.c _PyLong_AsByteArray:  The original code had a gross bug:
the most-significant Python digit doesn't necessarily have SHIFT
significant bits, and you really need to count how many copies of the sign
bit it has, else spurious overflow errors result.

test_struct.py:  This now does exhaustive std q/Q testing at, and on both
sides of, all relevant power-of-2 boundaries, both positive and negative.

NEWS:  Added brief dict news while I was at it.
2001-06-12 01:22:22 +00:00
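
For example, standard mode pins q/Q to 8 bytes with explicit byte order on
every platform, and range checks fire exactly at the power-of-2 boundaries
the new tests walk:

import struct

assert struct.pack('>q', -1) == b'\xff' * 8             # fixed 8-byte, big-endian
assert struct.unpack('>Q', b'\xff' * 8)[0] == 2**64 - 1

struct.pack('>q', 2**63 - 1)      # largest signed value: fine
try:
    struct.pack('>q', 2**63)      # one past the boundary: overflow error
except struct.error:
    pass
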
Tim Peters 2a9b367385 Two new private longobject API functions,
    _PyLong_FromByteArray
    _PyLong_AsByteArray
Untested and probably buggy -- they compile OK, but nothing calls them
yet.  Will soon be called by the struct module, to implement x-platform
'q' and 'Q'.
If other people have uses for them, we could move them into the public API.
See longobject.h for usage details.
2001-06-11 21:23:58 +00:00
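
The C pair takes a byte buffer plus little_endian and is_signed flags; the
much later public int.to_bytes()/int.from_bytes() expose the same shape of
conversion and make the intended semantics easy to see (an analogy, not the
C API itself):

n = -1000
buf = n.to_bytes(8, byteorder='little', signed=True)   # cf. little_endian, is_signed
assert int.from_bytes(buf, byteorder='little', signed=True) == n
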
Jack Jansen fcc54cab10 Added a missing cast to the hashfunc initializer. 2001-06-10 21:43:28 +00:00
Martin v. Löwis 0163d6d6ef Patch #424475: Speed-up tp_compare usage, by special-casing the common
case of objects with equal types which support tp_compare. Give
type objects a tp_compare function.
Also add c<0 tests before a few PyErr_Occurred tests.
2001-06-09 07:34:05 +00:00
Marc-André Lemburg 8879a33613 Fixes [ #430986 ] Buglet in PyUnicode_FromUnicode. 2001-06-07 12:26:56 +00:00
Tim Peters afb6ae8452 Store the mask instead of the size in dictobjects. The mask is more
frequently used, and in particular this allows us to drop the last
remaining obvious time-waster in the crucial lookdict() and
lookdict_string() functions.  Other changes consist mostly of changing
"i < ma_size" to "i <= ma_mask" everywhere.
2001-06-04 21:00:21 +00:00
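
Since the table size is always a power of two, the mask is just size - 1,
and slot selection collapses to a single AND; a sketch:

size = 8                        # dict tables are powers of two
mask = size - 1                 # the stored ma_mask
slot = hash('key') & mask       # same slot as hash % size, one AND instead
assert 0 <= slot <= mask        # probe loops test "i <= mask", not "i < size"
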
Tim Peters 453163d842 lookdict: stop more insane core-dump mutating comparison cases. Should
be possible to provoke unbounded recursion now, but leaving that to someone
else to provoke and repair.
Bugfix candidate -- although this is getting harder to backstitch, and the
cases it's protecting against are mondo contrived.
2001-06-03 04:54:32 +00:00
Tim Peters 7b5d0afb1e lookdict: Reduce obfuscating code duplication with a judicious goto.
This code is likely to get even hairier to squash core dumps due to
mutating comparisons, and it's hard enough to follow without that.
2001-06-03 04:14:43 +00:00
Tim Peters 19b77cfc4b Finish the dict->string coredump fix. Need sleep.
Bugfix candidate.
2001-06-02 08:27:39 +00:00
Tim Peters 23cf6be23c Coredumpers from Michael Hudson, mutating dicts while printing or
converting to string.
Critical bugfix candidate -- if you take this seriously <wink>.
2001-06-02 08:02:56 +00:00
Tim Peters f4b33f61fb dict_popitem(): Repaired last-second 2.1 comment, which misidentified the
true reason for allocating the tuple before checking the dict size.
2001-06-02 05:42:29 +00:00
Tim Peters eb28ef209e New collision resolution scheme: no polynomials, simpler, faster, less
code, less memory.  Tests have uncovered no drawbacks.  Christian and
Vladimir are the other two people who have burned many brain cells on the
dict code in recent years, and they like the approach too, so I'm checking
it in without further ado.
2001-06-02 05:27:19 +00:00
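
A Python sketch of the replacement scheme as later described in the
dictobject.c comments: a simple recurrence that gradually folds all the hash
bits into the probe index (constants follow those comments; treat the
details as illustrative):

PERTURB_SHIFT = 5

def probes(h, mask):
    # Start at hash & mask; each collision mixes in more high hash bits.
    perturb = h
    i = h & mask
    while True:
        yield i & mask
        i = 5 * i + perturb + 1
        perturb >>= PERTURB_SHIFT

g = probes(hash('key') & 0xFFFFFFFF, 7)    # masked to unsigned, as in C
first_four = [next(g) for _ in range(4)]   # distinct hashes, distinct orders
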
Jeremy Hylton 9cea41c195 fix bogus indentation 2001-05-29 17:13:15 +00:00
Thomas Wouters 0dcea5973d _PyTuple_Resize: guard against PyTuple_New() returning NULL, using Tim's
suggestion (modulo style).
2001-05-29 07:58:45 +00:00
Tim Peters 4324aa3572 Cruft cleanup: Removed the unused last_is_sticky argument from the internal
_PyTuple_Resize().
2001-05-28 22:30:08 +00:00
Thomas Wouters 6a922372ad _PyTuple_Resize: take into account the empty tuple. There can be only one.
Instead of raising a SystemError, just create a new tuple of the desired
size.

This fixes (at least) SF bug #420343.
2001-05-28 13:11:02 +00:00
Tim Peters 15d4929ae4 Implement an old idea of Christian Tismer's: use polynomial division
instead of multiplication to generate the probe sequence.  The idea is
recorded in Python-Dev for Dec 2000, but that version is prone to rare
infinite loops.

The value is in getting *all* the bits of the hash code to participate;
and, e.g., this speeds up querying every key in a dict with keys
 [i << 16 for i in range(20000)] by a factor of 500.  Should be equally
valuable in any bad case where the high-order hash bits were getting
ignored.

Also wrote up some of the motivations behind Python's ever-more-subtle
hash table strategy.
2001-05-27 07:39:22 +00:00
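
The pathology is easy to demonstrate: small integers hash to themselves, so
keys like i << 16 have all-zero low bits, and a plain hash & mask sends
every one of them to the same slot unless the high bits get to participate:

mask = 7                                        # a small table, for illustration
keys = [i << 16 for i in range(20000)]
assert all(hash(k) & mask == 0 for k in keys)   # everyone collides in slot 0
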
Tim Peters 1af03e98d9 Change list.extend() error msgs and NEWS to reflect that list.extend()
now takes any iterable argument, not only sequences.

NEEDS DOC CHANGES -- but I don't think we settled on a concise way to
say this stuff.
2001-05-26 19:37:54 +00:00
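
For example, a bare iterator is now acceptable:

l = [1]
l.extend(iter([2, 3]))    # any iterable works now, not only sequences
assert l == [1, 2, 3]
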
Tim Peters 442914d265 Cruft cleanup: removed the #ifdef'ery in support of compiling to allow
multi-argument list.append(1, 2, 3) (as opposed to .append((1,2,3))).
2001-05-26 05:50:03 +00:00
Tim Peters 65b8b84839 roundupsize() and friends: fiddle over-allocation strategy for list
resizing.

Accurate timings are impossible on my Win98SE box, but this is obviously
faster even on this box for reasonable list.append() cases.  I give
credit for this not to the resizing strategy but to getting rid of integer
multiplication and division (in favor of shifting) when computing the
rounded-up size.

For unreasonable list.append() cases, Win98SE now displays linear behavior
for one-at-time appends up to a list with about 35 million elements.  Then
it dies with a MemoryError, due to fatally fragmented *address space*
(there's plenty of VM available, but by this point Win9X has broken user
space into many distinct heaps none of which has enough contiguous space
left to resize the list, and for whatever reason Win9x isn't coalescing
the dead heaps).  Before the patch it got a MemoryError for the same
reason, but once the list reached about 2 million elements.

Haven't yet tried on Win2K but have high hopes extreme list.append()
will be much better behaved now (NT & Win2K didn't fragment address space,
but suffered obvious quadratic-time behavior before as lists got large).

For other systems I'm relying on common sense:  replacing integer * and /
by << and >> can't plausibly hurt, the number of function calls hasn't
changed, and the total operation count for reasonably small lists is about
the same (while the operations are cheaper now).
2001-05-26 05:28:40 +00:00
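
A Python sketch of a shift-only round-up in the spirit of the patch (the
real thresholds and granularity in listobject.c may differ):

def roundupsize(n):
    # Hypothetical sketch: chunk granularity grows with n, using only shifts.
    nbits = 0
    n2 = n >> 5
    while n2:                 # bigger lists get coarser (bigger) chunks
        n2 >>= 3
        nbits += 3
    return ((n >> nbits) + 1) << nbits

assert roundupsize(6) == 7          # small lists: fine-grained
assert roundupsize(1000) == 1024    # big lists: rounded to a coarse chunk
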
Martin v. Löwis cd35306a25 Patch #424335: Implement string_richcompare, remove string_compare.
Use new _PyString_Eq in lookdict_string.
2001-05-24 16:56:35 +00:00
Tim Peters f8a548c23c dictresize(): Rebuild small tables if there are any dummies, not just if
they're entirely full.  Not a question of correctness, but of temporarily
misplaced common sense.
2001-05-24 16:26:40 +00:00
Tim Peters 0c6010be75 Jack Jansen hit a bug in the new dict code, reported on python-dev.
dictresize() was too aggressive about never ever resizing small dicts.
If a small dict is entirely full, it must be rebuilt even though it
won't actually be resized, in order to purge old dummy entries and thus
create at least one virgin slot (lookdict assumes at least one such
exists).

Also took the opportunity to add some high-level comments to dictresize.
2001-05-23 23:33:57 +00:00
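
A sketch of why the virgin slot matters: lookdict's miss path only stops
probing when it hits a never-used slot, so a table whose free slots are all
dummies must be rebuilt even at the same size. A toy model:

UNUSED, DUMMY = object(), object()   # slots: active key, deleted, never used

def rebuild(table):
    # Re-insert only active entries; dummies vanish, restoring UNUSED slots
    # (the real code re-hashes each key rather than copying positions).
    fresh = [UNUSED] * len(table)
    for i, slot in enumerate(table):
        if slot is not UNUSED and slot is not DUMMY:
            fresh[i] = slot
    return fresh

full = ['a', DUMMY, 'b', DUMMY, 'c', DUMMY, 'd', DUMMY]   # no UNUSED slot!
assert any(s is UNUSED for s in rebuild(full))            # misses terminate again
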
Fred Drake 0c23231f6e Remove unused variable. 2001-05-22 22:36:52 +00:00
Tim Peters dea48ec581 SF patch #425242: Patch which "inlines" small dictionaries.
The idea is Marc-Andre Lemburg's, the implementation is Tim's.
Add a new ma_smalltable member to dictobjects, an embedded vector of
MINSIZE (8) dictentry structs.  Short course is that this lets us avoid
additional malloc(s) for dicts with no more than 5 entries.

The changes are widespread but mostly small.

Long course:  WRT speed, all scalar operations (getitem, setitem, delitem)
on non-empty dicts benefit from no longer needing NULL-pointer checks
(ma_table is never NULL anymore).  Bulk operations (copy, update, resize,
clearing slots during dealloc) benefit in some cases from now looping
on the ma_fill count rather than on ma_size, but that was an unexpected
benefit:  the original reason to loop on ma_fill was to let bulk
operations on empty dicts end quickly (since the NULL-pointer checks
went away, empty dicts aren't special-cased any more).

Special considerations:

For dicts that remain empty, this change is a lose on two counts:
the dict object contains 8 new dictentry slots now that weren't
needed before, and dict object creation also spends time memset'ing
these doomed-to-be-unused slots to NULLs.

For dicts with one or two entries that never get larger than 2, it's
a mix:  a malloc()/free() pair is no longer needed, and the 2-entry case
gets to use 8 slots (instead of 4) thus decreasing the chance of
collision.  Against that, dict object creation spends time memset'ing
4 slots that aren't strictly needed in this case.

For dicts with 3 through 5 entries that never get larger than 5, it's a
pure win:  the dict is created with all the space they need, and they
never need to resize.  Before they suffered two malloc()/free() calls,
plus 1 dict resize, to get enough space.  In addition, the 8-slot
table they ended with consumed more memory overall, because of the
hidden overhead due to the additional malloc.

For dicts with 6 or more entries, the ma_smalltable member is wasted
space, but then these are large(r) dicts so 8 slots more or less doesn't
make much difference.  They still benefit all the time from removing
ubiquitous dynamic null-pointer checks, and get a small benefit (but
relatively smaller the larger the dict) from not having to do two
mallocs, two frees, and a resize on the way *to* getting their sixth
entry.

All in all it appears a small but definite general win, with larger
benefits in specific cases.  It's especially nice that it allowed us to
get rid of several branches, gotos and labels, and overall made the
code smaller.
2001-05-22 20:40:22 +00:00
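
A sketch of the resulting allocation rule, assuming MINSIZE of 8 and the
usual 2/3 fill limit:

MINSIZE = 8   # dictentry slots embedded in every dict (ma_smalltable)

def needs_malloc(nslots):
    # The embedded table covers any table of up to MINSIZE slots, i.e.
    # up to 5 entries at the 2/3 fill limit; only bigger tables allocate.
    return nslots > MINSIZE

assert not needs_malloc(MINSIZE)   # <= 5 entries: no malloc at all
assert needs_malloc(2 * MINSIZE)   # a sixth entry forces a real allocation
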
Guido van Rossum 5b021848ac file_getiter(): make iter(file) be equivalent to file.xreadlines().
This should be faster.

This means:

(1) "for line in file:" won't work if the xreadlines module can't be
    imported.

(2) The body of "for line in file:" shouldn't use the file directly;
    the effects (e.g. of file.readline(), file.seek() or even
    file.tell()) would be undefined because of the buffering that goes
    on in the xreadlines module.
2001-05-22 16:48:37 +00:00
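
In other words (file name hypothetical):

f = open('data.txt')
# Equivalent to "for line in f.xreadlines():" -- and per the caveats above,
# the body must not touch f.readline(), f.seek() or f.tell().
for line in f:
    pass
f.close()
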
Guido van Rossum 0ba9e3ac27 init_name_op(): add (void) to the argument list to make it a valid
prototype, for gcc -Wstrict-prototypes.
2001-05-22 02:33:08 +00:00
Marc-André Lemburg 489b56e044 This patch changes the behaviour of the UTF-16 codec family. Only the
UTF-16 codec will now interpret and remove a *leading* BOM mark.
Subsequent BOM characters are no longer interpreted and removed.
UTF-16-LE and -BE pass through all BOM mark characters.

These changes should get the UTF-16 codec more in line with what
the Unicode FAQ recommends with regard to BOM marks.
2001-05-21 20:30:15 +00:00
Tim Peters 91a364df17 Bugfix candidate.
Two exceedingly unlikely errors in dictresize():
1. The loop for finding the new size had an off-by-one error at the
   end (could over-index the polys[] vector).
2. The polys[] vector ended with a 0, apparently intended as a sentinel
   value but never used as such; i.e., it was never checked, so 0 could
   have been used *as* a polynomial.
Neither bug could trigger unless a dict grew to 2**30 slots; since that
would consume at least 12GB of memory just to hold the dict pointers,
I'm betting it's not the cause of the bug Fred's tracking down <wink>.
2001-05-19 07:04:38 +00:00
Tim Peters 1928314ef4 Speed dictresize by collapsing its two passes into one; the reason given
in the comments for using two passes was bogus, as the only object that
can get decref'ed due to the copy is the dummy key, and decref'ing dummy
can't have side effects (for one thing, dummy is immortal!  for another,
it's a string object, not a potentially dangerous user-defined object).
2001-05-17 22:25:34 +00:00
Tim Peters d7ed3bf552 Speed tuple comparisons in two ways:
1. Omit the early-out EQ/NE "lengths different?" test.  Was unable to find
   any real code where it triggered, but it always costs.  The same is not
   true of list richcmps, where different-size lists appeared to get
   compared about half the time.
2. Because tuples are immutable, there's no need to refetch the lengths of
   both tuples from memory again on each loop trip.

BUG ALERT:  The tuple (and list) richcmp algorithm is arguably wrong,
because it won't believe there's any difference unless Py_EQ returns false
for some corresponding elements:

>>> class C:
...     def __lt__(x, y): return 1
...     __eq__ = __lt__
...
>>> C() < C()
1
>>> (C(),) < (C(),)
0
>>>

That doesn't make sense -- provided you believe the defn. of C makes sense.
2001-05-15 20:12:59 +00:00
Marc-André Lemburg 2d9204199f This patch changes the way the string .encode() method works slightly
and introduces a new method .decode().

The major change is that strg.encode() will no longer try to convert
Unicode returns from the codec into a string, but instead pass along
the Unicode object as-is. The same is now true for all other codec
return types. The underlying C APIs were changed accordingly.

Note that even though this does have the potential of breaking
existing code, the chances are low since conversion from Unicode
previously took place using the default encoding which is normally
set to ASCII rendering this auto-conversion mechanism useless for
most Unicode encodings.

The good news is that you can now use .encode() and .decode() with
much greater ease and that the door was opened for better accessibility
of the builtin codecs.

As demonstration of the new feature, the patch includes a few new
codecs which allow string-to-string encoding and decoding (rot13,
hex, zip, uu, base64).

Written by Marc-Andre Lemburg. Copyright assigned to the PSF.
2001-05-15 12:00:02 +00:00
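
A taste of the new string-to-string codecs, in 2.2-era interactive form
(Python 2 behavior; these particular codecs later moved or changed in
Python 3):

>>> "hello".encode('rot13')
'uryyb'
>>> "abc".encode('hex')
'616263'
>>> '616263'.decode('hex')
'abc'
>>>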