Commit Graph

1798 Commits

Author SHA1 Message Date
Walter Dörwald 5c1ee17742 Change the unicode.translate docstring to document that
Unicode strings (with arbitrary length) are allowed
as entries in the unicode.translate mapping.

Add a test case for multicharacter replacements.

(Multicharacter replacements were enabled by the
PEP 293 patch)
2002-09-04 20:31:32 +00:00
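A minimal illustration of the documented behaviour, using the modern str.translate (Python 2's unicode.translate accepts the same kind of mapping): values may be strings of arbitrary length, or None to delete.

    table = {ord("a"): "alpha", ord("b"): None, ord("c"): "C"}
    print("abcabc".translate(table))   # alphaCalphaC -- 'b' deleted, 'a' expanded
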
Guido van Rossum efae8862fe In doc strings, use 'k in D' rather than D.has_key(k). 2002-09-04 11:29:45 +00:00
Skip Montanaro d581d7792b replace thread state objects' ticker and checkinterval fields with two
globals, _Py_Ticker and _Py_CheckInterval.  This also implements Jeremy's
shortcut in Py_AddPendingCall that zeroes out _Py_Ticker.  This allows the
test in the main loop to only test a single value.

The gory details are at

    http://python.org/sf/602191
2002-09-03 20:10:45 +00:00
Walter Dörwald 8709a420c4 Check whether a string resize is necessary at the end
of PyString_DecodeEscape(). This prevents a call to
_PyString_Resize() for the empty string, which would
result in a PyErr_BadInternalCall(), because the
empty string has more than one reference.

This closes SF bug http://www.python.org/sf/603937
2002-09-03 13:53:40 +00:00
Walter Dörwald 3aeb632c31 PEP 293 implementation (from SF patch http://www.python.org/sf/432401) 2002-09-02 13:14:32 +00:00
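PEP 293 adds registrable codec error handlers; a minimal Python-level sketch (the handler name "underscore" is an example chosen here, not part of the patch):

    import codecs

    def underscore_errors(exc):
        # replace each unencodable character with an underscore and resume
        return ("_" * (exc.end - exc.start), exc.end)

    codecs.register_error("underscore", underscore_errors)
    print(u"caf\xe9".encode("ascii", "underscore"))   # b'caf_'
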
Raymond Hettinger 29a6d449ef Added comparison functions to dict proxies.
Now all non-mutating dict methods are in the proxy also.
Inspired by SF bug #602232.
2002-08-31 15:51:04 +00:00
Neal Norwitz d94c28e467 SF #561244: micro optimizations, builtins cannot be NULL, so use Py_INCREF 2002-08-29 20:25:46 +00:00
Raymond Hettinger 604cd6ae79 complex() was the only numeric constructor that created a new instance
when given its own type as an argument.
2002-08-29 14:22:51 +00:00
Guido van Rossum bf935fde15 string_contains(): speed up by avoiding function calls where
possible.  This always called PyUnicode_Check() and PyString_Check(),
at least one of which would call PyType_IsSubtype().  Also, this would
call PyString_Size() on known string objects.
2002-08-24 06:57:49 +00:00
Guido van Rossum 6248f441ea Speedup for PyObject_IsTrue(): check for True and False first.
Because all built-in tests return bools now, this is the most common
path!
2002-08-24 06:31:34 +00:00
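A rough Python-level analogue of the fast path described above (the real change is inside the C function PyObject_IsTrue; this sketch only mirrors the idea):

    def is_true(obj):
        if obj is True:        # most comparison results are already bools
            return True
        if obj is False:
            return False
        return bool(obj)       # fall back to the general truth-testing protocol
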
Guido van Rossum 81912d4764 Speedup for PyObject_RichCompareBool(): PyObject_RichCompare() almost
always returns a bool, so avoid calling PyObject_IsTrue() in that
case.
2002-08-24 05:33:28 +00:00
Guido van Rossum 2023c9b84a Fix SF bug 599128, submitted by Inyeol Lee: .replace() would do the
wrong thing for a unicode subclass when there were zero string
replacements.  The example given in the SF bug report was only one way
to trigger this; replacing a string of length >= 2 that's not found is
another.  The code would actually write outside allocated memory if
the replacement string was longer than the search string.

(I wonder how many more of these are lurking?  The unicode code base
is full of wonders.)

Bugfix candidate; this same bug is present in 2.2.1.
2002-08-23 18:50:21 +00:00
Guido van Rossum 8b1a6d694f Code by Inyeol Lee, submitted to SF bug 595350, to implement
the string/unicode method .replace() with a zero-length first argument.
Inyeol contributed tests for this too.
2002-08-23 18:21:28 +00:00
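The behaviour being implemented: an empty search string matches at every position, including both ends.

    print("abc".replace("", "-"))   # -a-b-c-
    print("".replace("", "-"))      # -
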
Tim Peters 0d2d87d202 long_format(), long_lshift(): Someone on c.l.py is trying to boost
SHIFT and MASK, and widen digit.  One problem is that code of the form

    digit << small_integer

implicitly assumes that the result fits in an int or unsigned int
(platform-dependent, but "int sized" in any case), since digit is
promoted "just" to int or unsigned via the usual integer promotions.
But if digit is typedef'ed as unsigned int, this loses information.
The cure for this is just to cast digit to twodigits first.
2002-08-20 19:00:22 +00:00
Guido van Rossum 76afbd9aa4 Fix some endcase bugs in unicode rfind()/rindex() and endswith().
These were reported and fixed by Inyeol Lee in SF bug 595350.  The
endswith() bug was already fixed in 2.3, but this adds some more test
cases.
2002-08-20 17:29:29 +00:00
Tim Peters 75585d4ec1 getinstclassname(): Squash new compiler wng in assert (comparison of
signed vs unsigned).
2002-08-20 14:31:35 +00:00
Guido van Rossum 45ec02aed1 SF patch 576101, by Oren Tirosh: alternative implementation of
interning.  I modified Oren's patch significantly, but the basic idea
and most of the implementation is unchanged.  Interned strings created
with PyString_InternInPlace() are now mortal, and you must keep a
reference to the resulting string around; use the new function
PyString_InternImmortal() to create immortal interned strings.
2002-08-19 21:43:18 +00:00
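The Python-level view of interning (intern() in Python 2, sys.intern() in Python 3); as with the mortal interned strings described above, you keep a reference to the result:

    import sys

    a = sys.intern("dynamically " + "built string")
    b = sys.intern("dynamically built string")
    print(a is b)   # True: both names refer to the single interned copy
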
Guido van Rossum e3a8e7ed1d Call me anal, but there was a particular phrase that was spreading to
comments everywhere that bugged me: /* Foo is inlined */ instead of
/* Inline Foo */.  Somehow the "is inlined" phrase always confused me
for half a second (thinking, "No it isn't" until I added the missing
"here").  The new phrase is hopefully unambiguous.
2002-08-19 19:26:42 +00:00
Guido van Rossum 056fbf422d Another modest speedup in PyObject_GenericGetAttr(): inline the call
to _PyType_Lookup().
2002-08-19 19:22:50 +00:00
Guido van Rossum 492b46f29e Make PyDescr_IsData() a macro. It's too simple to be a function.
Should save 4% on slot lookups.
2002-08-19 18:45:37 +00:00
Michael W. Hudson 69734a5272 Check in my ultra-shortlived patch #597220.
Move some debugging checks inside Py_DEBUG.

They were causing cache misses according to cachegrind.
2002-08-19 16:54:08 +00:00
Guido van Rossum c66ff4441e Inline call to _PyObject_GetDictPtr() in PyObject_GenericGetAttr().
This causes a modest speedup.
2002-08-19 16:50:48 +00:00
Guido van Rossum c588e9041a Simple but important optimization for descr_check(): instead of the
expensive and overly general PyObject_IsInstance(), call
PyObject_TypeCheck() which is a macro that often avoids a call, and if
it does make a call, calls the much more efficient PyType_IsSubtype().
This saved 6% on a benchmark for slot lookups.
2002-08-19 16:02:33 +00:00
Neal Norwitz b898d9fc9a Get this to compile again if Py_USING_UNICODE is not defined.
com_error() is static in Python/compile.c.
2002-08-16 23:20:39 +00:00
Guido van Rossum 84b2bed435 Squash a few calls to the hideously expensive PyObject_CallObject(o,a)
-- replace them with the slightly faster PyObject_Call(o,a,NULL).  (The
difference is that the latter requires a to be a tuple; the former
allows other values and wraps them in a tuple if necessary; it
involves two more levels of C function calls to accomplish all that.)
2002-08-16 17:01:09 +00:00
Guido van Rossum 8e829200b1 Fix SF bug 595838 -- buffer in type_new() should not be static. Moved
to inner scope, too.
2002-08-16 03:47:49 +00:00
Tim Peters e417de0e56 Illustrating by example one good reason not to trust a proof <wink>. 2002-08-15 20:10:45 +00:00
Tim Peters ab86c2be24 k_mul() comments: In honor of Dijkstra, made the proof that "t3 fits"
rigorous instead of hoping for testing not to turn up counterexamples.
Call me heretical, but despite that I'm wholly confident in the proof,
and have done it two different ways now, I still put more faith in
testing ...
2002-08-15 20:06:00 +00:00
Tim Peters 9973d74b2d long_mul(): Simplified exit code. In particular, k_mul() returns a
normalized result, so no point to normalizing it again.  The number
of test+branches was also excessive.
2002-08-15 19:41:06 +00:00
Michael W. Hudson dd32a91cc0 This is my patch
[ 587993 ] SET_LINENO killer

Remove SET_LINENO.  Tracing is now supported by inspecting co_lnotab.

Many sundry changes to document and adapt to this change.
2002-08-15 14:59:02 +00:00
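With SET_LINENO gone, line numbers come from the code object's line table (co_lnotab at the time). dis.findlinestarts() decodes it:

    import dis

    def sample():
        x = 1
        y = 2
        return x + y

    # (bytecode offset, line number) pairs recovered from the line table
    print(list(dis.findlinestarts(sample.__code__)))
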
Jeremy Hylton 8b73542cf5 Reflow long lines. 2002-08-14 21:01:41 +00:00
Guido van Rossum 54df53a352 More changes of DeprecationWarning to FutureWarning. 2002-08-14 18:38:27 +00:00
Guido van Rossum 323a9cfc83 PyType_Ready(): initialize the base class a bit earlier, so that if we
copy the metatype from the base, the base actually has one!
2002-08-14 17:26:30 +00:00
Tim Peters 48d52c0fcc k_mul() comments: Simplified the simplified explanation of why ah*bh and
al*bl "always fit":  it's actually trivial given what came before.
2002-08-14 17:07:32 +00:00
Tim Peters 8e966ee49a k_mul() comments: Explained why there's always enough room to subtract
ah*bh and al*bl.  This is much easier than explaining why that's true
for (ah+al)*(bh+bl), and follows directly from the simple part of the
(ah+al)*(bh+bl) explanation.
2002-08-14 16:36:23 +00:00
Martin v. Löwis eb3f00aeeb Check for trailing backslash. Fixes #593656. 2002-08-14 08:22:50 +00:00
Martin v. Löwis 8a8da798a5 Patch #505705: Remove eval in pickle and cPickle. 2002-08-14 07:46:28 +00:00
Neal Norwitz 5dc2a37f0f Allow more docstrings to be removed during compilation 2002-08-13 22:19:13 +00:00
Tim Peters cba6e96929 Fixed error in new comment. 2002-08-13 20:42:00 +00:00
Tim Peters d6974a54ab k_mul(): The fix for (ah+al)*(bh+bl) spilling 1 bit beyond the allocated
space is no longer needed, so removed the code.  It was only possible when
a degenerate (ah->ob_size == 0) split happened, but after that fix went
in I added k_lopsided_mul(), which saves the body of k_mul() from seeing
a degenerate split.  So this removes code, and adds a honking long comment
block explaining why spilling out of bounds isn't possible anymore.  Note:
If we end up spilling out of bounds anyway <wink>, an assert in v_iadd()
is certain to trigger.
2002-08-13 20:37:51 +00:00
Neal Norwitz d47714a727 Allow docstrings to be removed during compilation for *SLOT macro and friends 2002-08-13 19:01:38 +00:00
Neal Norwitz 858e34f649 Allow docstrings to be removed during compilation 2002-08-13 17:18:45 +00:00
Guido van Rossum 4571e9d42a Add an improvement wrinkle to Neil Schemenauer's change to int_mul
(rev. 2.86).  The other type is only disqualified from sq_repeat when
it has the CHECKTYPES flag.  This means that for extension types that
only support "old-style" numeric ops, such as Zope 2's ExtensionClass,
sq_repeat still trumps nb_multiply.
2002-08-13 10:05:56 +00:00
Guido van Rossum d8c8048f5e Fix comment for PyLong_AsUnsignedLong() to say that the return value
is an *unsigned* long.
2002-08-13 00:24:58 +00:00
Tim Peters 1203403743 k_lopsided_mul(): This allocated more space for bslice than necessary. 2002-08-12 22:10:00 +00:00
Tim Peters 6000464d08 Added new function k_lopsided_mul(), which is much more efficient than
k_mul() when inputs have vastly different sizes, and a little more
efficient when they're close to a factor of 2 out of whack.

I consider this done now, although I'll set up some more correctness
tests to run overnight.
2002-08-12 22:01:34 +00:00
Tim Peters 547607c4bf k_mul(): Moved an assert down. In a debug build, interrupting a
multiply via Ctrl+C could cause a NULL-pointer dereference due to
the assert.
2002-08-12 19:43:49 +00:00
Tim Peters 70b041bbe7 k_mul(): Heh -- I checked in two fixes for the last problem. Only keep
the good one <wink>.  Also checked in a test-aid by mistake.
2002-08-12 19:38:01 +00:00
Tim Peters d8b2173ef9 k_mul(): White-box testing turned up that (ah+al)*(bh+bl) can, in rare
cases, overflow the allocated result object by 1 bit.  In such cases,
it would have been brought back into range if we subtracted al*bl and
ah*bh from it first, but I don't want to do that because it hurts cache
behavior.  Instead we just ignore the excess bit when it appears -- in
effect, this is forcing unsigned mod BASE**(asize + bsize) arithmetic
in a case where that doesn't happen all by itself.
2002-08-12 19:30:26 +00:00
Guido van Rossum 3747a0f04c Fix MSVC warnings. 2002-08-12 19:25:08 +00:00
Guido van Rossum ad47da072a Refactor how __dict__ and __weakref__ interact with __slots__.
1. You can now have __dict__ and/or __weakref__ in your __slots__
   (before only __weakref__ was supported).  This is treated
   differently than before: it merely sets a flag that the object
   should support the corresponding magic.

2. Dynamic types now always have descriptors __dict__ and __weakref__
   thrust upon them.  If the type in fact does not support one or the
   other, that descriptor's __get__ method will raise AttributeError.

3. (This is the reason for all this; it fixes SF bug 575229, reported
   by Cesar Douady.)  Given this code:
      class A(object): __slots__ = []
      class B(object): pass
      class C(A, B): __slots__ = []
   the class object for C was broken; its size was less than that of
   B, and some descriptors on B could cause a segfault.  C now
   correctly inherits __weakref__ and __dict__ from B, even though A
   is the "primary" base (C.__base__ is A).

4. Some code cleanup, and a few comments added.
2002-08-12 19:05:44 +00:00
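Point 1 above in action (a sketch; class names are illustrative): naming __dict__ in __slots__ restores arbitrary instance attributes, while plain __slots__ still forbids them.

    class WithDict(object):
        __slots__ = ("x", "__dict__")

    class NoDict(object):
        __slots__ = ("x",)

    w = WithDict()
    w.anything = 1      # allowed: the __dict__ entry re-enables an instance dict
    n = NoDict()
    try:
        n.anything = 1
    except AttributeError as exc:
        print(exc)      # only 'x' is assignable
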
Tim Peters 115c888b97 x_mul(): Made life easier for C optimizers in the "grade school"
algorithm.  MSVC 6 wasn't impressed <wink>.

Something odd:  the x_mul algorithm appears to get substantially worse
than quadratic time as the inputs grow larger:

bits in each input   x_mul time   k_mul time
------------------   ----------   ----------
             15360         0.01         0.00
             30720         0.04         0.01
             61440         0.16         0.04
            122880         0.64         0.14
            245760         2.56         0.40
            491520        10.76         1.23
            983040        71.28         3.69
           1966080       459.31        11.07

That is, x_mul is perfectly quadratic-time until a little burp at
2.56->10.76, and after that goes to hell in a hurry.  Under Karatsuba,
doubling the input size "should take" 3 times longer instead of 4, and
that remains the case throughout this range.  I conclude that my "be nice
to the cache" reworkings of k_mul() are paying off.
2002-08-12 18:25:43 +00:00
Tim Peters d64c1def7c k_mul() and long_mul(): I'm confident that the Karatsuba algorithm is
correct now, so added some final comments, did some cleanup, and enabled
it for all long-int multiplies.  The KARAT envar no longer matters,
although I left some #if 0'ed code in there for my own use (temporary).
k_mul() is still much slower than x_mul() if the inputs have very
different sizes, and that still needs to be addressed.
2002-08-12 17:36:03 +00:00
Tim Peters 738eda742c k_mul: Rearranged computation for better cache use. Ignored overflow
(it's possible, but should be harmless -- this requires more thought,
and allocating enough space in advance to prevent it requires exactly
as much thought, to know exactly how much that is -- the end result
certainly fits in the allocated space -- hmm, but that's really all
the thought it needs!  borrows/carries out of the high digits really
are harmless).
2002-08-12 15:08:20 +00:00
Tim Peters 44121a6bc9 x_mul(): This failed to normalize its result.
k_mul():  This didn't allocate enough result space when one input had
more than twice as many bits as the other.  This was partly hidden by
that x_mul() didn't normalize its result.

The Karatsuba recurrence is pretty much hosed if the inputs aren't
roughly the same size.  If one has at least twice as many bits as the
other, we get a degenerate case where the "high half" of the smaller
input is 0.  Added a special case for that, for speed, but despite that
it helped, this can still be much slower than the "grade school" method.
It seems to take a really wild imbalance to trigger that; e.g., a
2**22-bit input times a 1000-bit input on my box runs about twice as slow
under k_mul as under x_mul.  This still needs to be addressed.

I'm also not sure that allocating a->ob_size + b->ob_size digits is
enough, given that this is computing k = (ah+al)*(bh+bl) instead of
k = (ah-al)*(bl-bh); i.e., it's certainly enough for the final result,
but it's vaguely possible that adding in the "artificially" large k may
overflow that temporarily.  If so, an assert will trigger in the debug
build, but we'll probably compute the right result anyway(!).
2002-08-12 06:17:58 +00:00
Tim Peters 877a212678 Introduced helper functions v_iadd and v_isub, for in-place digit-vector
addition and subtraction.  Reworked the tail end of k_mul() to use them.
This saves oodles of one-shot longobject allocations (this is a triply-
recursive routine, so saving one allocation in the body saves 3**n
allocations at depth n; we actually save 2 allocations in the body).
2002-08-12 05:09:36 +00:00
Tim Peters fc07e56844 k_mul(): Repaired another typo in another comment. 2002-08-12 02:54:10 +00:00
Tim Peters 18c15b9bbd k_mul(): Repaired typo in comment. 2002-08-12 02:43:58 +00:00
Tim Peters 5af4e6c739 Cautious introduction of a patch that started from
SF 560379:  Karatsuba multiplication.
Lots of things were changed from that.  This needs a lot more testing,
for correctness and speed, the latter especially when bit lengths are
unbalanced.  For now, the Karatsuba code gets invoked if and only if
envar KARAT exists.
2002-08-12 02:31:19 +00:00
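For reference, a small pure-Python sketch of the Karatsuba split these k_mul() commits discuss; it illustrates the algorithm, not CPython's C implementation, and the 64-bit cutoff is arbitrary.

    def karatsuba(x, y):
        # x*y = ah*bh*2**(2n) + ((ah+al)*(bh+bl) - ah*bh - al*bl)*2**n + al*bl
        if x < (1 << 64) or y < (1 << 64):
            return x * y
        n = max(x.bit_length(), y.bit_length()) // 2
        ah, al = x >> n, x & ((1 << n) - 1)
        bh, bl = y >> n, y & ((1 << n) - 1)
        t1 = karatsuba(ah, bh)
        t2 = karatsuba(al, bl)
        t3 = karatsuba(ah + al, bh + bl) - t1 - t2
        return (t1 << (2 * n)) + (t3 << n) + t2

Three recursive multiplications replace the four of the grade-school split, which is where the sub-quadratic running time comes from.
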
Tim Peters da1a2212c8 int_lshift(): Simplified/sped overflow-checking. 2002-08-11 17:54:42 +00:00
Guido van Rossum 643d59cbd6 Use a better check for overflow from a<<b. 2002-08-11 14:04:13 +00:00
Marc-André Lemburg cc8764ca9d Add C API PyUnicode_FromOrdinal() which exposes unichr() at C level.
u'%c' will now raise a ValueError in case the argument is an
integer outside the valid range of Unicode code point ordinals.

Closes SF bug #593581.
2002-08-11 12:23:04 +00:00
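The Python-level counterpart of that range check (unichr() in Python 2, chr() in Python 3):

    print(chr(0x263A))    # a one-character string for a valid code point
    try:
        chr(0x110000)     # beyond the range of Unicode code point ordinals
    except ValueError as exc:
        print(exc)
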
Guido van Rossum 078151da90 Implement stage B0 of PEP 237: add warnings for operations that
currently return inconsistent results for ints and longs; in
particular: hex/oct/%u/%o/%x/%X of negative short ints, and x<<n that
either loses bits or changes sign.  (No warnings for repr() of a long,
though that will also change to lose the trailing 'L' eventually.)

This introduces some warnings in the test suite; I'll take care of
those later.
2002-08-11 04:24:12 +00:00
Tim Peters 3ddb856ed1 Fixed new typos, added a little info about ~sort versus "hint"s. 2002-08-10 07:04:01 +00:00
Guido van Rossum 40af889081 Disallow class assignment completely unless both old and new are heap
types.  This prevents nonsense like 2.__class__ = bool or
True.__class__ = int.
2002-08-10 05:42:07 +00:00
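What the check disallows, at the Python level:

    try:
        (2).__class__ = bool
    except TypeError as exc:
        print(exc)

    try:
        True.__class__ = int
    except TypeError as exc:
        print(exc)
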
Tim Peters e05f65a0c6 1. Combined the base and length arrays into a single array of structs.
This is friendlier for caches.

2. Cut MIN_GALLOP to 7, but added a per-sort min_gallop vrbl that adapts
   the "get into galloping mode" threshold higher when galloping isn't
   paying, and lower when it is.  There's no known case where this hurts.
   It's (of course) neutral for /sort, \sort and =sort.  It also happens
   to be neutral for !sort.  It cuts a tiny # of compares in 3sort and +sort.
   For *sort, it reduces the # of compares to better than what this used to
   do when MIN_GALLOP was hardcoded to 10 (it did about 0.1% more *sort
   compares before, but given how close we are to the limit, this is "a
   lot"!).  %sort used to do about 1.5% more compares, and ~sort about
   3.6% more.  Here are exact counts:

 i    *sort    3sort    +sort    %sort    ~sort    !sort
15   449235    33019    33016    51328   188720    65534  before
     448885    33016    33007    50426   182083    65534  after
      0.08%    0.01%    0.03%    1.79%    3.65%    0.00%  %ch from after

16   963714    65824    65809   103409   377634   131070
     962991    65821    65808   101667   364341   131070
      0.08%    0.00%    0.00%    1.71%    3.65%    0.00%

17  2059092   131413   131362   209130   755476   262142
    2057533   131410   131361   206193   728871   262142
      0.08%    0.00%    0.00%    1.42%    3.65%    0.00%

18  4380687   262440   262460   421998  1511174   524286
    4377402   262437   262459   416347  1457945   524286
      0.08%    0.00%    0.00%    1.36%    3.65%    0.00%

19  9285709   524581   524634   848590  3022584  1048574
    9278734   524580   524633   837947  2916107  1048574
      0.08%    0.00%    0.00%    1.27%    3.65%    0.00%

20 19621118  1048960  1048942  1715806  6045418  2097150
   19606028  1048958  1048941  1694896  5832445  2097150
      0.08%    0.00%    0.00%    1.23%    3.65%    0.00%

3. Added some key asserts I overlooked before.

4. Updated the doc file.
2002-08-10 05:21:15 +00:00
Tim Peters b80595f44a The samplesort-vs-mergesort #-of-comparisons comparisons were captured
before %sort was introduced.  Redid them (the numbers change, but the
conclusions don't).  Also did the samplesort counts with the released
2.2.1, as they're slightly different under the last CVS 2.3 samplesort
(some higher, some lower -- CVS had been changed to stop doing the
special-case business on recursive samplesort calls).
2002-08-10 03:04:33 +00:00
Fred Drake f16c3dc81b Add support for the iterator protocol to weakref proxy objects.
Part of fixing SF bug #591704.
2002-08-09 18:34:16 +00:00
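What iterator-protocol support means at the Python level: iterating a weakref.proxy forwards to the referent (class and data here are illustrative).

    import weakref

    class Bag(object):
        def __init__(self, items):
            self.items = items
        def __iter__(self):
            return iter(self.items)

    bag = Bag([1, 2, 3])
    proxy = weakref.proxy(bag)
    print(list(proxy))   # [1, 2, 3], as long as 'bag' is still alive
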
Guido van Rossum f36921c4b0 Unicode replace() method with empty pattern argument should fail, like
it does for 8-bit strings.
2002-08-09 15:36:48 +00:00
Neil Schemenauer 3bc3f28dbe Only call sq_repeat if the object does not have a nb_multiply slot. One
example of where this changes behavior is when a new-style instance
defines '__mul__' and '__rmul__' and is multiplied by an int.  Before the
change the '__rmul__' method is never called, even if the int is the
left operand.
2002-08-09 15:20:48 +00:00
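The behaviour change, seen from Python (class name illustrative): the int's sq_repeat no longer pre-empts the instance's __rmul__.

    class Scaled(object):
        def __mul__(self, other):
            return ("__mul__", other)
        def __rmul__(self, other):
            return ("__rmul__", other)

    print(3 * Scaled())   # ('__rmul__', 3): the left-hand int now defers to __rmul__
    print(Scaled() * 3)   # ('__mul__', 3)
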
Tim Peters 671764beb0 Repaired a braino in the description of bad minrun values. 2002-08-09 05:06:44 +00:00
Guido van Rossum 721f62e200 Major speedup for new-style class creation. Turns out there was some
trampolining going on with the tp_new descriptor, where the inherited
PyType_GenericNew was overwritten with the much slower slot_tp_new
which would end up calling tp_new_wrapper which would eventually call
PyType_GenericNew.  Add a special case for this to update_one_slot().

XXX Hope there isn't a loophole in this.  I'll buy the first person to
point out a bug in the reasoning a beer.

Backport candidate (but I won't do it).
2002-08-09 02:14:34 +00:00
Raymond Hettinger 48923c5533 Moved special case for tuples from iterobject.c to
tupleobject.c. Makes the code in iterobject.c cleaner
and speeds up the general case by not checking for
tuples every time.  SF Patch #592065.
2002-08-09 01:30:17 +00:00
Guido van Rossum 7bed213224 Significant speedup in new-style object creation: in slot_tp_new(),
intern the string "__new__" so we can call PyObject_GetAttr() rather
than PyObject_GetAttrString().  (Though it's a mystery why slot_tp_new
is being called when a class doesn't define __new__.  I'll look into
that tomorrow.)

2.2 backport candidate (but I won't do it).
2002-08-08 21:57:53 +00:00
Guido van Rossum febd61dc02 A modest speedup of object deallocation. call_finalizer() did rather
a lot of work: it had to save and restore the current exception around
a call to lookup_maybe(), because that could fail in rare cases, and
most objects don't have a __del__ method, so the whole exercise was
usually a waste of time.  Changed this to cache the __del__ method in
the type object just like all other special methods, in a new slot
tp_del.  So now subtype_dealloc() can test whether tp_del is NULL and
skip the whole exercise if it is.  The new slot doesn't need a new
flag bit: subtype_dealloc() is only called if the type was dynamically
allocated by type_new(), so it's guaranteed to have all current slots.
Types defined in C cannot fill in tp_del with a function of their own,
so there's no corresponding "wrapper".  (That functionality is already
available through tp_dealloc.)
2002-08-08 20:55:20 +00:00
Tim Peters 6c511e6d1c Added info about highwater heap-memory use for the sortperf.py tests; + a
couple of minor edits elsewhere.
2002-08-08 01:55:16 +00:00
Tim Peters 6063e2615f PyList_Reverse(): This was leaking a reference to Py_None on every call.
I believe I introduced this bug when I refactored the reversal code so
that the mergesort could use it too.  It's not a problem on the 2.2 branch.
2002-08-08 01:06:39 +00:00
Guido van Rossum 0906e07442 Fix a subtle bug in the trashcan code I added yesterday to
subtype_dealloc().

When call_finalizer() failed, it would return without going through
the trashcan end macro, thereby unbalancing the trashcan nesting level
counter, and thereby defeating the test case (slottrash() in
test_descr.py).  This in turn meant that the assert in the GC_UNTRACK
macro wasn't triggered by the slottrash() test despite a bug in the
code: _PyTrash_destroy_chain() calls the dealloc routine with an
object that's untracked, and the assert in the GC_UNTRACK macro would
fail on this; but an earlier test resurrects an object, causing
call_finalizer() to fail and the trashcan nesting level to become
unbalanced, so _PyTrash_destroy_chain() was never called.
Calling the slottrash() test in isolation *did* trigger the assert,
however.

So the fix is twofold: (1) call the GC_UnTrack() function instead of
the GC_UNTRACK macro, because the function is safe when the object is
already untracked; (2) when call_finalizer() fails, jump to a label
that exits through the trashcan end macro, keeping the trashcan
nesting balanced.
2002-08-07 20:42:09 +00:00
Martin v. Löwis 3f19b10ca5 Replace abort with Py_FatalError. 2002-08-07 16:21:51 +00:00
Neal Norwitz 657d222700 Make more functions static 2002-08-06 22:12:52 +00:00
Neal Norwitz d8b995f5e8 Make readahead functions static 2002-08-06 21:50:54 +00:00
Guido van Rossum 22b1387c51 Fix SF bug 574207 (chained __slots__ dealloc segfault).
This is inspired by SF patch 581742 (by Jonathan Hogg, who also
submitted the bug report, and two other suggested patches), but
separates the non-GC case from the GC case to avoid testing for GC
several times.

Had to fix an assert() from call_finalizer() that asserted that the
object wasn't untracked, because it's possible that the object isn't
GC'ed!
2002-08-06 21:41:44 +00:00
Barry Warsaw 6a043f3fe8 PyUnicode_Contains(): The memcmp() call didn't take into account the
width of Py_UNICODE.  Good catch, MAL.
2002-08-06 19:03:17 +00:00
Barry Warsaw 817918cc3c Committing patch #591250 which provides "str1 in str2" when str1 is a
string longer than 1 character.
2002-08-06 16:58:21 +00:00
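The new capability: the left operand of 'in' may be longer than one character.

    print("lo wo" in "hello world")   # True
    print("low" in "hello world")     # False
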
Guido van Rossum 7a6e95948c SF patch 580331 by Oren Tirosh: make file objects their own iterator.
For a file f, iter(f) now returns f (unless f is closed), and f.next()
is similar to f.readline() when EOF is not reached; however, f.next()
uses a readahead buffer that messes up the file position, so mixing
f.next() and f.readline() (or other methods) doesn't work right.
Calling f.seek() drops the readahead buffer, but other operations
don't.

The real purpose of this change is to reduce the confusion between
objects and their iterators.  By making a file its own iterator, it's
made clearer that using the iterator modifies the file object's state
(in particular the current position).

A nice side effect is that this speeds up "for line in f:" by not
having to use the xreadlines module.  The f.xreadlines() method is
still supported for backwards compatibility, though it is the same as
iter(f) now.

(I made some cosmetic changes to Oren's code, and added a test for
"file closed" to file_iternext() and file_iter().)
2002-08-06 15:55:28 +00:00
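The user-visible effect (the file name is hypothetical): a file object is its own iterator.

    with open("example.txt") as f:
        print(iter(f) is f)       # True
        for line in f:            # no xreadlines() needed
            print(line.rstrip())
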
Raymond Hettinger bc552ce1b8 SF 582071 clarified the .split() method's docstring to note that sep=None
will trigger splitting on any whitespace.
2002-08-05 06:28:21 +00:00
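The documented behaviour: sep=None (the default) splits on runs of arbitrary whitespace and ignores leading/trailing whitespace, unlike an explicit separator.

    print("  one\t two\nthree  ".split())      # ['one', 'two', 'three']
    print("  one\t two\nthree  ".split(" "))   # ['', '', 'one\t', 'two\nthree', '', '']
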
Tim Peters 66860f6da4 Sped the usual case for sorting by calling PyObject_RichCompareBool
directly when no comparison function is specified.  This saves a layer
of function call on every compare then.  Measured speedups:

 i    2**i  *sort  \sort  /sort  3sort  +sort  %sort  ~sort  =sort  !sort
15   32768  12.5%   0.0%   0.0% 100.0%   0.0%  50.0% 100.0% 100.0% -50.0%
16   65536   8.7%   0.0%   0.0%   0.0%   0.0%   0.0%  12.5%   0.0%   0.0%
17  131072   8.0%  25.0%   0.0%  25.0%   0.0%  14.3%   5.9%   0.0%   0.0%
18  262144   6.3% -10.0%  12.5%  11.1%   0.0%   6.3%   5.6%  12.5%   0.0%
19  524288   5.3%   5.9%   0.0%   5.6%   0.0%   5.9%   5.4%   0.0%   2.9%
20 1048576   5.3%   2.9%   2.9%   5.1%   2.8%   1.3%   5.9%   2.9%   4.2%

The best indicators are those that take significant time (larger i), and
where sort doesn't do very few compares (so *sort and ~sort benefit most
reliably).  The large numbers are due to roundoff noise combined with
platform variability; e.g., the 14.3% speedup for %sort at i=17 reflects
a printed elapsed time of 0.18 seconds falling to 0.17, but a change in
the last digit isn't really meaningful (indeed, if it really took 0.175
seconds, one electron having a lazy nanosecond could shift it to either
value <wink>).  Similarly the 25% at 3sort i=17 was a meaningless change
from 0.05 to 0.04.  However, almost all the "meaningless changes" were
in the same direction, which is good.  The before-and-after times for
*sort are clearest:

before after
  0.18  0.16
  0.25  0.23
  0.54  0.50
  1.18  1.11
  2.57  2.44
  5.58  5.30
2002-08-04 17:47:26 +00:00
Tim Peters 6bdbc9e0b1 SF bug 590366: Small typo in listsort:ParseTuple
The PyArg_ParseTuple() error string still said "msort".  Changed to "sort".
2002-08-03 02:28:24 +00:00
Guido van Rossum f4be427c46 Tim found that once test_longexp has run, test_sort takes very much
longer to run than normal.  A profiler run showed that this was due to
PyFrame_New() taking up an unreasonable amount of time.  A little
thinking showed that this was due to the while loop clearing the space
available for the stack.  The solution is to only clear the local
variables (and cells and free variables), not the space available for
the stack, since anything beyond the stack top is considered to be
garbage anyway.  Also, use memset() instead of a while loop counting
backwards.  This should be a time savings for normal code too!  (By a
probably unmeasurable amount. :-)
2002-08-01 18:50:33 +00:00
Guido van Rossum 0dbab4c560 SF patch 588728 (Nathan Srebro).
The __delete__ method wrapper for descriptors was not supported

(I added a test, too.)

2.2 bugfix candidate.
2002-08-01 14:39:25 +00:00
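The slot being wired up, sketched at the Python level (descriptor and class names are illustrative): del obj.attr is routed through the descriptor's __delete__.

    class Managed(object):
        def __get__(self, obj, objtype=None):
            return obj._value
        def __set__(self, obj, value):
            obj._value = value
        def __delete__(self, obj):
            del obj._value

    class C(object):
        attr = Managed()

    c = C()
    c.attr = 42
    del c.attr                  # calls Managed.__delete__
    print(hasattr(c, "attr"))   # False
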
Tim Peters a64dc245ac Replaced samplesort with a stable, adaptive mergesort. 2002-08-01 02:13:36 +00:00
Tim Peters 92f81f2e63 Checking in the doc file for "timsort". There's way too much here to
stuff into code comments, and lots of it is going to be useful again (but
hard to predict exactly which parts of it ...).
2002-08-01 00:59:42 +00:00
Neal Norwitz cee5ca060b SF patch #587889, fix memory leak of tp_doc 2002-07-30 00:42:06 +00:00
Michael W. Hudson 56796f672f Fix for
[ 587875 ] crash on deleting extended slice

The array code got simpler, always a good thing!
2002-07-29 14:35:04 +00:00
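The operation that used to crash: deleting an extended (stepped) slice.

    data = list(range(10))
    del data[::2]
    print(data)   # [1, 3, 5, 7, 9]
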
Mark Hammond a290527376 Excise DL_IMPORT/EXPORT from object.h, and related files. This patch
also adds 'extern' to PyAPI_DATA rather than at each declaration, as
discussed with Tim and Guido.
2002-07-29 13:42:14 +00:00
Neal Norwitz 88fe4ff5a9 Fix the problem of not raising a TypeError exception when doing:
    '%g' % '1'
    '%d' % '1'

Add a test for these conditions.
Fix the test so that if no exception is raised, this is a failure.
2002-07-28 16:44:23 +00:00
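The error that is now raised (message wording varies by version):

    try:
        '%d' % '1'
    except TypeError as exc:
        print(exc)
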
Martin v. Löwis 673c0a2247 Patch #574867: Correct list.extend docstring. 2002-07-28 16:35:57 +00:00
Neal Norwitz 7beeed5dfd SF patch #577031, remove PyArg_Parse() since it's deprecated 2002-07-28 15:19:47 +00:00
Martin v. Löwis 75d2d94e0f Patch #554716: Use __va_copy where available. 2002-07-28 10:23:27 +00:00
Skip Montanaro 35b37a5c11 tighten up the unicode object's docstring a tad 2002-07-26 16:22:46 +00:00
Jeremy Hylton 73a088e3fa Don't be so hasty. If PyInt_AsLong() raises an error, don't set ValueError. 2002-07-25 16:43:29 +00:00
Jeremy Hylton f20fcf9fed Complain if __len__() returns < 0, just like classic classes.
Fixes SF bug #575773.

Bug fix candidate.
2002-07-25 16:06:15 +00:00
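What the complaint looks like from Python; modern CPython raises ValueError here, though the exact exception in this era may have differed.

    class Broken(object):
        def __len__(self):
            return -1

    try:
        len(Broken())
    except ValueError as exc:
        print(exc)
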
Michael W. Hudson 206d8f818f Silly typo. Not sure how that got in. 2002-07-19 15:52:38 +00:00
Michael W. Hudson f0d777c56b A few days ago, Guido said (in the thread "[Python-Dev] Python
version of PySlice_GetIndicesEx"):

> OK.  Michael, if you want to check in indices(), go ahead.

Then I did what was needed, but didn't check it in.  Here it is.
2002-07-19 15:47:06 +00:00
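slice.indices() clips a slice to a given sequence length and returns (start, stop, step):

    print(slice(None, None, -1).indices(5))   # (4, -1, -1)
    print(slice(2, 100).indices(5))           # (2, 5, 1)
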
Tim Peters 330f9e9581 More sort cleanup: Moved the special cases from samplesortslice into
listsort.  If the former calls itself recursively, they're a waste of
time, since it's called on a random permutation of a random subset of
elements.  OTOH, for exactly the same reason, they're an immeasurably
small waste of time (the odds of finding exploitable order in a random
permutation are ~= 0, so the special-case loops looking for order give
up quickly).  The point is more for conceptual clarity.
Also changed some "assert comments" into real asserts; when this code
was first written, Python.h didn't supply assert.h.
2002-07-19 07:05:44 +00:00
Tim Peters 0fe977c4a9 binarysort() cleanup: Documented the key invariants, explained why they
imply this is a stable sort, and added some asserts.
2002-07-19 06:12:32 +00:00
Tim Peters 326b44871e listreverse(): Don't call the new reverse_slice unless the list
has something in it (else ob_item may be a NULL pointer).
2002-07-19 04:04:16 +00:00
Tim Peters a8c974c157 Cleanup yielding a small speed boost: before rich comparisons were
introduced, list.sort() was rewritten to use only the "< or not <?"
distinction.  After rich comparisons were introduced, docompare() was
fiddled to translate a Py_LT Boolean result into the old "-1 for <,
0 for ==, 1 for >" flavor of outcome, and the sorting code was left
alone.  This left things more obscure than they should be, and turns
out it also cost measurable cycles.

So:  The old CMPERROR novelty is gone.  docompare() is renamed to islt(),
and now has the same return conditions as PyObject_RichCompareBool.  The
SETK macro is renamed to ISLT, and is even weirder than before (don't
complain unless you want to maintain the sort code <wink>).

Overall, this yields a 1-2% speedup in the usual (no explicit function
passed to list.sort()) case when sorting arrays of floats (as sortperf.py
does).  The boost is higher for arrays of ints.
2002-07-19 03:30:57 +00:00
Tim Peters 3b01a1217f Trimmed trailing whitespace. 2002-07-19 02:35:45 +00:00
Tim Peters 8e2e7ca330 Cleanup: Define one internal utility for reversing a list slice, and
use that everywhere.
2002-07-19 02:33:08 +00:00
Jeremy Hylton d1fedb6ab5 Remove extraneous semicolon.
(Silences compiler warning for Compaq C++ 6.5 on Tru64.)
2002-07-18 18:49:52 +00:00
Jeremy Hylton 938ace69a0 staticforward bites the dust.
The staticforward define was needed to support certain broken C
compilers (notably SCO ODT 3.0, perhaps early AIX as well) that botched the
static keyword when it was used with a forward declaration of a static
initialized structure.  Standard C allows the forward declaration with
static, and we've decided to stop catering to broken C compilers.  (In
fact, we expect that the compilers are all fixed eight years later.)

I'm leaving staticforward and statichere defined in object.h as
static.  This is only for backwards compatibility with C extensions
that might still use it.

XXX I haven't updated the documentation.
2002-07-17 16:30:39 +00:00
Guido van Rossum ca5ed5b875 Remove the next() method -- one is supplied automatically by
PyType_Ready() because the tp_iternext slot is set (fortunately,
because using the tp_iternext implementation for the next()
implementation is buggy).  Also changed the allocation order in
enum_next() so that the underlying iterator is only moved ahead when
we have successfully allocated the result tuple and index.
2002-07-16 21:02:42 +00:00
Guido van Rossum 86d593e110 Remove the next() method -- one is supplied automatically by
PyType_Ready() because the tp_iternext slot is set.  Also removed the
redundant (and expensive!) call to raise StopIteration from
rangeiter_next().
2002-07-16 20:47:50 +00:00
Guido van Rossum 2147df748f Make StopIteration a sink state. This is done by clearing out the
di_dict field when the end of the list is reached.  Also make the
error ("dictionary changed size during iteration") a sticky state.

Also remove the next() method -- one is supplied automatically by
PyType_Ready() because the tp_iternext slot is set.  That's a good
thing, because the implementation given here was buggy (it never
raised StopIteration).
2002-07-16 20:30:22 +00:00
Guido van Rossum 613bed3726 Make StopIteration a sink state. This is done by clearing out the
object references (it_seq for seqiterobject, it_callable and
it_sentinel for calliterobject) when the end of the list is reached.

Also remove the next() methods -- one is supplied automatically by
PyType_Ready() because the tp_iternext slot is set.  That's a good
thing, because the implementation given here was buggy (it never
raised StopIteration).
2002-07-16 20:24:46 +00:00
Guido van Rossum 6b6272c857 Whitespace normalization. 2002-07-16 20:10:23 +00:00
Guido van Rossum 86103ae531 Make StopIteration a sink state. This is done by clearing out the
it_seq field when the end of the list is reached.

Also remove the next() method -- one is supplied automatically by
PyType_Ready() because the tp_iternext slot is set.  That's a good
thing, because the implementation given here was buggy (it never
raised StopIteration).
2002-07-16 20:07:32 +00:00
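The "sink state" these checkins establish, seen from Python: once exhausted, the iterator keeps raising StopIteration instead of restarting.

    it = iter([1, 2])
    print(list(it))           # [1, 2]
    print(next(it, "done"))   # done -- still exhausted on every later call
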
Jeremy Hylton 719841e2fb The object returned by tp_new() may not have a tp_init.
If the object is an ExtensionClass, for example, the slot is not even
defined.  So we must check that the type has the slot (implied by
HAVE_CLASS) before calling tp_init().
2002-07-16 19:39:38 +00:00
Guido van Rossum 5086e49a6e Make list_iter() really static. 2002-07-16 15:56:52 +00:00
Guido van Rossum 03013a0130 valid_identifier(): use an unsigned char* so that isalpha() will do
the right thing even if char is unsigned.
2002-07-16 14:30:28 +00:00
Tim Peters 58cf361e35 docompare(): Another reasonable optimization from Jonathan Hogg for the
explicit comparison function case:  use PyObject_Call instead of
PyEval_CallObject.  Same thing in context, but gives a 2.4% overall
speedup when sorting a list of ints via list.sort(__builtin__.cmp).
2002-07-15 05:16:13 +00:00
Tim Peters 7a1f91709b WINDOWS_LEAN_AND_MEAN: There is no such symbol, although a very few
MSDN sample programs use it, apparently in error.  The correct name
is WIN32_LEAN_AND_MEAN.  After switching to the correct name, in two
cases more was needed because the code actually relied on things that
disappear when WIN32_LEAN_AND_MEAN is defined.
2002-07-14 22:14:19 +00:00
Guido van Rossum b6d29b7856 Undef MIN and MAX before defining them, to avoid warnings on certain
platforms.
2002-07-13 14:31:51 +00:00
Jeremy Hylton a4b4c3bf05 Don't declare a function with staticforward.
Just declare it static so that lame (BAD_STATIC_FORWARD) compilers
don't see a mismatch between the prototype and the function.
2002-07-13 03:51:17 +00:00
Tim Peters f2a0473350 docompare(): Use PyTuple_New instead of Py_BuildValue to build compare's
arg tuple.  This was suggested on c.l.py but I'm afraid I can't find the msg
again for proper attribution.  For

    list.sort(cmp)

where list is a list of random ints, and cmp is __builtin__.cmp, this
yields an overall 50-60% speedup on my Win2K box.  Of course this is a
best case, because the overhead of calling cmp relative to the cost of
actually comparing two ints is at an extreme.  Nevertheless it's huge
bang for the buck.  An additional 20-30% can be bought by making the arg
tuple an immortal static (avoiding all but "the first" PyTuple_New), but
that's tricky to make correct since docompare needs to be reentrant.  So
this picks the cherry and leaves the pits for Fred <wink>.

Note that this makes no difference to the

    list.sort()

case; an arg tuple gets built only if the user specifies an explicit
sort function.
2002-07-11 21:46:16 +00:00
Jeremy Hylton df3f793516 Extend function() to support an optional closure argument.
Also, simplify some ref counting for other optional arguments.
2002-07-11 18:30:27 +00:00
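A sketch of the optional closure argument on the function constructor, reachable today as types.FunctionType; all names here are illustrative.

    import types

    def outer():
        value = None
        def template():
            return value     # free variable, so the code object requires a closure
        return template

    template = outer()

    def make_cell(contents):
        def inner():
            return contents
        return inner.__closure__[0]

    func = types.FunctionType(template.__code__, globals(), "func",
                              None, (make_cell(42),))
    print(func())            # 42
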
Tim Peters 3459251d5a object.h special-build macro minefield: renamed all the new lexical
helper macros to something saner, and used them appropriately in other
files too, to reduce #ifdef blocks.

classobject.c, instance_dealloc():  One of my worst Python Memories is
trying to fix this routine a few years ago when COUNT_ALLOCS was defined
but Py_TRACE_REFS wasn't.  The special-build code here is way too
complicated.  Now it's much simpler.  Difference:  in a Py_TRACE_REFS
build, the instance is no longer in the doubly-linked list of live
objects while its __del__ method is executing, and that may be visible
via sys.getobjects() called from a __del__ method.  Tough -- the object
is presumed dead while its __del__ is executing anyway, and not calling
_Py_NewReference() at the start allows enormous code simplification.

typeobject.c, call_finalizer():  The special-build instance_dealloc()
pain apparently spread to here too via cut-'n-paste, and this is much
simpler now too.  In addition, I didn't understand why this routine
was calling _PyObject_GC_TRACK() after a resurrection, since there's no
plausible way _PyObject_GC_UNTRACK() could have been called on the
object by this point.  I suspect it was left over from pasting the
instance_dealloc() code.  Instead asserted that the object is still
tracked.  Caution:  I suspect we don't have a test that actually
exercises the subtype_dealloc() __del__-resurrected-me code.
2002-07-11 06:23:50 +00:00
Tim Peters 889f61dcfb Documented PYMALLOC_DEBUG. This completes primary coverage of all the
"special builds" I ever use.  If you use others, document them here, or
don't be surprised if I rip out the code for them <0.5 wink>.
2002-07-10 19:29:49 +00:00
Tim Peters 7c321a80f9 The Py_REF_DEBUG/COUNT_ALLOCS/Py_TRACE_REFS macro minefield: added
more trivial lexical helper macros so that uses of these guys expand
to nothing at all when they're not enabled.  This should help sub-
standard compilers that can't do a good job of optimizing away the
previous "(void)0" expressions.

Py_DECREF:  There's only one definition of this now.  Yay!  That
was that last one in the family defined multiple times in an #ifdef
maze.

Py_FatalError():  Changed the char* signature to const char*.

_Py_NegativeRefcount():  New helper function for the Py_REF_DEBUG
expansion of Py_DECREF.  Calling an external function cuts down on
the volume of generated code.  The previous inline expansion of abort()
didn't work as intended on Windows (the program often kept going, and
the error msg scrolled off the screen unseen).  _Py_NegativeRefcount
calls Py_FatalError instead, which captures our best knowledge of
how to abort effectively across platforms.
2002-07-09 02:57:01 +00:00
Tim Peters c6a3ff634a SF bug 578752: COUNT_ALLOCS vs heap types
Repair segfaults and infinite loops in COUNT_ALLOCS builds in the
presence of new-style (heap-allocated) classes/types.

Bugfix candidate.  I'll backport this to 2.2.  It's irrelevant in 2.1.
2002-07-08 22:11:52 +00:00
Tim Peters 4be93d0e84 Rearranged and added comments to object.h, to clarify many things
that have taken me "too long" to reverse-engineer over the years.
Vastly reduced the nesting level and redundancy of #ifdef-ery.
Took a light stab at repairing comments that are no longer true.

sys_gettotalrefcount():  Changed to enable under Py_REF_DEBUG.
It was enabled under Py_TRACE_REFS, which was much heavier than
necessary.  sys.gettotalrefcount() is now available in a
Py_REF_DEBUG-only build.
2002-07-07 19:59:50 +00:00
Tim Peters a6269a8ec5 Removed 3 unlikely #includes that were only needed for the non-gc flavor
of the trashcan code.
2002-07-07 16:52:50 +00:00
Tim Peters 803526b9e2 Trashcan cleanup: Now that cyclic gc is always there, the trashcan
mechanism is no longer evil:  it no longer plays dangerous games with
the type pointer or refcounts, and objects in extension modules can play
along too without needing to edit the core first.

Rewrote all the comments to explain this, and (I hope) give clear
guidance to extension authors who do want to play along.  Documented
all the functions.  Added more asserts (it may no longer be evil, but
it's still dangerous <0.9 wink>).  Rearranged the generated code to
make it clearer, and to tolerate either the presence or absence of a
semicolon after the macros.  Rewrote _PyTrash_destroy_chain() to call
tp_dealloc directly; it was doing a Py_DECREF again, and that has all
sorts of obscure distorting effects in non-release builds (Py_DECREF
was already called on the object!).  Removed Christian's little "embedded
change log" comments -- that's what checkin messages are for, and since
it was impossible to correlate the comments with the code that changed,
I found them merely distracting.
2002-07-07 05:13:56 +00:00
Tim Peters 943382c8e5 Removed WITH_CYCLE_GC #ifdef-ery. Holes:
+ I'm not sure what to do about configure.in.  Left it alone.

+ Ditto pyexpat.c.  Fred or Martin will know what to do.
2002-07-07 03:59:34 +00:00
Martin v. Löwis 6238d2b024 Patch #569753: Remove support for WIN16.
Rename all occurrences of MS_WIN32 to MS_WINDOWS.
2002-06-30 15:26:10 +00:00
Raymond Hettinger 5a04aec384 Fix SF bug 546434 -- buffer slice type inconsistent. 2002-06-25 00:25:30 +00:00
Raymond Hettinger ab5dae35ca Fix SF bug 572567: Memory leak in object comparison. 2002-06-24 13:08:16 +00:00
Jeremy Hylton 8b47dffc93 Fix for SF bug 571885
When resizing a tuple, zero out the memory starting at the end of the
old tuple not at the beginning of the old tuple.
2002-06-20 23:13:17 +00:00
Raymond Hettinger 0ae0c07661 SF 569257 -- Name mangle double underscored variable names in __slots__. 2002-06-20 22:23:15 +00:00
Michael W. Hudson 9c14badc5f Fix the bug described in
http://mail.python.org/pipermail/python-dev/2002-June/025461.html

with test cases.

Also includes extended slice support for arrays, which I thought I'd
already checked in but obviously not.
2002-06-19 15:44:15 +00:00
Guido van Rossum 63517577fd Patch from SF bug 570483 (Tim Northover).
In a fresh interpreter, type.mro(tuple) would segfault, because
PyType_Ready() isn't called for tuple yet.  To fix, call
PyType_Ready(type) if type->tp_dict is NULL.
2002-06-18 16:44:57 +00:00
Michael W. Hudson b1e8154013 About the new but unreferenced new_class, Guido sez:
> Looks like an experiment by Oren Tirosh that didn't get nuked.  I
> think you can safely lose it.

It's gone.
2002-06-18 12:38:06 +00:00
Guido van Rossum bea18ccde6 SF patch 568629 by Oren Tirosh: types made callable.
These built-in functions are replaced by their (now callable) type:

    slice()
    buffer()

and these types can also be called (but have no built-in named
function named after them)

    classobj (type name used to be "class")
    code
    function
    instance
    instancemethod (type name used to be "instance method")

The module "new" has been replaced with a small backward compatibility
placeholder in Python.

A large portion of the patch simply removes the new module from
various platform-specific build recipes.  The following binary Mac
project files still have references to it:

    Mac/Build/PythonCore.mcp
    Mac/Build/PythonStandSmall.mcp
    Mac/Build/PythonStandalone.mcp

[I've tweaked the code layout and the doc strings here and there, and
added a comment to types.py about StringTypes vs. basestring.  --Guido]
2002-06-14 20:41:17 +00:00
Guido van Rossum 59e6c53920 Inexplicably, recurse_down_subclasses() was comparing the object
gotten from a weak reference to NULL instead of to None.  This caused
the following assert() to fail (but only in 2.2 in the debug build --
I have to find a better test case).  Will backport.
2002-06-14 02:27:07 +00:00
Neal Norwitz 2c2e827029 Missed one use of new PyDoc_STRVAR macro 2002-06-14 02:04:18 +00:00
Neal Norwitz 1f68fc7fa5 SF bug # 493951 string.{starts,ends}with vs slices
Handle negative indices similar to slices.
2002-06-14 00:50:42 +00:00
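Negative start/end indices now behave like slice bounds:

    s = "hello"
    print(s.startswith("ell", -4))    # True: same as "ello".startswith("ell")
    print(s.endswith("ell", 0, -1))   # True: same as "hell".endswith("ell")
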
Neal Norwitz 4178515035 SF # 533070 Silence AIX C Compiler Warnings
Warning caused by using &func.  & is not necessary.
2002-06-13 21:42:51 +00:00
Guido van Rossum e7b8ecf196 Major cleanup operation: whenever there's a call that looks for an
optional attribute, only clear the exception when the internal getattr
operation raised AttributeError.  Many places in this file already had
that policy; but just as many didn't, and there didn't seem to be any
rhyme or reason to it.  Be consistently cautious.

Question: should I backport this?  On the one hand it's a bugfix.  On
the other hand it's a change in behavior.  Certain forms of buggy or
just weird code would work in the past but raise an exception under
the new rules; e.g. if you define a __getattr__ method that raises a
non-AttributeError exception.
2002-06-13 21:42:04 +00:00
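The policy, illustrated from Python: only AttributeError means "attribute absent"; anything else raised by __getattr__ propagates instead of being swallowed.

    class Flaky(object):
        def __getattr__(self, name):
            raise RuntimeError("lookup of %r failed for an unrelated reason" % name)

    try:
        getattr(Flaky(), "anything")
    except RuntimeError as exc:
        print(exc)
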
Guido van Rossum 16b93b3d0e Fix for SF bug 532646. This is a little simpler than what Neal
suggested there, based upon a better analysis (__getattr__ is a red
herring).  Will backport to 2.2.
2002-06-13 21:32:51 +00:00