It's possible to create insane datetime objects by using the constructor
"backdoor" inserted for fast unpickling. Doing extensive range checking
would eliminate the backdoor's purpose (speed), but at least a little
checking can stop honest mistakes.
Bugfix candidate.
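For illustration, the backdoor is the single-argument constructor form used by
the pickle support: the argument is the raw 10-byte state (two bytes of year,
then month, day, hour, minute, second, and three bytes of microseconds). A
minimal sketch, shown here with Python 3 bytes:

    from datetime import datetime

    # 0x07D0 == 2000; then month=1, day=2, hour=3, minute=4, second=5, usec=0
    state = bytes([0x07, 0xD0, 1, 2, 3, 4, 5, 0, 0, 0])
    print(datetime(state))          # 2000-01-02 03:04:05

    # With no checking at all, a blob carrying month=0 or day=0 would happily
    # build an "insane" datetime; a little checking stops such honest mistakes.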
* The default __reversed__ performed badly, so reintroduced a custom
reverse iterator.
* Added length transparency to improve speed with map(), list(), etc.
array.extend() now accepts iterable arguments, implemented as a series
of appends. Besides being a user convenience and matching the behavior
for lists, this saves the memory and cycles that would be used to
create a temporary array object.
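A small sketch of the new behavior:

    from array import array

    a = array('i', [1, 2, 3])
    a.extend(x * x for x in range(4))   # any iterable; items are appended one at a time
    print(a)                            # array('i', [1, 2, 3, 0, 1, 4, 9])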
lists. Speeds append() operations and reduces memory requirements
(because of more conservative overallocation).
Paves the way for the feature request for array.extend() to support
arbitrary iterable arguments.
The writelines() method now accepts any iterable argument and writes
the lines one at a time rather than using ''.join(lines) followed by
a single write. Results in considerable memory savings and makes
the method suitable for use with generator expressions.
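A small usage sketch (file names are illustrative):

    with open('in.txt') as src, open('out.txt', 'w') as dst:
        # lines are written one at a time; no ''.join(lines) buffer is built
        dst.writelines(line.upper() for line in src)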
are within proper boundaries as specified in the docs.
This can break existing code (the datetime module needed changing, for
instance) that uses 0 for values that need to be 1 or greater (month, day,
and day of year).
Fixes bug #897625.
__getitem__() and __setitem__().
Simplifies the API, reduces the code size, adds flexibility, and makes
deques work with bisect.bisect(), random.shuffle(), and random.sample().
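A quick sketch of what the indexing support enables:

    from collections import deque
    import bisect, random

    d = deque('abcde')
    d[0], d[-1] = d[-1], d[0]       # __getitem__() and __setitem__() in action
    random.shuffle(d)               # shuffle() only needs len(), d[i], and d[i] = x
    s = deque(sorted(d))
    print(bisect.bisect(s, 'c'))    # 3; bisect can search the deque directly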
* Add doctests for the examples in the library reference.
* Add two methods, left() and right(), modeled after deques in C++ STL.
* Apply the new method to asynchat.py.
* Add comparison operators to make deques more substitutable for lists.
* Replace the LookupErrors with IndexErrors to more closely match lists.
* Fixed a bug in the compatibility interface set_location() method
where it would not properly search to the next nearest key when
used on BTree databases. [SF bug id 788421]
* Fixed a bug in the compatibility interface set_location() method
where it could crash when looking up keys in a hash or recno
format database due to an incorrect free().
Allow the user to create Tkinter.Tcl objects which are
just like Tkinter.Tk objects except that they do not
initialize Tk. This is useful in circumstances where the
script is being run on machines that do not have an X
server running -- in those cases, Tk initialization fails,
even if no window is ever created.
Includes documentation change and tests.
Tested on Linux, Solaris and Windows.
Reviewed by Martin von Loewis.
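A minimal usage sketch:

    import Tkinter                     # "tkinter" on Python 3
    tcl = Tkinter.Tcl()                # interpreter only; Tk is never initialized
    print(tcl.eval('expr {6 * 7}'))    # '42', even with no X server available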
as "This is the same format as used by gl.lrectwrite() and the imgfile
module." This implies a certain byte order in multi-byte pixel
formats. However, the code was originally written on an SGI
(big-endian) and *uses* the fact that bytes are stored in a particular
order in ints. This means that the code uses and produces different
byte order on little-endian systems.
This fix adds a module-level flag "backward_compatible" (not set by
default; when not set, the module behaves as if it were set to 1, i.e.
backward compatible)
that can be used on a little-endian system to use the same byte order
as the SGI. Using this flag it is then possible to prepare
SGI-compatible images on a little-endian system.
This patch is the result of a (small) discussion on python-dev and was
submitted to SourceForge as patch #874358.
Original idea by Guido van Rossum.
Idea for skippable inner iterators by Raymond Hettinger.
Idea for argument order and identity function default by Alex Martelli.
Implementation by Hye-Shik Chang (with tweaks by Raymond Hettinger).
Also SF patch 843455.
This is a critical bugfix.
I'll backport to 2.3 maint, but not beyond that. The bugs this fixes
have been there since weakrefs were introduced.
for this function has always claimed that was true, but it wasn't
verified before. For the latest batch of "double deallocation" bugs
(stemming from weakref callbacks invoked by way of subtype_dealloc),
this assert would have triggered (instead of waiting for
_Py_ForgetReference to die with a segfault later).
Formerly, underlying queue was implemented in terms of two lists. The
new queue is a series of singly-linked fixed length lists.
The new implementation runs much faster, supports multi-way tees, and
allows tees of tees without additional memory costs.
The root ideas for this structure were contributed by Andrew Koenig
and Guido van Rossum.
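A rough pure-Python sketch of the idea (block size, names, and structure are
illustrative; the real code is C in the itertools module):

    BLOCK = 57                      # hypothetical fixed block length

    class _Block(object):
        def __init__(self):
            self.values = []        # holds at most BLOCK items
            self.next = None        # link to the next block in the chain

    def sketch_tee(iterable, n=2):
        source = iter(iterable)
        state = {'tail': _Block()}  # newest block, shared by all returned iterators

        def pull():
            # Fetch one value from the source and append it to the newest block,
            # starting a fresh block once the current one fills up.
            tail = state['tail']
            if len(tail.values) == BLOCK:
                tail.next = _Block()
                tail = state['tail'] = tail.next
            tail.values.append(next(source))

        def walker(node=state['tail']):
            i = 0
            while True:
                if i < len(node.values):
                    yield node.values[i]
                    i += 1
                elif node.next is not None:     # block exhausted; follow the chain
                    node, i = node.next, 0
                else:
                    try:
                        pull()                  # this iterator is the current leader
                    except StopIteration:
                        return
                    if node is not state['tail']:
                        node, i = node.next, 0  # pull() started a new block

        return tuple(walker() for _ in range(n))

Blocks that every iterator has moved past become unreachable and can be freed,
which is the property the linked-block layout is after.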
memory leak that would've occurred for all iterators that were
destroyed before having iterated until they raised StopIteration.
* Simplify some code.
* Add new test cases to check for the memleak and ensure that mixing
iteration with modification of the values for existing keys works.
has been closed" exceptions.
Adds a DBCursorClosedError exception in the closed cursor case for
future use in fixing the legacy bsddb interface deadlock problems
due to its use of cursors with DB_INIT_LOCK | DB_THREAD support
enabled.
* tee object is no longer subclassable
* independent iterators renamed to "itertools.tee_iterator"
* fixed doc string typo and added entry in the module doc string
* Add error checking code to PyList_Append() call.
* Replace PyObject_CallMethod(to->outbasket, "pop", NULL) with equivalent
in-line code. Inlining is important here because the search for the
pop method will occur for every element returned by the iterator.
* Make tee's dealloc() a little smarter. If the trailing iterator is
being deallocated, then the queue data is no longer needed and can
be freed.
It works like the pure Python version except:
* it stops storing data after one of the iterators gets deallocated
* the data queue is implemented with two stacks instead of one dictionary.
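For reference, the two-stack queue trick looks roughly like this in pure Python
(illustrative names, not the actual C code): push onto one list and, when the
other list runs dry, reverse the first into it and pop from there.

    class TwoStackQueue(object):
        def __init__(self):
            self.inbasket = []      # newly produced values land here
            self.outbasket = []     # values are handed out from here

        def push(self, value):
            self.inbasket.append(value)

        def pop(self):
            # Raises IndexError when the queue is empty.
            if not self.outbasket:
                self.inbasket.reverse()
                self.inbasket, self.outbasket = self.outbasket, self.inbasket
            return self.outbasket.pop()

        def __len__(self):
            return len(self.inbasket) + len(self.outbasket)

Both push() and (amortized) pop() run in constant time.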
* Added a C-coded getrandbits(k) method that runs in linear time.
* Call the new method from randrange() for ranges >= 2**53.
* Added a warning for generators that do not define getrandbits() whenever
  randrange() is called with too large a population.
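The new method is also directly usable; a quick sketch:

    import random

    token = random.getrandbits(128)   # a non-negative int with 128 random bits
    big = random.randrange(10**30)    # huge ranges now go through getrandbits()
                                      # instead of losing precision in a float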
features in BerkeleyDB not exposed. Notably, the DB_MPOOLFILE interface
has not yet been wrapped in an object.
Adds support for building and installing bsddb3 in python2.3 that has
an older version of this module installed as bsddb without conflicts.
The pybsddb.sf.net build/packaged version of the module uses a
dynamically loadable module called _pybsddb rather than _bsddb.
The embed2.diff patch solves the user's problem by exporting the missing
symbols from the Python core so Python can be embedded in another Cygwin
application (well, at least vim).
Fixed leak caused by switching from PyList_GetItem to PySequence_GetItem.
Added missing NULL check.
Clarified code by converting an "if" to an "else if".
Will backport to 2.3.
arbitrary bytes before the actual zip compatible archive. Zipfiles
containing comments at the end of the file are still not supported.
Add a testcase to test_zipimport, and update NEWS.
This closes sf #775637 and sf #669036.
New Plan (releases to be made off the head, ongoing random 2.4 stuff
to be done on a short-lived branch, provided anyone is motivated enough
to create one).
I don't think the fix here is very good, but I'm not sure what would
be better. In particular, we should not be defining _SGIAPI, but lots
of things break if we remove it.
* Extended DB & DBEnv set_get_returns_none functionality to take a
"level" instead of a boolean flag. The boolean 0 and 1 values still
have the same effect. A value of 2 extends the "return None instead
of raising an exception" behaviour to the DBCursor set methods.
This will become the default behaviour in pybsddb 4.2.
* Fixed a typo in DBCursor.join_item method that made it crash instead
of returning a value. Obviously nobody uses it. Wrote a test case
for join and join_item.
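A rough usage sketch of the new level (assuming the package layout where the
low-level wrapper is importable as bsddb.db; a standalone pybsddb install would
use bsddb3.db instead):

    from bsddb import db

    env = db.DBEnv()
    env.set_get_returns_none(2)   # 0 and 1 behave as before; 2 also makes the
                                  # DBCursor set methods return None instead of
                                  # raising an exception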
after running the script so that a program could do something like:
os.environ['PYTHONINSPECT'] = '1'
to programmatically enter a prompt at the end.
(After a patch by Skip Montanaro w/ a proposal by Troy Melhase.)
(Contributed by Bob Halley)
Added a new exception, socket.timeout, so that timeouts can be differentiated
from other socket exceptions.
Docs, more tests, and newsitem to follow.
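A small usage sketch:

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(0.5)
    try:
        s.connect(('example.com', 80))
    except socket.timeout:
        print('timed out')                      # now distinguishable
    except socket.error as exc:
        print('some other socket error:', exc)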
If the callback raised an exception but did not set curexc_traceback,
the trace function was called with PyTrace_RETURN. That is, the trace
function was called with an exception set. The main loop detected the
exception when the trace function returned; it complained and disabled
tracing.
Fix the logic error so that PyTrace_RETURN only occurs if the callback
returned normally.
The trace function must be called for exceptions, too. So we had
to add new functionality to call with PyTrace_EXCEPTION. (Leads to a
rather ugly ifdef / else block that contains only a '}'.)
Reverse the logic and name of NOFIX_TRACE to FIX_TRACE.
Joint work with Fred.
The interning of short strings violates the refcnt==1 assumption for
_PyString_Resize().
A simple fix is to boost the initial value of "totalnew" by 1.
Combined with a NULL argument to PyString_FromStringAndSize(),
this ensures that the resulting format string is not interned.
This will remain true even if the implementation of
PyString_FromStringAndSize() changes, because the only uninitialized
strings that can be interned are those of zero length.
Added a test case.
* Updated comment on design of imap()
* Added untraversed object in izip() structure
* Replaced the pairwise() example with a more general window() example
SF bug [ 751276 ] cPickle doesn't raise error, pickle does (recursiondepth)
Most of the calls to PyErr_Clear() were intended to catch & clear an
attribute error and try something different. Guard all those cases
with a PyErr_ExceptionMatches() and fail if some other error
occurred. The other error is likely a bug in the user code.
This is basically the C equivalent of changing "except:" to
"except AttributeError:"
stack usage on FreeBSD, requiring the recursion limit to be lowered
further. Building with gcc 2.95 (the standard compiler on FreeBSD 4.x)
is now also affected.
The underlying issue is that FreeBSD's pthreads implementation has a
hard-coded 1MB stack size for the initial (or "primary") thread, which
can not be changed without rebuilding libc_r. Exhausting this stack
results in a bus error.
Building without pthreads (configure --without-threads), or linking
with the port of the Linux pthreads library (aka Linuxthreads) instead
of libc_r, avoids this limitation.
On OS/2, only gcc 3.2 is affected and the stack size is controllable,
so the special handling has been removed.
build (assert(gc->gc.gc_refs != 0) in visit_decref()).
Because OSSAudioError is a global, we must compensate (twice!) for
PyModule_AddObject()'s "helpful" decref of the object it adds.
* it no longer takes ssize, which served no purpose apart from
scolding you if you got it wrong
* changed the order of the three remaining required arguments
to (format, channels, rate) to match the order in which they
must be set
* replaced the optional argument 'emulate' with 'strict': if strict is
true, and the audio device does not accept the requested sampling
parameters, raise OSSAudioError
* return a tuple (format, channels, rate) reflecting the sampling
parameters that were actually set
Change the canonical name of ossaudiodev.error to
ossaudiodev.OSSAudioError (keep an alias for backwards compatibility).
Remove 'audio_types' list and 'n_audio_types' (no longer needed now that
setparameters() no longer has an 'ssize' argument to police).
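A rough sketch of the new calling convention (Linux/OSS only; an available
audio device is assumed):

    import ossaudiodev

    dsp = ossaudiodev.open('w')
    # new order: (format, channels, rate); an optional fourth argument is 'strict'
    fmt, channels, rate = dsp.setparameters(ossaudiodev.AFMT_S16_LE, 2, 44100)
    # the returned tuple reflects what the device actually accepted; with strict
    # set, a mismatch raises ossaudiodev.OSSAudioError instead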
* sync(), because it waits for hardware buffers to flush, which
can take several seconds depending on circumstances (according
to the OSS docs)
* close(), because it does an implicit sync()
tp_free is NULL or PyObject_Del at the end. Because it's a base type
it must call tp_free in its dealloc function, and because it's gc'able
it must not call PyObject_Del.
inherit_slots(): Don't inherit tp_free unless the type and its base
agree about whether they're gc'able. If the type is gc'able and the
base is not, and the base uses the default PyObject_Del for its
tp_free, give the type PyObject_GC_Del for its tp_free (the appropriate
default for a gc'able type).
cPickle.c: The Pickler and Unpickler types claim to be base classes
and gc'able, but their dealloc functions didn't call tp_free.
Repaired that. Also call PyType_Ready() on these typeobjects, so
that the correct (PyObject_GC_Del) default memory-freeing function
gets plugged into these types' tp_free slots.
one good use: a subclass adding a method to express the duration as
a number of hours (or minutes, or whatever else you want to add). The
native breakdown into days+seconds+us is often clumsy. Incidentally
moved a large chunk of object-initialization code closer to the top of
the file, to avoid worse forward-reference trickery.
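For instance, a minimal sketch of such a subclass (the class and method names
are made up for illustration):

    from datetime import timedelta

    class Duration(timedelta):
        def hours(self):
            return self.days * 24 + self.seconds / 3600.0 + self.microseconds / 3.6e9

    print(Duration(days=1, seconds=5400).hours())   # 25.5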
the itertoolsmodule.
* Taught itertools.repeat(obj, n) to treat negative repeat counts as
zero. This behavior matches that for sequences and prevents
infinite loops.
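A quick sketch of the parallel behavior:

    import itertools

    print('ab' * -3)                          # '' (sequences treat negative counts as zero)
    print(list(itertools.repeat('ab', -3)))   # [] (repeat() now does the same)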
that was used to start the thread. This is useful to track down the
source of the problem when there is no traceback, as can happen when a
daemon thread gets to run after Python is finalized (a new kind of
event; somehow this is now possible due to changes in Py_Finalize()).
to use LASTMARK_SAVE()/LASTMARK_RESTORE(), based on the discussion
in patch #712900.
- Cleaned up LASTMARK_SAVE()/LASTMARK_RESTORE() usage, based on the
established rules.
- Moved the upper part of the just committed patch (relative to bug #725106)
to outside the for() loop of BRANCH OP. There's no need to mark_save()
in every loop iteration.
This problem is related to incorrect behavior in mark_save/restore(),
which don't restore the mark_stack_base before restoring the marks.
Greg's suggestion was to change the asserts, which happen to be
the only recursive ops that can continue the loop, but the problem would
happen to any operation with the same behavior. So, rather than
hardcoding this into asserts, I have changed mark_save/restore() to
always restore the stackbase before restoring the marks.
Both solutions should fix these two cases, presented by Greg:
>>> re.match('(a)(?:(?=(b)*)c)*', 'abb').groups()
('b', None)
>>> re.match('(a)((?!(b)*))*', 'abb').groups()
('b', None, None)
The rest of the bug and patch in #725149 must be discussed further.
within repeats of alternatives. The only change to the original
patch was to convert the tests to the new test_re.py file.
This patch fixes cases like:
>>> re.match('((a)|b)*', 'abc').groups()
('b', '')
Which is wrong (it's impossible to match the empty string),
and incompatible with other regex systems, as the following
examples show:
% perl -e '"abc" =~ /^((a)|b)*/; print "$1 $2\n";'
b a
% echo "abc" | sed -r -e "s/^((a)|b)*/\1 \2|/"
b a|c
- The socket module now provides the functions inet_pton and inet_ntop
for converting between string and packed representation of IP addresses.
See SF patch #658327.
This still needs a bit of work in the doc area, because it is not
available on all platforms (especially not on Windows).
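A small usage sketch (on a platform that provides these functions):

    import socket

    packed = socket.inet_pton(socket.AF_INET6, '2001:db8::1')   # 16-byte packed form
    print(socket.inet_ntop(socket.AF_INET6, packed))            # '2001:db8::1'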
(contributed by logistix; substantially reworked by rhettinger).
To create a representation of non-string arrays, array_repr() was
starting with a base Python string object and repeatedly using +=
to concatenate the representation of individual objects.
Logistix had the idea to convert to an intermediate tuple form and
then join it all at once. I took advantage of existing tools and
formed a list with array_tolist() and got its representation through
PyObject_Repr(v) which already has a fast implementation for lists.
docs here are best-guess: the MS docs I could find weren't clear, and
some even claimed _commit() has no effect on Win32 systems (which is
easily shown to be false just by trying it).
* UINT_MAX -> ULONG_MAX since we are dealing with longs
* ParseTuple needs &int for 'i' and &long for 'l'
There may be a better way to do this, but this works.
string does what is expected (ie unset [BEGIN|END]LIBPATH)
- set the size of the DosQuerySysInfo buffer correctly; it was safe,
but incorrect (allowing a 1 element overrun)
I've applied a modified version of Greg Chapman's patch. I've included
the fixes without introducing the reorganization mentioned, for the sake
of stability. Also, the second fix mentioned in the patch no longer fixes
the mentioned problem, because of the change introduced by patch
#720991 (by Greg as well). The new fix wasn't complicated though, and is
included as well.
As a note: it seems that there are other places that require the
"protection" of LASTMARK_SAVE()/LASTMARK_RESTORE(), and are just waiting
for someone to find out how to break them. In particular, I believe that every
recursion of SRE_MATCH() should be protected by these macros. I won't
do that right now since I'm not completely sure about this, and we don't
have much time for testing until the next release.
to be compliant with previous python versions, by backing out the changes
made in revision 2.84 which affected this. The bugfix for backtracking is
still maintained.
New functions:
    unsigned long          PyInt_AsUnsignedLongMask(PyObject *);
    unsigned PY_LONG_LONG  PyInt_AsUnsignedLongLongMask(PyObject *);
    unsigned long          PyLong_AsUnsignedLongMask(PyObject *);
    unsigned PY_LONG_LONG  PyLong_AsUnsignedLongLongMask(PyObject *);
New and changed format codes:
    b    unsigned char        0..UCHAR_MAX
    B    unsigned char        none **
    h    unsigned short       0..USHRT_MAX
    H    unsigned short       none **
    i    int                  INT_MIN..INT_MAX
    I *  unsigned int         0..UINT_MAX
    l    long                 LONG_MIN..LONG_MAX
    k *  unsigned long        none
    L    long long            LLONG_MIN..LLONG_MAX
    K *  unsigned long long   none
Notes:
* New format codes.
** Changed from previous "range-and-a-half" to "none"; the
range-and-a-half checking wasn't particularly useful.
New test test_getargs2.py, to verify all this.
A small fix for bug #545855 and Greg Chapman's
addition of op code SRE_OP_MIN_REPEAT_ONE for
eliminating recursion on simple uses of pattern '*?' on a
long string.