externally unreachable objects with finalizers, and externally unreachable
objects without finalizers reachable from such objects. This allows us
to call has_finalizer() at most once per object, and so limit the pain of
nasty getattr hooks. This fixes the failing "boom 2" example Jeremy
posted (a non-printing variant of which is now part of test_gc), via never
triggering the nasty part of its __getattr__ method.
to special-case classic classes, or to worry about refcounts;
has_finalizer() deleted the current object iff the first entry in
the unreachable list has changed. I don't believe it was correct
to check for ob_refcnt == 1, either: the dealloc routine would get
called by Py_DECREF then, but there's nothing to stop the dealloc
routine from resurrecting the object, and then gc would remain at
the head of the unreachable list despite that its refcount temporarily
fell to 0 (and that would lead to an infinite loop in move_finalizers()).
I'm still worried about has_finalizer() resurrecting other objects
in the unreachable list: what's to stop them from getting collected?
delstr from initgc() into collect(). initgc() isn't called unless the
user explicitly imports gc, so can be used only for initialization of
user-visible module features; delstr needs to be initialized for proper
internal operation, whether or not gc is explicitly imported.
Bugfix candidate? I don't know whether the new bug was backported to
2.2 already.
pack_float, pack_double, save_float: All the routines for creating
IEEE-format packed representations of floats and doubles simply ignored
that rounding can (in rare cases) propagate out of a long string of
1 bits. At worst, the end-off carry can (by mistake) interfere with
the exponent value, and then unpacking yields a result wrong by a factor
of 2. In less severe cases, it can end up losing more low-order bits
than intended, or fail to catch overflow *caused* by rounding.
Bugfix candidate, but I already backported this to 2.2.
In 2.3, this code remains in severe need of refactoring.
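As a hedged illustration of the failure mode (not the original test case):
packing the largest double below 2.0 into the standard 4-byte float format
rounds a long run of 1 bits all the way up, so the carry has to land in the
exponent; mishandling that carry is what could make results wrong by a
factor of 2.

    import struct

    # Largest double below 2.0: the mantissa is a long run of 1 bits, so
    # rounding to single precision carries out of the mantissa and must
    # bump the exponent, giving exactly 2.0.
    x = 1.9999999999999998
    (y,) = struct.unpack('<f', struct.pack('<f', x))
    assert y == 2.0   # a botched end-off carry could yield 1.0 instead
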
for specific platforms. Use this to add plat-mac and
plat-mac/lib-scriptpackages on MacOSX. Also tested for not having adverse
effects on Linux, and I think this code isn't used on Windows anyway.
Fixes #661521.
invalid, rather than returning a string of random garbage of the
estimated result length. Closes SF patch #703471 by Hye-Shik Chang.
Will backport to 2.2-maint (consider it done).
it instead of the OS-specific <linux/soundcard.h> or <machine/soundcard.h>.
Mixer devices have an ioctl-only interface, no read/write -- so the
flags passed to open() don't really matter. Thus, drop the 'mode'
parameter to openmixer() (i.e. the second arg to newossmixerobject()) and
always open mixers with O_RDWR.
- The applet logic has been replaced by bundlebuilder's bootstrap script.
- Due to Apple being extremely strict about argv[0], we need a way to
specify the actual executable name for use with sys.executable. See
the comment embedded in the code.
- Implement the behavior as specified in PEP 277, meaning os.listdir()
will only return unicode strings if it is _called_ with a unicode
argument.
- And then return only unicode; don't attempt to convert to ASCII.
- Don't switch on Py_FileSystemDefaultEncoding, but simply use the
default encoding if Py_FileSystemDefaultEncoding is NULL. This means
os.listdir() can now raise UnicodeDecodeError if the default encoding
can't represent the directory entry. (This seems better than silencing
the error and falling back to a byte string.)
- Attempted to describe the above in Doc/lib/libos.tex.
- Reworded the Misc/NEWS items to reflect the current situation.
This checkin also fixes bug #696261, which was due to os.listdir() not
using Py_FileSystemDefaultEncoding, like all file system calls are
supposed to.
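Roughly, under Python 2.3 this means (a sketch; the directory name is
arbitrary):

    import os

    os.listdir('.')     # byte-string argument -> byte-string entries
    os.listdir(u'.')    # unicode argument -> unicode entries; may raise
                        # UnicodeDecodeError if an entry can't be decoded
                        # with the default encoding
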
[ 555817 ] Flawed fcntl.ioctl implementation.
with my patch that allows for an array to be mutated when passed
as the buffer argument to ioctl() (details complicated by
backwards compatibility considerations -- read the docs!).
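A hedged sketch of the mutate-in-place usage enabled by the patch (the
device and request code below are placeholders; the docs spell out the
exact rules):

    import array, fcntl

    buf = array.array('i', [0])               # mutable buffer
    fd = open('/dev/mixer', 'r').fileno()     # placeholder device
    fcntl.ioctl(fd, 0x80044D02, buf, 1)       # placeholder request; mutate_flag=1
    value = buf[0]                            # result is written back into buf
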
Change setup.py to catch all exceptions.
- Rename module if the exception was an ImportError
- Only warn if the exception was any other error
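A minimal sketch of that control flow (the helper names here are
hypothetical, not setup.py's actual ones):

    import sys

    def build_and_check(ext, build_one_extension, rename_failed_module):
        # build_one_extension/rename_failed_module are hypothetical helpers;
        # this only mirrors the flow described above.
        try:
            build_one_extension(ext)
        except ImportError:
            rename_failed_module(ext.name)    # ImportError -> rename the module
        except Exception:
            # any other error -> just warn and keep going
            sys.stderr.write("WARNING: building %s failed\n" % ext.name)
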
Revert _iconv_codec to raising a RuntimeError.
(see SF bug #690309) and raise ImportErrors instead of
RuntimeErrors, so building Python continues even
if importing iconv_codecs fails.
This is a temporary fix until we get proper configure
support for "broken" iconv implementations.
is required for the chosen internal encoding in the init function,
as this seems to have a better chance of working under Irix and
Solaris.
Also change the test character from '\x01' to '0'.
This might fix SF bug #690309.
Rather than trying to second-guess the various error returns
of a second connect(), use select() to determine whether the
socket becomes writable (which means connected).
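In Python terms, the approach looks roughly like this (a sketch; host,
port and timeout are arbitrary):

    import socket, select

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setblocking(0)
    try:
        s.connect(('example.com', 80))
    except socket.error:
        pass                        # EINPROGRESS / EWOULDBLOCK and friends
    _, writable, _ = select.select([], [s], [], 5.0)
    connected = bool(writable)      # writable means the connect finished
    # (check SO_ERROR via getsockopt() to distinguish success from failure)
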
reasons: importing module can fail, or the attribute lookup
module.name can fail. We were giving the same error msg for
both cases, making it needlessly hard to guess what went wrong.
These cases give different error msgs now.
Fix off-by-1 error in normalize_line_endings():
when *p == '\0' the NUL was copied into q and q was auto-incremented,
the loop was broken out of,
then a newline was appended followed by a NUL.
So the function, in effect, was strcpy() but added two extra chars
which was caught by obmalloc in debug mode, since there was only
room for 1 additional newline.
Get test working under regrtest (added test_main).
the optional proto 2 slot state.
pickle.py, load_build(): CAUTION: Noted that cPickle's
load_build and pickle's load_build really don't do the same
things with the state, and didn't before this patch either.
cPickle never tries to do .update(), and has no backoff if
instance.__dict__ can't be retrieved. There are no tests
that can tell the difference, and part of what cPickle's
load_build() did looked accidental to me, so I don't know
what the true intent is here.
pickletester.py, test_pickle.py: Got rid of the hack for
exempting cPickle from running some of the proto 2 tests.
dictobject.c, PyDict_Next(): documented intended use.
exercised by the test suite before cPickle knows how to create NEWOBJ
too. For now, it was just tried once by hand (via loading a NEWOBJ
pickle created by pickle.py).
inet_aton() rather than inet_addr() -- the latter is obsolete because
it has a problem: "255.255.255.255" is a valid address but
indistinguishable from an error.
(I'm not sure if inet_aton() exists everywhere -- in case it doesn't,
I've left the old code in with an #ifdef.)
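The Python-level analogue of the problem (a sketch): inet_addr() signals
errors with INADDR_NONE, which is the same bit pattern as "255.255.255.255",
while inet_aton() keeps success and failure separate.

    import socket

    socket.inet_aton('255.255.255.255')   # valid; returns '\xff\xff\xff\xff'
    try:
        socket.inet_aton('not an address')
    except socket.error:
        pass                              # errors are reported distinctly
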
* Modules/bz2module.c
(BZ2FileObject): Now the structure includes a pointer to a file object,
instead of "inheriting" one. Also, some members were copied from the
PyFileObject structure to avoid dealing with the internals of that
structure from outside fileobject.c.
(Util_GetLine,Util_DropReadAhead,Util_ReadAhead,Util_ReadAheadGetLineSkip,
BZ2File_write,BZ2File_writelines,BZ2File_init,BZ2File_dealloc,
BZ2Comp_dealloc,BZ2Decomp_dealloc):
These functions were adapted to the change above.
(BZ2File_seek,BZ2File_close): Use PyObject_CallMethod instead of
getting the function attribute locally.
(BZ2File_notsup): Removed, since it's not necessary anymore to overload
truncate() and readinto() with dummy functions.
(BZ2File_methods): Added xreadlines() as an alias to BZ2File_getiter,
and removed truncate() and readinto().
(BZ2File_get_newlines,BZ2File_get_closed,BZ2File_get_mode,BZ2File_get_name,
BZ2File_getset):
Implemented getters for "newlines", "mode", and "name".
(BZ2File_members): Implemented "softspace" member.
(BZ2File_init): Reworked to create a file instance instead of initializing
itself as a file subclass. Also, pass "name" object untouched to the
file constructor, and use PyObject_CallFunction instead of building the
argument tuple locally.
(BZ2File_Type): Set tp_new to PyType_GenericNew, tp_members to
BZ2File_members, and tp_getset to BZ2File_getset.
(initbz2): Do not set BZ2File_Type.tp_base nor BZ2File_Type.tp_new.
* Doc/lib/libbz2.tex
Do not mention that BZ2File inherits from the file type.
* Removed the ifilter flag wart by splitting it into two simpler functions.
* Fixed comment tabbing in C code.
* Factored module start-up code into a loop.
Documentation:
* Re-wrote introduction.
* Added examples for quantifiers.
* Simplified python equivalent for islice().
* Documented split of ifilter().
Sets.py:
* Replace old ifilter() usage with new.
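A hedged sketch of the resulting pair under Python 2.3's itertools (the
predicate and data are illustrative):

    from itertools import ifilter, ifilterfalse

    data = [0, 1, 2, 3, 4]
    list(ifilter(lambda x: x % 2 == 0, data))       # [0, 2, 4]
    list(ifilterfalse(lambda x: x % 2 == 0, data))  # [1, 3]
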
__ne__ no longer complain if they don't know how to compare to the other
thing. If no meaningful way to compare is known, saying "not equal" is
sensible. This allows things like
if adatetime in some_sequence:
and
somedict[adatetime] = whatever
to work as expected even if some_sequence contains non-datetime objects,
or somedict non-datetime keys, because they only call __eq__.
It still complains (raises TypeError) for mixed-type comparisons in
contexts that require a total ordering, such as list.sort(), use as a
key in a BTree-based data structure, and cmp().
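For example (a sketch of the behavior described above):

    from datetime import datetime

    d = datetime(2003, 1, 25)
    assert (d == "not a datetime") is False   # __eq__ just says "not equal"
    assert (d != "not a datetime") is True
    assert d in [1, "two", d]                 # containment only needs __eq__
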
* Fixed typo in exception message for times()
* Filled in missing times_traverse()
* Document reasons that imap() did not adopt a None fill-in feature
* Document that count(sys.maxint) will wrap-around on overflow
* Add overflow test to islice()
* Check that starmap()'s argument returns a tuple
* Verify that imap()'s tuple re-use is safe
* Make a similar tuple re-use (with safety check) for izip()
guarantee to keep valid pointers in its slots.
tests: Moved ExtensionSaver from test_copy_reg into pickletester, and
use it in both places. Once extension codes get assigned, it won't be
safe to overwrite them willy nilly in test suites, and ExtensionSaver
does a thorough job of undoing any possible damage.
Beefed up the EXT[124] tests a bit, to check the smallest and largest
codes in each opcode's range too.
this clarifies that they are part of an internal API (albeit shared
between pickle.py, copy_reg.py and cPickle.c).
I'd like to do the same for copy_reg.dispatch_table, but worry that it
might be used by existing code. This risk doesn't exist for the
extension registry.
because it seems more consistent with the rest of the code.
cPickle_PyMapping_HasKey(): This extern function isn't used anywhere in
Python or Zope, so got rid of it.
extension implemented flush() was fixed. Scott also rewrote the
zlib test suite using the unittest module. (SF bug #640230 and
patch #678531.)
Backport candidate I think.
readability.
load_bool(): Now that I know the intended difference between _PUSH and
_APPEND, used the right one.
Pdata_grow(): Squashed out a redundant overflow test.
a function, then
p->f(arg1, arg2, ...)
is semantically the same as
(*p->f)(arg1, arg2, ...)
Changed all instances of the latter into the former. Given how often
the code embeds this kind of expression in an if test, the unnecessary
parens and dereferencing operator were a real drag on readability.
loops. Renamed DATA and BINDATA to DATA0 and DATA1. Included
disassemblies, but noted why we can't test them. Added XXX comment to
cPickle about a mysterious comment, where pickle and cPickle diverge
in how they number PUT indices.
Assorted code cleanups; e.g., sizeof(char) is 1 by definition, so there's
no need to do things like multiply by sizeof(char) in hairy malloc
arguments. Fixed an undetected-overflow bug in readline_file().
longobject.c: Fixed a really stupid bug in the new _PyLong_NumBits.
pickle.py: Fixed stupid bug in save_long(): When proto is 2, it
wrote LONG1 or LONG4, but forgot to return then -- it went on to
append the proto 1 LONG opcode too.
Fixed equally stupid cancelling bugs in load_long1() and
load_long4(): they *returned* the unpickled long instead of pushing
it on the stack. The return values were ignored. Tests passed
before only because save_long() pickled the long twice.
Fixed bugs in encode_long().
Noted that decode_long() is quadratic-time despite our hopes,
because long(string, 16) is still quadratic-time in len(string).
It's hex() that's linear-time. I don't know a way to make decode_long()
linear-time in Python, short of maybe transforming the 256's-complement
bytes into marshal's funky internal format, and letting marshal decode
that. It would be more valuable to make long(string, 16) linear time.
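For reference, a hedged Python 2 sketch of the 256's-complement format that
encode_long() and decode_long() implement (minimal-length, little-endian;
this mirrors the intent, not pickle.py's actual hex-based code):

    def encode_long(x):
        # little-endian 256's-complement bytes, minimal length; 0 -> ""
        if x == 0:
            return ""
        digits = []
        while 1:
            digits.append(x & 0xFF)
            x >>= 8
            if (x == 0 and not (digits[-1] & 0x80)) or \
               (x == -1 and (digits[-1] & 0x80)):
                break
        return "".join(map(chr, digits))

    def decode_long(data):
        # inverse of encode_long(); quadratic-time, like the real thing
        x = 0L
        for i in range(len(data)):
            x |= long(ord(data[i])) << (8 * i)
        if data and ord(data[-1]) & 0x80:
            x -= 1L << (8 * len(data))
        return x

    assert decode_long(encode_long(255L)) == 255L   # encodes as "\xff\x00"
    assert decode_long(encode_long(-1L)) == -1L     # encodes as "\xff"
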
pickletester.py: Added a global "protocols" vector so tests can try
all the protocols in a sane way. Changed test_ints() and test_unicode()
to do so. Added a new test_long(), but the tail end of it is disabled
because it "takes forever" under pickle.py (but runs very quickly under
cPickle: cPickle proto 2 for longs is linear-time).
functions. Reworked {time,datetime}_new() to do what their corresponding
setstates used to do in their state-tuple-input paths, but directly,
without constructing an object with throwaway state first. Tightened
the "is this a state tuple input?" paths to check the presumed state
string-length too, and to raise an exception if the optional second state
element isn't a tzinfo instance (IOW, check these paths for type errors
as carefully as the normal paths).
anymore either, so don't. This also allows us to get rid of obscure code
making __getnewargs__ identical to __getstate__ (hmm ... hope there
wasn't more to this than I realize!).
(pickling no longer needs them, and immutable objects shouldn't have
visible __setstate__() methods regardless). Rearranged the code to
put the internal setstate functions in the constructor sections.
Repaired the timedelta reduce() method, which was still producing
stuff that required a public timedelta.__setstate__() when unpickling.
Geoff writes:
This is yet another patch to _ssl.c that sets the
underlying BIO to non-blocking if the socket being
wrapped is non-blocking. It also correctly loops when
SSL_connect, SSL_write, or SSL_read indicates that it
needs to read or write more bytes.
This seems to fix bug #673797 which was not fixed by my
previous patch.
error handlers in the Unicode codecs: Negative
positions are treated as being relative to the end of
the input and out of bounds positions result in an
IndexError.
Also update the PEP and include an explanation of
this in the documentation for codecs.register_error.
Fixes a small bug in iconv_codecs: if the position
from the callback is negative *add* it to the size
instead of subtracting it.
From SF patch #677429.
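A hedged sketch of a handler relying on that convention (the handler name
and replacement are made up for illustration):

    import codecs

    def replace_and_back_up(exc):
        # Return (replacement, resume_position); a negative resume position
        # now counts from the end of the input, and out-of-bounds positions
        # raise IndexError.  A real handler must make sure it still makes
        # forward progress.
        if isinstance(exc, UnicodeDecodeError):
            return (u"?", -2)        # resume two characters before the end
        raise exc

    codecs.register_error("replace-and-back-up", replace_and_back_up)
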
needs of pickling longs. Backed off to a definition that's much easier
to understand. The pickler will have to work a little harder, but other
uses are more likely to be correct <0.5 wink>.
_PyLong_Sign(): New teensy function to characterize a long, as to <0, ==0,
or >0.
classes have a __reduce__ that returns (self.__class__,
self.__getstate__()). tzinfo.__reduce__() is a bit smarter, calling
__getinitargs__ and __getstate__ if they exist, and falling back to
__dict__ if it exists and isn't empty.
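In rough Python terms (a sketch of the strategy only, not the C code):

    class _PickleViaState(object):
        # date/time/datetime flavor: rebuild from class + state tuple
        def __reduce__(self):
            return (self.__class__, self.__getstate__())

    class _PickleTZInfo(object):
        # tzinfo flavor: honor __getinitargs__/__getstate__, else __dict__
        def __reduce__(self):
            args = ()
            if hasattr(self, "__getinitargs__"):
                args = self.__getinitargs__()
            state = getattr(self, "__dict__", None)
            if hasattr(self, "__getstate__"):
                state = self.__getstate__()
            if state:
                return (self.__class__, args, state)
            return (self.__class__, args)
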
for this iconv() implementation in the init function.
For encoding: use a byteswapped version of the input if
necessary.
For decoding: byteswap every piece returned by iconv()
if necessary (but not those pieces returned from the
callback)
Comment out test_sane() in the test script, because
whether this works depends on whether byte swapping
is necessary or not (and on Py_UNICODE_SIZE).
METH_NOARGS functions are still called with two arguments, one NULL,
so put that back into the function definitions (I didn't know this
until recently).
Make get_history_length() METH_NOARGS.
start for the C implementation of new pickle LONG1 and LONG4 opcodes (the
linear-time way to pickle a long is to call _PyLong_AsByteArray, but
the caller has no idea how big an array to allocate, and correct
calculation is a bit subtle).
compare against "the other" argument, we raise TypeError,
in order to prevent comparison from falling back to the
default (and worse than useless, in this case) comparison
by object address.
That's fine so far as it goes, but leaves no way for
another date/datetime object to make itself comparable
to our objects. For example, it leaves Marc-Andre no way
to teach mxDateTime dates how to compare against Python
dates.
Discussion on Python-Dev raised a number of impractical
ideas, and the simple one implemented here: when we don't
know how to compare against "the other" argument, we raise
TypeError *unless* the other object has a timetuple attr.
In that case, we return NotImplemented instead, and Python
will give the other object a shot at handling the
comparison then.
Note that comparisons of time and timedelta objects still
suffer the original problem, though.
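In rough Python terms, the ordering path now behaves like this sketch (the
real logic is in C; the helper name is made up):

    from datetime import date

    def _ordering_hook(self, other, same_type_compare):
        if isinstance(other, date):
            return same_type_compare(self, other)
        if hasattr(other, "timetuple"):
            return NotImplemented     # let e.g. mxDateTime try the comparison
        raise TypeError("can't compare %s to %s" % (type(self), type(other)))
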
This gives much the same treatment to datetime.fromtimestamp(stamp, tz) as
the last batch of checkins gave to datetime.now(tz): do "the obvious"
thing with the tz argument instead of a senseless thing.
checked in two days ago:
Refactoring of, and new rules for, dt.astimezone(tz).
dt must be aware now, and tz.utcoffset() and tz.dst() must not return None.
The old dt.astimezone(None) no longer works to change an aware datetime
into a naive datetime; use dt.replace(tzinfo=None) instead.
The tzinfo base class now supplies a new fromutc(self, dt) method, and
datetime.astimezone(tz) invokes tz.fromutc(). The default implementation
of fromutc() reproduces the same results as the old astimezone()
implementation, but tzinfo subclasses can override fromutc() if the
default implementation isn't strong enough to get the correct results
in all cases (for example, this may be necessary if a tzinfo subclass
models a time zone whose "standard offset" (wrt UTC) changed in some
year(s), or in some variations of double-daylight time -- the creativity
of time zone politics can't be captured in a single default implementation).
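For orientation, the default fromutc() amounts to roughly the following (a
sketch with the error checks omitted; dt arrives carrying a UTC time and
with tzinfo already set to self):

    def fromutc(self, dt):
        dtoff = dt.utcoffset()
        dtdst = dt.dst()
        delta = dtoff - dtdst          # self's standard offset
        if delta:
            dt += delta                # convert from UTC to standard local time
            dtdst = dt.dst()
        if dtdst:
            return dt + dtdst          # then apply DST if it's in effect
        return dt
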
60: Added support for the SkippedEntityHandler, new in Expat 1.95.4.
61: Added support for namespace prefixes, which can be enabled by setting the
"namespace_prefixes" attribute on the parser object.
65: Disable profiling changes for Python 2.0 and 2.1.
66: Update pyexpat to export the Expat 1.95.5 XML_GetFeatureList()
information, and tighten up a type declaration now that Expat is using
an incomplete type rather than a void * for the XML_Parser type.
67: Clarified a comment.
Added support for XML_UseForeignDTD(), new in Expat 1.95.5.
68: Refactor to avoid partial duplication of the code to construct an
ExpatError instance, and actually conform to the API for the exception
instance as well.
69: Remove some spurious trailing whitespace.
Add a special external-entity-ref handler that gets installed once a
handler has raised a Python exception; this can cancel actual parsing
earlier if there's an external entity reference in the input data
after the Python exception has been raised.
70: Untabify APPEND.
71: Backport PyMODINIT_FUNC for 2.2 and earlier.
When daylight time ends, an hour repeats on the local clock (for example,
in US Eastern, the clock jumps from 1:59 back to 1:00 again). Times in
the repeated hour are ambiguous. A tzinfo subclass that wants to play
with astimezone() needs to treat times in the repeated hour as being
standard time. astimezone() previously required that such times be
treated as daylight time. There seems no killer argument either way,
but Guido wants the standard-time version, and it does seem easier the
new way to code both American (local-time based) and European (UTC-based)
switch rules, and the astimezone() implementation is simpler.
underlying DB has already been closed (and thus all of its cursors).
This fixes a potential segfault.
SF pybsddb bug id 667343
bugfix: close the DB object when raising an exception due to an error
during DB.open. This prevents an exception when closing the
environment about not all databases being closed.
SF pybsddb bug id 667340
hoped it would be, but not too bad. A test had to change:
time.__setstate__() can no longer add a non-None tzinfo member to a time
object that didn't already have one, since storage for a tzinfo member
doesn't exist in that case.
into time. This is little more than *exporting* the datetimetz object
under the name "datetime", and similarly for timetz. A good implementation
of this change requires more work, but this is fully functional if you
don't stare too hard at the internals (e.g., right now a type named
"datetime" shows up as a base class of the type named "datetime"). The
docs also need extensive revision, not part of this checkin.
The attached patch enables shared extension
modules to build cleanly under Cygwin without
moving the static initialization of certain function
pointers (i.e., ones exported from the Python
DLL core) to a module initialization function.
Additionally, this patch fixes the modules that
have been changed in the past to accommodate
Cygwin.
cases, plus even tougher tests of that. This implementation follows
the correctness proof very closely, and should also be quicker (yes,
I wrote the proof before the code, and the code proves the proof <wink>).
Lesson learned: kids should not be allowed to use API's starting
with an underscore :-/
zipimport in 2.3a1 is even more broken than I thought: I attempted
to _PyString_Resize a string created by PyString_FromStringAndSize,
which fails for strings with length 0 or 1 since the latter returns
an interned string in those cases. This would cause a SystemError
with empty source files (and no matching pyc) in the zip archive.
I rewrote the offending code to simply allocate a new buffer and
avoid _PyString_Resize altogether.
Added a test that would've caught the problem.
(or None) now. In 2.3a1 they could also return an int or long, but that
was an unhelpfully redundant leftover from an earlier version wherein
they couldn't return a timedelta. TOOWTDI.
On Windows, it was very common to get microsecond values (out of
.today() and .now()) of the form 480999, i.e. with three trailing
nines. The platform precision is .001 seconds, and fp rounding
errors account for the rest. Under the covers, that 480999 started
life as the fractional part of a timestamp, like .4809999978.
Rounding that times 1e6 cures the irritation.
Confession: the platform precision isn't really .001 seconds. It's
usually worse. What actually happens is that MS rounds a cruder value
to a multiple of .001, and that suffers its own rounding errors.
A tiny bit of refactoring added a new internal utility to round
doubles.
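In other words (a toy illustration of the arithmetic, not the C code):

    frac = 0.4809999978           # fractional second from the OS timestamp
    int(frac * 1e6)               # 480999 -- truncation keeps the fp noise
    int(round(frac * 1e6))        # 481000 -- rounding removes it
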
the test set as it only tested with a zip archive in the current directory,
but it doesn't work at all for packages when the zip archive was specified
as an absolute path. It's a real embarrassing bug: a strchr call should
have been strrchr; fever apparently implies dyslexia.
Second stupid bug: the zipimport test failed with a name error
__importer__ (which I had renamed to __loader__ everywhere but here).
I would've sworn I ran the test after that change but that can't be true.
What I don't understand is that no one reported a failing test_zipimport.py
before the release of 2.3a1.
suggestion from Guido, along with a formal correctness proof of the
trickiest bit. The intricacy of the proof reveals how delicate this
is, but also how robust the conclusion: correctness doesn't rely on
dst() returning +- one hour (not all real time zones do!), it only
relies on:
1. That dst() returns a (any) non-zero value if and only if daylight
time is in effect.
and
2. That the tzinfo subclass implements a consistent notion of time zone.
The meaning of "consistent" was a hidden assumption, which is now an
explicit requirement in the docs. Alas, it's an unverifiable (by the
datetime implementation) requirement, but so it goes.
find a more elegant algorithm (OTOH, the hairy new implementation allows
user-written tzinfo classes to be elegant, so it's a big win even if
astimezone() remains hairy).
Darn! I've only got 10 minutes left to get falling-down drunk! I suppose
I'll have to smoke crack instead now.
The attached patch enables Cygwin Python to
build cleanly against the latest Cygwin Tcl/Tk
which is based on Tcl/Tk 8.3. It also prevents
building against the real X headers, if installed.
A variety of changes from Michael Hudson to get the compiler working
with 2.3. The primary change is the handling of SET_LINENO:
# The set_lineno() function and the explicit emit() calls for
# SET_LINENO below are only used to generate the line number table.
# As of Python 2.3, the interpreter does not have a SET_LINENO
# instruction. pyassem treats SET_LINENO opcodes as a special case.
A few other small changes:
- Remove unused code from pycodegen and pyassem.
- Fix error handling in parsermodule. When PyParser_SimpleParseString()
fails, it sets an exception with detailed info. The parsermodule
was clobbering that exception and replacing it with a generic
"could not parse string" exception. Keep the original exception.
an idea from Guido. This restores the invariant that the datetime implementation
never passes a datetime d to a tzinfo method unless d.tzinfo is the
tzinfo instance whose method is being called. That in turn allows
enormous simplifications in user-written tzinfo classes (see the Python
sandbox US.py and EU.py for fully fleshed-out examples).
d.astimezone(tz) also raises ValueError now if d lands in the one hour
of the year that can't be expressed in tz (this can happen iff tz models
both standard and daylight time). That it used to return a nonsense
result always ate at me, and it turned out that it seemed impossible to
force a consistent nonsense result under the new implementation (which
doesn't know anything about how tzinfo classes implement their methods --
it can only infer properties indirectly). Guido doesn't like this --
expect it to change.
New tests of conversion between adjacent DST-aware timezones don't pass
yet, and are commented out.
Running the datetime tests in a loop under a debug build leaks 9
references per test run, but I don't believe the datetime code is the
cause (it didn't leak the last time I changed the C code, and the leak
is the same if I disable all the tests that invoke the only function
that changed here). I'll pursue that next.
devices(), stereodevices(), recdevices() ->
controls(), stereocontrols(), reccontrols()
Based on recommendation of Hannu Savolainen <hannu@opensound.com>:
The right term to use for things like bass/treble/mic/vol/etc is
"control".
"Device" refers to different mixer devices (/dev/mixer0 to /dev/mixerN).
"Channel" cannot be used because it refers to mono/stereo/multich
channels. In fact most mixer controls have left/right channels so ...
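A hedged usage sketch of the renamed methods (device availability is
platform-dependent):

    import ossaudiodev

    mixer = ossaudiodev.openmixer()     # no mode argument; opened O_RDWR
    mask = mixer.controls()             # bitmask of available controls
    if mask & (1 << ossaudiodev.SOUND_MIXER_VOLUME):
        left, right = mixer.get(ossaudiodev.SOUND_MIXER_VOLUME)
    mixer.close()
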
- new import hooks in import.c, exposed in the sys module
- new module called 'zipimport'
- various changes to allow bootstrapping from zip files
I hope I didn't break the Windows build (or anything else for that
matter), but then again, it's been sitting on sf long enough...
Regarding the latest discussions on python-dev: zipimport sets
pkg.__path__ as specified in PEP 273, and likewise, sys.path items such as
/path/to/Archive.zip/subdir/ are supported again.
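A minimal usage sketch once this is in place (paths and module names are
illustrative):

    import sys

    sys.path.insert(0, '/path/to/Archive.zip/subdir/')  # zip path items work
    import mymodule             # hypothetical module stored inside the archive
    # mymodule.__loader__ is the zipimport.zipimporter that loaded it
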
make the callers figure out the right tzinfo arguments to pass, instead of
making the callees guess. The code is uglier this way, but it's less
brittle (when the callee guesses, the caller can get surprised).
of the timetz case. A tzinfo method will always see a datetimetz arg,
or None, now. In the former case, it's still possible that it will get
a datetimetz argument belonging to a different timezone. That will get
fixed next.
Check for readline 2.2 features. This should make it possible to
compile readline.c again with GNU readline versions 2.0 or 2.1; this
ability was removed in readline.c rev. 2.49. Apparently the older
versions are still in widespread deployment on older Solaris
installations. With an older readline, completion behavior is subtly
different (a space is always added).