raising an exception. This is consistent with calling the
constructors for the other builtin types -- called without argument
they all return the false value of that type. (SF patch #724135)
Thanks to Alex Martelli.
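For illustration (not part of the patch), the consistency is easy to check
interactively:

    # Each constructor, called without arguments, yields the false value of
    # its type.
    for ctor in (bool, int, float, complex, str, unicode, tuple, list, dict):
        print ctor.__name__, repr(ctor())
    # bool False / int 0 / float 0.0 / complex 0j / str '' / unicode u''
    # tuple () / list [] / dict {}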
On cygwin, the setup.py script uses unixccompiler.py for compiling and linking
C extensions. The unixccompiler.py script assumes that executables do not get
special extensions, which makes sense for Unix. However, on Cygwin,
executables get an .exe extension.
This causes a problem during the configuration step (python setup.py config),
in which some temporary executables may be generated. As unixccompiler.py does
not know about the .exe extension, distutils fails to clean up after itself: it
does not remove _configtest.exe but tries to remove _configtest instead.
The attached patch to unixccompiler.py sets the correct exe_extension for
cygwin by checking if sys.platform is 'cygwin'. With this patch, distutils
cleans up after itself correctly.
Michiel de Hoon
University of Tokyo, Human Genome Center.
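A rough sketch of the idea (not the actual patch): the fix amounts to a
sys.platform check that gives the Unix compiler class the right executable
suffix on Cygwin, so _configtest.exe gets removed after the config step.

    import sys
    from distutils.unixccompiler import UnixCCompiler

    # On Cygwin executables carry an .exe suffix; everywhere else the
    # default (no suffix) is correct.
    if sys.platform == 'cygwin':
        UnixCCompiler.exe_extension = '.exe'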
New functions:
    unsigned long PyInt_AsUnsignedLongMask(PyObject *);
    unsigned PY_LONG_LONG PyInt_AsUnsignedLongLongMask(PyObject *);
    unsigned long PyLong_AsUnsignedLongMask(PyObject *);
    unsigned PY_LONG_LONG PyLong_AsUnsignedLongLongMask(PyObject *);
New and changed format codes:
    b   unsigned char        0..UCHAR_MAX
    B   unsigned char        none **
    h   unsigned short       0..USHRT_MAX
    H   unsigned short       none **
    i   int                  INT_MIN..INT_MAX
    I * unsigned int         0..UINT_MAX
    l   long                 LONG_MIN..LONG_MAX
    k * unsigned long        none
    L   long long            LLONG_MIN..LLONG_MAX
    K * unsigned long long   none
Notes:
* New format codes.
** Changed from previous "range-and-a-half" to "none"; the
range-and-a-half checking wasn't particularly useful.
New test test_getargs2.py, to verify all this.
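For illustration only, the masking semantics (no range check; the value is
simply reduced modulo 2**bits) can be mimicked in plain Python.
as_unsigned_mask below is a made-up helper, not part of the C API:

    def as_unsigned_mask(value, bits=32):
        # Like the *Mask functions and the 'k'/'K' codes: negative and
        # out-of-range values are not rejected, just masked.
        return value & ((1 << bits) - 1)

    print as_unsigned_mask(-1)          # 4294967295 for a 32-bit unsigned long
    print as_unsigned_mask(2**35 + 7)   # 7: the high bits are silently dropped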
- Allow setting the destination install directory. If this is set then
it is used for the modules; other items (header files, etc.) are not
installed, and warnings are printed if the package would have liked to
install them.
Unfortunately, binary installs seem broken due to a tarfile bug (#721871)
or my misunderstanding of how tarfile works.
getpwnam()/getpwuid() return consistent data.
Change test_grp to check that getgrall() and
getgrnam()/getgrgid() return consistent data.
Add error checks similar to test_pwd.py.
Port test___all__.py to PyUnit.
From SF patch #662807.
interpreted by slicing, so negative values count from the end of the
list. This was the only place where such an interpretation was not
placed on a list index.
A small fix for bug #545855 and Greg Chapman's
addition of op code SRE_OP_MIN_REPEAT_ONE for
eliminating recursion on simple uses of pattern '*?' on a
long string.
After some more reflection (and no negative feedback), I am reverting the
original patch and applying my version, cygwinccompiler.py-shared.diff,
instead.
My reasons are the following:
1. support for older toolchains is retained
2. support for new toolchains (i.e., ld -shared) is added
The goal of my approach is to avoid breaking older toolchains while adding
better support for newer ones.
to look up properties declared in base classes. Looking at it I'm not sure
what the official scope of the property codes is; maybe it is only the
(OSA) class in which they are used. But giving them global scope hasn't been
a problem so far.
Regenerated the standard suites, which are now also space-indented.
to iso-8859-1.
GNUTranslations._parse(): Back out the addition of a test for
Project-ID-Version in the metadata. This was deliberately removed in
response to SF patch #700839.
Also, re-organize the code in _parse() so we parse the metadata header
containing the charset parameter before we try to decode any strings
using charset.
- range() now works even if the arguments are longs with magnitude
larger than sys.maxint, as long as the total length of the sequence
fits. E.g., range(2**100, 2**101, 2**100) is the following list:
[1267650600228229401496703205376L]. (SF patch #707427.)
- Expose NullTranslations and GNUTranslations to __all__
- Set the default charset to iso-8859-1. It used to be None, which
would cause problems with .ugettext() if the file had no charset
parameter. Arguably, the po/mo file would be broken, but I still think
iso-8859-1 is a reasonable default.
- Add a "coerce" default argument to GNUTranslations's constructor. The
reason for this is that in Zope, we want all msgids and msgstrs to be
Unicode. For the latter, we could use .ugettext() but there isn't
currently a mechanism for Unicode-ifying msgids.
The plan then is that the charset parameter specifies the encoding for
both the msgids and msgstrs, and both are decoded to Unicode when read.
For example, we might encode po files with utf-8. I think the GNU
gettext tools don't care.
Since this could potentially break code [*] that wants to use the
encoded interface .gettext(), the constructor flag is added, defaulting
to False. Most code I suspect will want to set this to True and use
.ugettext() (see the sketch after this list).
- A few other minor changes from the Zope project, including asserting
that a zero-length msgid must have a Project-ID-Version header for it to
be counted as the metadata record.
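A rough sketch of the intended usage under the changes above; treat the
exact constructor signature as tentative, and 'messages.mo' is only a
placeholder file name:

    import gettext

    # A catalog whose msgids/msgstrs should come back as Unicode.  If the
    # .mo file carries no charset, iso-8859-1 is assumed per the new default.
    fp = open('messages.mo', 'rb')
    t = gettext.GNUTranslations(fp, coerce=True)   # coerce as described above
    print t.ugettext('Hello')                      # always a unicode string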
* Doc - add doc for when functions were added
* UserString
* string object methods
* string module functions
'chars' is used for the last parameter everywhere.
These changes will be backported, since part of the changes
have already been made, but they were inconsistent.
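For example, 'chars' names the set of characters to remove, with whitespace
remaining the default:

    print "xx--hello--xx".strip("x-")       # hello
    print "   spam   ".rstrip()             # "   spam" -- default strips whitespace
    import string
    print string.strip("xxhelloxx", "x")    # the string module function takes chars too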
The cygwinccompiler.get_versions() function only handles versions numbers of
the form "x.y.z". The attached patch enhances get_versions() to handle "x.y"
too (i.e., the ".z" is optional).
This change causes the unnecessary "--entry _DllMain@12" link option to be
suppressed for recent Cygwin and Mingw toolchains. Additionally, it directs
recent Mingw toolchains to use gcc instead of dllwrap during linking.
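A minimal sketch (not the actual distutils code) of treating the third
version component as optional when pulling a number out of the tool's
banner; the banners shown are only examples:

    import re
    from distutils.version import StrictVersion

    def parse_tool_version(banner):
        # Accept "x.y" as well as "x.y.z" -- the trailing ".z" is optional.
        m = re.search(r'(\d+\.\d+(\.\d+)*)', banner)
        if m is None:
            return None
        return StrictVersion(m.group(1))

    print parse_tool_version('gcc (GCC) 3.2')        # 3.2
    print parse_tool_version('ld version 2.13.90')   # 2.13.90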
Currently, the cygwinccompiler.py compiler handling in
distutils is invoking the cygwin and mingw compilers
with the -static option.
Logically, this means that the linker should choose to
link to static libraries instead of shared/dynamically
linked libraries.
Current win32 binutils expect import libraries to have
a .dll.a suffix and static libraries to have .a suffix.
If -static is passed, it will skip the .dll.a
libraries. This is a pain if one has a tree with both
static and dynamic libraries using this naming
convention and wishes to use the dynamic libraries.
The -static option being passed in distutils is to get
around a bug in old versions of binutils where it would
get confused when it found the DLLs themselves.
The decision to use static or shared libraries is site
or package specific, and should be left to the setup
script or to command line options.
These never failed in 2.3, and the tests confirm it. They still blow up
in the 2.2 branch, despite that all the gc-vs-__del__ fixes from 2.3
have been backported (and this is expected -- 2.2 needs more work than
2.3 needed).
of PyObject_HasAttr(); the former promises never to execute
arbitrary Python code. Undid many of the changes recently made to
worm around the worst consequences of the fact that PyObject_HasAttr()
could execute arbitrary Python code.
Compatibility is hard to discuss, because the dangerous cases are
so perverse, and much of this appears to rely on implementation
accidents.
To start with, using hasattr() to check for __del__ wasn't only
dangerous, in some cases it was wrong: if an instance of an old-
style class didn't have "__del__" in its instance dict or in any
base class dict, but a getattr hook said __del__ existed, then
hasattr() said "yes, this object has a __del__". But
instance_dealloc() ignores the possibility of getattr hooks when
looking for a __del__, so while object.__del__ succeeds, no
__del__ method is called when the object is deleted. gc was
therefore incorrect in believing that the object had a finalizer.
The new method doesn't suffer that problem (like instance_dealloc(),
_PyObject_Lookup() doesn't believe __del__ exists in that case), but
does suffer a somewhat opposite-- and even more obscure --oddity:
if an instance of an old-style class doesn't have "__del__" in its
instance dict, and a base class does have "__del__" in its dict,
and the first base class with a "__del__" associates it with a
descriptor (an object with a __get__ method), *and* if that
descriptor raises an exception when __get__ is called, then
(a) the current method believes the instance does have a __del__,
but (b) hasattr() does not believe the instance has a __del__.
While these disagree, I believe the new method is "more correct":
because the descriptor *will* be called when the object is
destructed, it can execute arbitrary Python code at the time the
object is destructed, and that's really what gc means by "has a
finalizer": not specifically a __del__ method, but more generally
the possibility of executing arbitrary Python code at object
destruction time. Code in a descriptor's __get__() executed at
destruction time can be just as problematic as code in a
__del__() executed then.
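A small sketch of that corner case under Python 2.x classic-class
semantics (the class names are made up):

    class RaisingDescr(object):
        def __get__(self, obj, cls=None):
            raise RuntimeError("refusing to hand out __del__")

    class Base:                       # classic class
        __del__ = RaisingDescr()

    class Derived(Base):
        pass

    d = Derived()
    print hasattr(d, '__del__')       # False: the descriptor's __get__ raised
    del d                             # ...yet the descriptor is still consulted
                                      # at destruction time, so arbitrary Python
                                      # code can run here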
So I believe the new method is better on all counts.
Bugfix candidate, but it's unclear to me how all this differs in
the 2.2 branch (e.g., new-style and old-style classes already
took different gc paths in 2.3 before this last round of patches,
but don't in the 2.2 branch).
externally unreachable objects with finalizers, and externally unreachable
objects without finalizers reachable from such objects. This allows us
to call has_finalizer() at most once per object, and so limit the pain of
nasty getattr hooks. This fixes the failing "boom 2" example Jeremy
posted (a non-printing variant of which is now part of test_gc), via never
triggering the nasty part of its __getattr__ method.
the Standard_Suite, but various other suites do expect it (the Finder
implements get() without declaring it itself). It is probably another
case of OSA magic. Adding them to the global base class.
within a certain context. Give them an _Prop_ prefix, so they don't
accidentally obscure an element from another suite (as happened with
the Finder). Comparisons I'm not sure about, so I left them as global
names.
Also got rid of the lists of declarations; they serve no useful purpose.
you to say something like "talker.count(want=Address_Book.people)" instead
of having to manually create the aetypes.Type(Address_Book.people.want)
OSA type.
platforms which have dup(2). The makefile() method is built directly on top
of the socket without duplicating the file descriptor, allowing timeouts to
work properly. Includes a new test case (urllibnet) which requires the
network resource.
Closes bug 707074.
This is a first step towards regenerating the modules with newer, MacOSX,
versions of these programs, and using the programmatic interface to
get at the terminology instead of poking in resource files.
Clean up section headings; make the bars on the left less fat.
Adjust the display of properties slightly.
Don't show stuff inherited from the base 'object' type.
M run.py
1. Move subprocess socket handling to a subthread - "SockThread".
2. In the subprocess, implement a queue and global completion and exit
flags. Execute code after it is passed through the queue. (Currently,
user code is executed in SockThread. The next phase of development will
move the tail of the queue to MainThread.)
3. Implement an RPC message used to shut down the execution server.
4. Improve normal and exception subprocess exits.
(At this checkin a "pass loop" interrupt doesn't work on any platform. It
will be restored for all platforms once user code execution is moved to
MainThread.)
pack_float, pack_double, save_float: All the routines for creating
IEEE-format packed representations of floats and doubles simply ignored
that rounding can (in rare cases) propagate out of a long string of
1 bits. At worst, the end-off carry can (by mistake) interfere with
the exponent value, and then unpacking yields a result wrong by a factor
of 2. In less severe cases, it can end up losing more low-order bits
than intended, or fail to catch overflow *caused* by rounding.
Bugfix candidate, but I already backported this to 2.2.
In 2.3, this code remains in severe need of refactoring.
invalid, rather than returning a string of random garbage of the
estimated result length. Closes SF patch #703471 by Hye-Shik Chang.
Will backport to 2.2-maint (consider it done.)
(from 10) and in main() (from 1).
Add a -v option that shows the raw times. Repeating it cranks up the
display precision.
Always use the "best of N" form of output.
- Make all local variables in the template start with an underscore,
to prevent name conflicts with the timed code.
- Added a method to print a traceback that shows source lines from the
expanded template (see the sketch below).
- Use that method in main().
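For example, with the new traceback helper (the broken statement is
deliberate):

    import timeit

    t = timeit.Timer("x = spam + 1")   # 'spam' is undefined, so timing fails
    try:
        t.timeit()
    except:
        t.print_exc()   # traceback points at lines of the expanded template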
M run.py
1. Clarify that rpc.SocketIO._getresponse() currently blocks on socket.
2. Improve exception handling in subprocess when GUI terminates abruptly.
ALERT! A month ago or so I made test_ossaudiodev.py require the
'audio' resource, but I didn't make the necessary changes to
regrtest.py. This means that *nobody* has been testing the oss module
all that time!
When the null string is used as the terminator, it used to be the same
as None, meaning "collect all the data". In the current code, however, it
falls into an endless loop; this change reverts to the old behavior.
Contributed by Brett Cannon.
To prevent code duplication, I patched _strptime to use datetime's date
object to do Julian day, Gregorian, and day of the week calculations.
Patch also includes new regression tests to check the results and to make
sure the calculation gets triggered.
Very minor comment changes; the contact email address is also updated.
* Adds missing pop() methods to weakref.py (example below)
* Expands test suite to broaden coverage of objects with
a mapping interface.
Contributed by Sebastien Keim.
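For example (Thing is just a stand-in for any weak-referenceable class):

    import weakref

    class Thing(object):
        pass

    obj = Thing()
    d = weakref.WeakValueDictionary()
    d['key'] = obj
    print d.pop('key') is obj    # True; pop() now rounds out the mapping interface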
Quoting the path doesn't work on Win2K (cmd.exe) regardless, this is just
a hack to let the test pass again on Win2K (so long as Python isn't
installed in a path that does contain an embedded space). On Win2K it
looks like we'd also have to add a second pair of double quotes, around
the entire command line.
long header lines is now (properly) in the Header class. So we no
longer need _split_header() and we'll just defer to Header.encode()
when we have a plain string.
_encode_chunks(): Pass maxlinelen in instead of always using
self._maxlinelen, so we can adjust for shorter initial lines.
Pass this value through to _max_append().
encode(): Weave maxlinelen through to the _encode_chunks() call.
_split_ascii(): When recursively splitting a line on spaces
(i.e. lower level syntactic split), don't append the whole returned
string. Instead, split it on linejoiners and extend the lines up to
the last line (for proper packing). Calculate the linelen based on
the last element in this list.
- the test was sloppy about filenames: "0-REGTYPE-TEXT" was used where
the archive held "/0-REGTYPE-TEXT".
- tarfile extracts all files in binary mode, but the test expected to be able to
read and compare text files in text mode. Use universal newline mode.
part itself is longer than maxlen, and we aren't already splitting on
whitespace, then we recursively split the part on whitespace and
append that to this list.
preserve spaces in the encoded/unencoded word boundaries. RFC 2047 is
ambiguous here, but most people expect the space to be preserved.
Really closes SF bug # 640110.
_split(): New implementation of ASCII line splitting which should do a
better job and not be subject to the various weird artifacts (bugs)
reported. This should also do a better job of higher-level syntactic
splits by trying first to split on semis, then commas, then
whitespace.
Use a Timbot-ly binary search for optimal non-ASCII split points for
better packing of header lines. This also lets us remove one
recursion call. Don't pass in firstline, but instead pass in the
actual line length we're shooting for. Also pass in the list of split
characters.
encode(): Pass in the list of split characters so applications can
have some control over what "higher level syntactic breaks" are.
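For example, a sketch of that control (the header text and maxlinelen are
arbitrary):

    from email.Header import Header

    h = Header('A long Subject line; with several clauses, that needs '
               'folding to fit the usual limits', maxlinelen=40)
    print h.encode(splitchars=';, ')   # prefer breaking at ';', then ',', then spaces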
Also,
decode_header(): Transform binascii.Errors which can occur when
decoding a base64 RFC 2047 header with bogus data, into an
email.Errors.HeaderParseError. Closes SF bug #696712.
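For example (the encoded word below is just a made-up case of broken
base64 padding):

    from email.Header import decode_header
    from email.Errors import HeaderParseError

    try:
        decode_header('=?us-ascii?b?Qm9ndXM?=')   # payload has bad padding
    except HeaderParseError:
        print 'bogus RFC 2047 base64 data now raises HeaderParseError'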
_handle_multipart(): Ensure that if the preamble exists but does not
end in a newline, a newline is still added. Without this, the
boundary separator will end up on the preamble line, breaking the MIME
structure.
_make_boundary(): Handle differences in the decimal point character
based on the locale.
Charset: Alias __repr__ to __str__ for debugging.
header_encode(): When calling quopriMIME.header_encode(), set
maxlinelen=None so that the lower level function doesn't (also) try to
wrap/fold the line.
_max_append(): Change the comparison so that the new string is
concatenated if it's less than or equal to the max length.
header_encode(): Allow for maxlinelen == None to mean, don't do any
line splitting. This is because this module is mostly used by higher
level abstractions (Header.py) which already ensures line lengths. We
do this in a cheapo way by setting the max_encoding to some insanely
<100k wink> large number.
could be responsible for various unexplained problems with Python/OSA
interaction over the years):
- Enum values were passed as their string counterparts. Most applications
don't seem to mind this, but some do (InDesign).
- Attributes have never worked (!), as they were incorrectly passed
as parameters. Apparently nobody uses them much:-)
Eliminate extra blank line in shell output. Caused by stdout not being
flushed upon completion of subprocess' Executive.runcode() when user code
ends by outputting an unterminated line, e.g. print "test",
[ 555817 ] Flawed fcntl.ioctl implementation.
with my patch that allows for an array to be mutated when passed
as the buffer argument to ioctl() (details complicated by
backwards compatibility considerations -- read the docs!).
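For example, a sketch using the usual window-size query (assumes a Unix
tty on stdout and the termios constants):

    import array, fcntl, sys, termios

    # With the mutate flag true, the array is modified in place instead of a
    # new result string being returned.
    buf = array.array('h', [0, 0, 0, 0])
    fcntl.ioctl(sys.stdout.fileno(), termios.TIOCGWINSZ, buf, 1)
    print buf[0], buf[1]   # rows, columns filled in by the tty driver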
assertRaises. Fixed a repeated subtle bug in the inplace tests by
removing the possibility that a self.fail() call could raise a
TypeError that the test catches by mistake.
Allow mixed-type __eq__ and __ne__ for Set objects. This is messier than
I'd like because Set *also* implements __cmp__. I know of one glitch now:
cmp(s, t) returns 0 now when s and t are both Sets and s == t, despite
that Set.__cmp__ unconditionally raises TypeError (and by intent). The
rub is that __eq__ gets tried first, and the x.__eq__(y) True result
convinces Python that cmp(x, y) is 0 without even calling Set.__cmp__.
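The glitch in a nutshell (illustration only):

    from sets import Set

    s, t = Set([1, 2]), Set([1, 2])
    print s == 'not a set'   # False instead of raising: mixed-type __eq__/__ne__
    print cmp(s, t)          # 0 -- s.__eq__(t) is True, so Set.__cmp__ never runs
    try:
        s.__cmp__(t)
    except TypeError:
        print 'Set.__cmp__ still raises TypeError, by intent'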
rarely needed, but can sometimes be useful to release objects
referenced by the traceback held in sys.exc_info()[2]. (SF patch
#693195.) Thanks to Kevin Jacobs!