tests. Alas, because only the "x86 OpenBSD trunk" buildbot fails
these tests, and test_descr stops after the first failure, there's
no sane way for me to fix these short of fixing one and then
waiting for the buildbot to reveal the next one.
to that id() can now return a Python long on a 32-bit box that allocates
addresses "with the sign bit set".
test_set.py test_subclass_with_custom_hash(): it's never been portably
legal for a __hash__() method to return id(self), but on 32-bit boxes
that never caused a problem before it became possible for id() to
return a Python long. Changed __hash__ here to return a Python int
regardless of platform.
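A minimal sketch of that kind of fix (the class name and mask are
illustrative, not the actual test code):

    class SetSubclass(set):
        def __hash__(self):
            # id(self) may be a Python long on 32-bit boxes with "high"
            # addresses; masking keeps the result a plain int everywhere.
            return int(id(self) & 0x7FFFFFFF)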
test_descr.py specials():
vereq(hash(c1), id(c1))
has never been a correct test -- just removed it (hash() is always
a Python int; id() may be a Python long).
default decimal context, causing test_tokenize to fail
if it ran after test_contextlib. Changed to restore
the decimal context in effect at the test's start.
soon after because the gmail address it connects to started timing
out on all the buildbot slaves. Rewrote the test to produce a
warning message (instead of failing) when the address times out.
Also removed the special case for Windows -- this test started to
work on Windows as soon as bug 1462352 was fixed.
mouse events. This makes the test fail. Catch that case and don't run
the tests. Should make the debian/ubuntu buildbots that run in a chroot
work again.
Will backport to release24-maint.
least as big as a long. I believe this to be a safe assumption that is being
made in many parts of CPython, but a check could be added.
len(xrange(sys.maxint)) works now, so fix the testsuite's odd exception for
64-bit platforms too. It also fixes 'zip(xrange(sys.maxint), it)' as a
portable-ish (if expensive) alternative to enumerate(it); since zip() now
calls len(), this was breaking on (real) 64-bit platforms. No additional
test was added for that behaviour.
- The buildbot "fetch it" step failed at the end, due to
using Unix syntax in the final "copy the DLL" step.
test_sqlite was skipped as a result.
- test_sqlite is no longer an expected skip on Windows.
Re-enable all the tests in test_trace.py except one. Still not sure
that these tests test what they used to test, but they pass. One
failing test seems to be caused by undocumented line number table
behavior in Python 2.4.
tracing/line number table in except blocks.
Reflow long lines introduced by col_offset changes. Update test_ast
to handle new fields in excepthandler.
As the note in Python.asdl says, we might want to rethink how attributes
are handled. Perhaps they should be the same as other fields, with
the primary difference being how they are defined for all types within
a sum.
Also fix asdl_c so that constructors with int fields don't fail when
passed a zero value.
converted to CR CR NL. There may be a way to fix this with tcsetattr,
but I couldn't find it. There was a similar problem on IRIX.
Just normalize the output and compare that.
Will backport.
like cause the interpreter to exit abruptly. If there's a way to fix this,
it would be good to really fix it. It could just be the operation of the
std C library and we just aren't supposed to do that.
When the test case is skipped, we print a message so the user can check
for themselves.
This is based on pysqlite2.1.3, and provides a DB-API interface in
the standard library. You'll need sqlite 3.2.2 or later to build
this - if you have an earlier version, the C extension module will
not be built.
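A minimal usage sketch of the DB-API interface this provides, assuming
the module is exposed as sqlite3:

    import sqlite3

    con = sqlite3.connect(":memory:")   # throwaway in-memory database
    cur = con.cursor()
    cur.execute("CREATE TABLE spam (id INTEGER, name TEXT)")
    cur.execute("INSERT INTO spam VALUES (?, ?)", (1, "eggs"))
    con.commit()
    cur.execute("SELECT name FROM spam WHERE id = ?", (1,))
    print(cur.fetchone()[0])            # -> eggs
    con.close()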
we are about to leave behind. An example of the cause of this leak can be
found in the leakers directory, in case we ever want to tackle the
underlying problem.
effect at the time test_decimal was imported. Else
running test_decimal had the bad side effect of
permanently changing the decimal context in effect.
That caused text_tokenize to fail if it ran after
test_decimal.
- The doctests in decistmt() weren't run at all when
test_tokenize was run via regrtest.py.
- Some expected output in decistmt() was Windows-specific
(but nobody noticed because the doctests weren't getting
run).
- test_roundtrip() didn't actually test anything when
running the tests with -O. Now it does.
- Changed test_roundtrip() to show the name of the input
file when it fails. That would have saved a lot of
time earlier today.
- Added a bunch of comments.
adds the following API calls: PySet_Clear(), _PySet_Next(), and
_PySet_Update(). The latter two are considered non-public. Tests and
documentation (for the public API) are included.
itertools.tee->instance->attribute->itertools.tee and
itertools.tee->teedataobject->itertools.tee cycles, which can be found now
that itertools.tee and its teedataobject participate in GC, remain findable
and cleanable. The test won't fail when they aren't, but at least the
frequent hunt-refleaks runs would spot the rise in refleaks.
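For reference, one way such a cycle can be constructed (names made up
for illustration):

    import itertools

    class Holder(object):
        pass

    def make_tee_cycle():
        h = Holder()
        def gen():
            yield h             # generator frame keeps a reference to h
        a, b = itertools.tee(gen())
        h.tee = a               # instance attribute points back at the tee
        return a                # tee -> teedataobject -> gen -> h -> tee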
def foo((x)): was getting recognized as requiring tuple unpacking
which is not correct.
Add tests for this case and the proper way to unpack a tuple of one:
def foo((x,)):
test_inspect was incorrect before. I'm not sure why it was passing,
but that has been corrected with a test for both functions above.
This means the test (and therefore inspect.getargspec()) is broken in 2.4.
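For reference, a tiny sketch of the two parameter forms being
distinguished (Python 2 tuple-parameter syntax; names are made up):

    def plain((x)):        # just a parenthesized name -- no unpacking
        return x

    def one_tuple((x,)):   # genuine unpacking of a one-element tuple
        return x

    plain(42)              # -> 42
    one_tuple((42,))       # -> 42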
that yields after a throw(). Make @contextmanager not reraise
exceptions, but return a false value in that case instead. Add test
cases for both behaviors.
as diagnosed by Nick Coghlan.
test_capi.py: A test module should never spawn a thread as
a side effect of being imported. Because this one did, the
segfault one of its thread tests caused didn't occur until
a few tests after regrtest.py thought test_capi was
finished. Repair that. Also join() the thread spawned
at the end, so that test_capi is truly finished when
regrtest reports that it's done.
_testcapimodule.c test_thread_state(): this spawns a
couple of non-threading.py threads, passing them a PyObject*
argument, but did nothing to ensure that those threads
finished before returning. As a result, the PyObject*
_could_ (although this was unlikely) get decref'ed out of
existence before the threads got around to using it.
Added explicit synchronization (via a Python mutex) so
that test_thread_state can reliably wait for its spawned
threads to finish.
This was a fair amount of rework of the patch. Refactored test_fork1 so it
could be reused by the new tests for wait3/4. Also made them into new style
unittests (derive from unittest.TestCase).
This patch adds a-LAW encoding to audioop and replaces the old
u-LAW encoding/decoding code with the current code from sox.
Possible issues: the code from sox uses int16_t.
Code by Lars Immisch
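Rough usage sketch of the new a-LAW calls next to the existing u-LAW
ones (the sample data is arbitrary):

    import audioop

    pcm = "\x00\x10\x00\x20\x00\x30\x00\x40"   # 16-bit linear samples
    alaw = audioop.lin2alaw(pcm, 2)            # 2 = sample width in bytes
    back = audioop.alaw2lin(alaw, 2)           # decode back to 16-bit linear
    ulaw = audioop.lin2ulaw(pcm, 2)            # the pre-existing u-LAW path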
of tuple) that provides incremental decoders and encoders (a way to use
stateful codecs without the stream API). Functions
codecs.getincrementaldecoder() and codecs.getincrementalencoder() have
been added.
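Sketch of how the incremental API is driven, feeding a UTF-8 byte
stream in arbitrary chunks:

    import codecs

    dec = codecs.getincrementaldecoder("utf-8")()
    parts = []
    for chunk in ("\xc3", "\xa4", "bc"):       # bytes arriving piecemeal
        parts.append(dec.decode(chunk))
    parts.append(dec.decode("", final=True))   # flush any pending state
    result = u"".join(parts)                   # -> u"\xe4bc"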
Since it's never intended that this script be run by
regrtest.py, it shouldn't have been named with a "test_"
prefix to begin with. A consequence is that we shouldn't
see useless:
test_hashlib_speed skipped -- not a unit test (stand alone benchmark)
lines in regrtest output anymore.
Anyway, these are the changes to the with-statement
so that __exit__ must return a true value in order
for a pending exception to be ignored.
The PEP (343) is already updated.
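A toy context manager illustrating the rule (the class name is made up):

    class Quiet(object):
        def __enter__(self):
            return self
        def __exit__(self, exc_type, exc_value, tb):
            # A true return value tells the with-statement to swallow the
            # pending exception; None/False lets it propagate as before.
            return exc_type is ValueError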
an error code, this let `self` leak. This is a disaster
on Windows, since `self` already points to a newly-opened
file object, and it was impossible for Python code to
close the thing since the only reference to it was in a
blob of leaked C memory.
test_hotshot test_bad_sys_path(): This new test provoked
the C bug above. This test passed, but left an open
"@test" file behind, which caused a massive cascade of
bogus test failures in later, unrelated tests on Windows.
Changed the test code to remove the @test file it leaves
behind, which relies on the change above to close that
file first.
The failure definitely seems timing related. This change *seems* to work.
Since the failure doesn't occur consistently, it's hard to tell.
Running these tests on Solaris in this order:
test_urllibnet test_operator test_cgi \
test_isinstance test_future test_ast test_logging
generally caused a failure (about 50% of the time) before the sleep.
I couldn't provoke the failure with the sleep.
This should really be cleaned up by using threading.Events or something
so it is not timing dependent and doesn't hang forever on failure.
test_codecmaps_tw test_importhooks test_socket_ssl
I don't completely understand the cause, but there's a lot of import magic
going on and this is the smallest change which fixes the problem.
want to wait forever if we don't receive the last message. But we also
don't want the test to fail if we shutdown too quickly. I can't reliably
reproduce this failure, so I'm kinda guessing this is the problem.
We'll see if this band-aid helps.
The culprit was an expression-less yield -- the first apparently in
the standard library. I added a unit test for this.
Also removed the hack to force compilation of test_with.py.
added message attribute compared to the previous version of Exception. It is
also a new-style class, making all exceptions now new-style. KeyboardInterrupt
and SystemExit inherit from BaseException directly. String exceptions now
raise DeprecationWarning.
Applies patch 1104669, and closes bugs 1012952 and 518846.
- New semantics for __exit__() -- it must re-raise the exception
if type is not None; the with-statement itself doesn't do this.
(See the updated PEP for motivation.)
- Added context managers to:
- file
- thread.LockType
- threading.{Lock,RLock,Condition,Semaphore,BoundedSemaphore}
- decimal.Context
- Added contextlib.py, which defines @contextmanager, nested(), closing().
- Unit tests all around; but no docs yet.
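For instance, closing() can be used roughly like this (the
with-statement still needs the __future__ import in 2.5):

    from __future__ import with_statement
    from contextlib import closing
    from StringIO import StringIO

    with closing(StringIO("buffered text")) as buf:
        data = buf.read()
    # buf.close() has been called here, even if the body raised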
- IMPORT_NAME takes an extra argument from the stack: the relativeness of
the import. Only passed to __import__ when it's not -1.
- __import__() takes an optional 5th argument for the same thing; it
defaults to -1 (old semantics: try relative, then absolute)
- 'from . import name' imports name (be it module or regular attribute)
from the current module's *package*. Likewise, 'from .module import name'
will import name from a sibling to the current module.
- Importing from outside a package is not allowed; 'from . import sys' in a
toplevel module will not work, nor will 'from .. import sys' in a
(single-level) package.
- 'from __future__ import absolute_import' will turn on the new semantics
for import and from-import: imports will be absolute, except for
from-import with dots.
Includes tests for regular imports and importhooks, parser changes and a
NEWS item, but no compiler-package changes or documentation changes.
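Roughly how the new forms read inside a package (the layout and names
below are hypothetical):

    # pkg/
    #     __init__.py
    #     util.py
    #     sub.py
    #
    # inside pkg/sub.py:
    from __future__ import absolute_import   # plain 'import x' is now absolute
    from . import util                       # sibling module of this package
    from .util import helper                 # a (hypothetical) name in pkg/util.py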
This was started by Mike Bland and completed by Guido
(with help from Neal).
This still needs a __future__ statement added;
Thomas is working on Michael's patch for that aspect.
There's a small amount of code cleanup and refactoring
in ast.c, compile.c and ceval.c (I fixed the lltrace
behavior when EXT_POP is used -- however I had to make
lltrace a static global).
breaks the parser module, because it adds the if/else construct as well as
two new grammar rules for backward compatibility. If no one else fixes
parsermodule, I guess I'll go ahead and fix it later this week.
The TeX code was checked with texcheck.py, but not rendered. There is
actually a slight incompatibility:
>>> (x for x in lambda:0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: iteration over non-sequence
changes into
>>> (x for x in lambda: 0)
File "<stdin>", line 1
(x for x in lambda: 0)
^
SyntaxError: invalid syntax
Since there's no way the former version can be useful, it's probably a
bugfix ;)
- The copy module now "copies" function objects (as atomic objects).
- dict.__getitem__ now looks for a __missing__ hook before raising
KeyError.
- Added a new type, defaultdict, to the collections module.
This uses the new __missing__ hook behavior added to dict (see above).
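Quick sketch of both pieces (example names are arbitrary):

    from collections import defaultdict

    counts = defaultdict(int)           # missing keys start at int() == 0
    for word in "the quick brown fox the lazy dog the".split():
        counts[word] += 1               # counts["the"] == 3 afterwards

    class Defaulting(dict):
        def __missing__(self, key):     # consulted by dict.__getitem__
            return "<missing: %r>" % (key,)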
Google for: eu_ES decimal point
shows that BSD locales had the eu_ES decimal point as
a single quote (') instead of a comma (,).
This seems to have been fixed 15 months ago, but the fix isn't on our
Mac and presumably others. So skip this broken locale.
test suite.
For urllib2, move the import of gopherlib into the
only function that uses it: users (including the
test suite) certainly shouldn't see a deprecation
warning just because they import urllib2! If they
actually use gopher_open(), fine, _then_ they should
see a deprecation warning.
test_file to fail on Windows in reality (can't delete
a still-open file), but a new bare "except:" hid that
test_file failed on Windows, and leaving behind the
still-open TESTFN caused a cascade of bogus failures
in later tests.
So, close the file, and stop hiding failure to unlink.
readline/readlines/read/readinto, loudly break by raising ValueError, rather
than silently deliver data out of order or hitting EOF prematurely.
Probably not a bugfix candidate, even though it affects no 'working' code.
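In other words, code along these lines now fails loudly rather than
silently (a sketch; the filename is made up and assumed to exist):

    f = open("some_text_file.txt")
    for line in f:
        break          # iteration has filled the file's read-ahead buffer
    try:
        f.read()       # mixing read() in here now raises ValueError
    except ValueError:
        pass
    f.close()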
Based on lsprof (patch #1212837) by Brett Rosen and Ted Czotter.
With further editing by Michael Hudson and myself.
History in svn repo: http://codespeak.net/svn/user/arigo/hack/misc/lsprof
* Module/_lsprof.c is the internal C module, Lib/cProfile.py a wrapper.
* pstats.py updated to display cProfile's caller/callee timings if available.
* setup.py and NEWS updated.
* documentation updates in the profiler section:
- explain the differences between the three profilers that we have now
- profile and cProfile can use a unified documentation, like (c)Pickle
- mention that hotshot is "for specialized usage" now
- removed references to the "old profiler" that no longer exists
* test updates:
- extended test_profile to cover delicate cases like recursion
- added tests for the caller/callee displays
- added test_cProfile, performing the same tests for cProfile
* TO-DO:
- cProfile gives a nicer name to built-in, particularly built-in methods,
which could be backported to profile.
- not tested on Windows recently!
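Typical use mirrors the profile module; a minimal sketch:

    import cProfile, pstats

    cProfile.run("sum(i*i for i in xrange(10000))", "prof.out")
    stats = pstats.Stats("prof.out")
    stats.sort_stats("cumulative").print_stats(10)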
Not sure why/how _handlers/_handlerList is out of sync. This could
indicate a deeper problem.
In test_logging, the only change absolutely necessary to get it working
was tcpserver.abort = 1. But we don't want to wait infinitely
to join the threads, so give a 2.0 second timeout.
There doesn't appear to be a need for a local abort variable
in serve_until_stopped, so just use the instance member.
Note the problem is only on HEAD, not in 2.4.
on both Unix (SVR4 and BSD) and Windows. Restores behaviour of passing -1
for anonymous memory on Unix. Use MAP_ANONYMOUS instead of _ANON since
the latter is deprecated according to Linux (gentoo) man pages.
Should we continue to allow mmap.mmap(0, length) to work on Windows?
0 is a valid fd.
Will backport bugfix portions.
tty opened by os.openpty() isn't always a tty according to os.isatty(), when
it's tested inside the process that opened it. Doesn't affect actual
functionality, as using a tty this way is rarely, if ever, useful. Ignoring
the failure allows the test for actual functionality to continue.
Will backport to 2.4-maint.
Expand set of errors caught in set_context(). Some new errors, some
old error messages changed for consistency.
Fixed error checking in generator expression code. The first set of
tests was an impossible condition given the grammar. In general, the
ast code uses REQ() for those sanity checks.
Fix some error handling for augmented assignments. As comments in the
code explain, set_context() ought to work here, but I got unexpected
crashes when I tried it. Should come back to this.
Add note to Grammar that yield expression is a special case.
Add doctest cases for SyntaxErrors raised by ast.c.
before the listener was ready (on gentoo x86 buildslave). This
caused the listener to not exit normally since nobody connected to it
(waited in accept()). The exception was raised in the other thread
and the test failed.
This fix doesn't completely eliminate the race, but should make it
near impossible to trigger. Hopefully it's good enough.
bugs which cause the interpreter to crash. I'm sure we can find a few
more. Many missing bugs deal with variations on unchecked infinite recursion
(like coerce.py).
cases if TERM isn't set or is unknown (perhaps we should only check if
unset or empty?)
Skip the test if TERM isn't set. This seems to occur when running under
buildbot and presumably cron.
For some more info check here:
http://mail.python.org/pipermail/python-checkins/2006-January/048704.html
Will backport if it works.
the tests. This stops the confusing/annoying:
No handlers could be found for logger "cookielib"
message we got whenever some test running after test_logging
happened to use cookielib.py (when not using regrtest's -r,
this happened during test_urllib2; when using -r, it varied).
returning 'a' as the delimiter. It now returns '|', but not because I
understood better what the code was supposed to do. Would someone that
understands the idea behind _guess_delimiter() (see its doc string) look to
see if my fallback choice is better than before or if it's just serendipity
that I picked the proper delimiter?
* set sq_repeat and sq_concat to NULL for user-defined new-style
classes, as a way to fix a number of related problems. See
test_descr.notimplemented(). One of these problems was fixed
in r25556 and r25557 but many more existed; this is a general
fix and thus reverts r25556-r25557.
* to avoid having PySequence_Repeat()/PySequence_Concat() failing
on user-defined classes, they now fall back to nb_add/nb_mul if
sq_concat/sq_repeat are not defined and the arguments appear to
be sequences.
* added tests.
Backport candidate.
last field was empty it would strip the delimiter and incorrectly guess that
"" was the delimiter. Reported in c.l.py by Laurent Laporte. Will
backport.
cookielib.LWPCookieJar and .MozillaCookieJar are documented to raise
cookielib.LoadError on attempt to load an invalid cookies file, but
raise IOError instead. Compromise by having LoadError subclass IOError.
This code triggered a C assertion failure:
    assert 1, ([s for s in x] +
               [s for s in x])
    pass
assert was completely broken; it needed to use the proper block.
compiler_use_block() is no longer used, so remove it.
svn:ignore *.pyc *.pyo
svn:eol-style native
The .py files appear to have been checked in with Windows or inconsistent line
endings. The current check-in disrupts the 'svn blame', but hopefully it is
irrelevant for freshly imported code.
If a line had multiple semi-colons and ended with a semi-colon, we would
loop too many times and access a NULL node. Exit the loop early if
there are no more children.
Delete globals which contain variable information at the end of the test.
This makes the test stable (no reported leaks) when running regrtest -R
to find reference leaks.
so it is only executed once. Otherwise the same search function is
repeatedly added to the codec search path when regrtest is run with -R
and leaks are reported.
'[].__add__', to match what the other internal descriptor types provide:
'__objclass__' attribute, '__self__' member, and reasonable repr and
comparison.
Added a test.
accepts strings only for unpickling reasons. This check prevents the honest
mistake of passing a string like '2:59.0' to time() and getting an insane
object.
According to Jeremy, the comment only made sense when
the yield was disallowed. Now it's testing that the yield
is allowed, so it's not bad and the outer finally is irrelevant.
[ 1327110 ] wrong TypeError traceback in generator expressions
by removing the code that can stomp on the users' TypeError raised by the
iterable argument to ''.join() -- PySequence_Fast (now?) gives a perfectly
reasonable message itself. Also, a couple of tests.
Incorrect code was generated for:
foo(a = i for i in range(10))
This should have generated a SyntaxError. Fix the Grammar so
it raises a SyntaxError and test it.
I'm uncertain whether this should be backported. It makes
something that was Syntactically valid invalid. However,
the code would either be completely broken or do the wrong thing.
This change implements a new bytecode compiler, based on a
transformation of the parse tree to an abstract syntax defined in
Parser/Python.asdl.
The compiler implementation is not complete, but it is in stable
enough shape to run the entire test suite excepting two disabled
tests.
Problem: if two files are assigned the same inode
number by the filesystem, the second one will be added
as a hardlink to the first, which means that the
content will be lost.
The patched code checks if the file's st_nlink is
greater than 1, so hardlinks are only created for files
that actually have several links pointing to them, which
is what GNU tar does.
Will backport.
PyUnicode_DecodeCharmap() now accepts a unicode string as the mapping
argument, which is used as a mapping table.
This code isn't used by any of the codecs yet.
- SF Bug #772896, unknown encoding results in MemoryError, which is not helpful
I will only backport the segfault fix. I'll let Anthony decide if he wants
the other changes backported. I will do the backport if asked.
about illegal code points. The codec now supports PEP 293 style error handlers.
(This is a variant of Nik Haldimann's patch that detects truncated data.)
A new hashlib module to replace the md5 and sha modules. It adds
support for additional secure hashes such as SHA-256 and SHA-512. The
hashlib module uses OpenSSL for fast platform optimized
implementations of algorithms when available. The old md5 and sha
modules still exist as wrappers around hashlib to preserve backwards
compatibility.
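Usage sketch -- the new constructors follow the same
update()/hexdigest() interface as the old md5 and sha modules:

    import hashlib

    h = hashlib.sha256()
    h.update("some bytes of interest")
    print(h.hexdigest())

    # Constructors can also be looked up by name:
    hashlib.new("sha512", "more data").hexdigest()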
Fix over-aggressive PyErr_Clear(). The same code fragment appears in
various guises in list.extend(), map(), filter(), zip(), and internally
in PySequence_Tuple().
a frozenset conversion when the initial search attempt fails with a
TypeError and the key is some type of set. Add a testcase.
* Eliminate a duplicate if-stmt.
s|=s, s&=s, s-=s, or s^=s). Add related tests.
* Improve names for several variables and functions.
* Provide alternate table access functions (next, contains, add, and discard)
that work with an entry argument instead of just a key. This improves
set-vs-set operations because we already have a hash value for each key
and can avoid unnecessary calls to PyObject_Hash(). Provides a 5% to 20%
speed-up for quick hashing elements like strings and integers. Provides
much more substantial improvements for slow hashing elements like tuples
or objects defining a custom __hash__() function.
* Have difference operations resize() when 1/5 of the elements are dummies.
Formerly, it was 1/6. The new ratio triggers less frequently and only
in cases that it can resize quicker and with greater benefit. The right
answer is probably either 1/4, 1/5, or 1/6. Picked the middle value for
an even trade-off between resize time and the space/time costs of dummy
entries.
- Handle both frozenset() and frozenset([]).
- Do not use singleton for frozenset subclasses.
- Finalize the singleton.
- Add test cases.
* Factor-out set_update_internal() from set_update(). Simplifies the
code for several internal callers.
* Factor constant expressions out of loop in set_merge_internal().
* Minor comment touch-ups.
[ 1229429 ] missing Py_DECREF in PyObject_CallMethod
Add a test in test_enumerate, which is a bit random, but suffices
(reversed_new calls PyObject_CallMethod under some circumstances).
Should significantly enhance the utility of the module by supporting
the creation of tools that modify the token stream and write back the
modified result.
[ 1180995 ] binary formats for marshalling floats
Adds 2 new type codes for marshal (binary floats and binary complexes), a
new marshal version (2), updates MAGIC and fiddles the de-serializing of
code objects to be less likely to clobber the real reason for failing if
it fails.
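For example (a sketch; both versions round-trip the value, version 2
just stores the float as 8 binary bytes):

    import marshal

    old = marshal.dumps(3.14, 1)   # version 1: float as a decimal string
    new = marshal.dumps(3.14, 2)   # version 2: binary float, new type code
    assert marshal.loads(old) == marshal.loads(new) == 3.14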
[ 1181301 ] make float packing copy bytes when they can
which hasn't been reviewed, despite numerous threats to check it in
anyway if no one reviews it. Please read the diff on the checkin list,
at least!
The basic idea is to examine the bytes of some 'probe values' to see if
the current platform is a IEEE 754-ish platform, and if so
_PyFloat_{Pack,Unpack}{4,8} just copy bytes around.
The rest is hair for testing, and tests.
crashing, and indirectly on the fact that hash codes in
random.randrange(1000000000) were very unlikely to exhibit collisions.
To see the problem, replace this number with 500 and observe the crash on
either del target[key] or del keys[i].
The fix prevents recursive mutation, just as in the key insertion case.
conversion using the proper magic slot (e.g., __int__()). Also move conversion
code out of PyNumber_*() functions in the C API into the nb_* function.
Applied patch #1109424. Thanks Walter Doewald.
test_site often failed under "regrtest.py -r", because this xmlrpc test
left sys with a setdefaultencoding attribute, but loading site.py removes
that attribute and test_site.py verifies the attribute is gone. Changed
this test to get rid of sys.setdefaultencoding if it didn't exist when
this test started.
Don't know whether this is a bugfix (backport) candidate.
the last character read is "\r" (and size is None, i.e. we're allowed to
call read() multiple times), so that we can return the correct line ending
(this additional character might be a "\n").
If the stream is temporarily exhausted, we might return the wrong line ending
(if the last character read is "\r" and the next one (after the byte stream
provides more data) is "\n"), but at least the atcr member ensures that we
get the correct number of lines (i.e. this "\n" will not be treated as
another line ending).
[ 1165306 ] Property access with decorator makes interpreter crash
Don't allow the creation of unbound methods with NULL im_class, because
attempting to call such crashes.
Backport candidate.
instead of raising a TypeError. Allows other types to successfully
implement __radd__() style methods.
* Remove future division import from test suite.
* Remove test suite's shadowing of __builtin__.dir().
etc., had comments after the colon, and some other cases. This patch
takes a simpler approach that doesn't rely on looking for a ':'. Thanks
Simon Percivall!
socket.gethostname() in the check for a valid return.
Also clarified docs (official and docstring) that the value from gethostname()
is returned if gethostbyaddr() doesn't do the job.
* Speed-up "x in y" where x has more than one character.
The existing code made excessive calls to the expensive memcmp() function.
The new code uses memchr() to rapidly find a start point for memcmp().
In addition to knowing that the first character is a match, the new code
also checks that the last character is a match. This significantly reduces
the incidence of false starts (saving memcmp() calls and making quadratic
behavior less likely).
Improves the timings on:
python -m timeit -r7 -s"x='a'*1000" "'ab' in x"
python -m timeit -r7 -s"x='a'*1000" "'bc' in x"
Once this code has proven itself, then string_find_internal() should refer
to it rather than running its own version. Also, something similar may
apply to unicode objects.
[ 1124295 ] Function's __name__ no longer accessible in restricted mode
which I introduced with a bit of mindless copy-paste when making
__name__ writable. You can't assign to __name__ in restricted mode,
which I'm going to pretend was intentional :)
PyNumber_Check, rather than trying to convert to a float. Reimplemented
writer - now raises exceptions when it sees a quotechar but neither
doublequote nor escapechar is set. Doublequote results are now more
consistent (eg, single quote should generate """", rather than "",
which is ambiguous).
when this limit is reached. The limit defaults to 128k and can be changed
with the module's set_field_limit() method. Previously, an unmatched quote
character could result in the entire file being read into the field
buffer, potentially exhausting virtual memory.
was done because we were previously performing validation of the dialect
from python, but this is now done within the C module. Also, the method
we were using to detect classes did not work with new-style classes.
a delimiter. Previously, the 'network location' (<authority> in RFC 2396) would
become 'www.example.com?query=spam', while RFC 2396 does not allow a '?' in
<authority>. See bug #548176 for further discussion.
`glob.glob()` currently calls itself recursively to build a list of matches of
the dirname part of the pattern and then filters by the basename part. This is
effectively BFS. ``glob.glob('*/*/*/*/*/foo')`` will build a huge list of all
directories 5 levels deep even if only a handful of them contain a ``foo``
entry. A generator-based recursion would never have to store these lists at
once by implementing DFS. This patch converts the `glob` function to an
`iglob` recursive generator. `glob()` now just returns ``list(iglob(pattern))``.
I also cleaned up the code a bit (reduced duplicate `has_magic()` checks and
created a second `glob0` helper func so that the main loop need not be
duplicated).
Thanks to Cherniavsky Beni for the patch!
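Usage stays the same; iglob just yields matches lazily (sketch):

    import glob

    for name in glob.iglob("*/*/*/*/*/foo"):   # DFS, one match at a time
        print(name)

    matches = glob.glob("*.py")                # unchanged eager interface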
test_threading.test_foreign_thread(): new test does a basic check that
"foreign" threads can use the threading module, and that they create
a _DummyThread instance in at least one use case. This isn't a very
good test, since a thread created by thread.start_new_thread() isn't
particularly "foreign".
trying to return a complete line even if a size parameter was given (see
http://www.python.org/sf/1076985). This leads to buffer overflows with long
source lines under Windows if e.g. cp1252 is used as the source encoding.
This patch reverts the behaviour of readline() to something that behaves more
like Python 2.3: If a size parameter is given, read() is called only once.
As a side effect of this, readline() now supports all types of linebreaks
supported by unicode.splitlines().
Note that the tokenizer is still broken and it's possible to provoke segfaults
(see http://www.python.org/sf/1089395).
more. Thanks to Simon Percivall!
The patch makes changes to inspect.py in two places:
* the pattern to match against functions at line 436 is
modified: lambdas should be matched even if not
preceded by whitespace, as long as "lambda" isn't part
of another word.
* the BlockFinder class is heavily modified. Changes are:
- checking for "def", "class" or "lambda" names
before setting self.started to True. Then checking the
same line for word characters after the colon (if the
colon is on that line). If so, and the line does not
end with a line continuation marker, raise EndOfBlock
immediately.
- adding self.passline to show that the line is to be
included and no more checking is necessary on that
line. Since a NEWLINE token is not generated when a
line continuation marker exists, this allows getsource
to continue with these functions even if the following
line would not be indented.
Also add a bunch of
'quite-unlikely-to-occur-in-real-life-but-working-anyway' tests.
is pointless.
Also add a note to the docs for the 'test' package that test cases should check
first that any conditions needed in the operating system are met before having
a test run.
Closes bug #1077302. Thanks, Ian Holsman.
(http://www.cygwin.com/faq/faq_3.html#SEC41).
Also check whether onerror has actually been called so this test will
fail on assertion instead of on trying to chmod a non-existent file.
__getitem__() methods: compute only the new spellings needed to satisfy
the given indexing object. This is purely an optimization (it should
have no effect on visible semantics).
regrtest.py: skip rgbimg and imageop as they are not built on 64-bit systems.
_tkinter.c: replace %.8x with %p for printing pointers.
setup.py: add lib64 into the library directories.
showing that doctest's pdb.set_trace() support was dramatically broken.
doctest.py _OutputRedirectingPdb.trace_dispatch(): Return a local trace
function instead of (implicitly) None. Else interaction with pdb was
bizarre, noticing only 'call' events. Amazingly, the existing set_trace()
tests didn't care.