cases if TERM isn't set or is unknown (perhaps we should only check if
unset or empty?)
Skip the test if TERM isn't set. This seems to occur when running under
buildbot and presumably cron.
For some more info check here:
http://mail.python.org/pipermail/python-checkins/2006-January/048704.html
Will backport if it works.
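A minimal sketch of that kind of guard, written with the modern unittest skip decorator (the decorator and test name here are illustrative, not necessarily what the original change used):

    import os
    import unittest

    class CursesLikeTest(unittest.TestCase):
        # Skip when TERM is unset or empty, which is the situation seen
        # under buildbot and presumably cron.
        @unittest.skipUnless(os.environ.get('TERM'), "TERM is not set")
        def test_terminal_dependent_behaviour(self):
            pass

    if __name__ == '__main__':
        unittest.main()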
the tests. This stops the confusing/annoying:
No handlers could be found for logger "cookielib"
message we got whenever some test running after test_logging
happened to use cookielib.py (when not using regrtest's -r,
this happened during test_urllib2; when using -r, it varied).
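One way to silence that warning, sketched with today's logging API (NullHandler postdates this change, so this is illustrative rather than the exact fix):

    import logging

    # Give the cookielib logger a do-nothing handler so "No handlers could
    # be found for logger ..." is never printed, regardless of test order.
    logging.getLogger("cookielib").addHandler(logging.NullHandler())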
returning 'a' as the delimiter. It now returns '|', but not because I
understood better what the code was supposed to do. Would someone who
understands the idea behind _guess_delimiter() (see its docstring) look to
see if my fallback choice is better than before or if it's just serendipity
that I picked the proper delimiter?
* set sq_repeat and sq_concat to NULL for user-defined new-style
classes, as a way to fix a number of related problems. See
test_descr.notimplemented(). One of these problems was fixed
in r25556 and r25557 but many more existed; this is a general
fix and thus reverts r25556-r25557.
* to avoid having PySequence_Repeat()/PySequence_Concat() fail
on user-defined classes, they now fall back to nb_add/nb_mul if
sq_concat/sq_repeat are not defined and the arguments appear to
be sequences (a Python-level sketch follows below).
* added tests.
Backport candidate.
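The fallback in the second point can be observed from Python through operator.concat(), which goes through PySequence_Concat(); a minimal sketch (the class is invented for illustration):

    import operator

    class MySeq:
        # Looks like a sequence (defines __len__/__getitem__) but only
        # implements concatenation via __add__ (nb_add at the C level).
        def __init__(self, items):
            self.items = list(items)
        def __len__(self):
            return len(self.items)
        def __getitem__(self, i):
            return self.items[i]
        def __add__(self, other):
            return MySeq(self.items + list(other))

    # There is no sq_concat slot to call here, so PySequence_Concat()
    # falls back to nb_add because both arguments look like sequences.
    print(operator.concat(MySeq("ab"), MySeq("cd")).items)  # ['a', 'b', 'c', 'd']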
last field was empty it would strip the delimiter and incorrectly guess that
"" was the delimiter. Reported in c.l.py by Laurent Laporte. Will
backport.
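A small illustration of the failure mode with csv.Sniffer (the sample data is made up, not the reporter's):

    import csv

    # Every row ends with an empty last field; the sniffer used to strip
    # the trailing delimiter and guess '' instead of ','.
    sample = "a,b,\nd,e,\nf,g,\n"
    dialect = csv.Sniffer().sniff(sample)
    print(repr(dialect.delimiter))  # expected: ','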
cookielib.LWPCookieJar and .MozillaCookieJar are documented to raise
cookielib.LoadError on attempt to load an invalid cookies file, but
raise IOError instead. Compromise by having LoadError subclass IOError.
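What the compromise means for callers, sketched with an invented file name (cookielib is http.cookiejar in Python 3):

    try:
        import cookielib  # Python 2 name used throughout this log
    except ImportError:
        import http.cookiejar as cookielib

    jar = cookielib.LWPCookieJar()
    try:
        jar.load("cookies.txt")  # hypothetical path
    except cookielib.LoadError:
        # An existing but malformed cookies file now raises LoadError,
        # as documented.
        pass
    except IOError:
        # Because LoadError subclasses IOError, code that already caught
        # IOError keeps working.
        pass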
This code generated a C assertion:
assert 1, ([s for s in x] +
[s for s in x])
pass
assert was completely broken; it needed to use the proper block.
compiler_use_block() is no longer needed, so remove it.
svn:ignore *.pyc *.pyo
svn:eol-style native
The .py files appear to have been checked in with Windows or inconsistent line
endings. The current check-in disrupts 'svn blame', but hopefully that is
irrelevant for freshly imported code.
If a line had multiple semi-colons and ended with a semi-colon, we would
loop too many times and access a NULL node. Exit the loop early if
there are no more children.
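The log does not include the triggering source; a line of the shape described would be something like:

    x = 1; y = 2;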
Delete globals which contain variable information at the end of the test.
This makes the test stable (no reported leaks) when running regrtest -R
to find reference leaks.
so it is only executed once. Otherwise the same search function is
repeatedly added to the codec search path when regrtest is run with -R
and leaks are reported.
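A minimal sketch of the register-once pattern being described (the names are invented; the actual test defines its own search function):

    import codecs

    _search_registered = False

    def ensure_search_registered(search_function):
        # Register the codec search function only once, so repeated runs
        # under regrtest -R do not keep growing the search path.
        global _search_registered
        if not _search_registered:
            codecs.register(search_function)
            _search_registered = True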
'[].__add__', to match what the other internal descriptor types provide:
'__objclass__' attribute, '__self__' member, and reasonable repr and
comparison.
Added a test.
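What this looks like from Python (output shown for a current CPython; exact reprs may differ):

    lst = [1, 2]
    m = lst.__add__
    print(m.__objclass__)      # <class 'list'>
    print(m.__self__ is lst)   # True
    print(m)                   # <method-wrapper '__add__' of list object at 0x...>
    print(m == lst.__add__)    # True: compared by bound object and slot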
accepts strings only for unpickling reasons. This check prevents the honest
mistake of passing a string like '2:59.0' to time() and getting an insane
object.
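An illustration of the mistake being guarded against (behaviour sketched against a current CPython; the exact exception may differ):

    import datetime

    try:
        datetime.time('2:59.0')
    except (TypeError, ValueError) as exc:
        # The string path exists only for unpickling, so an arbitrary
        # string is rejected instead of producing a bogus time object.
        print(type(exc).__name__)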
According to Jeremy, the comment only made sense when
the yield was disallowed. Now it's testing that the yield
is allowed, so it's not bad and the outer finally is irrelevant.
[ 1327110 ] wrong TypeError traceback in generator expressions
by removing the code that could stomp on the user's TypeError raised by the
iterable argument to ''.join() -- PySequence_Fast (now?) gives a perfectly
reasonable message itself. Also added a couple of tests.
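A small illustration of the behaviour after the fix (the generator is invented):

    def pieces():
        yield 'a'
        raise TypeError('original message from the iterable')

    try:
        ''.join(pieces())
    except TypeError as exc:
        # The TypeError raised inside the iterable propagates with its
        # own message instead of being replaced by join()'s generic one.
        print(exc)  # original message from the iterable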
Incorrect code was generated for:
foo(a = i for i in range(10))
This should have generated a SyntaxError. Fix the Grammar so
it raises a SyntaxError and test it.
I'm uncertain whether this should be backported. It turns
something that was previously syntactically valid into a SyntaxError. However,
the code would either be completely broken or do the wrong thing.
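For reference, the unparenthesized form is rejected while the explicitly parenthesized one remains valid (foo is a stand-in function):

    def foo(a):
        return list(a)

    # foo(a = i for i in range(10))        # SyntaxError after this change
    print(foo(a=(i for i in range(10))))   # parenthesized form is fine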
This change implements a new bytecode compiler, based on a
transformation of the parse tree to an abstract syntax defined in
Parser/Python.asdl.
The compiler implementation is not complete, but it is in stable
enough shape to run the entire test suite excepting two disabled
tests.
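The abstract syntax defined in Parser/Python.asdl is the structure later exposed to Python code as the ast module; purely as an illustration (that exposure postdates this commit):

    import ast

    # Parse a statement into the ASDL-defined abstract syntax and dump
    # the resulting node structure.
    tree = ast.parse("x = 1 + 2")
    print(ast.dump(tree))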
Problem: if two files are assigned the same inode
number by the filesystem, the second one will be added
as a hardlink to the first, which means that the
content will be lost.
The patched code checks whether the file's st_nlink is
greater than 1, so hardlinks are only created for files
that actually have several links pointing to them, which
is what GNU tar does.
Will backport.
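A sketch of the check in spirit (not the actual tarfile source):

    import os

    def may_be_stored_as_hardlink(path):
        # Only files with more than one link pointing at them are
        # candidates for hardlink members, matching GNU tar's behaviour.
        return os.lstat(path).st_nlink > 1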
PyUnicode_DecodeCharmap() now accepts a unicode string as the mapping
argument, which is used as a mapping table.
This code isn't used by any of the codecs yet.
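The new mapping form can be exercised through the codecs module; a minimal sketch (the table is invented):

    import codecs

    # Each byte value indexes into the unicode string: byte 0 -> 'a',
    # byte 1 -> 'b', and so on.
    table = 'abcdefghijklmnopqrstuvwxyz'
    print(codecs.charmap_decode(b'\x00\x01\x02', 'strict', table))
    # ('abc', 3)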
- SF Bug #772896, unknown encoding results in MemoryError, which is not helpful
I will only backport the segfault fix. I'll let Anthony decide if he wants
the other changes backported. I will do the backport if asked.
about illegal code points. The codec now supports PEP 293 style error handlers.
(This is a variant of Nik Haldimann's patch that detects truncated data.)
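A generic PEP 293 style handler, to show the registration and call protocol (the handler name and replacement character are arbitrary):

    import codecs

    def replace_with_question_mark(exc):
        # PEP 293 handlers receive the UnicodeError subclass instance and
        # return a (replacement, resume_position) pair.
        if isinstance(exc, UnicodeDecodeError):
            return ('?', exc.end)
        raise exc

    codecs.register_error('question-mark', replace_with_question_mark)
    print(b'ab\xffcd'.decode('utf-8', 'question-mark'))  # ab?cd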