requires them. Disable executable bits and shebang lines in test and
benchmark files in order to prevent using a random system python, and in
source files of modules which don't provide a command-line interface.
parent: 80003:be83cbf4a789
parent: 80006:32df036e6b75
user: Georg Brandl <georg@python.org>
date: Sun Oct 28 10:50:11 2012 +0100
summary: merge with 3.3
* ftpwrapper now uses reference counting to ensure that the underlying socket
is closed when the ftpwrapper object is no longer in use.
* ftplib.FTP.ntransfercmd() now closes the socket if an error occurs.
Initial patch by Victor Stinner.
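
A minimal sketch of the reference-counting idea described above (the class and
method names here are illustrative, not the actual urllib ftpwrapper API): the
real close is deferred until every handed-out reference has been released.

    class RefCountedSocketHolder:
        # Illustrative only: close the real socket once the last user releases it.
        def __init__(self, sock):
            self._sock = sock
            self._refcount = 0
            self._close_pending = False

        def acquire(self):
            self._refcount += 1
            return self._sock

        def release(self):
            self._refcount -= 1
            if self._refcount == 0 and self._close_pending:
                self._close()

        def close(self):
            # Defer the real close until no references remain.
            self._close_pending = True
            if self._refcount == 0:
                self._close()

        def _close(self):
            sock, self._sock = self._sock, None
            if sock is not None:
                sock.close()
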
svn+ssh://pythondev@svn.python.org/python/branches/py3k
........
r85025 | senthil.kumaran | 2010-09-27 06:56:03 +0530 (Mon, 27 Sep 2010) | 6 lines
Fix Issue1595365 - Add the req.headers after the unredirected headers have
been added. This prevents accidental overwriting of the User-Agent header with
the default value. To preserve the old behavior, only headers not in the
unredirected headers will be updated.
........
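
A rough sketch of the merge rule described above (an illustrative helper, not
the real urllib2 handler code): headers set as unredirected headers win, and
the regular request headers only fill in names that are not already present.

    def merge_request_headers(headers, unredirected_headers):
        # Unredirected headers take precedence; ordinary headers only fill gaps.
        merged = dict(unredirected_headers)
        for name, value in headers.items():
            merged.setdefault(name, value)
        return merged

    # A default User-Agent in req.headers no longer clobbers one that was set
    # as an unredirected header.
    print(merge_request_headers(
        {"User-Agent": "Python-urllib/3.3"},
        {"User-Agent": "my-client/1.0"},
    ))
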
svn+ssh://pythondev@svn.python.org/python/branches/py3k
........
r84597 | antoine.pitrou | 2010-09-07 22:42:19 +0200 (mar., 07 sept. 2010) | 5 lines
Issue #8574: better implementation of test.support.transient_internet().
Original patch by Victor.
........
r84598 | antoine.pitrou | 2010-09-07 23:05:49 +0200 (mar., 07 sept. 2010) | 6 lines
Issue #9792: In case of connection failure, socket.create_connection()
would swallow the exception and raise a new one, making it impossible
to fetch the original errno, or to filter timeout errors. Now the
original error is re-raised.
........
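
With the original error re-raised, callers can inspect the errno or filter
timeouts directly; a small usage sketch (the address is a placeholder):

    import errno
    import socket

    try:
        conn = socket.create_connection(("localhost", 9), timeout=5)
    except socket.timeout:
        # Timeouts can now be told apart from other connection failures.
        print("connection timed out")
    except socket.error as exc:
        # The original error, with its errno, is re-raised instead of being
        # replaced, so callers can filter on it.
        if exc.errno == errno.ECONNREFUSED:
            print("connection refused")
        else:
            raise
    else:
        conn.close()
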
r84599 | antoine.pitrou | 2010-09-07 23:09:09 +0200 (mar., 07 sept. 2010) | 4 lines
Improve transient_internet() again to detect more network errors,
and use it in test_robotparser. Fixes #8574.
........
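
A usage sketch of transient_internet() as a test helper (the test body is
illustrative, not the actual test_robotparser code; later Python versions
relocate the helper within the test.support package): flaky network errors
inside the block become a skipped test rather than a failure.

    import unittest
    import urllib.request
    from test import support

    class RobotsTxtTest(unittest.TestCase):
        def test_fetch(self):
            # DNS failures, timeouts and unreachable hosts raised inside the
            # block are converted into a skip instead of a test failure.
            with support.transient_internet("www.python.org"):
                urllib.request.urlopen("http://www.python.org/robots.txt").close()
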
svn+ssh://pythondev@svn.python.org/python/branches/py3k
........
r83818 | senthil.kumaran | 2010-08-08 16:57:53 +0530 (Sun, 08 Aug 2010) | 4 lines
Fix Issue8280 - urllib2's Request method will remove fragments in the url.
This is how it should work; wget and curl work this way too. The old behavior was wrong.
........
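
The behaviour can be illustrated with the standard urldefrag() helper, which
performs the same split the message describes (shown with the Python 3 module
name; the fix itself talks about urllib2):

    from urllib.parse import urldefrag

    url, fragment = urldefrag(
        "https://docs.python.org/3/library/urllib.request.html#module-urllib.request")
    # url      == "https://docs.python.org/3/library/urllib.request.html"
    # fragment == "module-urllib.request"
    # Only `url` is sent to the server; the fragment stays client-side.
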
all the upper level libraries that use it, including urllib2.
Added and fixed some tests, and changed docs correspondingly.
Thanks to John J Lee for the patch and the pushing. :)
The moved tests use a local server rather than going out to external servers.
Accepts patch from issue2429.
Contributed by Jerry Seutter & Michael Foord (fuzzyman) at PyCon 2008.
in case there were transient failures. This will hopefully silence
the buildbots for this test. As we find other tests that have a problem,
we can fix them with a similar strategy, assuming it is successful. It worked
on my box in a loop for 10+ runs where it would otherwise have raised an exception.
alone class. This addresses the primary concern in
http://bugs.python.org/issue1706815
python-dev discussion here:
http://mail.python.org/pipermail/python-dev/2007-July/073749.html
I chose IOError rather than EnvironmentError as the base class since
socket objects are often used as transparent duck-typed file objects
in code already prepared to deal with IOError exceptions.
Also a minor fix:
urllib2 - fix a couple of places where IOError was raised rather than URLError.
For better or worse, URLError already inherits from IOError, so
this won't break any existing code.
test_urllib2net - replace bad ftp urls.
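
Since both socket.error and URLError sit under IOError, code written against
IOError keeps working; a small illustration using the Python 3 module names
(the commit predates the urllib2 rename, and the URL is a placeholder):

    import urllib.request

    try:
        urllib.request.urlopen("http://example.invalid/")
    except IOError as exc:
        # URLError (and, after this change, socket.error) both derive from
        # IOError, so existing IOError handlers keep catching them.
        print("request failed:", exc)
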
with tests in test_urllib2net.py (the network resource must be
enabled to execute them). Also modified test_urllib2.py because the
testing mock classes must take it into account. Docs are also
updated.
Python 2.5.
Also remove gopher support from urllib/urllib2. As both imported gopherlib,
using that support would have raised a DeprecationWarning.
a search path setup, some of these hosts resolve to the wrong address.
By appending a period to the hostname, the hostname should only resolve
to what we want it to resolve to. Hopefully this doesn't break different bots.
Also add more info to the failure message to aid in debugging test failures.
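
The trick relies on the trailing period making the name fully qualified, so
the resolver never appends a search-domain suffix; a minimal illustration
(the hostname is chosen arbitrarily):

    import socket

    # "www.python.org." (note the trailing dot) is an absolute DNS name, so a
    # resolver search path such as "corp.example.com" is never appended to it.
    print(socket.gethostbyname("www.python.org."))
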
The change to use the newer httplib interface admitted the possibility
that we'd get an HTTP/1.1 chunked response, but the code didn't handle
it correctly. The raw socket object can't be passed to addinfourl(),
because it would read the undecoded response. Instead, addinfourl()
must call HTTPResponse.read(), which will handle the decoding.
One extra wrinkle is that the HTTPResponse object can't be passed to
addinfourl() either, because it doesn't implement readline() or
readlines(). As a quick hack, use socket._fileobject(), which
implements those methods on top of a read buffer. (suggested by mwh)
Finally, add some tests based on test_urllibnet.
Thanks to Andrew Sawyers for originally reporting the chunked problem.
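
The central point, that the HTTPResponse object rather than the raw socket
must do the reading so chunked bodies come back decoded, can be sketched with
the Python 3 http.client names (the original fix was against the Python 2
urllib/httplib code and additionally had to wrap the response so that
readline()/readlines() were available; the host is a placeholder):

    import http.client

    conn = http.client.HTTPConnection("www.example.com")
    conn.request("GET", "/")
    resp = conn.getresponse()
    # resp.read() decodes any chunked transfer-encoding; reading the raw
    # socket directly would return the chunk-size framing undecoded.
    body = resp.read()
    conn.close()
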