\section{\module{urllib} ---
Open an arbitrary resource by URL}
\declaremodule{standard}{urllib}
\modulesynopsis{Open an arbitrary network resource by URL (requires sockets).}
\index{WWW}
\index{World-Wide Web}
\index{URL}
This module provides a high-level interface for fetching data across
the World-Wide Web. In particular, the \function{urlopen()} function
is similar to the built-in function \function{open()}, but accepts
Universal Resource Locators (URLs) instead of filenames. Some
restrictions apply --- it can only open URLs for reading, and no seek
operations are available.

It defines the following public functions:
\begin{funcdesc}{urlopen}{url\optional{, data}}
Open a network object denoted by a URL for reading. If the URL does
not have a scheme identifier, or if it has \file{file:} as its scheme
identifier, this opens a local file; otherwise it opens a socket to a
server somewhere on the network. If the connection cannot be made, or
if the server returns an error code, the \exception{IOError} exception
is raised. If all went well, a file-like object is returned. This
supports the following methods: \method{read()}, \method{readline()},
\method{readlines()}, \method{fileno()}, \method{close()},
\method{info()} and \method{geturl()}.
Except for the \method{info()} and \method{geturl()} methods,
these methods have the same interface as for
file objects --- see section \ref{bltin-file-objects} in this
manual. (It is not a built-in file object, however, so it can't be
used at those few places where a true built-in file object is
required.)

The \method{info()} method returns an instance of the class
\class{mimetools.Message} containing meta-information associated
with the URL. When the method is HTTP, these headers are those
returned by the server at the head of the retrieved HTML page
(including Content-Length and Content-Type). When the method is FTP,
a Content-Length header will be present if (as is now usual) the
server passed back a file length in response to the FTP retrieval
request. When the method is local-file, returned headers will include
a Date representing the file's last-modified time, a Content-Length
giving file size, and a Content-Type containing a guess at the file's
type. See also the description of the
\refmodule{mimetools}\refstmodindex{mimetools} module.
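As a rough illustration (the URL is only an example, and which headers
are returned depends on the server):

\begin{verbatim}
>>> import urllib
>>> f = urllib.urlopen('http://www.python.org/')
>>> headers = f.info()          # a mimetools.Message instance
>>> print headers.gettype()     # the Content-Type, e.g. 'text/html'
>>> print headers['Content-Length']
\end{verbatim}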
The \method{geturl()} method returns the real URL of the page. In
some cases, the HTTP server redirects a client to another URL. The
\function{urlopen()} function handles this transparently, but in some
cases the caller needs to know which URL the client was redirected
to. The \method{geturl()} method can be used to get at this
redirected URL.
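For example (whether a redirect actually occurs depends entirely on the
server being contacted):

\begin{verbatim}
>>> import urllib
>>> f = urllib.urlopen('http://www.python.org')
>>> print f.geturl()     # may differ from the URL passed in
\end{verbatim}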
If the \var{url} uses the \file{http:} scheme identifier, the optional
\var{data} argument may be given to specify a \code{POST} request
(normally the request type is \code{GET}). The \var{data} argument
must be in standard \file{application/x-www-form-urlencoded} format;
see the \function{urlencode()} function below.

The \function{urlopen()} function works transparently with proxies.
In a \UNIX{} or Windows environment, set the \envvar{http_proxy},
\envvar{ftp_proxy} or \envvar{gopher_proxy} environment variables to a
URL that identifies the proxy server before starting the Python
interpreter. For example (the \character{\%} is the command prompt):
\begin{verbatim}
% http_proxy="http://www.someproxy.com:3128"
% export http_proxy
% python
...
\end{verbatim}
In a Macintosh environment, \function{urlopen()} will retrieve proxy
information from Internet\index{Internet Config} Config.
\end{funcdesc}
\begin{funcdesc}{urlretrieve}{url\optional{, filename\optional{, hook}}}
Copy a network object denoted by a URL to a local file, if necessary.
If the URL points to a local file, or a valid cached copy of the
object exists, the object is not copied. Return a tuple
\code{(\var{filename}, \var{headers})} where \var{filename} is the
local file name under which the object can be found, and \var{headers}
is either \code{None} (for a local object) or whatever the
\method{info()} method of the object returned by \function{urlopen()}
returned (for a remote object, possibly cached). Exceptions are the
same as for \function{urlopen()}.

The second argument, if present, specifies the file location to copy
to (if absent, the location will be a tempfile with a generated name).

The third argument, if present, is a hook function that will be called
once on establishment of the network connection and once after each
block read thereafter. The hook will be passed three arguments; a
count of blocks transferred so far, a block size in bytes, and the
total size of the file. The third argument may be \code{-1} on older
FTP servers which do not return a file size in response to a retrieval
request.
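A minimal sketch of a hook that reports progress (the URL and local
file name are only illustrative):

\begin{verbatim}
>>> import urllib
>>> def report(block_count, block_size, total_size):
...     print '%d blocks read (block size %d, total size %d)' % (
...         block_count, block_size, total_size)
...
>>> filename, headers = urllib.urlretrieve('http://www.python.org/',
...                                        'python.html', report)
\end{verbatim}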
\end{funcdesc}
\begin{funcdesc}{urlcleanup}{}
Clear the cache that may have been built up by previous calls to
\function{urlretrieve()}.
\end{funcdesc}
\begin{funcdesc}{quote}{string\optional{, safe}}
Replace special characters in \var{string} using the \samp{\%xx} escape.
Letters, digits, and the characters \character{_,.-} are never quoted.
The optional \var{safe} parameter specifies additional characters
that should not be quoted --- its default value is \code{'/'}.
Example: \code{quote('/\~{}connolly/')} yields \code{'/\%7econnolly/'}.
\end{funcdesc}
\begin{funcdesc}{quote_plus}{string\optional{, safe}}
Like \function{quote()}, but also replaces spaces by plus signs, as
required for quoting HTML form values. Plus signs in the original
string are escaped unless they are included in \var{safe}.
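For example, \code{quote_plus('spam eggs')} yields \code{'spam+eggs'}.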
\end{funcdesc}
\begin{funcdesc}{unquote}{string}
Replace \samp{\%xx} escapes by their single-character equivalent.
Example: \code{unquote('/\%7Econnolly/')} yields \code{'/\~{}connolly/'}.
\end{funcdesc}
\begin{funcdesc}{unquote_plus}{string}
Like \function{unquote()}, but also replaces plus signs by spaces, as
required for unquoting HTML form values.
\end{funcdesc}
\begin{funcdesc}{urlencode}{dict}
Convert a dictionary to a ``url-encoded'' string, suitable to pass to
\function{urlopen()} above as the optional \var{data} argument. This
is useful to pass a dictionary of form fields to a \code{POST}
request. The resulting string is a series of
\code{\var{key}=\var{value}} pairs separated by \character{\&}
characters, where both \var{key} and \var{value} are quoted using
\function{quote_plus()} above.
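A brief illustration:

\begin{verbatim}
>>> import urllib
>>> urllib.urlencode({'eggs': 2})
'eggs=2'
>>> urllib.urlencode({'spam': 'cheese shop'})
'spam=cheese+shop'
\end{verbatim}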
\end{funcdesc}
The public functions \function{urlopen()} and \function{urlretrieve()}
create an instance of the \class{FancyURLopener} class and use it to perform
their requested actions. To override this functionality, programmers can
create a subclass of \class{URLopener} or \class{FancyURLopener}, then
assign that class to the \var{urllib._urlopener} variable before calling the
desired function. For example, applications may want to specify a different
\code{user-agent} header than \class{URLopener} defines. This can be
accomplished with the following code:
\begin{verbatim}
class AppURLopener(urllib.FancyURLopener):
    def __init__(self, *args):
        apply(urllib.FancyURLopener.__init__, (self,) + args)
        self.version = "App/1.7"

urllib._urlopener = AppURLopener()
\end{verbatim}
\begin{classdesc}{URLopener}{\optional{proxies\optional{, **x509}}}
Base class for opening and reading URLs. Unless you need to support
opening objects using schemes other than \file{http:}, \file{ftp:},
\file{gopher:} or \file{file:}, you probably want to use
\class{FancyURLopener}.

By default, the \class{URLopener} class sends a
\code{user-agent} header of \samp{urllib/\var{VVV}}, where
\var{VVV} is the \module{urllib} version number. Applications can
define their own \code{user-agent} header by subclassing
\class{URLopener} or \class{FancyURLopener} and setting the instance
attribute \var{version} to an appropriate string value before the
\method{open()} method is called.

Additional keyword parameters, collected in \var{x509}, are used for
authentication with the \file{https:} scheme. The keywords
\var{key_file} and \var{cert_file} are supported; both are needed to
actually retrieve a resource at an \file{https:} URL.
\end{classdesc}
\begin{classdesc}{FancyURLopener}{...}
\class{FancyURLopener} subclasses \class{URLopener} providing default
handling for the following HTTP response codes: 301, 302 or 401. For
301 and 302 response codes, the \code{location} header is used to
fetch the actual URL. For 401 response codes (authentication
required), basic HTTP authentication is performed.

The parameters to the constructor are the same as those for
\class{URLopener}.
\end{classdesc}
Restrictions:
\begin{itemize}
\item
Currently, only the following protocols are supported: HTTP (versions
0.9 and 1.0), Gopher (but not Gopher+), FTP, and local files.
\indexii{HTTP}{protocol}
\indexii{Gopher}{protocol}
\indexii{FTP}{protocol}
\item
The caching feature of \function{urlretrieve()} has been disabled
until I find the time to hack proper processing of Expiration time
headers.
\item
There should be a function to query whether a particular URL is in
the cache.
\item
For backward compatibility, if a URL appears to point to a local file
but the file can't be opened, the URL is re-interpreted using the FTP
protocol. This can sometimes cause confusing error messages.
\item
The \function{urlopen()} and \function{urlretrieve()} functions can
cause arbitrarily long delays while waiting for a network connection
to be set up. This means that it is difficult to build an interactive
web client using these functions without using threads.
\item
The data returned by \function{urlopen()} or \function{urlretrieve()}
is the raw data returned by the server. This may be binary data
(e.g. an image), plain text or (for example) HTML\index{HTML}. The
HTTP\indexii{HTTP}{protocol} protocol provides type information in the
reply header, which can be inspected by looking at the
\code{content-type} header. For the Gopher\indexii{Gopher}{protocol}
protocol, type information is encoded in the URL; there is currently
no easy way to extract it. If the returned data is HTML, you can use
the module \refmodule{htmllib}\refstmodindex{htmllib} to parse it.
\item
Although the \module{urllib} module contains (undocumented) routines
to parse and unparse URL strings, the recommended interface for URL
manipulation is in module \refmodule{urlparse}\refstmodindex{urlparse}.
\end{itemize}
\subsection{URLopener Objects \label{urlopener-objs}}
\sectionauthor{Skip Montanaro}{skip@mojam.com}
\class{URLopener} and \class{FancyURLopener} objects have the
following methods:
\begin{methoddesc}{open}{fullurl\optional{, data}}
Open \var{fullurl} using the appropriate protocol. This method sets
up cache and proxy information, then calls the appropriate open method with
its input arguments. If the scheme is not recognized,
\method{open_unknown()} is called. The \var{data} argument
has the same meaning as the \var{data} argument of \function{urlopen()}.
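As a brief sketch (the URL is only illustrative), an opener instance can
be used directly in place of \function{urlopen()}:

\begin{verbatim}
>>> import urllib
>>> opener = urllib.FancyURLopener()
>>> f = opener.open('http://www.python.org/')
>>> data = f.read()
\end{verbatim}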
\end{methoddesc}
\begin{methoddesc}{open_unknown}{fullurl\optional{, data}}
Overridable interface to open unknown URL types.
\end{methoddesc}
\begin{methoddesc}{retrieve}{url\optional{, filename\optional{, reporthook}}}
Retrieves the contents of \var{url} and places it in \var{filename}. The
return value is a tuple consisting of a local filename and either a
\class{mimetools.Message} object containing the response headers (for remote
URLs) or \code{None} (for local URLs). The caller must then open and read the
contents of \var{filename}. If \var{filename} is not given and the URL
refers to a local file, the input filename is returned. If the URL is
non-local and \var{filename} is not given, the filename is the output of
\function{tempfile.mktemp()} with a suffix that matches the suffix of the last
path component of the input URL. If \var{reporthook} is given, it must be
a function accepting three numeric parameters. It will be called after each
chunk of data is read from the network. \var{reporthook} is ignored for
local URLs.
\end{methoddesc}
\subsection{Examples}
\nodename{Urllib Examples}
Here is an example session that uses the \samp{GET} method to retrieve
a URL containing parameters:
\begin{verbatim}
>>> import urllib
>>> params = urllib.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0})
>>> f = urllib.urlopen("http://www.musi-cal.com/cgi-bin/query?%s" % params)
>>> print f.read()
\end{verbatim}
The following example uses the \samp{POST} method instead:
\begin{verbatim}
>>> import urllib
>>> params = urllib.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0})
>>> f = urllib.urlopen("http://www.musi-cal.com/cgi-bin/query", params)
>>> print f.read()
\end{verbatim}