Issue #26127: Fix links in tokenize documentation; patch by Silent Ghost

This commit is contained in:
Martin Panter 2016-01-16 04:32:52 +00:00
parent a3a58331a5
commit 20b1bfa6fb
1 changed file with 7 additions and 7 deletions


@@ -27,7 +27,7 @@ The primary entry point is a :term:`generator`:
 
 .. function:: tokenize(readline)
 
-   The :func:`tokenize` generator requires one argument, *readline*, which
+   The :func:`.tokenize` generator requires one argument, *readline*, which
    must be a callable object which provides the same interface as the
    :meth:`io.IOBase.readline` method of file objects. Each call to the
    function should return one line of input as bytes.
@@ -52,7 +52,7 @@ The primary entry point is a :term:`generator`:
    .. versionchanged:: 3.3
       Added support for ``exact_type``.
 
-:func:`tokenize` determines the source encoding of the file by looking for a
+:func:`.tokenize` determines the source encoding of the file by looking for a
 UTF-8 BOM or encoding cookie, according to :pep:`263`.
@@ -74,7 +74,7 @@ All constants from the :mod:`token` module are also exported from
 .. data:: ENCODING
 
    Token value that indicates the encoding used to decode the source bytes
-   into text. The first token returned by :func:`tokenize` will always be an
+   into text. The first token returned by :func:`.tokenize` will always be an
    ENCODING token.
@@ -96,17 +96,17 @@ write back the modified script.
    positions) may change.
 
    It returns bytes, encoded using the ENCODING token, which is the first
-   token sequence output by :func:`tokenize`.
+   token sequence output by :func:`.tokenize`.
 
-:func:`tokenize` needs to detect the encoding of source files it tokenizes. The
+:func:`.tokenize` needs to detect the encoding of source files it tokenizes. The
 function it uses to do this is available:
 
 .. function:: detect_encoding(readline)
 
    The :func:`detect_encoding` function is used to detect the encoding that
    should be used to decode a Python source file. It requires one argument,
-   readline, in the same way as the :func:`tokenize` generator.
+   readline, in the same way as the :func:`.tokenize` generator.
 
    It will call readline a maximum of twice, and return the encoding used
    (as a string) and a list of any lines (not decoded from bytes) it has read
@@ -120,7 +120,7 @@ function it uses to do this is available:
    If no encoding is specified, then the default of ``'utf-8'`` will be
    returned.
 
-   Use :func:`open` to open Python source files: it uses
+   Use :func:`.open` to open Python source files: it uses
    :func:`detect_encoding` to detect the file encoding.