bpo-5028: Fix up rest of documentation for tokenize documenting line (GH-13686)
https://bugs.python.org/issue5028
This commit is contained in:
parent: eea47e0939
commit: 2a58b0636d
@@ -39,8 +39,8 @@ The primary entry point is a :term:`generator`:
 column where the token begins in the source; a 2-tuple ``(erow, ecol)`` of
 ints specifying the row and column where the token ends in the source; and
 the line on which the token was found. The line passed (the last tuple item)
-is the *physical* line; continuation lines are included. The 5 tuple is
-returned as a :term:`named tuple` with the field names:
+is the *physical* line. The 5 tuple is returned as a :term:`named tuple`
+with the field names:
 ``type string start end line``.

 The returned :term:`named tuple` has an additional property named
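As a quick illustration of the 5-tuple documented above, here is a minimal sketch (the sample source string is invented for the example); each yielded token is a named tuple with the fields ``type string start end line``:

```python
import io
import tokenize

# Invented sample source for illustration.
source = 'x = 1\n'

# generate_tokens() takes a readline callable returning str lines.
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))

# Each token is a named tuple with fields: type, string, start, end, line.
first = tokens[0]
print(first.string, first.start, first.end, repr(first.line))
```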
@@ -346,7 +346,7 @@ def generate_tokens(readline):
 column where the token begins in the source; a 2-tuple (erow, ecol) of
 ints specifying the row and column where the token ends in the source;
 and the line on which the token was found. The line passed is the
-physical line; continuation lines are included.
+physical line.
 """
 lnum = parenlev = continued = 0
 contstr, needcont = '', 0
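The "physical line" wording this hunk settles on can be checked with a short sketch (sample source invented for the example): for a statement continued across lines, a token's ``line`` field holds only the physical line it appears on, not the whole logical statement.

```python
import io
import tokenize

# Invented sample: a statement continued across two physical lines.
source = 'total = (1 +\n         2)\n'
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))

# The .line field is the *physical* line the token appears on,
# not the whole logical statement.
two = next(t for t in tokens if t.string == '2')
print(two.start[0], repr(two.line))
```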
@@ -415,7 +415,7 @@ def tokenize(readline):
 column where the token begins in the source; a 2-tuple (erow, ecol) of
 ints specifying the row and column where the token ends in the source;
 and the line on which the token was found. The line passed is the
-physical line; continuation lines are included.
+physical line.

 The first token sequence will always be an ENCODING token
 which tells you which encoding was used to decode the bytes stream.
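The ENCODING token mentioned in this docstring is easy to observe; a minimal sketch (sample bytes invented for the example):

```python
import io
import tokenize

# tokenize.tokenize() works on bytes and needs a readline returning bytes.
data = b'x = 1\n'
reader = io.BytesIO(data).readline
first = next(tokenize.tokenize(reader))

# The first token is always ENCODING, reporting the detected encoding.
print(tokenize.tok_name[first.type], first.string)
```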