bpo-5028: fix doc bug for tokenize (GH-11683)

https://bugs.python.org/issue5028
Andrew Carr 2019-05-30 13:31:51 -06:00 committed by Miss Islington (bot)
parent 1b69c09248
commit 1e36f75d63
3 changed files with 3 additions and 3 deletions


@@ -39,7 +39,7 @@ The primary entry point is a :term:`generator`:
 column where the token begins in the source; a 2-tuple ``(erow, ecol)`` of
 ints specifying the row and column where the token ends in the source; and
 the line on which the token was found. The line passed (the last tuple item)
-is the *logical* line; continuation lines are included. The 5 tuple is
+is the *physical* line; continuation lines are included. The 5 tuple is
 returned as a :term:`named tuple` with the field names:
 ``type string start end line``.
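The corrected wording above concerns the ``line`` field of the returned named tuple. A minimal sketch of what it means in practice, using the public tokenize API (the sample source string is invented for illustration): a token produced from a backslash-continued statement reports the physical line it was found on, not the whole logical line.

import io
import tokenize

# One logical line spread over two physical lines via a backslash continuation.
source = "x = 1 + \\\n    2\n"

for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    # Each token is a named tuple with the fields: type, string, start, end, line.
    # tok.line is the physical source line on which the token was found.
    print(tokenize.tok_name[tok.type], repr(tok.string), tok.start, tok.end, repr(tok.line))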


@@ -346,7 +346,7 @@ def generate_tokens(readline):
     column where the token begins in the source; a 2-tuple (erow, ecol) of
     ints specifying the row and column where the token ends in the source;
     and the line on which the token was found. The line passed is the
-    logical line; continuation lines are included.
+    physical line; continuation lines are included.
     """
     lnum = parenlev = continued = 0
     contstr, needcont = '', 0
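A small illustration (not part of the patch) of the "continuation lines are included" wording in this docstring: for a token that spans several physical lines, such as a triple-quoted string, the ``line`` field carries every physical line the token occupies. The sample string is invented for illustration.

import io
import tokenize

source = 'text = """first\nsecond"""\n'

for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    if tok.type == tokenize.STRING:
        # The string token starts on physical line 1 and ends on line 2.
        print(tok.start, tok.end)
        # tok.line contains both physical lines that make up the token.
        print(repr(tok.line))  # 'text = """first\nsecond"""\n'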


@@ -415,7 +415,7 @@ def tokenize(readline):
     column where the token begins in the source; a 2-tuple (erow, ecol) of
     ints specifying the row and column where the token ends in the source;
     and the line on which the token was found. The line passed is the
-    logical line; continuation lines are included.
+    physical line; continuation lines are included.
     The first token sequence will always be an ENCODING token
     which tells you which encoding was used to decode the bytes stream.
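The ENCODING behaviour mentioned in the last two lines of this hunk can be seen with the bytes-based entry point; a short sketch (the byte string is invented for illustration):

import io
import tokenize

source = b"# -*- coding: utf-8 -*-\nname = 'value'\n"

tokens = list(tokenize.tokenize(io.BytesIO(source).readline))

# The very first token reports the encoding used to decode the byte stream.
first = tokens[0]
print(tokenize.tok_name[first.type], first.string)  # ENCODING utf-8

# Later tokens carry the physical line they were found on.
for tok in tokens[1:]:
    print(tokenize.tok_name[tok.type], repr(tok.string), repr(tok.line))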