Issue #25977: Fix typos in Lib/tokenize.py

Patch by John Walker.
This commit is contained in:
Berker Peksag 2015-12-30 01:41:58 +02:00
parent 3cc8f4b969
commit ff8d0873aa
1 changed file with 4 additions and 4 deletions


@@ -328,8 +328,8 @@ def untokenize(iterable):
     Round-trip invariant for full input:
         Untokenized source will match input source exactly
-    Round-trip invariant for limited intput:
-        # Output bytes will tokenize the back to the input
+    Round-trip invariant for limited input:
+        # Output bytes will tokenize back to the input
         t1 = [tok[:2] for tok in tokenize(f.readline)]
         newcode = untokenize(t1)
         readline = BytesIO(newcode).readline
@@ -465,10 +465,10 @@ def open(filename):
 def tokenize(readline):
     """
-    The tokenize() generator requires one argment, readline, which
+    The tokenize() generator requires one argument, readline, which
     must be a callable object which provides the same interface as the
     readline() method of built-in file objects.  Each call to the function
-    should return one line of input as bytes.  Alternately, readline
+    should return one line of input as bytes.  Alternatively, readline
     can be a callable function terminating with StopIteration:
         readline = open(myfile, 'rb').__next__  # Example of alternate readline
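The limited round-trip invariant corrected in the first hunk can be sketched as a small self-contained script (the `source` bytes here are an arbitrary illustration, not from the patch):

```python
from io import BytesIO
from tokenize import tokenize, untokenize

source = b"x = 1 + 2\n"

# Limited round-trip: keep only the (type, string) pairs of each token.
t1 = [tok[:2] for tok in tokenize(BytesIO(source).readline)]

# untokenize() returns bytes because the token stream starts with an
# ENCODING token; the output bytes tokenize back to the same pairs.
newcode = untokenize(t1)
t2 = [tok[:2] for tok in tokenize(BytesIO(newcode).readline)]
assert t1 == t2
```

Note that the output source is not guaranteed to match the input byte-for-byte in this limited mode; only the token sequence is preserved, which is exactly the distinction the corrected docstring draws between the full and limited invariants.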