Issue #2134: Clarify token.OP handling rationale in tokenize documentation.

Meador Inge 2012-01-19 00:17:44 -06:00
parent d7664dee0c
commit da747c3d97
2 changed files with 9 additions and 0 deletions

Doc/library/tokenize.rst

@@ -15,6 +15,12 @@ implemented in Python. The scanner in this module returns comments as tokens as
 well, making it useful for implementing "pretty-printers," including colorizers
 for on-screen displays.
 
+To simplify token stream handling, all :ref:`operators` and :ref:`delimiters`
+tokens are returned using the generic :data:`token.OP` token type. The exact
+type can be determined by checking the token ``string`` field on the
+:term:`named tuple` returned from :func:`tokenize.tokenize` for the character
+sequence that identifies a specific operator token.
+
 The primary entry point is a :term:`generator`:
 
 .. function:: generate_tokens(readline)

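The behaviour described in the paragraph added above can be observed directly. The snippet below is a minimal sketch, not part of the commit, assuming a Python 3 interpreter: it tokenizes a short expression and shows that both "=" and "+" are reported with the generic token.OP type, with the specific operator identified by the string field of the returned named tuple.

    import io
    import token
    import tokenize

    # Tokenize a small expression; operators "=" and "+" both arrive as token.OP.
    for tok in tokenize.tokenize(io.BytesIO(b"x = 1 + 2\n").readline):
        if tok.type == token.OP:
            # token.tok_name maps the numeric type to its symbolic name ("OP"),
            # while the string field carries the character sequence ("=", "+")
            # that identifies the specific operator.
            print(token.tok_name[tok.type], repr(tok.string))

Running this prints "OP '='" followed by "OP '+'". Later releases (Python 3.3 and up) additionally expose an exact_type property on the returned named tuple, which resolves a token.OP token to its specific operator type without comparing strings.
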
Misc/NEWS

@@ -495,6 +495,9 @@ Tests
 Documentation
 -------------
 
+- Issue #2134: The tokenize documentation has been clarified to explain why
+  all operator and delimiter tokens are treated as token.OP tokens.
+
 - Issue #13513: Fix io.IOBase documentation to correctly link to the
   io.IOBase.readline method instead of the readline module.