Issue #2134: Clarify token.OP handling rationale in tokenize documentation.

Meador Inge 2012-01-19 00:22:22 -06:00
parent 1b468af7be
commit 972cfb9169
2 changed files with 9 additions and 0 deletions

Doc/library/tokenize.rst

@@ -15,6 +15,12 @@ implemented in Python. The scanner in this module returns comments as tokens
 as well, making it useful for implementing "pretty-printers," including
 colorizers for on-screen displays.
 
+To simplify token stream handling, all :ref:`operators` and :ref:`delimiters`
+tokens are returned using the generic :data:`token.OP` token type. The exact
+type can be determined by checking the token ``string`` field on the
+:term:`named tuple` returned from :func:`tokenize.tokenize` for the character
+sequence that identifies a specific operator token.
+
 The primary entry point is a :term:`generator`:
 
 .. function:: tokenize(readline)
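
As a quick illustration of the behavior the new paragraph describes (this snippet is not part of the commit, and the sample source line is made up), every operator and delimiter comes back with the generic token.OP type, and the token's ``string`` field is what distinguishes them:

    import io
    import token
    import tokenize

    source = b"x = (1 + 2) * 3\n"
    for tok in tokenize.tokenize(io.BytesIO(source).readline):
        if tok.type == token.OP:
            # '=', '(', '+', ')' and '*' all report the generic OP type;
            # tok.string carries the character sequence that identifies
            # the specific operator or delimiter.
            print(token.tok_name[tok.type], repr(tok.string))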

Misc/NEWS

@@ -418,6 +418,9 @@ Extension Modules
 Documentation
 -------------
 
+- Issue #2134: The tokenize documentation has been clarified to explain why
+  all operator and delimiter tokens are treated as token.OP tokens.
+
 - Issue #13513: Fix io.IOBase documentation to correctly link to the
   io.IOBase.readline method instead of the readline module.
 