Commit Graph

33 Commits

Author SHA1 Message Date
Serhiy Storchaka 0a8845e64f Issue #25317: Converted doctests in test_tokenize to unittests. 2015-10-06 18:13:38 +03:00
Jason R. Coombs 33b24f5c09 Issue #20387: Backport test from Python 3.4 2015-06-28 13:03:26 -04:00
Terry Jan Reedy bd7cf3ade3 Issue #9974: When untokenizing, use row info to insert backslash+newline.
Original patches by A. Kuchling and G. Rees (#12691).
2014-02-23 23:32:59 -05:00
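
A hedged sketch of the behavior this fix describes (example code is ours, not from the commit): a backslash line continuation is not itself a token, so untokenize() has to re-insert backslash+newline from the row numbers.

    # Hypothetical illustration: the continuation backslash is not a token,
    # so untokenize() reconstructs it from the row info in the positions.
    import io
    import tokenize

    source = "x = 1 + \\\n    2\n"
    toks = tokenize.generate_tokens(io.StringIO(source).readline)
    print(tokenize.untokenize(toks))
    # Expected: a backslash+newline reappears before the continued line,
    # though spacing around it may differ from the original.
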
Terry Jan Reedy 6858f00dab Issue #8478: Untokenizer.compat now processes first token from iterator input.
Patch based on lines from Georg Brandl, Eric Snow, and Gareth Rees.
2014-02-17 23:12:07 -05:00
Terry Jan Reedy 7751a34400 Untokenize: A logically incorrect assert tested user input validity.
Replace it with correct logic that raises ValueError for bad input.
Issues #8478 and #12691 reported the incorrect logic.
Add an Untokenize test case and an initial test method.
2014-02-17 16:45:38 -05:00
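
A minimal sketch of the rejected input (the token list below is ours, not from the patch): a token whose start position precedes the previous token's end should now raise ValueError instead of tripping an assert.

    # Hypothetical bad token stream: the second token starts before the
    # first one ends, which untokenize() should reject with ValueError.
    import tokenize

    bad = [
        (tokenize.NAME, "x", (1, 5), (1, 6), "x y\n"),
        (tokenize.NAME, "y", (1, 0), (1, 1), "x y\n"),  # position goes backwards
    ]
    try:
        tokenize.untokenize(bad)
    except ValueError as exc:
        print("rejected:", exc)
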
Ezio Melotti 7d24b1698a #16152: fix tokenize to ignore whitespace at the end of the code when no newline is found. Patch by Ned Batchelder. 2012-11-03 17:30:51 +02:00
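
A small sketch of the case this fix covers (example is ours): trailing spaces on a final line with no newline should tokenize cleanly.

    # Hypothetical check: trailing whitespace with no final newline should
    # no longer confuse the tokenizer.
    import io
    import tokenize

    toks = list(tokenize.generate_tokens(io.StringIO("1 + 2  ").readline))
    print([tokenize.tok_name[t.type] for t in toks])
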
Meador Inge 43f42fc3cb Issue #15054: Fix incorrect tokenization of 'b' and 'br' string literals.
Patch by Serhiy Storchaka.
2012-06-16 21:05:50 -05:00
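
A hedged sketch of the fixed behavior (ours, not from the patch): a bytes literal should come back as a single STRING token rather than a NAME followed by a string.

    # Hypothetical check for Issue #15054: the b'' / br'' prefix is part of
    # one STRING token.
    import io
    import tokenize

    for src in ("b'x'\n", "br'x'\n"):
        tok = next(tokenize.generate_tokens(io.StringIO(src).readline))
        print(tokenize.tok_name[tok.type], tok.string)
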
Antoine Pitrou d989f820c8 Merged revisions 85482 via svnmerge from
svn+ssh://pythondev@svn.python.org/python/branches/py3k

........
  r85482 | antoine.pitrou | 2010-10-14 17:34:31 +0200 (jeu., 14 oct. 2010) | 4 lines

  Replace the "compiler" resource with the more generic "cpu", so
  as to mark CPU-heavy tests.
........
2010-10-14 15:43:25 +00:00
Mark Dickinson 858624944c Spelling. 2010-06-29 07:37:25 +00:00
Georg Brandl a4f46e1292 Remove unused imports in test modules. 2010-02-07 17:03:15 +00:00
Benjamin Peterson e52657220c change test to what I intended 2009-10-15 01:56:25 +00:00
Benjamin Peterson 447dc15658 use floor division and add a test that exercises the tabsize codepath 2009-10-15 01:49:37 +00:00
Amaury Forgeot d'Arc da0c025a43 Issue2495: tokenize.untokenize did not insert space between two consecutive string literals:
"" "" => """", which is invalid code.

Will backport
2008-03-27 23:23:54 +00:00
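
A minimal sketch of the round trip this fix guards (ours, not from the commit): in 2-tuple "compat" mode, untokenize() must keep a separator between adjacent string literals.

    # Hypothetical round trip in compat (2-tuple) mode; the two empty string
    # literals must not be glued into the invalid '""""'.
    import io
    import tokenize

    source = '"" ""\n'
    pairs = [t[:2] for t in tokenize.generate_tokens(io.StringIO(source).readline)]
    print(repr(tokenize.untokenize(pairs)))  # expect the literals kept apart
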
Christian Heimes 6c052fd523 Fixed tokenize tests
The tokenize module doesn't understand __future__.unicode_literals yet
2008-03-27 11:46:37 +00:00
Eric Smith 0aed07ad80 Added PEP 3127 support to tokenize (with tests); added PEP 3127 to NEWS. 2008-03-17 19:43:40 +00:00
Brett Cannon b8d37359cd Move test_tokenize to doctest.
Done as GHOP 238 by Josip Dzolonga.
2008-03-13 20:33:10 +00:00
Neal Norwitz c1120b4b66 Hmm, this test has failed at least twice recently on the OpenBSD and
Debian sparc buildbots.  Since this goes through a lot of tests
and hits the disk a lot it could be slow (especially if NFS is involved).
I'm not sure if that's the problem, but printing periodic msgs shouldn't hurt.
The code was stolen from test_compiler.
2006-09-02 19:40:19 +00:00
Tim Peters 4582d7d905 A new test here relied on preserving invisible trailing
whitespace in expected output.  Stop that.
2006-08-25 22:26:21 +00:00
Tim Peters 147f9ae6db Whitespace normalization. 2006-08-25 22:05:39 +00:00
Jeremy Hylton 76467ba6d6 Bug fixes large and small for tokenize.
Small: Always generate a NL or NEWLINE token following
       a COMMENT token.  The old code did not generate an NL token if
       the comment was on a line by itself.

Large: The output of untokenize() will now match the
       input exactly if it is passed the full token sequence.  The
       old, crufty output is still generated if a limited input
       sequence is provided, where limited means that it does not
       include position information for tokens.

Remaining bug: There is no CONTINUATION token (\) so there is no way
for untokenize() to handle such code.

Also, expanded the number of doctests in hopes of eventually removing
the old-style tests that compare against a golden file.

Bug fix candidate for Python 2.5.1. (Sigh.)
2006-08-23 21:14:03 +00:00
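
A hedged sketch of the two modes described above (example code is ours): with full 5-tuples the output should match the input exactly; with 2-tuples only, the looser compat output is produced.

    # Hypothetical comparison of full and limited untokenize() input.
    import io
    import tokenize

    source = "x = 1  # comment\ny = 2\n"
    full = list(tokenize.generate_tokens(io.StringIO(source).readline))

    exact = tokenize.untokenize(full)                  # positions available
    approx = tokenize.untokenize(t[:2] for t in full)  # positions dropped

    print(exact == source)   # expect True: exact reconstruction
    print(repr(approx))      # still valid code, but spacing may differ
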
Jeremy Hylton 29bef0bbaa Baby steps towards better tests for tokenize 2006-08-23 18:37:43 +00:00
Tim Peters ef57567de0 Repaired a number of errors in this test:
- The doctests in decistmt() weren't run at all when
  test_tokenize was run via regrtest.py.

- Some expected output in decistmt() was Windows-specific
  (but nobody noticed because the doctests weren't getting
  run).

- test_roundtrip() didn't actually test anything when
  running the tests with -O.  Now it does.

- Changed test_roundtrip() to show the name of the input
  file when it fails.  That would have saved a lot of
  time earlier today.

- Added a bunch of comments.
2006-03-31 03:17:30 +00:00
Raymond Hettinger da99d1cbfe SF bug #1224621: tokenize module does not detect inconsistent dedents 2005-06-21 07:43:58 +00:00
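
A small sketch of the detected case (ours, not from the bug report): a dedent to a column that never appeared on the indent stack should raise IndentationError.

    # Hypothetical inconsistent dedent: the last line unindents to a level
    # that was never used.
    import io
    import tokenize

    bad = "if x:\n    y = 1\n  z = 2\n"
    try:
        list(tokenize.generate_tokens(io.StringIO(bad).readline))
    except IndentationError as exc:
        print("caught:", exc)
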
Raymond Hettinger 68c0453418 Add untokenize() function to allow full round-trip tokenization.
Should significantly enhance the utility of the module by supporting
the creation of tools that modify the token stream and write back the
modified result.
2005-06-10 11:05:19 +00:00
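
A toy illustration of that use case (ours, not from the commit): read the token stream, tweak it, and write the result back.

    # Hypothetical token-stream rewrite: upper-case every NAME token and
    # write the modified source back out.
    import io
    import tokenize

    def upper_names(source):
        toks = tokenize.generate_tokens(io.StringIO(source).readline)
        changed = [
            t._replace(string=t.string.upper()) if t.type == tokenize.NAME else t
            for t in toks
        ]
        return tokenize.untokenize(changed)

    print(upper_names("spam = eggs + 1\n"))  # -> SPAM = EGGS + 1
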
Tim Peters 0ff2ee0561 Effectively renamed tokenize_tests.py to have a txt extension instead.
This file isn't meant to be executed, it's data input for test_tokenize.py.
The problem with the .py extension is that it uses "non-standard"
indentation, and it's good to test that, but reindent.py keeps wanting
to fix it.  But fixing the indentation causes the expected-output file to
change, since exact line and column numbers are part of the
tokenize.tokenize() output getting tested.
2003-05-12 19:42:04 +00:00
Tim Peters 11cb813598 Close the file after tokenizing it. Because the open file object was
bound to a module global, the file object remained opened throughout
the test suite run.
2003-05-12 19:29:36 +00:00
Barry Warsaw 04f357cffe Get rid of relative imports in all unittests. Now anything that
imports e.g. test_support must do so using an absolute package name
such as "import test.test_support" or "from test import test_support".

This also updates the README in Lib/test, and gets rid of the
duplicate data directory in Lib/test/data (replaced by
Lib/email/test/data).

Now Tim and Jack can have at it. :)
2002-07-23 19:04:11 +00:00
Guido van Rossum e2ae77b8b8 SF patch #474590 -- RISC OS support 2001-10-24 20:42:55 +00:00
Fredrik Lundh f785042433 a bold attempt to fix things broken by MAL's verify patch: import
'verify' iff it's used by a test module...
2001-01-17 21:51:36 +00:00
Marc-André Lemburg 3661908a6a This patch removes all uses of "assert" in the regression test suite
and replaces them with a new API verify(). As a result the regression
suite will also perform its tests in optimization mode.

Written by Marc-Andre Lemburg. Copyright assigned to Guido van Rossum.
2001-01-17 19:11:13 +00:00
Fred Drake 004d5e6880 Make reindent.py happy (convert everything to 4-space indents!). 2000-10-23 17:22:08 +00:00
Guido van Rossum e26132cf5e Move unified findfile() into test_support.py 1998-04-23 20:13:30 +00:00
Guido van Rossum 0874f7fdaf Tests for tokenize.py (Ka-Ping Yee) 1997-10-27 22:15:06 +00:00