Remove test_doctest's expected-output file.
Change test_doctest and test_difflib to pass regrtest's notion of verbosity on to doctest. Add explanation for a dozen "new" things to test/README.
parent a6daad2e55
commit f5f6c436c6

Lib/test/README (124 changed lines)
@@ -1,7 +1,7 @@
-Writing Python Regression Tests
--------------------------------
-Skip Montanaro
-(skip@mojam.com)
+Writing Python Regression Tests
+-------------------------------
+Skip Montanaro
+(skip@mojam.com)
 
 
 Introduction
@@ -26,6 +26,7 @@ your test cases to exercise it more completely. In particular, you will be
 able to refer to the C and Python code in the CVS repository when writing
 your regression test cases.
 
+
 PyUnit based tests
 
 The PyUnit framework is based on the ideas of unit testing as espoused
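
Illustration (not part of the diff): a minimal sketch of a PyUnit-style test module in the naming style the README prescribes; the SpamTest class and the values it checks are hypothetical, not part of this commit:

    import unittest

    class SpamTest(unittest.TestCase):

        def test_upper_case(self):
            # Hypothetical check; a real test exercises the module under test.
            self.assertEqual('spam'.upper(), 'SPAM')

    if __name__ == '__main__':
        unittest.main()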
@@ -42,22 +43,31 @@ and runs the tests defined in that class. All test methods in the
 Python regression framework have names that start with "test_" and use
 lower-case names with words separated with underscores.
 
+
 doctest based tests
 
 Tests written to use doctest are actually part of the docstrings for
 the module being tested.  Each test is written as a display of an
 interactive session, including the Python prompts, statements that would
 be typed by the user, and the output of those statements (including
-tracebacks!).  The module in the test package is simply a wrapper that
-causes doctest to run over the tests in the module.  The test for the
-doctest module provides a convenient example:
+tracebacks, although only the exception msg needs to be retained then).
+The module in the test package is simply a wrapper that causes doctest
+to run over the tests in the module.  The test for the difflib module
+provides a convenient example:
 
-import doctest
-doctest.testmod(doctest, verbose=1)
+from test_support import verbose
+import doctest, difflib
+doctest.testmod(difflib, verbose=verbose)
 
+If the test is successful, nothing is written to stdout (so you should not
+create a corresponding output/test_difflib file), but running regrtest
+with -v will give a detailed report, the same as if passing -v to doctest
+(that's what importing verbose from test_support accomplishes).
+
 See the documentation for the doctest module for information on
 writing tests using the doctest framework.
 
+
 "traditional" Python test modules
 
 The mechanics of how the "traditional" test system operates are fairly
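
Illustration (not part of the diff): how the two pieces fit together. The doctest itself lives in the docstring of the module under test, and the wrapper in Lib/test stays tiny; spam.py and its double() function are hypothetical:

    # In the module under test (a hypothetical spam.py), the docstring
    # carries the interactive session that doctest replays:
    def double(n):
        """Return n doubled.

        >>> double(21)
        42
        """
        return n * 2

    # The wrapper module (a hypothetical Lib/test/test_spam.py) would
    # then consist only of the three-line pattern shown in the hunk above:
    #     from test_support import verbose
    #     import doctest, spam
    #     doctest.testmod(spam, verbose=verbose)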
@@ -71,22 +81,25 @@ raised, the test is not run.
 Executing Test Cases
 
 If you are writing test cases for module spam, you need to create a file
-in .../Lib/test named test_spam.py and an expected output file in
-.../Lib/test/output named test_spam ("..." represents the top-level
-directory in the Python source tree, the directory containing the configure
-script).  From the top-level directory, generate the initial version of the
-test output file by executing:
+in .../Lib/test named test_spam.py.  In addition, if the tests are expected
+to write to stdout during a successful run, you also need to create an
+expected output file in .../Lib/test/output named test_spam ("..."
+represents the top-level directory in the Python source tree, the directory
+containing the configure script).  If needed, generate the initial version
+of the test output file by executing:
 
 ./python Lib/test/regrtest.py -g test_spam.py
 
-(If your test does not generate any output when run successfully, this
-step may be skipped; no file containing expected output will be needed
-in this case.)
+from the top-level directory.
 
 Any time you modify test_spam.py you need to generate a new expected
 output file.  Don't forget to desk check the generated output to make sure
-it's really what you expected to find!  To run a single test after modifying
-a module, simply run regrtest.py without the -g flag:
+it's really what you expected to find!  All in all it's usually better
+not to have an expected-out file (note that doctest- and unittest-based
+tests do not).
+
+To run a single test after modifying a module, simply run regrtest.py
+without the -g flag:
 
 ./python Lib/test/regrtest.py test_spam.py
 
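
Illustration (not part of the diff): a minimal "classic" test module of the kind this workflow serves; test_spam.py and what it prints are hypothetical. Everything it writes to stdout on a successful run must match the expected-output file:

    # Hypothetical classic test module Lib/test/test_spam.py.  Its stdout
    # is compared line by line against Lib/test/output/test_spam, which is
    # (re)generated with:  ./python Lib/test/regrtest.py -g test_spam.py
    print 'slicing:', 'spam and eggs'[:4]
    print 'arithmetic:', 2 + 2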
@@ -95,16 +108,26 @@ independently of the regression testing framework and see what it prints:
 
 ./python Lib/test/test_spam.py
 
-To run the entire test suite, make the "test" target at the top level:
+To run the entire test suite:
 
+[UNIX, + other platforms where "make" works] Make the "test" target at the
+top level:
+
 make test
 
-On non-Unix platforms where make may not be available, you can simply
-execute the two runs of regrtest (optimized and non-optimized) directly:
+[WINDOWS] Run rt.bat from your PCBuild directory.  Read the comments at
+the top of rt.bat for the use of special -d, -O and -q options processed
+by rt.bat.
+
+[OTHER] You can simply execute the two runs of regrtest (optimized and
+non-optimized) directly:
 
 ./python Lib/test/regrtest.py
 ./python -O Lib/test/regrtest.py
 
+But note that this way picks up whatever .pyc and .pyo files happen to be
+around.  The makefile and rt.bat ways run the tests twice, the first time
+removing all .pyc and .pyo files from the subtree rooted at Lib/.
+
 Test cases generate output based upon values computed by the test code.
 When executed, regrtest.py compares the actual output generated by executing
@@ -172,7 +195,9 @@ In designing test cases you should pay attention to the following:
 Regression Test Writing Rules
 
 Each test case is different.  There is no "standard" form for a Python
-regression test case, though there are some general rules:
+regression test case, though there are some general rules (note that
+these mostly apply only to the "classic" tests; unittest- and doctest-
+based tests should follow the conventions natural to those frameworks):
 
 * If your test case detects a failure, raise TestFailed (found in
   test_support).
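
Illustration (not part of the diff): a sketch of that first rule in the classic style; the checked value is arbitrary. The raise statement uses the old two-expression form, since test_support.TestFailed of this era was usable that way:

    from test_support import TestFailed

    # Classic style: detect the failure yourself, then raise TestFailed
    # with a reason string regrtest can report.
    result = 'spam'.upper()
    if result != 'SPAM':
        raise TestFailed, 'str.upper returned %r' % result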
@@ -212,14 +237,32 @@ provides the following useful objects:
 platform doesn't offer all the required facilities (like large
 file support), even if all the required modules are available.
 
-* findfile(file) - you can call this function to locate a file somewhere
-  along sys.path or in the Lib/test tree - see test_linuxaudiodev.py for
-  an example of its use.
-
 * verbose - you can use this variable to control print output.  Many
   modules use it.  Search for "verbose" in the test_*.py files to see
   lots of examples.
 
+* verify(condition, reason='test failed').  Use this instead of
+
+      assert condition[, reason]
+
+  verify() has two advantages over assert:  it works even in -O mode,
+  and it raises TestFailed on failure instead of AssertionError.
+
 * TESTFN - a string that should always be used as the filename when you
   need to create a temp file.  Also use try/finally to ensure that your
   temp files are deleted before your test completes.  Note that you
   cannot unlink an open file on all operating systems, so also be sure
   to close temp files before trying to unlink them.
 
+* sortdict(dict) - acts like repr(dict.items()), but sorts the items
+  first.  This is important when printing a dict value, because the
+  order of items produced by dict.items() is not defined by the
+  language.
+
+* findfile(file) - you can call this function to locate a file somewhere
+  along sys.path or in the Lib/test tree - see test_linuxaudiodev.py for
+  an example of its use.
+
 * use_large_resources - true iff tests requiring large time or space
   should be run.
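
Illustration (not part of the diff): a short sketch pulling several of these test_support helpers together; the exercised values are arbitrary:

    import os
    from test_support import verify, TESTFN, sortdict

    # verify() still checks under -O, where a bare assert would vanish.
    verify(len('spam') == 4, 'len is broken')

    # sortdict() gives printable, order-independent dict output.
    print sortdict({'b': 2, 'a': 1})

    # TESTFN names a scratch file; clean up in try/finally, closing the
    # file before unlinking since some platforms can't unlink open files.
    fp = open(TESTFN, 'w')
    try:
        fp.write('junk\n')
    finally:
        fp.close()
        os.unlink(TESTFN)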
@@ -258,3 +301,30 @@ As of this writing (July, 2000) these results are being generated nightly.
 You can refer to the summaries and the test coverage output files to see
 where coverage is adequate or lacking and write test cases to beef up the
 coverage.
+
+
+Some Non-Obvious regrtest Features
+
+* Automagic test detection:  When you create a new test file
+  test_spam.py, you do not need to modify regrtest (or anything else)
+  to advertise its existence.  regrtest searches for and runs all
+  modules in the test directory with names of the form test_xxx.py.
+
+* Miranda output:  If, when running test_spam.py, regrtest does not
+  find an expected-output file test/output/test_spam, regrtest
+  pretends that it did find one, containing the single line
+
+      test_spam
+
+  This allows new tests that don't expect to print anything to stdout
+  to not bother creating expected-output files.
+
+* Two-stage testing:  To run test_spam.py, regrtest imports test_spam
+  as a module.  Most tests run to completion as a side-effect of
+  getting imported.  After importing test_spam, regrtest also executes
+  test_spam.test_main(), if test_spam has a "test_main" attribute.
+  This is rarely needed, and you shouldn't create a module global
+  with name test_main unless you're specifically exploiting this
+  gimmick.  In such cases, please put a comment saying so near your
+  def test_main, because this feature is so rarely used it's not
+  obvious when reading the test code.
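
Illustration (not part of the diff): a hypothetical test module that deliberately uses the two-stage hook described above:

    from test_support import verbose

    # Stage 1: this module-level code runs when regrtest imports test_spam.
    if verbose:
        print 'test_spam: import-time checks running'

    # Stage 2: regrtest then calls test_main() explicitly.  The name
    # "test_main" is chosen deliberately to exploit regrtest's two-stage
    # gimmick; this comment is the marker the README asks for.
    def test_main():
        if verbose:
            print 'test_spam: test_main() running'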
Lib/test/output/test_doctest (deleted)

@@ -1,301 +0,0 @@
-test_doctest
-Running doctest.__doc__
-Trying: [1, 2, 3].remove(42)
-Expecting:
-Traceback (most recent call last):
-  File "<stdin>", line 1, in ?
-ValueError: list.remove(x): x not in list
-ok
-Trying: x = 12
-Expecting: nothing
-ok
-Trying: x
-Expecting: 12
-ok
-Trying:
-if x == 13:
-    print "yes"
-else:
-    print "no"
-    print "NO"
-    print "NO!!!"
-Expecting:
-no
-NO
-NO!!!
-ok
-Trying:
-if "yes" == \
-    "y" +   \
-    "es":   # in the source code you'll see the doubled backslashes
-    print 'yes'
-Expecting: yes
-ok
-Trying: assert "Easy!"
-Expecting: nothing
-ok
-Trying: import math
-Expecting: nothing
-ok
-Trying: math.floor(1.9)
-Expecting: 1.0
-ok
-0 of 8 examples failed in doctest.__doc__
-Running doctest.Tester.__doc__
-Trying: from doctest import Tester
-Expecting: nothing
-ok
-Trying: t = Tester(globs={'x': 42}, verbose=0)
-Expecting: nothing
-ok
-Trying:
-t.runstring(r'''
->>> x = x * 2
->>> print x
-42
-''', 'XYZ')
-Expecting:
-*****************************************************************
-Failure in example: print x
-from line #2 of XYZ
-Expected: 42
-Got: 84
-(1, 2)
-ok
-Trying: t.runstring(">>> x = x * 2\n>>> print x\n84\n", 'example2')
-Expecting: (0, 2)
-ok
-Trying: t.summarize()
-Expecting:
-*****************************************************************
-1 items had failures:
-    1 of 2 in XYZ
-***Test Failed*** 1 failures.
-(1, 4)
-ok
-Trying: t.summarize(verbose=1)
-Expecting:
-1 items passed all tests:
-    2 tests in example2
-*****************************************************************
-1 items had failures:
-    1 of 2 in XYZ
-4 tests in 2 items.
-3 passed and 1 failed.
-***Test Failed*** 1 failures.
-(1, 4)
-ok
-0 of 6 examples failed in doctest.Tester.__doc__
-Running doctest.Tester.__init__.__doc__
-0 of 0 examples failed in doctest.Tester.__init__.__doc__
-Running doctest.Tester.merge.__doc__
-Trying: from doctest import Tester
-Expecting: nothing
-ok
-Trying: t1 = Tester(globs={}, verbose=0)
-Expecting: nothing
-ok
-Trying:
-t1.runstring('''
->>> x = 12
->>> print x
-12
-''', "t1example")
-Expecting: (0, 2)
-ok
-Trying: t2 = Tester(globs={}, verbose=0)
-Expecting: nothing
-ok
-Trying:
-t2.runstring('''
->>> x = 13
->>> print x
-13
-''', "t2example")
-Expecting: (0, 2)
-ok
-Trying: common = ">>> assert 1 + 2 == 3\n"
-Expecting: nothing
-ok
-Trying: t1.runstring(common, "common")
-Expecting: (0, 1)
-ok
-Trying: t2.runstring(common, "common")
-Expecting: (0, 1)
-ok
-Trying: t1.merge(t2)
-Expecting: *** Tester.merge: 'common' in both testers; summing outcomes.
-ok
-Trying: t1.summarize(1)
-Expecting:
-3 items passed all tests:
-    2 tests in common
-    2 tests in t1example
-    2 tests in t2example
-6 tests in 3 items.
-6 passed and 0 failed.
-Test passed.
-(0, 6)
-ok
-0 of 10 examples failed in doctest.Tester.merge.__doc__
-Running doctest.Tester.run__test__.__doc__
-0 of 0 examples failed in doctest.Tester.run__test__.__doc__
-Running doctest.Tester.rundict.__doc__
-Trying:
-def _f():
-    '''>>> assert 1 == 1
-    '''
-Expecting: nothing
-ok
-Trying:
-def g():
-    '''>>> assert 2 != 1
-    '''
-Expecting: nothing
-ok
-Trying: d = {"_f": _f, "g": g}
-Expecting: nothing
-ok
-Trying: t = Tester(globs={}, verbose=0)
-Expecting: nothing
-ok
-Trying: t.rundict(d, "rundict_test") # _f is skipped
-Expecting: (0, 1)
-ok
-Trying: t = Tester(globs={}, verbose=0, isprivate=lambda x,y: 0)
-Expecting: nothing
-ok
-Trying: t.rundict(d, "rundict_test_pvt") # both are searched
-Expecting: (0, 2)
-ok
-0 of 7 examples failed in doctest.Tester.rundict.__doc__
-Running doctest.Tester.rundoc.__doc__
-Trying: t = Tester(globs={}, verbose=0)
-Expecting: nothing
-ok
-Trying:
-def _f():
-    '''Trivial docstring example.
-    >>> assert 2 == 2
-    '''
-    return 32
-Expecting: nothing
-ok
-Trying: t.rundoc(_f) # expect 0 failures in 1 example
-Expecting: (0, 1)
-ok
-0 of 3 examples failed in doctest.Tester.rundoc.__doc__
-Running doctest.Tester.runstring.__doc__
-Trying: t = Tester(globs={}, verbose=1)
-Expecting: nothing
-ok
-Trying:
-test = r'''
-# just an example
->>> x = 1 + 2
->>> x
-3
-'''
-Expecting: nothing
-ok
-Trying: t.runstring(test, "Example")
-Expecting:
-Running string Example
-Trying: x = 1 + 2
-Expecting: nothing
-ok
-Trying: x
-Expecting: 3
-ok
-0 of 2 examples failed in string Example
-(0, 2)
-ok
-0 of 3 examples failed in doctest.Tester.runstring.__doc__
-Running doctest.Tester.summarize.__doc__
-0 of 0 examples failed in doctest.Tester.summarize.__doc__
-Running doctest.is_private.__doc__
-Trying: is_private("a.b", "my_func")
-Expecting: 0
-ok
-Trying: is_private("____", "_my_func")
-Expecting: 1
-ok
-Trying: is_private("someclass", "__init__")
-Expecting: 0
-ok
-Trying: is_private("sometypo", "__init_")
-Expecting: 1
-ok
-Trying: is_private("x.y.z", "_")
-Expecting: 1
-ok
-Trying: is_private("_x.y.z", "__")
-Expecting: 0
-ok
-Trying: is_private("", "") # senseless but consistent
-Expecting: 0
-ok
-0 of 7 examples failed in doctest.is_private.__doc__
-Running doctest.run_docstring_examples.__doc__
-0 of 0 examples failed in doctest.run_docstring_examples.__doc__
-Running doctest.testmod.__doc__
-0 of 0 examples failed in doctest.testmod.__doc__
-Running doctest.__test__._TestClass.__doc__
-Trying: _TestClass(13).get() + _TestClass(-12).get()
-Expecting: 1
-ok
-Trying: hex(_TestClass(13).square().get())
-Expecting: '0xa9'
-ok
-0 of 2 examples failed in doctest.__test__._TestClass.__doc__
-Running doctest.__test__._TestClass.__init__.__doc__
-Trying: t = _TestClass(123)
-Expecting: nothing
-ok
-Trying: print t.get()
-Expecting: 123
-ok
-0 of 2 examples failed in doctest.__test__._TestClass.__init__.__doc__
-Running doctest.__test__._TestClass.get.__doc__
-Trying: x = _TestClass(-42)
-Expecting: nothing
-ok
-Trying: print x.get()
-Expecting: -42
-ok
-0 of 2 examples failed in doctest.__test__._TestClass.get.__doc__
-Running doctest.__test__._TestClass.square.__doc__
-Trying: _TestClass(13).square().get()
-Expecting: 169
-ok
-0 of 1 examples failed in doctest.__test__._TestClass.square.__doc__
-Running string doctest.__test__.string
-Trying: x = 1; y = 2
-Expecting: nothing
-ok
-Trying: x + y, x * y
-Expecting: (3, 2)
-ok
-0 of 2 examples failed in string doctest.__test__.string
-5 items had no tests:
-    doctest.Tester.__init__
-    doctest.Tester.run__test__
-    doctest.Tester.summarize
-    doctest.run_docstring_examples
-    doctest.testmod
-12 items passed all tests:
-    8 tests in doctest
-    6 tests in doctest.Tester
-    10 tests in doctest.Tester.merge
-    7 tests in doctest.Tester.rundict
-    3 tests in doctest.Tester.rundoc
-    3 tests in doctest.Tester.runstring
-    2 tests in doctest.__test__._TestClass
-    2 tests in doctest.__test__._TestClass.__init__
-    2 tests in doctest.__test__._TestClass.get
-    1 tests in doctest.__test__._TestClass.square
-    2 tests in doctest.__test__.string
-    7 tests in doctest.is_private
-53 tests in 17 items.
-53 passed and 0 failed.
-Test passed.
Lib/test/test_difflib.py

@@ -1,2 +1,3 @@
+from test_support import verbose
 import doctest, difflib
-doctest.testmod(difflib, verbose=0)
+doctest.testmod(difflib, verbose=verbose)
Lib/test/test_doctest.py

@@ -1,2 +1,3 @@
+from test_support import verbose
 import doctest
-doctest.testmod(doctest, verbose=1)
+doctest.testmod(doctest, verbose=verbose)
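
Illustration (not part of the diff): the two wrappers above work because regrtest stores its own verbosity into test_support before running each test, so "from test_support import verbose" observes the -v flag. A simplified sketch of that hand-off, assuming the module-global assignment regrtest used at the time:

    # Inside regrtest.py (simplified): propagate the -v command-line
    # flag so that individual tests can consult test_support.verbose.
    import test_support
    test_support.verbose = verbose   # verbose was set while parsing options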