Writing Python Regression Tests
-------------------------------
    Skip Montanaro
    (skip@mojam.com)

Introduction

If you add a new module to Python or modify the functionality of an
existing module, you should write one or more test cases to exercise that
new functionality.  The mechanics of how the test system operates are
fairly straightforward.  When a test case is run, the output is compared
with the expected output that is stored in .../Lib/test/output.  If the
test runs to completion and the actual and expected outputs match, the
test succeeds; if not, it fails.  If an ImportError or
test_support.TestSkipped error is raised, the test is not run.

You will be writing unit tests (isolated tests of functions and objects
defined by the module) using white box techniques.  Unlike black box
testing, where you only have the external interfaces to guide your test
case writing, in white box testing you can see the code being tested and
tailor your test cases to exercise it more completely.  In particular, you
will be able to refer to the C and Python code in the CVS repository when
writing your regression test cases.

Executing Test Cases

If you are writing test cases for module spam, you need to create a file
in .../Lib/test named test_spam.py and an expected output file in
.../Lib/test/output named test_spam ("..." represents the top-level
directory in the Python source tree, the directory containing the
configure script).  From the top-level directory, generate the initial
version of the test output file by executing:

    ./python Lib/test/regrtest.py -g test_spam.py

Any time you modify test_spam.py you need to generate a new expected
output file.  Don't forget to desk check the generated output to make sure
it's really what you expected to find!  To run a single test after
modifying a module, simply run regrtest.py without the -g flag:

    ./python Lib/test/regrtest.py test_spam.py

While debugging a regression test, you can of course execute it
independently of the regression testing framework and see what it prints:

    ./python Lib/test/test_spam.py

To run the entire test suite, make the "test" target at the top level:

    make test

On non-Unix platforms where make may not be available, you can simply
execute the two runs of regrtest (optimized and non-optimized) directly:

    ./python Lib/test/regrtest.py
    ./python -O Lib/test/regrtest.py

Test cases generate output based upon values computed by the test code.
When executed, regrtest.py compares the actual output generated by
executing the test case with the expected output and reports success or
failure.  It stands to reason that if the actual and expected outputs are
to match, they must not contain any machine dependencies.  This means your
test cases should not print out absolute machine addresses (e.g. the
return value of the id() builtin function) or floating point numbers with
large numbers of significant digits (unless you understand what you are
doing!).
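To make the workflow above concrete, here is a rough sketch of what a tiny
test_spam.py in this output-comparison style could look like.  The spam
module and its add() function are invented for illustration; only the
test_support names and the regrtest commands are real.

    # Hypothetical sketch: there is no real "spam" module or spam.add().
    from test_support import verbose, TestFailed
    import spam

    if verbose:
        print 'running spam tests'

    print 'basic addition'
    print spam.add(1, 2)       # the expected output file records "3"
    print spam.add(-1, 1)      # ... and "0"

    print 'bad argument types'
    try:
        spam.add(1, 'two')
    except TypeError:
        print 'TypeError (as expected)'
    else:
        raise TestFailed, 'spam.add() accepted a string argument'

Running "./python Lib/test/regrtest.py -g test_spam.py" once records what
this script prints in .../Lib/test/output/test_spam; every later run is
compared against that file.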
Test Case Writing Tips

Writing good test cases is a skilled task and is too complex to discuss in
detail in this short document.  Many books have been written on the
subject.  I'll show my age by suggesting that Glenford Myers' "The Art of
Software Testing", published in 1979, is still the best introduction to
the subject available.  It is short (177 pages), easy to read, and
discusses the major elements of software testing, though its publication
predates the object-oriented software revolution, so doesn't cover that
subject at all.  Unfortunately, it is very expensive (about $100 new).  If
you can borrow it or find it used (around $20), I strongly urge you to
pick up a copy.

The most important goal when writing test cases is to break things.  A
test case that doesn't uncover a bug is much less valuable than one that
does.  In designing test cases you should pay attention to the following:

* Your test cases should exercise all the functions and objects defined
  in the module, not just the ones meant to be called by users of your
  module.  This may require you to write test code that uses the module
  in ways you don't expect (explicitly calling internal functions, for
  example - see test_atexit.py).

* You should consider any boundary values that may tickle exceptional
  conditions (e.g. if you were writing regression tests for division,
  you might well want to generate tests with numerators and denominators
  at the limits of floating point and integer numbers on the machine
  performing the tests as well as a denominator of zero).

* You should exercise as many paths through the code as possible.  This
  may not always be possible, but is a goal to strive for.  In
  particular, when considering if statements (or their equivalent), you
  want to create test cases that exercise both the true and false
  branches.  For loops, you should create test cases that exercise the
  loop zero, one and multiple times.

* You should test with obviously invalid input.  If you know that a
  function requires an integer input, try calling it with other types of
  objects to see how it responds.

* You should test with obviously out-of-range input.  If the domain of a
  function is only defined for positive integers, try calling it with a
  negative integer.

* If you are going to fix a bug that wasn't uncovered by an existing
  test, try to write a test case that exposes the bug (preferably before
  fixing it).

* If you need to create a temporary file, you can use the filename in
  test_support.TESTFN to do so.  It is important to remove the file when
  done; other tests should be able to use the name without cleaning up
  after your test.

Regression Test Writing Rules

Each test case is different.  There is no "standard" form for a Python
regression test case, though there are some general rules (a short sketch
illustrating them follows this list):

* If your test case detects a failure, raise TestFailed (found in
  test_support).

* Import everything you'll need as early as possible.

* If you'll be importing objects from a module that is at least partially
  platform-dependent, only import those objects you need for the current
  test case to avoid spurious ImportError exceptions that prevent the
  test from running to completion.

* Print all your test case results using the print statement.  For
  non-fatal errors, print an error message (or omit a successful
  completion print) to indicate the failure, but proceed instead of
  raising TestFailed.

* Use "assert" sparingly, if at all.  It's usually better to just print
  what you got, and rely on regrtest's got-vs-expected comparison to
  catch deviations from what you expect.  assert statements aren't
  executed at all when regrtest is run in -O mode; and, because they
  cause the test to stop immediately, can lead to a long & tedious
  test-fix, test-fix, test-fix, ... cycle when things are badly broken
  (and note that "badly broken" often includes running the test suite
  for the first time on new platforms or under new implementations of
  the language).
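The following fragment is a minimal sketch of the style these rules
describe.  The module name eggs and its frobnicate() function are
hypothetical placeholders; the points to notice are the narrow import
from a (supposedly) platform-dependent module, the got-vs-expected
printing for non-fatal problems, and TestFailed reserved for a fatal one.

    # Hypothetical sketch: "eggs" and eggs.frobnicate() are invented names.
    from test_support import TestFailed

    # Import only the objects this test needs from the (imaginary,
    # partially platform-dependent) module, so unrelated ImportErrors
    # inside it cannot keep the test from running.
    from eggs import frobnicate

    print 'frobnicate basics'
    for arg, expected in [(0, 0), (1, 2), (10, 20)]:
        got = frobnicate(arg)
        if got != expected:
            # Non-fatal: report the deviation and keep going; regrtest's
            # got-vs-expected comparison marks the test as failed.
            print 'frobnicate(%r) returned %r, expected %r' % (arg, got,
                                                               expected)
    print 'frobnicate basics done'

    # A fatal problem: stop the test immediately by raising TestFailed.
    if frobnicate(100) <= 0:
        raise TestFailed, 'frobnicate(100) should be positive'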
Miscellaneous

There is a test_support module you can import from your test case.  It
provides the following useful objects:

* TestFailed - raise this exception when your regression test detects a
  failure.

* TestSkipped - raise this if the test could not be run because the
  platform doesn't offer all the required facilities (like large file
  support), even if all the required modules are available.

* findfile(file) - you can call this function to locate a file somewhere
  along sys.path or in the Lib/test tree - see test_linuxaudiodev.py for
  an example of its use.

* verbose - you can use this variable to control print output.  Many
  modules use it.  Search for "verbose" in the test_*.py files to see
  lots of examples.

* use_large_resources - true iff tests requiring large time or space
  should be run.

* fcmp(x,y) - you can call this function to compare two floating point
  numbers when you expect them to only be approximately equal within a
  fuzz factor (test_support.FUZZ, which defaults to 1e-6).

NOTE:  Always import something from test_support like so:

    from test_support import verbose

or like so:

    import test_support
    ... use test_support.verbose in the code ...

Never import anything from test_support like this:

    from test.test_support import verbose

"test" is a package already, so can refer to modules it contains without
"test." qualification.  If you do an explicit "test.xxx" qualification,
that can fool Python into believing test.xxx is a module distinct from the
xxx in the current package, and you can end up importing two distinct
copies of xxx.  This is especially bad if xxx=test_support, as regrtest.py
can (and routinely does) overwrite its "verbose" and "use_large_resources"
attributes:  if you get a second copy of test_support loaded, it may not
have the same values for those as regrtest intended.

Python and C statement coverage results are currently available at

    http://www.musi-cal.com/~skip/python/Python/dist/src/

As of this writing (July, 2000) these results are being generated nightly.
You can refer to the summaries and the test coverage output files to see
where coverage is adequate or lacking and write test cases to beef up the
coverage.
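As a closing illustration of these helpers and the import style above,
here is a rough sketch of how they are typically combined.  The module
under test (mymodule) and its average() function are invented; only the
test_support names are real, and fcmp() is assumed to follow the cmp()
convention of returning 0 when its arguments agree within the fuzz factor.

    # Hypothetical sketch: "mymodule" and mymodule.average() are invented.
    from test_support import verbose, use_large_resources, TestSkipped, fcmp

    try:
        import mymodule
    except ImportError:
        # An ImportError already keeps regrtest from running the test;
        # raising TestSkipped simply states the reason explicitly.
        raise TestSkipped, 'mymodule is not available on this platform'

    if verbose:
        print 'testing mymodule.average()'

    print 'small input'
    # fcmp() tolerates floating point noise up to test_support.FUZZ.
    if fcmp(mymodule.average([1.0, 2.0, 4.0]), 2.3333333) != 0:
        print 'average of small list gave an unexpected value'

    if use_large_resources:
        # Run the expensive case only when large time/space use is allowed;
        # print only on failure so the expected output stays the same
        # either way.
        big = [1.0] * 1000000
        if fcmp(mymodule.average(big), 1.0) != 0:
            print 'average of large list gave an unexpected value'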