Merged revisions 79464,79471,79623,79626,79630,79632,79643,79648-79649,79679,79685,79711,79761,79774,79777,79792-79794,79877,79898-79900 via svnmerge from
svn+ssh://pythondev@svn.python.org/python/trunk

........
  r79464 | michael.foord | 2010-03-27 07:55:19 -0500 (Sat, 27 Mar 2010) | 1 line
  A fix for running unittest tests on platforms without the audioop module (e.g. jython and IronPython)
........
  r79471 | michael.foord | 2010-03-27 14:10:11 -0500 (Sat, 27 Mar 2010) | 4 lines
  Addition of delta keyword argument to unittest.TestCase.assertAlmostEquals and assertNotAlmostEquals
  This allows the comparison of objects by specifying a maximum difference; this includes
  the comparing of non-numeric objects that don't support rounding.
........
  r79623 | michael.foord | 2010-04-02 16:42:47 -0500 (Fri, 02 Apr 2010) | 1 line
  Addition of -b command line option to unittest for buffering stdout and stderr during test runs.
........
  r79626 | michael.foord | 2010-04-02 17:08:29 -0500 (Fri, 02 Apr 2010) | 1 line
  TestResult stores original sys.stdout and tests no longer use sys.__stdout__ (etc) in tests for unittest -b command line option
........
  r79630 | michael.foord | 2010-04-02 17:30:56 -0500 (Fri, 02 Apr 2010) | 1 line
  unittest tests no longer replace the sys.stdout put in place by regrtest
........
  r79632 | michael.foord | 2010-04-02 17:55:59 -0500 (Fri, 02 Apr 2010) | 1 line
  Issue #8038: Addition of unittest.TestCase.assertNotRegexpMatches
........
  r79643 | michael.foord | 2010-04-02 20:15:21 -0500 (Fri, 02 Apr 2010) | 1 line
  Support dotted module names for test discovery paths in unittest. Issue 8038.
........
  r79648 | michael.foord | 2010-04-02 21:21:39 -0500 (Fri, 02 Apr 2010) | 1 line
  Cross platform unittest.TestResult newline handling when buffering stdout / stderr.
........
  r79649 | michael.foord | 2010-04-02 21:33:55 -0500 (Fri, 02 Apr 2010) | 1 line
  Another attempt at a fix for unittest.test.test_result for windows line endings
........
  r79679 | michael.foord | 2010-04-03 09:52:18 -0500 (Sat, 03 Apr 2010) | 1 line
  Adding -b command line option to the unittest usage message.
........
  r79685 | michael.foord | 2010-04-03 10:20:00 -0500 (Sat, 03 Apr 2010) | 1 line
  Minor tweak to unittest command line usage message
........
  r79711 | michael.foord | 2010-04-03 12:03:11 -0500 (Sat, 03 Apr 2010) | 1 line
  Documenting new features in unittest
........
  r79761 | michael.foord | 2010-04-04 17:41:54 -0500 (Sun, 04 Apr 2010) | 1 line
  unittest documentation formatting changes
........
  r79774 | michael.foord | 2010-04-04 18:28:44 -0500 (Sun, 04 Apr 2010) | 1 line
  Adding documentation for new unittest.main() parameters
........
  r79777 | michael.foord | 2010-04-04 19:39:50 -0500 (Sun, 04 Apr 2010) | 1 line
  Document signal handling functions in unittest.rst
........
  r79792 | michael.foord | 2010-04-05 05:26:26 -0500 (Mon, 05 Apr 2010) | 1 line
  Documentation fixes for unittest
........
  r79793 | michael.foord | 2010-04-05 05:28:27 -0500 (Mon, 05 Apr 2010) | 1 line
  Furterh documentation fix for unittest.rst
........
  r79794 | michael.foord | 2010-04-05 05:30:14 -0500 (Mon, 05 Apr 2010) | 1 line
  Further documentation fix for unittest.rst
........
  r79877 | michael.foord | 2010-04-06 18:18:16 -0500 (Tue, 06 Apr 2010) | 1 line
  Fix module directory finding logic for dotted paths in unittest test discovery.
........
  r79898 | michael.foord | 2010-04-07 18:04:22 -0500 (Wed, 07 Apr 2010) | 1 line
  unittest.result.TestResult does not create its buffers until they're used. It uses StringIO not cStringIO. Issue 8333.
........
  r79899 | michael.foord | 2010-04-07 19:04:24 -0500 (Wed, 07 Apr 2010) | 1 line
  Switch regrtest to use StringIO instead of cStringIO for test_multiprocessing on Windows. Issue 8333.
........
  r79900 | michael.foord | 2010-04-07 23:33:20 -0500 (Wed, 07 Apr 2010) | 1 line
  Correction of unittest documentation typos and omissions
........
Commit: b48af54ff7
Parent: fc3c9cd793
@@ -74,6 +74,11 @@ need to derive from a specific class.

   Module :mod:`doctest`
      Another test-support module with a very different flavor.

   `unittest2: A backport of new unittest features for Python 2.4-2.6 <http://pypi.python.org/pypi/unittest2>`_
      Many new features were added to unittest in Python 2.7, including test
      discovery. unittest2 allows you to use these features with earlier
      versions of Python.

   `Simple Smalltalk Testing: With Patterns <http://www.XProgramming.com/testfram.htm>`_
      Kent Beck's original paper on testing frameworks using the pattern shared
      by :mod:`unittest`.
@@ -82,41 +87,13 @@ need to derive from a specific class.

      Third-party unittest frameworks with a lighter-weight syntax for writing
      tests.  For example, ``assert func(10) == 42``.

   `python-mock <http://python-mock.sourceforge.net/>`_ and `minimock <http://blog.ianbicking.org/minimock.html>`_
      Tools for creating mock test objects (objects simulating external
      resources).


.. _unittest-command-line-interface:

Command Line Interface
----------------------

The unittest module can be used from the command line to run tests from
modules, classes or even individual test methods::

   python -m unittest test_module1 test_module2
   python -m unittest test_module.TestClass
   python -m unittest test_module.TestClass.test_method

You can pass in a list with any combination of module names, and fully
qualified class or method names.

You can run tests with more detail (higher verbosity) by passing in the -v flag::

   python -m unittest -v test_module

For a list of all the command line options::

   python -m unittest -h

.. versionchanged:: 3.2
   In earlier versions it was only possible to run individual test methods and
   not modules or classes.

The command line can also be used for test discovery, for running all of the
tests in a project or just a subset.
   `The Python Testing Tools Taxonomy <http://pycheesecake.org/wiki/PythonTestingToolsTaxonomy>`_
      An extensive list of Python testing tools including functional testing
      frameworks and mock object libraries.

   `Testing in Python Mailing List <http://lists.idyll.org/listinfo/testing-in-python>`_
      A special-interest-group for discussion of testing, and testing tools,
      in Python.

.. _unittest-test-discovery:

@@ -243,6 +220,100 @@ The above examples show the most commonly used :mod:`unittest` features which

are sufficient to meet many everyday testing needs.  The remainder of the
documentation explores the full feature set from first principles.


.. _unittest-command-line-interface:

Command Line Interface
----------------------

The unittest module can be used from the command line to run tests from
modules, classes or even individual test methods::

   python -m unittest test_module1 test_module2
   python -m unittest test_module.TestClass
   python -m unittest test_module.TestClass.test_method

You can pass in a list with any combination of module names, and fully
qualified class or method names.

You can run tests with more detail (higher verbosity) by passing in the -v flag::

   python -m unittest -v test_module

For a list of all the command line options::

   python -m unittest -h

.. versionchanged:: 3.2
   In earlier versions it was only possible to run individual test methods and
   not modules or classes.


failfast, catch and buffer command line options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

unittest supports three command line options.

* -f / --failfast

  Stop the test run on the first error or failure.

* -c / --catch

  Control-c during the test run waits for the current test to end and then
  reports all the results so far. A second control-c raises the normal
  ``KeyboardInterrupt`` exception.

  See `Signal Handling`_ for the functions that provide this functionality.

* -b / --buffer

  The standard out and standard error streams are buffered during the test
  run. Output during a passing test is discarded. Output is echoed normally
  on test fail or error and is added to the failure messages.

.. versionadded:: 2.7
   The command line options ``-c``, ``-b`` and ``-f`` were added.

The command line can also be used for test discovery, for running all of the
tests in a project or just a subset.


.. _unittest-test-discovery:

Test Discovery
--------------

.. versionadded:: 2.7

Unittest supports simple test discovery. For a project's tests to be
compatible with test discovery they must all be importable from the top level
directory of the project (in other words, they must all be in Python packages).

Test discovery is implemented in :meth:`TestLoader.discover`, but can also be
used from the command line. The basic command line usage is::

   cd project_directory
   python -m unittest discover

The ``discover`` sub-command has the following options:

   -v, --verbose    Verbose output
   -s directory     Directory to start discovery ('.' default)
   -p pattern       Pattern to match test files ('test*.py' default)
   -t directory     Top level directory of project (default to
                    start directory)

The -s, -p, and -t options can be passed in as positional arguments. The
following two command lines are equivalent::

   python -m unittest discover -s project_directory -p '*_test.py'
   python -m unittest discover project_directory '*_test.py'

Test modules and packages can customize test loading and discovery through
the `load_tests protocol`_.


.. _organizing-tests:

Organizing test code
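The discovery behaviour documented above can also be driven programmatically through :meth:`TestLoader.discover`. A minimal sketch, not part of the commit itself; the package name ``proj`` and the module name ``test_math.py`` are invented for illustration:

```python
import os
import tempfile
import textwrap
import unittest

# Build a throwaway project layout: one package containing one test module.
project = tempfile.mkdtemp()
pkg = os.path.join(project, "proj")
os.mkdir(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "test_math.py"), "w") as f:
    f.write(textwrap.dedent("""
        import unittest

        class MathTest(unittest.TestCase):
            def test_add(self):
                self.assertEqual(1 + 1, 2)
    """))

# Programmatic equivalent of: cd project_directory; python -m unittest discover
loader = unittest.TestLoader()
suite = loader.discover(start_dir=project, pattern="test*.py")
result = unittest.TestResult()
suite.run(result)
```

Because no ``top_level_dir`` is given, the start directory doubles as the top level and is added to ``sys.path`` so the discovered modules are importable.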
@@ -580,6 +651,9 @@ The following decorators implement test skipping and expected failures:

   Mark the test as an expected failure.  If the test fails when run, the test
   is not counted as a failure.

Skipped tests will not have :meth:`setUp` or :meth:`tearDown` run around them.
Skipped classes will not have :meth:`setUpClass` or :meth:`tearDownClass` run.


.. _unittest-contents:

@@ -645,6 +719,36 @@ Test cases

      the outcome of the test method. The default implementation does nothing.


   .. method:: setUpClass()

      A class method called before tests in an individual class run.
      ``setUpClass`` is called with the class as the only argument
      and must be decorated as a :func:`classmethod`::

         @classmethod
         def setUpClass(cls):
             ...

      See `Class and Module Fixtures`_ for more details.

      .. versionadded:: 3.2


   .. method:: tearDownClass()

      A class method called after tests in an individual class have run.
      ``tearDownClass`` is called with the class as the only argument
      and must be decorated as a :func:`classmethod`::

         @classmethod
         def tearDownClass(cls):
             ...

      See `Class and Module Fixtures`_ for more details.

      .. versionadded:: 3.2


   .. method:: run(result=None)

      Run the test, collecting the result into the test result object passed as
@@ -727,8 +831,8 @@ Test cases

      :meth:`failIfEqual`; use :meth:`assertNotEqual`.


   .. method:: assertAlmostEqual(first, second, *, places=7, msg=None)
               failUnlessAlmostEqual(first, second, *, places=7, msg=None)
   .. method:: assertAlmostEqual(first, second, *, places=7, msg=None, delta=None)
               failUnlessAlmostEqual(first, second, *, places=7, msg=None, delta=None)

      Test that *first* and *second* are approximately equal by computing the
      difference, rounding to the given number of decimal *places* (default 7),
@@ -741,13 +845,14 @@ Test cases

      .. versionchanged:: 3.2
         Objects that compare equal are automatically almost equal.
         Added the ``delta`` keyword argument.

      .. deprecated:: 3.1
         :meth:`failUnlessAlmostEqual`; use :meth:`assertAlmostEqual`.


   .. method:: assertNotAlmostEqual(first, second, *, places=7, msg=None)
               failIfAlmostEqual(first, second, *, places=7, msg=None)
   .. method:: assertNotAlmostEqual(first, second, *, places=7, msg=None, delta=None)
               failIfAlmostEqual(first, second, *, places=7, msg=None, delta=None)

      Test that *first* and *second* are not approximately equal by computing
      the difference, rounding to the given number of decimal *places* (default
@@ -758,8 +863,14 @@ Test cases

      compare equal, the test will fail with the explanation given by *msg*, or
      :const:`None`.

      If *delta* is supplied instead of *places* then the difference
      between *first* and *second* must be more than *delta*.

      Supplying both *delta* and *places* raises a ``TypeError``.

      .. versionchanged:: 3.2
         Objects that compare equal automatically fail.
         Added the ``delta`` keyword argument.

      .. deprecated:: 3.1
         :meth:`failIfAlmostEqual`; use :meth:`assertNotAlmostEqual`.

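The ``delta`` behaviour added by this commit (r79471) can be exercised directly; a sketch, with values chosen only for illustration:

```python
import io
import unittest

class DeltaExample(unittest.TestCase):
    def test_delta_comparisons(self):
        # 1.0 and 1.4 differ by 0.4, within a delta of 0.5
        self.assertAlmostEqual(1.0, 1.4, delta=0.5)
        # ...but by more than a delta of 0.1, so the "not" form passes
        self.assertNotAlmostEqual(1.0, 1.4, delta=0.1)
        # Supplying both places and delta is rejected, as documented
        with self.assertRaises(TypeError):
            self.assertAlmostEqual(1.0, 1.4, places=2, delta=0.5)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DeltaExample)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```

Because ``delta`` only needs subtraction and ``abs()``, it also works for non-numeric types such as ``datetime.timedelta`` differences, which is the use case the commit message mentions.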
@@ -802,6 +913,16 @@ Test cases

      .. versionadded:: 3.1


   .. method:: assertNotRegexpMatches(text, regexp, msg=None)

      Verifies that a *regexp* search does not match *text*. Fails with an error
      message including the pattern and the *text*. *regexp* may be
      a regular expression object or a string containing a regular expression
      suitable for use by :func:`re.search`.

      .. versionadded:: 2.7


   .. method:: assertIn(first, second, msg=None)
               assertNotIn(first, second, msg=None)

@@ -1342,6 +1463,8 @@ a

      ``load_tests`` does not need to pass this argument in to
      ``loader.discover()``.

      *start_dir* can be a dotted module name as well as a directory.

      .. versionadded:: 3.2


@@ -1433,6 +1556,24 @@ a

      The total number of tests run so far.


   .. attribute:: buffer

      If set to true, ``sys.stdout`` and ``sys.stderr`` will be buffered between
      calls to :meth:`startTest` and :meth:`stopTest`. Collected output will
      only be echoed onto the real ``sys.stdout`` and ``sys.stderr`` if the test
      fails or errors. Any output is also attached to the failure / error message.

      .. versionadded:: 2.7


   .. attribute:: failfast

      If set to true :meth:`stop` will be called on the first failure or error,
      halting the test run.

      .. versionadded:: 2.7


   .. method:: wasSuccessful()

      Return :const:`True` if all tests run so far have passed, otherwise returns
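The discard-on-pass behaviour of ``buffer`` can be observed by pointing ``sys.stdout`` at a stand-in stream and running a noisy but passing test under ``TextTestRunner(buffer=True)``; a sketch:

```python
import io
import sys
import unittest

class NoisyTest(unittest.TestCase):
    def test_passes_loudly(self):
        print("debug chatter")  # discarded because the test passes

runner_output = io.StringIO()
captured = io.StringIO()
original_stdout = sys.stdout
sys.stdout = captured          # stand-in for the real terminal
try:
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(NoisyTest)
    result = unittest.TextTestRunner(stream=runner_output, buffer=True).run(suite)
finally:
    sys.stdout = original_stdout
```

The test passes, so its output never reaches the replaced ``sys.stdout``; had it failed, the collected output would instead appear in the failure message.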
@@ -1461,18 +1602,11 @@ a

      Called when the test case *test* is about to be run.

      The default implementation simply increments the instance's :attr:`testsRun`
      counter.


   .. method:: stopTest(test)

      Called after the test case *test* has been executed, regardless of the
      outcome.

      The default implementation does nothing.


   .. method:: startTestRun(test)

      Called once before any tests are executed.

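The ``startTest``/``stopTest`` hooks described here are the usual extension points for custom result classes; a sketch (the subclass name is invented):

```python
import unittest

class EventRecordingResult(unittest.TestResult):
    """Records the startTest/stopTest call sequence."""

    def __init__(self):
        super().__init__()
        self.events = []

    def startTest(self, test):
        super().startTest(test)   # default implementation increments testsRun
        self.events.append("start")

    def stopTest(self, test):
        super().stopTest(test)
        self.events.append("stop")

class OneTest(unittest.TestCase):
    def test_something(self):
        self.assertTrue(True)

result = EventRecordingResult()
unittest.defaultTestLoader.loadTestsFromTestCase(OneTest).run(result)
```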
@@ -1572,12 +1706,12 @@ a

   ``_makeResult()`` instantiates the class or callable passed in the
   ``TextTestRunner`` constructor as the ``resultclass`` argument. It
   defaults to :class::`TextTestResult` if no ``resultclass`` is provided.
   defaults to :class:`TextTestResult` if no ``resultclass`` is provided.
   The result class is instantiated with the following arguments::

      stream, descriptions, verbosity


.. function:: main(module='__main__', defaultTest=None, argv=None, testRunner=None, testLoader=unittest.loader.defaultTestLoader, exit=True, verbosity=1)
.. function:: main(module='__main__', defaultTest=None, argv=None, testRunner=None, testLoader=unittest.loader.defaultTestLoader, exit=True, verbosity=1, failfast=None, catchbreak=None, buffer=None)

   A command-line program that runs a set of tests; this is primarily for making
   test modules conveniently executable. The simplest use for this function is to
@@ -1603,11 +1737,15 @@ a

      >>> from unittest import main
      >>> main(module='test_module', exit=False)

   The ``failfast``, ``catchbreak`` and ``buffer`` parameters have the same
   effect as the `failfast, catch and buffer command line options`_.

   Calling ``main`` actually returns an instance of the ``TestProgram`` class.
   This stores the result of the tests run as the ``result`` attribute.

   .. versionchanged:: 3.2
      The ``exit`` and ``verbosity`` parameters were added.
      The ``exit``, ``verbosity``, ``failfast``, ``catchbreak`` and ``buffer``
      parameters were added.


load_tests Protocol
@@ -1677,3 +1815,113 @@ continue (and potentially modify) test discovery. A 'do nothing'

        package_tests = loader.discover(start_dir=this_dir, pattern=pattern)
        standard_tests.addTests(package_tests)
        return standard_tests


Class and Module Fixtures
-------------------------

Class and module level fixtures are implemented in :class:`TestSuite`. When
the test suite encounters a test from a new class then :meth:`tearDownClass`
from the previous class (if there is one) is called, followed by
:meth:`setUpClass` from the new class.

Similarly if a test is from a different module from the previous test then
``tearDownModule`` from the previous module is run, followed by
``setUpModule`` from the new module.

After all the tests have run the final ``tearDownClass`` and
``tearDownModule`` are run.

Note that shared fixtures do not play well with [potential] features like test
parallelization and they break test isolation. They should be used with care.

The default ordering of tests created by the unittest test loaders is to group
all tests from the same modules and classes together. This will lead to
``setUpClass`` / ``setUpModule`` (etc) being called exactly once per class and
module. If you randomize the order, so that tests from different modules and
classes are adjacent to each other, then these shared fixture functions may be
called multiple times in a single test run.

Shared fixtures are not intended to work with suites with non-standard
ordering. A ``BaseTestSuite`` still exists for frameworks that don't want to
support shared fixtures.

If there are any exceptions raised during one of the shared fixture functions
the test is reported as an error. Because there is no corresponding test
instance an ``_ErrorHolder`` object (that has the same interface as a
:class:`TestCase`) is created to represent the error. If you are just using
the standard unittest test runner then this detail doesn't matter, but if you
are a framework author it may be relevant.


setUpClass and tearDownClass
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

These must be implemented as class methods::

   import unittest

   class Test(unittest.TestCase):
       @classmethod
       def setUpClass(cls):
           cls._connection = createExpensiveConnectionObject()

       @classmethod
       def tearDownClass(cls):
           cls._connection.destroy()

If you want the ``setUpClass`` and ``tearDownClass`` on base classes called
then you must call up to them yourself. The implementations in
:class:`TestCase` are empty.

If an exception is raised during a ``setUpClass`` then the tests in the class
are not run and the ``tearDownClass`` is not run. Skipped classes will not
have ``setUpClass`` or ``tearDownClass`` run.


setUpModule and tearDownModule
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

These should be implemented as functions::

   def setUpModule():
       createConnection()

   def tearDownModule():
       closeConnection()

If an exception is raised in a ``setUpModule`` then none of the tests in the
module will be run and the ``tearDownModule`` will not be run.


Signal Handling
---------------

The -c / --catch command line option to unittest, along with the ``catchbreak``
parameter to :func:`unittest.main()`, provide more friendly handling of
control-c during a test run. With catch break behavior enabled control-c will
allow the currently running test to complete, and the test run will then end
and report all the results so far. A second control-c will raise a
``KeyboardInterrupt`` in the usual way.

There are a few utility functions for framework authors to enable this
functionality within test frameworks.

.. function:: installHandler()

   Install the control-c handler. When a :const:`signal.SIGINT` is received
   (usually in response to the user pressing control-c) all registered results
   have :meth:`~TestResult.stop` called.

.. function:: registerResult(result)

   Register a :class:`TestResult` object for control-c handling. Registering a
   result stores a weak reference to it, so it doesn't prevent the result from
   being garbage collected.

.. function:: removeResult(result)

   Remove a registered result. Once a result has been removed then
   :meth:`~TestResult.stop` will no longer be called on that result object in
   response to a control-c.

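The registration half of the signal-handling machinery can be exercised without sending an actual SIGINT; a sketch of ``registerResult``/``removeResult``:

```python
import unittest

result = unittest.TestResult()

# Registration stores only a weak reference, so the result object
# remains eligible for garbage collection.
unittest.registerResult(result)

# After removal, a control-c would no longer call stop() on this result.
was_registered = unittest.removeResult(result)
```

``installHandler()`` (not shown here) is what actually replaces the SIGINT handler; it must be called from the main thread, which is why this sketch sticks to the registration functions.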
@@ -880,6 +880,10 @@ def runtest_inner(test, verbose, quiet,
                  testdir=None, huntrleaks=False, debug=False):
    support.unload(test)
    testdir = findtestdir(testdir)
    if verbose:
        capture_stdout = None
    else:
        capture_stdout = io.StringIO()

    test_time = 0.0
    refleak = False  # True if the test leaked references.
@@ -12,6 +12,8 @@ import argparse

from io import StringIO

from test import support
class StdIOBuffer(StringIO):
    pass

class TestCase(unittest.TestCase):

@@ -25,6 +27,7 @@ class TestCase(unittest.TestCase):

        super(TestCase, self).assertEqual(obj1, obj2)



class TempDirMixin(object):

    def setUp(self):

@@ -81,15 +84,15 @@ def stderr_to_parser_error(parse_args, *args, **kwargs):

    # if this is being called recursively and stderr or stdout is already being
    # redirected, simply call the function and let the enclosing function
    # catch the exception
    if isinstance(sys.stderr, StringIO) or isinstance(sys.stdout, StringIO):
    if isinstance(sys.stderr, StdIOBuffer) or isinstance(sys.stdout, StdIOBuffer):
        return parse_args(*args, **kwargs)

    # if this is not being called recursively, redirect stderr and
    # use it as the ArgumentParserError message
    old_stdout = sys.stdout
    old_stderr = sys.stderr
    sys.stdout = StringIO()
    sys.stderr = StringIO()
    sys.stdout = StdIOBuffer()
    sys.stderr = StdIOBuffer()
    try:
        try:
            result = parse_args(*args, **kwargs)

@@ -2634,7 +2637,7 @@ class TestHelpFormattingMetaclass(type):

            parser = self._get_parser(tester)
            print_ = getattr(parser, 'print_%s' % self.func_suffix)
            old_stream = getattr(sys, self.std_name)
            setattr(sys, self.std_name, StringIO())
            setattr(sys, self.std_name, StdIOBuffer())
            try:
                print_()
                parser_text = getattr(sys, self.std_name).getvalue()

@@ -2645,7 +2648,7 @@ class TestHelpFormattingMetaclass(type):

        def test_print_file(self, tester):
            parser = self._get_parser(tester)
            print_ = getattr(parser, 'print_%s' % self.func_suffix)
            sfile = StringIO()
            sfile = StdIOBuffer()
            print_(sfile)
            parser_text = sfile.getvalue()
            self._test(tester, parser_text)

@@ -502,10 +502,12 @@ class TestCase(object):

                                                          safe_repr(second)))
            raise self.failureException(msg)

    def assertAlmostEqual(self, first, second, *, places=7, msg=None):
    def assertAlmostEqual(self, first, second, *, places=None, msg=None,
                          delta=None):
        """Fail if the two objects are unequal as determined by their
        difference rounded to the given number of decimal places
        (default 7) and comparing to zero.
        (default 7) and comparing to zero, or by comparing that the
        difference between the two objects is more than the given delta.

        Note that decimal places (from zero) are usually not the same
        as significant digits (measured from the most significant digit).

@@ -514,31 +516,62 @@ class TestCase(object):

        compare almost equal.
        """
        if first == second:
            # shortcut for inf
            # shortcut
            return
        if round(abs(second-first), places) != 0:
        if delta is not None and places is not None:
            raise TypeError("specify delta or places not both")

        if delta is not None:
            if abs(first - second) <= delta:
                return

            standardMsg = '%s != %s within %s delta' % (safe_repr(first),
                                                        safe_repr(second),
                                                        safe_repr(delta))
        else:
            if places is None:
                places = 7

            if round(abs(second-first), places) == 0:
                return

            standardMsg = '%s != %s within %r places' % (safe_repr(first),
                                                         safe_repr(second),
                                                         places)
            msg = self._formatMessage(msg, standardMsg)
            raise self.failureException(msg)
        msg = self._formatMessage(msg, standardMsg)
        raise self.failureException(msg)

    def assertNotAlmostEqual(self, first, second, *, places=7, msg=None):
    def assertNotAlmostEqual(self, first, second, *, places=None, msg=None,
                             delta=None):
        """Fail if the two objects are equal as determined by their
        difference rounded to the given number of decimal places
        (default 7) and comparing to zero.
        (default 7) and comparing to zero, or by comparing that the
        difference between the two objects is less than the given delta.

        Note that decimal places (from zero) are usually not the same
        as significant digits (measured from the most significant digit).

        Objects that are equal automatically fail.
        """
        if (first == second) or round(abs(second-first), places) == 0:
        if delta is not None and places is not None:
            raise TypeError("specify delta or places not both")
        if delta is not None:
            if not (first == second) and abs(first - second) > delta:
                return
            standardMsg = '%s == %s within %s delta' % (safe_repr(first),
                                                        safe_repr(second),
                                                        safe_repr(delta))
        else:
            if places is None:
                places = 7
            if not (first == second) and round(abs(second-first), places) != 0:
                return
            standardMsg = '%s == %s within %r places' % (safe_repr(first),
                                                         safe_repr(second),
                                                         places)
            msg = self._formatMessage(msg, standardMsg)
            raise self.failureException(msg)
                                                      safe_repr(second),
                                                      places)

        msg = self._formatMessage(msg, standardMsg)
        raise self.failureException(msg)

    # Synonyms for assertion methods


@@ -967,6 +1000,18 @@ class TestCase(object):

            msg = '%s: %r not found in %r' % (msg, expected_regexp.pattern, text)
            raise self.failureException(msg)

    def assertNotRegexpMatches(self, text, unexpected_regexp, msg=None):
        if isinstance(unexpected_regexp, (str, bytes)):
            unexpected_regexp = re.compile(unexpected_regexp)
        match = unexpected_regexp.search(text)
        if match:
            msg = msg or "Regexp matched"
            msg = '%s: %r matches %r in %r' % (msg,
                                               text[match.start():match.end()],
                                               unexpected_regexp.pattern,
                                               text)
            raise self.failureException(msg)


class FunctionTestCase(TestCase):
    """A test case that wraps a test function.

@@ -166,27 +166,58 @@ class TestLoader(object):

        packages can continue discovery themselves. top_level_dir is stored so
        load_tests does not need to pass this argument in to loader.discover().
        """
        set_implicit_top = False
        if top_level_dir is None and self._top_level_dir is not None:
            # make top_level_dir optional if called from load_tests in a package
            top_level_dir = self._top_level_dir
        elif top_level_dir is None:
            set_implicit_top = True
            top_level_dir = start_dir

        top_level_dir = os.path.abspath(os.path.normpath(top_level_dir))
        start_dir = os.path.abspath(os.path.normpath(start_dir))
        top_level_dir = os.path.abspath(top_level_dir)

        if not top_level_dir in sys.path:
            # all test modules must be importable from the top level directory
            sys.path.append(top_level_dir)
        self._top_level_dir = top_level_dir

        if start_dir != top_level_dir and not os.path.isfile(os.path.join(start_dir, '__init__.py')):
            # what about __init__.pyc or pyo (etc)
        is_not_importable = False
        if os.path.isdir(os.path.abspath(start_dir)):
            start_dir = os.path.abspath(start_dir)
            if start_dir != top_level_dir:
                is_not_importable = not os.path.isfile(os.path.join(start_dir, '__init__.py'))
        else:
            # support for discovery from dotted module names
            try:
                __import__(start_dir)
            except ImportError:
                is_not_importable = True
            else:
                the_module = sys.modules[start_dir]
                top_part = start_dir.split('.')[0]
                start_dir = os.path.abspath(os.path.dirname((the_module.__file__)))
                if set_implicit_top:
                    self._top_level_dir = self._get_directory_containing_module(top_part)
                    sys.path.remove(top_level_dir)

        if is_not_importable:
            raise ImportError('Start directory is not importable: %r' % start_dir)

        tests = list(self._find_tests(start_dir, pattern))
        return self.suiteClass(tests)

    def _get_directory_containing_module(self, module_name):
        module = sys.modules[module_name]
        full_path = os.path.abspath(module.__file__)

        if os.path.basename(full_path).lower().startswith('__init__.py'):
            return os.path.dirname(os.path.dirname(full_path))
        else:
            # here we have been given a module rather than a package - so
            # all we can do is search the *same* directory the module is in
            # should an exception be raised instead
            return os.path.dirname(full_path)

    def _get_name_from_path(self, path):
        path = os.path.splitext(os.path.normpath(path))[0]

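The `_get_directory_containing_module` helper added above decides which directory a dotted start name maps to. A standalone sketch of the same logic (the function name here is my own, not the stdlib's private method):

```python
import os
import sys

def directory_containing_module(module_name):
    # For a package (__init__.py), tests are importable from the parent
    # of the package directory; for a plain module, only the module's
    # own directory can be searched.
    module = sys.modules[module_name]
    full_path = os.path.abspath(module.__file__)
    if os.path.basename(full_path).lower().startswith('__init__.py'):
        return os.path.dirname(os.path.dirname(full_path))
    return os.path.dirname(full_path)

import json  # a stdlib package: json/__init__.py
import csv   # a plain stdlib module: csv.py

json_top = directory_containing_module('json')
csv_top = directory_containing_module('csv')
```

For both examples the result is the standard-library directory itself: one level above the ``json`` package, and the directory directly containing ``csv.py``.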
@@ -9,9 +9,9 @@ from .signals import installHandler

__unittest = True


FAILFAST      = "  -f, --failfast   Stop on first failure\n"
CATCHBREAK    = "  -c, --catch      Catch control-C and display results\n"
FAILFAST      = "  -f, --failfast   Stop on first failure\n"
CATCHBREAK    = "  -c, --catch      Catch control-C and display results\n"
BUFFEROUTPUT  = "  -b, --buffer     Buffer stdout and stderr during test runs\n"

USAGE_AS_MAIN = """\
Usage: %(progName)s [options] [tests]

@ -20,7 +20,7 @@ Options:
|
|||
-h, --help Show this message
|
||||
-v, --verbose Verbose output
|
||||
-q, --quiet Minimal output
|
||||
%(failfast)s%(catchbreak)s
|
||||
%(failfast)s%(catchbreak)s%(buffer)s
|
||||
Examples:
|
||||
%(progName)s test_module - run tests from test_module
|
||||
%(progName)s test_module.TestClass - run tests from
|
||||
|
@ -34,7 +34,7 @@ Alternative Usage: %(progName)s discover [options]
|
|||
|
||||
Options:
|
||||
-v, --verbose Verbose output
|
||||
%(failfast)s%(catchbreak)s -s directory Directory to start discovery ('.' default)
|
||||
%(failfast)s%(catchbreak)s%(buffer)s -s directory Directory to start discovery ('.' default)
|
||||
-p pattern Pattern to match test files ('test*.py' default)
|
||||
-t directory Top level directory of project (default to
|
||||
start directory)
|
||||
|
@ -50,7 +50,7 @@ Options:
|
|||
-h, --help Show this message
|
||||
-v, --verbose Verbose output
|
||||
-q, --quiet Minimal output
|
||||
%(failfast)s%(catchbreak)s
|
||||
%(failfast)s%(catchbreak)s%(buffer)s
|
||||
Examples:
|
||||
%(progName)s - run default set of tests
|
||||
%(progName)s MyTestSuite - run suite 'MyTestSuite'
|
||||
|
@ -68,12 +68,12 @@ class TestProgram(object):
|
|||
USAGE = USAGE_FROM_MODULE
|
||||
|
||||
# defaults for testing
|
||||
failfast = catchbreak = None
|
||||
failfast = catchbreak = buffer = None
|
||||
|
||||
def __init__(self, module='__main__', defaultTest=None,
|
||||
argv=None, testRunner=None,
|
||||
testLoader=loader.defaultTestLoader, exit=True,
|
||||
verbosity=1, failfast=None, catchbreak=None):
|
||||
verbosity=1, failfast=None, catchbreak=None, buffer=None):
|
||||
if isinstance(module, str):
|
||||
self.module = __import__(module)
|
||||
for part in module.split('.')[1:]:
|
||||
|
@ -87,6 +87,7 @@ class TestProgram(object):
|
|||
self.failfast = failfast
|
||||
self.catchbreak = catchbreak
|
||||
self.verbosity = verbosity
|
||||
self.buffer = buffer
|
||||
self.defaultTest = defaultTest
|
||||
self.testRunner = testRunner
|
||||
self.testLoader = testLoader
|
||||
|
@ -97,11 +98,14 @@ class TestProgram(object):
|
|||
def usageExit(self, msg=None):
|
||||
if msg:
|
||||
print(msg)
|
||||
usage = {'progName': self.progName, 'catchbreak': '', 'failfast': ''}
|
||||
usage = {'progName': self.progName, 'catchbreak': '', 'failfast': '',
|
||||
'buffer': ''}
|
||||
if self.failfast != False:
|
||||
usage['failfast'] = FAILFAST
|
||||
if self.catchbreak != False:
|
||||
usage['catchbreak'] = CATCHBREAK
|
||||
if self.buffer != False:
|
||||
usage['buffer'] = BUFFEROUTPUT
|
||||
print(self.USAGE % usage)
|
||||
sys.exit(2)
|
||||
|
||||
|
@ -111,9 +115,9 @@ class TestProgram(object):
|
|||
return
|
||||
|
||||
import getopt
|
||||
long_opts = ['help', 'verbose', 'quiet', 'failfast', 'catch']
|
||||
long_opts = ['help', 'verbose', 'quiet', 'failfast', 'catch', 'buffer']
|
||||
try:
|
||||
options, args = getopt.getopt(argv[1:], 'hHvqfc', long_opts)
|
||||
options, args = getopt.getopt(argv[1:], 'hHvqfcb', long_opts)
|
||||
for opt, value in options:
|
||||
if opt in ('-h','-H','--help'):
|
||||
self.usageExit()
|
||||
|
@ -129,6 +133,10 @@ class TestProgram(object):
|
|||
if self.catchbreak is None:
|
||||
self.catchbreak = True
|
||||
# Should this raise an exception if -c is not valid?
|
||||
if opt in ('-b','--buffer'):
|
||||
if self.buffer is None:
|
||||
self.buffer = True
|
||||
# Should this raise an exception if -b is not valid?
|
||||
if len(args) == 0 and self.defaultTest is None:
|
||||
# createTests will load tests from self.module
|
||||
self.testNames = None
|
||||
|
@ -164,6 +172,10 @@ class TestProgram(object):
|
|||
parser.add_option('-c', '--catch', dest='catchbreak', default=False,
|
||||
help='Catch ctrl-C and display results so far',
|
||||
action='store_true')
|
||||
if self.buffer != False:
|
||||
parser.add_option('-b', '--buffer', dest='buffer', default=False,
|
||||
help='Buffer stdout and stderr during tests',
|
||||
action='store_true')
|
||||
parser.add_option('-s', '--start-directory', dest='start', default='.',
|
||||
help="Directory to start discovery ('.' default)")
|
||||
parser.add_option('-p', '--pattern', dest='pattern', default='test*.py',
|
||||
|
@ -184,6 +196,8 @@ class TestProgram(object):
|
|||
self.failfast = options.failfast
|
||||
if self.catchbreak is None:
|
||||
self.catchbreak = options.catchbreak
|
||||
if self.buffer is None:
|
||||
self.buffer = options.buffer
|
||||
|
||||
if options.verbose:
|
||||
self.verbosity = 2
|
||||
|
@ -203,9 +217,10 @@ class TestProgram(object):
|
|||
if isinstance(self.testRunner, type):
|
||||
try:
|
||||
testRunner = self.testRunner(verbosity=self.verbosity,
|
||||
failfast=self.failfast)
|
||||
failfast=self.failfast,
|
||||
buffer=self.buffer)
|
||||
except TypeError:
|
||||
# didn't accept the verbosity or failfast arguments
|
||||
# didn't accept the verbosity, buffer or failfast arguments
|
||||
testRunner = self.testRunner()
|
||||
else:
|
||||
# it is assumed to be a TestRunner instance
|
||||
|
|
|
@@ -1,5 +1,8 @@
 """Test result object"""
 
+import os
+import io
+import sys
 import traceback
 
 from . import util
 
@@ -15,6 +18,10 @@ def failfast(method):
             return method(self, *args, **kw)
     return inner
 
+STDOUT_LINE = '\nStdout:\n%s'
+STDERR_LINE = '\nStderr:\n%s'
+
+
 class TestResult(object):
     """Holder for test result information.
 
 
@@ -37,6 +44,12 @@ class TestResult(object):
         self.expectedFailures = []
         self.unexpectedSuccesses = []
         self.shouldStop = False
+        self.buffer = False
+        self._stdout_buffer = None
+        self._stderr_buffer = None
+        self._original_stdout = sys.stdout
+        self._original_stderr = sys.stderr
+        self._mirrorOutput = False
 
     def printErrors(self):
         "Called by TestRunner after test run"
 
@@ -44,6 +57,13 @@ class TestResult(object):
     def startTest(self, test):
         "Called when the given test is about to be run"
         self.testsRun += 1
+        self._mirrorOutput = False
+        if self.buffer:
+            if self._stderr_buffer is None:
+                self._stderr_buffer = io.StringIO()
+                self._stdout_buffer = io.StringIO()
+            sys.stdout = self._stdout_buffer
+            sys.stderr = self._stderr_buffer
 
     def startTestRun(self):
         """Called once before any tests are executed.
 
@@ -53,6 +73,26 @@ class TestResult(object):
 
     def stopTest(self, test):
         """Called when the given test has been run"""
+        if self.buffer:
+            if self._mirrorOutput:
+                output = sys.stdout.getvalue()
+                error = sys.stderr.getvalue()
+                if output:
+                    if not output.endswith('\n'):
+                        output += '\n'
+                    self._original_stdout.write(STDOUT_LINE % output)
+                if error:
+                    if not error.endswith('\n'):
+                        error += '\n'
+                    self._original_stderr.write(STDERR_LINE % error)
+
+            sys.stdout = self._original_stdout
+            sys.stderr = self._original_stderr
+            self._stdout_buffer.seek(0)
+            self._stdout_buffer.truncate()
+            self._stderr_buffer.seek(0)
+            self._stderr_buffer.truncate()
+        self._mirrorOutput = False
 
     def stopTestRun(self):
         """Called once after all tests are executed.
 
@@ -66,12 +106,14 @@ class TestResult(object):
         returned by sys.exc_info().
         """
         self.errors.append((test, self._exc_info_to_string(err, test)))
+        self._mirrorOutput = True
 
     @failfast
     def addFailure(self, test, err):
         """Called when an error has occurred. 'err' is a tuple of values as
         returned by sys.exc_info()."""
         self.failures.append((test, self._exc_info_to_string(err, test)))
+        self._mirrorOutput = True
 
     def addSuccess(self, test):
         "Called when a test has completed successfully"
 
@@ -105,11 +147,29 @@ class TestResult(object):
         # Skip test runner traceback levels
         while tb and self._is_relevant_tb_level(tb):
             tb = tb.tb_next
 
         if exctype is test.failureException:
             # Skip assert*() traceback levels
             length = self._count_relevant_tb_levels(tb)
-            return ''.join(traceback.format_exception(exctype, value, tb, length))
-        return ''.join(traceback.format_exception(exctype, value, tb))
+            msgLines = traceback.format_exception(exctype, value, tb, length)
+        else:
+            chain = exctype is not None
+            msgLines = traceback.format_exception(exctype, value, tb,
+                                                  chain=chain)
+
+        if self.buffer:
+            output = sys.stdout.getvalue()
+            error = sys.stderr.getvalue()
+            if output:
+                if not output.endswith('\n'):
+                    output += '\n'
+                msgLines.append(STDOUT_LINE % output)
+            if error:
+                if not error.endswith('\n'):
+                    error += '\n'
+                msgLines.append(STDERR_LINE % error)
+        return ''.join(msgLines)
+
 
     def _is_relevant_tb_level(self, tb):
         return '__unittest' in tb.tb_frame.f_globals
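The `startTest`/`stopTest` changes above can be exercised directly: with `buffer` set, `startTest` swaps `sys.stdout`/`sys.stderr` for `StringIO` buffers, and `stopTest` restores the real streams and truncates the buffers (replaying captured output only when the test failed or errored). A small sketch, where the `Probe` test case is invented for illustration:

```python
import sys
import unittest

class Probe(unittest.TestCase):
    def test_noop(self):
        pass

result = unittest.TestResult()
result.buffer = True
test = Probe('test_noop')

real_stdout = sys.stdout
result.startTest(test)
captured = sys.stdout           # the StringIO buffer installed by startTest
print('hidden unless the test fails')
result.addSuccess(test)
result.stopTest(test)           # restores the real stream, truncates the buffer

print(sys.stdout is real_stdout)
print(captured.getvalue() == '')
```

Because the test succeeded, `_mirrorOutput` stays false and the captured text is simply discarded.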
@@ -125,11 +125,12 @@ class TextTestRunner(object):
     resultclass = TextTestResult
 
     def __init__(self, stream=sys.stderr, descriptions=True, verbosity=1,
-                 failfast=False, resultclass=None):
+                 failfast=False, buffer=False, resultclass=None):
         self.stream = _WritelnDecorator(stream)
         self.descriptions = descriptions
         self.verbosity = verbosity
         self.failfast = failfast
+        self.buffer = buffer
         if resultclass is not None:
             self.resultclass = resultclass
 
 
@@ -141,6 +142,7 @@ class TextTestRunner(object):
         result = self._makeResult()
         registerResult(result)
         result.failfast = self.failfast
+        result.buffer = self.buffer
         startTime = time.time()
         startTestRun = getattr(result, 'startTestRun', None)
         if startTestRun is not None:
@@ -0,0 +1 @@
+# Empty module for testing the loading of modules
@@ -1,3 +1,5 @@
+import datetime
+
 import unittest
 
 
 
@@ -25,6 +27,28 @@ class Test_Assertions(unittest.TestCase):
         self.assertRaises(self.failureException, self.assertNotAlmostEqual,
                           float('inf'), float('inf'))
 
+    def test_AmostEqualWithDelta(self):
+        self.assertAlmostEqual(1.1, 1.0, delta=0.5)
+        self.assertAlmostEqual(1.0, 1.1, delta=0.5)
+        self.assertNotAlmostEqual(1.1, 1.0, delta=0.05)
+        self.assertNotAlmostEqual(1.0, 1.1, delta=0.05)
+
+        self.assertRaises(self.failureException, self.assertAlmostEqual,
+                          1.1, 1.0, delta=0.05)
+        self.assertRaises(self.failureException, self.assertNotAlmostEqual,
+                          1.1, 1.0, delta=0.5)
+
+        self.assertRaises(TypeError, self.assertAlmostEqual,
+                          1.1, 1.0, places=2, delta=2)
+        self.assertRaises(TypeError, self.assertNotAlmostEqual,
+                          1.1, 1.0, places=2, delta=2)
+
+        first = datetime.datetime.now()
+        second = first + datetime.timedelta(seconds=10)
+        self.assertAlmostEqual(first, second,
+                               delta=datetime.timedelta(seconds=20))
+        self.assertNotAlmostEqual(first, second,
+                                  delta=datetime.timedelta(seconds=5))
+
     def test_assertRaises(self):
         def _raise(e):
 
@@ -68,6 +92,16 @@ class Test_Assertions(unittest.TestCase):
         else:
             self.fail("assertRaises() didn't let exception pass through")
 
+    def testAssertNotRegexpMatches(self):
+        self.assertNotRegexpMatches('Ala ma kota', r'r+')
+        try:
+            self.assertNotRegexpMatches('Ala ma kota', r'k.t', 'Message')
+        except self.failureException as e:
+            self.assertIn("'kot'", e.args[0])
+            self.assertIn('Message', e.args[0])
+        else:
+            self.fail('assertNotRegexpMatches should have failed.')
+
 
 class TestLongMessage(unittest.TestCase):
     """Test that the individual asserts honour longMessage.
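The new `delta` keyword exercised by these tests works with any values that support subtraction and comparison, so rounding-free types such as `datetime` compare too, and mixing `delta` with `places` is a `TypeError`. A quick usage sketch (the `DeltaDemo` case is ours):

```python
import datetime
import unittest

class DeltaDemo(unittest.TestCase):
    def test_delta(self):
        # within delta -> almost equal; outside delta -> not almost equal
        self.assertAlmostEqual(1.1, 1.0, delta=0.5)
        self.assertNotAlmostEqual(1.1, 1.0, delta=0.05)

        # delta and places are mutually exclusive
        with self.assertRaises(TypeError):
            self.assertAlmostEqual(1.1, 1.0, places=2, delta=2)

        # non-numeric values that support subtraction work as well
        first = datetime.datetime(2010, 4, 2, 12, 0, 0)
        second = first + datetime.timedelta(seconds=10)
        self.assertAlmostEqual(first, second,
                               delta=datetime.timedelta(seconds=20))
        self.assertNotAlmostEqual(first, second,
                                  delta=datetime.timedelta(seconds=5))

result = unittest.TestResult()
DeltaDemo('test_delta').run(result)
print(result.testsRun, len(result.failures), len(result.errors))
```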
@@ -203,8 +203,9 @@ class TestBreak(unittest.TestCase):
         p = Program(False)
         p.runTests()
 
-        self.assertEqual(FakeRunner.initArgs, [((), {'verbosity': verbosity,
-                                                     'failfast': failfast})])
+        self.assertEqual(FakeRunner.initArgs, [((), {'buffer': None,
+                                                     'verbosity': verbosity,
+                                                     'failfast': failfast})])
         self.assertEqual(FakeRunner.runArgs, [test])
         self.assertEqual(p.result, result)
 
 
@@ -215,8 +216,9 @@ class TestBreak(unittest.TestCase):
         p = Program(True)
         p.runTests()
 
-        self.assertEqual(FakeRunner.initArgs, [((), {'verbosity': verbosity,
-                                                     'failfast': failfast})])
+        self.assertEqual(FakeRunner.initArgs, [((), {'buffer': None,
+                                                     'verbosity': verbosity,
+                                                     'failfast': failfast})])
         self.assertEqual(FakeRunner.runArgs, [test])
         self.assertEqual(p.result, result)
@@ -128,6 +128,7 @@ class TestDiscovery(unittest.TestCase):
         loader = unittest.TestLoader()
 
         original_isfile = os.path.isfile
+        original_isdir = os.path.isdir
         def restore_isfile():
             os.path.isfile = original_isfile
 
 
@@ -147,6 +148,12 @@ class TestDiscovery(unittest.TestCase):
         self.assertIn(full_path, sys.path)
 
         os.path.isfile = lambda path: True
+        os.path.isdir = lambda path: True
+
+        def restore_isdir():
+            os.path.isdir = original_isdir
+        self.addCleanup(restore_isdir)
+
         _find_tests_args = []
         def _find_tests(start_dir, pattern):
             _find_tests_args.append((start_dir, pattern))
 
@@ -156,8 +163,8 @@ class TestDiscovery(unittest.TestCase):
 
         suite = loader.discover('/foo/bar/baz', 'pattern', '/foo/bar')
 
-        top_level_dir = os.path.abspath(os.path.normpath('/foo/bar'))
-        start_dir = os.path.abspath(os.path.normpath('/foo/bar/baz'))
+        top_level_dir = os.path.abspath('/foo/bar')
+        start_dir = os.path.abspath('/foo/bar/baz')
         self.assertEqual(suite, "['tests']")
         self.assertEqual(loader._top_level_dir, top_level_dir)
         self.assertEqual(_find_tests_args, [(start_dir, 'pattern')])
@@ -524,12 +524,8 @@ class Test_TestLoader(unittest.TestCase):
         # We're going to try to load this module as a side-effect, so it
         # better not be loaded before we try.
         #
-        # Why pick audioop? Google shows it isn't used very often, so there's
-        # a good chance that it won't be imported when this test is run
-        module_name = 'audioop'
-
-        if module_name in sys.modules:
-            del sys.modules[module_name]
+        module_name = 'unittest.test.dummy'
+        sys.modules.pop(module_name, None)
 
         loader = unittest.TestLoader()
         try:
 
@@ -538,7 +534,7 @@ class Test_TestLoader(unittest.TestCase):
             self.assertIsInstance(suite, loader.suiteClass)
             self.assertEqual(list(suite), [])
 
-            # audioop should now be loaded, thanks to loadTestsFromName()
+            # module should now be loaded, thanks to loadTestsFromName()
             self.assertIn(module_name, sys.modules)
         finally:
             if module_name in sys.modules:
 
@@ -911,12 +907,8 @@ class Test_TestLoader(unittest.TestCase):
         # We're going to try to load this module as a side-effect, so it
         # better not be loaded before we try.
        #
-        # Why pick audioop? Google shows it isn't used very often, so there's
-        # a good chance that it won't be imported when this test is run
-        module_name = 'audioop'
-
-        if module_name in sys.modules:
-            del sys.modules[module_name]
+        module_name = 'unittest.test.dummy'
+        sys.modules.pop(module_name, None)
 
         loader = unittest.TestLoader()
         try:
 
@@ -925,7 +917,7 @@ class Test_TestLoader(unittest.TestCase):
             self.assertIsInstance(suite, loader.suiteClass)
             self.assertEqual(list(suite), [unittest.TestSuite()])
 
-            # audioop should now be loaded, thanks to loadTestsFromName()
+            # module should now be loaded, thanks to loadTestsFromName()
             self.assertIn(module_name, sys.modules)
         finally:
             if module_name in sys.modules:
@@ -1,6 +1,6 @@
 import io
 import sys
 import warnings
 import textwrap
 
 from test import support
 
@@ -25,6 +25,8 @@ class Test_TestResult(unittest.TestCase):
         self.assertEqual(len(result.failures), 0)
         self.assertEqual(result.testsRun, 0)
         self.assertEqual(result.shouldStop, False)
+        self.assertIsNone(result._stdout_buffer)
+        self.assertIsNone(result._stderr_buffer)
 
         # "This method can be called to signal that the set of tests being
         # run should be aborted by setting the TestResult's shouldStop
 
@@ -302,6 +304,8 @@ def __init__(self, stream=None, descriptions=None, verbosity=None):
     self.errors = []
     self.testsRun = 0
     self.shouldStop = False
+    self.buffer = False
 
 classDict['__init__'] = __init__
 OldResult = type('OldResult', (object,), classDict)
 
 
@@ -355,3 +359,129 @@ class Test_OldTestResult(unittest.TestCase):
         # This will raise an exception if TextTestRunner can't handle old
         # test result objects
         runner.run(Test('testFoo'))
+
+
+class TestOutputBuffering(unittest.TestCase):
+
+    def setUp(self):
+        self._real_out = sys.stdout
+        self._real_err = sys.stderr
+
+    def tearDown(self):
+        sys.stdout = self._real_out
+        sys.stderr = self._real_err
+
+    def testBufferOutputOff(self):
+        real_out = self._real_out
+        real_err = self._real_err
+
+        result = unittest.TestResult()
+        self.assertFalse(result.buffer)
+
+        self.assertIs(real_out, sys.stdout)
+        self.assertIs(real_err, sys.stderr)
+
+        result.startTest(self)
+
+        self.assertIs(real_out, sys.stdout)
+        self.assertIs(real_err, sys.stderr)
+
+    def testBufferOutputStartTestAddSuccess(self):
+        real_out = self._real_out
+        real_err = self._real_err
+
+        result = unittest.TestResult()
+        self.assertFalse(result.buffer)
+
+        result.buffer = True
+
+        self.assertIs(real_out, sys.stdout)
+        self.assertIs(real_err, sys.stderr)
+
+        result.startTest(self)
+
+        self.assertIsNot(real_out, sys.stdout)
+        self.assertIsNot(real_err, sys.stderr)
+        self.assertIsInstance(sys.stdout, io.StringIO)
+        self.assertIsInstance(sys.stderr, io.StringIO)
+        self.assertIsNot(sys.stdout, sys.stderr)
+
+        out_stream = sys.stdout
+        err_stream = sys.stderr
+
+        result._original_stdout = io.StringIO()
+        result._original_stderr = io.StringIO()
+
+        print('foo')
+        print('bar', file=sys.stderr)
+
+        self.assertEqual(out_stream.getvalue(), 'foo\n')
+        self.assertEqual(err_stream.getvalue(), 'bar\n')
+
+        self.assertEqual(result._original_stdout.getvalue(), '')
+        self.assertEqual(result._original_stderr.getvalue(), '')
+
+        result.addSuccess(self)
+        result.stopTest(self)
+
+        self.assertIs(sys.stdout, result._original_stdout)
+        self.assertIs(sys.stderr, result._original_stderr)
+
+        self.assertEqual(result._original_stdout.getvalue(), '')
+        self.assertEqual(result._original_stderr.getvalue(), '')
+
+        self.assertEqual(out_stream.getvalue(), '')
+        self.assertEqual(err_stream.getvalue(), '')
+
+
+    def getStartedResult(self):
+        result = unittest.TestResult()
+        result.buffer = True
+        result.startTest(self)
+        return result
+
+    def testBufferOutputAddErrorOrFailure(self):
+        for message_attr, add_attr, include_error in [
+            ('errors', 'addError', True),
+            ('failures', 'addFailure', False),
+            ('errors', 'addError', True),
+            ('failures', 'addFailure', False)
+        ]:
+            result = self.getStartedResult()
+            buffered_out = sys.stdout
+            buffered_err = sys.stderr
+            result._original_stdout = io.StringIO()
+            result._original_stderr = io.StringIO()
+
+            print('foo', file=sys.stdout)
+            if include_error:
+                print('bar', file=sys.stderr)
+
+
+            addFunction = getattr(result, add_attr)
+            addFunction(self, (None, None, None))
+            result.stopTest(self)
+
+            result_list = getattr(result, message_attr)
+            self.assertEqual(len(result_list), 1)
+
+            test, message = result_list[0]
+            expectedOutMessage = textwrap.dedent("""
+                Stdout:
+                foo
+            """)
+            expectedErrMessage = ''
+            if include_error:
+                expectedErrMessage = textwrap.dedent("""
+                    Stderr:
+                    bar
+                """)
+            expectedFullMessage = 'NoneType\n%s%s' % (expectedOutMessage, expectedErrMessage)
+
+            self.assertIs(test, self)
+            self.assertEqual(result._original_stdout.getvalue(), expectedOutMessage)
+            self.assertEqual(result._original_stderr.getvalue(), expectedErrMessage)
+            self.assertMultiLineEqual(message, expectedFullMessage)
+
+if __name__ == '__main__':
+    unittest.main()