A root cause of bpo-37936 is that it's easy to write a .gitignore
rule that's intended to apply to a specific file (e.g., the
`pyconfig.h` generated by `./configure`) but actually applies to all
similarly-named files in the tree (e.g., `PC/pyconfig.h`).
Specifically, any rule with no non-trailing slashes is applied in an
"unrooted" way, to files anywhere in the tree. This means that if we
write the rules in the most obvious-looking way, then
* for specific files we want to ignore that happen to be in
subdirectories (like `Modules/config.c`), the rule will work
as intended, staying "rooted" to the top of the tree; but
* when a specific file we want to ignore happens to be at the root of
the repo (like `platform`), then the obvious rule (`platform`) will
apply much more broadly than intended: if someone tries to add a
file or directory named `platform` somewhere else in the tree, it
will unexpectedly get ignored.
That's surprising behavior that can make the .gitignore file feel
finicky and unpredictable.
To avoid it, we can simply always give a rule "rooted" behavior when
that's what's intended, by systematically using leading slashes.
Further, to help make the pattern obvious when looking at the file and
minimize any need for thinking about the syntax when adding new rules:
separate the rules into one group for each type, with brief comments
identifying them.
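For example, the grouped layout could look roughly like this (only a
few representative rules are shown, for illustration):

    # Unrooted: ignored anywhere in the tree.
    *.o
    __pycache__/

    # Rooted: the leading slash anchors the rule to the top of the repo.
    /platform
    /pyconfig.h
    /Modules/config.c
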
For most of these rules it's clear whether they're meant to be rooted
or unrooted, but in a handful of cases I've only guessed. In those
cases the safer default (the choice that won't hide information) is the
narrower, rooted meaning, with a leading slash. If for some of these
the unrooted meaning is desired after all, it'll be easy to move them
to the unrooted section at the top.
(cherry picked from commit 455122a009)
Co-authored-by: Greg Price <gnprice@gmail.com>
* Write a message when killing a worker process
* Put a timeout on the second popen.communicate() call
(after killing the process)
* Put a timeout on the popen.wait() call
* Catch popen.kill() and popen.wait() exceptions
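A minimal sketch of this kill-and-drain pattern (the command and the
timeout value are illustrative, not the ones used by regrtest):

    import subprocess

    JOIN_TIMEOUT = 30.0  # illustrative timeout, in seconds

    popen = subprocess.Popen(
        ["python3", "-c", "import time; time.sleep(60)"],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    try:
        stdout, stderr = popen.communicate(timeout=JOIN_TIMEOUT)
    except subprocess.TimeoutExpired:
        # Write a message when killing the worker process.
        print(f"Timeout: killing worker process {popen.pid}")
        try:
            popen.kill()
            # Second communicate() call, also bounded by a timeout,
            # to drain the pipes after the process has been killed.
            stdout, stderr = popen.communicate(timeout=JOIN_TIMEOUT)
            popen.wait(timeout=JOIN_TIMEOUT)
        except (OSError, subprocess.TimeoutExpired) as exc:
            print(f"Failed to kill/wait worker process {popen.pid}: {exc}")
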
(cherry picked from commit de2d9eed8b)
Mark some individual tests to skip when --pgo is used. The marked
tests increase the PGO task time significantly and likely don't
help improve optimization of the final executable.
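For illustration, a marked test might look like this (a sketch; the
class and test names are hypothetical, and test.support.PGO is the flag
regrtest sets for the --pgo task):

    import unittest
    from test import support

    @unittest.skipIf(support.PGO, "test is slow and not useful for PGO")
    class DeepRecursionTests(unittest.TestCase):  # hypothetical test class
        def test_deep_recursion(self):
            ...
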
(cherry picked from commit 52a48e62c6)
Co-authored-by: Neil Schemenauer <nas-github@arctrix.com>
Reduce the number of unit tests run for the PGO generation task. This
speeds up the task by a factor of about 15x. Running the full unit test
suite is slow. This change may result in a slightly less optimized build
since not as many code branches will be executed. If you are willing to
wait for the much slower build, the old behavior can be restored using
'./configure [..] PROFILE_TASK="-m test --pgo-extended"'. We make no
guarantees as to which PGO task set produces a faster build. Users who
care should run their own relevant benchmarks as results can depend on
the environment, workload, and compiler tool chain.
(cherry picked from commit 4e16a4a311)
Co-authored-by: Neil Schemenauer <nas-github@arctrix.com>
Under some conditions the earlier fix for bpo-18075, "Infinite recursion
tests triggering a segfault on Mac OS X", now causes failures on macOS
when attempting to change the stack limit with
resource.setrlimit(resource.RLIMIT_STACK, ...), as regrtest does when
running the test suite.
The reverted change had specified a non-default stack size when linking
the python executable on macOS. As of macOS 10.14.4, the previous
code causes a hard failure when running tests, although similar
failures had been seen under some conditions on some earlier
systems. Reverting the change to the interpreter stack size at link
time helped for release builds but caused some tests to fail when
built --with-pydebug. Try the opposite approach: continue to build
the interpreter with an increased stack size on macOS and remove
the failing setrlimit call in regrtest initialization. This will
definitely avoid the resource.RLIMIT_STACK error and should have
no, or fewer, side effects.
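The removed regrtest initialization code was roughly of this shape (a
simplified sketch, not the exact removed lines):

    import sys

    if sys.platform == 'darwin':
        try:
            import resource
            soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
            # Try to raise the soft stack limit; on macOS 10.14.4 this
            # setrlimit() call started failing, hence its removal.
            newsoft = min(hard, max(soft, 1024 * 2048))
            resource.setrlimit(resource.RLIMIT_STACK, (newsoft, hard))
        except (ImportError, ValueError):
            pass
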
(cherry picked from commit 5bbbc733e6)
Co-authored-by: Ned Deily <nad@python.org>
* regrtest: Add --cleanup option to remove "test_python_*" directories
of previous failed test jobs.
* Add "make cleantest" to run "python3 -m test --cleanup".
(cherry picked from commit 47fbc4e45b)
When using multiprocessing (-jN option), worker processes now create
their temporary directory inside the temporary directory of the
main process. This way, the main process can remove the temporary
directories of worker processes even if they crash or are killed by
regrtest on KeyboardInterrupt (CTRL+c).
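A minimal sketch of the directory layout idea (the directory name
prefixes are illustrative):

    import tempfile

    # Main process: one top-level temporary directory for the whole run.
    main_tmp_dir = tempfile.mkdtemp(prefix='test_python_')

    # Each -jN worker creates its own directory *inside* the main one, so
    # the main process can always remove everything, even if a worker
    # crashes or is killed.
    worker_tmp_dir = tempfile.mkdtemp(prefix='test_python_worker_',
                                      dir=main_tmp_dir)
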
Also rework how multiprocessing arguments are parsed in main.py.
"python3 -m test -jN ..." now continues running the remaining tests
when a worker process crashes (CHILD_ERROR state). Previously, the test
suite stopped immediately. Use --failfast to stop at the first error.
Moreover, --forever now also implies --failfast.
regrtest now always detects uncollectable objects. Previously, the
check was only enabled by --findleaks. The check now also works with
-jN/--multiprocess N.
--findleaks becomes a deprecated alias for --fail-env-changed.
Rewrite run_tests_multiprocess() function as a new MultiprocessRunner
class with multiple methods to better report errors and stop
immediately when needed.
Changes:
* Worker processes are now killed immediately if tests are
  interrupted or if a test crashes (CHILD_ERROR).
* Rewrite how errors in a worker thread are reported to
the main thread. No longer ignore BaseException or parsing errors
silently.
* Remove 'finished' variable: use worker.is_alive() instead
* Always compute omitted tests. Add Regrtest.get_executed() method.
* Add TestResult and MultiprocessResult types to ensure that results
always have the same fields.
* runtest() now handles KeyboardInterrupt
* accumulate_result() and format_test_result() now take a TestResult
* cleanup_test_droppings() is now called by runtest() and marks the
  test as ENV_CHANGED if the test leaks the support.TESTFN file.
* runtest() now includes code "around" the test in the test timing
* Add print_warning() in test.libregrtest.utils to standardize how
  libregrtest logs warnings and to ease parsing of the test output
  (see the sketch after this list).
* support.unload() is now called with abstest rather than test_name
* Rename 'test' variable/parameter to 'test_name'
* dash_R(): remove unused the_module parameter
* Remove unused imports
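As a sketch of the print_warning() idea mentioned above (simplified;
the actual helper lives in test.libregrtest.utils and its exact format
may differ):

    import sys

    def print_warning(msg):
        # One standardized prefix so tools parsing regrtest output can
        # spot warnings reliably; flush so the line isn't lost if the
        # process dies right after.
        print(f"Warning -- {msg}", file=sys.stderr, flush=True)
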
The dash_R() function of libregrtest no longer calls
support.gc_collect() directly: it's already called by dash_R_cleanup().
Call dash_R_cleanup() before starting the loop.
Fix reference leak hunting in regrtest: also compute deltas (of
reference count, allocated memory blocks, file descriptor count)
during warmup, to ensure that everything is initialized before
starting to hunt reference leaks.
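A simplified sketch of the warm-up idea (the helper names here are
illustrative, not the real dash_R() code):

    import gc
    import sys

    def _snapshot():
        # sys.gettotalrefcount() only exists in --with-pydebug builds;
        # fall back to 0 so this sketch also runs on release builds.
        gc.collect()
        return getattr(sys, 'gettotalrefcount', lambda: 0)()

    def leaks(test_func, nwarmup=5, ntracked=4):
        # Compute deltas for the warm-up iterations too, so lazy
        # initialization done by the first runs shows up here instead of
        # being mistaken for a reference leak later.
        deltas = []
        for _ in range(nwarmup + ntracked):
            before = _snapshot()
            test_func()
            deltas.append(_snapshot() - before)
        # Only the post-warm-up deltas decide whether the test leaks.
        return any(delta != 0 for delta in deltas[nwarmup:])
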
Other changes:
* Replace gc.collect() with support.gc_collect()
* Move calls to read memory statistics from dash_R_cleanup() to
dash_R()
* Pass regrtest 'ns' to dash_R()
* dash_R() is now quieter with the --quiet option (it doesn't display
  progress).
* Precompute the full range for "for it in range(repcount):" to
ensure that the iteration doesn't allocate anything new.
* dash_R() is now responsible for calling warm_caches().
While Windows exposes the system processor queue length, the raw value
used for load calculations on Unix systems, it does not provide an API
to access the averaged value. Hence to calculate the load we must track
and average it ourselves. We can't use multiprocessing or a thread to
read it in the background while the tests run since using those would
conflict with test_multiprocessing and test_xxsubprocess.
Thus, we use Windows' asynchronous IO API to run the tracker in the
background, sampling at the correct rate. When we wish to access the
load, we check whether there is new data on the stream; if there is,
we update our load values.
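The averaging step itself can use the same exponential decay as the
classic Unix 1-minute load average; a sketch (the sampling interval is
an assumption, and reading the raw processor queue length is not
shown):

    import math

    SAMPLING_INTERVAL = 5  # seconds between samples (illustrative)
    # Decay factor for a 1-minute exponential moving average.
    LOAD_FACTOR_1 = 1 / math.exp(SAMPLING_INTERVAL / 60)

    def update_load(load, queue_length):
        # Blend the newest processor-queue-length sample into the
        # running load value.
        return load * LOAD_FACTOR_1 + queue_length * (1.0 - LOAD_FACTOR_1)
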
TextTestRunner of unittest.runner now uses time.perf_counter() rather
than time.time() to measure the execution time of a test: time.time()
can go backwards, whereas time.perf_counter() is monotonic.
A similar change was made in libregrtest, pprint and random.
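For example, timing a test with the monotonic clock looks like this
(sketch):

    import time

    start = time.perf_counter()
    ...  # run the test here
    test_time = time.perf_counter() - start  # perf_counter() cannot go backwards
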
Fix a bug in `Lib/test/libregrtest/runtest.py` that caused tests to run one extra time beyond the specified number of runs.
Add check for invalid --huntrleaks/-R parameters.
If tests are re-run, use the "xxx then yyy" result format (e.g. "FAILURE
then SUCCESS") to show that some failing tests have been re-run.
Also add the test_regrtest.test_rerun_fail() test.
* "running:" progress: Format number of seconds as hours and minutes
* format_duration(): also convert minutes into hours
* Create Lib/test/libregrtest/utils.py
* No longer clear filters, like --match, when re-running failed tests
  in verbose mode (-w option).
* Tests result: always indicate if tests have been interrupted.
* Enhance tests summary
* Rename support._match_test() to support.match_test(): make it
public
* Remove support.match_tests global variable. It is replaced with a
new support.set_match_tests() function, so match_test() doesn't
have to check each time if patterns were modified.
* Rewrite match_test(): use different code paths depending on the
  kind of patterns, for best performance (see the sketch below).
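A rough sketch of those different code paths (simplified; the real
support.set_match_tests() differs in details):

    import fnmatch

    def make_match_test(patterns):
        # Literal names get a fast set lookup; only patterns that really
        # contain fnmatch metacharacters pay for pattern matching.
        literals = frozenset(p for p in patterns
                             if not any(c in p for c in '*?['))
        wildcards = [p for p in patterns if p not in literals]

        def match_test(test_id):
            return (test_id in literals
                    or any(fnmatch.fnmatch(test_id, p) for p in wildcards))
        return match_test
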
Co-Authored-By: Serhiy Storchaka <storchaka@gmail.com>
The kB (kilobyte) unit means 1000 bytes, whereas KiB ("kibibyte")
means 1024 bytes. KB was misused: replace kB or KB with KiB when
appropriate.
The same change applies to MB and GB, which become MiB and GiB.
Change the output of Tools/iobench/iobench.py.
Also round the size of the documentation from 5.5 MB to 5 MiB.
When regrtest is run inside IDLE, sys.stdout and sys.stderr are not
TextIOWrapper objects and have no associated file descriptor:
sys.stderr.fileno() raises io.UnsupportedOperation. Disable
faulthandler and don't replace sys.stdout in that case.
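A sketch of the resulting guard (simplified):

    import faulthandler
    import io
    import sys

    try:
        sys.stderr.fileno()
    except (ValueError, io.UnsupportedOperation):
        # Running under IDLE: sys.stderr has no file descriptor, so skip
        # faulthandler and leave sys.stdout untouched.
        pass
    else:
        faulthandler.enable(all_threads=True)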