There are now at least two bytecodes that may attempt to optimize:
JUMP_BACK and, more recently, COLD_EXIT.
Only JUMP_BACK was counting the attempt in the stats.
This moves that counter into uop_optimize itself, so it is always
incremented no matter where the optimizer is called from.
The test case had a race condition: if `q.task_done()` was executed
after `shutdown(immediate=True)`, then it would raise an exception
because the immediate shutdown already emptied the queue. This happened
rarely with the GIL (due to the switching interval), but frequently in
the free-threaded build.
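For illustration, here is a minimal sketch of the racy pattern described above (Python 3.13+ for `Queue.shutdown()`; the worker function and exact interleaving are hypothetical):

```python
import queue
import threading

q = queue.Queue()
q.put("job")

def worker():
    q.get()
    # If the main thread runs shutdown(immediate=True) before this line,
    # the queue has already been emptied and task_done() raises.
    q.task_done()

t = threading.Thread(target=worker)
t.start()
q.shutdown(immediate=True)  # races with task_done() in the worker
t.join()
```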
The free-threaded GC only does full collections, so it uses a threshold that
is the maximum of a fixed value (default 2000) and a value proportional to the
number of live objects. If there are many live objects after the previous
collection, the threshold may be larger than 10,000, causing
`test_indirect_calls_with_gc_disabled` to fail.
This manually sets the threshold to `(1000, 0, 0)` for the test. The `0`
disables the proportional scaling.
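A hedged sketch of how a test might pin the threshold (the class name is illustrative; only the `gc.set_threshold()` call reflects the change described above):

```python
import gc
import unittest

class IndirectCallsWithGCDisabledTest(unittest.TestCase):  # illustrative name
    def setUp(self):
        # Pin the collection threshold for this test and restore the previous
        # values afterwards.
        old = gc.get_threshold()
        self.addCleanup(gc.set_threshold, *old)
        gc.set_threshold(1000, 0, 0)  # the 0 disables the proportional scaling
```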
test_match_tests now saves and restores patterns.
Add get_match_tests() function to libregrtest.filter.
Previously, running test_regrtest multiple times in a row ran the
tests only once: "./python -m test test_regrtest -R 3:3".
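A rough sketch of the save/restore pattern (assuming the filter module's existing set_match_tests() as the counterpart to the new get_match_tests(), and that it accepts the value get_match_tests() returns):

```python
from test.libregrtest.filter import get_match_tests, set_match_tests

def run_with_saved_patterns(func):
    # Save the current match patterns and restore them afterwards, so a
    # later run (e.g. with -R 3:3) starts from the same state.
    saved = get_match_tests()
    try:
        return func()
    finally:
        set_match_tests(saved)
```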
* GH-116554: Relax list.sort()'s notion of "descending" run
Rewrote `count_run()` so that sub-runs of equal elements no longer end a descending run. Both ascending and descending runs can have arbitrarily many sub-runs of arbitrarily many equal elements now. This is tricky, because we only use ``<`` comparisons, so checking for equality doesn't come "for free". Surprisingly, it turned out there's a very cheap (one comparison) way to determine whether an ascending run consisted of all-equal elements. That sealed the deal.
In addition, after a descending run is reversed in-place, we now go on to see whether it can be extended by an ascending run that just happens to be adjacent. This succeeds in finding at least one additional element to append about half the time, and so appears to more than repay its cost (the savings come from getting to skip a binary search, when a short run is artificially forced to length MINRUN later, for each new element `count_run()` can add to the initial run).
While these changes have been in the back of my mind for years, a question on StackOverflow pushed me to action:
https://stackoverflow.com/questions/78108792/
They were wondering why it took about 4x longer to sort a list like:
[999_999, 999_999, ..., 2, 2, 1, 1, 0, 0]
than "similar" lists. Of course that runs very much faster after this patch.
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Pieter Eendebak <pieter.eendebak@gmail.com>
gh-116307: Create a new import helper, 'isolated modules', and use it instead of 'CleanImport' to ensure that tests from importlib_resources don't leave modules in sys.modules.
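A rough sketch of what such a helper might look like (the real helper lives in the test support machinery and may differ in detail):

```python
import contextlib
import sys

@contextlib.contextmanager
def isolated_modules():
    # Snapshot sys.modules and restore it on exit, so anything imported
    # inside the block does not leak into later tests.
    saved = sys.modules.copy()
    try:
        yield
    finally:
        sys.modules.clear()
        sys.modules.update(saved)
```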
This isn't strictly necessary because the implementation of `gc_should_collect`
already checks `gcstate->enabled` in the free-threaded build, but it seems
like a good idea until the common pieces of gc.c and gc_free_threading.c are
refactored out.
* Also fix the global flag repr
* Update Misc/NEWS.d/next/Library/2024-03-11-12-11-10.gh-issue-116600.FcNBy_.rst
Co-authored-by: Kirill Podoprigora <kirill.bast9@mail.ru>