weakref.WeakValueDictionary defines a local remove() function used as
callback for weak references. This function was created with a
closure. Modify the implementation to avoid the closure.
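A minimal sketch of the technique, not the stdlib's exact code: instead of letting the callback pick up names from the enclosing scope as closure cells, bind everything it needs as default arguments when the function object is created. The Registry/_KeyedRef names below are illustrative; weakref.KeyedRef is the stdlib analogue.

    import weakref

    class _KeyedRef(weakref.ref):
        """Weak reference that remembers its dictionary key (modeled on
        weakref.KeyedRef)."""
        __slots__ = ("key",)

        def __new__(cls, ob, callback, key):
            self = super().__new__(cls, ob, callback)
            self.key = key
            return self

        def __init__(self, ob, callback, key):
            super().__init__(ob, callback)

    class Registry:
        """Toy weak-value mapping with a closure-free removal callback."""

        def __init__(self):
            self.data = {}
            # No closure cells: the weak self-reference is bound as a default
            # argument at definition time, and it also keeps the callback from
            # keeping the Registry itself alive.
            def remove(wr, selfref=weakref.ref(self)):
                self = selfref()
                if self is not None:
                    self.data.pop(wr.key, None)
            self._remove = remove

        def __setitem__(self, key, value):
            self.data[key] = _KeyedRef(value, self._remove, key)

        def __getitem__(self, key):
            obj = self.data[key]()
            if obj is None:
                raise KeyError(key)
            return obj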
This is the converse of GH-15353 -- in addition to the many scripts
in the tree that are marked with the executable bit (and so can be
executed directly), there are a few that have a leading `#!` line,
which would let them be executed directly except that the
executable bit isn't set, so the `#!` does nothing.
Here's a command which finds such files and marks them. The
first line finds files in the tree with a `#!` line *anywhere*;
the next-to-last step checks that the *first* line is actually of
that form. In between we filter out files that already have the
bit set, and some files that are meant as fragments to be
consumed by one or another kind of preprocessor.
$ git grep -l '^#!' \
    | grep -vxFf <( \
        git ls-files --stage \
          | perl -lane 'print $F[3] if (!/^100644/)' \
      ) \
    | grep -ve '\.in$' -e '^Doc/includes/' \
    | while read f; do
        head -c2 "$f" | grep -qxF '#!' \
          && chmod a+x "$f"; \
      done
* Update documentation for plistlib
- Update "Mac OS X" to "Apple" since plists are used more widely than just macOS
- Re-add the UID class documentation (oops, removed in GH-15615)
* bpo-26185: Fix repr() on empty ZipInfo object
It was failing with an AttributeError because the required
attributes file_size and compress_size did not exist.
They are now initialized to 0 in ZipInfo.__init__()
(see the quick illustration after this list).
* Remove useless hasattr() in ZipInfo._open_to_write()
* Completely remove file_size setting in _open_to_write().
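A quick way to see the change (the filename below is just illustrative):

    import zipfile

    # With the fix, repr() works on a ZipInfo that has not yet been attached
    # to an archive: file_size and compress_size default to 0 instead of only
    # appearing once the entry is read from or written to a zip file.
    info = zipfile.ZipInfo("example.txt")
    print(repr(info))        # previously raised AttributeError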
The ssl_collect_certificates() function in _ssl.c has a memory leak.
Calling CertOpenStore() and CertAddStoreToCollection() increments a store's reference count by 2 in total,
but CertCloseStore() is called only once, leaving the count at 1.
winerror_to_errno() is no longer automatically generated.
Do not rely on the old _dosmapperr() function.
Add ERROR_NO_UNICODE_TRANSLATION (1113) -> EILSEQ.
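A hedged, Windows-only illustration of the new entry: constructing an OSError with a winerror makes CPython derive errno through this mapping, so code 1113 should now surface as EILSEQ.

    import errno

    # Windows-only: when OSError gets a winerror (4th argument), the errno
    # argument is ignored and errno is derived from the Windows error code.
    exc = OSError(0, "no mapping for the Unicode character", None, 1113)
    assert exc.errno == errno.EILSEQ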
There were about 14 files that are tracked in the repo but are
covered by the rules in .gitignore.
Git itself takes no notice of what .gitignore says about files that
it's already tracking... but the discrepancy can be confusing to a
human who adds a new file unexpectedly covered by these rules, as
well as to non-Git software that looks at .gitignore but doesn't
implement this wrinkle in its semantics. (E.g., `rg`.)
Several of these are from rules that apply more broadly than
intended: for example, `Makefile` applies to `Doc/Makefile` and
`Tools/freeze/test/Makefile`, whereas `/Makefile` means only the
`Makefile` at the repo's root.
And the `Modules/Setup` rule simply wasn't updated after 961d54c5c.
https://bugs.python.org/issue37936
If FormatMessageW() is passed the FORMAT_MESSAGE_FROM_SYSTEM flag without FORMAT_MESSAGE_IGNORE_INSERTS, it will fail if there are insert sequences in the message definition.
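A hedged ctypes sketch (Windows-only; the flag values are the Win32 constants) of the safer call, where passing FORMAT_MESSAGE_IGNORE_INSERTS makes messages containing insert sequences (%1, %2, ...) come back literally instead of making the call fail:

    import ctypes

    FORMAT_MESSAGE_FROM_SYSTEM = 0x00001000
    FORMAT_MESSAGE_IGNORE_INSERTS = 0x00000200

    def system_message(code):
        """Return the system message text for a Windows error code."""
        buf = ctypes.create_unicode_buffer(2048)
        n = ctypes.windll.kernel32.FormatMessageW(
            FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
            None,        # lpSource: unused with FROM_SYSTEM
            code,        # dwMessageId
            0,           # dwLanguageId: let the system choose
            buf,         # lpBuffer (fixed buffer, no ALLOCATE_BUFFER)
            len(buf),    # nSize, in characters
            None,        # Arguments: safe to omit thanks to IGNORE_INSERTS
        )
        return buf[:n].strip() if n else None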
The gdb manual[1] says the following for "document":
The command commandname must already be defined.
[1] https://sourceware.org/gdb/current/onlinedocs/gdb/Define.html
And indeed when trying to use the gdbinit file with gdb 8.3, I get:
.../cpython/Misc/gdbinit:17: Error in sourced command file:
Undefined command: "pyo". Try "help".
Fix this by moving all documentation blocks after the define blocks.
This was introduced in GH-6384.
Restart lines now always start with '=', never end with ' ', and fill the width of the window unless doing so would require ending with ' ', which could wrap onto a line by itself and possibly confuse the user.
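A hedged sketch of the formatting rule (not IDLE's actual code; width and filename are illustrative parameters):

    def restart_line(width, filename=''):
        """Build a restart banner that starts with '=', fills the window
        width when there is room, and never ends with ' '."""
        tag = f"RESTART: {filename or 'Shell'}"
        if len(tag) + 4 > width:          # too narrow to pad on both sides
            return f"= {tag} ="
        return f" {tag} ".center(width, '=')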
* Rename PyThreadState_DeleteCurrent()
to _PyThreadState_DeleteCurrent()
* Move it to the internal C API
Co-Authored-By: Carol Willing <carolcode@willingconsulting.com>
The purpose of the `unicodedata.is_normalized` function is to answer
the question `str == unicodedata.normalize(form, str)` more
efficiently than writing just that, by using the "quick check"
optimization described in the Unicode standard in UAX #15.
However, it turns out the code doesn't implement the full algorithm
from the standard, and as a result we often miss the optimization and
end up having to compute the whole normalized string after all.
Implement the standard's algorithm. This greatly speeds up
`unicodedata.is_normalized` in many cases where our partial variant
of quick-check had been returning MAYBE and the standard algorithm
returns NO.
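For reference, the standard's algorithm is short. A hedged Python sketch, using unicodedata.combining() for the canonical combining class and assuming a helper quick_check_prop() for the per-character NFC/NFD/NFKC/NFKD Quick_Check property (the C code reads that property from unicodedata's internal tables):

    import unicodedata

    YES, MAYBE, NO = "yes", "maybe", "no"

    def quick_check(form, s):
        """UAX #15 quick check: detect unnormalized text without normalizing."""
        last_ccc = 0
        result = YES
        for ch in s:
            ccc = unicodedata.combining(ch)
            if ccc != 0 and last_ccc > ccc:
                return NO                 # combining marks out of canonical order
            check = quick_check_prop(form, ch)   # YES, MAYBE, or NO (assumed helper)
            if check == NO:
                return NO
            if check == MAYBE:
                result = MAYBE            # keep scanning; a later char may force NO
            last_ccc = ccc
        return result

is_normalized can then return True on YES, False on NO, and only fall back to the normalize-and-compare slow path on MAYBE.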
In a quick test on my desktop, the existing code takes about 4.4 ms/MB
(so 4.4 ns per byte) when the partial quick-check returns MAYBE and it
has to do the slow normalize-and-compare:
$ build.base/python -m timeit -s 'import unicodedata; s = "\uf900"*500000' \
-- 'unicodedata.is_normalized("NFD", s)'
50 loops, best of 5: 4.39 msec per loop
With this patch, it gets the answer instantly (58 ns) on the same 1 MB
string:
$ build.dev/python -m timeit -s 'import unicodedata; s = "\uf900"*500000' \
-- 'unicodedata.is_normalized("NFD", s)'
5000000 loops, best of 5: 58.2 nsec per loop
This restores a small optimization that the original version of this
code had for the `unicodedata.normalize` use case.
With this, that case is actually faster than in master!
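Roughly, as a sketch reusing the quick_check() above (the slow-path helper name is made up):

    def normalize(form, s):
        # Fast path: if the quick check already proves the string is in the
        # requested form, return it unchanged and skip the full pass.
        if quick_check(form, s) == YES:
            return s
        return full_normalization(form, s)   # assumed slow-path helper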
$ build.base/python -m timeit -s 'import unicodedata; s = "\u0338"*500000' \
-- 'unicodedata.normalize("NFD", s)'
500 loops, best of 5: 561 usec per loop
$ build.dev/python -m timeit -s 'import unicodedata; s = "\u0338"*500000' \
-- 'unicodedata.normalize("NFD", s)'
500 loops, best of 5: 512 usec per loop
Extending the hover delay in test_tooltip should avoid spurious test_idle failures.
One longer delay instead of two shorter delays results in a net speedup.
Fixes a case in which email._header_value_parser.get_unstructured hangs on some invalid headers. This covers the cases in which the header contains either:
- a value with no trailing whitespace, or
- an invalid encoded word
This fix should also be backported to 3.7 and 3.8
https://bugs.python.org/issue37764