* bpo-26185: Fix repr() on empty ZipInfo object
repr() was failing with AttributeError because the required attributes
file_size and compress_size did not exist. They are now initialized
to 0 in ZipInfo.__init__(); a minimal sketch follows this list.
* Remove useless hasattr() in ZipInfo._open_to_write()
* Completely remove file_size setting in _open_to_write().
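A minimal sketch of the behavior (the attribute values shown are
illustrative):

    import zipfile

    info = zipfile.ZipInfo()  # an "empty" entry, nothing written yet
    # Used to raise AttributeError; now prints something like
    # <ZipInfo filename='NoName' file_size=0>
    print(repr(info))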
The ssl_collect_certificates() function in _ssl.c has a memory leak.
Calling CertOpenStore() and then CertAddStoreToCollection() increments
a store's reference count by 2, but CertCloseStore() is called only
once, leaving the count at 1.
winerror_to_errno() is no longer automatically generated.
Do not rely on the old _dosmapperr() function.
Add ERROR_NO_UNICODE_TRANSLATION (1113) -> EILSEQ.
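On Windows, the mapping is observable from Python: when OSError is
constructed with a winerror argument, errno is derived from it via
winerror_to_errno(). A small Windows-only check of the new entry:

    import errno

    # The fourth argument (winerror) overrides the first (errno),
    # which is recomputed from it on Windows.
    exc = OSError(0, "no unicode translation", None, 1113)
    assert exc.winerror == 1113        # ERROR_NO_UNICODE_TRANSLATION
    assert exc.errno == errno.EILSEQ   # the new mapping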
About 14 files that are actually tracked in the repo were covered by
the rules in .gitignore.
Git itself takes no notice of what .gitignore says about files that
it's already tracking... but the discrepancy can be confusing to a
human who adds a new file unexpectedly covered by these rules, as
well as to non-Git software that reads .gitignore but doesn't
implement this wrinkle in its semantics (e.g., `rg`).
Several of these are from rules that apply more broadly than
intended: for example, `Makefile` applies to `Doc/Makefile` and
`Tools/freeze/test/Makefile`, whereas `/Makefile` means only the
`Makefile` at the repo's root.
And the `Modules/Setup` rule simply wasn't updated after 961d54c5c.
https://bugs.python.org/issue37936
If FormatMessageW() is passed the FORMAT_MESSAGE_FROM_SYSTEM flag without FORMAT_MESSAGE_IGNORE_INSERTS, it will fail if there are insert sequences in the message definition.
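A ctypes sketch of the failure mode (Windows-only; error code 34,
ERROR_WRONG_DISK, is a system message whose definition contains
inserts such as %1):

    import ctypes

    FORMAT_MESSAGE_FROM_SYSTEM = 0x00001000
    FORMAT_MESSAGE_IGNORE_INSERTS = 0x00000200

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

    def system_message(code, ignore_inserts):
        flags = FORMAT_MESSAGE_FROM_SYSTEM
        if ignore_inserts:
            flags |= FORMAT_MESSAGE_IGNORE_INSERTS
        buf = ctypes.create_unicode_buffer(1024)
        n = kernel32.FormatMessageW(flags, None, code, 0,
                                    buf, len(buf), None)
        if n == 0:
            # Without FORMAT_MESSAGE_IGNORE_INSERTS, this is where a
            # message containing %1-style inserts makes the call fail.
            raise ctypes.WinError(ctypes.get_last_error())
        return buf[:n]

    print(system_message(34, ignore_inserts=True))   # succeeds
    print(system_message(34, ignore_inserts=False))  # raises OSError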
The gdb manual[1] says the following for "document":
The command commandname must already be defined.
[1] https://sourceware.org/gdb/current/onlinedocs/gdb/Define.html
And indeed when trying to use the gdbinit file with gdb 8.3, I get:
.../cpython/Misc/gdbinit:17: Error in sourced command file:
Undefined command: "pyo". Try "help".
Fix this by moving all documentation blocks after the define blocks.
This was introduced in GH-6384.
Restart lines now always start with '=' and never end with ' '. They
fill the width of the window unless that would require ending with
' ', since a trailing space could be wrapped onto a line by itself,
possibly confusing the user.
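A hypothetical sketch of that rule (restart_line and its parameters
are illustrative, not IDLE's actual code):

    def restart_line(width, filename=''):
        """Build a restart separator following the rule above."""
        tag = f" RESTART: {filename or 'Shell'} "
        if len(tag) >= width - 2:
            # Window too narrow to pad both sides: lead with '=' and
            # make sure the line never ends with ' '.
            return ('=' + tag).rstrip()
        pad = width - len(tag)
        left = pad // 2
        # Fill the window width with '=' on both sides of the tag.
        return '=' * left + tag + '=' * (pad - left)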
* Rename PyThreadState_DeleteCurrent()
to _PyThreadState_DeleteCurrent()
* Move it to the internal C API
Co-Authored-By: Carol Willing <carolcode@willingconsulting.com>
The purpose of the `unicodedata.is_normalized` function is to answer
the question `str == unicodedata.normalize(form, str)` more
efficiently than writing just that, by using the "quick check"
optimization described in UAX #15 of the Unicode standard.
However, it turns out the code doesn't implement the full algorithm
from the standard, and as a result we often miss the optimization and
end up having to compute the whole normalized string after all.
Implement the standard's algorithm. This greatly speeds up
`unicodedata.is_normalized` in many cases where our partial variant
of quick-check had been returning MAYBE and the standard algorithm
returns NO.
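For reference, the contract being optimized (U+F900 canonically
decomposes to U+8C48, so the string is not in NFD):

    import unicodedata

    s = "\uf900" * 1000
    # is_normalized answers this question without building the
    # normalized copy of the string:
    assert unicodedata.is_normalized("NFD", s) == \
        (s == unicodedata.normalize("NFD", s))
    assert not unicodedata.is_normalized("NFD", s)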
In a quick test on my desktop, the existing code takes about 4.4 ms/MB
(so 4.4 ns per byte) when the partial quick-check returns MAYBE and it
has to do the slow normalize-and-compare:
$ build.base/python -m timeit -s 'import unicodedata; s = "\uf900"*500000' \
-- 'unicodedata.is_normalized("NFD", s)'
50 loops, best of 5: 4.39 msec per loop
With this patch, it gets the answer instantly (58 ns) on the same 1 MB
string:
$ build.dev/python -m timeit -s 'import unicodedata; s = "\uf900"*500000' \
-- 'unicodedata.is_normalized("NFD", s)'
5000000 loops, best of 5: 58.2 nsec per loop
This restores a small optimization that the original version of this
code had for the `unicodedata.normalize` use case.
With this, that case is actually faster than in master!
$ build.base/python -m timeit -s 'import unicodedata; s = "\u0338"*500000' \
-- 'unicodedata.normalize("NFD", s)'
500 loops, best of 5: 561 usec per loop
$ build.dev/python -m timeit -s 'import unicodedata; s = "\u0338"*500000' \
-- 'unicodedata.normalize("NFD", s)'
500 loops, best of 5: 512 usec per loop