* bpo-39648: Expand math.gcd() and math.lcm() to handle multiple arguments.
* Simplify fast path.
* Define lcm() without arguments returning 1.
* Apply suggestions from code review
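A quick sketch of the resulting behavior (assumes a build with the expanded math module, Python 3.9+):
```
import math

# Both functions now accept any number of integer arguments.
print(math.gcd(12, 18, 24))  # 6
print(math.lcm(4, 6, 10))    # 60

# With no arguments, lcm() returns 1 (the identity element for lcm);
# gcd() with no arguments returns 0.
print(math.lcm())            # 1
```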
Co-authored-by: Mark Dickinson <dickinsm@gmail.com>
The truncate() method of io.BufferedReader should raise
UnsupportedOperation when it is called on a read-only
io.BufferedReader instance.
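A minimal sketch of the intended behavior (the file name here is hypothetical):
```
import io

# A file opened in "rb" mode is wrapped in a read-only io.BufferedReader.
with open("example.bin", "rb") as reader:
    try:
        reader.truncate(0)
    except io.UnsupportedOperation:
        print("truncate() rejected on a read-only stream")
```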
https://bugs.python.org/issue35950
Automerge-Triggered-By: @methane
When `allow_abbrev` was first added, disabling the abbreviation of
long options broke the grouping of short flags ([bpo-26967](https://bugs.python.org/issue26967)). As a fix,
b1e4d1b603 (contained in v3.8) ignores `allow_abbrev=False` for a
given argument string if the string does _not_ start with "--"
(i.e. it doesn't look like a long option).
This fix, however, doesn't take into account that long options can
start with alternative characters specified via `prefix_chars`,
introducing a regression: `allow_abbrev=False` has no effect on long
options that start with an alternative prefix character.
The smallest fix would be to replace the "starts with --" check
with a "starts with two prefix_chars characters" check. But
`_get_option_tuples` already distinguishes between long and short
options, so let's instead piggyback off of that check by moving the
`allow_abbrev` condition into `_get_option_tuples`.
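A minimal sketch of the regression (the option name and prefix character are made up for illustration):
```
import argparse

# With a non-default prefix character, allow_abbrev=False should still
# disable abbreviation; before this fix it was silently ignored here.
parser = argparse.ArgumentParser(prefix_chars="+", allow_abbrev=False)
parser.add_argument("++verbose", action="store_true")

try:
    parser.parse_args(["++verb"])  # abbreviation of ++verbose
except SystemExit:
    print("abbreviation rejected, as expected after the fix")
```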
https://bugs.python.org/issue39546
* Hard reset + cherry-picking the changes.
* 📜🤖 Added by blurb_it.
* Added @vstinner News
* Update Misc/NEWS.d/next/Library/2020-02-11-13-01-38.bpo-38691.oND8Sk.rst
Co-Authored-By: Victor Stinner <vstinner@python.org>
* Hard reset to master
* Hard reset to master + latest changes
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Victor Stinner <vstinner@python.org>
As reported initially by @rad-pat in #6084, the following script causes a deadlock.
```
from concurrent.futures import ProcessPoolExecutor

class ObjectWithPickleError:
    """Triggers a RuntimeError when sending a job to the workers"""

    def __reduce__(self):
        raise RuntimeError()

if __name__ == "__main__":
    e = ProcessPoolExecutor()
    f = e.submit(id, ObjectWithPickleError())
    e.shutdown(wait=False)
    f.result()  # Deadlock on get
```
This is caused by the fact that the main process closes communication channels that the `queue_management_thread` might still need later. To avoid this, this PR lets the `queue_management_thread` manage all the closing.
https://bugs.python.org/issue39104
Automerge-Triggered-By: @pitrou
The fix for [bpo-39386](https://bugs.python.org/issue39386) attempted to make it so you couldn't reuse an
agen.aclose() coroutine object. It accidentally also prevented you
from calling aclose() at all on an async generator that was already
closed or exhausted. This commit fixes it so we're only blocking the
actually illegal cases, while allowing the legal cases.
The new tests failed before this patch. Also confirmed that this fixes
the test failures we were seeing in Trio with Python dev builds:
https://github.com/python-trio/trio/pull/1396
https://bugs.python.org/issue39606
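A minimal sketch of one of the now-legal cases (re-closing an already-closed async generator):
```
import asyncio

async def agen():
    yield 1

async def main():
    g = agen()
    await g.asend(None)  # start the generator
    await g.aclose()     # close it
    await g.aclose()     # closing again is legal and a no-op

asyncio.run(main())
```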
The GNU docs describe the `devmajor` and `devminor` fields of the tar
header struct only in the context of character and block special files,
suggesting that in other cases they are not populated. Typical utilities
behave accordingly; this patch teaches `tarfile` to do the same.
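A sketch of how one might observe the change (field offsets per the ustar header layout; the member name is made up):
```
import io
import tarfile

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w", format=tarfile.GNU_FORMAT) as tar:
    tar.addfile(tarfile.TarInfo("regular.txt"))  # a regular file member

header = buf.getvalue()[:512]
# devmajor and devminor occupy bytes 329-337 and 337-345 of the header;
# for a non-device member they are now left blank (NULs) rather than
# encoded as octal zero.
print(header[329:345])
```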
bpo-21016, bpo-1294959: The pydoc and trace modules now use the
sysconfig module to get the path to the Python standard library, to
support uncommon installation paths like /usr/lib64/python3.9/ on
Fedora.
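A one-liner sketch of the underlying mechanism:
```
import sysconfig

# Reports the platform's actual stdlib location, e.g.
# /usr/lib64/python3.9 on Fedora rather than a hard-coded /usr/lib path.
print(sysconfig.get_path("stdlib"))
```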
Co-Authored-By: Jan Matějek <jmatejek@suse.com>
* Improve zipfile.Path performance on zipfiles with a large number of entries.
* 📜🤖 Added by blurb_it.
* Add bpo to blurb
* Sync with importlib_metadata 1.5 (6fe70ca)
* Update blurb.
* Remove compatibility code
* Add stubs module, omitted from earlier commit
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
The bulk of this patch was generated automatically with:
```
for name in \
    PyObject_Vectorcall \
    Py_TPFLAGS_HAVE_VECTORCALL \
    PyObject_VectorcallMethod \
    PyVectorcall_Function \
    PyObject_CallOneArg \
    PyObject_CallMethodNoArgs \
    PyObject_CallMethodOneArg \
;
do
    echo $name
    git grep -lwz _$name | xargs -0 sed -i "s/\b_$name\b/$name/g"
done

old=_PyObject_FastCallDict
new=PyObject_VectorcallDict
git grep -lwz $old | xargs -0 sed -i "s/\b$old\b/$new/g"
```
and then cleaned up:
- Revert changes to docs & news
- Revert changes to backcompat defines in headers
- Nudge misaligned comments
Fix a regression in fractions.Fraction when the numerator and/or the
denominator is an int subclass. The math.gcd() function is now
used to normalize the numerator and denominator. math.gcd() always
returns an int; previously, the type of the GCD depended on the types
of the numerator and denominator.
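A quick illustration with a hypothetical int subclass:
```
from fractions import Fraction

class MyInt(int):
    pass

# Normalization now goes through math.gcd(), which always returns a
# plain int, so the reduced numerator and denominator are plain ints.
f = Fraction(MyInt(10), MyInt(4))
print(f)                  # 5/2
print(type(f.numerator))  # <class 'int'>
```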
Some numerator types used (specifically NumPy's) decide not to
return a Python boolean for the "a != b" operation. Wrapping the
comparison in an equivalent bool() call guarantees a bool return for
such types as well.
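A short sketch of the issue (assumes NumPy is installed):
```
import numpy as np

a, b = np.int64(3), np.int64(4)
print(type(a != b))        # a NumPy bool scalar, not the built-in bool
print(type(bool(a != b)))  # <class 'bool'>
```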
* bpo-39491: Merge PEP 593 (typing.Annotated) support
PEP 593 was accepted some time ago. I got a green light for merging
this from Till, so I went ahead and combined the code contributed to
typing_extensions[1] and the documentation from the PEP 593 text[2].
My changes were limited to:
* removing code designed for typing_extensions to run on older Python
versions
* removing some irrelevant parts of the PEP text when copying it over as
  documentation, and otherwise changing a few small bits to better serve
  the purpose
* changing the get_type_hints signature to match reality (parameter
names)
I wasn't entirely sure how to go about crediting the authors, but I used
my best judgment; let me know if something needs changing in this
regard.
[1] 8280de241f/typing_extensions/src_py3/typing_extensions.py
[2] 17710b8798/pep-0593.rst
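A minimal sketch of the feature as merged:
```
from typing import Annotated, get_type_hints

# Annotated attaches arbitrary metadata to a type without changing it.
def process(delay: Annotated[int, "units: seconds"]) -> None:
    ...

print(get_type_hints(process))
# {'delay': <class 'int'>, 'return': <class 'NoneType'>}
print(get_type_hints(process, include_extras=True))
# {'delay': typing.Annotated[int, 'units: seconds'], 'return': <class 'NoneType'>}
```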
When called on a closed object, readinto() segfaults on account
of a write to a freed buffer:
```
==220553== Process terminating with default action of signal 11 (SIGSEGV): dumping core
==220553==  Access not within mapped region at address 0x2A
==220553==    at 0x48408A0: memmove (vg_replace_strmem.c:1272)
==220553==    by 0x58DB0C: _buffered_readinto_generic (bufferedio.c:972)
==220553==    by 0x58DCBA: _io__Buffered_readinto_impl (bufferedio.c:1053)
==220553==    by 0x58DCBA: _io__Buffered_readinto (bufferedio.c.h:253)
```
Reproducer:
```
reader = open("/dev/zero", "rb")
_void = reader.read(42)
reader.close()
reader.readinto(bytearray(42))  ### BANG!
```
The problem has existed since 2012, when commit dc469454ec added code
to free the read buffer on close().
Signed-off-by: Philipp Gesang <philipp.gesang@intra2net.com>
Currently, during runtime destruction, `_PyImport_Cleanup` is clearing the interpreter state before clearing out the modules themselves. This leads to a segfault on modules that rely on the module state to clear themselves up.
For example, let's take the small snippet added in the issue by @DinoV :
```
import _struct

class C:
    def __init__(self):
        self.pack = _struct.pack

    def __del__(self):
        self.pack('I', -42)

_struct.x = C()
```
The module `_struct` uses the module state to run `pack`. Therefore, the module state has to be alive until after the module has been cleared out to successfully run `C.__del__`. This happens at line 606, when `_PyImport_Cleanup` calls `_PyModule_Clear`. In fact, the loop that calls `_PyModule_Clear` has in its comments:
> Now, if there are any modules left alive, clear their globals to minimize potential leaks. All C extension modules actually end up here, since they are kept alive in the interpreter state.
That means that we can't clear the module state (which is used by C Extensions) before we run that loop.
Moving `_PyInterpreterState_ClearModules` until after that loop fixes the segfault in the code snippet.
Finally, this updates a test in `io` to correctly assert the error that it now raises (since it now finds the io module state). The test in question is `test_create_at_shutdown_without_encoding`. That this test now passes is proof that the module state stays alive even when `__del__` is called at module destruction time, so I didn't add a new test for this.
https://bugs.python.org/issue38076