Remove io.OpenWrapper and _pyio.OpenWrapper, deprecated in Python
3.10: just use :func:`open` instead. The open() (io.open()) function
is a built-in function. Since Python 3.10, _pyio.open() is also a
static method.
`TextIOWrapper.__init__()` called `os.device_encoding(file.fileno())` if the fileno was 0-2 and encoding=None.
But this very rarely worked, and it was never documented behavior.
open(), io.open(), codecs.open() and fileinput.FileInput no longer
accept "U" ("universal newline") in the file mode. This flag had been
deprecated since Python 3.3.
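A brief sketch of the new behavior (assuming a Python version where this removal applies; `example.txt` is just a hypothetical file name):

```
try:
    open("example.txt", "rU")
except ValueError as exc:
    # The mode string is validated before the file is opened, so the
    # rejected "U" flag raises ValueError up front.
    print(exc)

# Universal newline handling is the default in text mode, so plain "r"
# (or "rt") is all that is needed:
# open("example.txt", "r")
```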
Deprecate io.OpenWrapper and _pyio.OpenWrapper: use io.open and
_pyio.open instead. Until Python 3.9, _pyio.open was not a static
method, and builtins.open was set to OpenWrapper so that it would not
become a bound method when stored as a class variable. _io.open is a
built-in function whereas _pyio.open is a Python function. In Python
3.10, _pyio.open() is now a static method, and builtins.open() is now
io.open().
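A small sketch of why OpenWrapper existed and why it is no longer needed, given the Python 3.10 semantics described above (the `Example` class is purely illustrative):

```
import io
import _pyio

class Example:
    # io.open is a built-in function, so storing it as a class attribute
    # never turns it into a bound method.
    open = io.open
    # Since Python 3.10, _pyio.open is a staticmethod, so it behaves the
    # same way; OpenWrapper is no longer needed for this.
    py_open = _pyio.open

# Both attributes call open() normally, without the instance being passed
# as the "file" argument, e.g.:
# Example().open("some_file.txt")
# Example().py_open("some_file.txt")
```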
See [PEP 597](https://www.python.org/dev/peps/pep-0597/).
* Add `-X warn_default_encoding` and `PYTHONWARNDEFAULTENCODING`.
* Add EncodingWarning
* Add io.text_encoding() (see the sketch after this list)
* open() and TextIOWrapper() emit EncodingWarning when encoding is omitted and warn_default_encoding is enabled.
* _pyio.TextIOWrapper() falls back to UTF-8 as the default encoding when the locale module cannot be imported (which happens while building Python).
* bz2, configparser, gzip, lzma, pathlib, tempfile modules use io.text_encoding().
* What's new entry
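A minimal sketch of how a library can use the new helper (the `read_text` wrapper below is a hypothetical example, not part of this change):

```
import io

def read_text(path, encoding=None):
    # io.text_encoding() returns the given encoding unchanged if it is not
    # None; otherwise it returns "locale" and, when -X warn_default_encoding
    # or PYTHONWARNDEFAULTENCODING is enabled, emits an EncodingWarning
    # pointing at the caller of read_text().
    encoding = io.text_encoding(encoding)
    with open(path, encoding=encoding) as f:
        return f.read()
```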
When a very large amount of data remains in the TextIOWrapper, flush() may keep failing forever.
So prevent data larger than chunk_size from remaining in the TextIOWrapper's internal
buffer.
Co-Authored-By: Eryk Sun
The Py_FatalError() function is replaced with a macro which
automatically logs the name of the current function, unless the
Py_LIMITED_API macro is defined.
Changes:
* Add _Py_FatalErrorFunc() function.
* Remove the function name from the message of Py_FatalError() calls
which included the function name.
* Update tests.
The truncate() method of io.BufferedReader() should raise
UnsupportedOperation when it is called on a read-only
io.BufferedReader() instance.
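A minimal sketch of the expected behavior after this change, using an in-memory stream for illustration:

```
import io

reader = io.BufferedReader(io.BytesIO(b"some data"))
try:
    reader.truncate(0)
except io.UnsupportedOperation:
    # BufferedReader is read-only, so truncate() is rejected.
    print("truncate() is unsupported on a read-only buffered stream")
```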
https://bugs.python.org/issue35950
Automerge-Triggered-By: @methane
When called on a closed object, readinto() segfaults on account
of a write to a freed buffer:
==220553== Process terminating with default action of signal 11 (SIGSEGV): dumping core
==220553== Access not within mapped region at address 0x2A
==220553== at 0x48408A0: memmove (vg_replace_strmem.c:1272)
==220553== by 0x58DB0C: _buffered_readinto_generic (bufferedio.c:972)
==220553== by 0x58DCBA: _io__Buffered_readinto_impl (bufferedio.c:1053)
==220553== by 0x58DCBA: _io__Buffered_readinto (bufferedio.c.h:253)
Reproducer:
```
reader = open("/dev/zero", "rb")
_void = reader.read(42)
reader.close()
reader.readinto(bytearray(42))  ### BANG!
```
The problem has existed since 2012, when commit dc469454ec added code
to free the read buffer on close().
Signed-off-by: Philipp Gesang <philipp.gesang@intra2net.com>
Currently, during runtime destruction, `_PyImport_Cleanup` is clearing the interpreter state before clearing out the modules themselves. This leads to a segfault in modules that rely on the module state to clean themselves up.
For example, let's take the small snippet added in the issue by @DinoV:
```
import _struct

class C:
    def __init__(self):
        self.pack = _struct.pack

    def __del__(self):
        self.pack('I', -42)

_struct.x = C()
```
The module `_struct` uses the module state to run `pack`. Therefore, the module state has to be alive until after the module has been cleared out to successfully run `C.__del__`. This happens at line 606, when `_PyImport_Cleanup` calls `_PyModule_Clear`. In fact, the loop that calls `_PyModule_Clear` has in its comments:
> Now, if there are any modules left alive, clear their globals to minimize potential leaks. All C extension modules actually end up here, since they are kept alive in the interpreter state.
That means that we can't clear the module state (which is used by C Extensions) before we run that loop.
Moving `_PyInterpreterState_ClearModules` to after that loop fixes the segfault in the code snippet.
Finally, this updates a test in `io` to correctly assert the error that it now raises (since it now finds the io module state). The affected test is `test_create_at_shutdown_without_encoding`. The fact that this test now passes is proof that the module state stays alive even when `__del__` is called at module destruction time, so I didn't add a new test for this.
https://bugs.python.org/issue38076
In development mode and in debug builds, the encoding and errors arguments
are now checked for string encoding and decoding operations. Examples:
open(), str.encode() and bytes.decode().
By default, for best performance, the errors argument is only
checked at the first encoding/decoding error, and the encoding
argument is sometimes ignored for empty strings.
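A hedged illustration of the difference (the error-handler name below is deliberately bogus):

```
# In a release build without -X dev, the bogus error-handler name is
# accepted because no encoding error ever occurs, so it is never looked up:
"abc".encode("ascii", "this-error-handler-does-not-exist")   # -> b'abc'
b"abc".decode("ascii", "this-error-handler-does-not-exist")  # -> 'abc'

# Under "python -X dev" or in a debug build, both calls raise LookupError
# because the errors argument is now validated up front.
```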
Document reference cycle and resurrected objects issues in
sys.unraisablehook() and threading.excepthook() documentation.
Fix test.support.catch_unraisable_exception(): __exit__() no longer
ignores unraisable exceptions.
Fix test_io test_writer_close_error_on_close(): use a second
catch_unraisable_exception() to catch the BufferedWriter unraisable
exception.
Use catch_unraisable_exception() to ignore 'Exception ignored in:'
error when the internal BufferedWriter of the BufferedRWPair is
destroyed. The C implementation doesn't give access to the
internal BufferedWriter, so just ignore the warning instead.
In development (-X dev) mode and in a debug build, the IOBase finalizer
of the _pyio module now logs the exception if the close() method
fails. By default, in a release build, the exception is silently ignored.
test_io: test_error_through_destructor() now uses
support.catch_unraisable_exception() rather than capturing stderr.
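For reference, a minimal sketch of how catch_unraisable_exception() is used (assuming the helper from CPython's test.support; the `Broken` class is a made-up example):

```
from test import support

class Broken:
    def __del__(self):
        raise ValueError("error in __del__")

with support.catch_unraisable_exception() as cm:
    obj = Broken()
    del obj  # __del__ raises; the exception goes to sys.unraisablehook
    # Instead of printing "Exception ignored in: ...", the helper stores it:
    assert cm.unraisable.exc_type is ValueError
# On __exit__, the stored unraisable exception is cleared to break
# reference cycles.
```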