A large file is sometimes truncated after it is opened (as in test_largefile). While
estimated_size is only used as an estimate (the read will still get the actual
number of bytes in the file), an estimate that is much larger than the real
amount of data can result in a significant over-allocation and sometimes lead to
a MemoryError / running out of memory.
This brings the C implementation in line with the Python _pyio
implementation.
This reduces the system call count of a simple program[0] that reads all
the `.rst` files in Doc by over 10% (5706 -> 4734 system calls on my
Linux machine, 5813 -> 4875 on my macOS machine).
This always reduces the number of `fstat()` calls, and reduces seek calls most
of the time. Previously `fstat()` was always called twice: once at open (to
error out early on directories), and a second time to get the size of the file
so the whole file could be read in one read. Now the size from the first call
is cached and reused.
The code keeps an existing optimization: if the user has already read a
lot of data, the current position is subtracted from the number of bytes
to read. Finding that position is somewhat expensive, so it is only done for
larger files; otherwise the code just tries to read the extra bytes and
resizes the PyBytes as needed.
I built a little test program to validate the behavior + assumptions
around relative costs and then ran it under `strace` to get a log of the
system calls. Full samples below[1].
After the changes, this is everything in one `filename.read_text()`:
```
openat(AT_FDCWD, "cpython/Doc/howto/clinic.rst", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=343, ...}) = 0
ioctl(3, TCGETS, 0x7ffdfac04b40) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(3, 0, SEEK_CUR) = 0
read(3, ":orphan:\n\n.. This page is retain"..., 344) = 343
read(3, "", 1) = 0
close(3) = 0
```
This does make some tradeoffs:
1. If the file size changes between open() and readall(), this will
still get all the data but might need more read calls (see the sketch
after this list).
2. I experimented with avoiding the stat + cached result for small files
in general, but on my dev workstation at least that tended to reduce
performance compared to using the fstat().
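And a rough sketch of the read loop behind tradeoff 1, again illustrative rather than the real implementation (the chunk size is a made-up value): if the file holds more data than estimated, for example because it grew after open(), the loop keeps issuing read() calls until an empty read confirms EOF, so no data is lost, only extra calls are made.
```python3
import os

_GROWTH_CHUNK = 64 * 1024  # hypothetical chunk size for the slow path

def read_to_eof(fd, nbytes):
    """Read until EOF even if the file holds more than `nbytes` bytes."""
    chunks = []
    want = nbytes
    while True:
        chunk = os.read(fd, want)
        if not chunk:          # empty read: end of file reached
            break
        chunks.append(chunk)
        # More data than estimated; keep going at the cost of extra reads.
        want = _GROWTH_CHUNK
    return b"".join(chunks)
```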
[0]
```python3
from pathlib import Path
nlines = []
for filename in Path("cpython/Doc").glob("**/*.rst"):
    nlines.append(len(filename.read_text()))
```
[1]
Before small file:
```
openat(AT_FDCWD, "cpython/Doc/howto/clinic.rst", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=343, ...}) = 0
ioctl(3, TCGETS, 0x7ffe52525930) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(3, 0, SEEK_CUR) = 0
lseek(3, 0, SEEK_CUR) = 0
fstat(3, {st_mode=S_IFREG|0644, st_size=343, ...}) = 0
read(3, ":orphan:\n\n.. This page is retain"..., 344) = 343
read(3, "", 1) = 0
close(3) = 0
```
After small file:
```
openat(AT_FDCWD, "cpython/Doc/howto/clinic.rst", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=343, ...}) = 0
ioctl(3, TCGETS, 0x7ffdfac04b40) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(3, 0, SEEK_CUR) = 0
read(3, ":orphan:\n\n.. This page is retain"..., 344) = 343
read(3, "", 1) = 0
close(3) = 0
```
Before large file:
```
openat(AT_FDCWD, "cpython/Doc/c-api/typeobj.rst", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=133104, ...}) = 0
ioctl(3, TCGETS, 0x7ffe52525930) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(3, 0, SEEK_CUR) = 0
lseek(3, 0, SEEK_CUR) = 0
fstat(3, {st_mode=S_IFREG|0644, st_size=133104, ...}) = 0
read(3, ".. highlight:: c\n\n.. _type-struc"..., 133105) = 133104
read(3, "", 1) = 0
close(3) = 0
```
After large file:
```
openat(AT_FDCWD, "cpython/Doc/c-api/typeobj.rst", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=133104, ...}) = 0
ioctl(3, TCGETS, 0x7ffdfac04b40) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(3, 0, SEEK_CUR) = 0
lseek(3, 0, SEEK_CUR) = 0
read(3, ".. highlight:: c\n\n.. _type-struc"..., 133105) = 133104
read(3, "", 1) = 0
close(3) = 0
```
Co-authored-by: Shantanu <12621235+hauntsaninja@users.noreply.github.com>
Co-authored-by: Erlend E. Aasland <erlend.aasland@protonmail.com>
Co-authored-by: Victor Stinner <vstinner@python.org>
As noted in gh-117983, the import of importlib.util can be triggered at
interpreter startup under some circumstances, so having it import
threading makes threading a potentially obligatory load.
Lazy loading is not used in the stdlib, so this removes an unnecessary
load for the majority of users and slightly increases the cost of the
first lazily loaded module.
An obligatory threading load breaks gevent, which monkeypatches the
stdlib. Although that usage is unsupported, there doesn't seem to be an
offsetting benefit to breaking its use case.
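For context, the lazy-loading path in question is the lazy-import recipe documented for importlib.util.LazyLoader; a plain `import importlib.util` (as in the benchmarks below) never exercises it, so deferring the threading import only costs something for code like this:
```python3
import importlib.util
import sys

def lazy_import(name):
    """Lazy-import recipe from the importlib documentation."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module

# The json module body only runs when an attribute is first accessed.
json = lazy_import("json")
print(json.dumps({"lazy": True}))
```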
For reference, here are benchmarks for the current main branch:
```
❯ hyperfine -w 8 './python -c "import importlib.util"'
Benchmark 1: ./python -c "import importlib.util"
Time (mean ± σ): 9.7 ms ± 0.7 ms [User: 7.7 ms, System: 1.8 ms]
Range (min … max): 8.4 ms … 13.1 ms 313 runs
```
And with this patch:
```
❯ hyperfine -w 8 './python -c "import importlib.util"'
Benchmark 1: ./python -c "import importlib.util"
Time (mean ± σ): 8.4 ms ± 0.7 ms [User: 6.8 ms, System: 1.4 ms]
Range (min … max): 7.2 ms … 11.7 ms 352 runs
```
Compare to a bare interpreter startup:
```
❯ hyperfine -w 8 './python -c pass'
Benchmark 1: ./python -c pass
Time (mean ± σ): 7.6 ms ± 0.6 ms [User: 5.9 ms, System: 1.6 ms]
Range (min … max): 6.7 ms … 11.3 ms 390 runs
```
Measured against the bare-startup baseline, this cuts the incremental cost of
importing importlib.util by more than half (roughly 2.1 ms down to 0.8 ms).
This amends 6988ff02a5: the memory allocation for
stginfo->ffi_type_pointer.elements in PyCSimpleType_init() should be
more generic (perhaps someday fmt->pffi_type->elements will not be a
two-element array).
It should finally resolve #61103.
Co-authored-by: Victor Stinner <vstinner@python.org>
Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>
Add more offsets to _Py_DebugOffsets
We add a few more offsets that are required by some out-of-process
tools, such as [Austin](https://github.com/p403n1x87/austin).
Check for `ERROR_INVALID_PARAMETER` when calling `_winapi.CopyFile2()` and
raise `UnsupportedOperation`. In `Path.copy()`, handle this exception and
fall back to the `PathBase.copy()` implementation.
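A minimal sketch of the fallback pattern described here, assuming Python 3.13+ where `pathlib.UnsupportedOperation` exists; the helper names are hypothetical stand-ins for the pathlib internals, not the actual `Path.copy()` / `PathBase.copy()` code:
```python3
import pathlib

def copy_with_fallback(source, target):
    """Try the platform fast path first, then fall back to a generic copy."""
    try:
        # Hypothetical stand-in for the _winapi.CopyFile2() fast path, which
        # may raise UnsupportedOperation (e.g. on ERROR_INVALID_PARAMETER).
        _fast_platform_copy(source, target)
    except pathlib.UnsupportedOperation:
        # Hypothetical stand-in for the generic PathBase.copy()-style fallback.
        _generic_copy(source, target)

def _fast_platform_copy(source, target):
    # Pretend the fast path is unavailable so the fallback always runs here.
    raise pathlib.UnsupportedOperation("fast copy not supported")

def _generic_copy(source, target):
    # Plain chunked read/write copy.
    with open(source, "rb") as src, open(target, "wb") as dst:
        while chunk := src.read(64 * 1024):
            dst.write(chunk)
```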
* Move get_signal_name() from test.libregrtest to test.support (illustrated below).
* Use get_signal_name() in support.script_helper.
* support.script_helper now decodes stdout and stderr from UTF-8,
  instead of ASCII, when a command fails.
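For illustration, a hypothetical sketch of what such a helper can look like (not the actual test.support code), mapping a negative child return code back to a signal name:
```python3
import signal

def get_signal_name(exitcode):
    """Map a negative POSIX exit code (-<signum>) to a name like "SIGSEGV"."""
    if exitcode < 0:
        try:
            return signal.Signals(-exitcode).name
        except ValueError:
            pass
    return None

# A child killed by a segmentation fault reports returncode -11 on Linux.
print(get_signal_name(-11))  # SIGSEGV
```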
Refactor the fast Unicode hash check into `_PyObject_HashFast` and use relaxed
atomic loads in the free-threaded build.
After this change, TSAN no longer reports data races for this function.