* Lib/test/certdata: do not hardcode reference cert data into tests
The script used to simply print the reference data and ask
users to copy it into the test suites by hand. This is
easily improved by writing the data into files and
having the test cases load those files.
* make_ssl_certs: make it possible to pass in expiration dates from command line
Note that in this commit the defaults are the same as they were,
so if nothing is specified the script works as before.
---------
Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
* Detect source file encoding.
* Use the "replace" error handler even for UTF-8 (default) encoding.
* Remove the BOM.
* Fix detection of too long lines if they contain NUL.
* Return the head rather than the tail for truncated long lines.
This allows using a positional argument with nargs='*' and without a default
in a mutually exclusive group, and improves the error message about required
arguments.
Arguments with the value identical to the default value (e.g. booleans,
small integers, empty or 1-character strings) are no longer considered
"not present".
- If setting `_fields_` fails, e.g. with AttributeError, don't set the attribute in `__dict__`
- Document the “finalization” behaviour
- Beef up tests: add `getattr`, test Union as well as Structure
- Put common functionality in a common function
Co-authored-by: Peter Bierma <zintensitydev@gmail.com>
Co-authored-by: Russell Keith-Magee <russell@keith-magee.com>
Co-authored-by: T. Wouters <thomas@python.org>
Co-authored-by: Adam Turner <9087854+AA-Turner@users.noreply.github.com>
The code changes for warning related to `__package__` landed in Python 3.12. `__cached__` doesn't have any changes as it isn't used but only set by the import system.
When a long command was used with max_help_position greater than the
length of the command, the command's help incorrectly started
on a new line.
Co-authored-by: Pavel Ditenbir <pavel.ditenbir@gmail.com>
https://github.com/python/cpython/issues/88496 replaced text.update with text.update_idletasks in colorizer.py and outwin.py to fix test failures on macOS. While theoretically correct, the result was Shell freezing when receiving continuous short strings to print. Test: `while 1: 1`.
The guess is that there is no idle time in which to do the screen update. Reverting the change in one of the files,
outwin, fixes the issue. Colorizer runs every 1/20 second and seems to work fine.
When running test-outwin on macOS, alias 'update'
to 'update_idletasks' on the text used for testing.
Add a helper function that checks whether the test suite is running
inside a systemd-nspawn container, and skip the few tests failing
with `--suppress-sync=true` in that case. The tests are failing because
`--suppress-sync=true` stubs out `fsync()`, `fdatasync()` and `msync()`
calls, and therefore they always return success without checking for
invalid arguments.
Call `os.open(__file__, os.O_RDONLY | os.O_SYNC)` and check the errno to
detect whether `--suppress-sync=true` is actually used, and skip
the tests only in that scenario.
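A minimal sketch of that detection idea, assuming the helper name and the EINVAL errno (the actual test helper may differ):
```python
import errno
import os

def running_with_suppressed_sync():
    # systemd-nspawn --suppress-sync=true also filters O_SYNC opens, so an
    # EINVAL here suggests fsync()/fdatasync()/msync() are stubbed out and
    # the affected tests should be skipped.
    try:
        fd = os.open(__file__, os.O_RDONLY | os.O_SYNC)
    except OSError as exc:
        return exc.errno == errno.EINVAL
    else:
        os.close(fd)
        return False
```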
Debian (and derivatives) provide a /usr/bin/pager binary, managed by the
alternatives system, that always points to an available pager utility.
Allow _pyrepl to use it, to follow system policy.
This is a very trivial change, from a patch that Debian has been
carrying since the 2.7 era. It seems appropriate to upstream.
https://bugs.debian.org/799555
Buildbot failure on Windows 10 with MSC v.1916 64 bit (AMD64):
FAIL: testFmod (test.test_math.MathTests.testFmod)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\buildarea\3.x.bolen-windows10\build\Lib\test\test_math.py", line 605, in testFmod
self.ftest('fmod(-10, 1)', math.fmod(-10, 1), -0.0)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\buildarea\3.x.bolen-windows10\build\Lib\test\test_math.py", line 258, in ftest
self.fail("{}: {}".format(name, failure))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: fmod(-10, 1): expected -0.0, got 0.0 (zero has wrong sign)
Here Windows loses the sign of the result; if y is nonzero, the result
should have the same sign as x.
This amends commit 28aea5d07d.
Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
pyrepl: Support Del, PgUp, and PgDn on TERM=vt100
From Fedora's /etc/inputrc:
"\e[5~": history-search-backward
"\e[6~": history-search-forward
"\e[3~": delete-char
Fixes https://github.com/python/cpython/issues/124027
Use a `_PyStackRef` and defer the reference to `f_executable` when
possible. This avoids some reference count contention in the common case
of executing the same code object from multiple threads concurrently in
the free-threaded build.
* gh-116608: Apply style and compatibility changes from importlib_metadata.
* gh-121735: Ensure module-adjacent resources are loadable from a zipfile.
* gh-121735: Allow all modules to be processed by the ZipReader.
* Add blurb
* Remove update-zips script, unneeded.
* Remove unnecessary references to removed static fixtures.
* Remove zipdata fixtures, unused.
POSIX allows errno to be negative.
Even though all currently supported platforms have non-negative errno,
relying on a quirk like that would make Python less portable.
* Raise PicklingError instead of UnicodeEncodeError, ValueError
and AttributeError in both implementations.
* Chain the original exception to the pickle-specific one as __context__.
* Include the error message of ImportError and some AttributeError in
the PicklingError error message.
* Unify error messages between Python and C implementations.
* Refer to documented __reduce__ and __newobj__ callables instead of
internal methods (e.g. save_reduce()) or pickle opcodes (e.g. NEWOBJ).
* Include more details in error messages (what expected, what got).
* Avoid including a potentially long repr of an arbitrary object in
error messages.
This switches the main pyrepl event loop to always be non-blocking so that it
can listen to incoming interruptions from other threads.
This also resolves invalid display of exceptions from other threads
(gh-123178).
This also fixes freezes with pasting and an active input hook.
Improve import time of `socket` by writing `socket.errorTab`
as a constant and lazily importing modules.
Co-authored-by: Pieter Eendebak <pieter.eendebak@gmail.com>
Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>
Co-authored-by: Gregory P. Smith <greg@krypto.org>
Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>
Co-authored-by: Shantanu <12621235+hauntsaninja@users.noreply.github.com>
Increases the multiprocessing connection buffer size from 8k to 64k for efficiency, without overallocating.
Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>
Co-authored-by: Victor Stinner <vstinner@python.org>
Add PyConfig_Get(), PyConfig_GetInt(), PyConfig_Set() and
PyConfig_Names() functions to get and set the current runtime Python
configuration.
Add visibility and "sys spec" to config and preconfig specifications.
_PyConfig_AsDict() now converts PyConfig.xoptions as a dictionary.
Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>
* Remove backtracking when parsing tarfile headers
* Rewrite PAX header parsing to be stricter
* Optimize parsing of GNU extended sparse headers v0.0
Co-authored-by: Kirill Podoprigora <kirill.bast9@mail.ru>
Co-authored-by: Gregory P. Smith <greg@krypto.org>
* urljoin() with relative reference "?" sets empty query and removes fragment.
* Preserve empty components (authority, params, query, fragment) in urljoin().
* Preserve empty components (authority, params, query) in urldefrag().
Also refactor the code and get rid of double _coerce_args() and
_coerce_result() calls in urljoin(), urldefrag(), urlparse() and
urlunparse().
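An illustrative expectation for the first bullet above (a sketch of the intended behaviour, not the actual test data):
```python
from urllib.parse import urljoin

# A bare "?" reference keeps the base path, sets an empty query,
# and drops the fragment.
print(urljoin("http://example.com/a/b?x=1#frag", "?"))
# -> http://example.com/a/b?
```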
When checking if the registering browser is the "OS preferred browser", do not use a substring search - that makes no sense: one can have a preferred browser that looks like a super-string of a known browser, e.g. "firefox-nightly" vs "firefox".
https://github.com/python/cpython/issues/108172 explains in more detail, and lays out a potential better future enhancement for this case of just using xdg-open. We'll go with this for now.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
See https://github.com/python/cpython/issues/121313 for analysis, but this greatly reduces memory overallocation and overhead when multiprocessing is sending non-small data over its pipes between processes.
The `zip_next` function uses a common optimization technique for methods
that generate tuples. The iterator maintains an internal reference to
the returned tuple. When the method is called again, it checks if the
internal tuple's reference count is 1. If so, the tuple can be reused.
However, this approach is not safe under the free-threading build:
after checking the reference count, another thread may perform the same
check and also reuse the tuple. This can result in a double decref on
the items of the replaced tuple and a double incref (memory leak) on
the items of the tuple being set.
This adds a function, `_PyObject_IsUniquelyReferenced` that
encapsulates the stricter logic necessary for the free-threaded build:
the internal tuple must be owned by the current thread, have a local
refcount of one, and a shared refcount of zero.
* Make `weakref.WeakSet` safe against concurrent mutations while it is being iterated.
`_IterationGuard` is no longer used for `WeakSet`; iteration now copies the underlying set (an atomic operation), so the set can be modified by other threads while it is being iterated.
Per feedback from Paul Moore on GH-123158, it's better to defer making
`Path.delete()` public than ship it with under-designed error handling
capabilities.
We leave a remnant `_delete()` method, which is used by `move()`. Any
functionality not needed by `move()` is deleted.
These two methods accept an *existing* directory path, onto which we join
the source path's base name to form the final target path.
A possible alternative implementation is to check for directories in
`copy()` and `move()` and adjust the target path, which is done in several
`shutil` functions. This behaviour is helpful in a shell context, but
less so in a stored program that explicitly specifies destinations. For
example, a user that calls `Path('foo.py').copy('bar.py')` might not
imagine that `bar.py/foo.py` would be created, but under the alternative
implementation this will happen if `bar.py` is an existing directory.
When display lines above the cursor come from the cache, the first line
to not come from the cache may be a wrapped line, starting half way
through a logical line in the buffer. Detect and handle this case to
avoid accidentally drawing a stray prompt in the middle of a logical
line.
Add a `Path.move()` method that moves a file or directory tree, and returns a new `Path` instance pointing to the target.
This method is similar to `shutil.move()`, except that it doesn't accept a *copy_function* argument, and it doesn't check whether the destination is an existing directory.
* pass the original string error message from the ftplib error to URLError()
* Update request.py
Change error string for ftp error to be consistent with other errors reported for ftp
* Add NEWS entry for change to urllib.request for ftp errors.
* Track the change in the ftp error message in the test.
Check that the current default heap is initialized in
`_mi_os_get_aligned_hint` and `mi_os_claim_huge_pages`.
The mimalloc function `_mi_os_get_aligned_hint` assumes that there is an
initialized default heap. This is true for our main thread, but not for
background threads. The problematic code path is usually called during
initialization (i.e., `Py_Initialize`), but it may also be called if the
program allocates large amounts of memory in total.
The crash only affected the free-threaded build.
`Path.read_bytes()` is used to read a whole file. Buffering /
BufferedIO is focused on making small, possibly interleaved,
reads and writes efficient, which doesn't add value in this case.
On my Mac, running the benchmark:
```python
import pyperf
from pathlib import Path
def read_all(all_paths):
    for p in all_paths:
        p.read_bytes()
def read_file(path_obj):
    path_obj.read_bytes()
all_rst = list(Path("Doc").glob("**/*.rst"))
all_py = list(Path(".").glob("**/*.py"))
assert all_rst, "Should have found rst files"
assert all_py, "Should have found python source files"
runner = pyperf.Runner()
runner.bench_func("read_file_small", read_file, Path("Doc/howto/clinic.rst"))
runner.bench_func("read_file_large", read_file, Path("Doc/c-api/typeobj.rst"))
```
before:
```
.....................
read_file_small: Mean +- std dev: 6.80 us +- 0.07 us
.....................
read_file_large: Mean +- std dev: 10.8 us +- 0.2 us
```
after:
```
.....................
read_file_small: Mean +- std dev: 5.67 us +- 0.05 us
.....................
read_file_large: Mean +- std dev: 9.77 us +- 0.52 us
```
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>
This replaces the existing hashlib Blake2 module with a single implementation that uses HACL\*'s Blake2b/Blake2s implementations. We added support for all the modes exposed by the Python API, including tree hashing, leaf nodes, and so on. We ported and merged all of these changes upstream in HACL\*, added test vectors based on Python's existing implementation, and exposed everything needed for hashlib.
This was joint work done with @R1kM.
See the PR for much discussion and benchmarking details. TL;DR: on many systems, 8-50% faster (!) than `libb2`; on some systems it appeared 10-20% slower than `libb2`.
As of 529a160 (gh-118204), building with HAVE_DYNAMIC_LOADING stopped working. This is a minimal fix just to get builds working again. There are actually a number of long-standing deficiencies with HAVE_DYNAMIC_LOADING builds that need to be resolved separately.
Co-authored-by: Adam Turner <9087854+AA-Turner@users.noreply.github.com>
Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>
Co-authored-by: Alyssa Coghlan <ncoghlan@gmail.com>
Rename `pathlib.Path.copy()` to `_copy_file()` (i.e. make it private.)
Rename `pathlib.Path.copytree()` to `copy()`, and add support for copying
non-directories. This simplifies the interface for users, and nicely
complements the upcoming `move()` and `delete()` methods (which will also
accept any type of file.)
Co-authored-by: Adam Turner <9087854+AA-Turner@users.noreply.github.com>
This reverts commit dcc028d924 and
commit 6c54e5d721.
Keep the deprecated logging warn() method in Python 3.13.
Co-authored-by: Gregory P. Smith <greg@krypto.org>
Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>
We were not properly accounting for interpreter memory leaks at
shutdown and had two sources of leaks:
* Objects that use deferred reference counting and were reachable via
static types outlive the final GC. We now disable deferred reference
counting on all objects if we are calling the GC due to interpreter
shutdown.
* `_PyMem_FreeDelayed` did not properly check for interpreter shutdown
so we had some memory blocks that were enqueued to be freed, but
never actually freed.
* `_PyType_FinalizeIdPool` wasn't called at interpreter shutdown.
Return -1 and set an exception on error; return 0 if the iterator is
exhausted, and return 1 if the next item was fetched successfully.
Prefer this API to PyIter_Next(), which requires the caller to use
PyErr_Occurred() to differentiate between iterator exhaustion and errors.
Co-authored-by: Irit Katriel <iritkatriel@yahoo.com>
Fix _PyArg_UnpackKeywordsWithVararg for the case when an argument for a
positional-or-keyword parameter is passed by keyword.
There was only one such case in the stdlib -- the TypeVar constructor.
Frames of methods in the code and codeop modules were shown with a non-default
sys.excepthook.
Save correct tracebacks in sys.last_traceback and update __traceback__
attribute of sys.last_value and sys.last_exc.
Rename `pathlib.Path.rmtree()` to `delete()`, and add support for deleting
non-directories. This simplifies the interface for users, and nicely
complements the upcoming `move()` and `copy()` methods (which will also
accept any type of file.)
Fix PyEval_GetLocals() to avoid SystemError ("bad argument to
internal function"). Don't redefine the 'ret' variable in the if
block.
Add a unit test for PyEval_GetLocals().
The free-threaded build partially stores heap type reference counts in a
distributed manner in per-thread arrays. This avoids reference count
contention when creating or destroying instances.
Co-authored-by: Ken Jin <kenjin@python.org>
Modifies the handling of stdout/stderr redirection on Android to accommodate
the rate and buffer size limits imposed by Android's logging infrastructure.
Match statements used in tooling (Tools/cases_generator/*.py, and `Tools/jit/*.py` in 3.13+) require a more recent Python.
Co-authored-by: Erlend E. Aasland <erlend.aasland@protonmail.com>
Co-authored-by: Gregory P. Smith <greg@krypto.org>
As per C11 DR#471, ctanh (0 + i NaN) and ctanh (0 + i Inf) should return
0 + i NaN (with "invalid" exception in the second case). This has
corresponding implications for ctan(z), as its errors and special cases
are handled as if the operation is implemented by -i*ctanh(i*z).
This patch fixes cmath's code to do the same.
Glibc patch: https://sourceware.org/git/?p=glibc.git;a=commitdiff;h=d15e83c5f5231d971472b5ffc9219d54056ca0f1
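As a hedged illustration of the special cases described above (expected results only, not actual test output):
```python
import cmath

nan = float("nan")
inf = float("inf")
# ctan(z) is handled via -i*ctanh(i*z), so these cover both functions.
print(cmath.tanh(complex(0.0, nan)))  # expected: 0 + NaN*j
print(cmath.tanh(complex(0.0, inf)))  # expected: 0 + NaN*j ("invalid" raised at C level)
```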
As per C11 DR#471 (adjusted resolution accepted for C17), cacosh (0 +
iNaN) should return NaN ± i pi/2, not NaN + iNaN. This patch
fixes cmath's code to do the same.
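Again as a hedged illustration of the value described above:
```python
import cmath

nan = float("nan")
print(cmath.acosh(complex(0.0, nan)))  # expected: NaN ± pi/2*j, e.g. nan+1.5707963267948966j
```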
The `PyStructSequence` destructor would crash if it was deallocated after
its type's dictionary was cleared by the GC, because it couldn't compute
the "real size" of the instance. This could occur with relatively
straightforward code in the free-threaded build or with a reference
cycle involving the type in the default build, due to differing orders
in which `tp_clear()` was called.
Account for the non-sequence fields in `tp_basicsize` and use that,
along with `Py_SIZE()`, to compute the "real" size of a
`PyStructSequence` in the dealloc function. This avoids the accesses to
the type's dictionary during dealloc, which were unsafe.
On recent versions of macOS (sometime between Catalina and Sonoma 14.5), the default Hovertip foreground color changed from black to white, thereby matching the background. This might be a matter of matching the white foreground of the dark-mode text. The unreadable result is shown here (#120083 (comment)).
The foreground and background colors were made parameters so we can pass different colors for future additional hovertips in IDLE.
---------
Co-authored-by: Terry Jan Reedy <tjreedy@udel.edu>
This flag was added as an escape hatch in gh-91401 and backported to
Python 3.10. The flag broke at some point between its addition and now.
As there are currently no publicly known environments that require this,
remove it rather than work on fixing it.
This leaves the flag in the subprocess module to not break code which
may have used / checked the flag itself.
discussion: https://discuss.python.org/t/subprocess-use-vfork-escape-hatch-broken-fix-or-remove/56915/2
Currently, idle-dev@python.org and the idle-dev mailing list
serve to collect spam (90+%). Change About IDLE to direct
discussions to discuss.python.org. Users are already
doing so.
## Encode header parts that contain newlines
Per RFC 2047:
> [...] these encoding schemes allow the
> encoding of arbitrary octet values, mail readers that implement this
> decoding should also ensure that display of the decoded data on the
> recipient's terminal will not cause unwanted side-effects
It seems that the "encoded-word" scheme is a valid way to include
a newline character in a header value, just like we already allow
undecodable bytes or control characters.
They do need to be properly quoted when serialized to text, though.
## Verify that email headers are well-formed
This should fail for custom fold() implementations that aren't careful
about newlines.
Co-authored-by: Bas Bloemsaat <bas@bloemsaat.org>
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
* Authenticate socket connection for `socket.socketpair()` fallback when the platform does not have a native `socketpair` C API. We authenticate in-process using `getsockname` and `getpeername` (thanks to Nathaniel J Smith for that suggestion).
Co-authored-by: Gregory P. Smith <greg@krypto.org>
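A minimal sketch of the authentication idea described above (illustrative names, not the stdlib implementation):
```python
import socket

def authenticated_socketpair():
    # Emulate socketpair() over loopback, then verify in-process that the two
    # ends are really connected to each other via getsockname()/getpeername().
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as lsock:
        lsock.bind(("127.0.0.1", 0))
        lsock.listen(1)
        csock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        csock.connect(lsock.getsockname())
        ssock, _ = lsock.accept()
    if (csock.getsockname() != ssock.getpeername()
            or ssock.getsockname() != csock.getpeername()):
        csock.close()
        ssock.close()
        raise OSError("socketpair() fallback authentication failed")
    return ssock, csock
```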
* Use compensated summation for complex sums with floating-point items.
This amends #121176.
* sum() specializations for floats and complexes now use
PyLong_AsDouble() instead of PyLong_AsLongAndOverflow() and
compensated summation as well.
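For context, a minimal sketch of compensated (Kahan-style) summation in Python; the C-level specialization in sum() differs in detail:
```python
def compensated_sum(values):
    # Track the low-order bits lost to rounding in `c` and fold them back
    # into the running total on the next iteration.
    total = 0.0
    c = 0.0
    for x in values:
        y = x - c
        t = total + y
        c = (t - total) - y
        total = t
    return total

print(sum([0.1] * 10))              # 0.9999999999999999 with naive accumulation
print(compensated_sum([0.1] * 10))  # noticeably closer to 1.0
```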
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>
Co-authored-by: Tomas R <tomas.roun8@gmail.com>
Co-authored-by: Scott Odle <scott@sjodle.com>
Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>
Co-authored-by: Petr Viktorin <encukou@gmail.com>
Serializing objects with complex __qualname__ (such as unbound methods and
nested classes) by name no longer involves serializing parent objects by value
in pickle protocols < 4.
Adds a --with-app-store-compliance configuration option that patches out code known to be an issue with App Store review processes. This option is applied automatically on iOS, and optionally on macOS.
Add a `Path.rmtree()` method that removes an entire directory tree, like
`shutil.rmtree()`. The signature of the optional *on_error* argument
matches the `Path.walk()` argument of the same name, but differs from the
*onexc* and *onerror* arguments to `shutil.rmtree()`. Consistency within
pathlib is probably more important.
In the private pathlib ABCs, we add an implementation based on `walk()`.
Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>
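A hedged usage sketch of the method described above (the path is just an example; *on_error* takes a single OSError argument, matching `Path.walk()`):
```python
from pathlib import Path

def log_and_continue(exc: OSError) -> None:
    # Called once per failed removal; keep deleting the rest of the tree.
    print("could not remove:", exc.filename, exc)

Path("build/tmp").rmtree(on_error=log_and_continue)
```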
Problem occurred when attribute xyz could not be pickled.
Since this is not trivial to selectively fix, block all
attributes (other than 'width'). IDLE does not access them
and they are private implementation details.
* Switch PyUnicode_InternInPlace to _PyUnicode_InternMortal, clarify docs
* Document immortality in some functions that take `const char *`
This is PyUnicode_InternFromString;
PyDict_SetItemString, PyObject_SetAttrString;
PyObject_DelAttrString; PyUnicode_InternFromString;
and the PyModule_Add convenience functions.
Always point out a non-immortalizing alternative.
* Don't immortalize user-provided attr names in _ctypes
We should maintain the invariant that a zero `ob_tid` implies the
refcount fields are merged.
* Move the assignment in `_Py_MergeZeroLocalRefcount` to immediately
before the refcount merge.
* Update `_PyTrash_thread_destroy_chain` to set `ob_ref_shared` to
`_Py_REF_MERGED` when setting `ob_tid` to zero.
Also check this invariant with assertions in the GC in debug builds.
That uncovered a bug when running out of memory during GC.
They are alternate constructors which only accept numbers
(including objects with special methods __float__, __complex__
and __index__), but not strings.
It is our general practice to make new optional parameters keyword-only,
even if the existing parameters are all positional-or-keyword. Passing
this parameter as positional would look confusing and could be error-prone
if additional parameters are added in the future.
Performance improvement to `float.fromhex`: use a lookup table
for computing the hexadecimal value of a character, in place of the
previous switch-case construct. Patch by Bruno Lima.
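A Python sketch of the lookup-table idea (the actual change is in C): precompute each character's hex value once instead of branching per character.
```python
# -1 marks characters that are not hex digits.
_HEX_VALUE = [-1] * 256
for _i, _ch in enumerate("0123456789abcdef"):
    _HEX_VALUE[ord(_ch)] = _i
    _HEX_VALUE[ord(_ch.upper())] = _i

def hex_digit_value(ch: str) -> int:
    code = ord(ch)
    return _HEX_VALUE[code] if code < 256 else -1
```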
On POSIX systems, excluding macOS framework installs, the lib directory
for the free-threaded build now includes a "t" suffix to avoid conflicts
with a co-located default build installation.
When builtin static types are initialized for a subinterpreter, various "tp" slots have already been inherited (for the main interpreter). This was interfering with the logic in add_operators() (in Objects/typeobject.c), causing a wrapper to get created when it shouldn't. This change fixes that by preserving the original data from the static type struct and checking that.
The `_PySeqLock_EndRead` function needs an acquire fence to ensure that
the load of the sequence happens after any loads within the read side
critical section. The missing fence can trigger bugs on macOS arm64.
Additionally, we need a release fence in `_PySeqLock_LockWrite` to
ensure that the sequence update is visible before any modifications to
the cache entry.
Make error message for index() methods consistent
Remove the repr of the searched value (which can be arbitrary large)
from ValueError messages for list.index(), range.index(), deque.index(),
deque.remove() and ShareableList.index(). Make the error messages
consistent with error messages for other index() and remove()
methods.
This reduces the system call count of a simple program[0] that reads all
the `.rst` files in Doc by over 10% (5706 -> 4734 system calls on my
Linux system, 5813 -> 4875 on my macOS).
This reduces the number of `fstat()` calls always and seek calls most of
the time. Stat was always called twice, once at open (to error early on
directories), and a second time to get the size of the file to be able
to read the whole file in one read. Now the size is cached with the
first call.
The code keeps an optimization that if the user had previously read a
lot of data, the current position is subtracted from the number of bytes
to read. That is somewhat expensive, so it is only done on larger files;
otherwise just try to read the extra bytes and resize the PyBytes as
needed.
I built a little test program to validate the behavior + assumptions
around relative costs and then ran it under `strace` to get a log of the
system calls. Full samples below[1].
After the changes, this is everything in one `filename.read_text()`:
```
openat(AT_FDCWD, "cpython/Doc/howto/clinic.rst", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=343, ...}) = 0
ioctl(3, TCGETS, 0x7ffdfac04b40) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(3, 0, SEEK_CUR) = 0
read(3, ":orphan:\n\n.. This page is retain"..., 344) = 343
read(3, "", 1) = 0
close(3) = 0
```
This does make some tradeoffs:
1. If the file size changes between open() and readall(), this will
still get all the data but might have more read calls.
2. I experimented with avoiding the stat + cached result for small files
in general, but on my dev workstation at least that tended to reduce
performance compared to using the fstat().
[0]
```python3
from pathlib import Path
nlines = []
for filename in Path("cpython/Doc").glob("**/*.rst"):
    nlines.append(len(filename.read_text()))
```
[1]
Before small file:
```
openat(AT_FDCWD, "cpython/Doc/howto/clinic.rst", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=343, ...}) = 0
ioctl(3, TCGETS, 0x7ffe52525930) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(3, 0, SEEK_CUR) = 0
lseek(3, 0, SEEK_CUR) = 0
fstat(3, {st_mode=S_IFREG|0644, st_size=343, ...}) = 0
read(3, ":orphan:\n\n.. This page is retain"..., 344) = 343
read(3, "", 1) = 0
close(3) = 0
```
After small file:
```
openat(AT_FDCWD, "cpython/Doc/howto/clinic.rst", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=343, ...}) = 0
ioctl(3, TCGETS, 0x7ffdfac04b40) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(3, 0, SEEK_CUR) = 0
read(3, ":orphan:\n\n.. This page is retain"..., 344) = 343
read(3, "", 1) = 0
close(3) = 0
```
Before large file:
```
openat(AT_FDCWD, "cpython/Doc/c-api/typeobj.rst", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=133104, ...}) = 0
ioctl(3, TCGETS, 0x7ffe52525930) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(3, 0, SEEK_CUR) = 0
lseek(3, 0, SEEK_CUR) = 0
fstat(3, {st_mode=S_IFREG|0644, st_size=133104, ...}) = 0
read(3, ".. highlight:: c\n\n.. _type-struc"..., 133105) = 133104
read(3, "", 1) = 0
close(3) = 0
```
After large file:
```
openat(AT_FDCWD, "cpython/Doc/c-api/typeobj.rst", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=133104, ...}) = 0
ioctl(3, TCGETS, 0x7ffdfac04b40) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(3, 0, SEEK_CUR) = 0
lseek(3, 0, SEEK_CUR) = 0
read(3, ".. highlight:: c\n\n.. _type-struc"..., 133105) = 133104
read(3, "", 1) = 0
close(3) = 0
```
Co-authored-by: Shantanu <12621235+hauntsaninja@users.noreply.github.com>
Co-authored-by: Erlend E. Aasland <erlend.aasland@protonmail.com>
Co-authored-by: Victor Stinner <vstinner@python.org>
As noted in gh-117983, the import of importlib.util can be triggered at
interpreter startup under some circumstances, so adding threading makes
it a potentially obligatory load.
Lazy loading is not used in the stdlib, so this removes an unnecessary
load for the majority of users and slightly increases the cost of the
first lazily loaded module.
An obligatory threading load breaks gevent, which monkeypatches the
stdlib. Although unsupported, there doesn't seem to be an offsetting
benefit to breaking their use case.
For reference, here are benchmarks for the current main branch:
```
❯ hyperfine -w 8 './python -c "import importlib.util"'
Benchmark 1: ./python -c "import importlib.util"
Time (mean ± σ): 9.7 ms ± 0.7 ms [User: 7.7 ms, System: 1.8 ms]
Range (min … max): 8.4 ms … 13.1 ms 313 runs
```
And with this patch:
```
❯ hyperfine -w 8 './python -c "import importlib.util"'
Benchmark 1: ./python -c "import importlib.util"
Time (mean ± σ): 8.4 ms ± 0.7 ms [User: 6.8 ms, System: 1.4 ms]
Range (min … max): 7.2 ms … 11.7 ms 352 runs
```
Compare to:
```
❯ hyperfine -w 8 './python -c pass'
Benchmark 1: ./python -c pass
Time (mean ± σ): 7.6 ms ± 0.6 ms [User: 5.9 ms, System: 1.6 ms]
Range (min … max): 6.7 ms … 11.3 ms 390 runs
```
This roughly halves the import time of importlib.util.
This amends 6988ff02a5: memory allocation for
stginfo->ffi_type_pointer.elements in PyCSimpleType_init() should be
more generic (perhaps someday fmt->pffi_type->elements will not be a
two-element array).
It should finally resolve #61103.
Co-authored-by: Victor Stinner <vstinner@python.org>
Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>
When creating the JUnit XML file, regrtest now escapes characters
which are invalid in XML, such as the chr(27) control character used
in ANSI escape sequences.
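An illustrative approach to that escaping, assuming a simple substitution (not necessarily regrtest's exact code):
```python
import re

# C0 control characters other than tab, LF and CR are invalid in XML 1.0.
_INVALID_XML_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")

def sanitize_for_xml(text: str) -> str:
    # Replace each invalid character with a visible \xNN escape.
    return _INVALID_XML_CHARS.sub(lambda m: f"\\x{ord(m.group()):02x}", text)

print(sanitize_for_xml("\x1b[32mPASS\x1b[0m"))  # chr(27) becomes \x1b
```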
1. Use pkg-config to check for ncursesw/panelw. If that fails, use
pkg-config to check for ncurses/panel.
2. Regardless of pkg-config output, search for curses/panel headers, so
we're sure we have all defines in pyconfig.h.
3. Regardless of pkg-config output, check if libncurses or libncursesw
contains the 'initscr' symbol; if it does _and_ pkg-config failed
earlier, add the resulting -llib linker option to CURSES_LIBS.
Ditto for 'update_panels' and PANEL_LIBS.
4. Wrap the rest of the checks with WITH_SAVE_ENV and make sure we're
using updated LIBS and CPPFLAGS for those.
Add the PY_CHECK_CURSES convenience macro.
asyncio earlier relied on the subprocess module to send signals to the process. This has some drawbacks, one being that the subprocess module unnecessarily calls waitpid on child processes and hence races with the asyncio implementation, which internally uses child watchers. To mitigate this, asyncio now sends signals directly to the process without going through subprocess on non-Windows systems. On Windows it falls back to subprocess module handling, but Windows has no child watchers, so this issue doesn't exist there at all.
In some cases previously computed as (nan+nanj), we can now
recover meaningful component values in the result; see
e.g. the C11, Annex G.5.2, routine _Cdivd().
* parse_intermixed_args() now raises ArgumentError instead of calling
error() if exit_on_error is false.
* Internal code now always raises ArgumentError instead of calling
error(). It is then caught at the higher level and error() is called if
exit_on_error is true.
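A hedged illustration of the first bullet above (expected behaviour after the change, not actual test code):
```python
import argparse

parser = argparse.ArgumentParser(exit_on_error=False)
parser.add_argument("--count", type=int)
try:
    # With exit_on_error=False the parsing error surfaces as ArgumentError
    # instead of error() printing usage and exiting.
    parser.parse_intermixed_args(["--count", "not-a-number"])
except argparse.ArgumentError as exc:
    print("caught:", exc)
```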
The check for whether the log file is a real file is expensive on NFS
filesystems. This commit reorders the rollover condition checking to
not do the file type check if the expected file size is less than the
rotation threshold.
Co-authored-by: Oleg Iarygin <oleg@arhadthedev.net>