POSIX allows errno to be negative.
Even though all currently supported platforms have non-negative errno,
relying on a quirk like that would make Python less portable.
* Raise PicklingError instead of UnicodeEncodeError, ValueError
and AttributeError in both implementations.
* Chain the original exception to the pickle-specific one as __context__.
* Include the error messages of ImportError and certain AttributeError
  exceptions in the PicklingError message.
* Unify error messages between Python and C implementations.
* Refer to documented __reduce__ and __newobj__ callables instead of
internal methods (e.g. save_reduce()) or pickle opcodes (e.g. NEWOBJ).
* Include more details in error messages (what was expected and what was received).
* Avoid including a potentially long repr of an arbitrary object in
error messages.
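A minimal sketch of what the new chaining looks like from user code (the
chained exception type here is an illustrative assumption):

```python
import pickle

def make_local():
    def local():  # not importable by its qualified name
        pass
    return local

try:
    pickle.dumps(make_local())
except pickle.PicklingError as exc:
    # The original lookup failure is chained as __context__.
    print(type(exc).__name__, "<-", type(exc.__context__).__name__)
```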
This switches the main pyrepl event loop to always be non-blocking so that
it can respond to interruptions coming from other threads.
This also fixes the invalid display of exceptions raised in other threads
(gh-123178), as well as freezes when pasting while an input hook is active.
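For illustration, a hypothetical sketch of the technique (not pyrepl's
actual code): instead of blocking indefinitely on input, the loop polls
with a timeout so work queued by other threads can run between reads.

```python
import selectors
import sys

sel = selectors.DefaultSelector()
sel.register(sys.stdin, selectors.EVENT_READ)

def run_once(handle_input, handle_wakeups):
    # Poll with a timeout instead of blocking forever on stdin.
    if sel.select(timeout=0.1):
        handle_input(sys.stdin.readline())
    # Service interruptions queued by other threads (hypothetical hook).
    handle_wakeups()
```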
Improve the import time of `socket` by writing `socket.errorTab` as a
constant and importing modules lazily.
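A sketch of the module-level lazy-import pattern (PEP 562 `__getattr__`);
the `array` module here is only an illustrative stand-in:

```python
# In the module body: defer a costly import until first attribute access.
def __getattr__(name):
    if name == "array":
        import array
        return array
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```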
Co-authored-by: Pieter Eendebak <pieter.eendebak@gmail.com>
Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>
Co-authored-by: Gregory P. Smith <greg@krypto.org>
Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>
Co-authored-by: Shantanu <12621235+hauntsaninja@users.noreply.github.com>
Increases the multiprocessing connection buffer size from 8k to 64k for efficiency, without overallocating.
Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>
Co-authored-by: Victor Stinner <vstinner@python.org>
Add PyConfig_Get(), PyConfig_GetInt(), PyConfig_Set() and
PyConfig_Names() functions to get and set the current runtime Python
configuration.
Add visibility and "sys spec" to config and preconfig specifications.
_PyConfig_AsDict() now converts PyConfig.xoptions to a dictionary.
Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>
* Remove backtracking when parsing tarfile headers
* Rewrite PAX header parsing to be stricter
* Optimize parsing of GNU extended sparse headers v0.0
Co-authored-by: Kirill Podoprigora <kirill.bast9@mail.ru>
Co-authored-by: Gregory P. Smith <greg@krypto.org>
* urljoin() with relative reference "?" sets empty query and removes fragment.
* Preserve empty components (authority, params, query, fragment) in urljoin().
* Preserve empty components (authority, params, query) in urldefrag().
Also refactor the code to avoid duplicate _coerce_args() and
_coerce_result() calls in urljoin(), urldefrag(), urlparse() and
urlunparse().
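Illustrative expected results after this change (output shapes inferred
from the bullets above):

```python
from urllib.parse import urljoin, urldefrag

# A relative reference of "?" sets an empty query and drops the fragment.
print(urljoin("http://example.com/a/b?query#frag", "?"))
# -> http://example.com/a/b?

# An empty query in the input is now preserved by urldefrag().
print(urldefrag("http://example.com/a?#frag"))
# -> DefragResult(url='http://example.com/a?', fragment='frag')
```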
When checking whether the browser being registered is the "OS preferred browser", do not use a substring search: a user's preferred browser can be a super-string of a known browser's name (e.g. "firefox-nightly" vs "firefox"), so substring matching misidentifies it.
https://github.com/python/cpython/issues/108172 explains this in more detail and lays out a potentially better future enhancement for this case (just using xdg-open). We'll go with this approach for now.
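The failure mode, in miniature (names are made up):

```python
preferred = "firefox-nightly"
known = "firefox"

# Buggy: substring matching wrongly treats firefox-nightly as firefox.
assert known in preferred
# Fixed: compare browser names exactly.
assert preferred != known
```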
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
See https://github.com/python/cpython/issues/121313 for the analysis; in short, this greatly reduces memory overallocation and overhead when multiprocessing sends non-small data over its pipes between processes.
The `zip_next` function uses a common optimization technique for methods
that generate tuples. The iterator maintains an internal reference to
the returned tuple. When the method is called again, it checks if the
internal tuple's reference count is 1. If so, the tuple can be reused.
However, this approach is not safe under the free-threading build:
after checking the reference count, another thread may perform the same
check and also reuse the tuple. This can result in a double decref on
the items of the replaced tuple and a double incref (memory leak) on
the items of the tuple being set.
This adds a function, `_PyObject_IsUniquelyReferenced`, that encapsulates
the stricter logic required under the free-threaded build: the internal
tuple must be owned by the current thread, have a local refcount of one,
and have a shared refcount of zero.
* Make `weakref.WeakSet` safe against concurrent mutation while it is being iterated.
`_IterationGuard` is no longer used for `WeakSet`; iteration now operates on a copy of the underlying set, taken atomically, so other threads can modify the set during iteration.
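An illustrative stress test of the new behaviour (names are made up for
the example):

```python
import threading
import weakref

class Obj:
    pass

objs = [Obj() for _ in range(10_000)]
ws = weakref.WeakSet(objs)

def mutate():
    for o in objs:
        ws.discard(o)

t = threading.Thread(target=mutate)
t.start()
# Safe after this change: iteration walks an atomically-taken copy.
for _ in ws:
    pass
t.join()
```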
Per feedback from Paul Moore on GH-123158, it's better to defer making
`Path.delete()` public than ship it with under-designed error handling
capabilities.
We keep a remnant `_delete()` method, which is used by `move()`. Any
functionality not needed by `move()` is removed.
These two methods accept an *existing* directory path, onto which we join
the source path's base name to form the final target path.
A possible alternative implementation is to check for directories in
`copy()` and `move()` and adjust the target path, which is done in several
`shutil` functions. This behaviour is helpful in a shell context, but
less so in a stored program that explicitly specifies its destinations.
For example, a user who calls `Path('foo.py').copy('bar.py')` might not
expect `bar.py/foo.py` to be created, but under the alternative
implementation this is what happens if `bar.py` is an existing directory.
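For illustration (assuming these are the `copy_into()` and `move_into()`
methods, and that `pkg` is an existing directory):

```python
from pathlib import Path

# The *_into methods join the source's base name onto an existing directory:
Path("foo.py").copy_into(Path("pkg"))       # -> pkg/foo.py

# copy() treats its argument as the exact destination instead:
Path("foo.py").copy(Path("foo_backup.py"))  # -> foo_backup.py
```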
When the display lines above the cursor come from the cache, the first
line not drawn from the cache may be a wrapped line, starting halfway
through a logical line in the buffer. Detect and handle this case to
avoid accidentally drawing a stray prompt in the middle of a logical
line.
Add a `Path.move()` method that moves a file or directory tree, and returns a new `Path` instance pointing to the target.
This method is similar to `shutil.move()`, except that it doesn't accept a *copy_function* argument, and it doesn't check whether the destination is an existing directory.
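Basic usage (illustrative; assumes `archive` already exists):

```python
from pathlib import Path

src = Path("data.txt")
dest = src.move(Path("archive/data.txt"))
print(dest)  # archive/data.txt, a new Path pointing to the target
```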
* Pass the original error message from the ftplib error to URLError().
* Update request.py: change the FTP error string to be consistent with
  other errors reported for FTP.
* Add a NEWS entry for the change to urllib.request for FTP errors.
* Track the change in the FTP error message in the test.
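What this looks like from user code (a sketch; the URL is made up):

```python
from urllib.error import URLError
from urllib.request import urlopen

try:
    urlopen("ftp://example.invalid/missing.txt")
except URLError as exc:
    # exc.reason now carries the original ftplib error message text.
    print(exc.reason)
```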
Check that the current default heap is initialized in
`_mi_os_get_aligned_hint` and `mi_os_claim_huge_pages`.
The mimalloc function `_mi_os_get_aligned_hint` assumes that there is an
initialized default heap. This is true for our main thread, but not for
background threads. The problematic code path is usually called during
initialization (i.e., `Py_Initialize`), but it may also be called if the
program allocates large amounts of memory in total.
The crash only affected the free-threaded build.
`Path.read_bytes()` is used to read a whole file. Buffering / BufferedIO
is aimed at making small, possibly interleaved, reads and writes
efficient, which adds no value when a file is read in one go.
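A sketch of the optimization (assuming the change is simply to open the
file unbuffered):

```python
def read_bytes(path):
    # Open unbuffered: the whole file is read in one call, so a
    # BufferedReader would only add an extra copy and allocation.
    with open(path, "rb", buffering=0) as f:
        return f.read()
```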
On my Mac, running the benchmark:
```python
import pyperf
from pathlib import Path
def read_all(all_paths):
    for p in all_paths:
        p.read_bytes()

def read_file(path_obj):
    path_obj.read_bytes()

all_rst = list(Path("Doc").glob("**/*.rst"))
all_py = list(Path(".").glob("**/*.py"))
assert all_rst, "Should have found rst files"
assert all_py, "Should have found python source files"

runner = pyperf.Runner()
runner.bench_func("read_file_small", read_file, Path("Doc/howto/clinic.rst"))
runner.bench_func("read_file_large", read_file, Path("Doc/c-api/typeobj.rst"))
```
before:
```
.....................
read_file_small: Mean +- std dev: 6.80 us +- 0.07 us
.....................
read_file_large: Mean +- std dev: 10.8 us +- 0.2 us
```
after:
```
.....................
read_file_small: Mean +- std dev: 5.67 us +- 0.05 us
.....................
read_file_large: Mean +- std dev: 9.77 us +- 0.52 us
```
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>