Remove support for supplying additional positional arguments to
`PurePath.relative_to()` and `is_relative_to()`. This has been deprecated
since Python 3.12.
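A minimal illustration of the supported call versus the removed form (the
exact error message may differ):

    from pathlib import PurePosixPath

    p = PurePosixPath("/usr/lib/python3")
    print(p.relative_to("/usr"))      # PurePosixPath('lib/python3')
    try:
        p.relative_to("/", "usr")     # extra positional argument: removed form
    except TypeError as exc:
        print("no longer supported:", exc)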
_PyWeakref_ClearRef was previously exposed in the public C-API, although
it begins with an underscore and is not documented. It's used by a few
C-API extensions. There is currently no alternative public API that can
replace its use.
_PyWeakref_ClearWeakRefsExceptCallbacks is the only thread-safe way to
use _PyWeakref_ClearRef in the free-threaded build. This exposes the C
symbol, but does not make the API public.
Some embedders and extensions include parts of the internal API. The
pycore_mimalloc.h file is transitively included by a number of other
internal headers. This avoids include errors for code that was
already including those headers.
The `list_preallocate_exact` function did not zero initialize array
contents. In the free-threaded build, this could expose uninitialized
memory to concurrent readers between the call to
`list_preallocate_exact` and the filling of the array contents with
items.
Callbacks registered in the tkinter module now take arguments as
various Python objects (int, float, bytes, tuple), not just str.
To restore the previous behavior, set the tkinter module global wantobjects to 1
before creating the Tk object, or call the wantobjects() method of the Tk object
with argument 1.
Calling it with argument 2 restores the current default behavior.
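A minimal sketch of the two ways to opt back into the old behavior (requires
a display to create the Tk object):

    import tkinter

    tkinter.wantobjects = 1   # module global, set before creating the Tk object
    root = tkinter.Tk()
    root.wantobjects(1)       # or call the method on an existing Tk object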
Fix an edge case in `binascii.a2b_base64` strict mode, where
excessive padding was not detected when no padding is necessary.
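An illustrative sketch of the stricter behavior (the exact error message may
differ):

    import binascii

    print(binascii.a2b_base64(b"Zm9v", strict_mode=True))   # b'foo', no padding needed
    try:
        binascii.a2b_base64(b"Zm9v==", strict_mode=True)    # padding where none is needed
    except binascii.Error as exc:
        print("rejected:", exc)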
Co-authored-by: Terry Jan Reedy <tjreedy@udel.edu>
Co-authored-by: Pieter Eendebak <pieter.eendebak@gmail.com>
Rationale
=========
argparse performs complex formatting of the usage string for argument grouping
and for line wrapping to fit the terminal width. This formatting has been
a constant source of bugs for at least 10 years (see linked issues below),
where defensive assertion errors are triggered or brackets and parentheses
are not properly handled.
Problem
=======
The current implementation of argparse usage formatting relies on regular
expressions to group argument usage, only to separate it again later
with another set of regular expressions. This is a complex and error-prone
approach that has caused all the issues linked below. Special-casing certain
argument formats has not solved the problem. The following are some of
the most common issues:
- empty `metavar`
- mutually exclusive groups with `SUPPRESS`ed arguments
- metavars with whitespace
- metavars with brackets or parentheses
Solution
========
The following two comments summarize the solution:
- https://github.com/python/cpython/issues/82091#issuecomment-1093832187
- https://github.com/python/cpython/issues/77048#issuecomment-1093776995
Mainly, the solution is to rewrite the usage formatting to avoid the
group-then-separate approach. Instead, the usage parts are kept separate
and only joined together at the end. This allows for a much simpler
implementation that is easier to understand and maintain. It avoids the
regular expressions approach and fixes the corresponding issues.
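For example, usage lines involving the problem cases above are expected to
format cleanly with the new approach (an illustrative sketch):

    import argparse

    parser = argparse.ArgumentParser(prog="demo")
    parser.add_argument("--point", metavar="(X, Y)")   # metavar with parentheses and whitespace
    parser.add_argument("pos", metavar="", nargs="?")  # empty metavar
    parser.print_usage()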
This closes the following GitHub issues:
- #62090
- #62549
- #77048
- #82091
- #89743
- #96310
- #98666
These PRs become obsolete:
- #15372
- #96311
Add a missing import to the code that handles too-large files and offsets.
Use a list, not a tuple, for a mutable sequence.
Add tests to prevent similar mistakes.
---------
Co-authored-by: Gregory P. Smith [Google LLC] <greg@krypto.org>
Co-authored-by: Kirill Podoprigora <kirill.bast9@mail.ru>
This change ensures that all extension and builtin modules have their init function run first by the main interpreter before proceeding with the import in the original interpreter (main or otherwise). This means that when the import of a single-phase init module fails in an isolated subinterpreter, it won't tie any global state or callbacks to the subinterpreter.
Add the ability to enable/disable the GIL at runtime, and use that in
the C module loading code.
We can't know before running a module init function if it supports
free-threading, so the GIL is temporarily enabled before doing so. If
the module declares support for running without the GIL, the GIL is
later disabled. Otherwise, the GIL is permanently enabled, and will
never be disabled again for the life of the current interpreter.
Now, such classes will no longer require changes in Python 3.13 in the normal case.
The test suite for robotframework passes with no DeprecationWarnings under this PR.
I also added a new DeprecationWarning for the case where `_field_types` exists
but is incomplete, since that seems likely to indicate a user mistake.
* Add _PyType_LookupRef and use incref before setting an attribute on a type.
* Make setting an attribute on a class and signaling that the type was modified atomic.
* Avoid re-entrancy that could expose the type cache in an inconsistent state by decrefing only after the type is updated.
* Add PhotoImage.read() to read an image from a file.
* Add PhotoImage.data() to get the image data.
* Add background and grayscale parameters to PhotoImage.write().
* Add the PhotoImage method copy_replace() to copy a region
from one image to another image, possibly with pixel zooming and/or
subsampling.
* Add from_coords parameter to PhotoImage methods copy(), zoom() and subsample().
* Add zoom and subsample parameters to PhotoImage method copy().
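A hedged sketch of some of the new PhotoImage pieces (file names are
placeholders; requires a display):

    import tkinter

    root = tkinter.Tk()
    img = tkinter.PhotoImage(file="icon.png")      # "icon.png" is a placeholder path
    img.write("icon_gray.png", grayscale=True)     # new grayscale parameter
    data = img.data()                              # new data() accessor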
The designated initializer syntax in static inline functions in pycore_backoff.h
causes problems for C++ or MSVC users who aren't yet using C++20.
While internal, pycore_backoff.h is included (indirectly, via pycore_code.h)
by some key 3rd party software that does so for speed.
This is *not* sufficient for the final 3.13 release, but it will do for beta 1:
- What's new entry
- Updated changelog entry (news blurb)
- Mention the proxy for f_globals in the datamodel and Python frame object docs
This doesn't have any C API details (what's new refers to the PEP).
For converting large ints to strings, CPython invokes a function in _pylong.py,
which uses the decimal module to implement an asymptotically waaaaay
sub-quadratic algorithm. But if the C decimal module isn't available, CPython
uses _pydecimal.py instead. Which in turn frequently does str(int). If the int
is very large, _pylong ends up doing the work, which in turn asks decimal to do
"big" arithmetic, which in turn calls str(big_int), which in turn ... it can
become infinite mutual recursion.
This change introduces a different int -> str function that doesn't use
decimal. It's asymptotically worse than the decimal-based approach
("Karatsuba time"), but still much better than quadratic time, so it remains
a huge improvement. _pylong switches to it when the C decimal isn't
available. It is also used for not-too-large integers (less than 450_000
bits), where it is faster (up to 2 times for 30_000 bits) than the
asymptotically better implementation that uses the C decimal.
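For context, a rough sketch of the divide-and-conquer idea (illustrative
only; this is not the actual _pylong code and is not tuned):

    def int_to_str(n: int) -> str:
        if n < 0:
            return "-" + int_to_str(-n)
        if n < 10**9:
            return str(n)
        d = n.bit_length() * 3 // 20      # roughly half the decimal digit count
        hi, lo = divmod(n, 10**d)         # n == hi * 10**d + lo
        return int_to_str(hi) + int_to_str(lo).zfill(d)

    assert int_to_str(12345678901234567890) == "12345678901234567890"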
Co-authored-by: Tim Peters <tim.peters@gmail.com>
* Initial stab.
* Test the tentative fix. Hangs "forever" without this change.
* Move the new test to a better spot.
* New comment to explain why _convert_to_str allows any power of 10.
* Fixed a comment, and fleshed out an existing test that appeared unfinished.
* Added temporary asserts. Or maybe permanent ;-)
* Update Lib/_pydecimal.py
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
* Remove the new _convert_to_str().
Serhiy and I independently concluded that exact powers of 10
aren't possible in these contexts, so just checking the
string length is sufficient.
* At least for now, add the asserts to the other block too.
* 📜🤖 Added by blurb_it.
---------
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
This PR adds the ability to enable the GIL if it was disabled at
interpreter startup, and modifies the multi-phase module initialization
path to enable the GIL when loading a module, unless that module's spec
includes a slot indicating it can run safely without the GIL.
PEP 703 called the constant for the slot `Py_mod_gil_not_used`; I went
with `Py_MOD_GIL_NOT_USED` for consistency with gh-104148.
A warning will be issued up to once per interpreter for the first
GIL-using module that is loaded. If `-v` is given, a shorter message
will be printed to stderr every time a GIL-using module is loaded
(including the first one that issues a warning).
The function returns `True` or `False` depending on whether the GIL is
currently enabled. In the default build, it always returns `True`
because the GIL is always enabled.
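A quick check, assuming the function referred to above is
sys._is_gil_enabled() (an assumption on my part):

    import sys

    # Default builds always report True; a free-threaded build may report False.
    print(sys._is_gil_enabled())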
inspect.signature() now supports references to module globals in
parameter defaults on methods in extension modules. Previously this was
only supported in functions. The workaround was to specify the fully
qualified name, including the module name.
Add "Raw" variant of PyTime functions:
* PyTime_MonotonicRaw()
* PyTime_PerfCounterRaw()
* PyTime_TimeRaw()
Changes:
* Add documentation and tests. Tests release the GIL while calling
raw clock functions.
* py_get_system_clock() and py_get_monotonic_clock() now check that
the GIL is held by the caller if raise_exc is non-zero.
* Reimplement "Unchecked" functions with raw clock functions.
Co-authored-by: Petr Viktorin <encukou@gmail.com>
Account for `add_stopiteration_handler` pushing a block for `async with`.
To allow generator functions that previously almost hit the `CO_MAXBLOCKS`
limit by nesting non-async blocks, the limit is increased by 1.
This increase allows one more block in non-generator functions.
Add code and config for a minimal Android app, and instructions to build and run it.
Improve Android build instructions in general.
Add a tool subcommand to download the Gradle wrapper (with its binary blob). Android
Studio must be downloaded manually (due to the license).
The code for Tier 2 is now only compiled when configured
with `--enable-experimental-jit[=yes|interpreter]`.
We drop support for `PYTHON_UOPS` and `-X uops`,
but you can disable the interpreter or JIT
at runtime by setting `PYTHON_JIT=0`.
You can also build it without enabling it by default
using `--enable-experimental-jit=yes-off`;
enable with `PYTHON_JIT=1`.
On Windows, the `build.bat` script supports
`--experimental-jit`, `--experimental-jit-off`,
`--experimental-interpreter`.
In the C code, `_Py_JIT` is defined as before
when the JIT is enabled; the new variable
`_Py_TIER2` is defined when the JIT *or* the
interpreter is enabled. It is actually a bitmask:
1: JIT; 2: default-off; 4: interpreter.
* Allow specifying the signature of custom callable instances of an extension
type via the __text_signature__ attribute.
* Specify signatures of operator.attrgetter, operator.itemgetter, and
operator.methodcaller instances.
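A small illustration; the exact signature text shown is my expectation, not
verified output:

    import inspect, operator

    print(inspect.signature(operator.itemgetter(1)))        # e.g. (obj, /)
    print(inspect.signature(operator.attrgetter("name")))   # e.g. (obj, /)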
While properties like IPv6Address.is_private account for IPv4-mapped
IPv6 addresses, for example:
>>> ipaddress.ip_address("192.168.0.1").is_private
True
>>> ipaddress.ip_address("::ffff:192.168.0.1").is_private
True
...the same doesn't currently apply to the is_loopback property:
>>> ipaddress.ip_address("127.0.0.1").is_loopback
True
>>> ipaddress.ip_address("::ffff:127.0.0.1").is_loopback
False
At minimum, this inconsistency between different properties is
counter-intuitive. Moreover, ::ffff:127.0.0.0/104 is for all intents and
purposes a loopback address, and should be treated as such.
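With this change, the IPv4-mapped form now reports the same result:
>>> ipaddress.ip_address("::ffff:127.0.0.1").is_loopback
True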
sqlite3.iterdump() depends on the row factory returning resulting rows
as tuples; it will fail with custom row factories, such as a dict
factory.
With this commit, we explicitly reset the row factory of the cursor used
by iterdump(), so we always get predictable results. This does not
affect the row factory of the parent connection.
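A minimal sketch of dumping a connection that uses a non-tuple row factory:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.row_factory = sqlite3.Row           # a non-tuple row factory
    con.execute("CREATE TABLE t (x INTEGER)")
    for line in con.iterdump():             # unaffected by the custom row factory
        print(line)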
Co-authored-by: Mariusz Felisiak <felisiak.mariusz@gmail.com>
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
* Add name and mode attributes for compressed and archived file-like objects
in modules bz2, lzma, tarfile and zipfile.
* Change the value of the mode attribute of GzipFile from integer (1 or 2)
to string ('rb' or 'wb').
* Change the value of the mode attribute of ZipExtFile from 'r' to 'rb'.
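For example, for GzipFile (a sketch; the in-memory buffer just avoids
touching the filesystem):

    import gzip, io

    f = gzip.GzipFile(filename="data.txt", mode="wb", fileobj=io.BytesIO())
    print(f.mode)    # now the string 'wb' rather than the integer 2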
The second item in the tuple returned from `__reduce__()` is a tuple of arguments to supply to the path constructor. Previously we returned the `parts` tuple here, which entailed joining, parsing and normalising the path object, and produced a compact pickle representation.
With this patch, we instead return a tuple of paths that were originally given to the path constructor. This makes pickling much faster (at the expense of compactness).
It's worth noting that, in the olden times, pathlib performed this parsing/normalization up-front in every case, and so using `parts` for pickling was almost free. Nowadays pathlib only parses/normalises paths when it's necessary or advantageous to do so (e.g. computing a path parent, or iterating over a directory, respectively).
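A quick round-trip showing the observable behavior is unchanged:

    import pathlib, pickle

    p = pathlib.PurePosixPath("foo", "bar/baz")
    q = pickle.loads(pickle.dumps(p))
    assert q == p    # the pickle now carries the original constructor arguments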
TarFile.addfile() now raises a ValueError when the user passes in a
tarinfo with a non-zero size but does not provide a fileobj,
instead of writing an incomplete entry.
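A minimal sketch of the failing call (exact message may differ):

    import io, tarfile

    tar = tarfile.open(fileobj=io.BytesIO(), mode="w")
    info = tarfile.TarInfo("member.txt")
    info.size = 10
    try:
        tar.addfile(info)                   # non-zero size but no fileobj
    except ValueError as exc:
        print("rejected:", exc)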
Only treat '\n', '\r' and '\r\n' as line separators in re-folding the email
messages. Preserve control characters '\v', '\f', '\x1c', '\x1d' and '\x1e'
and Unicode line separators '\x85', '\u2028' and '\u2029' as is.
Older libedit versions (like Apple's) use a different type signature
for rl_startup_hook and rl_pre_input_hook. Add a configure check to
determine which signature is accepted by introducing the
Py_RL_STARTUP_HOOK_TAKES_ARGS macro in pyconfig.h.
Fix mimalloc allocator for huge memory allocation (around
8,589,934,592 GiB) on s390x.
Abort allocation early in mimalloc if the number of slices doesn't
fit into uint32_t, to prevent an integer overflow when casting the
64-bit size_t to uint32_t.
Since 6258844c, paths that might not exist can be fed into pathlib's
globbing implementation, which will call `os.scandir()` / `os.lstat()` only
when strictly necessary. This allows us to drop an initial `self.is_dir()`
call, which saves a `stat()`.
Co-authored-by: Shantanu <12621235+hauntsaninja@users.noreply.github.com>
RFC 9110 obsoletes the earlier RFC 7231. This document also includes some
status codes that were previously only used for WebDAV and assigns more
generic names to these status codes.
ref: https://www.rfc-editor.org/rfc/rfc9110.html#name-changes-from-rfc-7231
- http.HTTPStatus.CONTENT_TOO_LARGE (413, previously
REQUEST_ENTITY_TOO_LARGE)
- http.HTTPStatus.URI_TOO_LONG (414, previously REQUEST_URI_TOO_LONG)
- http.HTTPStatus.RANGE_NOT_SATISFIABLE (416, previously
REQUESTED_RANGE_NOT_SATISFIABLE)
- http.HTTPStatus.UNPROCESSABLE_CONTENT (422, previously
UNPROCESSABLE_ENTITY)
The new constants are added to http.HTTPStatus and the old constant names are
preserved for backwards compatibility.
References in the documentation to the obsoleted RFC 7231 are updated.
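For example:

    from http import HTTPStatus

    # The new name and the preserved old name both refer to status 413.
    print(int(HTTPStatus.CONTENT_TOO_LARGE), int(HTTPStatus.REQUEST_ENTITY_TOO_LARGE))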
Replace use of `os.listdir()` with `os.scandir()`. Forgo setting `_drv`,
`_root` and `_tail_cached`, as these usually aren't needed. Use
`os.DirEntry.path` to set `_str`.
Don't bother calling `os.scandir()` to scan for literal pattern segments,
like `foo` in `foo/*.py`. Instead, append the segment(s) as-is and call
through to the next selector with `exists=False`, which signals that the
path might not exist. Subsequent selectors will call `os.scandir()` or
`os.lstat()` to filter out missing paths as needed.
This change gives a significant speedup, as the METH_FASTCALL calling
convention is now used. The following bytes and bytearray methods are adapted:
- count()
- find()
- index()
- rfind()
- rindex()
Co-authored-by: Inada Naoki <songofacandy@gmail.com>
Detect libcrypto BLAKE2, Shake, SHA3, and Truncated-SHA512 support at hashlib build time
## BLAKE2
While OpenSSL supports both "b" and "s" variants of the BLAKE2 hash
function, other cryptographic libraries may lack support for one or both
of the variants. This commit modifies `hashlib`'s C code to detect
whether or not the linked libcrypto supports each BLAKE2 variant, and
elides references to each variant's NID accordingly. In cases where the
underlying libcrypto doesn't fully support BLAKE2, CPython's
`./configure` script can be given the following flag to use CPython's
built-in BLAKE2 implementation: `--with-builtin-hashlib-hashes=blake2`.
## SHA3, Shake, & truncated SHA512.
Detect BLAKE2, SHA3, Shake, & truncated SHA512 support in the
OpenSSL-ish libcrypto library at build time. This helps allow hashlib's
`_hashopenssl` to be used with libraries that do not support every
algorithm that upstream OpenSSL does, such as AWS-LC and BoringSSL.
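A quick way to see which of these algorithms the running build ended up
with (a usage sketch, not part of this change):

    import hashlib

    for name in ("blake2b", "blake2s", "sha3_256", "shake_128"):
        print(name, name in hashlib.algorithms_available)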
Co-authored-by: Gregory P. Smith [Google LLC] <greg@krypto.org>
gh-16429 introduced support for an iterable of separators in
StreamReader.readuntil. Since bytes-like types are themselves iterable, this
can introduce ambiguity in deciding whether the argument is an iterable of
separators or a singleton separator. In gh-16429, only 'bytes' was
considered a singleton, but this would break code that passes other buffer
object types.
Fix it by only supporting tuples rather than arbitrary iterables.
Closes gh-117722.
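A minimal sketch of the tuple form (expected output shown as a comment):

    import asyncio

    async def main():
        reader = asyncio.StreamReader()
        reader.feed_data(b"spam;eggs\n")
        # A tuple selects among several separators; a bytes-like object
        # is still treated as a single separator.
        chunk = await reader.readuntil((b";", b"\n"))
        print(chunk)    # expected: b'spam;'

    asyncio.run(main())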
Fall back to tp_call() for cases when arguments are passed by name.
Co-authored-by: Donghee Na <donghee.na@python.org>
Co-authored-by: Victor Stinner <vstinner@python.org>
Move `pathlib.Path.walk()` implementation into `glob._Globber`. The new
`glob._Globber.walk()` classmethod works with strings internally, which is
a little faster than generating `Path` objects and keeping them normalized.
The `pathlib.Path.walk()` method converts the strings back to path objects.
In the private pathlib ABCs, our existing subclass of `_Globber` ensures
that `PathBase` instances are used throughout.
Follow-up to #117589.
Move pathlib globbing implementation into a new private class: `glob._Globber`. This class implements fast string-based globbing. It's called by `pathlib.Path.glob()`, which then converts strings back to path objects.
In the private pathlib ABCs, add a `pathlib._abc.Globber` subclass that works with `PathBase` objects rather than strings, and calls user-defined path methods like `PathBase.stat()` rather than `os.stat()`.
This sets the stage for two more improvements:
- GH-115060: Query non-wildcard segments with `lstat()`
- GH-116380: Unify `pathlib` and `glob` implementations of globbing.
No change to the implementations of `glob.glob()` and `glob.iglob()`.
Moves the validation for invalid years in the C implementation of the `datetime` module into a common location between `fromisoformat` and `fromisocalendar`, which improves the error message and fixes a failed assertion when parsing invalid ISO 8601 years using one of the "ISO weeks" formats.
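A minimal sketch of the improved failure mode (exact messages may differ):

    from datetime import datetime

    try:
        datetime.fromisocalendar(10000, 1, 1)    # ISO year out of range
    except ValueError as exc:
        print(exc)
    try:
        datetime.fromisoformat("0000-W01-1")     # invalid year in an ISO weeks format
    except ValueError as exc:
        print(exc)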
---------
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
This prevents external cancellations of a task group's parent task from
being dropped when an internal cancellation happens at the same time.
Also strengthen the semantics of uncancel() to clear self._must_cancel
when the cancellation count reaches zero.
Co-Authored-By: Tin Tvrtković <tinchester@gmail.com>
Co-Authored-By: Arthur Tacca
Apply the following optimizations to `posixpath.realpath()`:
- Remove use of recursion
- Construct child paths directly rather than using `join()`
- Use `os.getcwd[b]()` rather than `abspath()`
- Use `startswith(sep)` rather than `isabs()`
- Use slicing rather than `split()`
Co-authored-by: Petr Viktorin <encukou@gmail.com>
Introduce a unified 16-bit backoff counter type (``_Py_BackoffCounter``),
shared between the Tier 1 adaptive specializer and the Tier 2 optimizer. The
API used for adaptive specialization counters is changed but the behavior is
(supposed to be) identical.
The behavior of the Tier 2 counters is changed:
- There are no longer dynamic thresholds (we never varied these).
- All counters now use the same exponential backoff.
- The counter for ``JUMP_BACKWARD`` starts counting down from 16.
- The ``temperature`` in side exits starts counting down from 64.
This change gives a significant speedup, as the METH_FASTCALL calling
convention is now used. The following methods are adapted:
- str.count
- str.find
- str.index
- str.rfind
- str.rindex
On Linux >= 2.6.36 with glibc < 2.27, `getcwd()` can return a relative
pathname starting with '(unreachable)'. We detect this and fail with
ENOENT, matching new glibc behaviour.
Co-authored-by: Petr Viktorin <encukou@gmail.com>
* as_completed returns object that is both iterator and async iterator
* Existing tests adjusted to test both the old and new style
* New test to ensure iterator can be resumed
* New test to ensure async iterator yields any passed-in Futures as-is
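A minimal sketch of the new async-iterator style:

    import asyncio

    async def work(i):
        await asyncio.sleep(0.01 * i)
        return i

    async def main():
        async for earliest in asyncio.as_completed([work(3), work(1), work(2)]):
            print(await earliest)    # results arrive in completion order: 1, 2, 3

    asyncio.run(main())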
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Co-authored-by: Guido van Rossum <gvanrossum@gmail.com>
* Extract method for _read_inner, reducing complexity and indentation by 1.
* Extract method for _raise_all and yield ParseErrors from _read_inner.
Reduces complexity by 1 and reduces touch points for handling errors in _read_inner.
* Prefer iterators to splat expansion and literal indexing.
* Extract method for _strip_comments. Reduces complexity by 7.
* Model the file lines in a class to encapsulate the comment status and cleaned value.
* Encapsulate the read state as a dataclass
* Extract _handle_continuation_line and _handle_rest methods. Reduces complexity by 8.
* Reindent
* At least for now, collect errors in the ReadState
* Check for missing section header separately.
* Extract methods for _handle_header and _handle_option. Reduces complexity by 6.
* Remove unreachable code. Reduces complexity by 4.
* Remove unreachable branch
* Handle error condition early. Reduces complexity by 1.
* Add blurb
* Move _raise_all to ParsingError, as its behavior is most closely related to the exception class and not the reader.
* Split _strip* into separate methods.
* Refactor _strip_full to compute the strip just once and use 'not any' to determine the factor.
* Replace use of 'sys.maxsize' with direct computation of the stripped value.
* Extract has_comments as a dynamic property.
* Implement clean as a cached property.
* Model comment prefixes in the RawConfigParser within a prefixes namespace.
* Use a regular expression to search for the first match.
Avoids mutating variables and tricky logic and over-computing all of the starts when only the first is relevant.
Remove extra self DECREF on ssl "no ciphers" error path.
This doesn't come up in practice because nobody links against a broken
OpenSSL library that provides nothing.
Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>
Co-authored-by: Jacob Coffee <jacob@z7x.org>
Co-authored-by: Malcolm Smith <smith@chaquo.com>
Co-authored-by: Ned Deily <nad@python.org>
Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>
Co-authored-by: Malcolm Smith <smith@chaquo.com>
Co-authored-by: Ned Deily <nad@python.org>
* Reads zip64 files as produced by the zipfile module
* Include tests (somewhat slow, however, because of the need to create "large" zips)
* About the same amount of strictness reading invalid zip files as zipfile has
* Still works on files with prepended data (like pex)
There are a lot more test cases at https://github.com/thatch/zipimport64/ that give me confidence that this works for real-world files.
Fixes #89739 and #77140.
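A small usage sketch; the archive built below is tiny, but since this change
an archive requiring the zip64 format imports the same way:

    import sys, zipfile

    with zipfile.ZipFile("bundle.zip", "w") as zf:   # "bundle.zip" is a throwaway demo file
        zf.writestr("hello_plugin.py", "GREETING = 'hi'\n")

    sys.path.insert(0, "bundle.zip")
    import hello_plugin
    print(hello_plugin.GREETING)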
---------
Co-authored-by: Itamar Ostricher <itamarost@gmail.com>
Reviewed-by: Gregory P. Smith <greg@krypto.org>
Explicitly handle the case where stdout=STDOUT,
as otherwise the existing error handling gets
confused and reports hard-to-understand errors.
Signed-off-by: Paulo Neves <ptsneves@gmail.com>
Fix parsing of the following corner cases:
* URLs with only a host name
* URLs containing a fragment
* URLs containing a query
* filenames with only a UNC sharepoint on Windows
Co-authored-by: Dong-hee Na <donghee.na92@gmail.com>