This PR updates the cmath module documentation to reflect the reality that Python almost always (and, as far as I can tell, the "almost" can be omitted) runs on a machine whose C double supports signed zeros.
* Removes misleading references to functions being continuous from above / below / the left / the right at branch cuts
* Expands the note on branch cuts at the top of the module documentation to explain the double-sided sign-of-zero-based behaviour
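For illustration, a quick REPL sketch of the sign-of-zero behaviour (results assume IEEE 754 signed zeros, which in practice is every supported platform):
```python
>>> import cmath
>>> cmath.sqrt(complex(-1.0, 0.0))    # imaginary part +0.0: upper side of the cut
1j
>>> cmath.sqrt(complex(-1.0, -0.0))   # imaginary part -0.0: lower side of the cut
-1j
```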
We're adding the function back, but only as a stable ABI symbol and not as any form of API. I had removed it yesterday.
This undocumented "private" function was added with the implementation for PEP 3121 (3.0, 2007) for internal use and later moved out of the limited API (3.6, 2016) and then into the internal API (3.9, 2019). I removed it completely yesterday, including from the stable ABI manifest (where it was added because the symbol happened to be exported). It's unlikely that anyone is using _PyState_AddModule(), especially any stable ABI extensions built against 3.2-3.5, but we're playing it safe.
https://github.com/python/cpython/issues/101758
* fileutils: handle non-blocking pipe IO on Windows
Handle operations that fail on non-blocking pipes by reading the _doserrno code.
Limit writes to non-blocking pipes when the requested size is too large.
* Support blocking functions on Windows
Use the GetNamedPipeHandleState and SetNamedPipeHandleState Win32 API functions to add support for os.get_blocking and os.set_blocking.
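A minimal sketch of the Python-level behaviour this enables; the same code already works on POSIX, and with this change it also works for pipes on Windows:
```python
import os

r, w = os.pipe()
os.set_blocking(r, False)
print(os.get_blocking(r))   # False
try:
    os.read(r, 1)           # nothing written yet: fails instead of hanging
except BlockingIOError:
    print("read would have blocked")
os.close(r)
os.close(w)
```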
This merges their code. They're backed by the same single HACL* static library, and having them be a single module simplifies maintenance.
This should unbreak the WASM Emscripten builds that currently fail due to linking in --whole-archive mode with the HACL* library appearing twice.
A long-unnoticed error is also fixed: _sha512.SHA384Type was doubly assigned and was actually SHA512Type. Nobody depends on those internal names.
Also rename the LIBHACL_ make vars to LIBHACL_SHA2_ in preparation for other future HACL* things.
Enforcing (optionally) the restriction set by PEP 489 makes sense. Furthermore, this sets the stage for a potential restriction related to a per-interpreter GIL.
This change includes the following:
* add tests for extension module subinterpreter compatibility
* add _PyInterpreterConfig.check_multi_interp_extensions
* add Py_RTFLAGS_MULTI_INTERP_EXTENSIONS
* add _PyImport_CheckSubinterpIncompatibleExtensionAllowed()
* fail iff the module does not implement multi-phase init and the current interpreter is configured to check
https://github.com/python/cpython/issues/98627
This change is almost entirely moving code around and hiding import state behind internal API. We introduce no changes to behavior, nor to non-internal API. (Since there was already going to be a lot of churn, I took this as an opportunity to re-organize import.c into topically-grouped sections of code.) The motivation is to simplify a number of upcoming changes.
Specific changes:
* move existing import-related code to import.c, wherever possible
* add internal API for interacting with import state (both global and per-interpreter)
* use only API outside of import.c (to limit churn there when changing the location, etc.)
* consolidate the import-related state of PyInterpreterState into a single struct field (this changes layout slightly)
* add macros for import state in import.c (to simplify changing the location)
* group code in import.c into sections
* remove _PyState_AddModule()
https://github.com/python/cpython/issues/101758
Replace the builtin hashlib implementations of SHA2-384 and SHA2-512
originally from LibTomCrypt with formally verified, side-channel resistant
code from the [HACL*](https://github.com/hacl-star/hacl-star/) project.
The builtins remain a fallback only used when OpenSSL does not provide them.
Previously, we checked exclusively for `__GLIBC__` (AND'd with some other
conditions). Checking for `__linux__` instead should be fine.
This fixes using e.g. `os.listxattr()` on systems using musl libc.
Bug: https://bugs.gentoo.org/894130
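A quick sketch of what now works on musl-based systems (assuming the filesystem supports extended attributes):
```python
import os

# On musl-based Linux systems, os.listxattr (and friends) previously did not
# exist at all because the configure check only looked for glibc.
print(os.listxattr("/tmp"))   # typically [] unless attributes have been set
```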
Co-authored-by: Gregory P. Smith <greg@krypto.org>
`socket.getaddrinfo()` no longer raises `OverflowError` based on the **port** argument. Error reporting (or not) for its value is left up to the underlying C library `getaddrinfo()` implementation.
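A hedged sketch of the new behaviour; the exact outcome for an out-of-range port is now platform-dependent:
```python
import socket

try:
    # No longer rejected up front with OverflowError; the value is passed
    # through and the platform's getaddrinfo() decides how to report it.
    socket.getaddrinfo("localhost", 1 << 40)
except socket.gaierror as exc:
    print("rejected by the C library:", exc)
```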
Utilize the new functions termios.tcgetwinsize() and termios.tcsetwinsize() in test_pty.py.
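A minimal usage sketch of the pair (requires stdin to be attached to a real terminal):
```python
import sys
import termios

fd = sys.stdin.fileno()
size = termios.tcgetwinsize(fd)   # (rows, columns)
print(size)
termios.tcsetwinsize(fd, size)    # set the window size back unchanged
```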
Signed-off-by: Soumendra Ganguly <soumendraganguly@gmail.com>
Co-authored-by: Gregory P. Smith <greg@krypto.org>
Prevent test_tools from copying 1000M of "source"
It doesn't need a git repo, just the checkout. We skip .git metadata, Doc/build, Doc/venv, and `__pycache__` subdirs that developers often have in their working copies, which reduces the size of the source tree copy ten-fold.
This should significantly reduce IO and presumably time on buildbots during this long test.
Refactored the implementation of pty.fork to use os.login_tty.
A DeprecationWarning is now raised by pty.master_open() and pty.slave_open(). They were
undocumented and were deprecated long ago in the docstring in favor of pty.openpty().
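A minimal sketch of the supported replacement, pty.openpty() (POSIX only):
```python
import os
import pty

master_fd, slave_fd = pty.openpty()   # replaces master_open()/slave_open()
os.write(slave_fd, b"hello")
print(os.read(master_fd, 5))          # b'hello'
os.close(master_fd)
os.close(slave_fd)
```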
Signed-off-by: Soumendra Ganguly <soumendraganguly@gmail.com>
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Gregory P. Smith <greg@krypto.org>
Fix creating install directories in `make sharedinstall` if they already exist outside `DESTDIR`. The previous make rules assumed that the directories would be created via a dependency on a rule for `$(DESTSHARED)`, which did not fire if the directory already existed outside `$(DESTDIR)`.
While technically `$(DESTDIR)` could be prepended to the rule name, moving the rules for creating directories straight into the `sharedinstall` rule seems to fit common practice better. Since the rule explicitly checks whether the individual directories exist anyway, there seems to be no reason to rely on make determining that implicitly as well.
* Make sure that the current exception is always normalized.
* Remove redundant type and traceback fields for the current exception.
* Add new API functions: PyErr_GetRaisedException, PyErr_SetRaisedException
* Add new API functions: PyException_GetArgs, PyException_SetArgs
This is the first PR in a series replacing hashlib primitives (for the non-OpenSSL case) with verified implementations from HACL*; it focuses specifically on SHA2-256 and SHA2-224.
This PR imports Hacl_Streaming_SHA2 into the Python tree. This is the HACL* implementation of SHA2, which combines a core implementation of SHA2 along with a layer of buffer management that allows updating the digest with any number of bytes. This supersedes the previous implementation in the tree.
@franziskuskiefer was kind enough to benchmark the changes: in addition to being verified (thus providing significant safety and security improvements), this implementation also provides a sizeable performance boost!
```
---------------------------------------------------------------
Benchmark Time CPU Iterations
---------------------------------------------------------------
Sha2_256_Streaming 3163 ns 3160 ns 219353 // this PR
LibTomCrypt_Sha2_256 5057 ns 5056 ns 136234 // library used by Python currently
```
The changes in this PR are as follows:
- import the subset of HACL* that covers SHA2-256/224 into `Modules/_hacl`
- rewire sha256module.c to use the HACL* implementation
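The Python-level behaviour is unchanged; only the backing C code used when OpenSSL does not provide SHA2-256 changes. For example:
```python
import hashlib

# Same result regardless of which backend provides SHA2-256:
print(hashlib.sha256(b"abc").hexdigest())
# ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```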
Co-authored-by: Gregory P. Smith [Google LLC] <greg@krypto.org>
Co-authored-by: Erlend E. Aasland <erlend.aasland@protonmail.com>
The GILState API (PEP 311) implementation from 2003 made the assumption that only one thread state would ever be used for any given OS thread, explicitly disregarding the case of subinterpreters. However, PyThreadState_Swap() still facilitated switching between subinterpreters, meaning the "current" thread state (holding the GIL), and the GILState thread state could end up out of sync, causing problems (including crashes).
This change addresses the issue by keeping the two in sync in PyThreadState_Swap(). I verified the fix against gh-99040.
Note that the other GILState-subinterpreter incompatibility (with autoInterpreterState) is not resolved here.
https://github.com/python/cpython/issues/59956
That causes the test to fail when run using a high UID, as that ancient format
cannot represent it. The current default (PAX) and the old default (GNU) both
support high UIDs.
The summary of this diff is that it:
* adds a `_ctypes_alloc_format_padding` function to append strings like `37x` to a format string to indicate 37 padding bytes
* removes the branches that amount to "give up on producing a valid format string if the struct is packed"
* combines the resulting adjacent `if (isStruct) {`s now that neither is `if (isStruct && !isPacked) {`
* invokes `_ctypes_alloc_format_padding` to add padding between structure fields, and after the last structure field. The computation used for the total size is unchanged from what ctypes already used.
This patch does not affect any existing alignment computation; all it does is use subtraction to deduce the amount of padding introduced by the existing code.
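A hedged sketch of the effect (the exact format string depends on the platform ABI):
```python
import ctypes

class Example(ctypes.Structure):
    # On most ABIs the double is 8-byte aligned, so 7 padding bytes follow `flag`.
    _fields_ = [("flag", ctypes.c_char), ("value", ctypes.c_double)]

mv = memoryview(Example())
# The format string now describes that padding (e.g. with a "7x" entry)
# instead of silently omitting it.
print(mv.format)
```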
---
Without this fix, the format string would never include padding bytes - an assumption that is only
valid when `_pack_` is set - and that case was explicitly not implemented.
This should allow conversion from ctypes structs to numpy structs.
Fixes https://github.com/numpy/numpy/issues/10528
Update bundled pip version to 23.0
This is the latest version of `pip`.
---------
Co-authored-by: Pradyun Gedam <pradyunsg@users.noreply.github.com>
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Fix the behaviour of the `__sizeof__` method (and hence the results returned by `sys.getsizeof`) for subclasses of `int`. Previously, `int` subclasses gave identical results to the `int` base class, ignoring the presence of the instance dictionary.
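A hedged illustration (exact byte counts vary by platform and Python version):
```python
import sys

class MyInt(int):
    pass

x = MyInt(1)
x.attr = "payload"
# Previously both calls reported the same size; the subclass instance now
# also accounts for its __dict__ pointer.
print(sys.getsizeof(1), sys.getsizeof(x))
```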
https://github.com/python/cpython/issues/101266
This starts the deprecation process. Users who don't specify their own start method
and use the default on platforms where it is 'fork' will see a
DeprecationWarning upon multiprocessing.Pool() construction, upon
multiprocessing.Process.start(), or upon concurrent.futures.ProcessPoolExecutor use.
See the related issue and documentation within this change for details.
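A minimal sketch of how to opt in to a specific start method and avoid the warning:
```python
import multiprocessing as mp

def work(n):
    return n * n

if __name__ == "__main__":
    # Explicitly choosing a start method avoids relying on the deprecated
    # implicit 'fork' default.
    ctx = mp.get_context("spawn")
    with ctx.Pool(2) as pool:
        print(pool.map(work, range(4)))
```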
`warnings.warn()` gains the ability to skip stack frames based on code
filename prefix rather than only a numeric `stacklevel=` via a new
`skip_file_prefixes=` keyword argument.
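A minimal sketch (the helper-package path here is a made-up example):
```python
import os
import warnings

_PKG_DIR = os.path.dirname(__file__)

def old_helper():
    # Frames from files under _PKG_DIR are skipped when attributing the
    # warning, so it points at the caller of the package rather than at
    # this internal helper.
    warnings.warn("old_helper() is deprecated; use new_helper()",
                  DeprecationWarning,
                  skip_file_prefixes=(_PKG_DIR,))
```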
To use this, ensure that clang support was selected in Visual Studio Installer, then set the PlatformToolset environment variable to "ClangCL" and build as normal from the command line.
It remains unsupported, but is at least now possible for experimentation.
Fixes a reference counting issue with `ctypes.Structure` when a `from_param()` method call is used and the structure size is larger than a C pointer `sizeof(void*)`.
This problem existed for a very long time, but became more apparent in 3.8+, likely due to garbage collection cleanup timing changes.
Python 2.x and up to 3.4 did not contain the "-32" in their registry name, so the 32- and 64-bit installs were treated the same. Since 3.5/PEP 514 this is no longer true, but we still want to detect the EOL versions correctly in case people are still using them.
Additionally, the code to replace a node with one with a lower sort key was buggy (wrong node chosen, the replacement never happened since the parent was always NULL, the replaced node was never freed, etc.).
When getaddrinfo returns an error, the output pointer is in an unknown state;
don't call freeaddrinfo on it. See the issue for discussion and details, with
links to reasoning. _Most_ libc getaddrinfo implementations never modify the
output pointer unless they are returning success.
Co-authored-by: Sergey G. Brester <github@sebres.de>
Co-authored-by: Oleg Iarygin <dralife@yandex.ru>
When testing element truth values, emit a DeprecationWarning in all implementations.
This had emitted a FutureWarning in the rarely used python-only implementation since ~2.7 and has always been documented as a behavior not to rely on.
Matching an element in a tree search but having it test False can be unexpected. Emitting the warning makes it possible to eventually raise an exception for this ambiguous behavior in the future.
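A minimal sketch of the ambiguity and of the unambiguous spelling:
```python
import xml.etree.ElementTree as ET

root = ET.fromstring("<root><empty/></root>")
elem = root.find("empty")

# Ambiguous: the element was found, yet it tests False because it has no
# children. This now emits a DeprecationWarning in all implementations.
if elem:
    print("never reached")

# Unambiguous:
if elem is not None:
    print("found it")
```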
This PR adds support for float-style formatting for `Fraction` objects: it supports the `"e"`, `"E"`, `"f"`, `"F"`, `"g"`, `"G"` and `"%"` presentation types, and all the various bells and whistles of the formatting mini-language for those presentation types. The behaviour almost exactly matches that of `float`, but the implementation works with the exact `Fraction` value and does not do an intermediate conversion to `float`, and so avoids loss of precision or issues with numbers that are outside the dynamic range of the `float` type.
Note that the `"n"` presentation type is _not_ supported. That support could be added later if people have a need for it.
There's one corner-case where the behaviour differs from that of float: for the `float` type, if explicit alignment is specified with a fill character of `'0'` and alignment type `'='`, then thousands separators (if specified) are inserted into the padding string:
```python
>>> format(3.14, '0=11,.2f')
'0,000,003.14'
```
The exact same effect can be achieved by using the `'0'` flag:
```python
>>> format(3.14, '011,.2f')
'0,000,003.14'
```
For `Fraction`, only the `'0'` flag has the above behaviour with respect to thousands separators: there's no special-casing of the particular `'0='` fill-character/alignment combination. Instead, we treat the fill character `'0'` just like any other:
```python
>>> format(Fraction('3.14'), '0=11,.2f')
'00000003.14'
>>> format(Fraction('3.14'), '011,.2f')
'0,000,003.14'
```
The `Fraction` formatter is also stricter about combining these two things: it's not permitted to use both the `'0'` flag _and_ explicit alignment, on the basis that we should refuse the temptation to guess in the face of ambiguity. `float` is less picky:
```python
>>> format(3.14, '0<011,.2f')
'3.140000000'
>>> format(Fraction('3.14'), '0<011,.2f')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mdickinson/Repositories/python/cpython/Lib/fractions.py", line 414, in __format__
raise ValueError(
ValueError: Invalid format specifier '0<011,.2f' for object of type 'Fraction'; can't use explicit alignment when zero-padding
```
Use C long arithmetic instead of PyLong arithmetic to compute the range length, where possible.
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Mark Dickinson <dickinsm@gmail.com>
This PR fixes object allocation in long_subtype_new to ensure that there's at least one digit in all cases, and makes sure that the value of that digit is copied over from the source long.
Needs backport to 3.11, but not any further: the change to require at least one digit was only introduced for Python 3.11.
Fixes #101037.
* Update description of stdout, stderr, and stdin.
Changes:
- Move the ``None`` option (which is default) to the front of the list
of input options
- Move the ``None`` option description up to make the default behavior
more clear (No redirection)
- Remove mention of Child File Descriptors from ``None`` option description
The zipfile.Path open() and read_text() encoding parameter can again be supplied as a positional argument without causing a TypeError. 3.10.0b1 included a regression that made it keyword-only.
Documentation update included as users writing code to be compatible with a wide range of versions will need to consider this for some time.
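A minimal sketch (archive and member names here are hypothetical):
```python
import zipfile

with zipfile.ZipFile("example.zip") as zf:       # hypothetical archive
    member = zipfile.Path(zf, "notes.txt")       # hypothetical member
    # The encoding may once again be passed positionally:
    text = member.read_text("utf-8")
```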
Have _posixsubprocess.c stop using boolean flags to say whether gid and uid values were supplied and action is required. Such an implicit "either initialized or look somewhere else" scheme confused both the reader (another mental connection to constantly track between functions) and the compiler (warnings about potentially uninitialized variables being passed). Instead, we can use a special group/user id, -1, as the flag value; it is defined by POSIX but used nowhere else. Namely:
* gid: call_setgid = False → gid = -1
* uid: call_setuid = False → uid = -1
* groups: call_setgroups = False → groups = NULL (obtained with (groups_list != Py_None) ? groups : NULL)
This PR is required for #94519.
Fix a bug where `Path` accepted and ignored `**kwargs`, by adding to the `PurePath` class an `__init__` method that takes only positional arguments.
Automerge-Triggered-By: GH:brettcannon
Partially revert changes made in GH-93453.
asyncio.DefaultEventLoopPolicy.get_event_loop() now emits a
DeprecationWarning and creates and sets a new event loop instead of
raising a RuntimeError if there is no current event loop set.
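A minimal sketch of code that does not depend on the deprecated implicit loop creation:
```python
import asyncio

async def main():
    return 42

# Prefer asyncio.run(), which manages its own loop:
print(asyncio.run(main()))

# Or create and set a loop explicitly instead of relying on get_event_loop():
loop = asyncio.new_event_loop()
try:
    print(loop.run_until_complete(main()))
finally:
    loop.close()
```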
Co-authored-by: Guido van Rossum <gvanrossum@gmail.com>
This brings the Python implementation of `ntpath.normpath()` in line with the C implementation added in 99fcf15.
Co-authored-by: Eryk Sun <eryksun@gmail.com>
Fix the gdbm_compat library detection logic to actually check for
-lgdbm_compat independently of the ndbm detection.
This fixes the build failure with `--with-dbmliborder=gdbm`,
and implicit fallback to ndbm with the default value.
This PR removes the `volatile` qualifier on various intermediate quantities
in the `math.fsum` implementation, and updates the notes preceding the
algorithm accordingly (as well as fixing some of the existing notes). See
the linked issue #100833 for discussion.
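For context, a quick illustration of the accuracy property the notes describe (nothing about the user-facing behaviour changes here):
```python
import math

values = [0.1] * 10
print(sum(values))        # 0.9999999999999999 -- naive summation accumulates rounding error
print(math.fsum(values))  # 1.0 -- fsum tracks exact partial sums
```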
Mock objects which are not unsafe will now raise an AttributeError when accessing an
attribute that matches the name of an assertion but without the prefix `assert_`, e.g. accessing `called_once` instead of `assert_called_once`.
This is in addition to the existing behavior of doing so for attributes with the prefixes assert, assret, asert, aseert, and assrt.
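A minimal sketch of the new failure mode:
```python
from unittest import mock

m = mock.Mock()
m()
m.assert_called_once()   # real assertion: passes

# A typo that drops the assert_ prefix no longer silently creates a new
# child mock; it raises AttributeError instead.
try:
    m.called_once()
except AttributeError as exc:
    print(exc)
```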
Make some trivial performance optimizations in Fraction
Uses the private attributes `_numerator` and `_denominator` in place of the `numerator` and `denominator` property accesses.
Co-authored-by: hauntsaninja <hauntsaninja@gmail.com>
When cross-compiling, the compile/run test for -pthread always fails so -pthread
will never be automatically set without an override from the cache. ac_cv_pthread
can already be overridden, so do the same thing for ac_cv_cxx_thread.
Increase performance of the `absolute()` method by calling `os.getcwd()` directly, rather than using the `Path.cwd()` class method. This avoids constructing an extra `Path` object (and the parsing/normalization that comes with it).
Decrease performance of the `cwd()` class method by calling the `Path.absolute()` method, rather than using `os.getcwd()` directly. This involves constructing an extra `Path` object. We do this to maintain a longstanding pattern where `os` functions are called from only one place, which allows them to be more readily replaced by users. As `cwd()` is generally called at most once within user programs, it's a good bargain.
```shell
# before
$ ./python -m timeit -s 'from pathlib import Path; p = Path("foo", "bar")' 'p.absolute()'
50000 loops, best of 5: 9.04 usec per loop
# after
$ ./python -m timeit -s 'from pathlib import Path; p = Path("foo", "bar")' 'p.absolute()'
50000 loops, best of 5: 5.02 usec per loop
```
Automerge-Triggered-By: GH:AlexWaygood