This is related to fixing the refleaks introduced by commit 096d009. I haven't been able to find the leak yet, but these changes are a consequence of that effort. This includes some cleanup, some tweaks to the existing tests, and a bunch of new test cases. The only change here that might have an impact outside the tests in question is in imp.py, where I update imp.load_dynamic() to use spec_from_file_location() instead of creating a ModuleSpec directly.
Also note that I've updated the tests to only skip if we're checking for refleaks (regrtest's --huntrleaks), whereas in gh-101969 I had skipped the tests entirely. The tests will be useful for some upcoming work and I'd rather the refleaks not hold that up. (It isn't clear how quickly we'll be able to fix the leaking code, though it will certainly be done in the short term.)
https://github.com/python/cpython/issues/102251
lzma.LZMADecompressor and bz2.BZ2Decompressor objects caused
segfaults when their `__init__()` methods were not called.
lzma.LZMADecompressor, lzma.LZMACompressor, bz2.BZ2Compressor,
and bz2.BZ2Decompressor objects would leak locks and internal buffers
when their `__init__()` methods were called multiple times.
https://bugs.python.org/issue23224
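For illustration, a minimal sketch (an assumed reproducer, not the actual test code) of the two failure modes described above:
```
import bz2
import lzma

# __init__() never runs here; using the object used to segfault and
# now fails cleanly instead.
d = lzma.LZMADecompressor.__new__(lzma.LZMADecompressor)
try:
    d.decompress(b"")
except Exception as exc:
    print(type(exc).__name__)

# Calling __init__() a second time used to leak the lock and internal
# buffers; it no longer does.
b = bz2.BZ2Decompressor()
b.__init__()
```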
- partial tests for cosh/sinh overflows (L535 and L771); I doubt that
  both ||-ed conditions can be exercised.
- removed an inaccessible case in sqrt (L832): ax = ay = 0 is already handled
  above (L823) because fabs() is exact. Also added a test (checked
  with mpmath and gmpy2) for the second condition on that line.
- some trivial tests for isclose (covering all conditions on L1217-1218)
- added a comment for uncovered L1018
Co-authored-by: Mark Dickinson <dickinsm@gmail.com>
* fileutils: handle non-blocking pipe IO on Windows
Handle failing operations on non-blocking pipes by reading the _doserrno code.
Limit the size of writes to non-blocking pipes when they are too large for the pipe.
* Support blocking functions on Windows
Use the GetNamedPipeHandleState and SetNamedPipeHandleState Win32 API functions to add support for os.get_blocking and os.set_blocking.
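As a rough sketch of the user-visible effect (assuming a plain anonymous pipe), the calls that already worked on POSIX descriptors now work for pipe handles on Windows as well:
```
import os

r, w = os.pipe()
os.set_blocking(r, False)       # now supported for pipes on Windows too
print(os.get_blocking(r))       # False
os.close(r)
os.close(w)
```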
This merges their code. They're backed by the same single HACL* static library, and having them in a single module simplifies maintenance.
This should unbreak the WASM Emscripten builds that currently fail due to linking in --whole-archive mode with the HACL* library appearing twice.
A long-unnoticed error is also fixed: _sha512.SHA384Type was assigned twice and was actually SHA512Type. Nobody depends on those internal names.
Also rename the LIBHACL_ make vars to LIBHACL_SHA2_ in preparation for other future HACL* things.
Enforcing (optionally) the restriction set by PEP 489 makes sense. Furthermore, this sets the stage for a potential restriction related to a per-interpreter GIL.
This change includes the following:
* add tests for extension module subinterpreter compatibility
* add _PyInterpreterConfig.check_multi_interp_extensions
* add Py_RTFLAGS_MULTI_INTERP_EXTENSIONS
* add _PyImport_CheckSubinterpIncompatibleExtensionAllowed()
* fail iff the module does not implement multi-phase init and the current interpreter is configured to check
https://github.com/python/cpython/issues/98627
This builds HACL* as a library in one place.
A follow-up to #101707, which broke some WASM builds. This fixes 2/4 of them, but the Emscripten toolchain used by the others doesn't deduplicate linker arguments and errors out. A follow-on PR will address those.
The new test exercises the most important variants for single-phase init extension modules. We also add some explanation about those variants to import.c.
https://github.com/python/cpython/issues/101758
Replace the builtin hashlib implementations of SHA2-384 and SHA2-512
originally from LibTomCrypt with formally verified, side-channel resistant
code from the [HACL*](https://github.com/hacl-star/hacl-star/) project.
The builtins remain a fallback only used when OpenSSL does not provide them.
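The swap is intended to be invisible at the Python level; a quick sanity check:
```
import hashlib

# Same API and digests regardless of whether OpenSSL or the HACL*
# builtin ends up providing the implementation.
print(hashlib.sha384(b"abc").hexdigest())
print(hashlib.sha512(b"abc").hexdigest())
```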
Previously, we checked exclusively for `__GLIBC__` (AND'd with some other
conditions). Checking for `__linux__` instead should be fine.
This fixes using e.g. `os.listxattr()` on systems using musl libc.
Bug: https://bugs.gentoo.org/894130
Co-authored-by: Gregory P. Smith <greg@krypto.org>
`socket.getaddrinfo()` no longer raises `OverflowError` based on the **port** argument. Error reporting (or not) for its value is left up to the underlying C library `getaddrinfo()` implementation.
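A small sketch of the new behaviour (the outcome for an out-of-range value depends on the platform's getaddrinfo()):
```
import socket

try:
    socket.getaddrinfo("localhost", 1 << 16)   # no longer OverflowError
except OSError as exc:
    print("rejected by the C library:", exc)
```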
`math_1_to_whatever()` is no longer useful, since all existing uses of it convert to `float`.
Earlier versions of Python used `math_1_to_whatever` with an integer target; see
gh-16991 for the PR where that use was removed.
* Make sure that the current exception is always normalized.
* Remove redundant type and traceback fields for the current exception.
* Add new API functions: PyErr_GetRaisedException, PyErr_SetRaisedException
* Add new API functions: PyException_GetArgs, PyException_SetArgs
This replaces hashlib primitives (for the non-OpenSSL case) with verified implementations from HACL*. It is the first PR in the series and focuses specifically on SHA2-256 and SHA2-224.
This PR imports Hacl_Streaming_SHA2 into the Python tree. This is the HACL* implementation of SHA2, which combines a core implementation of SHA2 along with a layer of buffer management that allows updating the digest with any number of bytes. This supersedes the previous implementation in the tree.
@franziskuskiefer was kind enough to benchmark the changes: in addition to being verified (thus providing significant safety and security improvements), this implementation also provides a sizeable performance boost!
```
---------------------------------------------------------------
Benchmark Time CPU Iterations
---------------------------------------------------------------
Sha2_256_Streaming 3163 ns 3160 ns 219353 // this PR
LibTomCrypt_Sha2_256 5057 ns 5056 ns 136234 // library used by Python currently
```
The changes in this PR are as follows:
- import the subset of HACL* that covers SHA2-256/224 into `Modules/_hacl`
- rewire sha256module.c to use the HACL* implementation
Co-authored-by: Gregory P. Smith [Google LLC] <greg@krypto.org>
Co-authored-by: Erlend E. Aasland <erlend.aasland@protonmail.com>
This PR fixes the buildbot failures introduced by the merge of #5561, by restricting the relevant tests to something that should work on both 32-bit and 64-bit platforms. It also silences some compiler warnings introduced in that PR.
The summary of this diff is that it:
* adds a `_ctypes_alloc_format_padding` function to append strings like `37x` to a format string to indicate 37 padding bytes
* removes the branches that amount to "give up on producing a valid format string if the struct is packed"
* combines the resulting adjacent `if (isStruct) {`s now that neither is `if (isStruct && !isPacked) {`
* invokes `_ctypes_alloc_format_padding` to add padding between structure fields, and after the last structure field. The computation used for the total size is unchanged from what ctypes already used.
This patch does not affect any existing alignment computation; all it does is use subtraction to deduce the amount of padding introduced by the existing code.
---
Without this fix, the format string would never include padding bytes, an assumption that was only
valid in the case when `_pack_` was set, and that case was explicitly not implemented.
This should allow conversion from ctypes structs to numpy structs.
Fixes https://github.com/numpy/numpy/issues/10528
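A minimal sketch of the effect (field names and types chosen for illustration; the exact padding depends on the ABI):
```
import ctypes

class Sample(ctypes.Structure):
    _fields_ = [("tag", ctypes.c_char), ("value", ctypes.c_int)]

# The buffer format string now contains explicit padding entries
# (e.g. "3x" between the char and the int on typical ABIs), which is
# what allows consumers such as numpy to reconstruct the exact layout.
print(memoryview(Sample()).format)
```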
A PyThreadState can be in one of many states in its lifecycle, represented by some status value. Those statuses haven't been particularly clear, so we're addressing that here. Specifically:
* made the distinct lifecycle statuses clear on PyThreadState
* identified expectations of how various lifecycle-related functions relate to status
* noted the various places where those expectations don't match the actual behavior
At some point we'll need to address the mismatches.
(This change also includes some cleanup.)
https://github.com/python/cpython/issues/59956
To use this, ensure that clang support was selected in Visual Studio Installer, then set the PlatformToolset environment variable to "ClangCL" and build as normal from the command line.
It remains unsupported, but it is at least now possible for experimentation.
Fixes a reference counting issue with `ctypes.Structure` when a `from_param()` method call is used and the structure size is larger than a C pointer `sizeof(void*)`.
This problem existed for a very long time, but became more apparent in 3.8+, likely due to garbage collection cleanup timing changes.
When getaddrinfo returns an error, the output pointer is in an unknown state;
don't call freeaddrinfo on it. See the issue for discussion and details, with
links to reasoning. _Most_ libc getaddrinfo implementations never modify the
output pointer unless they are returning success.
Co-authored-by: Sergey G. Brester <github@sebres.de>
Co-authored-by: Oleg Iarygin <dralife@yandex.ru>
When testing element truth values, emit a DeprecationWarning in all implementations.
This had emitted a FutureWarning in the rarely used python-only implementation since ~2.7 and has always been documented as a behavior not to rely on.
Matching an element in a tree search but having it test False can be unexpected. Emitting the warning makes it possible to eventually turn this ambiguous behavior into an exception.
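The ambiguity in question, sketched briefly:
```
import xml.etree.ElementTree as ET

root = ET.fromstring("<root><empty/></root>")
elem = root.find("empty")
# The element was found, yet it tests False because it has no children.
# "is not None" is the unambiguous spelling the warning nudges towards.
if elem is not None:
    print("found", elem.tag)
```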
The objective of this change is to help make the GILState-related code easier to understand. This mostly involves moving code around and some semantically equivalent refactors. However, there are also a small number of slight changes in structure and behavior:
* tstate_current is moved out of _PyRuntimeState.gilstate
* autoTSSkey is moved out of _PyRuntimeState.gilstate
* autoTSSkey is initialized earlier
* autoTSSkey is re-initialized (after fork) earlier
https://github.com/python/cpython/issues/59956
Have _posixsubprocess.c stop using boolean flags to say whether gid and uid values were supplied and action is required. Such an implicit "either initialized or look somewhere else" scheme confused both the reader (another mental connection to constantly track between functions) and the compiler (warnings about potentially uninitialized variables being passed). Instead, we can use a special group/user id as the flag value: -1, defined by POSIX but used nowhere else. Namely:
gid: call_setgid = False → gid = -1
uid: call_setuid = False → uid = -1
groups: call_setgroups = False → groups = NULL (obtained with (groups_list != Py_None) ? groups : NULL)
This PR is required for #94519.
This PR removes the `volatile` qualifier on various intermediate quantities
in the `math.fsum` implementation, and updates the notes preceding the
algorithm accordingly (as well as fixing some of the existing notes). See
the linked issue #100833 for discussion.
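The observable behaviour is unchanged; fsum still produces correctly rounded sums where naive summation drifts:
```
import math

print(sum([0.1] * 10))        # 0.9999999999999999
print(math.fsum([0.1] * 10))  # 1.0
```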
This is not a comprehensive check, just a best-effort warning. There are cases when threads exist on some platforms that this code cannot detect. macOS (when API permissions allow) and Linux (with a readable /proc procfs present) are the currently supported cases where a warning should show up reliably.
Starting with a DeprecationWarning for now, it is less disruptive than something like RuntimeWarning and most likely to only be seen in people's CI tests - a good place to start with this messaging.
The itemsize returned in a memoryview of a ctypes array is now computed from the item type, instead of dividing the total size by the length and assuming that the length is not zero.
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Christian Heimes <christian@python.org>
Co-authored-by: Hugo van Kemenade <hugovk@users.noreply.github.com>
Fixes https://github.com/python/cpython/issues/89051
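A minimal sketch of the case that motivated the change (a zero-length array):
```
import ctypes

arr = (ctypes.c_int * 0)()
view = memoryview(arr)
# itemsize is now derived from the element type, so it stays correct
# even when the array length is zero.
print(view.itemsize == ctypes.sizeof(ctypes.c_int))
```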
An earlier commit only defined check_ticks_per_second() when HAVE_TIMES is defined. However, we also need it when HAVE_CLOCK is defined. This primarily affects Windows.
https://github.com/python/cpython/issues/81057
A few TCP socket options have been added to the Linux kernel these last
few years.
This commit adds all the ones available in Linux 6.0:
https://elixir.bootlin.com/linux/v6.0/source/include/uapi/linux/tcp.h#L91
While at it, the TCP_FASTOPEN option has been moved lower in the list
just to keep the same order as in tcp.h to ease future synchronisations.
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
The Py_CLEAR(), Py_SETREF() and Py_XSETREF() macros now only evaluate
their arguments once. If an argument has side effects, these side
effects are no longer duplicated.
Use temporary variables to avoid duplicating side effects of macro
arguments. If available, use _Py_TYPEOF() to avoid type punning.
Otherwise, use memcpy() for the assignment to prevent a
miscompilation with strict aliasing caused by type punning.
Add _Py_TYPEOF() macro: __typeof__() on GCC and clang.
Add test_py_clear() and test_py_setref() unit tests to _testcapi.
asyncio.get_event_loop() now always returns either the running event loop or
the result of calling get_event_loop_policy().get_event_loop(). The latter
should now raise a RuntimeError if no current event loop is set,
instead of creating and setting a new event loop.
This also affects a number of asyncio functions and constructors that
call get_event_loop() implicitly: ensure_future(), shield(), gather(),
etc.
A DeprecationWarning is no longer emitted if there is no running event loop but
a current event loop has been set.
Co-authored-by: Łukasz Langa <lukasz@langa.pl>
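For reference, the common case is unaffected; inside a running loop get_event_loop() still returns that loop:
```
import asyncio

async def main():
    # The RuntimeError path only applies when no loop is running and
    # no current event loop has been set via the policy.
    assert asyncio.get_event_loop() is asyncio.get_running_loop()

asyncio.run(main())
```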
Builtin and extension module functions and methods that expect boolean values for parameters now accept any Python object rather than just a bool or int type. This is more consistent with how native Python code itself behaves.
* Add API to allow extensions to set callback function on creation and destruction of PyCodeObject
Co-authored-by: Ye11ow-Flash <janshah@cs.stonybrook.edu>
The implementation of __sizeof__() methods using _PyObject_SIZE() now
uses an unsigned type (size_t) to compute the size, rather than a signed
type (Py_ssize_t).
Explicitly cast signed (Py_ssize_t) values to the unsigned type
(size_t).
* code_sizeof() now uses an unsigned type (size_t) to compute the result.
* Fix _PyObject_ComputedDictPointer(): cast _PyObject_VAR_SIZE() to
Py_ssize_t, rather than long: it's a different type on 64-bit Windows.
* Clarify that _PyObject_VAR_SIZE() uses an unsigned type (size_t).
* Replace Py_INCREF() and Py_XINCREF() using a cast with Py_NewRef()
and Py_XNewRef() in Modules/_elementtree.c.
* Make reference counting more explicit: don't implicitly steal a
reference in PyList_SET_ITEM(); use Py_NewRef() instead.
* Replace PyModule_AddObject() with PyModule_AddObjectRef().
Replace "Py_XDECREF(var); var = NULL;" with "Py_CLEAR(var);".
Don't replace "Py_DECREF(var); var = NULL;" with "Py_CLEAR(var);". It
would add an useless "if (var)" test in code path where var cannot be
NULL.
Fix potential race condition in code patterns:
* Replace "Py_DECREF(var); var = new;" with "Py_SETREF(var, new);"
* Replace "Py_XDECREF(var); var = new;" with "Py_XSETREF(var, new);"
* Replace "Py_CLEAR(var); var = new;" with "Py_XSETREF(var, new);"
Other changes:
* Replace "old = var; var = new; Py_DECREF(var)"
with "Py_SETREF(var, new);"
* Replace "old = var; var = new; Py_XDECREF(var)"
with "Py_XSETREF(var, new);"
* And remove the "old" variable.
The ``structmember.h`` header is deprecated, though it continues to be available
and there are no plans to remove it. There are no deprecation warnings. Old code
can stay unchanged (unless the extra include and non-namespaced macros bother
you greatly). Specifically, no uses in CPython are updated -- that would just be
unnecessary churn.
The ``structmember.h`` header is deprecated, though it continues to be
available and there are no plans to remove it.
Its contents are now available just by including ``Python.h``,
with a ``Py`` prefix added if it was missing:
- `PyMemberDef`, `PyMember_GetOne` and `PyMember_SetOne`
- Type macros like `Py_T_INT`, `Py_T_DOUBLE`, etc.
(previously ``T_INT``, ``T_DOUBLE``, etc.)
- The flags `Py_READONLY` (previously ``READONLY``) and
`Py_AUDIT_READ` (previously all uppercase)
Several items are not exposed from ``Python.h``:
- `T_OBJECT` (use `Py_T_OBJECT_EX`)
- `T_NONE` (previously undocumented, and pretty quirky)
- The macro ``WRITE_RESTRICTED``, which does nothing.
- The macros ``RESTRICTED`` and ``READ_RESTRICTED``, equivalents of
`Py_AUDIT_READ`.
- In some configurations, ``<stddef.h>`` is not included from ``Python.h``.
It should be included manually when using ``offsetof()``.
The deprecated header continues to provide its original
contents under the original names.
Your old code can stay unchanged, unless the extra include and non-namespaced
macros bother you greatly.
There is discussion on the issue to rename `T_PYSSIZET` to `PY_T_SSIZE` or
similar. I chose not to do that -- users will probably copy/paste that with any
spelling, and not renaming it makes migration docs simpler.
Co-Authored-By: Alexander Belopolsky <abalkin@users.noreply.github.com>
Co-Authored-By: Matthias Braun <MatzeB@users.noreply.github.com>
Fix a number of compile errors with GCC-12 on macOS:
1. In pylifecycle.c the compiler rejects _Pragma within a declaration
2. posixmodule.c was missing a number of ..._RUNTIME macros for non-clang on macOS
3. _ctypes assumed that __builtin_available is always present on macOS
Before Python 3.11, when in a venv the zip path was calculated
from prefix on POSIX platforms. In Python 3.11 the behavior was
accidentally changed to calculate it from the default prefix. That
change breaks venvs created from a non-installed Python
with a stdlib zip file. This commit restores the behavior from
before Python 3.11.
Improve the docstring of signal.strsignal() to explain when it returns a message, when it returns None, and when it raises ValueError.
Closes #98930
Co-authored-by: Gregory P. Smith <greg@krypto.org>
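The three outcomes the docstring now spells out, sketched (exact message text is platform-dependent):
```
import signal

print(signal.strsignal(signal.SIGINT))    # e.g. "Interrupt"
print(signal.strsignal(signal.SIGTERM))   # e.g. "Terminated"
# Valid-but-unmapped signal numbers return None; out-of-range values
# raise ValueError.
```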
Introduce the autocommit attribute to Connection and the autocommit
parameter to connect() for PEP 249-compliant transaction handling.
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: C.A.M. Gerlach <CAM.Gerlach@Gerlach.CAM>
Co-authored-by: Géry Ogam <gery.ogam@gmail.com>
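A short sketch of the PEP 249-style usage enabled by the new parameter:
```
import sqlite3

# autocommit=False opens transactions implicitly; they must be ended
# explicitly with commit() or rollback().
con = sqlite3.connect(":memory:", autocommit=False)
con.execute("CREATE TABLE t(x)")
con.commit()
con.close()
```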
Check to see if `base_executable` exists. If it does not, attempt
to use known alternative names of the python binary to find an
executable in the path specified by `home`.
If no alternative is found, previous behavior is preserved.
Signed-off-by: Vincent Fazio <vfazio@gmail.com>
Signed-off-by: Vincent Fazio <vfazio@gmail.com>
The Py_CLEAR(), Py_SETREF() and Py_XSETREF() macros now only evaluate
their argument once. If an argument has side effects, these side
effects are no longer duplicated.
Add test_py_clear() and test_py_setref() unit tests to _testcapi.
We do the following:
* move the generated _PyUnicode_InitStaticStrings() to its own file
* move the generated _PyStaticObjects_CheckRefcnt() to its own file
* include pycore_global_objects.h in extension modules instead of pycore_runtime_init.h
These changes help us avoid including things that aren't needed.
https://github.com/python/cpython/issues/90868
This makes it more clear that a given test is definitely testing against a single-phase init (legacy) extension module. The new module is a companion to _testmultiphase.
https://github.com/python/cpython/issues/98627
Add PyFrame_GetVar() and PyFrame_GetVarString() functions to get a
frame variable by its name.
Move PyFrameObject C API tests from test_capi to test_frame.
Now that our int<->str conversions are size limited and we have the
_pylong module handling larger integers, we don't need to limit
everything just to avoid wasting time in the quadratic time DoS-like
case while fuzzing.
We can tweak these further after seeing how this goes.
In very rare circumstances the JUMP opcode could be confused with the
argument of the opcode in the "then" part which doesn't end with the
JUMP opcode. This led to incorrect detection of the final JUMP opcode
and incorrect calculation of the size of the subexpression.
NOTE: Changed return value of functions _validate_inner() and
_validate_charset() in Modules/_sre/sre.c. Now they return 0 on success,
-1 on failure, and 1 if the last op is JUMP (which usually is a failure).
Previously they returned 1 on success and 0 on failure.
Previously, the optional restrictions on subinterpreters were: disallow fork, subprocess, and threads. By default, we were disallowing all three for "isolated" interpreters. We always allowed all three for the main interpreter and those created through the legacy `Py_NewInterpreter()` API.
Those settings were a bit conservative, so here we've adjusted the optional restrictions to: fork, exec, threads, and daemon threads. The default for "isolated" interpreters disables fork, exec, and daemon threads. Regular threads are allowed by default. We continue to always allow everything for the main interpreter and the legacy API.
In the code, we add `_PyInterpreterConfig.allow_exec` and `_PyInterpreterConfig.allow_daemon_threads`. We also add `Py_RTFLAGS_DAEMON_THREADS` and `Py_RTFLAGS_EXEC`.
(see https://github.com/python/cpython/issues/98608)
This change does the following:
1. change the argument to a new `_PyInterpreterConfig` struct
2. rename the function to `_Py_NewInterpreterFromConfig()`, inspired by `Py_InitializeFromConfig()` (takes a `_PyInterpreterConfig` instead of `isolated_subinterpreter`)
3. split up the boolean `isolated_subinterpreter` into the corresponding multiple granular settings
* allow_fork
* allow_subprocess
* allow_threads
4. add `PyInterpreterState.feature_flags` to store those settings
5. add a function for checking if a feature is enabled on an opaque `PyInterpreterState *`
6. drop `PyConfig._isolated_interpreter`
The existing default (see `Py_NewInterpreter()` and `Py_Initialize*()`) allows fork, subprocess, and threads, and the optional "isolated" interpreter (see the `_xxsubinterpreters` module) disables all three. None of that changes here; the defaults are preserved.
Note that the given `_PyInterpreterConfig` will not be used outside `_Py_NewInterpreterFromConfig()`, nor preserved. This contrasts with how `PyConfig` is currently preserved, used, and even modified outside `Py_InitializeFromConfig()`. I'd rather just avoid that mess from the start for `_PyInterpreterConfig`. We can preserve it later if we find an actual need.
This change allows us to follow up with a number of improvements (e.g. stop disallowing subprocess and support disallowing exec instead).
(Note that this PR adds "private" symbols. We'll probably make them public, and add docs, in a separate change.)
Functions re.sub() and re.subn() and corresponding re.Pattern methods
are now 2-3 times faster for replacement strings containing group references.
Closes #91524
Primarily authored by Serhiy Storchaka (serhiy-storchaka)
Minor-cleanups-by: Gregory P. Smith [Google] <greg@krypto.org>
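The case that gets faster is a replacement template containing group references, e.g.:
```
import re

# Behaviour is unchanged; only the speed of expanding \1-style
# references improves.
print(re.sub(r"(\w+) (\w+)", r"\2 \1", "hello world"))   # "world hello"
```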
Added os.setns and os.unshare to easily switch between namespaces
on Linux.
Co-authored-by: Christian Heimes <christian@python.org>
Co-authored-by: CAM Gerlach <CAM.Gerlach@Gerlach.CAM>
Co-authored-by: Victor Stinner <vstinner@python.org>
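A rough sketch of the new API (namespace operations generally require elevated privileges, so this is illustrative only):
```
import os

if hasattr(os, "unshare"):
    try:
        os.unshare(os.CLONE_NEWUTS)   # detach into a new UTS namespace
    except PermissionError:
        print("insufficient privileges for unshare()")
```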
The os module and the PyUnicode_FSDecoder() function no longer accept
bytes-like paths, like bytearray and memoryview types: only the exact
bytes type is accepted for bytes strings.
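A sketch of the user-visible difference:
```
import os

os.stat(b".")                     # exact bytes: still accepted
try:
    os.stat(bytearray(b"."))      # bytes-like: now rejected
except TypeError as exc:
    print(exc)
```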
Change summary:
+ There is now a `gzip.READ_BUFFER_SIZE` constant that is 128 KiB. Other programs that read in 128 KiB chunks include pigz and cat, so this seems to be best practice among good programs. It is also faster than 8 KiB chunks.
+ A zlib._ZlibDecompressor was added. This is the _bz2.BZ2Decompressor ported to zlib. Since the zlib.Decompress object is better for in-memory decompression, the _ZlibDecompressor is hidden. It only makes sense for file decompression, and that is already implemented now in the gzip library. No need to bother users with this.
+ The ZlibDecompressor uses the older CPython arrange_output_buffer functions, as those are faster and more appropriate for the use case.
+ GzipFile.read has been optimized. There is no longer an `unconsumed_tail` member to write back to the padded file. This is instead handled by the ZlibDecompressor itself, which has an internal buffer. `_add_read_data` has been inlined, as it was just two calls.
EDIT: While I am adding improvements anyway, I figured I could add another one-liner optimization to the python -m gzip application. It previously read chunks of io.DEFAULT_BUFFER_SIZE, but has been updated to use READ_BUFFER_SIZE chunks.
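A small sketch of reading in the new chunk size (the filename here is just a placeholder):
```
import gzip

with gzip.open("example.gz", "wb") as f:
    f.write(b"hello" * 1000)
with gzip.open("example.gz", "rb") as f:
    # The module itself now reads in chunks of this size internally.
    while chunk := f.read(gzip.READ_BUFFER_SIZE):
        pass
```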
On macOS, fix a crash in syslog.syslog() in multi-threaded
applications. On macOS, the libc syslog() function is not
thread-safe, so syslog.syslog() no longer releases the GIL to call
it.
Signal wakeup fd errors are now logged with
_PyErr_WriteUnraisableMsg(), rather than PySys_WriteStderr() and
PyErr_WriteUnraisable(), to pass the error message to
sys.unraisablehook. By default, it's still written into stderr (unless
sys.unraisablehook is overridden).
This seems pretty straightforward. The issue mentions other calls in mmapmodule that we could release the GIL on, but those are in methods where we'd need to be careful to ensure that something sensible happens if those are called concurrently. In prior art, note that #12073 released the GIL for munmap. In a toy benchmark, I see the speedup you'd expect from doing this.
Automerge-Triggered-By: GH:gvanrossum
* gh-96821: Fix undefined behaviour in `audioop.c`
Left-shifting negative numbers is undefined behaviour.
Fortunately, multiplication works just as well, is defined behaviour,
and gets compiled to the same machine code as before by optimizing
compilers.
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
These were intentionally skipped when operator was updated to use the argument clinic:
https://github.com/python/cpython/issues/64385#issuecomment-1093641466
However, by not using the argument clinic, they missed out on getting signatures.
This is a narrow PR to update the docstrings so that `__text_signature__` can be
extracted from them. Updating to use the argument clinic is beyond scope.
`methodcaller` uses `*args, **kwargs` to match variadic names used elsewhere,
including in `operator.call`.
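With the signatures extractable, introspection works as expected; for example (output shown is indicative):
```
import inspect
import operator

print(inspect.signature(operator.methodcaller))   # e.g. (name, /, *args, **kwargs)
```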
The macOS 13 SDK includes support for the `mkfifoat` and `mknodat` system calls.
Using the `dir_fd` option with either `os.mkfifo` or `os.mknod` could result in a
segfault if cpython is built with the macOS 13 SDK but run on an earlier
version of macOS. Prevent this by adding runtime support for detection of
these system calls ("weaklinking") as is done for other newer syscalls on
macOS.
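A sketch of the dir_fd usage this protects, guarded by the runtime capability check:
```
import os
import tempfile

if os.mkfifo in os.supports_dir_fd:
    with tempfile.TemporaryDirectory() as d:
        dfd = os.open(d, os.O_RDONLY)
        try:
            os.mkfifo("fifo", dir_fd=dfd)   # no longer crashes on older macOS
        finally:
            os.close(dfd)
```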
Fix the Python path configuration used to initialize sys.path at
Python startup. Paths are no longer encoded to UTF-8/strict, to avoid
encoding errors if they contain surrogate characters (bytes paths are
decoded with the surrogateescape error handler).
getpath_basename() and getpath_dirname() functions no longer encode
the path to UTF-8/strict, but work directly on Unicode strings. These
functions now use PyUnicode_FindChar() and PyUnicode_Substring() on
the Unicode path, rather than strrchr() on the encoded bytes string.
This PR fixes undefined behaviour in the struct module unpacking support functions `bu_longlong`, `lu_longlong`, `bu_int` and `lu_int`; thanks to @kumaraditya303 for finding these.
The fix is to accumulate the bytes in an unsigned integer type instead of a signed integer type, then to convert to the appropriate signed type. In cases where the width matches, that conversion will typically be compiled away to a no-op.
(Evidence from Godbolt: https://godbolt.org/z/5zvxodj64 .)
To make the conversions efficient, I've specialised the relevant functions for their output size: for `bu_longlong` and `lu_longlong`, this only entails checking that the output size is indeed `8`. But `bu_int` and `lu_int` were used for format sizes `2` and `4` - I've split those into two separate functions each.
No tests, because all of the affected cases are already exercised by the test suite.
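For reference, the affected paths unpack signed big-/little-endian integers; the results are identical before and after, only the C-level accumulation changed:
```
import struct

print(struct.unpack(">q", b"\xff" * 8))    # (-1,)
print(struct.unpack("<h", b"\x00\x80"))    # (-32768,)
```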
Fix the faulthandler implementation of faulthandler.register(signal,
chain=True) if the sigaction() function is not available: don't call
the previous signal handler if it's NULL.
⚠️⚠️ Note for reviewers, hackers and fellow systems/low-level/compiler engineers ⚠️⚠️
If you have a lot of experience with this kind of shenanigans and want to improve the **first** version, **please make a PR against my branch** or **reach out by email** or **suggest code changes directly on GitHub**.
If you have any **refinements or optimizations**, please wait until the first version is merged before starting to hack on or propose them, so we can keep this PR productive.
datetime.isoformat() generates the tz offset with colons, but there
was no format code to make strftime output the same format.
For simplicity and consistency, the %:z formatting behaves mostly
as %z does, with the exception of adding colons. This includes the
dynamic behaviour of adding seconds and microseconds only when
needed (when they are not 0).
This fixes the still-open "generate" part of this issue:
https://github.com/python/cpython/issues/69142
Co-authored-by: Kumar Aditya <59607654+kumaraditya303@users.noreply.github.com>
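A brief example of the new format code:
```
from datetime import datetime, timedelta, timezone

tz = timezone(timedelta(hours=5, minutes=30))
dt = datetime(2023, 1, 1, tzinfo=tz)
print(dt.isoformat())        # 2023-01-01T00:00:00+05:30
print(dt.strftime("%:z"))    # +05:30, the same offset style as isoformat()
```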
- Limited API needs to be enabled per source file
- Some builds don't support Limited API, so Limited API tests must be skipped on those builds
(currently this is `Py_TRACE_REFS`, but that may change.)
- `Py_LIMITED_API` must be defined before `<Python.h>` is included.
This puts the hoop-jumping in `testcapi/parts.h`, so individual
test files can be relatively simple. (Currently that's only
`vectorcall_limited.c`, imagine more.)
- On WASI `ENOTCAPABLE` is now mapped to `PermissionError`.
- The `errno` modules exposes the new error number.
- `getpath.py` now ignores `PermissionError` when it cannot open landmark
files `pybuilddir.txt` and `pyenv.cfg`.
* Make sure that tp_dictoffset is correct when Py_TPFLAGS_MANAGED_DICT is set.
* Avoid traversing managed dict twice when subclassing class with Py_TPFLAGS_MANAGED_DICT set.
We only statically initialize for core code and builtin modules. Extension modules still create
the tuple at runtime. We'll solve that part of interpreter isolation separately.
This change includes generated code. The non-generated changes are in:
* Tools/clinic/clinic.py
* Python/getargs.c
* Include/cpython/modsupport.h
* Makefile.pre.in (re-generate global strings after running clinic)
* very minor tweaks to Modules/_codecsmodule.c and Python/Python-tokenize.c
All other changes are generated code (clinic, global strings).
- Move PyUnicode tests to a separate file
- Add some more tests for PyUnicode_FromFormat
Co-authored-by: philg314 <110174000+philg314@users.noreply.github.com>
* Add test for inheriting explicit __dict__ and weakref.
* Restore 3.10 behavior for multiple inheritance of C extension classes that store their dictionary at the end of the struct.
- check for ``dup()`` libc function
- handle missing ``F_DUPFD`` in ``dup2()`` replacement function
- add workaround for WASI libc bug in MSG_TRUNC
- ESHUTDOWN is missing, use EPIPE instead
- POLLPRI is missing, define as 0 (no-op)
* syslog_get_argv() swallows exceptions, but not in all cases.
* If ident is not UTF-8 encodable, syslog.openlog() fails after setting the
global reference to ident; the C string saved internally by the previous
call to openlog() then points to freed memory.
* PySys_Audit() can crash if ident is NULL.
* There may be a race condition with syslog.syslog(), because the global
reference to ident is decrefed before setting the new value.
* Possible use of freed memory if syslog.openlog() is called while
the GIL is released in syslog.syslog().
This PR partially reverts gh-24421 (PR) and fixes the remaining concerns
given in gh-93044 (issue):
- keyword arguments are passed as positional arguments to factory()
- if an argument is not passed to sqlite3.connect(), its default value
is passed to factory()
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
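A minimal sketch of a custom factory under the restored behaviour (only the arguments actually passed to connect() are forwarded, with keywords staying keywords):
```
import sqlite3

class MyConnection(sqlite3.Connection):
    pass

con = sqlite3.connect(":memory:", factory=MyConnection)
print(type(con).__name__)   # MyConnection
con.close()
```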
The wrapper macros are more readable and match the form recommended in
the OpenSSL documentation. They are also slightly less error-prone, as the
mapping of arguments to SSL_CTX_ctrl is not always clear. (Though in
this case it's straightforward.)
https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_get_max_proto_version.html
When binding a unix socket to an empty address on Linux, the socket is
automatically bound to an available address in the abstract namespace.
>>> s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
>>> s.bind("")
>>> s.getsockname()
b'\x0075499'
Since Python 3.9, the socket is always bound to the same single address:
>>> s.getsockname()
b'\x00'
And trying to bind multiple sockets will fail with:
Traceback (most recent call last):
File "/home/nsoffer/src/cpython/Lib/test/test_socket.py", line 5553, in testAutobind
s2.bind("")
OSError: [Errno 98] Address already in use
Added 2 tests:
- Auto binding empty address on Linux
- Failing to bind an empty address on other platforms
Fixes f6b3a07b7d (bpo-44493: Add missing terminated NUL in sockaddr_un's length (GH-26866)).