Fix parsing of the following corner cases:
* URLs with only a host name
* URLs containing a fragment
* URLs containing a query
* filenames with only a UNC sharepoint on Windows
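For illustration, here's how the first three shapes split with `urllib.parse.urlsplit` (a sketch of the inputs involved, not the patched code itself; the UNC case is Windows-only and omitted):

```
>>> from urllib.parse import urlsplit
>>> urlsplit("https://example.com")          # host name only
SplitResult(scheme='https', netloc='example.com', path='', query='', fragment='')
>>> urlsplit("https://example.com/p#sec-1")  # containing a fragment
SplitResult(scheme='https', netloc='example.com', path='/p', query='', fragment='sec-1')
>>> urlsplit("https://example.com/p?q=1")    # containing a query
SplitResult(scheme='https', netloc='example.com', path='/p', query='q=1', fragment='')
```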
Co-authored-by: Dong-hee Na <donghee.na92@gmail.com>
Change old space bit of young objects from 0 to gcstate->visited_space.
This ensures that any object created *and* collected during cycle GC has the bit set correctly.
Python 3.10 changed from using SSL_write() and SSL_read() to SSL_write_ex() and
SSL_read_ex(), but did not update handling of the return value.
Change error handling so that the return value is not examined.
An OSError (not EOF) is now raised when retval is 0.
According to *recent* man pages of all functions for which we call
PySSL_SetError (in OpenSSL 3.0 and 1.1.1), their return value should
be used to determine whether an error happened (i.e. if PySSL_SetError
should be called), but not what kind of error happened (so,
PySSL_SetError shouldn't need retval). To get the error,
we need to use SSL_get_error.
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Co-authored-by: Petr Viktorin <encukou@gmail.com>
This fixes XML unittest fallout from the https://github.com/python/cpython/issues/115398 security fix. When configured using `--with-system-expat` on systems with older, pre-2.6.0 versions of libexpat, our unittests were failing.
* sax|etree: Simplify Expat version guard where simplifiable
Idea by Matěj Cepl
* sax|etree: Fix reparse deferral tests for vanilla Expat <2.6.0
This *does not fix* the case of distros with an older version of libexpat that has the 2.6.0 feature backported as a security fix. (Ubuntu is a known example of this with its libexpat1 2.5.0-2ubuntu0.1 package.)
Instead of calling `exec()` once for each function added to a dataclass, only call `exec()` once per dataclass. This can lead to speed improvements of up to 20%.
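A minimal sketch of the technique (a hypothetical helper, not the actual dataclasses internals): build the source for all generated methods as one string and run a single `exec()`:

```
def _make_methods(field_names):
    # Hypothetical sketch: emit every generated method in one source string.
    lines = [f"def __init__(self, {', '.join(field_names)}):"]
    lines += [f"    self.{name} = {name}" for name in field_names]
    lines += ["def _fields(self):",
              f"    return ({', '.join('self.' + n for n in field_names)},)"]
    namespace = {}
    exec("\n".join(lines), {}, namespace)  # one exec() per class, not per method
    return namespace

methods = _make_methods(["x", "y"])  # {'__init__': <function>, '_fields': <function>}
```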
* GH-113171: Fix "private" (really non-global) IP address ranges
The _private_networks variables, used by various is_private
implementations, were missing some ranges and at the same time had
overly strict ranges (where there are more specific ranges considered
globally reachable by the IANA registries).
This patch updates the ranges with what was missing or otherwise
incorrect.
I left 100.64.0.0/10 alone for now, as it's been made special in [1],
and I'm not sure we want to undo that since I don't quite understand
the motivation behind it.
The _address_exclude_many() call returns 8 networks for IPv4, 121
networks for IPv6.
[1] https://github.com/python/cpython/issues/61602
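For reference, these registries are what feed the `ipaddress` properties (a quick illustration, not specific to the changed edge cases):

```
>>> import ipaddress
>>> ipaddress.ip_address("10.0.0.1").is_private
True
>>> ipaddress.ip_address("8.8.8.8").is_global
True
```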
Add Py_GetConstant() and Py_GetConstantBorrowed() functions.
In the limited C API version 3.13, getting Py_None, Py_False,
Py_True, Py_Ellipsis and Py_NotImplemented singletons is now
implemented as function calls at the stable ABI level to hide
implementation details. Getting these constants still returns borrowed
references.
Add _testlimitedcapi/object.c and test_capi/test_object.py to test
Py_GetConstant() and Py_GetConstantBorrowed() functions.
* Ensure importlib.metadata tests do not leak references in sys.modules.
* Move importlib.metadata tests to their own package for easier syncing with importlib_metadata.
* Update owners and makefile for new directories.
* Add blurb
Before this change, ctypes classes used a custom dict subclass, `StgDict`,
as their `tp_dict`. This acts like a regular dict but also includes extra information
about the type.
This replaces `StgDict` with `StgInfo`, a C struct on the type, accessed by
`PyObject_GetTypeData()` (PEP-697).
All usage of `StgDict` (mainly variables named `stgdict`, `dict`, `edict` etc.) is
converted to `StgInfo` (named `stginfo`, `info`, `einfo`, etc.).
Where the dict is actually used for class attributes (as a regular PyDict), it's now
called `attrdict`.
This change -- not overriding `tp_dict` -- is made to make me comfortable with
the next part of this PR: moving the initialization logic from `tp_new` to `tp_init`.
The `StgInfo` is set up in `__init__` of each class, with a guard that prevents
calling `__init__` more than once. Note that abstract classes (like `Array` or
`Structure`) are created using `PyType_FromMetaclass` and do not have
`__init__` called.
Previously, this was done in `__new__`, which also wasn't called for abstract
classes.
Since `__init__` can be called from Python code or skipped, there is a tested
guard to ensure `StgInfo` is initialized exactly once before it's used.
Co-authored-by: neonene <53406459+neonene@users.noreply.github.com>
Co-authored-by: Erlend E. Aasland <erlend.aasland@protonmail.com>
Starting in Python 3.12, we prevented calling fork() and starting new threads
during interpreter finalization (shutdown). This has led to a number of
regressions and flaky tests. We should not prevent starting new threads
(or `fork()`) until all non-daemon threads exit and finalization starts in
earnest.
This changes the checks to use `_PyInterpreterState_GetFinalizing(interp)`,
which is set immediately before terminating non-daemon threads.
If you catch DuplicateOptionError / DuplicateSectionError when reading a
config file (the intention is to skip invalid config files) and then
attempt to use the ConfigParser instance, any values it *had* read
successfully so far were stored as a list instead of a string! Later
`get` calls would raise "AttributeError: 'list' object has no attribute
'find'" from somewhere deep in the interpolation code.
These give applications the option of more forcefully terminating client
connections for asyncio servers. Useful when terminating a service and
there is limited time to wait for clients to finish up their work.
This is a do-over with a test fix for gh-114432, which was reverted.
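A rough usage sketch, assuming the new `Server.close_clients()` / `Server.abort_clients()` method names added by this change:

```
import asyncio

async def main():
    async def handler(reader, writer):
        await asyncio.sleep(3600)  # a slow client

    server = await asyncio.start_server(handler, "127.0.0.1", 0)
    # Later, during shutdown with limited time to wait for clients:
    server.close()           # stop accepting new connections
    server.close_clients()   # also close existing client connections
    # server.abort_clients() would drop them even more forcefully
    await server.wait_closed()

asyncio.run(main())
```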
This includes adding what should be a relatively temporary
`Modules/_decimal/windows/mpdecimal.h` shim to choose between `mpdecimal32vc.h`
or `mpdecimal64vc.h` based on which of `CONFIG_64` or `CONFIG_32` is defined.
* bpo-27578: Fix inspect.getsource() on empty file
For modules from empty files, `inspect.getsource()` now
returns an empty string, and `inspect.getsourcelines()` returns
a list containing one empty string, restoring the expected invariant.
As indicated by `exec('')`, empty strings are valid Python
source code.
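A quick check of the restored invariant (a sketch using a temporary empty module file):

```
import importlib.util, inspect, os, tempfile

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    path = f.name  # an empty source file
spec = importlib.util.spec_from_file_location("empty_mod", path)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
print(repr(inspect.getsource(module)))  # '' after this fix
print(inspect.getsourcelines(module))   # ([''], 0)
os.unlink(path)
```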
Co-authored-by: Oleg Iarygin <oleg@arhadthedev.net>
There is a race between when `Thread._tstate_lock` is released[^1] in `Thread._wait_for_tstate_lock()`
and when `Thread._stop()` asserts[^2] that it is unlocked. Consider the following execution
involving threads A, B, and C:
1. A starts.
2. B joins A, blocking on its `_tstate_lock`.
3. C joins A, blocking on its `_tstate_lock`.
4. A finishes and releases its `_tstate_lock`.
5. B acquires A's `_tstate_lock` in `_wait_for_tstate_lock()`, releases it, but is swapped
out before calling `_stop()`.
6. C is scheduled, acquires A's `_tstate_lock` in `_wait_for_tstate_lock()` but is swapped
out before releasing it.
7. B is scheduled, calls `_stop()`, which asserts that A's `_tstate_lock` is not held.
However, C holds it, so the assertion fails.
The race can be reproduced[^3] by inserting sleeps at the appropriate points in
the threading code. To do so, run the `repro_join_race.py` from the linked repo.
There are two main parts to this PR:
1. `_tstate_lock` is replaced with an event that is attached to `PyThreadState`.
The event is set by the runtime prior to the thread being cleared (in the same
place that `_tstate_lock` was released). `Thread.join()` blocks waiting for the
event to be set.
2. `_PyInterpreterState_WaitForThreads()` provides the ability to wait for all
non-daemon threads to exit. To do so, an `is_daemon` predicate was added to
`PyThreadState`. This field is set each time a thread is created. `threading._shutdown()`
now calls into `_PyInterpreterState_WaitForThreads()` instead of waiting on
`_tstate_lock`s.
[^1]: 441affc9e7/Lib/threading.py (L1201)
[^2]: 441affc9e7/Lib/threading.py (L1115)
[^3]: 8194653279
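A pure-Python analogy of part 1 (illustrative only; the real change lives in the C runtime): an event supports any number of concurrent waiters without the acquire/release handoff that made the lock racy:

```
import threading

done = threading.Event()  # plays the role of the event attached to PyThreadState

def worker():
    try:
        pass              # thread body
    finally:
        done.set()        # set just before the thread state is cleared

t = threading.Thread(target=worker)
t.start()
# Any number of joiners can wait concurrently; there is no lock handoff,
# so there is no window like steps 5-7 above.
done.wait()
```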
---------
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Antoine Pitrou <antoine@python.org>
Change automatically generated tkinter.Checkbutton widget names to
avoid collisions with automatically generated tkinter.ttk.Checkbutton
widget names within the same parent widget.
* Restore support of None and other false values.
* Raise TypeError for non-zero integers and non-empty sequences.
The regressions were introduced in gh-74668
(bdba8ef42b).
Any capitalization of "xn--" should be acceptable for the ACE prefix
(see https://tools.ietf.org/html/rfc3490#section-5).
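For example, decoding should now accept any capitalization of the ACE prefix (`bücher.de` is the usual IDNA example):

```
>>> "bücher.de".encode("idna")
b'xn--bcher-kva.de'
>>> b"XN--bcher-kva.de".decode("idna")  # uppercase prefix now accepted
'bücher.de'
```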
Co-authored-by: Pepijn de Vos <pepijndevos@gmail.com>
Co-authored-by: Erlend E. Aasland <erlend@python.org>
Co-authored-by: Petr Viktorin <encukou@gmail.com>
Use the NtQueryInformationProcess system call to efficiently retrieve the parent process ID in a single step, rather than using the process snapshots API which retrieves large amounts of unnecessary information and is more prone to failure (since it makes heap allocations).
Includes a fallback to the original win32_getppid implementation in case the unstable API appears to return strange results.
On Windows, time.monotonic() now uses the QueryPerformanceCounter()
clock, which has a resolution better than 1 us, instead of the
GetTickCount64() clock, which has a resolution of 15.6 ms.
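The clock in use (and its resolution) can be inspected at runtime:

```
import time

# Reports which clock backs time.monotonic() and its resolution in seconds.
info = time.get_clock_info("monotonic")
print(info.implementation, info.resolution)
```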
* GH-116554: Relax list.sort()'s notion of "descending" run
Rewrote `count_run()` so that sub-runs of equal elements no longer end a descending run. Both ascending and descending runs can have arbitrarily many sub-runs of arbitrarily many equal elements now. This is tricky, because we only use ``<`` comparisons, so checking for equality doesn't come "for free". Surprisingly, it turned out there's a very cheap (one comparison) way to determine whether an ascending run consisted of all-equal elements. That sealed the deal.
In addition, after a descending run is reversed in-place, we now go on to see whether it can be extended by an ascending run that just happens to be adjacent. This succeeds in finding at least one additional element to append about half the time, and so appears to more than repay its cost (the savings come from getting to skip a binary search, when a short run is artificially forced to length MINRUN later, for each new element `count_run()` can add to the initial run).
While these ideas had been in the back of my mind for years, a question on StackOverflow finally prompted action:
https://stackoverflow.com/questions/78108792/
They were wondering why it took about 4x longer to sort a list like:
[999_999, 999_999, ..., 2, 2, 1, 1, 0, 0]
than "similar" lists. Of course that runs very much faster after this patch.
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Pieter Eendebak <pieter.eendebak@gmail.com>
gh-116307: Create a new import helper 'isolated modules' and use that instead of 'Clean Import' to ensure that tests from importlib_resources don't leave modules in sys.modules.
* Also fix the global flag repr
* Update Misc/NEWS.d/next/Library/2024-03-11-12-11-10.gh-issue-116600.FcNBy_.rst
Co-authored-by: Kirill Podoprigora <kirill.bast9@mail.ru>
In free-threaded builds, running with `PYTHON_GIL=0` will now disable the
GIL. Follow-up issues track work to re-enable the GIL when loading an
incompatible extension, and to disable the GIL by default.
In order to support re-enabling the GIL at runtime, all GIL-related data
structures are initialized as usual, and disabling the GIL simply sets a flag
that causes `take_gil()` and `drop_gil()` to return early.
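On a free-threaded build, the result can be checked at runtime (this sketch assumes `sys._is_gil_enabled()`, available on free-threaded 3.13 builds):

```
# Run as: PYTHON_GIL=0 python check_gil.py  (free-threaded build)
import sys

# Reports whether the GIL is currently active; expected: False here.
print(sys._is_gil_enabled())
```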
* set default return value of functional types as _mock_return_value
* added test of wrapping child attributes
* added backward compatibility with explicit return
* added docs on the order of precedence
* added test to check default return_value
This follows in the footsteps of #21586
This speeds up `import uuid` by about 6 ms on Linux.
Before:
```
λ hyperfine -w 4 "./python -c 'import uuid'"
Benchmark 1: ./python -c 'import uuid'
Time (mean ± σ): 20.4 ms ± 0.4 ms [User: 16.7 ms, System: 3.8 ms]
Range (min … max): 19.6 ms … 21.8 ms 136 runs
```
After:
```
λ hyperfine -w 4 "./python -c 'import uuid'"
Benchmark 1: ./python -c 'import uuid'
Time (mean ± σ): 14.5 ms ± 0.3 ms [User: 11.5 ms, System: 3.2 ms]
Range (min … max): 13.9 ms … 16.0 ms 175 runs
```
This adds `VERIFY_X509_STRICT` to make the default
SSL context perform stricter (per RFC 5280) validation, as well
as `VERIFY_X509_PARTIAL_CHAIN` to enforce more standards-compliant
path-building behavior.
As part of this changeset, I had to tweak `make_ssl_certs.py`
slightly to emit 5280-conforming CA certs. This changeset includes
the regenerated certificates after that change.
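A short sketch of observing (and, if necessary, unsetting) the new defaults on a fresh context:

```
import ssl

ctx = ssl.create_default_context()
print(bool(ctx.verify_flags & ssl.VERIFY_X509_STRICT))         # True after this change
print(bool(ctx.verify_flags & ssl.VERIFY_X509_PARTIAL_CHAIN))  # True after this change
# Opting out, for peers whose certificates are not RFC 5280 conformant:
ctx.verify_flags &= ~ssl.VERIFY_X509_STRICT
```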
Signed-off-by: William Woodruff <william@yossarian.net>
Co-authored-by: Victor Stinner <vstinner@python.org>
* Increment PyExpat_CAPI_MAGIC due to SetReparseDeferralEnabled addition.
This is a follow-up to git commit
6a95676bb5 from GitHub PR #115623.
* RESTify news API list.
Improve the algorithm for computing which rolled-over log files to delete
in logging.TimedRotatingFileHandler. It is now reliable for handlers
without a namer, and for handlers with an arbitrary deterministic namer
that leaves the datetime part of the file name unmodified.
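For example, a namer like this leaves the datetime part intact, so deletion now works reliably with it:

```
import logging.handlers

handler = logging.handlers.TimedRotatingFileHandler(
    "app.log", when="midnight", backupCount=7)
# Deterministic namer that keeps the datetime part unmodified:
# "app.log.2024-03-11" -> "app.log.2024-03-11.txt"
handler.namer = lambda default_name: default_name + ".txt"
```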
If available, enable -fstrict-overflow for libmpdec. Also
shut off false-positive warnings (-Warray-bounds).
The latter was backported from mpdecimal-4.0.0.
This makes the asyncio REPL (`python -m asyncio`) more usable
and similar to the regular REPL.
This exposes register_readline() as a top-level function in site.py,
but it's intentionally undocumented.
Co-authored-by: Carol Willing <carolcode@willingconsulting.com>
Co-authored-by: Itamar Oren <itamarost@gmail.com>
* Do not overwrite already rolled-over files. This happened at midnight or
during DST changes and caused data loss.
* computeRollover() now always returns a timestamp larger than the
specified time.
* Fix computation of the rollover time during the DST change.
Support callables with the __call__() method and types with
__new__() and __init__() methods set to class methods, static
methods, bound methods, partial functions, and other types of
methods and descriptors.
Add tests for numerous types of callables and descriptors.
Allow controlling Expat >=2.6.0 reparse deferral (CVE-2023-52425) by adding five new methods:
- `xml.etree.ElementTree.XMLParser.flush`
- `xml.etree.ElementTree.XMLPullParser.flush`
- `xml.parsers.expat.xmlparser.GetReparseDeferralEnabled`
- `xml.parsers.expat.xmlparser.SetReparseDeferralEnabled`
- `xml.sax.expatreader.ExpatParser.flush`
Based on the "flush" idea from https://github.com/python/cpython/pull/115138#issuecomment-1932444270.
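A short sketch of the new pyexpat controls (guarded, since the methods only help when the underlying Expat supports deferral):

```
import xml.parsers.expat

parser = xml.parsers.expat.ParserCreate()
if hasattr(parser, "SetReparseDeferralEnabled"):
    print(parser.GetReparseDeferralEnabled())  # True by default with Expat >=2.6.0
    parser.SetReparseDeferralEnabled(False)    # trade the CVE hardening for latency
```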
### Notes
- Please treat as a security fix related to CVE-2023-52425.
Includes code suggested-by: Snild Dolkow <snild@sony.com>
and by core dev Serhiy Storchaka.
This change is part of the work on PEP-738: Adding Android as a
supported platform.
* Remove the "1.0" suffix from libpython's filename on Android, which
would prevent Gradle from packaging it into an app.
* Simplify the build command in the Makefile so that libpython is always
given an SONAME via the `-Wl,-h` argument, even if the SONAME is
identical to the actual filename.
* Disable a number of functions on Android which can be compiled and
linked against, but always fail at runtime. As a result, the native
_multiprocessing module is no longer built for Android.
* gh-115390 (bee7bb331) added some pre-determined results to the
configure script for things that can't be autodetected when
cross-compiling; this change adds Android to these where appropriate.
* Add a couple more pre-determined results for Android, and make them
cover iOS as well. This means the --enable-ipv6 configure option will
no longer be required on either platform.
Use of a proxy is intended to defer DNS resolution for the target hosts to the proxy itself, rather than risk an information leak by having the local host perform DNS resolution for any reason. Proxy bypass lists are strictly name-based. Most implementations of proxy support agree.
Nothing else in Python generally logs the contents of variables, so this
can be very unexpected for developers and could leak sensitive
information into terminals and log files.
In some cases we might cause a StreamWriter to stay alive even when the
application has dropped all references to it. This prevents us from
doing automatic cleanup and from complaining that the StreamWriter wasn't
properly closed.
Fortunately, the extra reference was never actually used for anything,
so we can just drop it.
Instead of showing a dot for each iteration, show:
- '.' for zero (or negative) leaks
- number of leaks for 1-9
- 'X' if there are more leaks
This allows more rapid iteration: when bisecting, I don't need
to wait for the final report to see if the test still leaks.
Also, show the full result if there are any non-zero entries.
This shows negative entries, for the unfortunate cases where
a reference is created and cleaned up in different runs.
Test *failure* is still determined by the existing heuristic.