This improves life for users of the `float` type annotation, which type checkers implicitly treat as `int | float` because that is what most code actually wants. Before this change, code annotated with `: float` could not assume an `.is_integer()` method existed, because the method was not defined on both types.
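A minimal sketch of the effect (the function name is illustrative; assumes Python 3.12+, where `int.is_integer()` was added):

```python
# Under the implicit int|float treatment, an int argument is accepted for a
# float-annotated parameter, and .is_integer() now exists on both types.
def is_whole(x: float) -> bool:
    return x.is_integer()

print(is_whole(3))    # True  (int passed where float is annotated)
print(is_whole(2.5))  # False
```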
Users may wish to define subclasses of `pathlib.Path` to add or modify
existing methods. Before this change, attempting to instantiate a subclass
raised an exception like:
AttributeError: type object 'PPath' has no attribute '_flavour'
Previously the `_flavour` attribute was assigned as follows:
PurePath._flavour (not set)
PurePosixPath._flavour = _PosixFlavour()
PureWindowsPath._flavour = _WindowsFlavour()
This change replaces it with a `_pathmod` attribute, set as follows:
PurePath._pathmod = os.path
PurePosixPath._pathmod = posixpath
PureWindowsPath._pathmod = ntpath
Functionality from `_PosixFlavour` and `_WindowsFlavour` is moved into
`PurePath` as underscore-prefixed classmethods. Flavours are removed.
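A minimal sketch of the user-facing effect (the subclass and method are illustrative; assumes a Python version that includes this change):

```python
import pathlib

# Instantiating a Path subclass no longer fails with the _flavour AttributeError.
class PPath(pathlib.Path):
    def describe(self) -> str:  # hypothetical extra method
        return f"{self.name} ({self.suffix or 'no suffix'})"

print(PPath("demo.txt").describe())  # demo.txt (.txt)
```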
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Brett Cannon <brett@python.org>
Co-authored-by: Adam Turner <9087854+AA-Turner@users.noreply.github.com>
Co-authored-by: Eryk Sun <eryksun@gmail.com>
* Uses a better hashing algorithm to improve dispersion and remove commutativity.
* Incorporates `co_firstlineno`, `Py_SIZE(co)`, and bytecode instructions.
* Together these now cover the entire set of criteria used in `code_richcompare`, except for `_PyCode_ConstantKey` (which would incorporate the types of the `co_consts` entries rather than just their values).
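A hedged sketch of the observable effect (assumes a CPython build that includes this change):

```python
# These two code objects differ only in their instructions. Equality already
# distinguished them; with the bytecode now incorporated into the hash, they
# should no longer collide in hash-based containers.
add = (lambda a, b: a + b).__code__
sub = (lambda a, b: a - b).__code__

print(add == sub)              # False: the instructions differ
print(hash(add) == hash(sub))  # expected False now; the old hash ignored bytecode
```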
The itemsize returned in a memoryview of a ctypes array is now computed from the item type, instead of dividing the total size by the length and assuming that the length is not zero.
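A sketch of the observable effect on a zero-length array (assumes a build that includes this change):

```python
import ctypes

# itemsize is now derived from the item type (c_uint32 -> 4 bytes), even though
# the array has length zero, so no division by the length is involved.
empty = (ctypes.c_uint32 * 0)()
view = memoryview(empty)
print(len(view), view.itemsize)  # expected: 0 4
```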
This makes a couple related changes to inspect.signature's behaviour
when parsing a signature from `__text_signature__`.
First, `inspect.signature` is documented as only raising ValueError or
TypeError. However, in some cases, we could raise RuntimeError. This PR
changes that, thereby fixing #83685.
(Note that the new ValueErrors in RewriteSymbolics are caught and then
reraised with a message)
Second, `inspect.signature` could silently drop parameters that it
didn't understand (corresponding to `return None` in the `p` function).
This is the core issue in #85267. I think this is very surprising
behaviour and it seems better to fail outright.
Third, adding this new failure broke a couple tests. To fix them (and to
e.g. allow `inspect.signature(select.epoll.register)` as in #85267), I
add constant folding of a couple binary operations to RewriteSymbolics.
(There's some discussion of making signature expression evaluation
arbitrarily powerful in #68155. I think that's out of scope. The
additional constant folding here is pretty straightforward, useful, and
not much of a slippery slope)
Fourth, while #85267 is incorrect about the cause of the issue, it turns
out that if you had consecutive newlines in `__text_signature__`, you'd get
a `tokenize.TokenError`.
Finally, the `if name is invalid:` code path was dead, since
`parse_name` never returned `invalid`.
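A hedged sketch of the effect on the example from #85267 (assumes Linux, where `select.epoll` exists):

```python
import inspect
import select

# epoll.register's __text_signature__ spells the eventmask default as an OR of
# flag constants; the added constant folding lets signature() evaluate it.
sig = inspect.signature(select.epoll.register)
print(sig.parameters["eventmask"].default)  # a plain int: the folded flag value
```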
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Christian Heimes <christian@python.org>
Co-authored-by: Hugo van Kemenade <hugovk@users.noreply.github.com>
Fixes https://github.com/python/cpython/issues/89051
This introduces a new decorator `@inspect.markcoroutinefunction`,
which, applied to a sync function, makes it appear async to
`inspect.iscoroutinefunction()`.
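A minimal sketch (the function is illustrative; assumes Python 3.12+, where the decorator exists):

```python
import asyncio
import inspect

# A sync factory that returns an awaitable, marked so that callers probing with
# iscoroutinefunction() treat it like an async function.
@inspect.markcoroutinefunction
def delayed_hello():
    return asyncio.sleep(0, result="hello")

print(inspect.iscoroutinefunction(delayed_hello))  # True
```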
Co-authored-by: Zachary Ware <zachary.ware@gmail.com>
Co-authored-by: Filipe Laíns <filipe.lains@gmail.com>
Co-authored-by: Eric Wieser <wieser.eric@gmail.com>
`urllib.parse.unquote_to_bytes` and `urllib.parse.unquote` could both potentially generate `O(len(string))` intermediate `bytes` or `str` objects while computing the unquoted final result, depending on the input provided. As Python objects are relatively large, this could consume a lot of RAM.
This switches the implementation to using an expanding `bytearray` and a generator internally instead of precomputed `split()` style operations.
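A rough sketch of the approach (not the actual stdlib code; the helper name is illustrative): percent-escapes are decoded into a single expanding `bytearray` instead of building a list of intermediate `bytes` objects via `split()`:

```python
def _unquote_to_bytes_sketch(string: str) -> bytes:
    # Append decoded bytes into one growing bytearray rather than allocating
    # an intermediate bytes/str object per %XX escape.
    data = string.encode("utf-8")
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i : i + 1] == b"%" and i + 2 < len(data):
            try:
                out.append(int(data[i + 1 : i + 3], 16))
                i += 3
                continue
            except ValueError:
                pass  # malformed escape: copy the '%' through unchanged
        out.append(data[i])
        i += 1
    return bytes(out)

print(_unquote_to_bytes_sketch("a%20b%fe"))  # b'a b\xfe'
```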
Microbenchmarks with some antagonistic inputs like `mess = "\u0141%%%20a%fe"*1000` show this is 10-20% slower for `unquote` and `unquote_to_bytes`, and no different for typical inputs that are short or lack much unicode or % escaping. But the functions are already quite fast anyway, so this is not a big deal. The slowdown scales linearly with input size, as expected.
Memory usage observed manually using `/usr/bin/time -v` on `python -m timeit` runs of larger inputs. Unittesting memory consumption is difficult and does not seem worthwhile.
Observed memory usage is ~1/2 for `unquote()` and <1/3 for `unquote_to_bytes()` using `python -m timeit -s 'from urllib.parse import unquote, unquote_to_bytes; v="\u0141%01\u0161%20"*500_000' 'unquote_to_bytes(v)'` as a test.