[bpo-29566]() notes that binhex.binhex uses inconsistent line endings (both Unix and MacOS9 line endings are used). This PR changes binhex.binhex to use MacOS9 line endings everywhere.
Left-recursive rules need to check for errors explicitly, since
even if the rule returns NULL, the parsing might continue and lead
to long-distance failures.
Co-authored-by: Pablo Galindo <Pablogsal@gmail.com>
* Add a new _locale._get_locale_encoding() function to get the
current locale encoding.
* Modify locale.getpreferredencoding() to use it.
* Remove the _bootlocale module.
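As a rough sketch of how the pieces fit together (not the exact stdlib code; the `_locale._get_locale_encoding()` name is taken from the bullets above):

```python
import _locale

def getpreferredencoding(do_setlocale=True):
    # The C helper reports the current locale's encoding directly, so the
    # temporary setlocale() dance and the _bootlocale module are no longer needed.
    return _locale._get_locale_encoding()
```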
The _RandomNameSequence class in tempfile used to check the current pid every time its rng property was used.
This commit replaces that check with `os.register_at_fork` to reduce the overhead.
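A toy sketch of the approach (not the actual tempfile code):

```python
import os
import random

class _RandomNameSequence:
    # Toy version: reseed only when the process forks, instead of
    # comparing os.getpid() on every access to the rng property.

    def __init__(self):
        self._rng = random.Random()
        if hasattr(os, "register_at_fork"):
            os.register_at_fork(after_in_child=self._reseed)

    def _reseed(self):
        # Runs in the child process right after a fork.
        self._rng = random.Random()

    @property
    def rng(self):
        return self._rng
```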
I am re-submitting an older PR which was abandoned but is still relevant, #10783 by @timb07.
The issue being solved () is still relevant. The original PR #10783 was closed because
the final requested changes were never applied, and it has since been abandoned.
In this new PR I have re-used the original patch plus applied both comments from the review, by @maxking and @pganssle.
For reference, here is the original PR description:
In email.utils.parsedate_to_datetime(), a failure to parse the date, or invalid date components (such as hour outside 0..23) raises an exception. Document this behaviour, and add tests to test_email/test_utils.py to confirm this behaviour.
In email.headerregistry.DateHeader.parse(), check when parsedate_to_datetime() raises an exception and add a new defect InvalidDateDefect; preserve the invalid value as the string value of the header, but set the datetime attribute to None.
Add tests to test_email/test_headerregistry.py to confirm this behaviour; also added test to test_email/test_inversion.py to confirm emails with such defective date headers round trip successfully.
This pull request incorporates feedback gratefully received from @bitdancer, @brettcannon, @Mariatta and @warsaw, and replaces the earlier PR #2254.
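A hedged usage sketch of the behaviour described above (illustrative only, not taken from the PR):

```python
from email import message_from_string, policy

msg = message_from_string(
    "From: a@example.com\n"
    "Date: not a real date\n"
    "\n"
    "body\n",
    policy=policy.default,
)
date = msg["Date"]
print(date.datetime)  # expected: None, since the value could not be parsed
print(str(date))      # the original, invalid string value is preserved
print(date.defects)   # expected to contain an InvalidDateDefect
```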
Automerge-Triggered-By: GH:warsaw
Fix memory leak in subprocess.Popen() in case of uid/gid overflow
Also add a test that would catch this leak with `--huntrleaks`.
Alas, the test for `extra_groups` also exposes an inconsistency
in our error reporting: we use a custom ValueError for `extra_groups`,
but propagate OverflowError for `user` and `group`.
* bpo-41490: ``path`` method to aggressively close handles
* Add blurb
* In ZipReader.contents, eagerly evaluate the contents to release references to the zipfile.
* Instead use _ensure_sequence to ensure any iterable from a reader is eagerly converted to a list if it's not already a sequence.
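A hypothetical sketch of what such a helper could look like (the real name and signature may differ):

```python
from collections.abc import Sequence

def _ensure_sequence(iterable):
    # Materialize any iterable coming from a reader so that no open
    # handles linger behind a lazy generator.
    return iterable if isinstance(iterable, Sequence) else list(iterable)
```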
The original tool wasn't working right and it was simpler to create a new one, partially re-using some of the old code. At this point the tool runs properly on master. (Try: ./python Tools/c-analyzer/c-analyzer.py analyze.) It takes ~40 seconds on my machine to analyze the full CPython code base.
Note that we'll need to iron out some OS-specific stuff (e.g. preprocessor). We're okay though since this tool isn't used yet in our workflow. We will also need to verify the analysis results in detail before activating the check in CI, though I'm pretty sure it's close.
https://bugs.python.org/issue36876
* Add _newline_ parameter to `pathlib.Path.write_text()`
* Update documentation of `pathlib.Path.write_text()`
* Add test case for `pathlib.Path.write_text()` calls with _newline_ parameter passed
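For illustration, assuming the new parameter is forwarded to open() like the existing encoding and errors arguments:

```python
from pathlib import Path

p = Path("example.txt")
# Keep "\n" line endings as-is instead of translating them to os.linesep.
p.write_text("line one\nline two\n", newline="\n")
```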
Automerge-Triggered-By: GH:methane
* Add F_SETPIPE_SZ and F_GETPIPE_SZ to fcntl module
* Add pipesize parameter for subprocess.Popen class
This will allow the user to control the size of the pipes.
On Linux the default is 64K. When a pipe is full it blocks for writing.
When a pipe is empty it blocks for reading. On processes that are
very fast this can lead to a lot of wasted CPU cycles. On a typical
Linux system the max pipe size is 1024K which is much better.
For high performance-oriented libraries such as xopen it is nice to
be able to set the pipe size.
The workaround without this feature is to use my_popen_process.stdout.fileno() in
conjunction with fcntl and 1031 (the value of F_SETPIPE_SZ) to achieve this behavior.
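A sketch of both approaches on Linux (the sizes are illustrative; the constant 1031 and the pipesize keyword are as described above):

```python
import fcntl
import subprocess
import sys

cmd = [sys.executable, "-c", "print('hello')"]

# With this change: request a 1 MiB pipe directly.
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, pipesize=1024 * 1024)
proc.communicate()

# Previous workaround: raw fcntl call with the numeric value of F_SETPIPE_SZ.
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
fcntl.fcntl(proc.stdout.fileno(), 1031, 1024 * 1024)
proc.communicate()
```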
This PR replaces #1977. The reason for the replacement is two-fold.
The fix itself is different in that if the CTE header doesn't exist in the original message, it is inserted. This is important because the new CTE could be quoted-printable whereas the original is implicit 8bit.
Also the tests are different. The test_nonascii_as_string_without_cte test in #1977 doesn't actually test the issue, since it passes even without the fix. The test_nonascii_as_string_without_content_type_and_cte test is improved here, and even though it doesn't fail without the fix, it is included for completeness.
Automerge-Triggered-By: @warsaw
There are two different `SimpleQueue` types imported (from `multiprocessing.queues` and `queue`) in `Lib/test/test_genericalias.py`, the second one shadowing the first one, making the first one not actually tested. Fix by using different names.
Automerge-Triggered-By: @gvanrossum
Remove complex special methods __int__, __float__, __floordiv__,
__mod__, __divmod__, __rfloordiv__, __rmod__ and __rdivmod__
which always raised a TypeError.
This fixes the test failure with Tk 8.6.10 which is caused by changes to how Tk rounds the `from`, `to` and `tickinterval` arguments. This PR uses `noconv` if the patchlevel is greater than or equal to 8.6.10 (credit to Serhiy for this idea as it is much simpler than what I previously proposed).
Going into more detail for those who want it, the Tk change was made in [commit 591f68c](591f68cb38) and means that the arguments listed above are rounded relative to the value of `from`. However, when rounding the `from` argument ([line 623](591f68cb38/generic/tkScale.c (L623))), it is rounded relative to itself (i.e. rounding `0`) and therefore the assigned value for `from` is always what is given (no matter what values of `from` and `resolution`).
Automerge-Triggered-By: @pablogsal
This special marker annotation is intended to help in distinguishing
proper PEP 484-compliant type aliases from regular top-level variable
assignments.
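Assuming this refers to the `typing.TypeAlias` marker from PEP 613, usage looks roughly like:

```python
from typing import TypeAlias

Vector: TypeAlias = list[float]  # unambiguously a type alias
factor = 2.0                     # an ordinary top-level variable assignment
```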
The hard part was making all the tests pass; there are some subtle issues here, because apparently the future import wasn't tested very thoroughly in previous Python versions.
For example, `inspect.signature()` returned type objects normally (except for forward references), but strings with the future import. We changed it to try and return type objects by calling `typing.get_type_hints()`, but fall back on returning strings if that function fails (which it may do if there are future references in the annotations that require passing in a specific namespace to resolve).
Similarly to GH-22566, those tests called eval() on content received via
HTTP in test_named_sequences_full. This likely isn't exploitable because
unicodedata.lookup(seqname) is called before self.checkletter(seqname,
None) - thus any string which isn't a valid unicode character name
wouldn't ever reach the checkletter method.
Still, it's probably better to be safe than sorry.
Enable recursion checks which were disabled when getting __bases__ of
non-type objects in issubclass() and isinstance() and when interning
strings. This fixes a stack overflow when getting __bases__ leads
to infinite recursion.
Originally, recursion checks were disabled for PyDict_GetItem(), which
silences all errors, including the one raised in case of detected
recursion, and can return an incorrect result. But now the code uses
PyDict_GetItemWithError() and PyDict_SetDefault() instead.
* bpo-26680: Adds support for int.is_integer() for compatibility with float.is_integer().
The int.is_integer() method always returns True.
* bpo-26680: Adds a test to ensure that False.is_integer() and True.is_integer() are always True.
* bpo-26680: Adds Real.is_integer() with a trivial implementation using conversion to int.
This default implementation is intended to reduce the workload for subclass
implementers. It is not robust in the presence of infinities or NaNs and
may have suboptimal performance for other types.
* bpo-26680: Adds Rational.is_integer which returns True if the denominator is one.
This implementation assumes the Rational is represented in its
lowest terms, as required by the class docstring; a sketch of these
trivial implementations appears after this list.
* bpo-26680: Adds Integral.is_integer which always returns True.
* bpo-26680: Adds tests for Fraction.is_integer called as an instance method.
The tests for the Rational abstract base class use an unbound
method to sidestep the inability to directly instantiate Rational.
These tests check that everything works correctly as an instance method.
* bpo-26680: Updates documentation for Real.is_integer and built-ins int and float.
The call x.is_integer() is now listed in the table of operations
which apply to all numeric types except complex, with a reference
to the full documentation for Real.is_integer(). Mention of
is_integer() has been removed from the section 'Additional Methods
on Float'.
The documentation for Real.is_integer() describes its purpose, and
mentions that it should be overridden for performance reasons, or
to handle special values like NaN.
* bpo-26680: Adds Decimal.is_integer to the Python and C implementations.
The C implementation of Decimal already implements and uses
mpd_isinteger internally; we just expose the existing function to
Python.
The Python implementation uses internal conversion to integer
using to_integral_value().
In both cases, the corresponding context methods are also
implemented.
Tests and documentation are included.
* bpo-26680: Updates the ACKS file.
* bpo-26680: NEWS entries for int, the numeric ABCs and Decimal.
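A rough sketch of the trivial implementations mentioned in the bullets above (toy classes, not the actual numbers ABCs):

```python
class Real:
    def is_integer(self):
        # Default: true if conversion to int round-trips. Not robust for
        # infinities or NaNs, and possibly slow for some types.
        return self == int(self)

class Rational(Real):
    def is_integer(self):
        # Assumes the fraction is stored in lowest terms.
        return self.denominator == 1

class Integral(Rational):
    def is_integer(self):
        return True
```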
Co-authored-by: Robert Smallshire <rob@sixty-north.com>
This commit reverts commit ac0333e1e1 as the original links are working again and they provide extended features such as comments and alternative versions.
* When the parameters argument is a list, correctly handle the case
of changing it during iteration.
* When the parameters argument is a custom sequence, no longer
override an exception raised in ``__len__()``.
EnumMeta double-checks that `__repr__`, `__str__`, `__format__`, and `__reduce_ex__` are not the same as `object`'s, and replaces them if they are -- even if that replacement was intentionally done in the Enum being constructed. This patch fixes that.
Automerge-Triggered-By: @ethanfurman
Partially revert commit ac46eb4ad6662cf6d771b20d8963658b2186c48c:
"bpo-38113: Update the Python-ast.c generator to PEP384 (gh-15957)".
Using a module state per module instance is causing subtle practical
problems.
For example, the Mercurial project replaces the __import__() function
to implement lazy import, whereas Python expected that "import _ast"
always return a fully initialized _ast module.
Add _PyAST_Fini() to clear the state at exit.
The _ast module has no state (set _astmodule.m_size to 0). Remove
astmodule_traverse(), astmodule_clear() and astmodule_free()
functions.
tarfile writes the full path to the FNAME field of the GZIP format instead of just the basename if the user specified an absolute path. Some archive viewers may process the file incorrectly. It also creates a security issue, because anyone can learn the directory structure of the system and the username or other personal information.
RFC1952 says about FNAME:
This is the original name of the file being compressed, with any directory components removed.
So tarfile must remove the directory components from FNAME and write only the basename of the file.
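As a minimal illustration of the intended behaviour (toy code, not the actual tarfile change):

```python
import gzip
import os
import tempfile

archive_path = os.path.join(tempfile.mkdtemp(), "backup.tar.gz")

# Record only the basename in the gzip FNAME header field, never the full path.
with open(archive_path, "wb") as raw:
    with gzip.GzipFile(filename=os.path.basename(archive_path),
                       mode="wb", fileobj=raw) as gz:
        gz.write(b"payload")
```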
Automerge-Triggered-By: @jaraco
Currently, empty sequences in gather rules make the conditional for
gather rules fail as empty sequences evaluate as "False". We need to
explicitly check for "None" (the failure condition) to avoid false
negatives.
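A toy Python illustration of the truthiness pitfall (not the generated parser code itself):

```python
def gather(tokens):
    # Stand-in for a gather rule: returns a (possibly empty) list of
    # matches on success, or None on failure.
    if tokens is None:
        return None
    return [t for t in tokens if t]

result = gather([])
if result:              # BUG: an empty match is a success but is falsy
    print("matched:", result)
if result is not None:  # FIX: only None signals failure
    print("matched:", result)
```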
Simply closing the event loop isn't enough to avoid warnings. If we
don't also shut down the event loop's default executor, it sometimes
logs a "dangling thread" warning.
Follow-up to GH-22017
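For example, a teardown along these lines avoids the warning (a sketch, assuming we hold a reference to the loop):

```python
import asyncio

loop = asyncio.new_event_loop()
try:
    loop.run_until_complete(asyncio.sleep(0))
finally:
    # Shut down the default executor's worker threads before closing,
    # otherwise a "dangling thread" warning may be logged.
    loop.run_until_complete(loop.shutdown_default_executor())
    loop.close()
```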
* bpo-41696: Fix handling of debug mode in asyncio.run
This allows PYTHONASYNCIODEBUG or -X dev to enable asyncio debug mode
when using asyncio.run
* 📜🤖 Added by blurb_it.
Co-authored-by: hauntsaninja <>
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
When allocating MemoryError instances, there is some logic to use
pre-allocated instances from a freelist only if the type that is being
allocated is not a subclass of MemoryError. Unfortunately, this logic
is not present in the destructor, so the freelist is altered even
with subclasses of MemoryError.
Stopping and restarting a proactor event loop on Windows can lead to
spurious errors logged (ConnectionResetError while reading from the
self pipe). This fixes the issue by ensuring that we don't attempt
to start multiple copies of the self-pipe reading loop.
Currently, if `asyncio.wait_for()` itself is cancelled it will always
raise `CancelledError` regardless of whether the underlying task is still
running. This is similar to a race with the timeout, which is handled
already.
When I was fixing bpo-32751 back in GH-7216 I missed the case when
*timeout* is zero or negative. This takes care of that.
Props to @aaliddell for noticing the inconsistency.
asyncio.AbstractEventLoop.run_in_executor should be a method that returns an asyncio Future, not an async method.
This better matches both the concrete implementations and the documentation.
Prior to this change, attempting to subclass the C implementation of
zoneinfo.ZoneInfo gave the following error:
TypeError: unbound method ZoneInfo.__init_subclass__() needs an argument
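For instance, something like the following failed with the C implementation before the fix (assuming time zone data for "UTC" is available):

```python
import zoneinfo

class LoggingZoneInfo(zoneinfo.ZoneInfo):
    # A trivial subclass; merely creating it used to trigger the error above.
    pass

print(LoggingZoneInfo("UTC"))
```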
https://bugs.python.org/issue41025
The http.cookiejar module has is_blocked() and blocked_domains()
methods, so the term "blocklist" sounds better than "denylist" in this
module.
Also replace "denylisted" with "denied" in test___all__.
The test_run method test_fatal_error failed when run twice, as with
python -m test -m test_fatal_error test_idle test_idle
because func.called was not reinitialized to 0.
This bug caused a failure on a refleak buildbot.
PEP 563 was updated to change the release where `from __future__ import annotations` becomes the default (and only) behavior from 4.0 to 3.10. Update `__future__.py` and its docs to reflect this.
On some platforms, such as VMware ESXi, DefaultSelector fails
to detect a selector due to the default value.
This fix adds a check and uses the correct selector depending on the
select implementation and an actual call.
Fixes: [bpo-41182]()
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
Walk down the MRO backwards to find the type that originally defined the final `tp_setattro`, then make sure we are not jumping over intermediate C-level bases with the Python-level call.
Automerge-Triggered-By: @gvanrossum
* Merge gen and frame state variables into one.
* Replace stack pointer with depth in PyFrameObject. Makes code easier to read and saves a word of memory.
Add an accessor under SSLContext.security_level as a wrapper around
SSL_CTX_get_security_level, see:
https://www.openssl.org/docs/manmaster/man3/SSL_CTX_get_security_level.html
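A quick usage sketch (assuming the accessor is exposed as a read-only attribute as described):

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# OpenSSL's default security level is typically 1 or 2, depending on the build.
print(ctx.security_level)
```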
------
This is my first time contributing, so please pull me up on all the things I missed or did incorrectly.
Automerge-Triggered-By: @tiran
* bpo-41273: Proactor transport read loop to use recv_into
By using recv_into instead of recv we do not allocate a new buffer each
time _loop_reading calls recv.
This improves performance for any stream using proactor (basically any
asyncio stream on Windows).
* bpo-41273: Double proactor read transport buffer size
By doubling the read buffer size we get better performance.
Keywords are present in the main module tab completion lists generated by rlcompleter, which is used by REPLs on *nix. Add all keywords to IDLE's main module name list except those already added from builtins (True, False, and None). This list may also be used by Show Completions on the Edit menu, and its hot key.
Rewrite Completions doc.
Co-authored-by: Cheryl Sabella <cheryl.sabella@gmail.com>
* Add failing test.
* bpo-29590: fix stack trace for gen.throw() with yield from (GH-NNNN)
When gen.throw() is called on a generator after a "yield from", the
intermediate stack trace entries are lost. This commit fixes that (see the sketch after this list).
* Document is_annotate() and update doc strings
* Move quotes to the next line.
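A toy illustration of the scenario (hypothetical example, not taken from the commit):

```python
import traceback

def inner():
    yield 1

def outer():
    yield from inner()

g = outer()
next(g)  # g is now suspended inside inner(), reached via outer()
try:
    g.throw(ValueError("boom"))
except ValueError:
    # With the fix, the traceback shows both the outer() and inner() frames;
    # previously the intermediate outer() entry was missing.
    traceback.print_exc()
```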
Co-authored-by: Pablo Galindo <Pablogsal@gmail.com>
Co-authored-by: Pablo Galindo <Pablogsal@gmail.com>
The write_history() atexit function of the readline completer now
ignores any OSError, for example when the file system is read-only,
instead of only ignoring FileNotFoundError and PermissionError.
Add sys.orig_argv attribute: the list of the original command line
arguments passed to the Python executable.
Also rename PyConfig._orig_argv to PyConfig.orig_argv and
document it.
The __hash__() methods of the IPv4Interface and IPv6Interface classes had the issue
of generating constant hash values of 32 and 128 respectively, causing hash collisions.
The fix uses the hash() function to generate hash values for the objects
instead of the XOR operation.
I've done the implementation for both non-chunked and chunked reads. I haven't benchmarked chunked reads because I don't currently have a convenient way to generate a high-bandwidth chunked stream, but I don't see any reason that it shouldn't enjoy the same benefits that the non-chunked case does. I've used the benchmark attached to the bpo bug to verify that performance now matches the unsized read case.
Automerge-Triggered-By: @methane
* Revert "bpo-40521: Make the empty frozenset per interpreter (GH-21068)"
This reverts commit 261cfedf76.
* bpo-40521: Empty frozensets are no longer singletons
* Complete the removal of the frozenset singleton
Unexpected errors in calling the __iter__ method are no longer
masked by TypeError in the "in" operator and functions
operator.contains(), operator.indexOf() and operator.countOf().
`GET_INVALID_TARGET` might unexpectedly return `NULL`, which if not
caught will cause a SEGFAULT. Therefore, this commit introduces a new
inline function `RAISE_SYNTAX_ERROR_INVALID_TARGET` that always
checks for `GET_INVALID_TARGET` returning NULL and can be used in
the grammar, replacing the long C ternary operation used till now.