Simplify the importlib external bootstrap code:
importlib._bootstrap_external now uses regular imports to import
builtin modules. When it is imported, the builtin __import__()
function is already fully working and so can be used to import
builtin modules like sys.
This change partially reverts
commit ad3252bad9 and commit fe2978b3b9.
Many third party C extension modules rely on the ability to use
Py_TYPE() to set an object's type ("Py_TYPE(obj) = type;") or
Py_SIZE() to set an object's size ("Py_SIZE(obj) = size;").
Fix a race condition in "make regen-all" when the make -jN option is used
to run jobs in parallel. The clinic.py script now only uses atomic
writes to write files. Moreover, generated files are now left
unchanged if their content does not change, so that the file
modification time is not altered.
The "make regen-all" command runs "make clinic" and "make
regen-importlib" targets:
* "make regen-importlib" builds object files (ex: Modules/_weakref.o)
from source files (ex: Modules/_weakref.c) and clinic files (ex:
Modules/clinic/_weakref.c.h)
* "make clinic" always rewrites all clinic files
(ex: Modules/clinic/_weakref.c.h)
Since there is no dependency between "clinic" and "regen-importlib"
Makefile targets, these two targets can be run in parallel. Moreover,
half of clinic.py file writes are not atomic and so there is a race
condition when "make regen-all" runs jobs in parallel using make -jN
option (which can be passed in MAKEFLAGS environment variable).
Fix clinic.py to make all file writes atomic:
* Add a write_file() function to ensure that all file writes are
  atomic: write into a temporary file and then use os.replace()
  (see the sketch below).
* Moreover, write_file() doesn't recreate or modify the file if the
  content does not change, to avoid modifying the file modification
  time.
* Update test_clinic to verify these assertions with a functional
test.
* Remove Clinic.force attribute which was no longer used, whereas
Clinic.verify remains useful.
bpo-41686, bpo-41713: On Windows, the SIGINT event,
_PyOS_SigintEvent(), is now created even if Python is configured to
not install signal handlers (PyConfig.install_signal_handlers=0 or
Py_InitializeEx(0)).
Changes:
* Move global variables initialization from signal_exec() to
_PySignal_Init() to clarify that they are global variables cleared
by _PySignal_Fini().
* _PySignal_Fini() now closes sigint_event.
* IntHandler is no longer a global variable.
Remove the undocumented PyOS_InitInterrupts() C function.
* Rename PyOS_InitInterrupts() to _PySignal_Init(). It now installs
other signal handlers, not only SIGINT.
* Rename PyOS_FiniInterrupts() to _PySignal_Fini().
Literal equality no longer depends on the order of arguments.
Fix an issue related to `typing.Literal` caching by adding a `typed` parameter to the `typing._tp_cache` function.
Add deduplication of `typing.Literal` arguments.
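A short illustration of the new behaviour (assuming Python 3.10 semantics for Literal):

```python
from typing import Literal

# Equality no longer depends on the order of arguments.
assert Literal[1, 2, 3] == Literal[3, 2, 1]

# Duplicate arguments are removed.
assert Literal[1, 1, 2].__args__ == (1, 2)
```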
Currently, walruses are not allowed in set literals and set comprehensions:
>>> {y := 4, 4**2, 3**3}
File "<stdin>", line 1
{y := 4, 4**2, 3**3}
^
SyntaxError: invalid syntax
but they should be allowed as well, per PEP 572.
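With the grammar fixed, the same expression is expected to parse; a hedged sketch of the intended behaviour:

```python
# Assignment expressions in a set display...
s = {y := 4, 4**2, 3**3}
assert y == 4 and s == {4, 16, 27}

# ...and in a set comprehension (the walrus binds in the enclosing scope).
assert {last := n for n in range(3)} == {0, 1, 2}
assert last == 2
```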
As AIX 5.3 and below do not support thread_cputime, it was decided in
https://bugs.python.org/issue40680 to require AIX 6.1 and above. This
commit removes workarounds for — and references to — older, unsupported
AIX versions.
The time.time(), time.perf_counter() and time.monotonic() functions can
no longer fail with a Python fatal error; instead, they raise a regular
Python exception on failure.
Remove _PyTime_Init(): don't check system, monotonic and perf counter
clocks at startup anymore.
On error, _PyTime_GetSystemClock(), _PyTime_GetMonotonicClock() and
_PyTime_GetPerfCounter() now silently ignore the error and return 0.
They cannot fail with a Python fatal error anymore.
Add py_mach_timebase_info() and win_perf_counter_frequency()
sub-functions.
Fix the threading.Thread class at fork: do nothing if the thread is
already stopped (ex: fork called at Python exit). Previously, an
error was logged in the child process.
time.perf_counter() on Windows and time.monotonic() on macOS are now
system-wide. Previously, they used an offset computed at startup to
reduce the precision loss caused by the float type. Use
time.perf_counter_ns() and time.monotonic_ns() added in Python 3.7 to
avoid this precision loss.
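A rough illustration of why the nanosecond variants avoid the issue: they
return integers, so nothing is lost to float rounding (hypothetical workload):

```python
import time

t0 = time.perf_counter_ns()      # integer nanoseconds, no float rounding
sum(range(1_000_000))            # some work to measure
elapsed_ns = time.perf_counter_ns() - t0
print(f"elapsed: {elapsed_ns / 1e9:.9f} s")
```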
Fix building the pycore_bitutils.h internal header on old clang versions
without __builtin_bswap16() (ex: Xcode 4.6.3 on Mac OS X 10.7).
Add a new private _Py__has_builtin() macro to check for availability
of a preprocessor builtin function.
Co-authored-by: Joshua Root <jmr@macports.org>
On Windows, fix a regression in signal handling which prevented
interrupting a program using CTRL+C. The signal handler can be run in a
thread other than the main Python thread, in which case the test
deciding whether the thread can handle signals was wrong.
On Windows, _PyEval_SignalReceived() now always sets eval_breaker to
1 since it cannot test _Py_ThreadCanHandleSignals(), and
eval_frame_handle_pending() always calls
_Py_ThreadCanHandleSignals() to recompute eval_breaker.
It is no longer possible to build the _ctypes extension module
without the wchar_t type: remove the CTYPES_UNICODE macro. In any case,
the wchar_t type is required to build Python.
# Improve asyncio.wait function
The original code creates the futures set twice.
We can create this set up front, avoiding the second creation.
The double iteration [breaks the aiokafka library](https://github.com/aio-libs/aiokafka/pull/672), because it passes an iterator to that function, so the second iteration becomes empty.
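A small sketch of the case in question: a generator can only be consumed once,
so asyncio.wait() must build its internal set of futures a single time
(hypothetical example, not the aiokafka code):

```python
import asyncio

async def main():
    async def job(i):
        await asyncio.sleep(0.01 * i)
        return i

    # A lazily evaluated iterable: consuming it twice would leave the
    # second pass empty.
    tasks = (asyncio.ensure_future(job(i)) for i in range(3))
    done, pending = await asyncio.wait(tasks)
    assert {t.result() for t in done} == {0, 1, 2}
    assert not pending

asyncio.run(main())
```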
Automerge-Triggered-By: GH:1st1
Fix _PyConfig_Read() if compute_path_config=0: use values set by
Py_SetPath(), Py_SetPythonHome() and Py_SetProgramName(). Add
compute_path_config parameter to _PyConfig_InitPathConfig().
The following functions now return NULL if called before
Py_Initialize():
* Py_GetExecPrefix()
* Py_GetPath()
* Py_GetPrefix()
* Py_GetProgramFullPath()
* Py_GetProgramName()
* Py_GetPythonHome()
These functions no longer automatically compute the Python Path
Configuration. Moreover, Py_SetPath() no longer computes
program_full_path.
The onerror callback is supposed to be called with the function that failed, but in this case lstat was wrongly passed instead of open.
Not sure if this needs a bug report or not...
Automerge-Triggered-By: GH:hynek
Adds support to Tools/i18n/pygettext.py for gettext calls in f-strings. This process is done by parsing the f-strings, processing each value, and flagging the ones which contain a gettext call.
Co-authored-by: Batuhan Taskaya <batuhanosmantaskaya@gmail.com>
Co-authored-by: Lawrence D’Anna <lawrence_danna@apple.com>
* Add support for macOS 11 and Apple Silicon (aka arm64)
As a side effect of this work use the system copy of libffi on macOS, and remove the vendored copy
* Support building on recent versions of macOS while deploying to older versions
This allows building installers on macOS 11 while still supporting macOS 10.9.
* The AST optimiser wasn't descending into named expressions, so
any constant subexpressions weren't being folded at compile time
* Remove "default:" clauses inside the AST optimiser code to reduce the
risk of similar bugs passing unnoticed in future compiler changes
The format_exception(), format_exception_only(), and
print_exception() functions can now take an exception object as a positional-only argument.
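A hedged example of the new calling convention (Python 3.10+):

```python
import traceback

try:
    1 / 0
except ZeroDivisionError as exc:
    # New: pass just the exception instance.
    lines = traceback.format_exception(exc)
    # The old (type, value, traceback) form still works.
    assert lines == traceback.format_exception(type(exc), exc, exc.__traceback__)
```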
Co-Authored-By: Matthias Bussonnier <bussonniermatthias@gmail.com>
The PyConfig_Read() function now only parses PyConfig.argv arguments
once: PyConfig.parse_argv is set to 2 after arguments are parsed.
Since Python arguments are stripped from PyConfig.argv, parsing
arguments twice would parse the application options as Python
options.
* Rework the PyConfig documentation.
* Fix _testinternalcapi.set_config() error handling.
* SetConfigTests no longer needs parse_argv=0 when restoring the old
configuration.
Currently, a Mock object which is not unsafe will raise an
AttributeError if an attribute with the prefix assert or assret is
accessed on it. This protects against misspellings of real assert
method calls, which would otherwise lead to tests passing silently even
if the tested code does not satisfy the intended assertion.
Recently a check was done in a large code base (Google) and three
more frequent misspellings of assert were found to cause harm:
asert, aseert, assrt. These are now added to the existing check.
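A small sketch of the check in action (attribute names are matched by
prefix; unsafe=True opts out):

```python
from unittest import mock

m = mock.Mock()
m()
m.assert_called_once_with()        # real assertion, passes

try:
    m.assrt_called_once_with()     # newly covered misspelling
except AttributeError:
    print("typo caught")

# unsafe=True disables the protection: the typo silently creates a child mock.
mock.Mock(unsafe=True).assrt_called_once_with()
```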
When Py_Initialize() is called twice, the second call now updates
more sys attributes for the configuration, rather than only sys.argv.
* Rename _PySys_InitMain() to _PySys_UpdateConfig().
* _PySys_UpdateConfig() now modifies sys.flags in-place, instead of
creating a new flags object.
* Remove old commented sys.flags flags (unbuffered and skip_first).
* Add private _PySys_GetObject() function.
* When Py_Initialize(), Py_InitializeFromConfig() and
The logging.FileHandler class now keeps a reference to the builtin
open() function to be able to open or reopen the file during Python
finalization.
Fix errors like:
Exception ignored in: (...)
Traceback (most recent call last):
(...)
File ".../logging/__init__.py", line 1463, in error
File ".../logging/__init__.py", line 1577, in _log
File ".../logging/__init__.py", line 1587, in handle
File ".../logging/__init__.py", line 1649, in callHandlers
File ".../logging/__init__.py", line 948, in handle
File ".../logging/__init__.py", line 1182, in emit
File ".../logging/__init__.py", line 1171, in _open
NameError: name 'open' is not defined
The ast module internal state is now per interpreter.
* Rename "astmodulestate" to "struct ast_state"
* Add pycore_ast.h internal header: the ast_state structure is now
declared in pycore_ast.h.
* Add PyInterpreterState.ast (struct ast_state)
* Remove get_ast_state()
* Rename get_global_ast_state() to get_ast_state()
* PyAST_obj2mod() now handles get_ast_state() failures
* Prevent some possible DoS attacks via invalid Plist files
with an extremely large number of objects or collection sizes.
* Raise InvalidFileException for too-large bytes and string sizes instead of returning garbage.
* Raise InvalidFileException instead of ValueError for specific invalid datetime (NaN).
* Raise InvalidFileException instead of TypeError for non-hashable dict keys.
* Add more tests for invalid Plist files.
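A hedged sketch of the stricter behaviour: malformed binary plist data is
rejected with plistlib.InvalidFileException instead of producing garbage or
an unrelated error (the byte string below is just an arbitrary invalid example):

```python
import plistlib

# A "binary plist" with a valid magic header but a nonsense body/trailer.
bad = b"bplist00" + b"\x00" * 24

try:
    plistlib.loads(bad, fmt=plistlib.FMT_BINARY)
except plistlib.InvalidFileException as exc:
    print("rejected:", exc)
```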
This adds a new function named sys._current_exceptions() which is equivalent to
sys._current_frames() except that it returns the exceptions currently handled
by other threads. It is equivalent to calling sys.exc_info() for each running
thread.
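A hedged usage sketch; the value format mirrors sys.exc_info(), and threads
that are not currently handling an exception are omitted:

```python
import sys
import threading

ready = threading.Event()
done = threading.Event()

def worker():
    try:
        raise ValueError("handled in another thread")
    except ValueError:
        ready.set()
        done.wait()          # stay inside the except block

t = threading.Thread(target=worker)
t.start()
ready.wait()

# Maps thread identifier -> exception state currently handled by that thread.
for ident, exc_info in sys._current_exceptions().items():
    print(ident, exc_info)

done.set()
t.join()
```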
They occurred both with repeated 'force-calltip' invocations and when typing parentheses
in expressions, strings, and comments in the argument code.
Co-authored-by: Terry Jan Reedy <tjreedy@udel.edu>
* bpo-37193: remove the thread which finished processing a request from the threads list
* rename variable t to thread.
* don't remove a thread from the list if it is a daemon.
* use a lock to protect self._threads.
* use a finally block in case of an exception from shutdown_request().
* check "not thread.daemon" before locking to avoid holding the lock if it's unnecessary.
* fix the place of _threads_lock.
* separate the code that removes the current thread into a function.
* handle ValueError when removing a thread.
* fix wrong code in which all instances shared the same lock.
* Extract thread management into a _Threads class to encapsulate atomic operations and separate concerns.
* Replace multiple references of 'block_on_close' with one, avoiding the possibility that 'block_on_close' could change during the course of processing requests. Now, there's exactly one _threads object with behavior fixed for the duration.
* Add docstrings to private classes.
* Add test to ensure that a ThreadingTCPServer can be closed without serving any requests.
* Use _NoThreads as the default value. Fixes AttributeError when server is closed without serving any requests.
* Add blurb
* Add test capturing failure.
Co-authored-by: Jason R. Coombs <jaraco@jaraco.com>
If the nl_langinfo(CODESET) function returns an empty string, Python
now uses UTF-8 as the filesystem encoding.
In May 2010 (commit b744ba1d14), I
modified Python to log a warning and use UTF-8 as the filesystem
encoding (instead of None) if nl_langinfo(CODESET) returns an empty
string.
In August 2020 (commit 94908bbc15), I
modified Python startup to fail with a fatal error and a specific
error message if nl_langinfo(CODESET) returns an empty string. The
intent was to prevent guessing the encoding and also to investigate user
configurations where this case happens.
In 10 years (2010 to 2020), I saw zero user reports about the error
message related to nl_langinfo(CODESET) returning an empty string.
Today, UTF-8 has become the de facto standard and it's safe to assume
that the user expects UTF-8. For example, nl_langinfo(CODESET) can
return an empty string on macOS if the LC_CTYPE locale is not
supported, and UTF-8 is the default encoding on macOS.
While this change is unlikely to affect anyone in practice, it should
make UTF-8 lovers happy ;-)
Also rewrite the documentation explaining how Python selects the
filesystem encoding and error handler.
bpo-29566 notes that binhex.binhex uses inconsistent line endings (both Unix and MacOS9 line endings are used). This PR changes this to use the MacOS9 line endings everywhere.
Left-recursive rules need to check for errors explicitly, since
even if the rule returns NULL, the parsing might continue and lead
to long-distance failures.
Co-authored-by: Pablo Galindo <Pablogsal@gmail.com>
The _RandomSequence class in tempfile used to check the current pid every time its rng property was used.
This commit replaces this code with `os.register_at_fork` to reduce the overhead.
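A minimal sketch of the idea (not the actual tempfile code): register an
after-fork hook that drops the cached generator so the child builds a fresh
one, instead of comparing os.getpid() on every access:

```python
import os
import random

class _RandomSequence:
    def __init__(self):
        self._rng = None
        # Runs once in each forked child, invalidating the parent's RNG state.
        os.register_at_fork(after_in_child=self._reset)

    def _reset(self):
        self._rng = None

    @property
    def rng(self):
        if self._rng is None:
            self._rng = random.Random()
        return self._rng
```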
Removed the unicodedata.ucnhash_CAPI attribute which was an internal
PyCapsule object. The related private _PyUnicode_Name_CAPI structure
was moved to the internal C API.
Rename unicodedata.ucnhash_CAPI as unicodedata._ucnhash_CAPI.
I am re-submitting an older PR which was abandoned but is still relevant, #10783 by @timb07.
The issue being solved () is still relevant. The original PR #10783 was closed as
the final requested changes were not applied, and it has since been abandoned.
In this new PR I have re-used the original patch and applied both comments from the review, by @maxking and @pganssle.
For reference, here is the original PR description:
In email.utils.parsedate_to_datetime(), a failure to parse the date, or invalid date components (such as hour outside 0..23) raises an exception. Document this behaviour, and add tests to test_email/test_utils.py to confirm this behaviour.
In email.headerregistry.DateHeader.parse(), check when parsedate_to_datetime() raises an exception and add a new defect InvalidDateDefect; preserve the invalid value as the string value of the header, but set the datetime attribute to None.
Add tests to test_email/test_headerregistry.py to confirm this behaviour; also added test to test_email/test_inversion.py to confirm emails with such defective date headers round trip successfully.
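A hedged illustration of the new behaviour with the default (headerregistry-based) policy:

```python
from email import message_from_string, policy
from email.errors import InvalidDateDefect

msg = message_from_string(
    "Date: not a real date\n"
    "From: a@example.com\n"
    "Subject: hi\n"
    "\n"
    "body\n",
    policy=policy.default,
)

date = msg["Date"]
print(date.datetime)        # None: the value could not be parsed
print(str(date))            # the invalid value is kept as the string value
assert any(isinstance(d, InvalidDateDefect) for d in date.defects)
```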
This pull request incorporates feedback gratefully received from @bitdancer, @brettcannon, @Mariatta and @warsaw, and replaces the earlier PR #2254.
Automerge-Triggered-By: GH:warsaw
* Implement running the parser a second time for the error messages
The first parser run is only responsible for detecting whether
there is a `SyntaxError` or not. If there isn't, the AST gets returned.
Otherwise, the parser is run a second time with all the `invalid_*`
rules enabled so that all the customized error messages get produced.
Convert the unicodedata extension module to the multiphase
initialization API (PEP 489) and convert the unicodedata.UCD static
type to a heap type.
Co-Authored-By: Mohamed Koubaa <koubaa.m@gmail.com>
* UCD_Check() uses PyModule_Check()
* Simplify the internal _PyUnicode_Name_CAPI structure:
* Remove size and state members
* Remove state and self parameters of getcode() and getname()
functions
* Remove global_module_state
The private _PyUnicode_Name_CAPI structure of the PyCapsule API
unicodedata.ucnhash_CAPI moves to the internal C API. Moreover, the
structure gets a new state member which must be passed to the
getcode() and getname() functions.
* Move Include/ucnhash.h to Include/internal/pycore_ucnhash.h
* unicodedata module is now built with Py_BUILD_CORE_MODULE.
* unicodedata: move hashAPI variable into unicodedata_module_state.
Fix memory leak in subprocess.Popen() in case of uid/gid overflow
Also add a test that would catch this leak with `--huntrleaks`.
Alas, the test for `extra_groups` also exposes an inconsistency
in our error reporting: we use a custom ValueError for `extra_groups`,
but propagate OverflowError for `user` and `group`.
* bpo-41490: ``path`` method to aggressively close handles
* Add blurb
* In ZipReader.contents, eagerly evaluate the contents to release references to the zipfile.
* Instead use _ensure_sequence to ensure any iterable from a reader is eagerly converted to a list if it's not already a sequence.
* bpo-35823: subprocess: Use vfork() instead of fork() on Linux when safe
When used to run a new executable image, fork() is not a good choice
for process creation, especially if the parent has a large working set:
fork() needs to copy page tables, which is slow, and may fail on systems
where overcommit is disabled, despite that the child is not going to
touch most of its address space.
Currently, subprocess is capable of using posix_spawn() instead, which
normally provides much better performance. However, posix_spawn() does not
support many of the child setup operations exposed by subprocess.Popen().
Most notably, it's not possible to express `close_fds=True`, which
happens to be the default, via posix_spawn(). As a result, most users
can't benefit from faster process creation, at least not without
changing their code.
However, Linux provides vfork() system call, which creates a new process
without copying the address space of the parent, and which is actually
used by C libraries to efficiently implement posix_spawn(). Due to sharing
of the address space and even the stack with the parent, extreme care
is required to use vfork(). At least the following restrictions must hold:
* No signal handlers must execute in the child process. Otherwise, they
might clobber memory shared with the parent, potentially confusing it.
* Any library function called after vfork() in the child must be
async-signal-safe (as for fork()), but it must also not interact with any
library state in a way that might break due to address space sharing
and/or lack of any preparations performed by libraries on normal fork().
POSIX.1 permits calling only execve() and _exit(), and later revisions
remove the vfork() specification entirely. In practice, however, almost all
operations needed by subprocess.Popen() can be safely implemented on
Linux.
* Due to sharing of the stack with the parent, the child must be careful
not to clobber local variables that are alive across vfork() call.
Compilers are normally aware of this and take extra care with vfork()
(and setjmp(), which has a similar problem).
* In case the parent is privileged, special attention must be paid to vfork()
use, because sharing an address space across different privilege domains
is insecure[1].
This patch adds support for using vfork() instead of fork() on Linux
when it's possible to do safely given the above. In particular:
* vfork() is not used if a credential switch is requested. The reverse case
  (a simple subprocess.Popen() while another application thread switches
  credentials concurrently) is not possible for pure-Python apps because
  subprocess.Popen() and functions like os.setuid() are mutually exclusive
  thanks to the GIL. We might also consider adding a way to opt out of vfork()
  (and posix_spawn() on platforms where it might be implemented via vfork())
  in a future PR.
* vfork() is not used if `preexec_fn != None`.
With this change, subprocess will still use posix_spawn() if possible, but
will fall back to vfork() on Linux in most cases, and, failing that,
to fork().
[1] https://ewontfix.com/7
Co-authored-by: Gregory P. Smith [Google LLC] <gps@google.com>
bpo-39416: Document string representations of the Numeric classes
This is a change to the specification of the Python language.
The idea here is to put sane minimal limits on the Python language's default
representations of its Numeric classes. That way "Marty's Robotic Massage Parlor
and Python Interpreter" implementation of Python won't do anything too
crazy.
Some discussion in the email thread:
Subject: Documenting Python's float.__str__()
https://mail.python.org/archives/list/python-dev@python.org/thread/FV22TKT3S2Q3P7PNN6MCXI6IX3HRRNAL/
* Add _newline_ parameter to `pathlib.Path.write_text()`
* Update documentation of `pathlib.Path.write_text()`
* Add test case for `pathlib.Path.write_text()` calls with _newline_ parameter passed
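A hedged example of the new parameter (Python 3.10+ signature assumed):

```python
from pathlib import Path
import tempfile

path = Path(tempfile.mkdtemp()) / "out.txt"

# newline="\n" disables newline translation, so the file keeps LF line
# endings even on Windows.
path.write_text("first\nsecond\n", newline="\n")
assert path.read_bytes() == b"first\nsecond\n"
```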
Automerge-Triggered-By: GH:methane
* Add F_SETPIPE_SZ and F_GETPIPE_SZ to fcntl module
* Add pipesize parameter for subprocess.Popen class
This will allow the user to control the size of the pipes.
On Linux the default is 64K. When a pipe is full it blocks for writing.
When a pipe is empty it blocks for reading. For processes that are
very fast this can lead to a lot of wasted CPU cycles. On a typical
Linux system the max pipe size is 1024K, which is much better.
For high performance-oriented libraries such as xopen it is nice to
be able to set the pipe size.
The workaround without this feature is to use my_popen_process.stdout.fileno() in
conjunction with fcntl and 1031 (the value of F_SETPIPE_SZ) to achieve this behavior.
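A hedged sketch of the new parameter on Linux (requires a kernel that honours
F_SETPIPE_SZ):

```python
import fcntl
import subprocess
import sys

# Ask for a 1 MiB pipe instead of the 64 KiB default.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('hello')"],
    stdout=subprocess.PIPE,
    pipesize=1024 * 1024,
)
print(fcntl.fcntl(proc.stdout.fileno(), fcntl.F_GETPIPE_SZ))  # actual size
print(proc.communicate()[0])
```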
This PR replaces #1977. The reason for the replacement is two-fold.
The fix itself is different in that if the CTE header doesn't exist in the original message, it is inserted. This is important because the new CTE could be quoted-printable whereas the original is implicit 8bit.
Also the tests are different. The test_nonascii_as_string_without_cte test in #1977 doesn't actually test the issue in that it passes without the fix. The test_nonascii_as_string_without_content_type_and_cte test is improved here, and even though it doesn't fail without the fix, it is included for completeness.
Automerge-Triggered-By: @warsaw
It was moved out of the limited API in 7d95e40721.
This change re-enables it from 3.10, to avoid generating invalid extension modules for earlier versions.
Since c19c5a6, AIX builds have defaulted to using dynload_shlib over
dynload_aix when dlopen is available. This function has been available
since AIX 4.3, which went out of support in 2003, the same year the
previously referenced commit was made. It has been nearly 20 years
since a version of AIX has been supported which has not used
dynload_shlib so there's no reason to keep this legacy code around.
When running in a non-UTF-8 locale, if an error occurs while importing a
native Python module (say because a dependent shared library is missing),
the error message string returned may contain non-ASCII code points,
causing a UnicodeDecodeError.
PyUnicode_DecodeFSDefault is used for buffers which may contain
filesystem paths. For consistency with os.strerror(),
PyUnicode_DecodeLocale is used for buffers which contain system error
messages. While the shortname parameter is always encoded in ASCII
according to PEP 489, it is left decoded using PyUnicode_FromString to
minimize the changes and since it should not affect the decoding (albeit
_potentially_ slower).
In dynload_hpux, since the error buffer contains a message generated
from a static ASCII string and the module filesystem path,
PyUnicode_DecodeFSDefault is used instead of PyUnicode_DecodeLocale as
is used elsewhere.
* bpo-41894: Fix bugs in dynload error msg handling
For both dynload_aix and dynload_hpux, properly handle the possibility
that decoding strings may return NULL and when such an error happens,
properly decrement any previously decoded strings and return early.
In addition, in dynload_aix, ensure that we pass the decoded string
*object* pathname_ob to PyErr_SetImportError instead of the original
pathname buffer.
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>