Convert the _imp extension module to the multi-phase initialization
API (PEP 489).
* Add _PyImport_BootstrapImp(), which fixes a bootstrap issue: it imports
  the _imp module before importlib is initialized.
* Add create_builtin() sub-function, used by _imp_create_builtin().
* Initialize PyInterpreterState.import_func earlier, in
pycore_init_builtins().
* Remove references to _PyImport_Cleanup(). This function has been
renamed to finalize_modules() and moved to pylifecycle.c.
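For context, a minimal sketch of what the multi-phase initialization API
(PEP 489) looks like for a generic extension module; the "example" module
and its names are hypothetical, not the actual _imp code:

```c
#include <Python.h>

static int
example_exec(PyObject *module)
{
    /* Per-module initialization (constants, types, ...) goes here. */
    return 0;
}

static PyModuleDef_Slot example_slots[] = {
    {Py_mod_exec, example_exec},
    {0, NULL},
};

static struct PyModuleDef example_def = {
    PyModuleDef_HEAD_INIT,
    .m_name = "example",
    .m_size = 0,
    .m_slots = example_slots,
};

PyMODINIT_FUNC
PyInit_example(void)
{
    /* Multi-phase init: return the definition, not a ready module object. */
    return PyModuleDef_Init(&example_def);
}
```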
This change partially reverts
commits ad3252bad9 and fe2978b3b9.
Many third-party C extension modules rely on the ability to use Py_TYPE()
to set an object's type ("Py_TYPE(obj) = type;") or Py_SIZE() to set an
object's size ("Py_SIZE(obj) = size;").
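A minimal illustration of both patterns (not code from the change itself);
Foo_Type and fixup_object() are hypothetical, and the Py_SET_TYPE()/
Py_SET_SIZE() setters added in Python 3.9 are shown for comparison:

```c
#include <Python.h>

/* Hypothetical extension type, used only for illustration. */
static PyTypeObject Foo_Type;

static void
fixup_object(PyObject *obj, Py_ssize_t size)
{
    /* The legacy pattern kept working by this change: Py_TYPE() and
       Py_SIZE() stay usable as l-values. */
    Py_TYPE(obj) = &Foo_Type;
    Py_SIZE(obj) = size;

    /* The equivalent setters available since Python 3.9. */
    Py_SET_TYPE(obj, &Foo_Type);
    Py_SET_SIZE(obj, size);
}
```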
* Add signal_add_constants() function and add ADD_INT_MACRO macro.
* The Python SIGINT handler is now installed at the end of
signal_exec().
* Use Py_NewRef().
bpo-41686, bpo-41713: On Windows, the SIGINT event,
_PyOS_SigintEvent(), is now created even if Python is configured to
not install signal handlers (PyConfig.install_signal_handlers=0 or
Py_InitializeEx(0)).
Changes:
* Move global variables initialization from signal_exec() to
_PySignal_Init() to clarify that they are global variables cleared
by _PySignal_Fini().
* _PySignal_Fini() now closes sigint_event.
* IntHandler is no longer a global variable.
Remove the undocumented PyOS_InitInterrupts() C function.
* Rename PyOS_InitInterrupts() to _PySignal_Init(). It now installs
other signal handlers, not only SIGINT.
* Rename PyOS_FiniInterrupts() to _PySignal_Fini().
As AIX 5.3 and below do not support thread_cputime, it was decided in
https://bugs.python.org/issue40680 to require AIX 6.1 and above. This
commit removes workarounds for — and references to — older, unsupported
AIX versions.
The time.time(), time.perf_counter() and time.monotonic() functions can
no longer fail with a Python fatal error; instead, they raise a regular
Python exception on failure.
Remove _PyTime_Init(): don't check system, monotonic and perf counter
clocks at startup anymore.
On error, _PyTime_GetSystemClock(), _PyTime_GetMonotonicClock() and
_PyTime_GetPerfCounter() now silently ignore the error and return 0.
They cannot fail with a Python fatal error anymore.
Add py_mach_timebase_info() and win_perf_counter_frequency()
sub-functions.
Explicitly cast PyExc_Exception to PyTypeObject* to fix the warning:
modules\_ctypes\_ctypes.c(5748): warning C4133: '=':
incompatible types - from 'PyObject *' to '_typeobject *'
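A sketch of the kind of cast involved; MyError_Type is hypothetical and the
assignment is illustrative, not the exact statement from _ctypes.c:

```c
#include <Python.h>

/* Hypothetical exception type used only to show the cast. */
static PyTypeObject MyError_Type;

static void
set_exception_base(void)
{
    /* PyExc_Exception is declared as PyObject *, so assigning it to a
       PyTypeObject * field needs an explicit cast to avoid C4133. */
    MyError_Type.tp_base = (PyTypeObject *)PyExc_Exception;
}
```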
It is no longer possible to build the _ctypes extension module
without the wchar_t type: remove the CTYPES_UNICODE macro. The
wchar_t type is required to build Python anyway.
Fix reference leaks in the error path of the initialization function
of the _ctypes extension module: call Py_DECREF(mod) on error.
Change PyCFuncPtr_Type name from _ctypes.PyCFuncPtr to
_ctypes.CFuncPtr to be consistent with the name exposed in the
_ctypes namespace (_ctypes.CFuncPtr).
Split PyInit__ctypes() function into sub-functions and add macros for
readability.
Co-authored-by: Lawrence D’Anna <lawrence_danna@apple.com>
* Add support for macOS 11 and Apple Silicon (aka arm64)
As a side effect of this work, use the system copy of libffi on macOS and remove the vendored copy
* Support building on recent versions of macOS while deploying to older versions
This allows building installers on macOS 11 while still supporting macOS 10.9.
The PyConfig_Read() function now only parses PyConfig.argv arguments
once: PyConfig.parse_argv is set to 2 after arguments are parsed.
Since Python arguments are stripped from PyConfig.argv, parsing
arguments twice would parse the application options as Python
options.
* Rework the PyConfig documentation.
* Fix _testinternalcapi.set_config() error handling.
* SetConfigTests no longer needs parse_argv=0 when restoring the old
configuration.
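A minimal embedding sketch of the flow described above, assuming a standard
PyConfig setup with abbreviated error handling:

```c
#include <Python.h>

int
main(int argc, char **argv)
{
    PyStatus status;
    PyConfig config;
    PyConfig_InitPythonConfig(&config);

    /* Let PyConfig_Read() parse Python options out of argv. */
    config.parse_argv = 1;
    status = PyConfig_SetBytesArgv(&config, argc, argv);
    if (PyStatus_Exception(status)) {
        goto fail;
    }

    /* After this call, PyConfig.parse_argv is set to 2 and config.argv
       holds only the application arguments, so reading the configuration
       again does not treat them as Python options. */
    status = PyConfig_Read(&config);
    if (PyStatus_Exception(status)) {
        goto fail;
    }

    status = Py_InitializeFromConfig(&config);
    if (PyStatus_Exception(status)) {
        goto fail;
    }
    PyConfig_Clear(&config);
    return Py_RunMain();

fail:
    PyConfig_Clear(&config);
    Py_ExitStatusException(status);
}
```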
* Rename _Py_GetLocaleEncoding() to _Py_GetLocaleEncodingObject()
* Add _Py_GetLocaleEncoding() which returns a wchar_t* string to
share code between _Py_GetLocaleEncodingObject()
and config_get_locale_encoding().
* _Py_GetLocaleEncodingObject() now decodes nl_langinfo(CODESET)
from the current locale encoding with surrogateescape,
rather than using UTF-8.
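A rough illustration of that decoding step using only public C API calls;
the helper name is hypothetical:

```c
#include <Python.h>
#include <langinfo.h>

/* Hypothetical helper mirroring the behaviour described above: decode
   nl_langinfo(CODESET) from the current locale encoding with the
   surrogateescape error handler. */
static PyObject *
locale_encoding_name(void)
{
    const char *codeset = nl_langinfo(CODESET);
    if (codeset == NULL || codeset[0] == '\0') {
        PyErr_SetString(PyExc_ValueError, "CODESET is not set or empty");
        return NULL;
    }
    return PyUnicode_DecodeLocale(codeset, "surrogateescape");
}
```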
* bpo-42146: Unify cleanup in subprocess_fork_exec()
Also ignore errors from _enable_gc():
* They are always suppressed by the current code due to a bug.
* _enable_gc() is only used if `preexec_fn != None`, which is unsafe.
* We don't have a good way to handle errors in case we successfully
created a child process.
Co-authored-by: Gregory P. Smith <greg@krypto.org>
* Add a new _locale._get_locale_encoding() function to get the
current locale encoding.
* Modify locale.getpreferredencoding() to use it.
* Remove the _bootlocale module.
_io.TextIOWrapper no longer calls _bootlocale.getpreferredencoding(False)
to get the locale encoding; it calls _Py_GetLocaleEncoding() instead.
Add a config_get_fs_encoding() sub-function. Also reorganize the
config_get_locale_encoding() code.
The last GC collection is now done before clearing the builtins and sys
dictionaries. Also add assertions to ensure that gc.collect() is no
longer called after _PyGC_Fini().
Also pass the tstate to PyInterpreterState_Clear() so that the correct
tstate is passed to _PyGC_CollectNoFail() and _PyGC_Fini().
Move private _PyGC_CollectNoFail() to the internal C API.
Remove the private _PyGC_CollectIfEnabled() which was just an alias
to the public PyGC_Collect() function since Python 3.8.
Rename functions:
* collect() => gc_collect_main()
* collect_with_callback() => gc_collect_with_callback()
* collect_generations() => gc_collect_generations()
Use _PyLong_GetZero() and _PyLong_GetOne() in Modules/ directory.
_cursesmodule.c and zoneinfo.c are now built with
Py_BUILD_CORE_MODULE macro defined.
Removed the unicodedata.ucnhash_CAPI attribute which was an internal
PyCapsule object. The related private _PyUnicode_Name_CAPI structure
was moved to the internal C API.
Rename unicodedata.ucnhash_CAPI to unicodedata._ucnhash_CAPI.
Convert the unicodedata extension module to the multiphase
initialization API (PEP 489) and convert the unicodedata.UCD static
type to a heap type.
Co-Authored-By: Mohamed Koubaa <koubaa.m@gmail.com>
* UCD_Check() uses PyModule_Check()
* Simplify the internal _PyUnicode_Name_CAPI structure:
* Remove size and state members
* Remove state and self parameters of getcode() and getname()
functions
* Remove global_module_state
The private _PyUnicode_Name_CAPI structure of the PyCapsule API
unicodedata.ucnhash_CAPI moves to the internal C API. Moreover, the
structure gets a new state member which must be passed to the
getcode() and getname() functions.
* Move Include/ucnhash.h to Include/internal/pycore_ucnhash.h
* unicodedata module is now built with Py_BUILD_CORE_MODULE.
* unicodedata: move hashAPI variable into unicodedata_module_state.
If PyDict_GetItemWithError() is only used to check whether the key is in a
dict, it is better to use PyDict_Contains() instead.
If it is used in combination with PyDict_SetItem(), PyDict_SetDefault() can
replace the combination.
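A small sketch of the two rewrites suggested above; the helper names and
arguments are hypothetical:

```c
#include <Python.h>

/* Membership test: prefer PyDict_Contains() over fetching the value. */
static int
has_key(PyObject *dict, PyObject *key)
{
    return PyDict_Contains(dict, key);   /* 1, 0, or -1 on error */
}

/* "Get or insert a default" pattern: PyDict_SetDefault() replaces a
   PyDict_GetItemWithError() + PyDict_SetItem() combination.  It returns
   a borrowed reference to the existing or newly inserted value. */
static PyObject *
get_or_insert(PyObject *dict, PyObject *key, PyObject *default_value)
{
    return PyDict_SetDefault(dict, key, default_value);
}
```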
These functions are considered unsafe because they suppress all internal
errors and can return a wrong result. PyDict_GetItemString() and
_PyDict_GetItemId() can also silence the current exception in rare cases.
Remove the no longer used _PyDict_GetItemId().
Add _PyDict_ContainsId() and rename _PyDict_Contains() to
_PyDict_Contains_KnownHash().
Fix memory leak in subprocess.Popen() in case of uid/gid overflow
Also add a test that would catch this leak with `--huntrleaks`.
Alas, the test for `extra_groups` also exposes an inconsistency
in our error reporting: we use a custom ValueError for `extra_groups`,
but propagate OverflowError for `user` and `group`.
It should just be a syscall updating a couple of fields in the kernel-side
process info. Confirming this, in glibc it appears to be a shim for the setsid
syscall (based on not finding any code implementing anything special for it),
and in uclibc (*much* easier to read) it is clearly just a setsid syscall shim.
A breadcrumb _suggesting_ that it is not allowed on Darwin/macOS comes from
a commit in emacs: https://lists.gnu.org/archive/html/bug-gnu-emacs/2017-04/msg00297.html
but I don't have a way to verify whether that is true.
As we are not supporting vfork on macOS today, I just left a note in a comment.
Using POSIX_CALL() is incorrect since pthread_sigmask() returns
the error number instead of setting errno.
Also handle failure of the first call to pthread_sigmask()
in the parent process, and explain in a comment why we don't handle
failure of the second call.
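A brief sketch of that error-handling detail; the surrounding helper is
hypothetical:

```c
#include <Python.h>
#include <errno.h>
#include <signal.h>

/* Hypothetical helper: block all signals, reporting errors correctly.
   pthread_sigmask() returns the error number and does NOT set errno,
   so an errno-based POSIX_CALL()-style check would be wrong. */
static int
block_all_signals(sigset_t *old_mask)
{
    sigset_t all;
    sigfillset(&all);

    int err = pthread_sigmask(SIG_SETMASK, &all, old_mask);
    if (err != 0) {
        errno = err;
        PyErr_SetFromErrno(PyExc_OSError);
        return -1;
    }
    return 0;
}
```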
* bpo-35823: subprocess: Use vfork() instead of fork() on Linux when safe
When used to run a new executable image, fork() is not a good choice
for process creation, especially if the parent has a large working set:
fork() needs to copy page tables, which is slow, and may fail on systems
where overcommit is disabled, even though the child is not going to
touch most of its address space.
Currently, subprocess is capable of using posix_spawn() instead, which
normally provides much better performance. However, posix_spawn() does not
support many of the child setup operations exposed by subprocess.Popen().
Most notably, it's not possible to express `close_fds=True`, which
happens to be the default, via posix_spawn(). As a result, most users
can't benefit from faster process creation, at least not without
changing their code.
However, Linux provides the vfork() system call, which creates a new process
without copying the address space of the parent, and which is actually
used by C libraries to efficiently implement posix_spawn(). Due to sharing
of the address space and even the stack with the parent, extreme care
is required to use vfork(). At least the following restrictions must hold:
* No signal handlers must execute in the child process. Otherwise, they
might clobber memory shared with the parent, potentially confusing it.
* Any library function called after vfork() in the child must be
async-signal-safe (as for fork()), but it must also not interact with any
library state in a way that might break due to address space sharing
and/or lack of any preparations performed by libraries on normal fork().
POSIX.1 permits calling only execve() and _exit(), and later revisions
remove vfork() specification entirely. In practice, however, almost all
operations needed by subprocess.Popen() can be safely implemented on
Linux.
* Due to sharing of the stack with the parent, the child must be careful
not to clobber local variables that are alive across the vfork() call.
Compilers are normally aware of this and take extra care with vfork()
(and setjmp(), which has a similar problem).
* In case the parent is privileged, special attention must be paid to vfork()
use, because sharing an address space across different privilege domains
is insecure[1].
This patch adds support for using vfork() instead of fork() on Linux
when it's possible to do safely given the above. In particular:
* vfork() is not used if a credential switch is requested. The reverse case
(simple subprocess.Popen() but another application thread switches
credentials concurrently) is not possible for pure-Python apps because
subprocess.Popen() and functions like os.setuid() are mutually excluded
via the GIL. We might also consider adding a way to opt out of vfork() (and
posix_spawn() on platforms where it might be implemented via vfork()) in
a future PR.
* vfork() is not used if `preexec_fn != None`.
With this change, subprocess will still use posix_spawn() if possible, but
will fall back to vfork() on Linux in most cases and, failing that,
to fork().
[1] https://ewontfix.com/7
Co-authored-by: Gregory P. Smith [Google LLC] <gps@google.com>
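A bare-bones sketch of the vfork()/exec pattern these restrictions describe
(not the actual subprocess code; the helper and its arguments are
placeholders):

```c
#include <unistd.h>

/* Minimal vfork()/execve() pattern: the child makes only async-signal-safe
   calls, does not modify locals shared with the parent, and either exec()s
   or _exit()s.  path/argv/envp are supplied by the caller. */
static pid_t
spawn_child(const char *path, char *const argv[], char *const envp[])
{
    pid_t pid = vfork();
    if (pid == 0) {
        /* Child: shares the parent's address space until execve()/_exit(). */
        execve(path, argv, envp);
        _exit(127);   /* Reached only if execve() failed. */
    }
    /* Parent: vfork() returns only after the child has exec'ed or exited. */
    return pid;
}
```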
* Add F_SETPIPE_SZ and F_GETPIPE_SZ to fcntl module
* Add pipesize parameter for subprocess.Popen class
This will allow the user to control the size of the pipes.
On Linux the default is 64 KiB. When a pipe is full it blocks for writing.
When a pipe is empty it blocks for reading. For processes that are
very fast this can lead to a lot of wasted CPU cycles. On a typical
Linux system the max pipe size is 1024 KiB, which is much better.
For high performance-oriented libraries such as xopen it is nice to
be able to set the pipe size.
The workaround without this feature is to use my_popen_process.stdout.fileno() in
conjunction with fcntl and 1031 (the value of F_SETPIPE_SZ) to achieve this behavior.
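A small standalone sketch of the fcntl-based resizing described above
(Linux-only; the kernel may round the requested size up):

```c
#define _GNU_SOURCE          /* for F_SETPIPE_SZ / F_GETPIPE_SZ */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    int fds[2];
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }
    /* Request a 1 MiB pipe buffer instead of the 64 KiB default. */
    if (fcntl(fds[1], F_SETPIPE_SZ, 1024 * 1024) < 0) {
        perror("F_SETPIPE_SZ");
    }
    printf("pipe buffer size: %d bytes\n", fcntl(fds[1], F_GETPIPE_SZ));
    close(fds[0]);
    close(fds[1]);
    return 0;
}
```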
Prepare unicodedata to add a state per module: start with a global
"module" state, pass it to subfunctions which access &UCD_Type. This
change also prepares the conversion of the UCD_Type static type to a
heap type.
This API is relatively lightweight and, given that it is used by multiple
modules, it organizationally makes sense to move it to fileutils.
This requires making sure that _posixsubprocess is compiled with the
appropriate Py_BUILD_CORE_BUILTIN macro.
This suppression is no longer needed in os_closerange_impl, as it just
invokes the internal _Py_closerange implementation. On the other hand,
consumers of _Py_closerange may not have any other reason to suppress
invalid parameter issues, so narrow the scope to here.
close_range(2) should be preferred at all times if it's available; otherwise
we'll use closefrom(2) if available, with a fallback to fdwalk(3) or a plain
old loop over the fd range, in order of most efficient to least.
[Note that this version does check for ENOSYS, but currently ignores all
other errors.]
Automerge-Triggered-By: @pablogsal
Such an API can be used both for os.closerange and subprocess. For the
latter, this yields a potential improvement for platforms that have fdwalk
but wouldn't have used it there. This will prove even more beneficial later
for platforms that have close_range(2), as the new API will prefer that over
all else if it's available.
The new API is structured to look more like close_range(2), closing the
inclusive range [start, end] rather than the [low, high) of os.closerange().
Automerge-Triggered-By: @gpshead
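A heavily simplified sketch of the inclusive-range fallback logic described
above; the helper is hypothetical, not the actual _Py_closerange code, and
HAVE_CLOSE_RANGE stands in for the configure-time check:

```c
#include <unistd.h>

/* Close every fd in [first, last], both ends inclusive.  Prefer the
   close_range(2) syscall when configure detected it (HAVE_CLOSE_RANGE);
   otherwise fall back to a plain loop.  A real implementation would also
   try closefrom(3)/fdwalk(3) where available. */
static void
close_fd_range(unsigned int first, unsigned int last)
{
#ifdef HAVE_CLOSE_RANGE
    if (close_range(first, last, 0) == 0) {
        return;
    }
    /* e.g. ENOSYS on an older kernel: fall through to the loop. */
#endif
    for (unsigned int fd = first; fd <= last; fd++) {
        close(fd);
    }
}
```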
* bpo-26680: Adds support for int.is_integer() for compatibility with float.is_integer().
The int.is_integer() method always returns True.
* bpo-26680: Adds a test to ensure that False.is_integer() and True.is_integer() are always True.
* bpo-26680: Adds Real.is_integer() with a trivial implementation using conversion to int.
This default implementation is intended to reduce the workload for subclass
implementers. It is not robust in the presence of infinities or NaNs and
may have suboptimal performance for other types.
* bpo-26680: Adds Rational.is_integer which returns True if the denominator is one.
This implementation assumes the Rational is represented in its
lowest form, as required by the class docstring.
* bpo-26680: Adds Integral.is_integer which always returns True.
* bpo-26680: Adds tests for Fraction.is_integer called as an instance method.
The tests for the Rational abstract base class use an unbound
method to sidestep the inability to directly instantiate Rational.
These tests check that everything works correctly as an instance method.
* bpo-26680: Updates documentation for Real.is_integer and built-ins int and float.
The call x.is_integer() is now listed in the table of operations
which apply to all numeric types except complex, with a reference
to the full documentation for Real.is_integer(). Mention of
is_integer() has been removed from the section 'Additional Methods
on Float'.
The documentation for Real.is_integer() describes its purpose, and
mentions that it should be overridden for performance reasons, or
to handle special values like NaN.
* bpo-26680: Adds Decimal.is_integer to the Python and C implementations.
The C implementation of Decimal already implements and uses
mpd_isinteger internally; we just expose the existing function to
Python.
The Python implementation uses internal conversion to integer
using to_integral_value().
In both cases, the corresponding context methods are also
implemented.
Tests and documentation are included.
* bpo-26680: Updates the ACKS file.
* bpo-26680: NEWS entries for int, the numeric ABCs and Decimal.
Co-authored-by: Robert Smallshire <rob@sixty-north.com>
Use _PyType_HasFeature() in the _io module and in structseq
implementation. Replace PyType_HasFeature() opaque function call with
_PyType_HasFeature() inlined function.