* bpo-26836: Add os.memfd_create()
* Use the glibc wrapper for memfd_create()
Co-Authored-By: Christian Heimes <christian@python.org>
* Fix deletions caused by autoreconf.
* Use MFD_CLOEXEC as the default value for *flags*.
* Add memset_s to configure.ac.
* Revert memset_s changes.
* Apply the requested changes.
* Tweak the docs.
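For reference, a minimal sketch of the new call from Python (assuming Linux >= 3.17 with a glibc that provides memfd_create(), and Python 3.8+):

```
# Minimal sketch: create an anonymous in-memory file, write to it, read back.
import os

fd = os.memfd_create("payload")        # flags default to os.MFD_CLOEXEC
try:
    os.write(fd, b"hello from memfd")
    os.lseek(fd, 0, os.SEEK_SET)
    print(os.read(fd, 64))
finally:
    os.close(fd)
```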
It is also possible to link against a library or executable with a
statically linked libpython, but not both with the same DLL. In fact,
building a statically linked Python is currently broken on Cygwin for
other (related) reasons.
The same problem applies to other POSIX-like layers over Windows
(MinGW, MSYS) but Python's build system does not seem to attempt
to support those platforms at the moment.
To embed Python into an application, a new --embed option must be
passed to "python3-config --libs --embed" to get "-lpython3.8" (link
the application to libpython). To support both 3.8 and older, try
"python3-config --libs --embed" first and fallback to "python3-config
--libs" (without --embed) if the previous command fails.
Add a pkg-config "python-3.8-embed" module to embed Python into an
application: "pkg-config python-3.8-embed --libs" includes
"-lpython3.8". To support both 3.8 and older, try "pkg-config
python-X.Y-embed --libs" first and fall back to "pkg-config python-X.Y
--libs" (without --embed) if the previous command fails (replace
"X.Y" with the Python version).
On the other hand, "pkg-config python3.8 --libs" no longer contains
"-lpython3.8". C extensions must not be linked to libpython (except
on Android, case handled by the script); this change is backward
incompatible on purpose.
"make install" now also installs "python-3.8-embed.pc".
Under some conditions, the earlier fix for bpo-18075 ("Infinite recursion
tests triggering a segfault on Mac OS X") now causes failures on macOS
when attempting to change the stack limit with resource.setrlimit() and
resource.RLIMIT_STACK, as regrtest does when running the test suite.
The reverted change had specified a non-default stack size when linking
the Python executable on macOS. As of macOS 10.14.4, the previous code
causes a hard failure when running tests, although similar failures had
been seen under some conditions on some earlier systems. For now, revert
the original change and resume using the default stack size when linking
the interpreter.
Release builds and debug builds are now ABI compatible: the Py_DEBUG
define no longer implies the Py_TRACE_REFS define, which introduced the
only ABI incompatibility.
A new "./configure --with-trace-refs" build option is now required to get
the Py_TRACE_REFS define, which adds the sys.getobjects() function and the
PYTHONDUMPREFS environment variable.
Changes:
* Add ./configure --with-trace-refs
* Py_DEBUG no longer implies Py_TRACE_REFS
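Code that relies on the reference-tracing helpers should now probe for them at runtime, since they exist only in a --with-trace-refs build; a small sketch:

```
import sys

# sys.getobjects() is only present when Python was configured with
# --with-trace-refs (previously: in any Py_DEBUG build).
if hasattr(sys, "getobjects"):
    print(len(sys.getobjects(0)), "tracked objects")
else:
    print("not a --with-trace-refs build")
```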
"./configure --with-pymalloc" no longer adds the "m" flag to SOABI
(sys.implementation.cache_tag).
Enabling or disabling pymalloc has no impact on the ABI.
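The difference is visible at runtime; a quick way to inspect both values (the values in the comment are examples for a Linux x86-64 build):

```
import sys
import sysconfig

# With 3.8 the "m" marker is gone, e.g. "cpython-38" and
# "cpython-38-x86_64-linux-gnu" instead of "cpython-37m" and
# "cpython-37m-x86_64-linux-gnu".
print(sys.implementation.cache_tag)
print(sysconfig.get_config_var("SOABI"))
```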
Add -fmax-type-align=8 to CFLAGS when the clang compiler is detected.
The pymalloc memory allocator aligns memory on 8 bytes. On x86-64, clang
expects alignment on 16 bytes by default and so uses the MOVAPS
instruction, which can lead to a segmentation fault. Instruct clang that
Python only guarantees alignment on 8 bytes, so that it uses the MOVUPS
instruction instead: slower, but it does not trigger a SIGSEGV if the
memory is not aligned on 16 bytes.
Sadly, the flag must be added to CFLAGS and not just CFLAGS_NODIST, since
third-party C extensions can have the same issue.
On AIX, sys.platform doesn't contain the major version anymore.
Always return 'aix', instead of 'aix3' .. 'aix7'. Since
older Python versions include the version number, it is recommended to
always use sys.platform.startswith('aix').
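For example:

```
import sys

# Matches both the old "aix3".."aix7" values and the new bare "aix".
if sys.platform.startswith("aix"):
    print("running on AIX")
```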
Per POSIX, `nice(3)` requires `unistd.h` and `exit(3)` requires `stdlib.h`.
Fixing the test will prevent false positives with pedantic compilers like clang.
Use autoconf to probe for shm_open() and shm_unlink(). Set SHM_NEEDS_LIBRT
if we must link with librt to get the shm_* functions. Change setup.py to
use the autoconf defines. These changes should make it more likely that
_multiprocessing/posixshmem.c gets built correctly on different platforms.
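For context, posixshmem.c backs the new multiprocessing.shared_memory module; a quick smoke test of the built module (assuming a POSIX platform where shm_open()/shm_unlink() are available):

```
from multiprocessing import shared_memory

# Create a small POSIX shared memory block, write to it and clean up.
shm = shared_memory.SharedMemory(create=True, size=16)
try:
    shm.buf[:5] = b"hello"
    print(bytes(shm.buf[:5]))
finally:
    shm.close()
    shm.unlink()
```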
Use crypt_r() when available instead of crypt() in the crypt module.
As a nice side effect, this also avoids a memory sanitizer flake, as clang
MSan doesn't know about crypt()'s internal libc-allocated buffer.
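The Python-level API is unchanged; whether crypt_r() or crypt() is used underneath is an implementation detail. A short usage sketch of the affected module (Unix-only; METHOD_SHA512 availability depends on the platform's libcrypt):

```
import crypt

# Hash a password, then verify it by re-hashing with the stored hash as salt.
salt = crypt.mksalt(crypt.METHOD_SHA512)
hashed = crypt.crypt("s3kr3t", salt)
print(hashed)
print(crypt.crypt("s3kr3t", hashed) == hashed)
```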
When compiling third-party C extensions, the linker flags used by the
compiler for the interpreter and the stdlib modules would get leaked into
distutils. In order to avoid that, PY_CORE_LDFLAGS and PY_LDFLAGS_NODIST
are introduced to keep those flags separated.
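The split can be inspected through sysconfig (a sketch; whether each variable is present depends on the platform and Python version):

```
import sysconfig

# LDFLAGS is what distutils and third-party builds inherit; the PY_CORE_* /
# *_NODIST variants stay private to the interpreter and stdlib builds.
for var in ("LDFLAGS", "PY_LDFLAGS_NODIST", "PY_CORE_LDFLAGS"):
    print(var, "=", sysconfig.get_config_var(var))
```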
When using link-time optimization, the -flto flag is passed to BASECFLAGS,
which makes it propagate to distutils. Those flags should be reserved for
the interpreter and the stdlib extension modules only, so they are moved
to CFLAGS_NODIST.
Adds configure flags for msan and ubsan builds to make them easier to
enable. These also encode the detail that address sanitizer and memory
sanitizer should disable pymalloc.
Defines MEMORY_SANITIZER when appropriate at build time and adds
workarounds to existing code to mark things as initialized where the
sanitizer is otherwise unable to determine that. This lets our build
succeed under the memory sanitizer. Not all tests pass without sanitizer
failures yet, but we're in pretty good shape after this.
.o files generated by clang in LTO mode are actually LLVM bitcode files,
which leads to a few errors during the configure/build step:
- add the LTO flags to BASECFLAGS instead of CFLAGS, as CFLAGS is used to
build the autoconf test cases, and some of them are not compatible with
clang LTO (they assume a binary in the .o, not bitcode)
- force llvm-ar instead of ar, as ar is not aware of .o files generated by
clang -flto
Currently configure.ac uses AC_RUN_IFELSE to determine the byte order of
doubles, but this silently fails under cross-compilation and Python then
doesn't handle floats properly.
Instead, steal a macro from autoconf-archive which compiles code using
magic doubles (which encode to ASCII) and greps for that representation in
the binary.
RFC because this doesn't yet handle the weird ancient ARMv4 OABI
'mixed-endian' encoding properly. This encoding is ancient and I don't
believe the intersection of "Python 3.8 users" and "OABI users" contains
anyone. Should support for this just be dropped too? Alternatively,
someone will need to find an OABI toolchain to verify the encoding of the
magic double.
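A rough runtime analogue of the trick, for illustration only (the real macro greps the compiled binary instead of running code, which is what makes it work when cross-compiling; the ASCII payload here is just an example):

```
import struct

# Build a double whose big-endian byte pattern is readable ASCII, then look
# at the order those bytes take in the native representation.
magic = struct.unpack(">d", b"noonsees")[0]
print(struct.pack("=d", magic))   # b"noonsees" or b"seesnoon", depending on
                                  # the platform's double byte/word order
```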
It is unused.
Release GIL on grp.getgrnam(), grp.getgrgid(), pwd.getpwnam() and
pwd.getpwuid() if reentrant variants of these functions are available.
Patch by William Grzybowski.
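An illustrative (Unix-only) use from threads, where the lookups can now overlap instead of serializing on the GIL:

```
import grp
import pwd
from concurrent.futures import ThreadPoolExecutor

# Resolve a batch of uids/gids concurrently; with the reentrant C variants
# the GIL is released while the C library performs each lookup.
with ThreadPoolExecutor() as pool:
    users = list(pool.map(pwd.getpwuid, [0] * 8))
    groups = list(pool.map(grp.getgrgid, [0] * 8))
print(users[0].pw_name, groups[0].gr_name)
```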
Introduce a configure check for strsignal(3), which defines HAVE_STRSIGNAL
for signalmodule.c. Add some common signals on HP-UX. This change applies
to Windows and HP-UX.
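This check feeds signal.strsignal() (new in 3.8); on platforms without a usable strsignal(3), descriptions for a few common signals are hard-coded. Example (the output strings vary by platform):

```
import signal

print(signal.strsignal(signal.SIGINT))    # e.g. "Interrupt"
print(signal.strsignal(signal.SIGTERM))   # e.g. "Terminated"
```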
bpo-32430: Rename Modules/Setup.dist to Modules/Setup
Remove the necessity to copy the former manually to the latter when updating the local source tree.