``Modules/Setup.stdlib`` contains ``Setup`` lines for all stdlib extension modules for which ``configure`` has detected their dependencies. The file is not used yet and is still under development. To use the file, do ``ln -sfr Modules/Setup.stdlib Modules/Setup.local``.
Instead we use $(PYTHON_FOR_REGEN) .../deepfreeze.py with the
frozen .h file as input, as we did for Windows in bpo-45850.
We also get rid of the code that generates the .h files
when make regen-frozen is run (i.e., .../make_frozen.py),
and the MANIFEST file.
Restore Python 3.8 and 3.9 as Windows host Python again
Co-authored-by: Kumar Aditya <59607654+kumaraditya303@users.noreply.github.com>
``configure`` now uses a standardized format to forward state, compiler
flags, and linker flags to ``Makefile``, ``setup.py``, and
``Modules/Setup``. ``makesetup`` uses the new variables by default if a
module line does not contain any compiler or linker flags. ``setup.py``
has a new function ``addext()``.
For a module ``egg``, configure adds:
* ``MODULE_EGG`` with value yes, missing, disabled, or n/a
* ``MODULE_EGG_CFLAGS``
* ``MODULE_EGG_LDFLAGS``
``Makefile.pre.in`` may also provide ``MODULE_EGG_DEPS`` that lists
dependencies such as header files and static libs.
Signed-off-by: Christian Heimes <christian@python.org>
Settings for :mod:`pyexpat` C extension are now detected by ``configure``.
The bundled ``expat`` library is built in ``Makefile``.
Signed-off-by: Christian Heimes <christian@python.org>
Settings for :mod:`decimal` internal C extension are now detected by
:program:`configure`. The bundled ``libmpdec`` library is built in
``Makefile``.
Signed-off-by: Christian Heimes <christian@python.org>
This gains 10% or more in startup time for `python -c pass` on UNIX-ish systems.
The Makefile.pre.in generating code builds on Eric's work for bpo-45020, but the .c file generator is new.
Windows version TBD.
Was commented out by Jack Jansen on 2001-08-15 in commit
b6e9cad34ce46a6a733d8aa5bf5b9d389fa1316f:
"Ripped out Next/OpenStep support, which was broken anyway"
Automerge-Triggered-By: GH:tiran
From the autoconf docs *Obsolete Macros* section:
Defined the output variable EXEEXT based on the output of the
compiler, which is now done automatically. Typically set to empty
string if Posix and ‘.exe’ if a DOS variant.
Almost all checks are now cached by AC_CACHE_CHECK().
Common patterns are replaced by helper macros.
Variable names now use naming scheme ``ac_cv_func_$funcname``,
``ac_cv_lib_$library_$funcname``, or ``ac_cv_header_$headername_h``.
``SYS_SELECT_WITH_SYS_TIME`` is no longer used.
``uuid_create`` and ``uuid_enc_be`` are provided by libc on BSD. It is
safe to use ``AC_CHECK_FUNCS`` here.
Caching speeds up ./configure -C from ~ 4s to 2.6s on my system.
Co-authored-by: Erlend Egeberg Aasland <erlend.aasland@innova.no>
Add Modules subdirs to SRCDIRS to generate directories for out-of-tree
object files.
Debian wants ncurses lib. Works on Fedora, too.
Debian also needs pkg-config to detect correct flags.
Remove more outdated comments. Makefile now tracks header dependencies.
-lintl is injected by configure when needed. Build _dbm with
gdbm-compat.
Group some modules by purpose. socket, select, and mmap work on Windows,
too.
The :mod:`math` and :mod:`cmath` implementations now require a C99 compatible
``libm`` and no longer ship with workarounds for missing acosh, asinh,
expm1, and log1p functions.
The changeset also removes ``_math.c`` and moves the last remaining
workaround into ``_math.h``. This simplifies static builds with
``Modules/Setup`` and resolves symbol conflicts.
Co-authored-by: Mark Dickinson <mdickinson@enthought.com>
Co-authored-by: Brett Cannon <brett@python.org>
Signed-off-by: Christian Heimes <christian@python.org>
Freelists for object structs can now be disabled. A new ``configure``
option ``--without-freelists`` can be used to disable all freelists
except the empty tuple singleton. The internal Py*_MAXFREELIST macros can
now be defined as 0 without causing compiler warnings or segfaults.
Signed-off-by: Christian Heimes <christian@python.org>
Remove fallbacks for missing round(), copysign() and hypot() in
Python/pymath.c. Python now requires these functions to build.
These fallbacks were needed on Visual Studio 2012 and older. They are
no longer needed since Visual Studio 2013. Python is now built with
Visual Studio 2017 or newer since Python 3.6.
Building Python now requires a C99 <math.h> header file providing
isinf(), isnan() and isfinite() functions.
Remove the Py_FORCE_DOUBLE() macro. It was used by the
Py_IS_INFINITY() macro.
Changes:
* Remove Py_IS_NAN(), Py_IS_INFINITY() and Py_IS_FINITE()
in PC/pyconfig.h.
* Remove the _Py_force_double() function.
* configure no longer checks if math.h defines isinf(), isnan() and
isfinite().
Change the configure logic to function properly on macOS when the compiler
outputs a platform triplet for option --print-multiarch.
Co-authored-by: Ned Deily <nad@python.org>
For example, without the guard the check could cause macOS
installer builds to fail to install on older supported macOS
releases where libnetwork is not available and is not needed
on any release.
On Unix, if the sem_clockwait() function is available in the C
library (glibc 2.30 and newer), the threading.Lock.acquire() method
now uses the monotonic clock (time.CLOCK_MONOTONIC) for the timeout,
rather than using the system clock (time.CLOCK_REALTIME), to not be
affected by system clock changes.
configure now checks if the sem_clockwait() function is available.
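For illustration, a minimal sketch of the kind of call the new timeout path relies on; the helper name is hypothetical and this is not CPython's actual Lock code:

```c
/* Hedged sketch: wait on a POSIX semaphore with a timeout measured on the
   monotonic clock via sem_clockwait() (glibc 2.30+). Illustrative only. */
#define _GNU_SOURCE
#include <semaphore.h>
#include <time.h>
#include <errno.h>

static int wait_monotonic(sem_t *sem, long timeout_ns)
{
    struct timespec deadline;
    clock_gettime(CLOCK_MONOTONIC, &deadline);      /* current monotonic time */
    deadline.tv_sec  += timeout_ns / 1000000000L;
    deadline.tv_nsec += timeout_ns % 1000000000L;
    if (deadline.tv_nsec >= 1000000000L) {
        deadline.tv_sec += 1;
        deadline.tv_nsec -= 1000000000L;
    }
    /* Unlike sem_timedwait(), the deadline is interpreted against the given
       clock, so changing the system clock does not affect the timeout. */
    while (sem_clockwait(sem, CLOCK_MONOTONIC, &deadline) == -1) {
        if (errno != EINTR)
            return -1;                              /* ETIMEDOUT or other error */
    }
    return 0;
}
```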
On Unix operating systems, time.sleep() now uses the clock_nanosleep() function,
if available, which allows sleeping for an interval specified with nanosecond precision.
Co-authored-by: Victor Stinner <vstinner@python.org>
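A minimal sketch of such a call, using a relative interval on the monotonic clock; this is illustrative and not the actual time.sleep() implementation:

```c
/* Hedged sketch: sleep for a number of nanoseconds with clock_nanosleep(),
   resuming after signal interruptions. Illustrative only. */
#define _POSIX_C_SOURCE 200809L
#include <time.h>
#include <errno.h>

static int sleep_ns(long ns)
{
    struct timespec req = { ns / 1000000000L, ns % 1000000000L }, rem;
    /* flags == 0 means the interval is relative; TIMER_ABSTIME would make
       it an absolute deadline instead. */
    while (clock_nanosleep(CLOCK_MONOTONIC, 0, &req, &rem) == EINTR) {
        req = rem;   /* continue with the remaining time after a signal */
    }
    return 0;
}
```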
Only complain if the config target is >= 10.3 and the current target is
< 10.3. The check was originally added to ensure that incompatible
LDSHARED flags are not used, because -undefined dynamic_lookup is
used when building for 10.3 and later, and is not supported on older OS
versions. Apart from that, there should be no problem in general
with using an older target.
Authored-by: Joshua Root <jmr@macports.org>
This allows reliably forcing macOS universal2 framework builds
to run under Rosetta 2 Intel-64 emulation on Apple Silicon Macs
if needed for testing or when universal2 wheels are not yet
available.
- Remove HAVE_X509_VERIFY_PARAM_SET1_HOST check
- Update hashopenssl to require OpenSSL 1.1.1
- multissltests only OpenSSL > 1.1.0
- ALPN is always supported
- SNI is always supported
- Remove deprecated NPN code. Python wrappers are no-op.
- ECDH is always supported
- Remove OPENSSL_VERSION_1_1 macro
- Remove locking callbacks
- Drop PY_OPENSSL_1_1_API macro
- Drop HAVE_SSL_CTX_CLEAR_OPTIONS macro
- SSL_CTRL_GET_MAX_PROTO_VERSION is always defined now
- security level is always available now
- get_num_tickets is available with TLS 1.3
- X509_V_ERR MISMATCH is always available now
- Always set SSL_MODE_RELEASE_BUFFERS
- X509_V_FLAG_TRUSTED_FIRST is always available
- get_ciphers is always supported
- SSL_CTX_set_keylog_callback is always available
- Update Modules/Setup with static link example
- Mention PEP in whatsnew
- Drop 1.0.2 and 1.1.0 from GHA tests
* Remove m68k-specific hack from ascii_decode
On m68k, the alignment of primitives is more relaxed, with 4-byte and
8-byte types only requiring 2-byte alignment, thus using sizeof(size_t)
does not work. Instead, use the portable alternative.
Note that this is a minimal fix that only relaxes the assertion; the
condition for when to use the optimised version remains overly strict.
Such issues will be fixed tree-wide in the next commit.
NB: In C11 we could use _Alignof(size_t) instead, but for compatibility
we use autoconf.
* Optimise string routines for architectures with non-natural alignment
C only requires that sizeof(x) is a multiple of alignof(x), not that the
two are equal. Thus anywhere where we optimise based on alignment we
should be using alignof(x) not sizeof(x).
This is more annoying than it would be in C11 where we could just use
_Alignof(x) (and alignof(x) in C++11), but since we still require only
C99 we must plumb the information all the way from autoconf through the
various typedefs and defines.
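A hedged sketch of what that plumbing boils down to: configure can compute the alignment (for example with AC_CHECK_ALIGNOF, which defines ALIGNOF_SIZE_T), and the C code then tests alignment rather than size. The macro names below are illustrative, not the actual CPython defines:

```c
/* Hedged sketch: check pointer alignment against the real alignment of
   size_t instead of its size. ALIGNOF_SIZE_T stands in for whatever
   define configure exports (AC_CHECK_ALIGNOF([size_t]) would produce it). */
#include <stddef.h>
#include <stdint.h>

#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
#  define SIZE_T_ALIGN _Alignof(size_t)       /* C11: ask the compiler */
#elif defined(ALIGNOF_SIZE_T)
#  define SIZE_T_ALIGN ALIGNOF_SIZE_T         /* C99: value from configure */
#else
#  define SIZE_T_ALIGN sizeof(size_t)         /* overly strict fallback */
#endif

static int is_size_t_aligned(const void *p)
{
    /* On m68k this is true for 2-byte-aligned pointers even though
       sizeof(size_t) is 4, so the optimised path can be taken. */
    return ((uintptr_t)p % SIZE_T_ALIGN) == 0;
}
```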
These functions were undocumented and excluded from the limited C
API.
Most names defined by these header files were not prefixed by "Py"
and so could create name conflicts. For example, Python-ast.h
defined a "Yield" macro which conflicted with the "Yield" name used
by the Windows <winbase.h> header.
Use the Python ast module instead.
* Move Include/asdl.h to Include/internal/pycore_asdl.h.
* Move Include/Python-ast.h to Include/internal/pycore_ast.h.
* Remove ast.h header file.
* pycore_symtable.h no longer includes Python-ast.h.
Add a new configure --without-static-libpython option to not build
the libpythonMAJOR.MINOR.a static library and not install the
python.o object file.
Fix smelly.py and stable_abi.py tools when libpython3.10.a is
missing.
In contrast to macOS, libedit is available as its own include file and
library on Linux systems to prevent file name clashes. So if both
libraries are available on the system, readline is currently chosen by
default; and if only libedit is available, it is not found at all. This
patch adds a way to link against libedit by adding the following
arguments to configure:
--with-readline link against libreadline (the default)
--with-readline=editline link against libeditline
--with-readline=no disable building the readline module
--without-readline (same)
The runtime detection of libedit vs. readline was already done in commit
7105319ada (2019-12-04, serge-sans-paille: "bpo-38634: Allow
non-apple build to cope with libedit (GH-16986)").
Fixes: GH-12076 ("bpo-13501 Build or disable readline with Editline")
Fixes: bpo-13501 ("Make libedit support more generic; port readline / libedit to FreeBSD")
Co-authored-by: Enji Cooper (ngie-eign)
Co-authored-by: Martin Panter (vadmium)
Co-authored-by: Robert Marshall (kellinm)
Add --with-wheel-pkg-dir=PATH option to the ./configure script. If
specified, the ensurepip module looks for setuptools and pip wheel
packages in this directory: if both are present, these wheel packages
are used instead of ensurepip bundled wheel packages.
Some Linux distribution packaging policies recommend against bundling
dependencies. For example, Fedora installs wheel packages in the
/usr/share/python-wheels/ directory and does not install the
ensurepip._bundled package.
ensurepip: Remove unused runpy import.
According to [bpo-42874](), some versions of grep do not support the `-q` and `-E` options. Although both options are used elsewhere in the configure script, this particular bit of validation can be achieved without them,
so there's no real harm in using a grep call with no flags.
It would be good to get some people taking advantage of the `--with-tzpath` argument in the wild to try this out. Local testing seems to indicate that this does the same thing, but I don't know that we have any buildbots using this option. Maybe @pablogsal?
Added --disable-test-modules option to the configure script:
don't build or install test modules.
Patch by Xavier de Gaye, Thomas Petazzoni and Peixing Xin.
Co-Authored-By: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Co-Authored-By: Xavier de Gaye <xdegaye@gmail.com>
Add pycore_atomic_funcs.h internal header file: similar to
pycore_atomic.h, but it does not require declaring variables as atomic.
Add _Py_atomic_size_get() and _Py_atomic_size_set() functions.
Now all platforms use a value for the "EXT_SUFFIX" build variable derived
from SOABI (for instance in FreeBSD, "EXT_SUFFIX" is now ".cpython-310d.so"
instead of ".so"). Previously only Linux, Mac and VxWorks were using a value
for "EXT_SUFFIX" that included "SOABI".
Co-authored-by: Pablo Galindo <pablogsal@gmail.com>
This is invalid in C99 and later and is an error with some compilers
(e.g. clang in Xcode 12), and can thus cause configure checks to
produce incorrect results.
As [bpo-38443]() says, the error message from configure when specifying --enable-universalsdk with a set of architectures that is not supported by the compiler is not very helpful. This PR explicitly checks whether the compiler works and bails out if it doesn't.
As AIX 5.3 and below do not support thread_cputime, it was decided in
https://bugs.python.org/issue40680 to require AIX 6.1 and above. This
commit removes workarounds for — and references to — older, unsupported
AIX versions.
Co-authored-by: Lawrence D’Anna <lawrence_danna@apple.com>
* Add support for macOS 11 and Apple Silicon (aka arm64)
As a side effect of this work use the system copy of libffi on macOS, and remove the vendored copy
* Support building on recent versions of macOS while deploying to older versions
This allows building installers on macOS 11 while still supporting macOS 10.9.
* bpo-35823: subprocess: Use vfork() instead of fork() on Linux when safe
When used to run a new executable image, fork() is not a good choice
for process creation, especially if the parent has a large working set:
fork() needs to copy page tables, which is slow, and may fail on systems
where overcommit is disabled, even though the child is not going to
touch most of its address space.
Currently, subprocess is capable of using posix_spawn() instead, which
normally provides much better performance. However, posix_spawn() does not
support many of the child setup operations exposed by subprocess.Popen().
Most notably, it's not possible to express `close_fds=True`, which
happens to be the default, via posix_spawn(). As a result, most users
can't benefit from faster process creation, at least not without
changing their code.
However, Linux provides the vfork() system call, which creates a new process
without copying the address space of the parent, and which is actually
used by C libraries to efficiently implement posix_spawn(). Due to sharing
of the address space and even the stack with the parent, extreme care
is required to use vfork(). At least the following restrictions must hold:
* No signal handlers must execute in the child process. Otherwise, they
might clobber memory shared with the parent, potentially confusing it.
* Any library function called after vfork() in the child must be
async-signal-safe (as for fork()), but it must also not interact with any
library state in a way that might break due to address space sharing
and/or lack of any preparations performed by libraries on normal fork().
POSIX.1 permits calling only execve() and _exit(), and later revisions
remove vfork() specification entirely. In practice, however, almost all
operations needed by subprocess.Popen() can be safely implemented on
Linux.
* Due to sharing of the stack with the parent, the child must be careful
not to clobber local variables that are alive across the vfork() call.
Compilers are normally aware of this and take extra care with vfork()
(and setjmp(), which has a similar problem).
* In case the parent is privileged, special attention must be paid to vfork()
use, because sharing an address space across different privilege domains
is insecure[1].
This patch adds support for using vfork() instead of fork() on Linux
when it's possible to do safely given the above. In particular:
* vfork() is not used if credential switch is requested. The reverse case
(simple subprocess.Popen() but another application thread switches
credentials concurrently) is not possible for pure-Python apps because
subprocess.Popen() and functions like os.setuid() are mutually exclusive
via the GIL. We might also consider adding a way to opt out of vfork() (and
posix_spawn() on platforms where it might be implemented via vfork()) in
a future PR.
* vfork() is not used if `preexec_fn != None`.
With this change, subprocess will still use posix_spawn() if possible, but
will fall back to vfork() on Linux in most cases, and, failing that,
to fork().
[1] https://ewontfix.com/7
Co-authored-by: Gregory P. Smith [Google LLC] <gps@google.com>
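For illustration, a stripped-down sketch of the vfork()-then-execve() pattern and why the restrictions above matter; this is not subprocess's actual child code:

```c
/* Hedged sketch: spawn a child with vfork() + execve(). The child shares
   the parent's address space and stack until execve()/_exit(), so it must
   only do async-signal-safe work and must never return. */
#include <unistd.h>
#include <sys/types.h>

static pid_t spawn(char *const argv[], char *const envp[])
{
    pid_t pid = vfork();
    if (pid == 0) {
        /* Child: do the minimum, then replace the image or bail out. */
        execve(argv[0], argv, envp);
        _exit(127);                 /* reached only if execve() failed */
    }
    /* Parent resumes here only after the child has exec'd or exited. */
    return pid;
}
```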
Since c19c5a6, AIX builds have defaulted to using dynload_shlib over
dynload_aix when dlopen is available. This function has been available
since AIX 4.3, which went out of support in 2003, the same year the
previously referenced commit was made. It has been nearly 20 years
since a version of AIX has been supported which has not used
dynload_shlib so there's no reason to keep this legacy code around.
close_range(2) should be preferred at all times if it's available; otherwise we'll use closefrom(2) if available, with a fallback to fdwalk(3) or a plain old loop over the fd range, in order from most efficient to least.
[note that this version does check for ENOSYS, but currently ignores all other errors]
Automerge-Triggered-By: @pablogsal
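A hedged sketch of that preference order (simplified, with the fdwalk(3) path omitted; the HAVE_* guard names are illustrative, not the module's actual code):

```c
/* Hedged sketch: close every fd >= lowfd, preferring close_range(2),
   then closefrom(2) where it exists, then a plain loop. */
#include <unistd.h>
#include <errno.h>

static void close_fds_from(int lowfd, int maxfd)
{
#ifdef HAVE_CLOSE_RANGE
    /* Linux 5.9+ / FreeBSD 12.2+: one syscall closes the whole range. */
    if (close_range((unsigned int)lowfd, ~0U, 0) == 0)
        return;
    /* On ENOSYS (or any other error here) fall through to slower paths. */
#endif
#ifdef HAVE_CLOSEFROM
    closefrom(lowfd);                 /* BSD/Solaris libc helper */
    (void)maxfd;
#else
    for (int fd = lowfd; fd <= maxfd; fd++)
        close(fd);                    /* plain old loop over the fd range */
#endif
}
```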
"make install" now uses the PLATLIBDIR variable for the destination
lib-dynload/ directory when ./configure --with-platlibdir is used.
Update --with-platlibdir comment in configure.
This reverts commit 0da5466650.
The commit is causing make failures on a FreeBSD buildbot.
Due to the imminent 3.9.0b1 cutoff, revert this commit for
now pending further investigation.
Add support to the configure script for OBJC and OBJCXX command-line options so that macOS builds can use the clang compiler for the macOS-specific Objective-C source files. This allows third-party compilers, like GNU gcc, to be used to build the rest of the project, since some of the Objective-C system header files cannot be compiled by GNU gcc.
Co-authored-by: Jeffrey Kintscher <websurfer@surf2c.net>
Co-authored-by: Terry Jan Reedy <tjreedy@udel.edu>
This is the initial implementation of PEP 615, the zoneinfo module,
ported from the standalone reference implementation (see
https://www.python.org/dev/peps/pep-0615/#reference-implementation for a
link, which has a more detailed commit history).
This includes (hopefully) all functional elements described in the PEP,
but documentation is found in a separate PR. This includes:
1. A pure python implementation of the ZoneInfo class
2. A C accelerated implementation of the ZoneInfo class
3. Tests with 100% branch coverage for the Python code (though C code
coverage is less than 100%).
4. A compile-time configuration option on Linux (though not on Windows)
Differences from the reference implementation:
- The module is arranged slightly differently: the accelerated module is
`_zoneinfo` rather than `zoneinfo._czoneinfo`, which also necessitates
some changes in the test support function. (Suggested by Victor
Stinner and Steve Dower.)
- The tests are arranged slightly differently and do not include the
property tests. The tests live at test/test_zoneinfo/test_zoneinfo.py
rather than test/test_zoneinfo.py or test/test_zoneinfo/__init__.py
because we may do some refactoring in the future that would likely
require this separation anyway; we may:
- include the property tests
- automatically run all the tests against both pure Python and C,
rather than manually constructing C and Python test classes (similar
to the way this works with test_datetime.py, which generates C
and Python test cases from datetimetester.py).
- This includes a compile-time configuration option on Linux (though not
on Windows); added with much help from Thomas Wouters.
- Integration into the CPython build system is obviously different from
building a standalone zoneinfo module wheel.
- This includes configuration to install the tzdata package as part of
CI, though only on the coverage jobs. Introducing a PyPI dependency as
part of the CI build was controversial, and this is seen as less of a
major change, since the coverage jobs already depend on pip and PyPI.
Additional changes that were introduced as part of this PR, most / all of
which were backported to the reference implementation:
- Fixed reference and memory leaks
With much debugging help from Pablo Galindo
- Added smoke tests ensuring that the C and Python modules are built
The import machinery can be somewhat fragile, and the "seamlessly falls
back to pure Python" nature of this module makes it so that a problem
building the C extension or a failure to import the pure Python version
might easily go unnoticed.
- Adjustments to zoneinfo.__dir__
Suggested by Petr Viktorin.
- Slight refactorings as suggested by Steve Dower.
- Removed unnecessary if check on std_abbr
Discovered this because of a missing line in branch coverage.
Add --with-experimental-isolated-subinterpreters build option to
configure: better isolate subinterpreters (experimental build mode).
When used, it forces the use of the libc malloc() memory allocator,
since pymalloc relies on the unique global interpreter lock (GIL).
On Solaris, the regular "grep" command may be an old version that fails to search a binary file. We need to use the correct command (ggrep, in our case), which is found by the configure script earlier.
Automerge-Triggered-By: @pablogsal
This fixes a regression introduced in bpo-38960.
When DFLAGS was empty, "$DFLAGS" results in an empty argument ("").
Without the quotes, an empty variable will be ignored by the shell.
Add --with-platlibdir option to the configure script: name of the
platform-specific library directory, stored in the new sys.platlibdir
attribute. It is used to build the path of platform-specific dynamic
libraries and the path of the standard library.
It is equal to "lib" on most platforms. On Fedora and SuSE, it is
equal to "lib64" on 64-bit systems.
Co-Authored-By: Jan Matějek <jmatejek@suse.com>
Co-Authored-By: Matěj Cepl <mcepl@cepl.eu>
Co-Authored-By: Charalampos Stratakis <cstratak@redhat.com>
Setting `-D_XOPEN_SOURCE=700` on HP-UX causes system functions such as chroot to be undefined. This change stops `_XOPEN_SOURCE` being set on HP-UX.
Co-authored-by: Benjamin Peterson <benjamin@python.org>
The os.putenv() and os.unsetenv() functions are now always available.
On non-Windows platforms, Python now requires setenv() and unsetenv()
functions to build.
Remove putenv_dict from posixmodule.c: it's no longer needed.
If the setenv() C function is available, os.putenv() is now implemented
with setenv() instead of putenv(), so Python doesn't have to handle
the environment variable memory.
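A small C sketch of the difference that motivates this (illustrative only):

```c
/* Hedged sketch: putenv() keeps a pointer to the caller's string, so the
   caller must keep it alive; setenv() copies the value, so no bookkeeping
   (like the removed putenv_dict) is needed. */
#include <stdlib.h>

static void set_env_examples(void)
{
    /* putenv(): the string itself becomes part of the environment, so it
       must not be freed or go out of scope while the variable is set. */
    static char owned[] = "SPAM=1";
    putenv(owned);

    /* setenv(): the library copies "2"; the caller owns nothing afterwards. */
    setenv("EGGS", "2", 1 /* overwrite */);

    unsetenv("SPAM");
    unsetenv("EGGS");
}
```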
Provides a richer platform tag for AIX that we expect to be sufficient for PEP 425
binary distribution identification. Any backports to earlier Python versions will be
handled via setuptools.
Patch by Michael Felt.
Fix the stdatomic.h header check for the ICC compiler: the ICC implementation
lacks the atomic_uintptr_t type, which is needed by Python.
Test:
* atomic_int and atomic_uintptr_t types
* atomic_load_explicit() and atomic_store_explicit()
* memory_order_relaxed and memory_order_seq_cst constants
But don't test ATOMIC_VAR_INIT(): it's not used in Python.
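A sketch of the kind of C program such a configure check compiles (illustrative, not the actual probe):

```c
/* Hedged sketch of a configure-style probe for a usable <stdatomic.h>:
   it must provide atomic_int and atomic_uintptr_t plus the explicit
   load/store functions and the two memory-order constants Python uses. */
#include <stdatomic.h>
#include <stdint.h>

int main(void)
{
    atomic_int counter;            /* basic atomic integer type */
    atomic_uintptr_t ptr_bits;     /* the type the ICC implementation lacks */

    atomic_store_explicit(&counter, 1, memory_order_relaxed);
    atomic_store_explicit(&ptr_bits, (uintptr_t)0, memory_order_seq_cst);
    (void)atomic_load_explicit(&ptr_bits, memory_order_relaxed);
    return atomic_load_explicit(&counter, memory_order_seq_cst) == 1 ? 0 : 1;
}
```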
This changeset increases the default size of the stack
for threads on macOS to the size of the main thread's stack
and re-enables the relevant recursion test.
Reduce the number of unit tests run for the PGO generation task. This
speeds up the task by a factor of about 15x. Running the full unit test
suite is slow. This change may result in a slightly less optimized build
since not as many code branches will be executed. If you are willing to
wait for the much slower build, the old behavior can be restored using
'./configure [..] PROFILE_TASK="-m test --pgo-extended"'. We make no
guarantees as to which PGO task set produces a faster build. Users who
care should run their own relevant benchmarks as results can depend on
the environment, workload, and compiler tool chain.
Under some conditions the earlier fix for bpo-18075, "Infinite recursion
tests triggering a segfault on Mac OS X", now causes failures on macOS
when attempting to change stack limit with resource.setrlimit
resource.RLIMIT_STACK, like regrtest does when running the test suite.
The reverted change had specified a non-default stack size when linking
the python executable on macOS. As of macOS 10.14.4, the previous
code causes a hard failure when running tests, although similar
failures had been seen under some conditions under some earlier
systems. Reverting the change to the interpreter stack size at link
time helped for release builds but caused some tests to fail when
built --with-pydebug. Try the opposite approach: continue to build
the interpreter with an increased stack size on macOS and remove
the failing setrlimit call in regrtest initialization. This will
definitely avoid the resource.RLIMIT_STACK error and should have
no, or fewer, side effects.
* bpo-26836: Add os.memfd_create()
* Use the glibc wrapper for memfd_create()
Co-Authored-By: Christian Heimes <christian@python.org>
* Fix deletions caused by autoreconf.
* Use MFD_CLOEXEC as the default value for *flags*.
* Add memset_s to configure.ac.
* Revert memset_s changes.
* Apply the requested changes.
* Tweak the docs.
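For reference, a minimal C sketch of the underlying call that os.memfd_create() wraps, using the glibc wrapper and MFD_CLOEXEC as described above (illustrative only):

```c
/* Hedged sketch: create an anonymous, fd-backed memory file with
   memfd_create(2) via the glibc wrapper, then size and write it. */
#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>

static int make_memfd(void)
{
    /* MFD_CLOEXEC mirrors the default *flags* used by os.memfd_create(). */
    int fd = memfd_create("example", MFD_CLOEXEC);
    if (fd == -1)
        return -1;
    if (ftruncate(fd, 4096) == -1) {    /* give the file a size */
        close(fd);
        return -1;
    }
    (void)write(fd, "hello", 5);
    return fd;                          /* lives only as long as the fd */
}
```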
It is also possible to link against a library or executable with a
statically linked libpython, but not both with the same DLL. In fact,
building a statically linked Python is currently broken on Cygwin
for other (related) reasons.
The same problem applies to other POSIX-like layers over Windows
(MinGW, MSYS) but Python's build system does not seem to attempt
to support those platforms at the moment.
To embed Python into an application, a new --embed option must be
passed to "python3-config --libs --embed" to get "-lpython3.8" (link
the application to libpython). To support both 3.8 and older, try
"python3-config --libs --embed" first and fallback to "python3-config
--libs" (without --embed) if the previous command fails.
Add a pkg-config "python-3.8-embed" module to embed Python into an
application: "pkg-config python-3.8-embed --libs" includes
"-lpython3.8". To support both 3.8 and older, try "pkg-config
python-X.Y-embed --libs" first and fallback to "pkg-config python-X.Y
--libs" (without --embed) if the previous command fails (replace
"X.Y" with the Python version).
On the other hand, "pkg-config python3.8 --libs" no longer contains
"-lpython3.8". C extensions must not be linked to libpython (except
on Android, case handled by the script); this change is backward
incompatible on purpose.
"make install" now also installs "python-3.8-embed.pc".
Under some conditions the earlier fix for bpo-18075, "Infinite recursion
tests triggering a segfault on Mac OS X", now causes failures on macOS
when attempting to change stack limit with resource.setrlimit
resource.RLIMIT_STACK, like regrtest does when running the test suite.
The reverted change had specified a non-default stack size when linking
the python executable on macOS. As of macOS 10.14.4, the previous
code causes a hard failure when running tests, although similar
failures had been seen under some conditions under some earlier
systems. For now, revert the original change and resume using
the default stack size when linking the interpreter.
Release build and debug build are now ABI compatible: the Py_DEBUG
define no longer implies Py_TRACE_REFS define which introduces the
only ABI incompatibility.
A new "./configure --with-trace-refs" build option is now required to
get Py_TRACE_REFS define which adds sys.getobjects() function and
PYTHONDUMPREFS environment variable.
Changes:
* Add ./configure --with-trace-refs
* Py_DEBUG no longer implies Py_TRACE_REFS
"./configure --with-pymalloc" no longer adds the "m" flag to SOABI
(sys.implementation.cache_tag).
Enabling or disabling pymalloc has no impact on the ABI.
Add -fmax-type-align=8 to CFLAGS when clang compiler is detected.
The pymalloc memory allocator aligns memory on 8 bytes. On x86-64,
clang expects alignment on 16 bytes by default and so uses the MOVAPS
instruction, which can lead to a segmentation fault. Instruct clang that
Python is limited to alignment on 8 bytes so that it uses the MOVUPS
instruction instead: slower, but it doesn't trigger a SIGSEGV if the
memory is not aligned on 16 bytes.
Sadly, the flag must be added to CFLAGS and not just
CFLAGS_NODIST, since third party C extensions can have the same
issue.
On AIX, sys.platform doesn't contain the major version anymore.
Always return 'aix', instead of 'aix3' .. 'aix7'. Since
older Python versions include the version number, it is recommended to
always use sys.platform.startswith('aix').
Use autoconf to probe for shm_open() and shm_unlink(). Set SHM_NEEDS_LIBRT if we must
link with librt to get the shm_* functions. Change setup.py to use the autoconf defines. These
changes should make it more likely that _multiprocessing/posixshmem.c gets built correctly on
different platforms.
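A short sketch of the functions being probed for, and why librt may be needed on some platforms (illustrative only):

```c
/* Hedged sketch: POSIX shared memory via shm_open()/shm_unlink().
   On some platforms these live in librt (hence the SHM_NEEDS_LIBRT
   check); on others they are in libc and no extra -lrt is required. */
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

static int shm_demo(void)
{
    int fd = shm_open("/psm_example", O_CREAT | O_RDWR, 0600);
    if (fd == -1)
        return -1;
    if (ftruncate(fd, 4096) == -1) {        /* size the segment */
        close(fd);
        shm_unlink("/psm_example");
        return -1;
    }
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p != MAP_FAILED)
        munmap(p, 4096);                    /* ... use p before unmapping ... */
    close(fd);
    shm_unlink("/psm_example");             /* remove the name */
    return 0;
}
```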
Use crypt_r() when available instead of crypt() in the crypt module.
As a nice side effect, this also avoids a memory sanitizer flake, as clang MSan doesn't know about crypt()'s internal libc-allocated buffer.
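A minimal sketch of the reentrant call; the caller-provided struct crypt_data is exactly what removes the shared internal buffer mentioned above (illustrative only):

```c
/* Hedged sketch: hash a passphrase with crypt_r(), which writes into a
   caller-provided struct crypt_data instead of a static libc buffer. */
#define _GNU_SOURCE
#include <crypt.h>
#include <stdio.h>
#include <string.h>

static void hash_demo(const char *phrase, const char *setting)
{
    struct crypt_data data;
    memset(&data, 0, sizeof(data));          /* required before first use */
    char *hashed = crypt_r(phrase, setting, &data);
    if (hashed != NULL)
        printf("%s\n", hashed);              /* e.g. with a "$6$..." setting */
}
```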
When compiling 3rd party C extensions, the linker flags used by the
compiler for the interpreter and the stdlib modules would get
leaked into distutils. In order to avoid that, PY_CORE_LDFLAGS
and PY_LDFLAGS_NODIST are introduced to keep those flags separate.
When using link time optimizations, the -flto flag is passed to
BASECFLAGS, which makes it propagate to distutils. Those flags
should be reserved for the interpreter and the stdlib extension
modules only, so those flags are moved to CFLAGS_NODIST.
Adds configure flags for MSan and UBSan builds to make them easier to enable.
These also encode the detail that address sanitizer and memory sanitizer
should disable pymalloc.
Defines MEMORY_SANITIZER when appropriate at build time and adds workarounds
to existing code to mark things as initialized where the sanitizer is otherwise unable to
determine that. This lets our build succeed under the memory sanitizer. Not all tests
pass without sanitizer failures yet, but we're in pretty good shape after this.
.o generated by clang in LTO mode actually are LLVM bitcode files, which
leads to a few errors during configure/build step:
- add lto flags to the BASECFLAGS instead of CFLAGS, as CFLAGS are used
to build autoconf test case, and some are not compatible with clang LTO
(they assume binary in the .o, not bitcode)
- force llvm-ar instead of ar, as ar is not aware of .o files generated
by clang -flto
It is unused.
Release GIL on grp.getgrnam(), grp.getgrgid(), pwd.getpwnam() and
pwd.getpwuid() if reentrant variants of these functions are available.
Patch by William Grzybowski.
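The reason the GIL can be released is that the _r variants write into caller-provided storage instead of a shared static buffer; a hedged sketch with getgrnam_r() (illustrative only):

```c
/* Hedged sketch: getgrnam_r() fills a caller-supplied struct group and
   buffer, so concurrent calls from different threads are safe and the
   GIL can be dropped around a possibly slow (e.g. NSS/LDAP) lookup. */
#include <grp.h>
#include <stdio.h>

static void lookup_group(const char *name)
{
    struct group grp, *result = NULL;
    char buf[16384];                          /* scratch space for strings */

    int err = getgrnam_r(name, &grp, buf, sizeof(buf), &result);
    if (err == 0 && result != NULL)
        printf("gid of %s is %ld\n", name, (long)grp.gr_gid);
}
```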
Introduce a configure check for strsignal(3) which defines HAVE_STRSIGNAL for
signalmodule.c. Add some common signals on HP-UX. This change applies to
Windows and HP-UX.
bpo-32430: Rename Modules/Setup.dist to Modules/Setup
Remove the necessity to copy the former manually to the latter when updating the local source tree.
This issue covers various changes for the macOS installers provided via python.org for 3.7.0.
- Provide a provisional new installer variant for macOS 10.9 and later systems with 64-bit (x86_64) architecture only. Apple has made it known that future versions of macOS will only fully support 64-bit executables and some other third-party software suppliers have chosen 10.9 as their oldest supported system.
- Support **Tcl/Tk 8.6** with the 10.9 installer variant.
- Upgrade **OpenSSL** to 1.1.0g and **SQLite** to 3.22.0.
- The compiler name used for the interpreter build and for modules built with **Distutils / pip** is now _gcc_ rather than _gcc-4.2_. And extension module builds will no longer try to force use of an old SDK if present.
Until now Python used a hard-coded whitelist of default TLS cipher
suites. The old approach has multiple downsides. OpenSSL's default
selection was completely overruled. Python neither benefited from new
cipher suites (ChaCha20, TLS 1.3 suites) nor from blacklisted cipher suites.
For example, we used to re-enable 3DES.
Python now defaults to OpenSSL's DEFAULT cipher suite selection and
blacklists all unwanted ciphers. Downstream vendors can override the default
cipher list with --with-ssl-default-suites.
Signed-off-by: Christian Heimes <christian@python.org>
glibc is deprecating libcrypt in favor of libxcrypt; however, Python assumes
that crypt.h will always be included. This change makes the header inclusion
explicit when libxcrypt is present on the system.
Add https://www.gnu.org/software/autoconf-archive/ax_check_openssl.html
to auto-detect compiler flags, linker flags and libraries to compile
OpenSSL extensions. The M4 macro uses pkg-config and falls back to
manual detection.
Add autoconf magic to detect usable X509_VERIFY_PARAM_set1_host()
and related functions.
Refactor setup.py to use new config vars to compile _ssl and _hashlib
modules.
Signed-off-by: Christian Heimes <christian@python.org>
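For context, a hedged sketch of the OpenSSL hostname-checking API whose availability the new autoconf check detects; this is illustrative and not the _ssl module code:

```c
/* Hedged sketch: enable built-in hostname verification on an SSL_CTX via
   X509_VERIFY_PARAM_set1_host(), the function the configure check probes. */
#include <openssl/ssl.h>
#include <openssl/x509_vfy.h>

static int enable_hostname_check(SSL_CTX *ctx, const char *hostname)
{
    X509_VERIFY_PARAM *param = SSL_CTX_get0_param(ctx);
    /* A namelen of 0 means hostname is a NUL-terminated string. */
    if (!X509_VERIFY_PARAM_set1_host(param, hostname, 0))
        return 0;
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
    return 1;
}
```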
This module has never been enabled by default, never worked correctly
on x86-64, and caused ABI problems that broke C extension
compatibility. See bpo-29137 for details/discussion.
Starting with AIX 6.1 there is support in libc.a for uuid (RFC 4122).
This patch provides the changes needed for this integration with the OS.
On AIX the base function is uuid_create() rather than uuid_generate_time().
The AIX uuid_t typedef is more aligned with the UUID field-based definition,
while the Linux typedef is more aligned with the UUID bytes
(or perhaps UUID bytes_le) definition.
Modify the code to use the ncurses is_pad() function instead of checking the
WINDOW _flags field. If your platform does not provide is_pad(), the
existing check of the field is used instead.
Note: This change does not drop support for platforms that do not
have both the WINDOW _flags field and is_pad().
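A hedged sketch of the two code paths; the HAVE_CURSES_IS_PAD guard name here is illustrative, standing in for whatever define configure provides:

```c
/* Hedged sketch: prefer the ncurses is_pad() accessor; fall back to the
   private WINDOW _flags field only where is_pad() does not exist. */
#include <curses.h>

static int window_is_pad(WINDOW *win)
{
#if defined(HAVE_CURSES_IS_PAD)
    return is_pad(win);                  /* public, opaque-struct-safe accessor */
#else
    return (win->_flags & _ISPAD) != 0;  /* pokes at ncurses internals */
#endif
}
```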
See PEP 539 for details.
Highlights of changes:
- Add Thread Specific Storage (TSS) API
- Document the Thread Local Storage (TLS) API as deprecated
- Update code that used TLS API to use TSS API
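A minimal sketch of the new TSS API in use; the function names are the real PEP 539 API, while the surrounding helpers are illustrative:

```c
/* Hedged sketch: store and fetch a per-thread pointer with the PEP 539
   Thread Specific Storage (TSS) API. */
#include <Python.h>

static Py_tss_t key = Py_tss_NEEDS_INIT;

static int stash_per_thread(void *value)
{
    if (!PyThread_tss_is_created(&key) && PyThread_tss_create(&key) != 0)
        return -1;                          /* could not allocate the key */
    return PyThread_tss_set(&key, value);   /* 0 on success */
}

static void *fetch_per_thread(void)
{
    /* Returns NULL if nothing was stored in this thread. */
    return PyThread_tss_get(&key);
}
```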
Python requires that C implementations provide memmove(), so we shouldn't need to check for it. The only place using this configure check was expat, where we can simply always define HAVE_MEMMOVE.
Allow configure --with-lto to apply to all builds, not just profile-opt builds.
Whether this is actually useful or not must be determined by the person
building CPython using their own toolchain.
My own quick test on x86_64 Debian 9 (gcc 6.3, binutils 2.28) seemed
to suggest that it wasn't, but I expect better toolchains can or will exist
at some point. The point is to allow it at all.
* bpo-27584: New addition of vSockets to the python socket module
Support for AF_VSOCK on Linux only
* bpo-27584: Fixes for V2
Fixed syntax and naming problems.
Fixed #ifdef AF_VSOCK checking
Restored original aclocal.m4
* bpo-27584: Fixes for V3
Added checking for fcntl and thread modules.
* bpo-27584: Fixes for V4
Fixed white space error
* bpo-27584: Fixes for V5
Added back comma in (CID, port).
* bpo-27584: Fixes for V6
Added news file.
socket.rst now reflects first Linux introduction of AF_VSOCK.
Fixed get_cid in test_socket.py.
Replaced PyLong_FromLong with PyLong_FromUnsignedLong in socketmodule.c
Got rid of extra AF_VSOCK #define.
Added sockaddr_vm to sock_addr.
* bpo-27584: Fixes for V7
Minor cleanup.
* bpo-27584: Fixes for V8
Put back #undef AF_VSOCK as it is necessary when vm_sockets.h is not installed.
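A hedged C sketch of connecting over AF_VSOCK with the (CID, port) addressing mentioned above (illustrative only, Linux-specific):

```c
/* Hedged sketch: open an AF_VSOCK stream socket and connect to a
   (CID, port) pair, e.g. the host from inside a virtual machine. */
#include <sys/socket.h>
#include <linux/vm_sockets.h>
#include <string.h>
#include <unistd.h>

static int vsock_connect(unsigned int cid, unsigned int port)
{
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    if (fd == -1)
        return -1;

    struct sockaddr_vm addr;
    memset(&addr, 0, sizeof(addr));
    addr.svm_family = AF_VSOCK;
    addr.svm_cid = cid;        /* e.g. VMADDR_CID_HOST to reach the host */
    addr.svm_port = port;

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1) {
        close(fd);
        return -1;
    }
    return fd;
}
```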
Include sys/sysmacros.h for major(), minor(), and makedev(). The GNU C library
plans to remove the functions from sys/types.h.
Signed-off-by: Christian Heimes <christian@python.org>
`PYTHONFRAMEWORK` is defined in `Makefile` and it shouldn't be used
in `pyconfig.h`.
`sysconfig.py --generate-posix-vars` reads config vars from Makefile
and `pyconfig.h`. Conflicting variables should be avoided.
In particular, string config variables in Makefile are unquoted, but
in `pyconfig.h` they are kept quoted. So the variable should be private (start
with an underscore).
- new PYTHONCOERCECLOCALE config setting
- coerces legacy C locale to C.UTF-8, C.utf8 or UTF-8 by default
- always uses C.UTF-8 on Android
- uses `surrogateescape` on stdin and stdout in the coercion
target locales
- configure option to disable locale coercion at build time
- configure option to disable C locale warning at build time
Don't rebuild generated files based on file modification time
anymore, the action is now explicit. Replace "make touch"
with "make regen-all".
Changes:
* Remove "make touch", Tools/hg/hgtouch.py and .hgtouch
* Add a new "make regen-all" command to rebuild all generated files
* Add subcommands to only generate specific files:
- regen-ast: Include/Python-ast.h and Python/Python-ast.c
- regen-grammar: Include/graminit.h and Python/graminit.c
- regen-importlib: Python/importlib_external.h and Python/importlib.h
- regen-opcode: Include/opcode.h
- regen-opcode-targets: Python/opcode_targets.h
- regen-typeslots: Objects/typeslots.inc
* Rename PYTHON_FOR_GEN to PYTHON_FOR_REGEN
* pgen is now only built by "make regen-grammar"
* Add $(srcdir)/ prefix to paths to source files to handle correctly
compilation outside the source directory
Note: $(PYTHON_FOR_REGEN) is no longer used or needed by the default
"make" target that builds Python.