* Py_Main() now starts by reading the Py_xxx configuration variables so
  that it only works on its own private structure, and later writes the
  configuration back into these variables.
* Replace Py_GETENV() with pymain_get_env_var() which ignores empty
variables.
* Add _PyCoreConfig.dump_refs
* Add _PyCoreConfig.malloc_stats
* _PyObject_DebugMallocStats() is now responsible for checking whether
  debug hooks are installed. The function returns 1 if stats were
  written, or 0 if the hooks are disabled. Mark _PyMem_PymallocEnabled()
  as static.
* Simplify _PyCoreConfig_INIT, _PyMainInterpreterConfig_INIT,
_PyPathConfig_INIT macros: no need to set fields to 0/NULL, it's
redundant (the C language sets them to 0/NULL for us).
* Fix typo: pymain_run_statup() => pymain_run_startup()
* Remove a few XXX/TODO
_PyPathConfig_Init() now also initializes home and program_name:
* Rename the existing _PyPathConfig_Init() to _PyPathConfig_Calculate().
  Add a new _PyPathConfig_Init() function in pathconfig.c which handles
  the _Py_path_config variable and calls _PyPathConfig_Calculate().
* Add home and program_name fields to _PyPathConfig
* _PyPathConfig_Init() now initializes home and program_name from
  main_config
* Py_SetProgramName(), Py_SetPythonHome() and Py_GetPythonHome() now
  call Py_FatalError() on failure, instead of silently ignoring
  failures.
* config_init_home() now reads _Py_path_config.home directly to get
  only the value set by Py_SetPythonHome(), or NULL if
  Py_SetPythonHome() was not called.
* config_get_program_name() now reads _Py_path_config.program_name
  directly to get only the value set by Py_SetProgramName(), or NULL if
  Py_SetProgramName() was not called.
* pymain_init_python() doesn't call Py_SetProgramName() anymore,
_PyPathConfig_Init() now always sets the program name
* Call _PyMainInterpreterConfig_Read() in
pymain_parse_cmdline_envvars_impl() to control the memory allocator
* C API documentation: it is no longer safe to call Py_GetProgramName()
  before Py_Initialize().
* Factor out code shared by PC/getpathp.c and Modules/getpath.c to
  remove the duplication
* Rename pathconfig_clear() to _PyPathConfig_Clear()
* Inline _PyPathConfig_Fini() into pymain_impl() and then remove it,
  since it's a one-liner
Changes:
* _PyPathConfig_Fini() cannot be called in Py_FinalizeEx().
  Py_Initialize() and Py_Finalize() can be called multiple times, but
  they must not "forget" parameters set by Py_SetProgramName(),
  Py_SetPath() or Py_SetPythonHome(), whereas _PyPathConfig_Fini()
  clears all these parameters.
* config_get_program_name() and calculate_program_full_path() now
also decode paths using Py_DecodeLocale() to use the
surrogateescape error handler, rather than decoding using
mbstowcs() which is strict.
* Change _Py_CheckPython3() prototype: () => (void)
* Truncate a few lines which were too long
* _PyMainInterpreterConfig_ReadEnv() now sets program_name from
  environment variables, and pymain_parse_envvars() implements the
  fallback to argv[0].
* Remove _PyMain.program_name: use the program_name from
_PyMainInterpreterConfig
* Move the Py_SetProgramName() call back to pymain_init_python(),
just before _Py_InitializeCore().
* pathconfig_global_init() now also calls
_PyMainInterpreterConfig_Read() to set program_name if it isn't set
yet
* Cleanup PyCalculatePath: pass main_config to subfunctions to read
  fields (home, module_search_path_env and program_name) directly from
  main_config
* Rename PyPathConfig structure to _PyPathConfig and move it to
Include/internal/pystate.h
* Rename path_config to _Py_path_config
* _PyPathConfig: Rename program_name field to program_full_path
* Add assert(str != NULL); to _PyMem_RawWcsdup(), _PyMem_RawStrdup()
and _PyMem_Strdup().
* Rename calculate_path() to pathconfig_global_init(). The function
  now does nothing if it's already initialized.
In _io_FileIO_readall_impl(), lseek() and _Py_fstat_noraise() were
called without releasing the GIL. This could cause all threads to hang
for an unlimited time when calling FileIO.read() while the NFS server
is not accessible.
Fix the following false-alarm Coverity warning:
Result is not floating-point
(UNINTENDED_INTEGER_DIVISION)integer_division: Dividing integer
expressions 9223372036854775807LL and 1000LL, and then converting
the integer quotient to type double. Any remainder, or fractional
part of the quotient, is ignored.
To compute and use a non-integer quotient, change or cast either
operand to type double. If integer division is intended, consider
indicating that by casting the result to type long long .
Handle PyModule_AddIntConstant() and PyModule_AddStringConstant()
failures. Also add the constants before calling setup_readline(), since
setup_readline() registers callbacks which use a reference to the
module, whereas the module is destroyed if adding constants fails.
Fix Coverity warning:
CID 1414686: Unchecked return value (CHECKED_RETURN)
2. check_return: Calling PyModule_AddStringConstant without checking
return value (as is done elsewhere 45 out of 55 times).
* bpo-32101: Add sys.flags.dev_mode flag
  Also rename the "Developer mode" to "Development mode".
* bpo-32101: Add PYTHONDEVMODE environment variable
  Mention it in the development chapter.
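A minimal sketch of checking the new flag from Python code (the printed
message is illustrative only):

    import sys
    # True when the interpreter runs under -X dev or PYTHONDEVMODE=1
    if sys.flags.dev_mode:
        print("development mode is enabled")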
* Fix _PyMem_SetupAllocators("debug"): always restore allocators to
  the defaults, rather than only calling _PyMem_SetupDebugHooks().
* Add _PyMem_SetDefaultAllocator() helper to set the "default"
allocator.
* Add _PyMem_GetAllocatorsName(): get the name of the allocators
* main() now uses debug hooks on memory allocators if Py_DEBUG is
  defined, rather than calling malloc() directly
* Document default memory allocators in C API documentation
* _Py_InitializeCore() now fails with a fatal user error if the
  PYTHONMALLOC value names an unknown memory allocator, instead of
  failing with a fatal internal error.
* Add new tests on the PYTHONMALLOC environment variable
* Add support.with_pymalloc()
* Add the _testcapi.WITH_PYMALLOC constant and expose it as
support.with_pymalloc().
* sysconfig.get_config_var('WITH_PYMALLOC') doesn't work on Windows, so
replace it with support.with_pymalloc().
* pythoninfo: add _testcapi collector for pymem
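A rough illustration of the new helper and of the environment variable
(the "debug" value and the child command are arbitrary examples):

    import os, subprocess, sys
    from test import support
    # True if this interpreter was built with pymalloc
    print(support.with_pymalloc())
    # Run a child interpreter with debug hooks installed on the allocators
    env = dict(os.environ, PYTHONMALLOC="debug")
    subprocess.run([sys.executable, "-c", "pass"], env=env, check=True)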
* Py_Main() now calls Py_SetProgramName() earlier to be able to get
the program name in _PyMainInterpreterConfig_ReadEnv().
* Rename prog to program_name
* Rename progpath to program_name
Py_GetPath() and Py_Main() now call
_PyMainInterpreterConfig_ReadEnv() to share the same code to get
environment variables.
Changes:
* Add _PyMainInterpreterConfig_ReadEnv()
* Add _PyMainInterpreterConfig_Clear()
* Add _PyMem_RawWcsdup()
* _PyMainInterpreterConfig: rename pythonhome to home
* Rename _Py_ReadMainInterpreterConfig() to
_PyMainInterpreterConfig_Read()
* Use _Py_INIT_USER_ERR(), instead of _Py_INIT_ERR(), for decoding
  errors: the user is able to fix the issue; it's not a bug in Python.
  The same change was made in _Py_INIT_NO_MEMORY().
* Remove _Py_GetPythonHomeWithConfig()
* calculate_path() rewritten in Modules/getpath.c and PC/getpathp.c
* Move global variables into a new PyPathConfig structure.
* calculate_path():
* Split the huge calculate_path() function into subfunctions.
* Add PyCalculatePath structure to pass data between subfunctions.
* Document PyCalculatePath fields.
* Move cleanup code into a new calculate_free() subfunction
* calculate_init() now handles Py_DecodeLocale() failures properly
* calculate_path() is now atomic: PyPathConfig (path_config) is
  replaced all at once, and only on success.
* _Py_GetPythonHomeWithConfig() now returns an error on failure
* Add _Py_INIT_NO_MEMORY() helper: report a memory allocation failure
* Coding style fixes (PEP 7)
* Py_Main() now reads the PYTHONHOME environment variable
* Add _Py_GetPythonHomeWithConfig() private function
* Add _PyWarnings_InitWithConfig()
* init_filters() no longer gets the current core configuration from
  the current interpreter or Python thread. The configuration is now
  passed explicitly to _PyWarnings_InitWithConfig().
* _Py_InitializeCore() now fails on _PyWarnings_InitWithConfig()
failure.
* Pass configuration as constant
Changes:
* Py_Main() initializes _PyCoreConfig.module_search_path_env from
the PYTHONPATH environment variable.
* PyInterpreterState_New() now initializes core_config and config
fields
* Compute sys.path a little bit earlier in
  _Py_InitializeMainInterpreter() and new_interpreter()
* Add _Py_GetPathWithConfig() private function.
Py_Main() now handles two more -X options:
* -X showrefcount: new _PyCoreConfig.show_ref_count field
* -X showalloccount: new _PyCoreConfig.show_alloc_count field
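A small sketch of exercising the new options from a test; both options
only produce output on interpreters built with the corresponding
counters, so the child runs below may print nothing:

    import subprocess, sys
    # Print total reference counts and memory blocks at exit
    subprocess.run([sys.executable, "-X", "showrefcount", "-c", "pass"])
    # Print per-type allocation counts at exit
    subprocess.run([sys.executable, "-X", "showalloccount", "-c", "pass"])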
The developer mode (-X dev) now creates all default warnings filters,
ordering them so that ResourceWarning is always shown and BytesWarning
depends on the -b option (see the sketch after the change list below).
Write a functional test to make sure that ResourceWarning is logged
twice at the same location in developer mode.
Add a new 'dev_mode' field to _PyCoreConfig.
Add a new "developer mode": new "-X dev" command line option to
enable debug checks at runtime.
Changes:
* Add unit tests for -X dev
* test_cmd_line: replace test.support with support.
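A rough sketch of the behaviour the functional test checks (the opened
file is arbitrary; the warning text depends on the platform):

    import subprocess, sys
    # In dev mode the default filters show ResourceWarning, e.g. for an
    # unclosed file that is garbage-collected
    code = f"f = open({sys.executable!r}); del f"
    subprocess.run([sys.executable, "-X", "dev", "-c", code])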
* Fix _PyRuntimeState_Fini(): Use the same memory allocator
than _PyRuntimeState_Init().
* Fix _PyMem_GetDefaultRawAllocator()
Parse more env vars in Py_Main():
* Add more options to _PyCoreConfig:
* faulthandler
* tracemalloc
* importtime
* Move code to parse environment variables from _Py_InitializeCore()
to Py_Main(). This change fixes a regression from Python 3.6:
PYTHONUNBUFFERED is now read before calling pymain_init_stdio().
* _PyFaulthandler_Init() and _PyTraceMalloc_Init() now take an
argument to decide if the module has to be enabled at startup.
* tracemalloc_start() is now responsible for checking the maximum
  number of frames.
Other changes:
* Cleanup Py_Main():
* Rename some pymain_xxx() subfunctions
* Add pymain_run_python() subfunction
* Cleanup Py_NewInterpreter()
* _PyInterpreterState_Enable() now reports failure
* init_hash_secret() now considers pyurandom() failure as a "user
  error": don't fail with abort().
* pymain_optlist_append() and pymain_strdup() now set err on memory
  allocation failure.
* Don't use "Python runtime" anymore to parse command line options or
to get environment variables: pymain_init() is now a strict
separation.
* Use an error message rather than "crashing" directly with
Py_FatalError(). Limit the number of calls to Py_FatalError(). It
prepares the code to handle errors more nicely later.
* Warnings options (-W, PYTHONWARNINGS) and "XOptions" (-X) are now
only added to the sys module once Python core is properly
initialized.
* _PyMain is now the well-identified owner of some important strings
  such as the warnings options, the XOptions, and the "program name".
  The program name string is now properly freed at exit.
  pymain_free() is now responsible for freeing the "command" string.
* Rename most methods in Modules/main.c to use a "pymain_" prefix to
  avoid conflicts and ease debugging.
* Replace _Py_CommandLineDetails_INIT with memset(0)
* Reorder a lot of code to fix the initialization ordering. For
example, initializing standard streams now comes before parsing
PYTHONWARNINGS.
* Py_Main() now handles errors when adding warnings options and
XOptions.
* Add _PyMem_GetDefaultRawAllocator() private function.
* Cleanup _PyMem_Initialize(): remove useless global constants by
  moving them into _PyMem_Initialize().
* Call _PyRuntime_Initialize() as soon as possible:
_PyRuntime_Initialize() now returns an error message on failure.
* Add _PyInitError structure and following macros:
* _Py_INIT_OK()
* _Py_INIT_ERR(msg)
* _Py_INIT_USER_ERR(msg): "user" error, don't abort() in that case
* _Py_INIT_FAILED(err)
* Fix compilation of the socket module on NetBSD 8.
* Fix an assertion failure or reading of arbitrary data when parsing
  an AF_BLUETOOTH address on NetBSD and DragonFly BSD.
* Fix other potential errors and make the code more reliable.
The kB (kilobyte) unit means 1000 bytes, whereas KiB ("kibibyte")
means 1024 bytes. KB was misused: replace kB or KB with KiB where
appropriate.
The same change applies to MB and GB, which become MiB and GiB.
Change the output of Tools/iobench/iobench.py.
Also round the size of the documentation from 5.5 MB to 5 MiB.
All Blake2 params have to be encoded in little-endian byte order. For
the two multi-byte integer params, leaf_length and node_offset, that
means that assigning a native-endian integer to them appears to work on
little-endian platforms, but gives the wrong result on big-endian. The
current libb2 API doesn't make that very clear, and @sneves is working
on new API functions in the GH issue linked below. In the meantime, we can work
around the problem by explicitly assigning little-endian values to the
parameter block.
See https://github.com/BLAKE2/libb2/issues/12.
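A quick Python illustration of the byte-order issue (the field width
here is simplified, not the exact parameter block layout):

    import struct
    leaf_length = 0x01020304
    # The parameter block requires little-endian encoding on every host
    print(struct.pack("<I", leaf_length).hex())  # always '04030201'
    # Native-endian packing only happens to match on little-endian hosts
    print(struct.pack("=I", leaf_length).hex())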
When a single .c file contains several functions and/or methods with
the same name, a safety _METHODDEF #define statement is generated
only for one of them.
This fixes the bug by using the full name of the function, rather than
just the short name, to avoid duplicates.
Add new time functions:
* time.clock_gettime_ns()
* time.clock_settime_ns()
* time.monotonic_ns()
* time.perf_counter_ns()
* time.process_time_ns()
* time.time_ns()
Add new _PyTime functions:
* _PyTime_FromTimespec()
* _PyTime_FromNanosecondsObject()
* _PyTime_FromTimeval()
Other changes:
* Also add os.times() tests to test_os.
* pytime_fromtimespec() and pytime_fromtimeval() now return
  _PyTime_MAX or _PyTime_MIN on overflow, rather than undefined
  behaviour
* _PyTime_FromNanoseconds() parameter type changes from long long to
_PyTime_t
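A minimal sketch of the new nanosecond variants (values shown are
illustrative):

    import time
    # The *_ns() functions return int nanoseconds instead of float seconds
    start = time.perf_counter_ns()
    time.sleep(0.01)
    elapsed_ns = time.perf_counter_ns() - start
    print(time.time_ns(), elapsed_ns)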
Replace occurrences of nested comments in the blake2 reference
implementation with a preprocessor directive for disabling unused code.
`blake2s-load-xop.h` is conditionally pulled in only on chips with XOP
support, among others the AMD Bulldozer. The malformed comments in the
source file break the build of `hashlib`'s `_blake2` on GCC 6.3.0.
The official reference code on GitHub uses `#if`, so this change should
be uncontroversial.
Modify the code to use the ncurses is_pad() function instead of
checking the WINDOW _flags field. If the platform does not provide
is_pad(), the existing check of the field is used instead.
Note: this change does not drop support for platforms that do not have
both the WINDOW _flags field and is_pad().
Cleanup pymalloc:
* Rename _PyObject_Alloc() to pymalloc_alloc()
* Rename _PyObject_FreeImpl() to pymalloc_free()
* Rename _PyObject_Realloc() to pymalloc_realloc()
* pymalloc_alloc() and pymalloc_realloc() no longer fall back on the
  raw allocator; the caller must now do it
* Add "success" and "failed" labels to pymalloc_alloc() and
pymalloc_free()
* pymalloc_alloc() and pymalloc_free() no longer update
  num_allocated_blocks: the caller should do it
* _PyObject_Calloc() is now responsible for filling the memory block
  allocated by pymalloc with zeros
* Simplify pymalloc_alloc() prototype
* _PyObject_Realloc() now calls _PyObject_Malloc() rather than calling
  pymalloc_alloc() directly
_PyMem_DebugRawAlloc() and _PyMem_DebugRawRealloc():
* document the layout of a memory block
* don't increase the serial number if the allocation failed
* check for integer overflow before computing the total size
* add a 'data' variable to make the code easier to follow
test_setallocators() of _testcapimodule.c now also tests the context.
Use the _PyTime_t type rather than double for the faulthandler
timeout in dump_traceback_later().
This change should fix the following Coverity warning:
CID 1420311: Incorrect expression (UNINTENDED_INTEGER_DIVISION)
Dividing integer expressions "9223372036854775807LL" and "1000LL",
and then converting the integer quotient to type "double". Any
remainder, or fractional part of the quotient, is ignored.
if ((timeout * 1e6) >= (double) PY_TIMEOUT_MAX) {
The warning comes from (double)PY_TIMEOUT_MAX with:
#define PY_TIMEOUT_MAX (PY_LLONG_MAX / 1000)
The startup refactoring means command line settings
are now applied after settings are read from the
environment.
This updates the way command line settings are applied
to account for that, ensures more settings are first read
from the environment in _PyInitializeCore, and adds a
simple test case covering the flags that are easy to check.
Fix the pthread+semaphore implementation of
PyThread_acquire_lock_timed() when called with timeout > 0 and
intr_flag=0: recompute the timeout if sem_timedwait() is interrupted
by a signal (EINTR).
See also the PEP 475.
The pthread implementation of PyThread_acquire_lock() now fails with
a fatal error if the timeout is larger than PY_TIMEOUT_MAX, as done
in the Windows implementation.
The check prevents any risk of overflow in PyThread_acquire_lock().
Add also PY_DWORD_MAX constant.
Rework the code choosing BLAKE2 code paths from using the optimized
variant on all x86_64 machines to using it when SSSE3 or better
instruction sets are available.
Firstly, this solves the problem of using the pure SSE2 code path on x86_64
machines. As reported in the bug, this code is slower than the reference
code on all tested x86_64 machines. Furthermore, on Athlon64 that lacks
SSSE3, it is even 2.5 times slower than the reference code! Checking
for SSSE3 therefore ensures that the optimized implementation will only
be used when it has a chance of performing better.
Secondly, this makes it possible to use SSSE3+ optimizations on 32-bit
x86 systems. This allows a speed gain of up to 2x on modern 32-bit
x86 systems (tested in a 32-bit chroot).
Fix the following Coverity warning:
>>> CID 1420038: Control flow issues (DEADCODE)
>>> Execution cannot reach this statement: "res = sem_trywait(self->han...".
321 res = sem_trywait(self->handle);
The dead code was introduced by commit
c872d39d32.
Fix timeout rounding in time.sleep(), threading.Lock.acquire() and
socket.socket.settimeout() to correctly round negative timeouts between
-1.0 and 0.0. The functions now block waiting for events as expected;
previously, the call was incorrectly non-blocking.
bpo-31803: time.clock() and time.get_clock_info('clock') now emit a
DeprecationWarning.
Replace time.clock() with time.perf_counter() in tests and demos.
Also remove the hasattr(time, 'monotonic') check in test_time, since
time.monotonic() is always available since Python 3.5.
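A small sketch of the new warning (assumes a Python version where
time.clock() still exists):

    import time, warnings
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        time.clock()
    print(caught[0].category)  # <class 'DeprecationWarning'>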
Always pass -1, or INFTIM where defined, to the poll() system call when
a negative timeout is passed to the poll.poll([timeout]) method in the
select module. Various OSes throw an error with arbitrary negative
values.
Freeze all the objects tracked by gc - move them to a permanent
generation and ignore them in all future collections. This can be used
before a POSIX fork() call to make the gc copy-on-write friendly or to
speed up collection.
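A rough sketch of the intended usage pattern (POSIX only; the child
body is a placeholder):

    import gc, os
    gc.freeze()          # move tracked objects to the permanent generation
    pid = os.fork()
    if pid == 0:
        # child: the frozen objects are ignored by collections, which keeps
        # the inherited memory pages shared with the parent
        os._exit(0)
    os.waitpid(pid, 0)
    gc.unfreeze()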
* Rewrite win_perf_counter() to only use integers internally.
* Add _PyTime_MulDiv() which computes "ticks * mul / div" in two parts
  (integer part and remainder) to prevent integer overflow.
* The clock frequency is checked at initialization for integer overflow.
* Also enhance pymonotonic() to reduce the precision loss on macOS
(mach_absolute_time() clock).
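A rough Python sketch of the split computation (an illustration of the
idea, not the C implementation):

    def muldiv(ticks, mul, div):
        # Avoid the intermediate ticks * mul overflow by splitting ticks:
        # ticks*mul//div == (ticks//div)*mul + (ticks%div)*mul//div
        intpart, remaining = divmod(ticks, div)
        return intpart * mul + remaining * mul // div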
time.clock() and time.perf_counter() now use a C double internally
again.
Also remove _PyTime_GetWinPerfCounterWithInfo(): use
_PyTime_GetPerfCounterDoubleWithInfo() instead on Windows.
* Separated function and constant descriptions into sections.
* Added a note about the limitations of timezone constants.
* Removed redundant lists from the module docstring.
See PEP 539 for details.
Highlights of changes:
- Add Thread Specific Storage (TSS) API
- Document the Thread Local Storage (TLS) API as deprecated
- Update code that used TLS API to use TSS API
While this is a rare potential failure (it requires swapping out zlib.decompress() itself and forcing it to return a non-bytes object), this change prevents a potential C-level assertion failure and substitutes an exception instead.
Thanks to Oren Milman for the patch.
Python requires that C implementations provide memmove(), so we shouldn't need to check for it. The only place using this configure check was expat, where we can simply always define HAVE_MEMMOVE.
* Maintain a list of BufferedWriter objects. Flush them on exit.
In Python 3, the buffer and the underlying file object are separate
and so the order in which objects are finalized matters. This is
unlike Python 2 where the file and buffer were a single object and
finalization was done for both at the same time. In Python 3, if
the file is finalized and closed before the buffer then the data in
the buffer is lost.
This change adds a doubly linked list of open file buffers. An atexit
hook ensures they are flushed before proceeding with interpreter
shutdown. This addition does not remove the need to properly close
files, as there are other reasons why buffered data could get lost
during finalization.
Initial patch by Armin Rigo.
* Use weakref.WeakSet instead of WeakKeyDictionary.
* Simplify buffered double-linked list types.
* In _flush_all_writers(), suppress errors from flush().
* Remove NEWS entry, use blurb.
* Take more care when flushing file buffers from atexit.
The previous implementation was not careful enough to avoid
causing issues in multi-threaded cases. Check for buf->ok
and buf->finalizing before actually doing the flush. Also,
increase the refcnt to ensure the object does not disappear.
Fix a memory corruption in getpath.c due to mixed memory allocators
between Py_GetPath() and Py_SetPath().
The fix uses the raw allocator to mimic the Windows version.
This patch should be applied from Python 3.6 to the current version.
For more details, see the bug report and
https://github.com/pyinstaller/pyinstaller/issues/2812
* bpo-31499, xml.etree: Fix xmlparser_gc_clear() crash
xml.etree: xmlparser_gc_clear() now sets self.parser to NULL to prevent a
crash in xmlparser_dealloc() if xmlparser_gc_clear() was called previously
by the garbage collector, because the parser was part of a reference cycle.
Co-Authored-By: Serhiy Storchaka <storchaka@gmail.com>
The concrete PyDict_* API is used to interact with PyInterpreterState.modules in a number of places. This isn't compatible with all dict subclasses, nor with other Mapping implementations. This patch switches the concrete API usage to the corresponding abstract API calls.
We also add a PyImport_GetModule() function (and some other helpers) to reduce a bunch of code duplication.
* Add Py_UNREACHABLE() as an alias to abort().
* Use Py_UNREACHABLE() instead of assert(0)
* Convert more unreachable code to use Py_UNREACHABLE()
* Document Py_UNREACHABLE() and a few other macros.
* Avoid calling "PyObject_GetAttrString()" (and potentially executing user code) with a live exception set.
* Ignore only AttributeError on attribute lookups in ElementTree.XMLParser() and propagate all other exceptions.
Cast Py_buffer.len (Py_ssize_t, signed) to size_t (unsigned) to
prevent the following warning:
Modules/_ssl.c:3089:21: warning: comparison between signed and
unsigned integer expressions [-Wsign-compare]
PR #1638, for bpo-28411, causes problems in some (very) edge cases. Until that gets sorted out, we're reverting the merge. PR #3506, a fix on top of #1638, is also getting reverted.
The SSL module now raises SSLCertVerificationError when OpenSSL fails to
verify the peer's certificate. The exception contains more information about
the error.
Original patch by Chi Hsuan Yen
Signed-off-by: Christian Heimes <christian@python.org>
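A hedged sketch of catching the new exception (the host name is only an
example of a server presenting a failing certificate, and network
access is required):

    import socket, ssl
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection(("expired.badssl.com", 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname="expired.badssl.com"):
                pass
    except ssl.SSLCertVerificationError as exc:
        print(exc.verify_code, exc.verify_message)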
* group the (stateful) runtime globals into various topical structs
* consolidate the topical structs under a single top-level _PyRuntimeState struct
* add a check-c-globals.py script that helps identify runtime globals
Other globals are excluded (see globals.txt and check-c-globals.py).
* bpo-29136: Add TLS 1.3 support
TLS 1.3 introduces a new, distinct set of cipher suites. The TLS 1.3
cipher suites don't overlap with cipher suites from TLS 1.2 and earlier.
Since Python sets its own set of permitted ciphers, TLS 1.3 handshake
will fail as soon as OpenSSL 1.1.1 is released. Let's enable the common
AES-GCM and ChaCha20 suites.
Additionally, the flag OP_NO_TLSv1_3 is added. It defaults to 0 (a
no-op) with OpenSSL prior to 1.1.1. This allows applications to opt out
of TLS 1.3 now.
Signed-off-by: Christian Heimes <christian@python.org>
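A minimal sketch of the opt-out flag (a no-op when built against
OpenSSL older than 1.1.1):

    import ssl
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.options |= ssl.OP_NO_TLSv1_3   # disable TLS 1.3 for this context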
* bpo-27584: Add vSockets to the Python socket module
  Support for AF_VSOCK on Linux only
* bpo-27584: Fixes for V2
Fixed syntax and naming problems.
Fixed #ifdef AF_VSOCK checking
Restored original aclocal.m4
* bpo-27584: Fixes for V3
Added checking for fcntl and thread modules.
* bpo-27584: Fixes for V4
Fixed whitespace error
* bpo-27584: Fixes for V5
Added back comma in (CID, port).
* bpo-27584: Fixes for V6
Added news file.
socket.rst now reflects first Linux introduction of AF_VSOCK.
Fixed get_cid in test_socket.py.
Replaced PyLong_FromLong with PyLong_FromUnsignedLong in socketmodule.c
Got rid of extra AF_VSOCK #define.
Added sockaddr_vm to sock_addr.
* bpo-27584: Fixes for V7
Minor cleanup.
* bpo-27584: Fixes for V8
Put back #undef AF_VSOCK as it is necessary when vm_sockets.h is not installed.
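A rough sketch of the new address family (Linux only; the port number
is an arbitrary example):

    import socket
    if hasattr(socket, "AF_VSOCK"):
        s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
        # VSOCK addresses are (CID, port) tuples
        s.connect((socket.VMADDR_CID_HOST, 1234))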
Add basic fuzz tests for a few common builtin functions.
This is an easy place to start, and these functions are probably safe.
We'll want to add more fuzz tests later; let's bootstrap using these.
While the fuzz tests are included in CPython and compiled / tested on a
very basic level inside CPython itself, the actual fuzzing happens as
part of oss-fuzz (https://github.com/google/oss-fuzz). The reason to
include the tests in CPython is to make sure that they're maintained
as part of the CPython project, especially when (as some eventually
will) they use internal implementation details in the test.
(This will be necessary sometimes because e.g. the fuzz test should
never enter Python's interpreter loop, whereas some APIs only expose
themselves publicly as Python functions.)
This particular set of changes is part of testing Python's builtins,
tracked internally at Google by b/37562550.
The _xxtestfuzz module that this change adds need not be shipped with binary distributions of Python.
SSLObject.version() now correctly returns None when handshake over BIO has
not been performed yet.
Signed-off-by: Christian Heimes <christian@python.org>
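A small sketch of the corrected behaviour before any handshake data has
been exchanged (the server hostname is a placeholder):

    import ssl
    ctx = ssl.create_default_context()
    incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
    obj = ctx.wrap_bio(incoming, outgoing, server_hostname="example.org")
    print(obj.version())   # None: no handshake has been performed yet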
* group the (stateful) runtime globals into various topical structs
* consolidate the topical structs under a single top-level _PyRuntimeState struct
* add a check-c-globals.py script that helps identify runtime globals
Other globals are excluded (see globals.txt and check-c-globals.py).
Include sys/sysmacros.h for major(), minor(), and makedev(). The GNU C
library plans to remove the functions from sys/types.h.
Signed-off-by: Christian Heimes <christian@python.org>
The ssl and hashlib modules now call OPENSSL_add_all_algorithms_noconf() on
OpenSSL < 1.1.0. The function detects CPU features and enables optimizations
on some CPU architectures such as POWER8. Patch is based on research from
Gustavo Serra Scalet.
Signed-off-by: Christian Heimes <christian@python.org>
* Maintain a list of BufferedWriter objects. Flush them on exit.
In Python 3, the buffer and the underlying file object are separate
and so the order in which objects are finalized matters. This is
unlike Python 2 where the file and buffer were a single object and
finalization was done for both at the same time. In Python 3, if
the file is finalized and closed before the buffer then the data in
the buffer is lost.
This change adds a doubly linked list of open file buffers. An atexit
hook ensures they are flushed before proceeding with interpreter
shutdown. This addition does not remove the need to properly close
files, as there are other reasons why buffered data could get lost
during finalization.
Initial patch by Armin Rigo.
* Use weakref.WeakSet instead of WeakKeyDictionary.
* Simplify buffered double-linked list types.
* In _flush_all_writers(), suppress errors from flush().
* Remove NEWS entry, use blurb.
* Change NPN detection:
Version breakdown, support disabled (pre-patch/post-patch):
- pre-1.0.1: OPENSSL_NPN_NEGOTIATED will not be defined -> False/False
- 1.0.1 and 1.0.2: OPENSSL_NPN_NEGOTIATED will not be defined ->
False/False
- 1.1.0+: OPENSSL_NPN_NEGOTIATED will be defined and
OPENSSL_NO_NEXTPROTONEG will be defined -> True/False
Version breakdown, support enabled (pre-patch/post-patch):
- pre-1.0.1: OPENSSL_NPN_NEGOTIATED will not be defined -> False/False
- 1.0.1 and 1.0.2: OPENSSL_NPN_NEGOTIATED will be defined and
OPENSSL_NO_NEXTPROTONEG will not be defined -> True/True
- 1.1.0+: OPENSSL_NPN_NEGOTIATED will be defined and
OPENSSL_NO_NEXTPROTONEG will not be defined -> True/True
* Refine NPN guard:
- If NPN is disabled, but ALPN is available we need our callback
- Make clinic's ssl behave the same way
This created a working ssl module for me, with NPN disabled and ALPN
enabled for OpenSSL 1.1.0f.
Concerns to address:
The initial commit for NPN support in OpenSSL [1] had the
OPENSSL_NPN_* variables defined inside the OPENSSL_NO_NEXTPROTONEG
guard. The question is whether that ever made it into a release.
This would need an ugly hack, something like:
#if defined(OPENSSL_NO_NEXTPROTONEG) && \
!defined(OPENSSL_NPN_NEGOTIATED)
# define OPENSSL_NPN_UNSUPPORTED 0
# define OPENSSL_NPN_NEGOTIATED 1
# define OPENSSL_NPN_NO_OVERLAP 2
#endif
[1] https://github.com/openssl/openssl/commit/68b33cc5c7
* Fixes #30581 by adding a path that uses the newer GetMaximumProcessorCount API in Windows calls to os.cpu_count()
* Add NEWS.d entry for bpo-30581, os.cpu_count on Windows.
* Tweak NEWS entry
Ctypes currently produces wrong pep3118 type codes for several types.
E.g. memoryview(ctypes.c_long()).format gives "<l" on 64-bit platforms,
but it should be "<q" instead for sizeof(c_long) == 8
The problem is that the '<>' endian specification in the struct syntax
also turns on the "standard size" mode, which makes type characters have
a platform-independent meaning, which does not match with the codes used
internally in ctypes. The struct module format syntax also does not
allow specifying native-size non-native-endian items.
This commit adds a converter function that maps the internal ctypes
codes to appropriate struct module standard-size codes in the pep3118
format strings. The tests are modified to check for this.
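A quick check of the corrected format strings (the exact code printed
depends on the platform's type sizes):

    import ctypes, struct
    mv = memoryview(ctypes.c_long())
    # With standard-size codes, the struct size matches the item size everywhere
    assert struct.calcsize(mv.format) == mv.itemsize
    print(mv.format)   # e.g. "<q" where sizeof(long) == 8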
* Added support for CAN_ISOTP protocol
* Added unit tests for CAN ISOTP
* Updated documentation for ISO-TP protocol
* Removed trailing whitespace in documentation
* Added blurb NEWS.d file
* updated Misc/ACKS
* Fixed broken unit test that was using isotp const outside of skippable section
* Removed dependency on a third-party project
* Added implementation for getsockname + unit tests
* Missing newline at end of ACKS file
* Accidentally inserted a typo in ACKS file
* Followed tiran changes review #1 recommendations
* Added spaces after comma
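A hedged sketch of the new protocol constant (Linux with ISO-TP support
only; the interface name and CAN identifiers are placeholders):

    import socket
    if hasattr(socket, "CAN_ISOTP"):
        s = socket.socket(socket.AF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP)
        # ISO-TP addresses are (interface, rx_addr, tx_addr) tuples
        s.bind(("vcan0", 0x123, 0x321))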
The current test_child_terminated_in_stopped_state() functional test
creates a child process which calls ptrace(PTRACE_TRACEME, 0, 0) and
then crashes (SIGSEGV). The problem is that calling os.waitpid() in the
parent process is not enough to close the process: the child process
remains alive, and so the unit test leaks a child process in a strange
state. Closing the child process requires non-trivial, maybe
platform-specific, code.
Remove the functional test and replace it with a unit test which mocks
os.waitpid() using a new _testcapi.W_STOPCODE() function to test the
WIFSTOPPED() path.
* Closes issue bpo-5288: Allow tzinfo objects with sub-minute offsets.
* bpo-5288: Implemented %z formatting of sub-minute offsets.
* bpo-5288: Removed mentions of the whole minute limitation on TZ offsets.
* bpo-5288: Removed one more mention of the whole minute limitation.
Thanks @csabella!
* Fix a formatting error in the docs
* Addressed review comments.
Thanks, @haypo.
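A small sketch of the new behaviour (the offset value is arbitrary):

    from datetime import datetime, timedelta, timezone
    tz = timezone(timedelta(hours=5, minutes=30, seconds=20))  # sub-minute offset
    print(datetime(2018, 1, 1, tzinfo=tz).strftime("%z"))      # +053020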
* Improve signal delivery
Avoid using Py_AddPendingCall from signal handler, to avoid calling signal-unsafe functions.
* Remove unused function
* Improve comments
* Use _Py_atomic API for concurrency-sensitive signal state
* Add blurb
If history-length is set in .inputrc, and the history file is double
the history size (or more), history_get(N) returns NULL and Python
segfaults. Fix that by checking for a NULL return value.
It seems that the root cause is incorrect handling of a larger history
in readline, but Python should not segfault even if readline returns an
unexpected value.
This issue affects only GNU readline; when using the libedit emulation,
the system history size option does not work.
* Improve signal delivery
Avoid using Py_AddPendingCall from signal handler, to avoid calling signal-unsafe functions.
* Remove unused function
* Improve comments
* Add stress test
* Adapt for --without-threads
* Add second stress test
* Add NEWS blurb
* Address comments @haypo
Based on patch by Victor Stinner.
Add private C API function _PyUnicode_AsUnicode() which is similar to
PyUnicode_AsUnicode(), but checks for null characters.
New error condition paths were introduced, which did not decrement
`key2` and `val2` objects. Therefore, decrement references before
jumping to the error label.
Signed-off-by: Eric N. Vander Weele <ericvw@gmail.com>
* Make PyTraceMalloc_Track() and PyTraceMalloc_Untrack() functions
public (remove the "_" prefix)
* Remove the _PyTraceMalloc_domain_t type: use unsigned int directly.
* Document methods
Note: methods are already tested in test_tracemalloc.
- removes PY_WARN_ON_C_LOCALE build time flag
- locale coercion and compatibility warnings are now always compiled
in, but are off by default
- adds PYTHONCOERCECLOCALE=warn runtime option to aid in
debugging potentially locale related compatibility problems
Due to not-yet-resolved test failures on *BSD systems (including
Mac OS X), this also temporarily disables UTF-8 as a locale coercion
target, and skips testing the interpreter's behavior in the POSIX locale.
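A rough sketch of using the new debugging option (any warning is
printed on the child's stderr; whether one appears depends on the
platform's locale support):

    import os, subprocess, sys
    env = dict(os.environ, LC_ALL="C", PYTHONCOERCECLOCALE="warn")
    subprocess.run([sys.executable, "-c", "pass"], env=env)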
When os.spawnv() fails while handling arguments, correctly free
argvlist: pass lastarg+1 rather than lastarg to free_string_array()
to also free the first item.
* bpo-29591: Upgrade Modules/expat to libexpat 2.2
* bpo-29591: Restore Python changes on expat
* bpo-29591: Remove expat config of unsupported platforms
Remove the configuration (Modules/expat/*config.h) of unsupported
platforms:
* Amiga
* MacOS Classic on PPC32
* Open Watcom
* bpo-29591: Remove useless XML_HAS_SET_HASH_SALT
The XML_HAS_SET_HASH_SALT define of Modules/expat/expat.h became
useless since our local expat copy was upgraded to expat 2.1 (it's now
expat 2.2.0).
When os.spawnve() fails while handling arguments, correctly free
argvlist: pass lastarg+1 rather than lastarg to free_string_array()
to also free the first item.
The functions '_PyArg_ParseStack()' and '_PyArg_UnpackStack()' were
failing (with the error "XXX() takes Y argument (Z given)") before the
function '_PyArg_NoStackKeywords()' was called. Thus, the latter did
not raise its more meaningful error: "XXX() takes no keyword
arguments".
Fix a reference leak in _io._WindowsConsoleIO: PyUnicode_FSDecoder()
always initializes decodedname when it succeeds, and it doesn't clear
the input decodedname object.
Error messages shown when passing keyword arguments to some builtins
that don't support keyword arguments contained doubled parentheses:
"()()". The regression was introduced by bpo-30534.
If a server_hostname= that fails IDNA decoding is passed to SSLContext.wrap_socket or SSLContext.wrap_bio, then the SSLContext object had a spurious Py_DECREF called on it, eventually leading to segfaults.
* bpo-30537: use PyNumber in itertools instead of PyLong
* bpo-30537: revert changes except to islice_new
* bpo-30537: test itertools.islice and add entry to Misc/NEWS
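A tiny sketch of the behaviour the new test covers:

    import itertools
    class Three:
        def __index__(self):
            return 3
    # islice() bounds may now be any object implementing __index__
    print(list(itertools.islice(range(10), Three())))   # [0, 1, 2]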
* Simplify X.509 extension handling code
The previous implementation had grown organically over time, as OpenSSL's API evolved.
* Delete even more code
* bpo-30557: faulthandler now correctly filters and displays exception codes on Windows
* Adds test for non-fatal exceptions.
* Adds bpo number to comment.
* bpo-30544: _io._WindowsConsoleIO.write raises the wrong error when WriteConsoleW fails
Rather than saving the Python object and calling PyObject_IsTrue()
every time the boolean argument is used, call it only once and save the
C boolean value.
* bpo-16500: Allow registering at-fork handlers
* Address Serhiy's comments
* Add doc for new C API
* Add doc for new Python-facing function
* Add NEWS entry + doc nit
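A minimal sketch of the new Python-facing registration (POSIX only; the
callbacks are placeholders):

    import os
    os.register_at_fork(
        before=lambda: print("about to fork"),
        after_in_parent=lambda: print("in parent after fork"),
        after_in_child=lambda: print("in child after fork"),
    )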
Drop handshake_done and peer_cert members from PySSLSocket struct. The
peer certificate can be acquired from *SSL directly.
SSL_get_peer_certificate() does not trigger any network activity.
Instead of manually tracking the handshake state, simply use
SSL_is_init_finished().
In combination these changes fix auto-handshake for non-blocking
MemoryBIO connections.
Signed-off-by: Christian Heimes <christian@python.org>
PEP 432 specifies a number of large changes to interpreter startup code, including exposing a cleaner C-API. The major changes depend on a number of smaller changes. This patch includes all those smaller changes.
If we have a chain of generators/coroutines that are 'yield from'ing
each other, then resuming the stack works like:
- call send() on the outermost generator
- this enters _PyEval_EvalFrameDefault, which re-executes the
YIELD_FROM opcode
- which calls send() on the next generator
- which enters _PyEval_EvalFrameDefault, which re-executes the
YIELD_FROM opcode
- ...etc.
However, every time we enter _PyEval_EvalFrameDefault, the first thing
we do is to check for pending signals, and if there are any then we
run the signal handler. And if it raises an exception, then we
immediately propagate that exception *instead* of starting to execute
bytecode. This means that e.g. a SIGINT at the wrong moment can "break
the chain" – it can be raised in the middle of our yield from chain,
with the bottom part of the stack abandoned for the garbage collector.
The fix is pretty simple: there's already a special case in
_PyEval_EvalFrameEx where it skips running signal handlers if the next
opcode is SETUP_FINALLY. (I don't see how this accomplishes anything
useful, but that's another story.) If we extend this check to also
skip running signal handlers when the next opcode is YIELD_FROM, then
that closes the hole – now the exception can only be raised at the
innermost stack frame.
This shouldn't have any performance implications, because the opcode
check happens inside the "slow path" after we've already determined
that there's a pending signal or something similar for us to process;
the vast majority of the time this isn't true and the new check
doesn't run at all.
Before, it was possible to get the following sequence of
events (especially on Windows, where the C-level signal handler for
SIGINT is run in a separate thread):
- SIGINT arrives
- trip_signal is called
- trip_signal writes to the wakeup fd
- the main thread wakes up from select()-or-equivalent
- the main thread checks for pending signals, but doesn't see any
- the main thread drains the wakeup fd
- the main thread goes back to sleep
- trip_signal sets is_tripped=1 and calls Py_AddPendingCall to notify
the main thread that it should run the Python-level signal handler
- the main thread doesn't notice because it's asleep
This has been causing repeated failures in the Trio test suite:
https://github.com/python-trio/trio/issues/119
When there is no more `await` or `yield (from)` before the return in a
coroutine, the cancel was ignored.
Example:

    async def coro():
        asyncio.Task.current_task().cancel()
        return 42
    ...
    res = await coro()  # should raise CancelledError
Compiled regular expression objects with the re.LOCALE flag no longer
depend on the locale at compile time. Only the locale at matching
time affects the result of matching.
FileIO.seek() and FileIO.tell() methods now set the internal seekable
attribute to avoid one syscall on open() (in buffered or text mode).
The seekable property is now also more reliable since its value is
set correctly on memory allocation failure.
bpo-28769 changed the PyUnicode_AsUTF8() return type from char* to
const char* in Python 3.7, but the tm_zone field of the tm structure is
char* on FreeBSD.
Cast the result of PyUnicode_AsUTF8() to char* in gettmarg() to fix the
warning:
Modules/timemodule.c:443:20: warning: assigning to 'char *'
from 'const char *' discards qualifiers
timegm() return type is time_t, not int. Use time_t to prevent the
following compiler warning on Windows:
timemodule.c: warning C4244: '=': conversion from 'time_t' to 'int',
possible loss of data
* bpo-30125: Cleanup faulthandler.c
* Use size_t type for iterators
* Add { ... }
* bpo-30125: Fix faulthandler.disable() on Windows
On Windows, faulthandler.disable() now removes the exception handler
installed by faulthandler.enable().
Only define the get_zone() and get_gmtoff() private functions in the
time module if these functions are needed to initialize the module.
The change fixes the following warnings on AIX:
Modules/timemodule.c:1175:1: warning: 'get_gmtoff' defined but not used [-Wunused-function]
Modules/timemodule.c:1164:1: warning: 'get_zone' defined but not used [-Wunused-function]
* Remove conditional on free of `dps`, since `dps` is now allocated for
all versions of OpenSSL
* Remove call to `x509_check_ca` since it was only used to cache
the `crldp` field of the certificate
CRL_DIST_POINTS_free is available in all supported versions of OpenSSL
(recent 0.9.8+) and LibreSSL.
* Implement math.remainder.
* Fix markup for arguments; use double spaces after period.
* Mark up function reference in what's new entry.
* Add comment explaining the calculation in the final branch.
* Fix out-of-order entry in whatsnew.
* Add comment explaining why it's good enough to compare m with c, in spite of possible rounding error.
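A quick worked example of the IEEE 754 remainder, x - n*y with n the
integer nearest to x/y (ties round n to even):

    import math
    print(math.remainder(7, 4))   # -1.0: nearest integer to 7/4 is 2, 7 - 2*4 = -1
    print(math.remainder(6, 4))   # -2.0: 6/4 == 1.5 is a tie, n rounds to the even 2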
Fix the use of recursion in itertools.chain.from_iterable. Using recursion
is unnecessary, and can easily cause stack overflows, especially when
building in low optimization modes or with Py_DEBUG enabled.
Element.getiterator() and the html parameter of XMLParser() were
deprecated only in the documentation (since Python 3.2 and 3.4
respectively). Now using them emits a deprecation warning.
* Don’t need check_warnings any more.
There were a few cases of using literal 0 instead of NULL in pointer
contexts. While this is legitimate C code, using NULL rather than 0
makes the code clearer.
* bpo-6532: Make the thread id an unsigned integer.
On the C API side, the result types of PyThread_start_new_thread() and
PyThread_get_thread_ident(), the id parameter of
PyThreadState_SetAsyncExc(), and the thread_id field of PyThreadState
changed from "long" to "unsigned long".
* Restore a check in thread_get_ident().
* Add _PyObject_HasFastCall()
* partial_call() now avoids temporary tuple to pass positional
arguments if the callable supports the FASTCALL calling convention
for positional arguments.
* Also fix a performance regression in partial_call() if the callable
  doesn't support FASTCALL.
Directory and zipfile execution previously added
the parent directory of the directory or zipfile
as sys.path[0] and then subsequently overwrote
it with the directory or zipfile itself.
This caused problems in isolated mode, as it
overwrote the "stdlib as a zip archive" entry
in sys.path, as the parent directory was
never added.
The attempted fix to that issue in bpo-29319
created the opposite problem in *non*-isolated
mode, by potentially leaving the parent
directory on sys.path instead of overwriting it.
This change fixes the root cause of the problem
by removing the whole "add-and-overwrite" dance
for sys.path[0], and instead simply never adding
the parent directory to sys.path in the first
place.
* bpo-26121: Use C library implementation for math functions:
tgamma(), lgamma(), erf() and erfc().
* Don't use tgamma() and lgamma() from libc on OS X.
sys.version and the platform module python_build(),
python_branch(), and python_revision() functions now use
git information rather than hg when building from a repo.
Based on original patches by Brett Cannon and Steve Dower.
* init commit, with initial tests for from_param and fields __set__ and __get__, and some additions to from_buffer and from_buffer_copy
* added the rest of tests and patches. probably only a first draft.
* removed trailing spaces
* replace ctype with ctypes in error messages
* change back from ctypes instance to ctype instance
The curses module used mkstemp() + fopen() to create a temporary file in
/tmp. The /tmp directory does not exist on Android. The tmpfile()
function simplifies the task a lot. It creates a temporary file in a
correct directory, takes care of cleanup and returns FILE*.
tmpfile is supported on all platforms (C89, POSIX 2001, Android,
Windows).
Signed-off-by: Christian Heimes <christian@python.org>
sock_addr_t is used to define the minimum size of any socket address on
the stack. Let's make sure that an AF_ALG address always fits. Coverity
complains because in theory, AF_ALG might be larger than any of the other
structs. In practice it already fits.
Closes Coverity CID 1398948, 1398949, 1398950
Signed-off-by: Christian Heimes <christian@python.org>
* Fixed bpo-29565: Corrected ctypes passing of large structs by value.
Added code and test to check that when a structure passed by value
is large enough to need to be passed by reference, a copy of the
original structure is passed. The callee updates the passed-in value,
and the test verifies that the caller's copy is unchanged. A similar
change was also added to the test added for bpo-20160 (that test was
passing, but the changes should guard against regressions).
* Reverted unintended whitespace changes.
bltinmodule.c: Added in b744ba1 and no longer necessary since d64e8a7
posixmodule.c: Added in d1cd4d4 and no longer necessary since efb00c0
pythonrun.c: Added in 73d538b and no longer necessary since d600951
sysmodule.c: Added in 5467d4c and no longer necessary since a2c17c5
Issue #29452: Use the METH_FASTCALL calling convention for the index(),
insert() and rotate() methods of collections.deque to avoid the creation
of a temporary tuple to pass positional arguments. Speedup on deque
methods:
* d.rotate(): 1.10x faster
* d.rotate(1): 1.24x faster
* d.insert(): 1.18x faster
* d.index(): 1.24x faster
Issue #29300: Rename struct.unpack() second parameter from "inputstr" to
"buffer", and use the Py_buffer type.
Also fix unit tests on struct.unpack() which passed a Unicode string
instead of a bytes string as the struct.unpack() second parameter. The
purpose of test_trailing_counter() is to test invalid format strings,
not to test the buffer parameter.
* The struct module now requires contiguous buffers.
* Convert most functions and methods of the _struct module to Argument Clinic
* Use "Py_buffer" type for the "buffer" argument. Argument Clinic is
responsible to create and release the Py_buffer object.
* Use "PyStructObject *" type for self to avoid explicit conversions.
* Add an unit test on the _struct.Struct.unpack_from() method to test passing
arguments as keywords.
* Rephrase docstrings.
* Rename "fmt" argument to "format" in docstrings and the documentation.
As a side effect, functions and methods which used the METH_VARARGS
calling convention, like struct.pack(), now use the METH_FASTCALL
calling convention, which avoids the creation of a temporary tuple to
pass positional arguments and so is faster. For example,
struct.pack("i", 1) becomes 1.56x faster (-36%)::
$ ./python -m perf timeit \
-s 'import struct; pack=struct.pack' 'pack("i", 1)' \
--compare-to=../default-ref/python
Median +- std dev: 119 ns +- 1 ns -> 76.8 ns +- 0.4 ns: 1.56x faster (-36%)
Significant (t=295.91)
Patch co-written with Serhiy Storchaka.
Issue #29286. Run Argument Clinic to get the new faster METH_FASTCALL calling
convention for functions using "boring" positional arguments.
Manually fix _elementtree: _elementtree_XMLParser_doctype() must remain
consistent with the clinic code.
* Indent versionchanged at method level, not class level
* Mark up ``--help`` to avoid generating an en dash
* Use forward slash in Unix command line with a dollar sign ($) prompt
Fix the time_hash() function: replace the DATE_xxx() macros with the
TIME_xxx() macros. Previously, the hash function used a wrong value for
microseconds if fold was set (equal to 1).