* bpo-32101: Add sys.flags.dev_mode flag
Also rename the "Developer mode" to the "Development mode".
* bpo-32101: Add PYTHONDEVMODE environment variable
Mention it in the development chapter.
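A brief usage sketch of the new flag and environment variable (output illustrative):

    # Hedged illustration: PYTHONDEVMODE=1 (or "-X dev") enables development mode,
    # which is reflected in the new sys.flags.dev_mode attribute.
    #
    #   $ PYTHONDEVMODE=1 python3 -c "import sys; print(sys.flags.dev_mode)"
    #   True
    import sys
    if sys.flags.dev_mode:
        print("development mode is enabled")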
* Fix _PyMem_SetupAllocators("debug"): always restore allocators to
the defaults, rather than only calling _PyMem_SetupDebugHooks().
* Add _PyMem_SetDefaultAllocator() helper to set the "default"
allocator.
* Add _PyMem_GetAllocatorsName(): get the name of the allocators
* main() now uses debug hooks on memory allocators if Py_DEBUG is
  defined, rather than calling malloc() directly
* Document default memory allocators in C API documentation
* _Py_InitializeCore() now fails with a fatal user error if
the PYTHONMALLOC value is an unknown memory allocator, instead of
failing with a fatal internal error.
* Add new tests on the PYTHONMALLOC environment variable
* Add support.with_pymalloc()
* Add the _testcapi.WITH_PYMALLOC constant and expose it as
support.with_pymalloc().
* sysconfig.get_config_var('WITH_PYMALLOC') doesn't work on Windows, so
replace it with support.with_pymalloc().
* pythoninfo: add _testcapi collector for pymem
* Py_Main() now calls Py_SetProgramName() earlier to be able to get
the program name in _PyMainInterpreterConfig_ReadEnv().
* Rename prog to program_name
* Rename progpath to program_name
Py_GetPath() and Py_Main() now call
_PyMainInterpreterConfig_ReadEnv() to share the same code to get
environment variables.
Changes:
* Add _PyMainInterpreterConfig_ReadEnv()
* Add _PyMainInterpreterConfig_Clear()
* Add _PyMem_RawWcsdup()
* _PyMainInterpreterConfig: rename pythonhome to home
* Rename _Py_ReadMainInterpreterConfig() to
_PyMainInterpreterConfig_Read()
* Use _Py_INIT_USER_ERR(), instead of _Py_INIT_ERR(), for decoding
errors: the user can fix the issue; it's not a bug in
Python. The same change was made in _Py_INIT_NO_MEMORY().
* Remove _Py_GetPythonHomeWithConfig()
* calculate_path() rewritten in Modules/getpath.c and PC/getpathp.c
* Move global variables into a new PyPathConfig structure.
* calculate_path():
* Split the huge calculate_path() function into subfunctions.
* Add PyCalculatePath structure to pass data between subfunctions.
* Document PyCalculatePath fields.
* Move cleanup code into a new calculate_free() subfunction
* calculate_init() now handles Py_DecodeLocale() failures properly
* calculate_path() is now atomic: PyPathConfig (path_config) is only
  replaced, all at once, on success.
* _Py_GetPythonHomeWithConfig() now returns an error on failure
* Add _Py_INIT_NO_MEMORY() helper: report a memory allocation failure
* Coding style fixes (PEP 7)
* Py_Main() now reads the PYTHONHOME environment variable
* Add _Py_GetPythonHomeWithConfig() private function
* Add _PyWarnings_InitWithConfig()
* init_filters() no longer gets the current core configuration from the
  current interpreter or Python thread. The configuration is now passed
  explicitly to _PyWarnings_InitWithConfig().
* _Py_InitializeCore() now fails on _PyWarnings_InitWithConfig()
failure.
* Pass configuration as constant
Changes:
* Py_Main() initializes _PyCoreConfig.module_search_path_env from
the PYTHONPATH environment variable.
* PyInterpreterState_New() now initializes core_config and config
fields
* Compute sys.path a little bit earlier in
_Py_InitializeMainInterpreter() and new_interpreter()
* Add _Py_GetPathWithConfig() private function.
Py_Main() now handles two more -X options:
* -X showrefcount: new _PyCoreConfig.show_ref_count field
* -X showalloccount: new _PyCoreConfig.show_alloc_count field
The developer mode (-X dev) now creates all default warnings filters so
that the filters are ordered correctly: always show ResourceWarning and
make BytesWarning depend on the -b option.
Write a functional test to make sure that ResourceWarning is logged
twice at the same location in developer mode.
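For illustration, running with "-X dev" makes an unclosed file visible (command and message text are illustrative only):

    # Hedged command-line illustration of the filters in development mode:
    #
    #   $ python3 -X dev -c "open('setup.py')"
    #   -c:1: ResourceWarning: unclosed file <_io.TextIOWrapper name='setup.py' ...>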
Add a new 'dev_mode' field to _PyCoreConfig.
Add a new "developer mode": new "-X dev" command line option to
enable debug checks at runtime.
Changes:
* Add unit tests for -X dev
* test_cmd_line: replace test.support with support.
* Fix _PyRuntimeState_Fini(): use the same memory allocator
  as _PyRuntimeState_Init().
* Fix _PyMem_GetDefaultRawAllocator()
Parse more env vars in Py_Main():
* Add more options to _PyCoreConfig:
* faulthandler
* tracemalloc
* importtime
* Move code to parse environment variables from _Py_InitializeCore()
to Py_Main(). This change fixes a regression from Python 3.6:
PYTHONUNBUFFERED is now read before calling pymain_init_stdio().
* _PyFaulthandler_Init() and _PyTraceMalloc_Init() now take an
argument to decide if the module has to be enabled at startup.
* tracemalloc_start() is now responsible for checking the maximum number
of frames.
Other changes:
* Cleanup Py_Main():
* Rename some pymain_xxx() subfunctions
* Add pymain_run_python() subfunction
* Cleanup Py_NewInterpreter()
* _PyInterpreterState_Enable() now reports failure
* init_hash_secret() now considers a pyurandom() failure as a "user
error": don't fail with abort().
* pymain_optlist_append() and pymain_strdup() now set err on memory
  allocation failure.
* Don't use "Python runtime" anymore to parse command line options or
to get environment variables: pymain_init() is now a strict
separation.
* Use an error message rather than "crashing" directly with
Py_FatalError(). Limit the number of calls to Py_FatalError(). It
prepares the code to handle errors more nicely later.
* Warnings options (-W, PYTHONWARNINGS) and "XOptions" (-X) are now
only added to the sys module once Python core is properly
initialized.
* _PyMain is now the clearly identified owner of important strings
  such as the warnings options, XOptions, and the "program name". The
  program name string is now properly freed at exit.
  pymain_free() is now responsible for freeing the "command" string.
* Rename most functions in Modules/main.c to use a "pymain_" prefix to
  avoid conflicts and ease debugging.
* Replace _Py_CommandLineDetails_INIT with memset(0)
* Reorder a lot of code to fix the initialization ordering. For
example, initializing standard streams now comes before parsing
PYTHONWARNINGS.
* Py_Main() now handles errors when adding warnings options and
XOptions.
* Add _PyMem_GetDefaultRawAllocator() private function.
* Cleanup _PyMem_Initialize(): remove useless global constants: move
them into _PyMem_Initialize().
* Call _PyRuntime_Initialize() as soon as possible:
_PyRuntime_Initialize() now returns an error message on failure.
* Add _PyInitError structure and following macros:
* _Py_INIT_OK()
* _Py_INIT_ERR(msg)
* _Py_INIT_USER_ERR(msg): "user" error, don't abort() in that case
* _Py_INIT_FAILED(err)
* Fix compilation of the socket module on NetBSD 8.
* Fix an assertion failure, or reading arbitrary data, when parsing
  an AF_BLUETOOTH address on NetBSD and DragonFly BSD.
* Fix other potential errors and make the code more reliable.
The kB (*kilo* byte) unit means 1000 bytes, whereas KiB ("kibibyte")
means 1024 bytes. KB was misused: replace kB or KB with KiB where
appropriate.
Same change for MB and GB, which become MiB and GiB.
Change the output of Tools/iobench/iobench.py.
Also round the size of the documentation from 5.5 MB to 5 MiB.
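For reference, the arithmetic behind the last change:

    # 1 kB = 1000 bytes, 1 KiB = 1024 bytes, 1 MiB = 1024**2 bytes.
    print(5.5 * 10**6 / 2**20)   # about 5.25, hence "5.5 MB" of docs is roughly 5 MiB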
All Blake2 params have to be encoded in little-endian byte order. For
the two multi-byte integer params, leaf_length and node_offset, that
means that assigning a native-endian integer to them appears to work on
little-endian platforms, but gives the wrong result on big-endian. The
current libb2 API doesn't make that very clear, and @sneves is working
on new API functions in the GH issue above. In the meantime, we can work
around the problem by explicitly assigning little-endian values to the
parameter block.
See https://github.com/BLAKE2/libb2/issues/12.
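A minimal Python sketch of the workaround idea; the field widths assume the BLAKE2b parameter block layout and the values are illustrative (the real fix is in the C module):

    import struct

    leaf_length = 0x11223344           # 4-byte parameter
    node_offset = 0x1122334455667788   # 8-byte parameter (BLAKE2b layout assumed)

    # Encode the multi-byte parameters explicitly as little-endian so the
    # parameter block bytes are identical on little- and big-endian hosts.
    params = struct.pack('<I', leaf_length) + struct.pack('<Q', node_offset)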
When a single .c file contains several functions and/or methods with
the same name, a safety _METHODDEF #define statement is generated
only for one of them.
This fixes the bug by using the full name of the function, rather than
just the name, to avoid duplicates.
Add new time functions:
* time.clock_gettime_ns()
* time.clock_settime_ns()
* time.monotonic_ns()
* time.perf_counter_ns()
* time.process_time_ns()
* time.time_ns()
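A brief usage sketch of the new nanosecond variants:

    import time

    t = time.time()          # float seconds since the epoch
    t_ns = time.time_ns()    # int nanoseconds since the epoch

    # The integer variant avoids the precision loss of scaling a large float
    # timestamp, e.g. int(time.time() * 1e9) can be off by hundreds of ns.
    print(t_ns, int(t * 1e9))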
Add new _PyTime functions:
* _PyTime_FromTimespec()
* _PyTime_FromNanosecondsObject()
* _PyTime_FromTimeval()
Other changes:
* Also add os.times() tests to test_os.
* pytime_fromtimeval() and pytime_fromtimeval() now return
_PyTime_MAX or _PyTime_MIN on overflow, rather than undefined
behaviour
* _PyTime_FromNanoseconds() parameter type changes from long long to
_PyTime_t
Replace occurrences of nested comments in the blake2 reference implementation
with a preprocessor directive for disabling unused code.
`blake2s-load-xop.h` is conditionally pulled in only on chips with XOP
support, among others the AMD Bulldozer. The malformed comments in the
source file break the build of `hashlib`'s `_blake2` on GCC 6.3.0.
The official reference code on GitHub uses `#if`, so this change should be
uncontroversial.
Modify the code to use the ncurses is_pad() function instead of checking the
WINDOW _flags field. If the platform does not provide is_pad(), the
existing check of the field is used instead.
Note: this change does not drop support for platforms that do not have
both the WINDOW _flags field and is_pad().
Cleanup pymalloc:
* Rename _PyObject_Alloc() to pymalloc_alloc()
* Rename _PyObject_FreeImpl() to pymalloc_free()
* Rename _PyObject_Realloc() to pymalloc_realloc()
* pymalloc_alloc() and pymalloc_realloc() no longer fall back on the raw
  allocator; the caller must now do it
* Add "success" and "failed" labels to pymalloc_alloc() and
pymalloc_free()
* pymalloc_alloc() and pymalloc_free() no longer update
  num_allocated_blocks: it should be done by the caller
* _PyObject_Calloc() is now responsible for filling the memory block
allocated by pymalloc with zeros
* Simplify pymalloc_alloc() prototype
* _PyObject_Realloc() now calls _PyObject_Malloc() rather than
calling pymalloc_alloc() directly
_PyMem_DebugRawAlloc() and _PyMem_DebugRawRealloc():
* document the layout of a memory block
* don't increase the serial number if the allocation failed
* check for integer overflow before computing the total size
* add a 'data' variable to make the code easier to follow
test_setallocators() of _testcapimodule.c now also tests the context.
Use the _PyTime_t type rather than double for the faulthandler
timeout in dump_traceback_later().
This change should fix the following Coverity warning:
CID 1420311: Incorrect expression (UNINTENDED_INTEGER_DIVISION)
Dividing integer expressions "9223372036854775807LL" and "1000LL",
and then converting the integer quotient to type "double". Any
remainder, or fractional part of the quotient, is ignored.
if ((timeout * 1e6) >= (double) PY_TIMEOUT_MAX) {
The warning comes from (double)PY_TIMEOUT_MAX with:
#define PY_TIMEOUT_MAX (PY_LLONG_MAX / 1000)
The startup refactoring means command line settings
are now applied after settings are read from the
environment.
This updates the way command line settings are applied
to account for that, ensures more settings are first read
from the environment in _PyInitializeCore, and adds a
simple test case covering the flags that are easy to check.
Fix the pthread+semaphore implementation of
PyThread_acquire_lock_timed() when called with timeout > 0 and
intr_flag=0: recompute the timeout if sem_timedwait() is interrupted
by a signal (EINTR).
See also PEP 475.
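A Python-level sketch of the retry pattern applied by the fix; the real code is in C, and the try_acquire helper here is hypothetical:

    import time

    def acquire_with_timeout(try_acquire, timeout):
        # Recompute the remaining timeout from a deadline whenever the wait
        # is interrupted by a signal (EINTR), as PEP 475 recommends.
        deadline = time.monotonic() + timeout
        while True:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                return False
            try:
                return try_acquire(remaining)
            except InterruptedError:
                continue  # EINTR: retry with the recomputed timeout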
The pthread implementation of PyThread_acquire_lock() now fails with
a fatal error if the timeout is larger than PY_TIMEOUT_MAX, as done
in the Windows implementation.
The check prevents any risk of overflow in PyThread_acquire_lock().
Also add the PY_DWORD_MAX constant.
Rework the code choosing BLAKE2 code paths from using the optimized
variant on all x86_64 machines to using it when SSSE3 or better
instruction sets are available.
Firstly, this solves the problem of using pure SSE2 code path on x86_64
machines. As reported in the bug, this code is slower than the reference
code on all tested x86_64 machines. Furthermore, on Athlon64 that lacks
SSSE3, it is even 2.5 times slower than the reference code! Checking
for SSSE3 therefore ensures that the optimized implementation will only
be used when it has a chance of performing better.
Secondly, this makes it possible to use SSSE3+ optimizations on 32-bit
x86 systems. This allows up to a 2x speed gain on modern 32-bit
x86 systems (tested in a 32-bit chroot).
Fix the following Coverity warning:
>>> CID 1420038: Control flow issues (DEADCODE)
>>> Execution cannot reach this statement: "res = sem_trywait(self->han...".
321 res = sem_trywait(self->handle);
The dead code was introduced by commit
c872d39d32.
Fix timeout rounding in time.sleep(), threading.Lock.acquire() and
socket.socket.settimeout() to correctly round negative timeouts between -1.0 and
0.0. The functions now block waiting for events as expected. Previously, the
call was incorrectly non-blocking.
bpo-31803: time.clock() and time.get_clock_info('clock') now emit a
DeprecationWarning.
Replace time.clock() with time.perf_counter() in tests and demos.
Also remove the hasattr(time, 'monotonic') check in test_time, since
time.monotonic() is always available since Python 3.5.
Always pass -1, or INFTIM where defined, to the poll() system call when
a negative timeout is passed to the poll.poll([timeout]) method in the
select module. Various OSes throw an error with arbitrary negative
values.
Freeze all the objects tracked by the GC: move them to a permanent generation
and ignore them in all future collections. This can be used before a POSIX
fork() call to make the gc copy-on-write friendly or to speed up collection.
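A usage sketch, assuming the new gc.freeze() API described above:

    import gc
    import os

    gc.freeze()       # move all currently tracked objects to the permanent generation
    pid = os.fork()   # frozen objects stay untouched, keeping pages copy-on-write friendly
    if pid == 0:
        # Child process: future collections ignore the frozen (permanent) generation.
        ...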
* Rewrite win_perf_counter() to only use integers internally.
* Add _PyTime_MulDiv(), which computes "ticks * mul / div"
  in two parts (integer part and remainder) to prevent integer
  overflow; see the sketch after this list.
* Clock frequency is checked at initialization for integer overflow.
* Also enhance pymonotonic() to reduce the precision loss on macOS
  (mach_absolute_time() clock).
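A minimal sketch of the two-part computation (Python shown for illustration; the actual helper is C):

    def muldiv(ticks, mul, div):
        # Compute ticks * mul / div without forming the full product:
        # split ticks into an integer part and a remainder first.
        intpart, remaining = divmod(ticks, div)
        return intpart * mul + (remaining * mul) // div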
time.clock() and time.perf_counter() now use C double
internally again.
Also remove _PyTime_GetWinPerfCounterWithInfo(): use
_PyTime_GetPerfCounterDoubleWithInfo() on Windows instead.
* Separated function and constant descriptions into sections.
* Added a note about the limitations of timezone constants.
* Removed redundant lists from the module docstring.
See PEP 539 for details.
Highlights of changes:
- Add Thread Specific Storage (TSS) API
- Document the Thread Local Storage (TLS) API as deprecated
- Update code that used TLS API to use TSS API
While this is a rare potential failure (it requires swapping out zlib.decompress() itself and forcing it to return a non-bytes object), this change prevents a potential C-level assertion failure and substitutes an exception instead.
Thanks to Oren Milman for the patch.
Python requires that C implementations provide memmove(), so we shouldn't need to check for it. The only place using this configure check was expat, where we can simply always define HAVE_MEMMOVE.
* Maintain a list of BufferedWriter objects. Flush them on exit.
In Python 3, the buffer and the underlying file object are separate
and so the order in which objects are finalized matters. This is
unlike Python 2 where the file and buffer were a single object and
finalization was done for both at the same time. In Python 3, if
the file is finalized and closed before the buffer then the data in
the buffer is lost.
This change adds a doubly linked list of open file buffers. An atexit
hook ensures they are flushed before proceeding with interpreter
shutdown. This addition does not remove the need to properly close
files, as there are other reasons why buffered data could get lost during
finalization.
Initial patch by Armin Rigo.
* Use weakref.WeakSet instead of WeakKeyDictionary.
* Simplify buffered double-linked list types.
* In _flush_all_writers(), suppress errors from flush().
* Remove NEWS entry, use blurb.
* Take more care when flushing file buffers from atexit.
The previous implementation was not careful enough to avoid
causing issues in multi-threaded cases. Check for buf->ok
and buf->finalizing before actually doing the flush. Also,
increase the refcnt to ensure the object does not disappear.
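A Python-level sketch of the approach; the actual implementation lives in the C _io module and the names below are illustrative:

    import atexit
    import weakref

    _writers = weakref.WeakSet()   # track open buffered writers without keeping them alive

    def register_writer(buf):
        _writers.add(buf)

    def _flush_all_writers():
        for buf in list(_writers):
            try:
                buf.flush()
            except Exception:
                pass   # suppress errors from flush(), as the change above does

    atexit.register(_flush_all_writers)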
Fix a memory corruption in getpath.c due to mixed memory allocators
between Py_GetPath() and Py_SetPath().
The fix uses the raw allocator to mimic the Windows version.
This patch should be applied from Python 3.6 to the current version.
For more details, see the bug report and
https://github.com/pyinstaller/pyinstaller/issues/2812
* bpo-31499, xml.etree: Fix xmlparser_gc_clear() crash
xml.etree: xmlparser_gc_clear() now sets self.parser to NULL to prevent a
crash in xmlparser_dealloc() if xmlparser_gc_clear() was called previously
by the garbage collector, because the parser was part of a reference cycle.
Co-Authored-By: Serhiy Storchaka <storchaka@gmail.com>
The concrete PyDict_* API is used to interact with PyInterpreterState.modules in a number of places. This isn't compatible with all dict subclasses, nor with other Mapping implementations. This patch switches the concrete API usage to the corresponding abstract API calls.
We also add a PyImport_GetModule() function (and some other helpers) to reduce a bunch of code duplication.
* Add Py_UNREACHABLE() as an alias to abort().
* Use Py_UNREACHABLE() instead of assert(0)
* Convert more unreachable code to use Py_UNREACHABLE()
* Document Py_UNREACHABLE() and a few other macros.
* Avoid calling "PyObject_GetAttrString()" (and potentially executing user code) with a live exception set.
* Ignore only AttributeError on attribute lookups in ElementTree.XMLParser() and propagate all other exceptions.
Cast Py_buffer.len (Py_ssize_t, signed) to size_t (unsigned) to
prevent the following warning:
Modules/_ssl.c:3089:21: warning: comparison between signed and
unsigned integer expressions [-Wsign-compare]
PR #1638, for bpo-28411, causes problems in some (very) edge cases. Until that gets sorted out, we're reverting the merge. PR #3506, a fix on top of #1638, is also getting reverted.
The SSL module now raises SSLCertVerificationError when OpenSSL fails to
verify the peer's certificate. The exception contains more information about
the error.
Original patch by Chi Hsuan Yen
Signed-off-by: Christian Heimes <christian@python.org>
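A hedged usage sketch; the host name is illustrative, and the verify_code and verify_message attributes carry the extra information:

    import socket
    import ssl

    ctx = ssl.create_default_context()
    try:
        with socket.create_connection(("expired.example.com", 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname="expired.example.com"):
                pass
    except ssl.SSLCertVerificationError as exc:
        print(exc.verify_code, exc.verify_message)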
* group the (stateful) runtime globals into various topical structs
* consolidate the topical structs under a single top-level _PyRuntimeState struct
* add a check-c-globals.py script that helps identify runtime globals
Other globals are excluded (see globals.txt and check-c-globals.py).
* bpo-29136: Add TLS 1.3 support
TLS 1.3 introduces a new, distinct set of cipher suites. The TLS 1.3
cipher suites don't overlap with cipher suites from TLS 1.2 and earlier.
Since Python sets its own set of permitted ciphers, the TLS 1.3 handshake
will fail as soon as OpenSSL 1.1.1 is released. Let's enable the common
AES-GCM and ChaCha20 suites.
Additionally, the OP_NO_TLSv1_3 flag is added. It defaults to 0 (a no-op) with
OpenSSL prior to 1.1.1. This allows applications to opt out of TLS 1.3
now.
Signed-off-by: Christian Heimes <christian@python.org>
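A brief opt-out sketch using the new flag:

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS)
    # Opt out of TLS 1.3; with OpenSSL older than 1.1.1 the flag is a no-op.
    ctx.options |= ssl.OP_NO_TLSv1_3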
* bpo-27584: Add vSockets (AF_VSOCK) to the Python socket module
Support for AF_VSOCK on Linux only; see the usage sketch after the revision notes below.
* bpo-27584: Fixes for V2
Fixed syntax and naming problems.
Fixed #ifdef AF_VSOCK checking
Restored original aclocal.m4
* bpo-27584: Fixes for V3
Added checking for fcntl and thread modules.
* bpo-27584: Fixes for V4
Fixed white space error
* bpo-27584: Fixes for V5
Added back comma in (CID, port).
* bpo-27584: Fixes for V6
Added news file.
socket.rst now reflects first Linux introduction of AF_VSOCK.
Fixed get_cid in test_socket.py.
Replaced PyLong_FromLong with PyLong_FromUnsignedLong in socketmodule.c
Got rid of extra AF_VSOCK #define.
Added sockaddr_vm to sock_addr.
* bpo-27584: Fixes for V7
Minor cleanup.
* bpo-27584: Fixes for V8
Put back #undef AF_VSOCK as it is necessary when vm_sockets.h is not installed.
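A hedged usage sketch of the new address family (Linux only; CID and port values are illustrative):

    import socket

    # AF_VSOCK addresses are (CID, port) tuples.
    s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    s.connect((socket.VMADDR_CID_HOST, 1234))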
Add basic fuzz tests for a few common builtin functions.
This is an easy place to start, and these functions are probably safe.
We'll want to add more fuzz tests later. Let's bootstrap using these.
While the fuzz tests are included in CPython and compiled / tested on a
very basic level inside CPython itself, the actual fuzzing happens as
part of oss-fuzz (https://github.com/google/oss-fuzz). The reason to
include the tests in CPython is to make sure that they're maintained
as part of the CPython project, especially when (as some eventually
will) they use internal implementation details in the test.
(This will be necessary sometimes because e.g. the fuzz test should
never enter Python's interpreter loop, whereas some APIs only expose
themselves publicly as Python functions.)
This particular set of changes is part of testing Python's builtins,
tracked internally at Google by b/37562550.
The _xxtestfuzz module that this change adds need not be shipped with binary distributions of Python.
SSLObject.version() now correctly returns None when handshake over BIO has
not been performed yet.
Signed-off-by: Christian Heimes <christian@python.org>
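A brief illustration of the corrected behaviour:

    import ssl

    ctx = ssl.create_default_context()
    obj = ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(), server_hostname="example.com")
    print(obj.version())   # None: no handshake has been performed over the BIOs yet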
* group the (stateful) runtime globals into various topical structs
* consolidate the topical structs under a single top-level _PyRuntimeState struct
* add a check-c-globals.py script that helps identify runtime globals
Other globals are excluded (see globals.txt and check-c-globals.py).
Include sys/sysmacros.h for major(), minor(), and makedev(). The GNU C library
plans to remove these functions from sys/types.h.
Signed-off-by: Christian Heimes <christian@python.org>
The ssl and hashlib modules now call OPENSSL_add_all_algorithms_noconf() on
OpenSSL < 1.1.0. The function detects CPU features and enables optimizations
on some CPU architectures such as POWER8. Patch is based on research from
Gustavo Serra Scalet.
Signed-off-by: Christian Heimes <christian@python.org>
* Maintain a list of BufferedWriter objects. Flush them on exit.
In Python 3, the buffer and the underlying file object are separate
and so the order in which objects are finalized matters. This is
unlike Python 2 where the file and buffer were a single object and
finalization was done for both at the same time. In Python 3, if
the file is finalized and closed before the buffer then the data in
the buffer is lost.
This change adds a doubly linked list of open file buffers. An atexit
hook ensures they are flushed before proceeding with interpreter
shutdown. This addition does not remove the need to properly close
files, as there are other reasons why buffered data could get lost during
finalization.
Initial patch by Armin Rigo.
* Use weakref.WeakSet instead of WeakKeyDictionary.
* Simplify buffered double-linked list types.
* In _flush_all_writers(), suppress errors from flush().
* Remove NEWS entry, use blurb.
* Change NPN detection:
Version breakdown, support disabled (pre-patch/post-patch):
- pre-1.0.1: OPENSSL_NPN_NEGOTIATED will not be defined -> False/False
- 1.0.1 and 1.0.2: OPENSSL_NPN_NEGOTIATED will not be defined ->
False/False
- 1.1.0+: OPENSSL_NPN_NEGOTIATED will be defined and
OPENSSL_NO_NEXTPROTONEG will be defined -> True/False
Version breakdown, support enabled (pre-patch/post-patch):
- pre-1.0.1: OPENSSL_NPN_NEGOTIATED will not be defined -> False/False
- 1.0.1 and 1.0.2: OPENSSL_NPN_NEGOTIATED will be defined and
OPENSSL_NO_NEXTPROTONEG will not be defined -> True/True
- 1.1.0+: OPENSSL_NPN_NEGOTIATED will be defined and
OPENSSL_NO_NEXTPROTONEG will not be defined -> True/True
* Refine NPN guard:
- If NPN is disabled, but ALPN is available we need our callback
- Make clinic's ssl behave the same way
This created a working ssl module for me, with NPN disabled and ALPN
enabled for OpenSSL 1.1.0f.
Concerns to address:
The initial commit for NPN support in OpenSSL [1] had the
OPENSSL_NPN_* variables defined inside the OPENSSL_NO_NEXTPROTONEG
guard. The question is whether that ever made it into a release.
This would need an ugly hack, something like:
#if defined(OPENSSL_NO_NEXTPROTONEG) && \
!defined(OPENSSL_NPN_NEGOTIATED)
# define OPENSSL_NPN_UNSUPPORTED 0
# define OPENSSL_NPN_NEGOTIATED 1
# define OPENSSL_NPN_NO_OVERLAP 2
#endif
[1] https://github.com/openssl/openssl/commit/68b33cc5c7
* Fixes #30581 by adding a code path that uses the newer GetMaximumProcessorCount API for os.cpu_count() calls on Windows
* Add NEWS.d entry for bpo-30581, os.cpu_count on Windows.
* Tweak NEWS entry
Ctypes currently produces wrong PEP 3118 type codes for several types.
E.g. memoryview(ctypes.c_long()).format gives "<l" on 64-bit platforms,
but it should be "<q" instead when sizeof(c_long) == 8.
The problem is that the '<>' endian specification in the struct syntax
also turns on "standard size" mode, which gives type characters a
platform-independent meaning that does not match the codes used
internally in ctypes. The struct module format syntax also does not
allow specifying native-size, non-native-endian items.
This commit adds a converter function that maps the internal ctypes
codes to appropriate struct module standard-size codes in the PEP 3118
format strings. The tests are modified to check for this.
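A quick check illustrating the fix, on a platform where sizeof(long) == 8:

    import ctypes
    import struct

    fmt = memoryview(ctypes.c_long()).format   # "<q" after the fix where long is 8 bytes
    assert struct.calcsize(fmt) == ctypes.sizeof(ctypes.c_long)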
* Added support for CAN_ISOTP protocol (see the usage sketch after this list)
* Added unit tests for CAN ISOTP
* Updated documentation for ISO-TP protocol
* Removed trailing whitespace in documentation
* Added blurb NEWS.d file
* updated Misc/ACKS
* Fixed broken unit test that was using isotp const outside of skippable section
* Removed dependency on third party project
* Added implementation for getsockname + unit tests
* Missing newline at end of ACKS file
* Accidentally inserted a typo in ACKS file
* Followed tiran changes review #1 recommendations
* Added spaces after comma
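A hedged usage sketch; the interface name and CAN IDs are illustrative, and the bind address layout is assumed from the ISO-TP addressing scheme:

    import socket

    s = socket.socket(socket.AF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP)
    # Assumed address layout: (interface, rx_id, tx_id)
    s.bind(('vcan0', 0x123, 0x456))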
The current test_child_terminated_in_stopped_state() test creates
a child process which calls ptrace(PTRACE_TRACEME, 0, 0) and
then crashes (SIGSEGV). The problem is that calling os.waitpid() in the
parent process is not enough to close the process: the child process
remains alive and so the unit test leaks a child process in a
strange state. Closing the child process requires non-trivial code,
maybe platform specific.
Remove the functional test and replace it with a unit test which
mocks os.waitpid() using a new _testcapi.W_STOPCODE() function to
test the WIFSTOPPED() path.
* Closes issue bpo-5288: Allow tzinfo objects with sub-minute offsets (see the example after this list).
* bpo-5288: Implemented %z formatting of sub-minute offsets.
* bpo-5288: Removed mentions of the whole minute limitation on TZ offsets.
* bpo-5288: Removed one more mention of the whole minute limitation.
Thanks @csabella!
* Fix a formatting error in the docs
* Addressed review comments.
Thanks, @haypo.
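A brief example of what the change allows (values illustrative):

    from datetime import datetime, timedelta, timezone

    # A sub-minute UTC offset such as +05:30:30 is now accepted.
    tz = timezone(timedelta(hours=5, minutes=30, seconds=30))
    print(datetime(2017, 1, 1, tzinfo=tz).strftime('%z'))   # +053030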
* Improve signal delivery
Avoid using Py_AddPendingCall from signal handler, to avoid calling signal-unsafe functions.
* Remove unused function
* Improve comments
* Use _Py_atomic API for concurrency-sensitive signal state
* Add blurb
If history-length is set in .inputrc, and the history file is double the
history size (or more), history_get(N) returns NULL, and Python
segfaults. Fix that by checking for a NULL return value.
It seems that the root cause is incorrect handling of a bigger history in
readline, but Python should not segfault even if readline returns an
unexpected value.
This issue only affects GNU readline. When using the libedit emulation,
the system history size option does not work.
* Improve signal delivery
Avoid using Py_AddPendingCall from signal handler, to avoid calling signal-unsafe functions.
* Remove unused function
* Improve comments
* Add stress test
* Adapt for --without-threads
* Add second stress test
* Add NEWS blurb
* Address comments @haypo
Based on patch by Victor Stinner.
Add private C API function _PyUnicode_AsUnicode() which is similar to
PyUnicode_AsUnicode(), but checks for null characters.
New error condition paths were introduced, which did not decrement
`key2` and `val2` objects. Therefore, decrement references before
jumping to the error label.
Signed-off-by: Eric N. Vander Weele <ericvw@gmail.com>
* Make PyTraceMalloc_Track() and PyTraceMalloc_Untrack() functions
public (remove the "_" prefix)
* Remove the _PyTraceMalloc_domain_t type: use unsigned int
  directly.
* Document methods
Note: methods are already tested in test_tracemalloc.
- removes PY_WARN_ON_C_LOCALE build time flag
- locale coercion and compatibility warnings are now always compiled
in, but are off by default
- adds PYTHONCOERCECLOCALE=warn runtime option to aid in
  debugging potentially locale-related compatibility problems
Due to not-yet-resolved test failures on *BSD systems (including
Mac OS X), this also temporarily disables UTF-8 as a locale coercion
target, and skips testing the interpreter's behavior in the POSIX locale.
When os.spawnv() fails while handling arguments, correctly free
argvlist: pass lastarg+1 rather than lastarg to free_string_array()
to also free the first item.
* bpo-29591: Upgrade Modules/expat to libexpat 2.2
* bpo-29591: Restore Python changes on expat
* bpo-29591: Remove expat config of unsupported platforms
Remove the configuration (Modules/expat/*config.h) of unsupported
platforms:
* Amiga
* MacOS Classic on PPC32
* Open Watcom
* bpo-29591: Remove useless XML_HAS_SET_HASH_SALT
The XML_HAS_SET_HASH_SALT define of Modules/expat/expat.h became
useless since our local expat copy was upgraded to expat 2.1 (it's now
expat 2.2.0).
When os.spawnve() fails while handling arguments, correctly free
argvlist: pass lastarg+1 rather than lastarg to free_string_array()
to also free the first item.
The functions '_PyArg_ParseStack()' and
'_PyArg_UnpackStack()' were failing (with the error
"XXX() takes Y argument (Z given)") before
the function '_PyArg_NoStackKeywords()' was called.
Thus, the latter did not raise its more meaningful
error: "XXX() takes no keyword arguments".
Fix a reference leak in _io._WindowsConsoleIO: PyUnicode_FSDecoder()
always initializes decodedname when it succeeds, and it doesn't clear
the input decodedname object.
Error messages when passing keyword arguments to some builtins that
don't support keyword arguments contained doubled parentheses: "()()".
The regression was introduced by bpo-30534.
If a server_hostname= that fails IDNA decoding is passed to SSLContext.wrap_socket or SSLContext.wrap_bio, then the SSLContext object had a spurious Py_DECREF called on it, eventually leading to segfaults.
* bpo-30537: use PyNumber in itertools instead of PyLong
* bpo-30537: revert changes except to islice_new
* bpo-30537: test itertools.islice and add entry to Misc/NEWS
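A brief illustration of what the PyNumber conversion enables, assuming it goes through __index__:

    import itertools

    class IntLike:
        def __index__(self):
            return 3

    # Integer-like objects (e.g. NumPy integers) are now accepted as islice arguments.
    print(list(itertools.islice(range(10), IntLike())))   # [0, 1, 2]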
* Simplify X.509 extension handling code
The previous implementation had grown organically over time, as OpenSSL's API evolved.
* Delete even more code
* bpo-30557: faulthandler now correctly filters and displays exception codes on Windows
* Adds test for non-fatal exceptions.
* Adds bpo number to comment.
* bpo-30544: _io._WindowsConsoleIO.write raises the wrong error when WriteConsoleW fails
Rather than saving the Python object and calling PyObject_IsTrue()
every time the boolean argument is used, call it only once and
save the C boolean value.
* bpo-16500: Allow registering at-fork handlers
* Address Serhiy's comments
* Add doc for new C API
* Add doc for new Python-facing function
* Add NEWS entry + doc nit
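A brief usage sketch of the new Python-facing function:

    import os

    def before():
        print("about to fork")

    def after_in_child():
        print("running in the child")

    # Handlers are keyword-only; any subset may be registered.
    os.register_at_fork(before=before, after_in_child=after_in_child)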