Replace occurrence of nested comments in the blake2 reference implementation
with a preprocessor directive for disabling unused code.
`blake2s-load-xop.h` is conditionally pulled in only on chips with XOP
support, among them the AMD Bulldozer. The malformed comments in the
source file break the build of `hashlib`'s `_blake2` on GCC 6.3.0.
The official reference code on GitHub uses `#if`, so this change should be
uncontroversial.
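For illustration only (a generic sketch, not the actual blake2 source; the
identifier is made up), the preprocessor form avoids nested comments entirely:

    /* Commenting out a region that already contains block comments creates
       nested comments, which is not valid C.  Disabling the region with the
       preprocessor is portable: */
    #if 0
    buf = load_msg_alt(m0, m1);   /* hypothetical disabled alternative */
    #endif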
Modify the code to use the ncurses is_pad() function instead of checking the
WINDOW _flags field. If the platform does not provide is_pad(), the existing
check of the field is used instead.
Note: this change does not drop support for platforms that do not have
both the WINDOW _flags field and is_pad().
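A minimal sketch of the pattern, assuming an autoconf-style HAVE_CURSES_IS_PAD
macro is defined when is_pad() is available (the helper name is illustrative):

    #include <curses.h>

    /* Prefer the public ncurses accessor; fall back to the private
       _flags field when is_pad() is missing. */
    static int
    py_is_pad(WINDOW *win)
    {
    #if defined(HAVE_CURSES_IS_PAD)
        return is_pad(win);
    #else
        return (win->_flags & _ISPAD);
    #endif
    }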
Cleanup pymalloc:
* Rename _PyObject_Alloc() to pymalloc_alloc()
* Rename _PyObject_FreeImpl() to pymalloc_free()
* Rename _PyObject_Realloc() to pymalloc_realloc()
* pymalloc_alloc() and pymalloc_realloc() no longer fall back on the raw
  allocator; the caller must now do that (see the sketch after this list)
* Add "success" and "failed" labels to pymalloc_alloc() and
pymalloc_free()
* pymalloc_alloc() and pymalloc_free() no longer update
  num_allocated_blocks: the caller should do it
* _PyObject_Calloc() is now responsible for filling the memory block
  allocated by pymalloc with zeros
* Simplify pymalloc_alloc() prototype
* _PyObject_Realloc() now calls _PyObject_Malloc() rather than calling
  pymalloc_alloc() directly
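Rough sketch of the resulting caller-side pattern (simplified; names and
prototypes may differ from the actual obmalloc.c code):

    static void *
    _PyObject_Malloc(void *ctx, size_t nbytes)
    {
        /* Fast path: try pymalloc first. */
        void *ptr = pymalloc_alloc(ctx, nbytes);
        if (ptr != NULL) {
            num_allocated_blocks++;   /* accounting is now done by the caller */
            return ptr;
        }
        /* pymalloc declined (e.g. request too large): fall back on the raw
           allocator, which is now also the caller's job. */
        ptr = PyMem_RawMalloc(nbytes);
        if (ptr != NULL) {
            num_allocated_blocks++;
        }
        return ptr;
    }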
_PyMem_DebugRawAlloc() and _PyMem_DebugRawRealloc():
* document the layout of a memory block
* don't increase the serial number if the allocation failed
* check for integer overflow before computing the total size (a sketch
  follows this list)
* add a 'data' variable to make the code easier to follow
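Minimal sketch of the overflow check, where SST is the size of a size_t and
4*SST is the debug header plus trailer added around the user data (simplified
from the debug hook):

    if (nbytes > (size_t)PY_SSIZE_T_MAX - 4 * SST) {
        /* integer overflow: the total cannot be represented; let the
           caller report the failure (e.g. raise MemoryError) */
        return NULL;
    }
    total = nbytes + 4 * SST;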
test_setallocators() of _testcapimodule.c now also tests the context.
test_curses now saves/restores signals. On FreeBSD, the curses module
sets handlers for some signals but does not restore the old handlers when
the module is deinitialized.
Editor and output windows only see an empty last prompt line.
This simplifies the code and fixes a minor bug when a newline is inserted.
sys.ps1, if present, is read on Shell start-up, but is not set or changed.
Use the _PyTime_t type rather than double for the faulthandler
timeout in dump_traceback_later().
This change should fix the following Coverity warning:
CID 1420311: Incorrect expression (UNINTENDED_INTEGER_DIVISION)
Dividing integer expressions "9223372036854775807LL" and "1000LL",
and then converting the integer quotient to type "double". Any
remainder, or fractional part of the quotient, is ignored.
if ((timeout * 1e6) >= (double) PY_TIMEOUT_MAX) {
The warning comes from (double)PY_TIMEOUT_MAX with:
#define PY_TIMEOUT_MAX (PY_LLONG_MAX / 1000)
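Sketch of the idea (simplified; the rounding mode and exact error handling
may differ from the actual faulthandler code): compare integer microseconds
against PY_TIMEOUT_MAX instead of converting it to double.

    /* Keep the arithmetic in _PyTime_t (integer nanoseconds) and compare
       integer microseconds, so no integer quotient is silently turned
       into a double. */
    timeout_us = _PyTime_AsMicroseconds(timeout, _PyTime_ROUND_CEILING);
    if (timeout_us > PY_TIMEOUT_MAX) {
        PyErr_SetString(PyExc_OverflowError, "timeout value is too large");
        return NULL;
    }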
Only declaring these as interns inside the CLI's main C module caused
build problems on some platforms (notably Cygwin), so this change
switches them to regular underscore-prefixed "private" C API
declarations.
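Generic illustration of the declaration pattern (the symbol name below is
hypothetical): declare the object once in a shared header with PyAPI_DATA()
and define it in exactly one C file, so every platform's toolchain sees a
consistent linkage.

    /* In a shared header: */
    PyAPI_DATA(PyObject *) _Py_ExampleInternedString;   /* hypothetical name */

    /* In exactly one .c file: */
    PyObject *_Py_ExampleInternedString = NULL;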
This kludge is from 1992. Any C99 compiler is going to be able to handle the
ceval dispatch switch.
Anyway, we have much bigger switches than the ceval dispatch one around. (See,
e.g., Objects/unicodetype_db.h.)
The startup refactoring means command line settings
are now applied after settings are read from the
environment.
This updates the way command line settings are applied
to account for that, ensures more settings are first read
from the environment in _PyInitializeCore, and adds a
simple test case covering the flags that are easy to check.
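Illustrative pseudo-code only (the helper names are made up, not the actual
_PyInitializeCore internals): the environment is read first, then the command
line is applied on top so that explicit options win.

    config_read_environment(&core_config);                /* PYTHONHOME, PYTHONPATH, ... */
    config_apply_command_line(&core_config, argc, argv);  /* explicit options override */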
Fix the pthread+semaphore implementation of
PyThread_acquire_lock_timed() when called with timeout > 0 and
intr_flag=0: recompute the timeout if sem_timedwait() is interrupted
by a signal (EINTR).
See also PEP 475.
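Simplified sketch of the retry loop (trimmed; the initial timespec setup and
error handling are omitted):

    deadline = _PyTime_GetMonotonicClock() + timeout;
    while (1) {
        status = sem_timedwait(thelock, &ts);
        /* Stop on success, on a real error, or when the caller wants the
           wait to be interruptible (intr_flag=1). */
        if (intr_flag || status == 0 || errno != EINTR) {
            break;
        }
        /* Interrupted by a signal (EINTR, PEP 475): recompute the remaining
           timeout against the monotonic deadline and retry. */
        timeout = deadline - _PyTime_GetMonotonicClock();
        if (timeout <= 0) {
            errno = ETIMEDOUT;   /* budget exhausted: report a timeout */
            break;
        }
        /* Rebuild the absolute CLOCK_REALTIME timespec for sem_timedwait(). */
        _PyTime_AsTimespec(_PyTime_GetSystemClock() + timeout, &ts);
    }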
The pthread implementation of PyThread_acquire_lock() now fails with
a fatal error if the timeout is larger than PY_TIMEOUT_MAX, as done
in the Windows implementation.
The check prevents any risk of overflow in PyThread_acquire_lock().
Also add the PY_DWORD_MAX constant.
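The check itself is small, roughly:

    /* A timeout above PY_TIMEOUT_MAX microseconds cannot be represented
       safely further down, so treat it as a programming error. */
    if (microseconds > PY_TIMEOUT_MAX) {
        Py_FatalError("Timeout larger than PY_TIMEOUT_MAX");
    }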