Exactly which locale requests end up giving you the "C" locale is
platform dependent. A blank locale and "POSIX" translate to "C" on most
Linux distros, but may not do so on other platforms, so this adjusts the
way the tests are structured to better account for that.
This is an initial step towards fixing the current test failure on
Cygwin (hence the issue reference).
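As a rough illustration (not part of the change itself), a minimal probe of how the same locale requests can resolve differently per platform:

# Minimal sketch: see how "", "C" and "POSIX" resolve on this platform.
# The results are platform dependent, which is exactly the point above.
import locale

for name in ("", "C", "POSIX"):
    try:
        resolved = locale.setlocale(locale.LC_CTYPE, name)
    except locale.Error:
        resolved = "<unsupported>"
    print(f"setlocale(LC_CTYPE, {name!r}) -> {resolved!r}")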
It no longer spends much time doing complex calculations and no
longer consumes much memory for creating large constants that will
be dropped later.
This also fixes bpo-21074.
bpo-32329, bpo-32030:
* The -R option now turns on hash randomization when the
PYTHONHASHSEED environment variable is set to 0; previously, the
option was ignored (see the sketch after this list).
* sys.flags.hash_randomization is now properly set to 0 when hash
randomization is turned off by PYTHONHASHSEED=0.
* _PyCoreConfig_ReadEnv() now reads the PYTHONHASHSEED environment
variable. _Py_HashRandomization_Init() now only applies the
configuration; it doesn't read PYTHONHASHSEED anymore.
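A minimal sketch of the resulting behaviour (probing via a child interpreter is just one way to observe it):

import os
import subprocess
import sys

env = dict(os.environ, PYTHONHASHSEED="0")
code = "import sys; print(sys.flags.hash_randomization)"

# PYTHONHASHSEED=0 turns hash randomization off and the flag now says so...
print(subprocess.check_output([sys.executable, "-c", code], env=env).decode().strip())

# ...while -R now turns randomization back on even with PYTHONHASHSEED=0.
print(subprocess.check_output([sys.executable, "-R", "-c", code], env=env).decode().strip())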
asyncio.get_event_loop() and, subsequently, asyncio._get_running_loop()
are among the most frequently executed functions in asyncio. They also
can't be sped up by third-party event loops like uvloop.
When implemented in C they become 4x faster.
* Add -X utf8 command line option, PYTHONUTF8 environment variable
and a new sys.flags.utf8_mode flag (see the sketch after this list).
* If the LC_CTYPE locale is "C" at startup, automatically enable the
UTF-8 mode.
* Add _winapi.GetACP(). encodings._alias_mbcs() now calls
_winapi.GetACP() to get the ANSI code page.
* locale.getpreferredencoding() now returns 'UTF-8' in the UTF-8
mode. As a side effect, open() now uses the UTF-8 encoding by
default in this mode.
* Py_DecodeLocale() and Py_EncodeLocale() now use the UTF-8 encoding
in the UTF-8 Mode.
* Update subprocess._args_from_interpreter_flags() to handle -X utf8.
* Skip some tests relying on the current locale if the UTF-8 mode is
enabled.
* Add test_utf8mode.py.
* _Py_DecodeUTF8_surrogateescape() gets a new optional parameter to
also return the length (number of wide characters).
* pymain_get_global_config() and pymain_set_global_config() now
always copy flag values, rather than only copying if the new value
is greater than the old value.
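A minimal sketch of how the new switches can be observed (the exact encoding names depend on the platform):

import os
import subprocess
import sys

code = ("import sys, locale; "
        "print(sys.flags.utf8_mode, locale.getpreferredencoding())")

# Enable the UTF-8 mode on the command line...
print(subprocess.check_output([sys.executable, "-X", "utf8", "-c", code]).decode())

# ...or through the new environment variable.
print(subprocess.check_output(
    [sys.executable, "-c", code],
    env=dict(os.environ, PYTHONUTF8="1"),
).decode())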
Rather than supporting dev mode directly in the warnings module, this
instead adjusts the initialisation code to add an extra 'default'
entry to sys.warnoptions when dev mode is enabled.
This ensures that dev mode behaves *exactly* as if `-Wdefault` had
been passed on the command line, including in the way it interacts
with `sys.warnoptions`, and with other command line flags like `-bb`.
Also fix bpo-20361: have the -b and -bb options take precedence over any
other warnings options.
Patch written by Nick Coghlan, with minor modifications by Victor Stinner.
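A minimal sketch of the guarantee this gives (dev mode and -Wdefault are expected to produce the same sys.warnoptions entry):

import subprocess
import sys

code = "import sys; print(sys.warnoptions)"

# Both invocations should show a 'default' entry in sys.warnoptions.
print(subprocess.check_output([sys.executable, "-X", "dev", "-c", code]).decode().strip())
print(subprocess.check_output([sys.executable, "-Wdefault", "-c", code]).decode().strip())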
The error messages in `object.__new__` and `object.__init__` now aim
to point the user more directly at the name of the class being instantiated
in cases where they *haven't* been overridden (on the assumption that
the actual problem is a missing `__new__` or `__init__` definition in the
class body).
When they *have* been overridden, the errors still report themselves as
coming from object, on the assumption that the problem is with the call
up to the base class in the method implementation, rather than with the
way the constructor is being called.
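A minimal illustration of the intended reporting (messages paraphrased, not verbatim):

class Point:
    pass          # neither __new__ nor __init__ is overridden

try:
    Point(1, 2)
except TypeError as exc:
    # The message should now mention Point() rather than object().
    print(exc)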
Reference siphash takes the keys as bytes, so it makes sense to byte-swap
when reifying the keys as 64-bit integers. However, Python's siphash takes
host-endian integers as input to start with.
Python now supports checking whether a bytecode cache file is up to date
using a hash of the source contents rather than volatile source metadata.
See the PEP for details.
While the idea is fairly straightforward, quite a lot of code had to be
modified due to the pervasiveness of pyc implementation details in the
codebase. Changes in this commit include:
- The core changes to importlib to understand how to read, validate, and
regenerate hash-based pycs.
- Support for generating hash-based pycs in py_compile and compileall.
- Modifications to our siphash implementation to support passing a custom
key. We then expose it to importlib through _imp.
- Updates to all places in the interpreter, standard library, and tests that
manually generate or parse pyc files to grok the new format.
- Support in the interpreter command line code for long options like
--check-hash-based-pycs.
- Tests and documentation for all of the above.
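As a rough sketch of the py_compile side, assuming the invalidation-mode spelling below matches the implementation and "example.py" stands in for a real source file:

import py_compile

# Write a hash-based pyc that the import system re-validates against the
# source hash (CHECKED_HASH) instead of mtime/size metadata.
py_compile.compile(
    "example.py",
    invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH,
)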
* Convert asyncio/tasks.py to async/await
* Convert asyncio/queues.py to async/await
* Convert asyncio/test_utils.py to async/await
* Convert asyncio/base_subprocess.py to async/await
* Convert asyncio/subprocess.py to async/await
* Convert asyncio/streams.py to async/await
* Fix comments
* Convert asyncio/locks.py to async/await
* Convert asyncio.sleep to async def
* Add a comment
* Add missing news
* Convert stubs from AbstractEventLoop to async functions
* Convert subprocess_shell/subprocess_exec
* Convert connect_read_pipe/connect_write_pipe to async/await syntax
* Convert create_datagram_endpoint
* Convert create_unix_server/create_unix_connection
* Get rid of old style coroutines in unix_events.py
* Convert selector_events.py to async/await
* Convert wait_closed and create_connection
* Drop redundant line
* Convert base_events.py
* Code cleanup
* Drop redundant comments
* Fix indentation
* Add explicit tests for compatibility between old and new coroutines
* Convert windows event loop to use async/await
* Fix double awaiting of async function
* Convert asyncio/locks.py
* Improve docstring
* Convert tests to async/await
* Convert more tests
* Convert more tests
* Convert more tests
* Convert tests
* Improve test
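The conversions listed above all apply the same mechanical pattern, roughly:

import asyncio

# Old style (what the converted code used to look like):
# @asyncio.coroutine
# def fetch():
#     yield from asyncio.sleep(1)

# New style (what it looks like after the conversion):
async def fetch():
    await asyncio.sleep(1)

asyncio.get_event_loop().run_until_complete(fetch())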
* Rather than raise TypeError, warn and call list() on the value.
* Fix tests, revise NEWS and whatsnew text.
* Revise documentation; a string is okay as well.
* Ensure 'requires' and 'obsoletes' are real lists.
* Test that requires and obsoletes are turned into lists.
When tk event handling is driven by IDLE's run loop, a confusing
and distracting queue.Empty traceback context is no longer added
to tk event exception tracebacks. The traceback is now the same
as when event handling is driven by user code. Patch based on
a suggestion by Serhiy Storchaka.
The original algorithm tried to delegate the folding to the tokens so
that those tokens whose folding rules differed could specify the
differences. However, this resulted in a lot of duplicated code because
most of the rules were the same.
The new algorithm moves all folding logic into a set of functions
external to the token classes, but puts the information about which
tokens can be folded in which ways on the tokens...with the exception of
mime-parameters, which are a special case (which was not even
implemented in the old folder).
This algorithm can still probably be improved and hopefully simplified
somewhat.
Note that some of the test expectations are changed. I believe the
changes are toward more desirable and consistent behavior: in general
when (re) folding a line the canonical version of the tokens is
generated, rather than preserving errors or extra whitespace.
Previously, CO_NOFREE was set in the compiler, which meant
it could end up being set incorrectly when code objects
were created directly. Setting it in the constructor based
on freevars and cellvars ensures it is always accurate,
regardless of how the code object is defined.
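A minimal sketch of what the flag is expected to reflect:

import inspect

def flat():
    return 1

def outer():
    x = 1
    def inner():
        return x      # has a free variable, so CO_NOFREE must be clear
    return inner

print(bool(flat.__code__.co_flags & inspect.CO_NOFREE))     # True
print(bool(outer().__code__.co_flags & inspect.CO_NOFREE))  # False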
The current behaviour of yield expressions inside comprehensions and
generator expressions is essentially an accident of implementation - it
arises implicitly from the way the compiler handles yield expressions inside
nested functions and generators.
Since the current behaviour wasn't deliberately designed, and is inherently
confusing, we're deprecating it, with no current plans to reintroduce it.
Instead, our advice will be to use a named nested generator definition
for cases where this behaviour is desired.
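A rough sketch of the recommended replacement:

# Deprecated: a yield expression inside a comprehension, e.g.
#     result = [(yield x) for x in data]
# Recommended replacement: an explicitly named nested generator.
def consume(data):
    def items():
        for x in data:
            yield x
    return list(items())

print(consume([1, 2, 3]))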
In _io_FileIO_readall_impl(), lseek() and _Py_fstat_noraise() were called
without releasing the GIL. This can cause all threads to hang indefinitely
when FileIO.read() is called and the NFS server is not accessible.
* Fixed saving bytearrays.
* Identical objects will be saved only once.
* Equal references will be loaded as identical objects.
* Added support for saving and loading recursive data structures.
When PyGILState_Ensure() is called in a non-Python thread before
PyEval_InitThreads(), only call PyEval_InitThreads() after calling
PyThreadState_New() to fix a crash.
Add a unit test in test_embed.
* bpo-32101: Add sys.flags.dev_mode flag
Also rename the "Developer mode" to the "Development mode".
* bpo-32101: Add PYTHONDEVMODE environment variable
Mention it in the development chapter.
* Add most_recent_first parameter to tracemalloc.Traceback.format to allow
reversing the order of the frames in the output
* Reversed default sorting of tracemalloc.Traceback frames
* Allowed negative limit, truncating from the other side.
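A minimal sketch of the new formatting options (parameter names taken from the list above):

import tracemalloc

tracemalloc.start(10)
data = [bytes(1000) for _ in range(100)]   # keep some allocations alive
snapshot = tracemalloc.take_snapshot()
stat = snapshot.statistics("traceback")[0]

# The default frame order was reversed by this change; most_recent_first=True
# flips it, and a negative limit truncates from the other side.
for line in stat.traceback.format(limit=-3, most_recent_first=True):
    print(line)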
``uuid.getnode()`` now preferentially returns universally administered MAC addresses if available, over locally administered MAC addresses. This gives a better guarantee of global uniqueness for UUIDs returned from ``uuid.uuid1()``. If only locally administered MAC addresses are available, the first such address found is returned.
Also improve internal code style by being explicit about ``return None`` rather than falling off the end of the function.
Improve the test robustness.
CPython migrated from CVS to Subversion, to Mercurial, and then to
Git. CVS and Subversion are no longer used to develop CPython.
* platform module: drop support for sys.subversion. The
sys.subversion attribute has been removed in Python 3.3.
* Remove Misc/svnmap.txt
* Remove Tools/scripts/svneol.py
* Remove Tools/scripts/treesync.py
* distutils.config: Use the PyPIRCCommand.realm attribute if set
* turtledemo: wait until the macOS osascript command completes, to avoid
creating a zombie process
* Tools/scripts/treesync.py: declare 'default_answer' and
'create_files' as globals to modify them with the command line
arguments. Previously, -y, -n, -f and -a options had no effect.
flake8 warning: "F841 local variable 'p' is assigned to but never
used".
Some parts of the C API are only relevant to larger
applications embedding CPython as a runtime engine.
The helpers to test those APIs are already separated
out into Programs/_testembed.c; this update moves
the associated test cases out into their own dedicated
test file.
Improve UUID1 MAC address calculation and related tests.
There are two bits in the MAC address that are relevant to UUID1. The first is the locally administered vs. universally administered bit (second least significant bit of the first octet). Physical network interfaces such as Ethernet ports and wireless adapters are always universally administered, but some interfaces, such as the one MacBook Pros use to communicate with their Touch Bar, are locally administered. The former are guaranteed to be globally unique, while the latter are demonstrably *not* globally unique and are in fact the same on every MBP with a Touch Bar. With this bit set, the MAC is locally administered; with it unset, it is universally administered.
The other bit is the multicast bit (least significant bit of the first octet). When no other MAC address can be found, RFC 4122 mandates that a random 48-bit number be generated. This randomly generated number *must* have the multicast bit set.
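A minimal sketch of the two bits, with helper names that are purely illustrative:

import uuid

def first_octet(node):
    # uuid.getnode() returns the 48-bit MAC address as an integer.
    return (node >> 40) & 0xFF

def is_universally_administered(node):
    return not (first_octet(node) & 0x02)   # bit clear -> universally administered

def is_multicast(node):
    return bool(first_octet(node) & 0x01)   # must be set for random fallback nodes

node = uuid.getnode()
print(hex(node), is_universally_administered(node), is_multicast(node))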
The improvements in uuid.py include:
* Preferentially return a universally administered MAC address, falling back to a locally administered address if none of the former can be found.
* Fix several coding style issues, such as adding explicit returns of None, using a more readable bitmask pattern, and assuming that the ultimate fallback, random MAC generation, will not fail (propagating any exception there instead of swallowing it).
Improvements in test_uuid.py include:
* Always testing the calculated MAC for universal administration, unless explicitly disabled (i.e. for the random case), or implicitly disabled due to running in the Travis environment. Travis test machines have *no* universally administered MAC address at the time of this writing.
The warnings module no longer leaks memory in the hidden
warnings registry for the "ignore" action of warnings filters.
The warn_explicit() function no longer adds the warning key to the
registry for the "ignore" action.
The test.support.skip_unless_bind_unix_socket() decorator is used to skip
asyncio tests that fail because the platform lacks a functional bind()
function for Unix domain sockets (as is the case for non-root users on
recent Android versions, which now run SELinux in enforcing mode).
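A rough usage sketch (the test body and path handling are illustrative only):

import os
import socket
import tempfile
import unittest
from test.support import skip_unless_bind_unix_socket

class UnixSocketBindTest(unittest.TestCase):

    @skip_unless_bind_unix_socket
    def test_bind(self):
        path = os.path.join(tempfile.mkdtemp(), "example.sock")
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.bind(path)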
bpo-32096, bpo-30860: Partially revert the commit
2ebc5ce42a8a9e047e790aefbf9a94811569b2b6:
* Move structures back from Include/internal/mem.h to
Objects/obmalloc.c
* Remove _PyObject_Initialize() and _PyMem_Initialize()
* Remove Include/internal/pymalloc.h
* Add test_capi.test_pre_initialization_api():
Make sure that it's possible to call Py_DecodeLocale(), and then call
Py_SetProgramName() with the decoded string, before Py_Initialize().
PyMem_RawMalloc() and Py_DecodeLocale() can be called again before
_PyRuntimeState_Init().
Co-Authored-By: Eric Snow <ericsnowcurrently@gmail.com>
Previously, the 'msilib.OpenDatabase()' function raised an exception
with a cryptic message when it couldn't open or create an MSI file.
For example:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
_msi.MSIError: unknown error 6e
Adds a simpler and faster alternative to ExitStack for handling
single optional context managers without having to change the
lexical structure of your code.
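Assuming this refers to contextlib.nullcontext (an assumption based on the description, not stated above), usage looks roughly like:

import sys
from contextlib import nullcontext

def read_source(path=None):
    # One optional context manager, with no need to restructure the code:
    # use the file if a path is given, otherwise fall back to stdin.
    cm = open(path) if path else nullcontext(sys.stdin)
    with cm as stream:
        return stream.read()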
When Python is built in debug mode (Py_DEBUG), DeprecationWarning,
PendingDeprecationWarning and ImportWarning warnings are now
displayed by default.
test_venv: run "-m pip" and "-m ensurepip._uninstall" with -W
ignore::DeprecationWarning since pip code is not part of Python.
Add a new "developer mode": new "-X dev" command line option to
enable debug checks at runtime.
Changes:
* Add unit tests for -X dev
* test_cmd_line: replace test.support with support.
* Fix _PyRuntimeState_Fini(): Use the same memory allocator
as _PyRuntimeState_Init().
* Fix _PyMem_GetDefaultRawAllocator()
* Setting sys.tracebacklimit to 0 or less now suppresses printing tracebacks.
* Setting sys.tracebacklimit to None now causes the default limit to be used.
* Setting sys.tracebacklimit to an integer larger than LONG_MAX now means using
the limit LONG_MAX rather than the default limit.
* Fixed integer overflows in the case of more than 2**31 traceback items on
Windows.
* Fixed the handling of output errors.
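A minimal sketch of the first two points (run as a script so the interpreter prints the traceback for the unhandled exception):

import sys

sys.tracebacklimit = 0       # 0 or less now suppresses the traceback entirely
# sys.tracebacklimit = None  # None now means: use the default limit

raise RuntimeError("unhandled on purpose")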
The openfp functions of aifc, sunau, and wave had pointed to the open
function of each module since 1993 as a matter of backwards
compatibility. In the case of aifc.openfp, it was both undocumented
and untested. This change begins the formal deprecation of those
openfp functions, with their removal coming in 3.9.
This additionally adds a TODO in test_pyclbr around using aifc.openfp,
though it shouldn't be changed until removal in 3.9.
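A minimal sketch (assuming an existing "example.wav"): the alias still works but now warns, so the module-level open() should be preferred:

import warnings
import wave

warnings.simplefilter("always")
wave.openfp("example.wav", "rb").close()   # now emits DeprecationWarning

with wave.open("example.wav", "rb") as f:  # preferred spelling
    print(f.getnframes())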
* Fix compilation of the socket module on NetBSD 8.
* Fix an assertion failure or reading of arbitrary data when parsing
an AF_BLUETOOTH address on NetBSD and DragonFly BSD.
* Fix other potential errors and make the code more reliable.
The kB (kilobyte) unit means 1000 bytes, whereas KiB ("kibibyte")
means 1024 bytes. KB was misused: replace kB or KB with KiB where
appropriate.
The same change applies to MB and GB, which become MiB and GiB.
Change the output of Tools/iobench/iobench.py.
Also round the size of the documentation from 5.5 MB to 5 MiB.
blocksize was hardcoded to 8192, preventing efficient upload when using a
file-like body. Add a blocksize argument to __init__ so users can
configure the blocksize to fit their needs.
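A rough sketch of the new argument (the host, path and payload file are placeholders):

import http.client

conn = http.client.HTTPConnection("localhost", 8000, blocksize=512 * 1024)
with open("payload.bin", "rb") as body:
    # send() now streams the file-like body in 512 KiB chunks instead of 8 KiB.
    conn.request("PUT", "/upload", body=body)
    print(conn.getresponse().status)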
I tested this by uploading data from /dev/zero to a web server that drops
the received data, to measure the overhead of HTTPConnection.send() with a
file-like object.
Here is an example 10g upload with the default buffer size (8192):
$ time ~/src/cpython/release/python upload-httplib.py 10 https://localhost:8000/
Uploaded 10.00g in 17.53 seconds (584.00m/s)
real 0m17.574s
user 0m8.887s
sys 0m5.971s
Same with 512k blocksize:
$ time ~/src/cpython/release/python upload-httplib.py 10 https://localhost:8000/
Uploaded 10.00g in 6.60 seconds (1551.15m/s)
real 0m6.641s
user 0m3.426s
sys 0m2.162s
In real world usage the difference will be smaller, depending on the
local and remote storage and the network.
See https://github.com/nirs/http-bench for more info.
All Blake2 params have to be encoded in little-endian byte order. For
the two multi-byte integer params, leaf_length and node_offset, that
means that assigning a native-endian integer to them appears to work on
little-endian platforms, but gives the wrong result on big-endian. The
current libb2 API doesn't make that very clear, and @sneves is working
on new API functions in the GH issue linked below. In the meantime, we can work
around the problem by explicitly assigning little-endian values to the
parameter block.
See https://github.com/BLAKE2/libb2/issues/12.
* bpo-31310: multiprocessing's semaphore tracker should be launched again if crashed
* Avoid mucking with process state in test.
Add a warning if the semaphore process died, as semaphores may then be leaked.
* Add NEWS entry
* bpo-31308: If multiprocessing's forkserver dies, launch it again when necessary.
* Fix test on Windows
* Add NEWS entry
* Adopt a different approach: ignore SIGINT and SIGTERM, as in semaphore tracker.
* Fix comment
* Make sure the test doesn't muck with process state
* Also test previously-started processes
* Update 2017-08-30-17-59-36.bpo-31308.KbexyC.rst
* Avoid masking SIGTERM in forkserver. It's not necessary and causes a race condition in test_many_processes.
When a single .c file contains several functions and/or methods with
the same name, a safety _METHODDEF #define statement is generated
only for one of them.
This fixes the bug by using the full name of the function to avoid
duplicates rather than just the name.
* bpo-28643: Record profile-opt build progress with stamp files
The profile-opt makefile target is expensive to build. Since the
makefile does not contain complete dependency information for this
target, much extra work can get done if the build is interrupted and
re-started. Even running "make" a second time will result in a huge
amount of redundant work.
As a minimal fix (rather than removing recursive "make" and adding a
proper dependency graph), split the profile-opt target into parts:
- ensure tree is clean (profile-clean-stamp)
- build with profile generation enabled (profile-gen-stamp)
- run task to generate profile information (profile-run-stamp)
- build optimized Python using above information (profile-opt)
We use "stamp" files to record completion of the steps. Running
"make clean" will not remove the profile-run-stamp file.
Other minor changes:
- remove the "build_all_use_profile" target. I don't expect callers
of the makefile to use this target so that should be safe.
- remove execution of "profile-removal" at end of "profile-opt". I
don't see any reason not to keep the profile information, given
the cost to generate it. Removing the "profile-run-stamp" file
will force re-generation of it.
Add new time functions:
* time.clock_gettime_ns()
* time.clock_settime_ns()
* time.monotonic_ns()
* time.perf_counter_ns()
* time.process_time_ns()
* time.time_ns()
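A quick sketch of the new public functions listed above (clock_gettime_ns()/clock_settime_ns() are Unix-only, like their non-_ns counterparts):

import time

# Each _ns variant returns an integer number of nanoseconds instead of a float.
print(time.time_ns())
print(time.monotonic_ns())
print(time.perf_counter_ns())
print(time.process_time_ns())
print(time.clock_gettime_ns(time.CLOCK_MONOTONIC))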
Add new _PyTime functions:
* _PyTime_FromTimespec()
* _PyTime_FromNanosecondsObject()
* _PyTime_FromTimeval()
Other changes:
* Also add os.times() tests to test_os.
* pytime_fromtimeval() and pytime_fromtimespec() now return
_PyTime_MAX or _PyTime_MIN on overflow, rather than undefined
behaviour
* _PyTime_FromNanoseconds() parameter type changes from long long to
_PyTime_t