Previously, CO_NOFREE was set in the compiler, which meant
it could end up being set incorrectly when code objects
were created directly. Setting it in the constructor based
on freevars and cellvars ensures it is always accurate,
regardless of how the code object is defined.
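As an illustration (not part of the change itself), the flag's meaning can be checked from Python:

    import inspect

    def plain():
        return 42            # no free or cell variables

    def outer():
        x = 1
        def inner():
            return x         # 'x' is free in inner(), a cell in outer()
        return inner

    # CO_NOFREE is set only when a code object has neither freevars nor cellvars.
    print(bool(plain.__code__.co_flags & inspect.CO_NOFREE))    # True
    print(bool(outer.__code__.co_flags & inspect.CO_NOFREE))    # False (cellvars)
    print(bool(outer().__code__.co_flags & inspect.CO_NOFREE))  # False (freevars)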
The current behaviour of yield expressions inside comprehensions and
generator expressions is essentially an accident of implementation - it
arises implicitly from the way the compiler handles yield expressions inside
nested functions and generators.
Since the current behaviour wasn't deliberately designed, and is inherently
confusing, we're deprecating it, with no current plans to reintroduce it.
Instead, our advice will be to use a named nested generator definition
for cases where this behaviour is desired.
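A sketch of the general shape of that advice (function and helper names are made up; this is not an exact behavioural replacement for the deprecated form):

    # Deprecated and confusing: a yield expression inside a comprehension, e.g.
    #     values = [(yield item) for item in source]
    # Recommended instead: a named nested generator definition.
    def collect(source):
        def _items():
            for item in source:
                yield item
        return list(_items())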
In _io_FileIO_readall_impl(), lseek() and _Py_fstat_noraise() were called
without releasing the GIL. This could cause all threads to hang for an
unlimited time when FileIO.read() was called and the NFS server was not
accessible.
* Fixed saving bytearrays.
* Identical objects will be saved only once.
* Equal references will be loaded as identical objects.
* Added support for saving and loading recursive data structures.
When PyGILState_Ensure() is called in a non-Python thread before
PyEval_InitThreads(), only call PyEval_InitThreads() after calling
PyThreadState_New() to fix a crash.
Add a unit test in test_embed.
* bpo-32101: Add sys.flags.dev_mode flag
Also rename the "Developer mode" to the "Development mode".
* bpo-32101: Add PYTHONDEVMODE environment variable
Mention it in the development chapter (usage sketch below).
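A quick way to check the new flag (sketch; assumes the spellings above):

    import sys

    # Started as "python -X dev script.py" or with PYTHONDEVMODE=1 in the
    # environment, the flag reads True; otherwise it reads False.
    print(sys.flags.dev_mode)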
* Add a most_recent_first parameter to tracemalloc.Traceback.format() to allow
reversing the order of the frames in the output (see the sketch below)
* Reversed default sorting of tracemalloc.Traceback frames
* Allowed negative limit, truncating from the other side.
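A minimal usage sketch of the reworked format() call, assuming the parameter names described above:

    import tracemalloc

    tracemalloc.start(25)                      # keep up to 25 frames per trace
    data = [bytearray(1000) for _ in range(100)]
    snapshot = tracemalloc.take_snapshot()
    stat = snapshot.statistics('traceback')[0]
    # limit may now be negative to truncate from the other side, and
    # most_recent_first controls the frame order in the output.
    for line in stat.traceback.format(limit=5, most_recent_first=True):
        print(line)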
``uuid.getnode()`` now preferentially returns universally administered MAC addresses if available, over locally administered MAC addresses. This provides a better guarantee of global uniqueness for UUIDs returned from ``uuid.uuid1()``. If only locally administered MAC addresses are available, the first such address found is returned.
Also improve internal code style by being explicit about ``return None`` rather than falling off the end of the function.
Improve test robustness.
CPython migrated from CVS to Subversion, to Mercurial, and then to
Git. CVS and Subversion are no longer used to develop CPython.
* platform module: drop support for sys.subversion. The
sys.subversion attribute has been removed in Python 3.3.
* Remove Misc/svnmap.txt
* Remove Tools/scripts/svneol.py
* Remove Tools/scripts/treesync.py
* distutils.config: Use the PyPIRCCommand.realm attribute if set
* turtledemo: wait until the macOS osascript command completes to avoid
creating a zombie process
* Tools/scripts/treesync.py: declare 'default_answer' and
'create_files' as globals so they can be modified by the command line
arguments. Previously, the -y, -n, -f and -a options had no effect.
Fix the flake8 warning: "F841 local variable 'p' is assigned to but never
used".
Some parts of the C API are only relevant to larger
applications embedding CPython as a runtime engine.
The helpers to test those APIs are already separated
out into Programs/_testembed.c; this update moves
the associated test cases out into their own dedicated
test file.
Improve UUID1 MAC address calculation and related tests.
There are two bits in the MAC address that are relevant to UUID1. The first is the locally administered vs. universally administered bit (second least significant bit of the first octet). Physical network interfaces such as ethernet ports and wireless adapters will always be universally administered, but some interfaces -- such as the interface MacBook Pros use to communicate with their Touch Bars -- are locally administered. The former are guaranteed to be globally unique, while the latter are demonstrably *not* globally unique and are in fact the same on every MBP with a Touch Bar. When this bit is set, the MAC is locally administered; when it is unset, it is universally administered.
The other bit is the multicast bit (least significant bit of the first octet). When no other MAC address can be found, RFC 4122 mandates that a random 48-bit number be generated. This randomly generated number *must* have the multicast bit set.
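To make the two bits concrete, a small sketch (helper names are made up; a MAC is treated as a 48-bit integer, with the first octet in the most significant byte):

    def first_octet(mac):
        return (mac >> 40) & 0xff

    def is_universal(mac):
        # Universally administered: the locally-administered bit
        # (0x02 of the first octet) is clear.
        return not (first_octet(mac) & 0x02)

    def is_multicast(mac):
        # Multicast bit (0x01 of the first octet); RFC 4122 requires it to be
        # set on randomly generated node values.
        return bool(first_octet(mac) & 0x01)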
The improvements in uuid.py include:
* Preferentially return a universally administered MAC address, falling back to a locally administered address if none of the former can be found.
* Fix several coding style issues, such as adding explicit returns of None, using a more readable bitmask pattern, and assuming that the ultimate fallback, random MAC generation, will not fail (any exception there is propagated instead of being swallowed).
Improvements in test_uuid.py include:
* Always test the calculated MAC for universal administration, unless explicitly disabled (i.e. for the random case), or implicitly disabled when running in the Travis environment. Travis test machines have *no* universally administered MAC address at the time of this writing.
The warnings module doesn't leak memory anymore in the hidden
warnings registry for the "ignore" action of warnings filters.
The warn_explicit() function doesn't add the warning key to the
registry anymore for the "ignore" action.
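For example (behaviour as described above, sketched):

    import warnings

    warnings.filterwarnings("ignore", category=DeprecationWarning)
    # With the fix, warnings matched by an "ignore" filter no longer add a key
    # to the hidden __warningregistry__ dict, so repeatedly triggering distinct
    # ignored warnings no longer grows memory.
    for i in range(1000):
        warnings.warn("message %d" % i, DeprecationWarning)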
The test.support.skip_unless_bind_unix_socket() decorator is used to skip
asyncio tests that fail because the platform lacks a functional bind()
function for unix domain sockets (as is the case for non-root users on
recent Android versions, which now run SELinux in enforcing mode).
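A usage sketch (decorator location and spelling as described above):

    import os
    import socket
    import tempfile
    import unittest
    from test import support

    class UnixBindTest(unittest.TestCase):
        @support.skip_unless_bind_unix_socket
        def test_bind(self):
            path = os.path.join(tempfile.mkdtemp(), 'sock')
            with socket.socket(socket.AF_UNIX) as sock:
                sock.bind(path)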
bpo-32096, bpo-30860: Partially revert the commit
2ebc5ce42a8a9e047e790aefbf9a94811569b2b6:
* Move structures back from Include/internal/mem.h to
Objects/obmalloc.c
* Remove _PyObject_Initialize() and _PyMem_Initialize()
* Remove Include/internal/pymalloc.h
* Add test_capi.test_pre_initialization_api():
Make sure that it's possible to call Py_DecodeLocale(), and then call
Py_SetProgramName() with the decoded string, before Py_Initialize().
PyMem_RawMalloc() and Py_DecodeLocale() can be called again before
_PyRuntimeState_Init().
Co-Authored-By: Eric Snow <ericsnowcurrently@gmail.com>
Previously, the msilib.OpenDatabase() function raised a
cryptic exception message when it couldn't open or
create an MSI file. For example:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    _msi.MSIError: unknown error 6e
Adds a simpler and faster alternative to ExitStack for handling
single optional context managers without having to change the
lexical structure of your code.
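A sketch of the kind of usage this enables, assuming the new helper is contextlib.nullcontext (the name is not stated above):

    from contextlib import nullcontext   # assumed name of the new helper

    def read_data(source):
        # An optional context manager: open() when given a path, otherwise a
        # no-op wrapper around an already-open file object, without changing
        # the lexical structure of the "with" block.
        cm = open(source) if isinstance(source, str) else nullcontext(source)
        with cm as f:
            return f.read()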
When Python is built in debug mode (Py_DEBUG), DeprecationWarning,
PendingDeprecationWarning and ImportWarning warnings are now
displayed by default.
test_venv: run "-m pip" and "-m ensurepip._uninstall" with -W
ignore::DeprecationWarning since pip code is not part of Python.
Add a new "developer mode": new "-X dev" command line option to
enable debug checks at runtime.
Changes:
* Add unit tests for -X dev
* test_cmd_line: replace test.support with support.
* Fix _PyRuntimeState_Fini(): use the same memory allocator
as _PyRuntimeState_Init().
* Fix _PyMem_GetDefaultRawAllocator()
* Setting sys.tracebacklimit to 0 or less now suppresses printing tracebacks (see the example below).
* Setting sys.tracebacklimit to None now causes the default limit to be used.
* Setting sys.tracebacklimit to an integer larger than LONG_MAX now means using
the limit LONG_MAX rather than the default limit.
* Fixed integer overflows in the case of more than 2**31 traceback items on
Windows.
* Fixed handling of output errors.
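A minimal example of the first item:

    import sys

    def fail():
        raise ValueError("details elided")

    sys.tracebacklimit = 0   # 0 or less now suppresses the traceback entirely
    fail()                   # only "ValueError: details elided" is printed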
The openfp functions of aifc, sunau, and wave had pointed to the open
function of each module since 1993 as a matter of backwards
compatibility. In the case of aifc.openfp, it was both undocumented
and untested. This change begins the formal deprecation of those
openfp functions, with their removal coming in 3.9.
This additionally adds a TODO in test_pyclbr around using aifc.openfp,
though it shouldn't be changed until removal in 3.9.
* Fix compilation of the socket module on NetBSD 8.
* Fix an assertion failure or reading of arbitrary data when parsing
an AF_BLUETOOTH address on NetBSD and DragonFly BSD.
* Fix other potential errors and make the code more reliable.
The kB (*kilo* byte) unit means 1000 bytes, whereas KiB ("kibibyte")
means 1024 bytes. KB was misused: replace kB or KB with KiB where
appropriate.
The same change applies to MB and GB, which become MiB and GiB.
Change the output of Tools/iobench/iobench.py.
Also round the size of the documentation from 5.5 MB to 5 MiB.
blocksize was hardcoded to 8192, preventing efficient upload when using a
file-like body. Add a blocksize argument to __init__(), so users can
configure the block size to fit their needs.
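A usage sketch (URL, file name and size are illustrative):

    import http.client

    # The new blocksize argument controls the chunk size used by send() when
    # the body is a file-like object; here 512 KiB instead of the old 8192.
    conn = http.client.HTTPConnection('localhost', 8000, blocksize=512 * 1024)
    with open('payload.bin', 'rb') as body:
        conn.request('PUT', '/upload', body=body)
        response = conn.getresponse()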
I tested this by uploading data from /dev/zero to a web server that drops the
received data, to measure the overhead of HTTPConnection.send() with a
file-like object.
Here is an example 10g upload with the default buffer size (8192):
$ time ~/src/cpython/release/python upload-httplib.py 10 https://localhost:8000/
Uploaded 10.00g in 17.53 seconds (584.00m/s)
real 0m17.574s
user 0m8.887s
sys 0m5.971s
Same with 512k blocksize:
$ time ~/src/cpython/release/python upload-httplib.py 10 https://localhost:8000/
Uploaded 10.00g in 6.60 seconds (1551.15m/s)
real 0m6.641s
user 0m3.426s
sys 0m2.162s
In real world usage the difference will be smaller, depending on the
local and remote storage and the network.
See https://github.com/nirs/http-bench for more info.
All Blake2 params have to be encoded in little-endian byte order. For
the two multi-byte integer params, leaf_length and node_offset, that
means that assigning a native-endian integer to them appears to work on
little-endian platforms, but gives the wrong result on big-endian. The
current libb2 API doesn't make that very clear, and @sneves is working
on new API functions in the GH issue linked below. In the meantime, we can work
around the problem by explicitly assigning little-endian values to the
parameter block.
See https://github.com/BLAKE2/libb2/issues/12.
* bpo-31310: multiprocessing's semaphore tracker should be launched again if crashed
* Avoid mucking with process state in test.
Add a warning if the semaphore process died, as semaphores may then be leaked.
* Add NEWS entry
* bpo-31308: If multiprocessing's forkserver dies, launch it again when necessary.
* Fix test on Windows
* Add NEWS entry
* Adopt a different approach: ignore SIGINT and SIGTERM, as in semaphore tracker.
* Fix comment
* Make sure the test doesn't muck with process state
* Also test previously-started processes
* Update 2017-08-30-17-59-36.bpo-31308.KbexyC.rst
* Avoid masking SIGTERM in forkserver. It's not necessary and causes a race condition in test_many_processes.
When a single .c file contains several functions and/or methods with
the same name, a safety _METHODDEF #define statement is generated
only for one of them.
This fixes the bug by using the full name of the function, rather than just
the short name, to avoid duplicates.
* bpo-28643: Record profile-opt build progress with stamp files
The profile-opt makefile target is expensive to build. Since the
makefile does not contain complete dependency information for this
target, much extra work can get done if the build is interrupted and
re-started. Even running "make" a second time will result in a huge
amount of redundant work.
As a minimal fix (rather than removing recursive "make" and adding a
proper dependency graph), split the profile-opt target into parts:
- ensure tree is clean (profile-clean-stamp)
- build with profile generation enabled (profile-gen-stamp)
- run task to generate profile information (profile-run-stamp)
- build optimized Python using above information (profile-opt)
We use "stamp" files to record completion of the steps. Running
"make clean" will not remove the profile-run-stamp file.
Other minor changes:
- remove the "build_all_use_profile" target. I don't expect callers
of the makefile to use this target so that should be safe.
- remove execution of "profile-removal" at end of "profile-opt". I
don't see any reason not to keep the profile information, given
the cost to generate it. Removing the "profile-run-stamp" file
will force re-generation of it.
Add new time functions (usage example below):
* time.clock_gettime_ns()
* time.clock_settime_ns()
* time.monotonic_ns()
* time.perf_counter_ns()
* time.process_time_ns()
* time.time_ns()
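For example, the nanosecond variants return plain integers:

    import time

    # The *_ns() variants return integer nanoseconds, avoiding the precision
    # loss of float seconds for large timestamps.
    print(time.time())       # float seconds
    print(time.time_ns())    # int nanoseconds
    print(time.perf_counter_ns())
    print(time.process_time_ns())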
Add new _PyTime functions:
* _PyTime_FromTimespec()
* _PyTime_FromNanosecondsObject()
* _PyTime_FromTimeval()
Other changes:
* Also add os.times() tests to test_os.
* pytime_fromtimeval() and pytime_fromtimeval() now return
_PyTime_MAX or _PyTime_MIN on overflow, rather than undefined
behaviour
* _PyTime_FromNanoseconds() parameter type changes from long long to
_PyTime_t
Modify the code to use the ncurses is_pad() function instead of checking the
WINDOW _flags field. If the platform does not provide is_pad(), the
existing check of the field is used instead.
Note: this change does not drop support for platforms that do not
have both the WINDOW _flags field and is_pad().
Editor and output windows only see an empty last prompt line.
This simplifies the code and fixes a minor bug when a newline is inserted.
sys.ps1, if present, is read on Shell start-up, but is not set or changed.
The startup refactoring means command line settings
are now applied after settings are read from the
environment.
This updates the way command line settings are applied
to account for that, ensures more settings are first read
from the environment in _PyInitializeCore, and adds a
simple test case covering the flags that are easy to check.
Fix the pthread+semaphore implementation of
PyThread_acquire_lock_timed() when called with timeout > 0 and
intr_flag=0: recompute the timeout if sem_timedwait() is interrupted
by a signal (EINTR).
See also PEP 475.
The pthread implementation of PyThread_acquire_lock() now fails with
a fatal error if the timeout is larger than PY_TIMEOUT_MAX, as done
in the Windows implementation.
The check prevents any risk of overflow in PyThread_acquire_lock().
Also add the PY_DWORD_MAX constant.
Calendar.itermonthdates() will now consistently raise an exception when a date falls outside of the 0001-01-01 through 9999-12-31 range. To support applications that cannot tolerate such exceptions, the new methods itermonthdays3() and itermonthdays4() are added. The new methods return tuples and are not restricted by the range supported by datetime.date.
Thanks @serhiy-storchaka for suggesting the itermonthdays4() method and for the review.
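For example:

    import calendar

    cal = calendar.Calendar()
    # itermonthdays4() yields (year, month, day, weekday) tuples and is not
    # limited to the datetime.date range, unlike itermonthdates().
    for year, month, day, weekday in cal.itermonthdays4(9999, 12):
        pass   # leading/trailing days belong to the neighbouring months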
Rework the code choosing BLAKE2 code paths from using the optimized
variant on all x86_64 machines to using it when SSSE3 or better
instruction sets are available.
Firstly, this solves the problem of using the pure SSE2 code path on x86_64
machines. As reported in the bug, this code is slower than the reference
code on all tested x86_64 machines. Furthermore, on an Athlon64 that lacks
SSSE3, it is even 2.5 times slower than the reference code! Checking
for SSSE3 therefore ensures that the optimized implementation will only
be used when it has a chance of performing better.
Secondly, this makes it possible to use the SSSE3+ optimizations on 32-bit
x86 systems. This allows for as much as a 2x speed gain on modern 32-bit
x86 systems (tested in a 32-bit chroot).
Improve the human friendliness of the Popen API: add text=False as a
keyword-only argument to subprocess.Popen, along with a Popen
attribute .text_mode which is set based on the
encoding/errors/universal_newlines/text arguments.
The universal_newlines parameter and attribute are maintained for
backwards compatibility.
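For example:

    import subprocess

    # text=True (keyword-only) selects text mode, like universal_newlines=True;
    # the mode is reflected in the new Popen.text_mode attribute.
    with subprocess.Popen(['echo', 'hello'], stdout=subprocess.PIPE,
                          text=True) as proc:
        print(proc.text_mode)        # True
        print(proc.stdout.read())    # 'hello\n' (str, not bytes)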
Nested log adapters used to execute their ``process()`` methods on Python 2. Commit
212b590e11 changed the implementation for Python 3, making the ``log()`` method of
LoggerAdapter call ``logger._log()`` directly, so nested log adapters no longer
executed their ``process()`` method. This patch fixes the issue.
Also proxy ``name``, so that ``repr()`` works with nested log adapters.
New tests added.
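A sketch of the nested-adapter case this fixes (adapter subclass and tags are illustrative):

    import logging

    logging.basicConfig(level=logging.INFO)

    class Prefixed(logging.LoggerAdapter):
        def process(self, msg, kwargs):
            return '[%s] %s' % (self.extra['tag'], msg), kwargs

    inner = Prefixed(logging.getLogger('app'), {'tag': 'inner'})
    outer = Prefixed(inner, {'tag': 'outer'})
    # With the fix, both adapters run their process() methods, so the record
    # message carries both prefixes.
    outer.info('hello')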
Fix timeout rounding in time.sleep(), threading.Lock.acquire() and
socket.socket.settimeout() to correctly round negative timeouts between -1.0 and
0.0. The functions now block waiting for events as expected. Previously, the
call was incorrectly non-blocking.
Even if one selects a font that defines only a limited subset of the Unicode
Basic Multilingual Plane, tcl/tk will use other fonts that define a
character. The expanded example gives users of non-Latin characters
a better idea of what they might see in the IDLE shell and editors.
To make room for the expanded sample, frames on the Font tab are
re-arranged. The Font/Tabs help explains a bit about the additions.
bpo-31803: time.clock() and time.get_clock_info('clock') now emit a
DeprecationWarning warning.
Replace time.clock() with time.perf_counter() in tests and demos.
Also remove the hasattr(time, 'monotonic') check in test_time since
time.monotonic() is always available since Python 3.5.
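For example:

    import time
    import warnings

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter('always')
        time.clock()                                    # now emits DeprecationWarning
    print(caught[0].category is DeprecationWarning)     # True

    # Preferred replacement:
    start = time.perf_counter()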
Always pass -1, or INFTIM where defined, to the poll() system call when
a negative timeout is passed to the poll.poll([timeout]) method in the
select module. Various OSes throw an error with arbitrary negative
values.
The new method allows the developer to control when to stop the
mock feature that automagically creates new mocks when accessing
an attribute that was not declared before.
Signed-off-by: Mario Corchero <mariocj89@gmail.com>
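A sketch, assuming the new helper is unittest.mock.seal (the name is not given above):

    from unittest import mock

    m = mock.Mock()
    m.configured.return_value = 42
    mock.seal(m)          # assumed name: stop automatic creation of child mocks
    m.configured()        # attributes used before sealing still work
    try:
        m.surprise        # raises AttributeError instead of creating a new mock
    except AttributeError:
        pass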
The pattern `[a-z]` with the `IGNORECASE` flag can match some non-ASCII characters.
The straightforward solution is to use the `IGNORECASE | ASCII` flags.
But users may subclass `Template` and override only `idpattern`, so we want to
avoid changing `Template.flags`.
Instead, this commit uses the local flag `-i` for `idpattern` and changes `[a-z]` to `[a-zA-Z]`.
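A sketch of the scoped-flag technique (Python 3.6+ local flags), independent of the exact idpattern used:

    import re

    # With IGNORECASE alone, an ASCII range can match some non-ASCII characters
    # through case folding; the "(?-i:...)" group turns IGNORECASE off locally,
    # so the identifier part must spell out both cases explicitly.
    idpattern = r'(?-i:[_a-zA-Z][_a-zA-Z0-9]*)'
    print(bool(re.fullmatch(idpattern, 'Valid_name1', re.IGNORECASE)))   # True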
See PEP 539 for details.
Highlights of changes:
- Add Thread Specific Storage (TSS) API
- Document the Thread Local Storage (TLS) API as deprecated
- Update code that used TLS API to use TSS API
sre_compile performs bit tests (e.g. `flags & SRE_FLAG_IGNORECASE`) in a loop.
`IntFlag.__and__` and `IntFlag.__new__` made this slower.
So this commit converts the flags to a plain int before passing them to `sre_compile()`.
Passing a widget instead of an flist with a root widget opens the option of
creating a browser frame that is only part of a window. Passing a full file
name instead of pieces assumed to come from a .py file opens the possibility
of browsing Python files that do not end in .py.
While this is a rare potential failure (it requires swapping out zlib.decompress() itself and forcing it to return a non-bytes object), this change prevents a potential C-level assertion failure and substitutes an exception instead.
Thanks to Oren Milman for the patch.
Class execution requires that __prepare__() methods return
a proper execution namespace. Check for that immediately
after calling __prepare__(), rather than passing it through
to the code execution machinery and potentially triggering
SystemError (in debug builds) or a cryptic TypeError
(in release builds).
Patch by Oren Milman.
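For example:

    class BadMeta(type):
        @classmethod
        def __prepare__(mcls, name, bases, **kwds):
            return None   # not a mapping, so not a valid execution namespace

    try:
        class Broken(metaclass=BadMeta):
            pass
    except TypeError as exc:
        # The bad return value is now reported right after __prepare__() runs.
        print(exc)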
Fix the logic in python-config.sh to avoid attempting to substitute
prefix in a variable that might have already been subject to
substitution. This happened, for example, if @exec_prefix@ was defined as
"${prefix}" (which is the default of the configure script) -- in which
case the exec_prefix_build variable was initialized with an
already-substituted prefix, and then another round of substitution was
performed, which could result in a duplicated prefix.
To avoid that, rename the variables so that the variables matching
likely configure names (prefix, exec_prefix) retain their original
values and a '_real' suffix is used for the real values of prefix.
Furthermore, replace the unnecessary prefix and exec_prefix
substitutions with direct prefix_real references since the sed
always replaced the whole string anyway by design.