PyImport_ExtendInittab() now uses PyMem_RawRealloc() rather than
PyMem_Realloc(). PyImport_ExtendInittab() can be called before
Py_Initialize() whereas only the PyMem_Raw allocator is supposed to
be used before Py_Initialize().
Add _PyImport_Fini2() to release the memory allocated by
PyImport_ExtendInittab() at exit. PyImport_ExtendInittab() now forces
the usage of the default raw allocator, to be able to release memory
in _PyImport_Fini2().
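A hedged sketch of the allocator handling this implies (the variable names are illustrative, not the actual Python/import.c code):
    /* Force the default raw allocator while (re)allocating the inittab
       copy, so _PyImport_Fini2() can later free it with the same
       allocator regardless of PYTHONMALLOC or debug hooks. */
    PyMemAllocatorEx old_alloc;
    _PyMem_SetDefaultAllocator(PYMEM_DOMAIN_RAW, &old_alloc);
    inittab_copy = PyMem_RawRealloc(inittab_copy,
                                    (n + 1) * sizeof(struct _inittab));
    PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &old_alloc);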
These functions are no longer exported as part of the C API; they are now
only available to Py_BUILD_CORE:
* _PyExc_Fini()
* _PyImport_Fini()
* _PyGC_DumpShutdownStats()
* _PyGC_Fini()
* _PyType_Fini()
* _Py_HashRandomization_Fini()
* Py_Main() now starts by reading the Py_xxx configuration variables so that
  it only works on its own private structure, and then later writes the
  configuration back into these variables.
* Replace Py_GETENV() with pymain_get_env_var() which ignores empty
variables.
* Add _PyCoreConfig.dump_refs
* Add _PyCoreConfig.malloc_stats
* _PyObject_DebugMallocStats() is now responsible for checking whether debug
  hooks are installed. The function returns 1 if stats were written,
  or 0 if the hooks are disabled. Mark _PyMem_PymallocEnabled() as
  static.
* Simplify _PyCoreConfig_INIT, _PyMainInterpreterConfig_INIT,
_PyPathConfig_INIT macros: no need to set fields to 0/NULL, it's
redundant (the C language sets them to 0/NULL for us).
* Fix typo: pymain_run_statup() => pymain_run_startup()
* Remove a few XXX/TODO
Add import_find_and_load() helper function. The addition of
the importtime option has made PyImport_ImportModuleLevelObject() large
and so using a helper seems worthwhile. It also makes it clearer that
abs_name is the only argument needed by _find_and_load().
Previously, CO_NOFREE was set in the compiler, which meant
it could end up being set incorrectly when code objects
were created directly. Setting it in the constructor based
on freevars and cellvars ensures it is always accurate,
regardless of how the code object is defined.
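A minimal sketch of the idea (names follow the code object fields; this is not the exact codeobject.c change):
    /* In the code object constructor: derive CO_NOFREE from the actual
       free and cell variables instead of trusting the flags passed in,
       so the flag stays accurate for directly created code objects. */
    if (PyTuple_GET_SIZE(freevars) == 0 && PyTuple_GET_SIZE(cellvars) == 0) {
        flags |= CO_NOFREE;
    }
    else {
        flags &= ~CO_NOFREE;
    }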
_PyPathConfig_Init() now also initializes home and program_name:
* Rename existing _PyPathConfig_Init() to _PyPathConfig_Calculate().
Add a new _PyPathConfig_Init() function in pathconfig.c which
handles the _Py_path_config variable and calls
_PyPathConfig_Calculate().
* Add home and program_name fields to _PyPathConfig
* _PyPathConfig_Init() now initializes home and program_name
  from main_config
* Py_SetProgramName(), Py_SetPythonHome() and Py_GetPythonHome() now
  call Py_FatalError() on failure, instead of silently ignoring
  failures.
* config_init_home() now reads _Py_path_config.home directly to get
  only the value set by Py_SetPythonHome(), or NULL if
  Py_SetPythonHome() was not called.
* config_get_program_name() now reads _Py_path_config.program_name
  directly to get only the value set by Py_SetProgramName(), or NULL
  if Py_SetProgramName() was not called.
* pymain_init_python() no longer calls Py_SetProgramName();
  _PyPathConfig_Init() now always sets the program name
* Call _PyMainInterpreterConfig_Read() in
pymain_parse_cmdline_envvars_impl() to control the memory allocator
* C API documentation: it is no longer safe to call Py_GetProgramName()
  before Py_Initialize().
* Factor out common code from PC/getpathp.c and Modules/getpath.c to
  remove duplication
* Rename pathconfig_clear() to _PyPathConfig_Clear()
* Inline _PyPathConfig_Fini() into pymain_impl() and then remove it,
  since it's a one-liner
Changes:
* _PyPathConfig_Fini() cannot be called in Py_FinalizeEx().
  Py_Initialize() and Py_Finalize() can be called multiple times, but
  they must not "forget" the parameters set by Py_SetProgramName(),
  Py_SetPath() or Py_SetPythonHome(), whereas _PyPathConfig_Fini()
  clears all of these parameters.
* config_get_program_name() and calculate_program_full_path() now
  also decode paths using Py_DecodeLocale() to use the
  surrogateescape error handler, rather than decoding with
  mbstowcs(), which is strict.
* Change _Py_CheckPython3() prototype: () => (void)
* Truncate a few lines which were too long
The current behaviour of yield expressions inside comprehensions and
generator expressions is essentially an accident of implementation - it
arises implicitly from the way the compiler handles yield expressions inside
nested functions and generators.
Since the current behaviour wasn't deliberately designed, and is inherently
confusing, we're deprecating it, with no current plans to reintroduce it.
Instead, our advice will be to use a named nested generator definition
for cases where this behaviour is desired.
When PyGILState_Ensure() is called in a non-Python thread before
PyEval_InitThreads(), only call PyEval_InitThreads() after calling
PyThreadState_New() to fix a crash.
Add a unit test in test_embed.
_Py_InitializeEx_Private() now calls
_PyMainInterpreterConfig_ReadEnv() to read environment variables
PYTHONHOME and PYTHONPATH, and set the program name.
* bpo-32101: Add sys.flags.dev_mode flag
Also rename the "Developer mode" to the "Development mode".
* bpo-32101: Add PYTHONDEVMODE environment variable
Mention it in the development chapter.
* Fix _PyMem_SetupAllocators("debug"): always restore allocators to
  the defaults, rather than only calling _PyMem_SetupDebugHooks().
* Add _PyMem_SetDefaultAllocator() helper to set the "default"
allocator.
* Add _PyMem_GetAllocatorsName(): get the name of the allocators
* main() now uses debug hooks on memory allocators if Py_DEBUG is
  defined, rather than calling malloc() directly
* Document default memory allocators in C API documentation
* _Py_InitializeCore() now fails with a fatal user error if the
  PYTHONMALLOC value is an unknown memory allocator, instead of
  failing with a fatal internal error (a rough sketch follows this list)
* Add new tests on the PYTHONMALLOC environment variable
* Add support.with_pymalloc()
* Add the _testcapi.WITH_PYMALLOC constant and expose it as
support.with_pymalloc().
* sysconfig.get_config_var('WITH_PYMALLOC') doesn't work on Windows, so
replace it with support.with_pymalloc().
* pythoninfo: add _testcapi collector for pymem
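A rough sketch of the PYTHONMALLOC check turned into a user error (illustrative only; main_config_pythonmalloc is a hypothetical name standing in for the value read from the environment):
    /* An unknown PYTHONMALLOC value is the user's mistake, so report it
       as a user error instead of aborting with an internal fatal error. */
    const char *pymalloc = main_config_pythonmalloc;  /* hypothetical name */
    if (pymalloc != NULL && _PyMem_SetupAllocators(pymalloc) < 0) {
        return _Py_INIT_USER_ERR("PYTHONMALLOC: unknown allocator");
    }
    return _Py_INIT_OK();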
The warnings module no longer leaks memory in the hidden warnings
registry for the "ignore" action of warnings filters.
The warn_explicit() function no longer adds the warning key to the
registry for the "ignore" action.
clang can't figure out that fatal_error is noreturn itself and emits warnings:
../cpython/Python/pylifecycle.c:2116:1: warning: function declared 'noreturn' should not return [-Winvalid-noreturn]
}
^
../cpython/Python/pylifecycle.c:2125:1: warning: function declared 'noreturn' should not return [-Winvalid-noreturn]
}
^
* Py_Main() now calls Py_SetProgramName() earlier to be able to get
the program name in _PyMainInterpreterConfig_ReadEnv().
* Rename prog to program_name
* Rename progpath to program_name
Py_GetPath() and Py_Main() now call
_PyMainInterpreterConfig_ReadEnv() to share the same code to get
environment variables.
Changes:
* Add _PyMainInterpreterConfig_ReadEnv()
* Add _PyMainInterpreterConfig_Clear()
* Add _PyMem_RawWcsdup()
* _PyMainInterpreterConfig: rename pythonhome to home
* Rename _Py_ReadMainInterpreterConfig() to
_PyMainInterpreterConfig_Read()
* Use _Py_INIT_USER_ERR(), instead of _Py_INIT_ERR(), for decoding
  errors: the user is able to fix the issue, it's not a bug in
  Python. The same change was made in _Py_INIT_NO_MEMORY().
* Remove _Py_GetPythonHomeWithConfig()
bpo-32096, bpo-30860: Partially revert the commit
2ebc5ce42a8a9e047e790aefbf9a94811569b2b6:
* Move structures back from Include/internal/mem.h to
Objects/obmalloc.c
* Remove _PyObject_Initialize() and _PyMem_Initialize()
* Remove Include/internal/pymalloc.h
* Add test_capi.test_pre_initialization_api():
Make sure that it's possible to call Py_DecodeLocale(), and then call
Py_SetProgramName() with the decoded string, before Py_Initialize().
PyMem_RawMalloc() and Py_DecodeLocale() can be called again before
_PyRuntimeState_Init().
Co-Authored-By: Eric Snow <ericsnowcurrently@gmail.com>
create_filter() now expects the action as a _Py_Identifier, which
avoids string comparisons and, more importantly, avoids having to
handle the annoying "unknown action" case.
* calculate_path() rewritten in Modules/getpath.c and PC/getpathp.c
* Move global variables into a new PyPathConfig structure.
* calculate_path():
* Split the huge calculate_path() function into subfunctions.
* Add PyCalculatePath structure to pass data between subfunctions.
* Document PyCalculatePath fields.
* Move cleanup code into a new calculate_free() subfunction
* calculate_init() now handles Py_DecodeLocale() failures properly
* calculate_path() is now atomic: PyPathConfig (path_config) is only
  replaced, all at once, on success.
* _Py_GetPythonHomeWithConfig() now returns an error on failure
* Add _Py_INIT_NO_MEMORY() helper: report a memory allocation failure
* Coding style fixes (PEP 7)
* Py_Main() now reads the PYTHONHOME environment variable
* Add _Py_GetPythonHomeWithConfig() private function
* Add _PyWarnings_InitWithConfig()
* init_filters() no longer gets the current core configuration from the
  current interpreter or Python thread. The configuration is passed
  explicitly to _PyWarnings_InitWithConfig().
* _Py_InitializeCore() now fails on _PyWarnings_InitWithConfig()
failure.
* Pass configuration as constant
Changes:
* Py_Main() initializes _PyCoreConfig.module_search_path_env from
the PYTHONPATH environment variable.
* PyInterpreterState_New() now initializes core_config and config
fields
* Compute sys.path a little bit earlier in
  _Py_InitializeMainInterpreter() and new_interpreter()
* Add _Py_GetPathWithConfig() private function.
* Optimize warnings.filterwarnings(). Replace re.compile('') with
  None to avoid the cost of calling a regex.match() method, since it
  always matches.
* Optimize get_warnings_attr(): replace PyObject_GetAttrString() with
_PyObject_GetAttrId().
Also clean up create_filter():
* Use _Py_IDENTIFIER() to allow cleaning up strings at Python
  finalization
* Replace Py_FatalError() with regular exceptions
Py_Main() now handles two more -X options:
* -X showrefcount: new _PyCoreConfig.show_ref_count field
* -X showalloccount: new _PyCoreConfig.show_alloc_count field
The developer mode (-X dev) now creates all default warnings filters,
ordering them so that ResourceWarning is always shown and BytesWarning
depends on the -b option.
Write a functional test to make sure that ResourceWarning is logged
twice at the same location in the developer mode.
Add a new 'dev_mode' field to _PyCoreConfig.
When Python is built in debug mode (Py_DEBUG), DeprecationWarning,
PendingDeprecationWarning and ImportWarning warnings are now
displayed by default.
test_venv: run "-m pip" and "-m ensurepip._uninstall" with -W
ignore::DeprecationWarning since pip code is not part of Python.
Add a new "developer mode": new "-X dev" command line option to
enable debug checks at runtime.
Changes:
* Add unit tests for -X dev
* test_cmd_line: replace test.support with support.
* Fix _PyRuntimeState_Fini(): use the same memory allocator
  as _PyRuntimeState_Init().
* Fix _PyMem_GetDefaultRawAllocator()
Parse more env vars in Py_Main():
* Add more options to _PyCoreConfig:
* faulthandler
* tracemalloc
* importtime
* Move code to parse environment variables from _Py_InitializeCore()
to Py_Main(). This change fixes a regression from Python 3.6:
PYTHONUNBUFFERED is now read before calling pymain_init_stdio().
* _PyFaulthandler_Init() and _PyTraceMalloc_Init() now take an
  argument to decide whether the module has to be enabled at startup.
* tracemalloc_start() is now responsible for checking the maximum
  number of frames.
Other changes:
* Cleanup Py_Main():
* Rename some pymain_xxx() subfunctions
* Add pymain_run_python() subfunction
* Cleanup Py_NewInterpreter()
* _PyInterpreterState_Enable() now reports failure
* init_hash_secret() now treats a pyurandom() failure as a "user
  error": don't fail with abort().
* pymain_optlist_append() and pymain_strdup() now set err on memory
  allocation failure.
* Don't use "Python runtime" anymore to parse command line options or
to get environment variables: pymain_init() is now a strict
separation.
* Use an error message rather than "crashing" directly with
  Py_FatalError(). Limit the number of calls to Py_FatalError(). This
  prepares the code to handle errors more nicely later.
* Warnings options (-W, PYTHONWARNINGS) and "XOptions" (-X) are now
only added to the sys module once Python core is properly
initialized.
* _PyMain is now the well-identified owner of some important strings
  such as the warnings options, XOptions, and the "program name". The
  program name string is now properly freed at exit.
  pymain_free() is now responsible for freeing the "command" string.
* Rename most functions in Modules/main.c to use a "pymain_" prefix to
  avoid conflicts and ease debugging.
* Replace _Py_CommandLineDetails_INIT with memset(0)
* Reorder a lot of code to fix the initialization ordering. For
example, initializing standard streams now comes before parsing
PYTHONWARNINGS.
* Py_Main() now handles errors when adding warnings options and
XOptions.
* Add _PyMem_GetDefaultRawAllocator() private function.
* Clean up _PyMem_Initialize(): remove needless global constants by
  moving them into _PyMem_Initialize().
* Call _PyRuntime_Initialize() as soon as possible:
_PyRuntime_Initialize() now returns an error message on failure.
* Add the _PyInitError structure and the following macros (a rough sketch follows this list):
* _Py_INIT_OK()
* _Py_INIT_ERR(msg)
* _Py_INIT_USER_ERR(msg): "user" error, don't abort() in that case
* _Py_INIT_FAILED(err)
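A rough sketch of what the structure and macros look like (assumed shape for illustration; the real definitions differ in detail):
    typedef struct {
        const char *prefix;  /* e.g. the function that failed */
        const char *msg;     /* error message, NULL means success */
        int user_err;        /* 1: user error, report and exit; 0: bug, abort() */
    } _PyInitError;

    #define _Py_INIT_OK()          (_PyInitError){.prefix = NULL, .msg = NULL, .user_err = 0}
    #define _Py_INIT_ERR(MSG)      (_PyInitError){.prefix = __func__, .msg = (MSG), .user_err = 0}
    #define _Py_INIT_USER_ERR(MSG) (_PyInitError){.prefix = __func__, .msg = (MSG), .user_err = 1}
    #define _Py_INIT_FAILED(err)   ((err).msg != NULL)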
* Setting sys.tracebacklimit to 0 or less now suppresses printing tracebacks.
* Setting sys.tracebacklimit to None now causes the default limit to be used.
* Setting sys.tracebacklimit to an integer larger than LONG_MAX now means the
  limit LONG_MAX is used rather than the default limit.
* Fixed integer overflows in the case of more than 2**31 traceback items on
Windows.
* Fixed the handling of output errors.
Get rid of _PyObject_HasAttrId() and PyDict_GetItemString().
Silence only the expected AttributeError, KeyError and ImportError when
getting an attribute, looking up a key in a dict, or importing a module.
The kB (kilobyte) unit means 1000 bytes, whereas KiB ("kibibyte")
means 1024 bytes. KB was misused: replace kB or KB with KiB where
appropriate.
Make the same change for MB and GB, which become MiB and GiB.
Change the output of Tools/iobench/iobench.py.
Also round the size of the documentation from 5.5 MB to 5 MiB.
Add new time functions:
* time.clock_gettime_ns()
* time.clock_settime_ns()
* time.monotonic_ns()
* time.perf_counter_ns()
* time.process_time_ns()
* time.time_ns()
Add new _PyTime functions:
* _PyTime_FromTimespec()
* _PyTime_FromNanosecondsObject()
* _PyTime_FromTimeval()
Other changes:
* Add also os.times() tests to test_os.
* pytime_fromtimeval() and pytime_fromtimespec() now return
  _PyTime_MAX or _PyTime_MIN on overflow, rather than relying on
  undefined behaviour
* _PyTime_FromNanoseconds() parameter type changes from long long to
_PyTime_t
This kludge is from 1992. Any C99 compiler is going to be able to handle the
ceval dispatch switch.
Anyway, we have much bigger switches than the ceval dispatch one around. (See,
e.g., Objects/unicodetype_db.h.)
The startup refactoring means command line settings
are now applied after settings are read from the
environment.
This updates the way command line settings are applied
to account for that, ensures more settings are first read
from the environment in _PyInitializeCore, and adds a
simple test case covering the flags that are easy to check.
Fix the pthread+semaphore implementation of
PyThread_acquire_lock_timed() when called with timeout > 0 and
intr_flag=0: recompute the timeout if sem_timedwait() is interrupted
by a signal (EINTR).
See also PEP 475.
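A hedged sketch of the retry idea (simplified; the real Python/thread_pthread.h code computes the deadline from Python's microsecond timeout and honours intr_flag):
    #include <errno.h>
    #include <semaphore.h>
    #include <time.h>

    /* Because sem_timedwait() takes an absolute deadline, retrying with
       the same deadline after EINTR effectively recomputes the remaining
       timeout, so a signal can no longer shorten or extend the wait. */
    static int
    wait_until_deadline(sem_t *sem, const struct timespec *deadline)
    {
        int status;
        do {
            status = sem_timedwait(sem, deadline);
        } while (status < 0 && errno == EINTR);
        return status;
    }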
The pthread implementation of PyThread_acquire_lock() now fails with
a fatal error if the timeout is larger than PY_TIMEOUT_MAX, as done
in the Windows implementation.
The check prevents any risk of overflow in PyThread_acquire_lock().
Add also PY_DWORD_MAX constant.
* Rewrite win_perf_counter() to only use integers internally.
* Add _PyTime_MulDiv(), which computes "ticks * mul / div" in two parts
  (integer part and remainder) to prevent integer overflow; a rough
  sketch follows this list.
* The clock frequency is checked at initialization for integer overflow.
* Also enhance pymonotonic() to reduce the precision loss on macOS
  (mach_absolute_time() clock).
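A minimal sketch of the "two parts" idea behind _PyTime_MulDiv() (not the exact pytime.c code; int64_t stands in for _PyTime_t):
    #include <stdint.h>

    /* Compute ticks * mul / div without overflowing the intermediate
       product: split ticks into quotient and remainder of ticks / div,
       so each multiplication stays small. */
    static int64_t
    mul_div(int64_t ticks, int64_t mul, int64_t div)
    {
        int64_t intpart = ticks / div;
        int64_t remaining = ticks % div;
        return intpart * mul + (remaining * mul) / div;
    }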
time.clock() and time.perf_counter() now again use a C double
internally.
Also remove _PyTime_GetWinPerfCounterWithInfo(): use
_PyTime_GetPerfCounterDoubleWithInfo() instead on Windows.
See PEP 539 for details.
Highlights of changes:
- Add Thread Specific Storage (TSS) API
- Document the Thread Local Storage (TLS) API as deprecated
- Update code that used TLS API to use TSS API
On Windows, Py_FatalError() now limits to 256 bytes the size of the
buffer used to call OutputDebugStringW(). Previously, the size
depended on the length of the error message.
Class execution requires that __prepare__() methods return
a proper execution namespace. Check for that immediately
after calling __prepare__(), rather than passing it through
to the code execution machinery and potentially triggering
SystemError (in debug builds) or a cryptic TypeError
(in release builds).
Patch by Oren Milman.
The concrete PyDict_* API is used to interact with PyInterpreterState.modules in a number of places. This isn't compatible with all dict subclasses, nor with other Mapping implementations. This patch switches the concrete API usage to the corresponding abstract API calls.
We also add a PyImport_GetModule() function (and some other helpers) to reduce a bunch of code duplication.
* Add Py_UNREACHABLE() as an alias to abort() (a sketch follows this list).
* Use Py_UNREACHABLE() instead of assert(0)
* Convert more unreachable code to use Py_UNREACHABLE()
* Document Py_UNREACHABLE() and a few other macros.
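A minimal sketch of the macro mentioned in the first item (the actual definition in CPython's headers may carry extra annotations):
    #include <stdlib.h>

    /* Marks code paths that must never execute; aborts if one ever does.
       Typical use: the default branch of an exhaustive switch, where
       assert(0) was used before. */
    #define Py_UNREACHABLE() abort()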
A bunch of code currently uses PyInterpreterState.modules directly instead of PyImport_GetModuleDict(). This complicates efforts to make changes relative to sys.modules. This patch switches to using PyImport_GetModuleDict() uniformly. Also, a number of related uses of sys.modules are updated for uniformity for the same reason.
Note that this code was already reviewed and merged as part of #1638. I reverted that and am now splitting it up into more focused parts.
PR #1638, for bpo-28411, causes problems in some (very) edge cases. Until that gets sorted out, we're reverting the merge. PR #3506, a fix on top of #1638, is also getting reverted.
* Drop warnoptions from PyInterpreterState.
* Drop xoptions from PyInterpreterState.
* Don't set warnoptions and _xoptions again.
* Decref after adding to sys.__dict__.
* Drop an unused macro.
* Check sys.xoptions *before* we delete it.
This undoes a853a8ba78 except for the pytime.c
parts. We want to continue to allow IEEE 754 doubles larger than FLT_MAX to be
rounded into finite floats. Tests were added to verify this behavior.
* group the (stateful) runtime globals into various topical structs
* consolidate the topical structs under a single top-level _PyRuntimeState struct
* add a check-c-globals.py script that helps identify runtime globals
Other globals are excluded (see globals.txt and check-c-globals.py).
f_trace_lines: enable/disable line trace events
f_trace_opcodes: enable/disable opcode trace events
These are intended primarily for testing of the interpreter
itself, as they make it much easier to emulate signals
arriving at unfortunate times.
According to the comment, there was previously a call to PyObject_IsSubclass() involved which could fail, but since it was replaced with a call to PyType_IsSubtype(), it can no longer fail.
Use sys.modules.get() in the "with _ModuleLockManager(name):" block
to protect the dictionary key with the module lock, and use an atomic
get to prevent a race condition.
Also remove _bootstrap._POPULATE since it was unused
(_bootstrap_external now has its own _POPULATE object); add a new
_SENTINEL object instead.
* Rewrite importlib _get_module_lock(): it is now responsible for
  holding the imp lock directly.
* _find_and_load() now holds the module lock while checking if name is
  in sys.modules, to prevent a race condition
* bpo-30832: Remove own implementation for thread-local storage
CPython has provided its own implementation of thread-local storage
(TLS) in Python/thread.c, used when a platform has not supplied
native TLS. However, all currently supported platforms (NT and
pthreads) provide native TLS and define the Py_HAVE_NATIVE_TLS macro
unconditionally.
* bpo-30832: replace NT with Windows
* bpo-30832: change to directive chain
* bpo-30832: remove a comment which makes no sense
- On some versions of FreeBSD, setting the "UTF-8" locale
succeeds, but a subsequent "nl_langinfo(CODESET)" fails
- adding a check for this in the coercion logic means that
  coercion will happen on systems where this check succeeds,
  and will be skipped otherwise (a rough sketch of such a check
  follows this list)
- that way CPython should automatically adapt to changes in
platform behaviour, rather than needing a new release to
enable coercion at build time
- this also allows UTF-8 to be re-enabled as a coercion
target, restoring the locale coercion behaviour on Mac OS X
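A hedged sketch of the kind of runtime check described above (simplified; the real coercion logic in Python/pylifecycle.c saves and restores more state and considers several target locales):
    #include <langinfo.h>
    #include <locale.h>
    #include <stdlib.h>
    #include <string.h>

    /* Return 1 if switching LC_CTYPE to locale_name succeeds *and*
       nl_langinfo(CODESET) then reports a non-empty codeset; on some
       FreeBSD versions setlocale() succeeds but the CODESET query fails,
       so coercion must be skipped there. */
    static int
    locale_is_usable(const char *locale_name)
    {
        const char *cur = setlocale(LC_CTYPE, NULL);
        char *saved = cur ? strdup(cur) : NULL;
        int ok = 0;
        if (setlocale(LC_CTYPE, locale_name) != NULL) {
            const char *codeset = nl_langinfo(CODESET);
            ok = (codeset != NULL && codeset[0] != '\0');
        }
        if (saved != NULL) {
            setlocale(LC_CTYPE, saved);  /* restore the previous locale */
            free(saved);
        }
        return ok;
    }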
`PYTHONFRAMEWORK` is defined in `Makefile` and it shouldn't be used
in `pyconfig.h`.
`sysconfig.py --generate-posix-vars` reads config vars from Makefile
and `pyconfig.h`, so conflicting variables should be avoided.
In particular, string config variables in Makefile are unquoted, but
in `pyconfig.h` they are kept quoted. So the variable should be
private (start with an underscore).
* Improve signal delivery
Avoid using Py_AddPendingCall from signal handler, to avoid calling signal-unsafe functions.
* Remove unused function
* Improve comments
* Add stress test
* Adapt for --without-threads
* Add second stress test
* Add NEWS blurb
* Address comments @haypo
Based on patch by Victor Stinner.
Add private C API function _PyUnicode_AsUnicode() which is similar to
PyUnicode_AsUnicode(), but checks for null characters.
* bpo-30765: Avoid blocking when PyThread_acquire_lock() is asked not to lock
This is especially important if PyThread_acquire_lock() is called reentrantly
(for example from a signal handler).
* Update 2017-06-26-14-29-50.bpo-30765.Q5iBmf.rst
* Avoid core logic when taking the mutex failed
* bpo-30183: Fixes HP-UX cc compilation error in pytime.c
HP-UX does not support the CLOCK_MONOTONIC identifier, and will fail to
compile:
"Python/pytime.c", line 723: error #2020: identifier
"CLOCK_MONOTONIC" is undefined
const clockid_t clk_id = CLOCK_MONOTONIC;
Add a new section for __hpux that calls 'gethrtime()' instead of
'clock_gettime()'; a rough sketch follows below.
* bpo-30183: Removes unnecessary return
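A hedged sketch of the shape of such a section (illustrative only; the real pytime.c code converts to _PyTime_t, fills clock info, and the header location is assumed):
    #ifdef __hpux
    #include <sys/time.h>      /* gethrtime(), hrtime_t (assumed header) */

    /* HP-UX has no CLOCK_MONOTONIC: gethrtime() returns nanoseconds from
       an arbitrary origin and is monotonically increasing, so it can stand
       in for clock_gettime(CLOCK_MONOTONIC). */
    static long long
    monotonic_clock_ns(void)
    {
        return (long long)gethrtime();
    }
    #endif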
- removes PY_WARN_ON_C_LOCALE build time flag
- locale coercion and compatibility warnings are now always compiled
in, but are off by default
- adds PYTHONCOERCECLOCALE=warn runtime option to aid in
debugging potentially locale related compatibility problems
Due to not-yet-resolved test failures on *BSD systems (including
Mac OS X), this also temporarily disables UTF-8 as a locale coercion
target, and skips testing the interpreter's behavior in the POSIX locale.
- new PYTHONCOERCECLOCALE config setting
- coerces legacy C locale to C.UTF-8, C.utf8 or UTF-8 by default
- always uses C.UTF-8 on Android
- uses `surrogateescape` on stdin and stdout in the coercion
target locales
- configure option to disable locale coercion at build time
- configure option to disable C locale warning at build time
The functions '_PyArg_ParseStack()' and
'_PyArg_UnpackStack()' were failing (with the error
"XXX() takes Y argument (Z given)") before
the function '_PyArg_NoStackKeywords()' was called.
Thus, the latter did not raise its more meaningful
error: "XXX() takes no keyword arguments".
'invalid character in identifier' is now raised instead of
'f-string: empty expression not allowed' if a subexpression contains
only whitespace characters that are not accepted by the Python parser.
Error messages shown when passing keyword arguments to some builtins
that don't support keyword arguments contained doubled parentheses:
"()()". The regression was introduced by bpo-30534.
Fix a reference leak in subinterpreters, like test_callbacks_leak() of
test_atexit.
warnoptions is a list used to pass options from the command line to
the sys module constructor. Before this change, the list was shared
by multiple interpreters, which is not the expected behaviour: each
interpreter should have its own independent mutable world.
This change duplicates the list in each interpreter, so that each
interpreter owns its own list and can clear it independently.
* bpo-16500: Allow registering at-fork handlers
* Address Serhiy's comments
* Add doc for new C API
* Add doc for new Python-facing function
* Add NEWS entry + doc nit
* Improves test_underpth_nosite_file to reveal why it fails.
* Enable building with Windows 10 SDK.
* Fix WinSDK detection
* Fix initialization on Windows when a ._pth file exists.
* Fix tabs
* Adds comment about Py_GetPath call.
PEP 432 specifies a number of large changes to interpreter startup code, including exposing a cleaner C-API. The major changes depend on a number of smaller changes. This patch includes all those smaller changes.
If we have a chain of generators/coroutines that are 'yield from'ing
each other, then resuming the stack works like:
- call send() on the outermost generator
- this enters _PyEval_EvalFrameDefault, which re-executes the
YIELD_FROM opcode
- which calls send() on the next generator
- which enters _PyEval_EvalFrameDefault, which re-executes the
YIELD_FROM opcode
- ...etc.
However, every time we enter _PyEval_EvalFrameDefault, the first thing
we do is to check for pending signals, and if there are any then we
run the signal handler. And if it raises an exception, then we
immediately propagate that exception *instead* of starting to execute
bytecode. This means that e.g. a SIGINT at the wrong moment can "break
the chain" – it can be raised in the middle of our yield from chain,
with the bottom part of the stack abandoned for the garbage collector.
The fix is pretty simple: there's already a special case in
_PyEval_EvalFrameEx where it skips running signal handlers if the next
opcode is SETUP_FINALLY. (I don't see how this accomplishes anything
useful, but that's another story.) If we extend this check to also
skip running signal handlers when the next opcode is YIELD_FROM, then
that closes the hole – now the exception can only be raised at the
innermost stack frame.
This shouldn't have any performance implications, because the opcode
check happens inside the "slow path" after we've already determined
that there's a pending signal or something similar for us to process;
the vast majority of the time this isn't true and the new check
doesn't run at all.
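A hedged sketch of the extended check in the eval loop's pending-work path (close to, but not necessarily identical to, the actual ceval.c change):
    /* Don't run signal handlers (or other pending calls) if the next
       opcode is SETUP_FINALLY or YIELD_FROM: deferring them keeps a
       raised exception at the innermost frame of a yield-from chain. */
    if (_Py_OPCODE(*next_instr) == SETUP_FINALLY ||
        _Py_OPCODE(*next_instr) == YIELD_FROM)
    {
        goto fast_next_opcode;
    }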
Python/thread_foobar.h is template code for adapting the threading
code to new platforms; it has never been used on an actual platform.
The existing Python/thread_*.h files give concrete examples of such
adaptations, instead of the template code.
is_valid_fd() now uses fstat() instead of dup() on macOS, to return 0
on a pipe when the other side of the pipe is closed: fstat() fails
with EBADF in that case, whereas dup() succeeds.
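A hedged sketch of the macOS variant (simplified; the real pylifecycle.c code keeps the dup()-based check on other platforms):
    #include <sys/stat.h>

    /* On macOS, probe the descriptor with fstat() instead of dup(): for
       the problematic pipe case described above, fstat() fails with EBADF
       where dup() would still succeed. */
    static int
    is_valid_fd(int fd)
    {
        struct stat st;
        if (fd < 0) {
            return 0;
        }
        return fstat(fd, &st) == 0;
    }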
* bpo-6532: Make the thread id an unsigned integer.
On the C API side, the result types of PyThread_start_new_thread() and
PyThread_get_thread_ident(), the id parameter of
PyThreadState_SetAsyncExc(), and the thread_id field of PyThreadState
changed from "long" to "unsigned long".
* Restore a check in thread_get_ident().
When LOAD_METHOD is used to call a C method, a PyMethodDescrObject
was passed to profilefunc since 5566bbb.
But lsprof traces only PyCFunctionObject. Additionally, there can be
third-party extensions which assume the passed arg is a
PyCFunctionObject without calling PyCFunction_Check().
So create a PyCFunctionObject from the PyMethodDescrObject when
tstate->c_profilefunc is set.
sys.version and the platform module python_build(),
python_branch(), and python_revision() functions now use
git information rather than hg when building from a repo.
Based on original patches by Brett Cannon and Steve Dower.
bpo-29463 added an optional "docstring" field to 4 AST types.
While it is optional, it breaks backward compatibility because the AST
constructor requires the number of positional arguments to match the
number of fields.
AST types accept empty arguments and incomplete keyword arguments.
That is not a big problem because fields can be filled in after creation
and are checked when compiling.
So stop requiring a complete set of fields for positional arguments too.
When you use `'%s' % SubClassOfStr()`, where `SubClassOfStr.__rmod__` exists, the reverse operation is ignored as normally such string formatting operations use the `PyUnicode_Format()` fast path. This patch tests for subclasses of `str` first and picks the slow path in that case.
Patch by Martijn Pieters.
* bpo-29463: Add docstring field to some AST nodes.
ClassDef, ModuleDef, FunctionDef, and AsyncFunctionDef now have a
docstring field. It was previously the first statement of their body.
* fix document. thanks travis!
* doc fixes
bltinmodule.c: Added in b744ba1 and no longer necessary since d64e8a7
posixmodule.c: Added in d1cd4d4 and no longer necessary since efb00c0
pythonrun.c: Added in 73d538b and no longer necessary since d600951
sysmodule.c: Added in 5467d4c and no longer necessary since a2c17c5
* Move all functions to call objects in a new Objects/call.c file.
* Rename fast_function() to _PyFunction_FastCallKeywords().
* Copy null_error() from Objects/abstract.c
* Inline type_error() in call.c to not have to copy it, it was only
called once.
* Export _PyEval_EvalCodeWithName() since it is now called
from call.c.
* Replace PyArg_ParseTupleAndKeywords() with _PyArg_ParseStackAndKeywords(),
  which is more efficient at parsing keywords since it decodes keyword names
  (char*) from UTF-8 only once, instead of decoding them at each call.
* METH_FASTCALL avoids the creation of a temporary tuple to pass positional
arguments.
Patch written by INADA Naoki, pushed by Victor Stinner.
Issue #29286. Run Argument Clinic to get the new faster METH_FASTCALL calling
convention for functions using "boring" positional arguments.
Manually fix _elementtree: _elementtree_XMLParser_doctype() must remain
consistent with the clinic code.
Issue #29227: Inline call_function() into _PyEval_EvalFrameDefault() using
Py_LOCAL_INLINE to reduce the stack consumption.
It reduces the stack consumption, bytes per call, before => after:
test_python_call: 1152 => 1040 (-112 B)
test_python_getitem: 1008 => 976 (-32 B)
test_python_iterator: 1232 => 1120 (-112 B)
=> total: 3392 => 3136 (- 256 B)
Copy and then adapt Python/random.c from default branch. Difference between 3.5
and default branches:
* Python 3.5 only uses getrandom() in non-blocking mode: flags=GRND_NONBLOCK
* If getrandom() fails with EAGAIN: py_getrandom() immediately fails and
remembers that getrandom() doesn't work.
* Python 3.5 has no _PyOS_URandomNonblock() function: _PyOS_URandom()
works in non-blocking mode on Python 3.5
* dev_urandom() now calls py_getentropy(). Prepare the fallback so that a
  getentropy() failure falls back on reading from /dev/urandom.
* Simplify dev_urandom(). pyurandom() is now responsible for calling
  getentropy() or getrandom(). Also enhance the dev_urandom() and
  pyurandom() documentation.
* getrandom() is now preferred over getentropy(). glibc 2.24 now implements
  getentropy() on Linux using the getrandom() syscall, but getentropy()
  doesn't support non-blocking mode. Since getrandom() is tried first, it is
  no longer necessary to explicitly exclude getentropy() on Solaris. Replace:
"if defined(HAVE_GETENTROPY) && !defined(sun)"
with "if defined(HAVE_GETENTROPY)"
* Enhance py_getrandom() documentation. py_getentropy() now supports ENOSYS,
EPERM & EINTR
The glibc now implements getentropy() on Linux using the getrandom() syscall.
But getentropy() doesn't support non-blocking mode.
Since getrandom() is tried first, it is no longer necessary to explicitly
exclude getentropy() on Solaris. Replace:
if defined(HAVE_GETENTROPY) && !defined(sun)
with
if defined(HAVE_GETENTROPY)
Issue #28870: Add a new _PY_FASTCALL_SMALL_STACK constant, size of "small
stacks" allocated on the C stack to pass positional arguments to
_PyObject_FastCall().
_PyObject_Call_Prepend() now uses a small stack of 5 arguments (40 bytes)
instead of 8 (64 bytes), since it is modified to use _PY_FASTCALL_SMALL_STACK.
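A hedged sketch of the small-stack pattern (illustrative; _PyObject_Call_Prepend() differs in details such as copying the existing arguments):
    /* Use a small array on the C stack for the common case of few
       arguments, and fall back to the heap above
       _PY_FASTCALL_SMALL_STACK arguments. */
    PyObject *small_stack[_PY_FASTCALL_SMALL_STACK];
    PyObject **stack;

    if (nargs <= (Py_ssize_t)Py_ARRAY_LENGTH(small_stack)) {
        stack = small_stack;
    }
    else {
        stack = PyMem_Malloc(nargs * sizeof(PyObject *));
        if (stack == NULL) {
            return PyErr_NoMemory();
        }
    }
    /* ... fill stack[0..nargs-1], then:
       result = _PyObject_FastCall(callable, stack, nargs); ... */
    if (stack != small_stack) {
        PyMem_Free(stack);
    }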
Special thanks to INADA Naoki for pushing the patch through
the last mile, Serhiy Storchaka for reviewing the code, and to
Victor Stinner for suggesting the idea (originally implemented
in the PyPy project).
PEP 523 modified PyEval_EvalFrameEx(): it's now an indirection to
interp->eval_frame().
Inline the call in performance-critical code. Leave PyEval_EvalFrame()
unchanged; this function is only kept for backward compatibility.
Issue #28915: Replace _PyObject_CallMethodId() with
_PyObject_CallMethodIdObjArgs() in various modules when the format string was
only made of "O" formats, PyObject* arguments.
_PyObject_CallMethodIdObjArgs() avoids the creation of a temporary tuple and
doesn't have to parse a format string.
Issue #28915: Replace _PyObject_CallMethodId() with
_PyObject_CallMethodIdObjArgs() when the format string only uses the format
'O' for objects, like "(O)".
_PyObject_CallMethodIdObjArgs() avoids the code to parse a format string and
avoids the creation of a temporary tuple.
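A minimal before/after illustration of the pattern (PyId_update is just an example identifier declared with _Py_IDENTIFIER(update)):
    _Py_IDENTIFIER(update);

    /* Before: the "(O)" format string must be parsed and a temporary
       1-element tuple is built to hold the argument. */
    res = _PyObject_CallMethodId(obj, &PyId_update, "(O)", arg);

    /* After: the argument is passed directly, terminated by NULL; no
       format parsing, no temporary tuple. */
    res = _PyObject_CallMethodIdObjArgs(obj, &PyId_update, arg, NULL);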
Issue #28915: The Py_ssize_t type is better for indexes. The compiler might
emit more efficient code for i++. Py_ssize_t is the type of a PyTuple index,
for example.
Also replace "int endchar" with "char endchar".
Issue #28838: Rename parameters of the "calls" functions of the Python C API.
* Rename 'callable_object' and 'func' to 'callable': any Python callable object
is accepted, not only Python functions
* Rename 'method' and 'nameid' to 'name' (method name)
* Rename 'o' to 'obj'
* Move, fix and update documentation of PyObject_CallXXX() functions
in abstract.h
* Also update the documentation of the C API (update parameter names)
Replace
_PyObject_CallArg1(func, arg)
with
PyObject_CallFunctionObjArgs(func, arg, NULL)
Using the _PyObject_CallArg1() macro increases the usage of the C stack, which
was unexpected and unwanted. PyObject_CallFunctionObjArgs() doesn't have this
issue.
Handling zero-argument super() in __init_subclass__ and
__set_name__ involved moving __class__ initialisation to
type.__new__. This requires cooperation from custom
metaclasses to ensure that the new __classcell__ entry
is passed along appropriately.
The initial implementation of that change resulted in abruptly
broken zero-argument super() support in metaclasses that didn't
adhere to the new requirements (such as Django's metaclass for
Model definitions).
The updated approach adopted here instead emits a deprecation
warning for those cases, and makes them work the same way they
did in Python 3.5.
This patch also improves the related class machinery documentation
to cover these details and to include more reader-friendly
cross-references and index entries.
Issue #28858: The change b9c9691c72c5 introduced a regression. It seems like
_PyObject_CallArg1() uses more stack memory than
PyObject_CallFunctionObjArgs().
Replace
PyObject_CallFunction(func, "O", arg)
and
PyObject_CallFunction(func, "O", arg, NULL)
with
_PyObject_CallArg1(func, arg)
Replace
PyObject_CallFunction(func, NULL)
with
_PyObject_CallNoArg(func)
_PyObject_CallNoArg() and _PyObject_CallArg1() are simpler and don't allocate
memory on the C stack.
* PyObject_CallFunctionObjArgs(func, NULL) => _PyObject_CallNoArg(func)
* PyObject_CallFunctionObjArgs(func, arg, NULL) => _PyObject_CallArg1(func, arg)
PyObject_CallFunctionObjArgs() allocates 40 bytes on the C stack and requires
extra work to "parse" C arguments to build a C array of PyObject*.
_PyObject_CallNoArg() and _PyObject_CallArg1() are simpler and don't allocate
memory on the C stack.
This change is part of the fastcall project. The change on listsort() is
related to the issue #23507.
Issue #28799:
* Remove the PyEval_GetCallStats() function.
* Deprecate the untested and undocumented sys.callstats() function.
* Remove the CALL_PROFILE special build
Use the sys.setprofile() function, cProfile or profile module to profile
function calls.
Issue #28782: Fix a bug in the implementation of ``yield from`` when checking
if the next instruction is YIELD_FROM. Regression introduced by WORDCODE
(issue #26647).
Reviewed by Serhiy Storchaka and Yury Selivanov.
Issue #28691: Fix warn_invalid_escape_sequence(): correctly handle a
DeprecationWarning raised as an exception. First clear the current exception,
to replace the DeprecationWarning exception with a SyntaxError exception.
Unit test written by Serhiy Storchaka.
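A hedged sketch of the fix's shape (simplified; the real Python/ast.c code reports the location, and invalid_char is a hypothetical variable holding the offending escape character):
    if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1,
                         "invalid escape sequence \\%c", invalid_char) < 0)
    {
        if (PyErr_ExceptionMatches(PyExc_DeprecationWarning)) {
            /* The DeprecationWarning was raised as an exception (e.g. with
               -W error): clear it first, then replace it with SyntaxError. */
            PyErr_Clear();
            PyErr_SetString(PyExc_SyntaxError, "invalid escape sequence");
        }
        return -1;
    }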
When Python is not compiled with PGO, the performance of Python on the
call_simple and call_method microbenchmarks depends highly on code placement.
In the worst case, the performance slowdown can be up to 70%.
The GCC __attribute__((hot)) attribute helps keep hot code close together, to
reduce the risk of such a major slowdown. This attribute is ignored when
Python is compiled with PGO. A sketch of how the attribute can be applied
follows the list below.
The following functions are considered hot according to statistics collected
by perf record/perf report:
* _PyEval_EvalFrameDefault()
* call_function()
* _PyFunction_FastCall()
* PyFrame_New()
* frame_dealloc()
* PyErr_Occurred()
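A minimal sketch of how such an attribute can be wired up (the macro name and guards are illustrative; CPython's real header also avoids the attribute for PGO builds, and the prototype shown is abbreviated):
    /* Mark a function as "hot" so GCC keeps it close to other hot code. */
    #ifdef __GNUC__
    #  define Py_HOT_FUNCTION __attribute__((hot))
    #else
    #  define Py_HOT_FUNCTION
    #endif

    /* Example use on one of the functions listed above: */
    Py_HOT_FUNCTION static PyObject *
    call_function(PyObject ***pp_stack, Py_ssize_t oparg, PyObject *kwnames);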