* PyMapping_HasKey() is not safe because it silences all exceptions and can return an incorrect result.
* Informative exceptions from PyMapping_DelItem() are overridden with RuntimeError, and the original exception raised before calling remove_module() is lost.
* There is a race condition between PyMapping_HasKey() and PyMapping_DelItem().
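A minimal sketch of a safer shape for remove_module(), assuming a helper of this form (not necessarily the exact CPython fix): delete unconditionally, silence only KeyError, and preserve the caller's pending exception.
```c
static void
remove_module_sketch(PyObject *modules, PyObject *name)
{
    /* Preserve any exception that was raised before this helper. */
    PyObject *type, *value, *traceback;
    PyErr_Fetch(&type, &value, &traceback);

    /* Deleting unconditionally removes the HasKey/DelItem race. */
    if (PyMapping_DelItem(modules, name) < 0) {
        if (PyErr_ExceptionMatches(PyExc_KeyError)) {
            /* Module was not in sys.modules: nothing to do. */
            PyErr_Clear();
        }
        else {
            /* Report the real error instead of replacing it. */
            PyErr_WriteUnraisable(name);
        }
    }

    /* Restore the original exception, if any. */
    PyErr_Restore(type, value, traceback);
}
```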
Since _PyImport_ReInitLock() now calls _PyThread_at_fork_reinit() on
the import lock, the lock is now in a known state: unlocked. It
is therefore safe to acquire it after fork.
PyOS_AfterFork_Child() helper functions now return a PyStatus:
PyOS_AfterFork_Child() is now responsible for handling errors.
* Move _PySignal_AfterFork() to the internal C API
* Add #ifdef HAVE_FORK guards around _PyGILState_Reinit(), _PySignal_AfterFork()
and _PyInterpreterState_DeleteExceptMain().
I can add another commit with the new test case I wrote, which verifies that the warning was printed before my change, is no longer printed after it, and that the function no longer returns null.
Automerge-Triggered-By: @brettcannon
Otherwise we leave a dangling pointer to freed memory. If we
then initialize a new interpreter in the same process and call
PyImport_ExtendInittab, we will (likely) crash when calling
PyMem_RawRealloc(inittab_copy, ...) since the pointer address
is bogus.
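Presumably the fix looks something like this sketch (names are taken from the surrounding text; the actual change may differ): reset the static pointer when its memory is released.
```c
#include <Python.h>

/* Sketch only: "inittab_copy" is the static copy described above. */
static struct _inittab *inittab_copy = NULL;

static void
fini_inittab_copy(void)
{
    PyMem_RawFree(inittab_copy);
    /* Without this reset, PyImport_ExtendInittab() in a freshly
       initialized interpreter would call PyMem_RawRealloc() on a
       dangling pointer. */
    inittab_copy = NULL;
}
```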
Automerge-Triggered-By: @brettcannon
PyFrame_GetCode(frame): return a borrowed reference to the frame
code.
Replace frame->f_code with PyFrame_GetCode(frame) in most code,
except in frameobject.c, genobject.c and ceval.c.
Also add PyFrame_GetLineNumber() to the limited C API.
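For illustration, a hedged sketch of the accessor style this change enables (the logging helper itself is hypothetical):
```c
#include <Python.h>
#include <frameobject.h>

/* Hypothetical helper: report where a frame is executing, going
   through the accessors instead of reading frame->f_code directly. */
static void
log_frame(PyFrameObject *frame)
{
    /* Borrowed reference, per the description above: no Py_DECREF. */
    PyCodeObject *code = PyFrame_GetCode(frame);
    int lineno = PyFrame_GetLineNumber(frame);
    printf("%s:%d\n", PyUnicode_AsUTF8(code->co_filename), lineno);
}
```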
Rename _PyInterpreterState_GET_UNSAFE() to _PyInterpreterState_GET()
for consistency with _PyThreadState_GET() and to have a shorter name
(which helps fit into 80 columns).
Also add "assert(tstate != NULL);" to the function.
Don't access the PyInterpreterState.config member directly anymore, but
use new functions:
* _PyInterpreterState_GetConfig()
* _PyInterpreterState_SetConfig()
* _Py_GetConfig()
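A hedged usage sketch of the new getter; the internal header name and a Py_BUILD_CORE build are both assumptions here:
```c
#include "Python.h"
#include "pycore_interp.h"   /* assumed internal header, Py_BUILD_CORE */

/* Read a configuration field through the getter instead of
   dereferencing interp->config directly. */
static int
get_verbose(PyThreadState *tstate)
{
    const PyConfig *config = _PyInterpreterState_GetConfig(tstate->interp);
    return config->verbose;
}
```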
Replace _PyInterpreterState_Get() function calls with the
_PyInterpreterState_GET_UNSAFE() macro, which is more efficient but
doesn't check whether tstate or interp is NULL.
_Py_GetConfigsAsDict() now uses _PyThreadState_GET().
The Py_FatalError() function is replaced with a macro which
automatically logs the name of the current function, unless the
Py_LIMITED_API macro is defined.
Changes:
* Add _Py_FatalErrorFunc() function.
* Remove the function name from the message of Py_FatalError() calls
which included the function name.
* Update tests.
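The mechanism presumably looks roughly like this sketch (the exact CPython definition may differ):
```c
void _Py_NO_RETURN _Py_FatalErrorFunc(const char *func, const char *msg);

#ifndef Py_LIMITED_API
   /* __func__ expands to the current C function's name, so every
      call site logs its own function automatically. */
#  define Py_FatalError(message) _Py_FatalErrorFunc(__func__, (message))
#endif
```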
The bulk of this patch was generated automatically with:
```sh
for name in \
    PyObject_Vectorcall \
    Py_TPFLAGS_HAVE_VECTORCALL \
    PyObject_VectorcallMethod \
    PyVectorcall_Function \
    PyObject_CallOneArg \
    PyObject_CallMethodNoArgs \
    PyObject_CallMethodOneArg \
;
do
    echo $name
    git grep -lwz _$name | xargs -0 sed -i "s/\b_$name\b/$name/g"
done

old=_PyObject_FastCallDict
new=PyObject_VectorcallDict
git grep -lwz $old | xargs -0 sed -i "s/\b$old\b/$new/g"
```
and then cleaned up:
- Revert changes in docs & news
- Revert changes to backcompat defines in headers
- Nudge misaligned comments
Currently, during runtime destruction, `_PyImport_Cleanup` is clearing the interpreter state before clearing out the modules themselves. This leads to a segfault in modules that rely on the module state to clean themselves up.
For example, let's take the small snippet added in the issue by @DinoV:
```python
import _struct

class C:
    def __init__(self):
        self.pack = _struct.pack

    def __del__(self):
        self.pack('I', -42)

_struct.x = C()
```
The module `_struct` uses the module state to run `pack`. Therefore, the module state has to be alive until after the module has been cleared out to successfully run `C.__del__`. This happens at line 606, when `_PyImport_Cleanup` calls `_PyModule_Clear`. In fact, the loop that calls `_PyModule_Clear` has in its comments:
> Now, if there are any modules left alive, clear their globals to minimize potential leaks. All C extension modules actually end up here, since they are kept alive in the interpreter state.
That means that we can't clear the module state (which is used by C Extensions) before we run that loop.
Moving `_PyInterpreterState_ClearModules` until after it fixes the segfault in the code snippet.
Finally, this updates a test in `io` to correctly assert the error that it now throws (since it now finds the io module state). The test in question is `test_create_at_shutdown_without_encoding`. That this test now passes is proof that the module state stays alive even when `__del__` is called at module destruction time, so I didn't add a new test for this.
https://bugs.python.org/issue38076
new_interpreter() now calls _PyBuiltin_Init() to create the builtins
module and calls _PyImport_FixupBuiltin(), rather than using
_PyImport_FindBuiltin(tstate, "builtins").
pycore_init_builtins() is now responsible for initializing
interp->builtins_copy: inline _PyImport_Init() and remove this
function.
If _PyImport_FixupExtensionObject() is called from a subinterpreter,
leave extensions unchanged and don't copy the module dictionary
into def->m_base.m_copy.
* Add GCState type for readability
* gcmodule.c now gets its gcstate from tstate
* _PyGC_DumpShutdownStats() now expects tstate rather than runtime
* Rename "state" to "gcstate" for readability: to avoid confusion
between "state" and "tstate" for example.
* collect() now only expects tstate: it gets gcstate from tstate.
* Pass tstate to _PyErr_xxx() functions
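A sketch of the pattern these bullets describe, assuming the internal struct layout (Py_BUILD_CORE only; GCState is the readability typedef mentioned above):
```c
static GCState *
get_gc_state(PyThreadState *tstate)
{
    /* The GC state hangs off the interpreter, which hangs off the
       thread state: no access to the global runtime is needed. */
    return &tstate->interp->gc;
}
```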
Relative imports use resolve_name to get the absolute target name,
which first seeks the current module's absolute package name from the globals:
If __package__ (and __spec__.parent) are missing then
import uses __name__, truncating the last segment if
the module is a submodule rather than a package __init__.py
(which it guesses from whether __path__ is defined).
The __name__ attempt should fail if there is no parent package (top level modules),
if __name__ is '__main__' (-m entry points), or both (scripts).
That is, the import should fail if __name__ has no subcomponents and
the module does not appear to be a package __init__ module.
Imports now raise `TypeError` instead of `ValueError` for relative import failures. This makes things consistent between `builtins.__import__` and `importlib.__import__` as well as using a more natural import for the failure.
https://bugs.python.org/issue37444
Automerge-Triggered-By: @brettcannon
* Rename PyImport_Cleanup() to _PyImport_Cleanup() and move it to the
internal C API. Add 'tstate' parameters.
* Remove documentation of _PyImport_Init(), PyImport_Cleanup(),
_PyImport_Fini(). All three were documented as "For internal use
only.".
* Add 'tstate' parameter to many internal import.c functions.
* _PyImportZip_Init() now gets 'tstate' parameter rather than
'interp'.
* Add 'interp' parameter to _PyState_ClearModules() and rename it
to _PyInterpreterState_ClearModules().
* Move private _PyImport_FindBuiltin() to the internal C API; add
'tstate' parameter to it.
* Remove private _PyImport_AddModuleObject() from the C API:
use public PyImport_AddModuleObject() instead.
* Remove private _PyImport_FindExtensionObjectEx() from the C API:
use private _PyImport_FindExtensionObject() instead.
* ast.h now includes Python-ast.h and node.h
* parsetok.h now includes node.h and grammar.h
* symtable.h now includes Python-ast.h
* Modify asdl_c.py to enhance Python-ast.h:
* Add #ifndef/#define Py_PYTHON_AST_H to be able to include the header
twice
* Add "extern { ... }" for C++
* Undefine "Yield" macro conflicting with winbase.h
* Remove "#undef Yield" from C files, it's now done in Python-ast.h
* Remove now useless includes in C files
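The resulting header hygiene follows the standard C pattern, roughly:
```c
#ifndef Py_PYTHON_AST_H
#define Py_PYTHON_AST_H
#ifdef __cplusplus
extern "C" {
#endif

#ifdef Yield
#  undef Yield   /* winbase.h defines a conflicting Yield macro */
#endif

/* ... generated AST declarations ... */

#ifdef __cplusplus
}
#endif
#endif /* !Py_PYTHON_AST_H */
```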
Two kinds of mistakes:
1. Missing space: after concatenation there is no space between words.
2. Missing comma: causes unintentional concatenation in a list of strings.
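Both come from C's implicit concatenation of adjacent string literals; toy examples:
```c
/* 1. Missing space: the literals are joined with no separator. */
const char *msg = "all tests passed"
                  "except one";   /* "all tests passedexcept one" */

/* 2. Missing comma: two array elements silently fuse into one. */
const char *names[] = {
    "alpha",
    "beta"    /* missing comma */
    "gamma",  /* result: {"alpha", "betagamma"} */
};
```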
* Add pycore_lifecycle.h and pycore_pathconfig.h headers to
Include/internal/
* Move Py_BUILD_CORE specific code from coreconfig.h and
pylifecycle.h to pycore_pathconfig.h and pycore_lifecycle.h
* Move _Py_wstrlist_XXX() definitions and _PyPathConfig code
from pycore_state.h to pycore_pathconfig.h
* Move "Init" and "Fini" function definitions from pylifecycle.c to
pycore_lifecycle.h.
Modules imported last are now cleared first at interpreter shutdown.
A newly imported module is moved to the end of sys.modules, behind
modules on which it depends.
bpo-31650, bpo-34170: Replace _Py_CheckHashBasedPycsMode with
_PyCoreConfig._check_hash_pycs_mode. Modify PyInit__imp() and
zipimport to get the parameter from the current interpreter core
configuration.
Remove Include/internal/import.h file.
* Add Include/coreconfig.h
* Move config_*() and _PyCoreConfig_*() functions from Modules/main.c
to a new Python/coreconfig.c file.
* Inline _Py_ReadHashSeed() into config_init_hash_seed()
* Move global configuration variables to coreconfig.c
As a result of 92a3c6f493, the compiler complains:
Python/import.c:2311:21: warning: comparison of integers of different signs: 'long' and 'unsigned long' [-Wsign-compare]
if ((i + n + 1) <= PY_SSIZE_T_MAX / sizeof(struct _inittab)) {
~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This overflow is extremely unlikely to happen, but let's avoid undefined
behavior anyway.
Fix the warning:
Python/import.c: warning: comparison between signed and unsigned integer expressions
if ((i + n + 1) <= PY_SSIZE_T_MAX / sizeof(struct _inittab)) {
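One standard way to fix it, sketched here (the actual change may differ): make both sides unsigned once the left side is known to be non-negative.
```c
/* i and n are non-negative Py_ssize_t counts; sizeof() yields size_t
   (unsigned), so cast the signed side to match. */
if ((size_t)(i + n + 1) <= PY_SSIZE_T_MAX / sizeof(struct _inittab)) {
    /* (i + n + 1) * sizeof(struct _inittab) cannot overflow here */
}
```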
Python now supports checking bytecode cache up-to-dateness with a hash of the
source contents rather than volatile source metadata. See the PEP for details.
While a fairly straightforward idea, quite a lot of code had to be modified due
to the pervasiveness of pyc implementation details in the codebase. Changes in
this commit include:
- The core changes to importlib to understand how to read, validate, and
regenerate hash-based pycs.
- Support for generating hash-based pycs in py_compile and compileall.
- Modifications to our siphash implementation to support passing a custom
key. We then expose it to importlib through _imp.
- Updates to all places in the interpreter, standard library, and tests that
manually generate or parse pyc files to grok the new format.
- Support in the interpreter command line code for long options like
--check-hash-based-pycs.
- Tests and documentation for all of the above.
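For orientation, the 16-byte pyc header laid out by PEP 552 can be sketched like this (the struct and field names are illustrative, not CPython's):
```c
#include <stdint.h>

/* Illustrative layout of the PEP 552 pyc header; all words are
   little-endian on disk. */
typedef struct {
    uint32_t magic;   /* bytecode magic number */
    uint32_t flags;   /* bit 0: hash-based pyc; bit 1: check_source */
    union {
        struct {
            uint32_t mtime;  /* source mtime (timestamp-based pyc) */
            uint32_t size;   /* source size in bytes */
        } timestamp;
        uint64_t source_hash;  /* SipHash of source (hash-based pyc) */
    } validation;
} pyc_header;
```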
PyImport_ExtendInittab() now uses PyMem_RawRealloc() rather than
PyMem_Realloc(). PyImport_ExtendInittab() can be called before
Py_Initialize() whereas only the PyMem_Raw allocator is supposed to
be used before Py_Initialize().
Add _PyImport_Fini2() to release the memory allocated by
PyImport_ExtendInittab() at exit. PyImport_ExtendInittab() now forces
the usage of the default raw allocator, to be able to release memory
in _PyImport_Fini2().
These functions are no longer exported in the C API; they are only
visible when Py_BUILD_CORE is defined:
* _PyExc_Fini()
* _PyImport_Fini()
* _PyGC_DumpShutdownStats()
* _PyGC_Fini()
* _PyType_Fini()
* _Py_HashRandomization_Fini()
Add import_find_and_load() helper function. The addition of
the importtime option has made PyImport_ImportModuleLevelObject() large
and so using a helper seems worthwhile. It also makes it clearer that
abs_name is the only argument needed by _find_and_load().
Py_Main() now handles two more -X options:
* -X showrefcount: new _PyCoreConfig.show_ref_count field
* -X showalloccount: new _PyCoreConfig.show_alloc_count field
Parse more env vars in Py_Main():
* Add more options to _PyCoreConfig:
* faulthandler
* tracemalloc
* importtime
* Move code to parse environment variables from _Py_InitializeCore()
to Py_Main(). This change fixes a regression from Python 3.6:
PYTHONUNBUFFERED is now read before calling pymain_init_stdio().
* _PyFaulthandler_Init() and _PyTraceMalloc_Init() now take an
argument that decides whether the module has to be enabled at startup.
* tracemalloc_start() is now responsible for checking the maximum
number of frames.
Other changes:
* Cleanup Py_Main():
* Rename some pymain_xxx() subfunctions
* Add pymain_run_python() subfunction
* Cleanup Py_NewInterpreter()
* _PyInterpreterState_Enable() now reports failure
* init_hash_secret() now considers pyurandom() failure as a "user
error": don't fail with abort().
* pymain_optlist_append() and pymain_strdup() now set err on memory
allocation failure.
* Don't use "Python runtime" anymore to parse command line options or
to get environment variables: pymain_init() is now a strict
separation.
* Use an error message rather than "crashing" directly with
Py_FatalError(). Limit the number of calls to Py_FatalError(). It
prepares the code to handle errors more nicely later.
* Warnings options (-W, PYTHONWARNINGS) and "XOptions" (-X) are now
only added to the sys module once Python core is properly
initialized.
* _PyMain is now the well-identified owner of some important strings
like the warnings options, the XOptions, and the "program name". The
program name string is now properly freed at exit.
pymain_free() is now responsible for freeing the "command" string.
* Rename most functions in Modules/main.c to use a "pymain_" prefix to
avoid conflicts and ease debugging.
* Replace _Py_CommandLineDetails_INIT with memset(0)
* Reorder a lot of code to fix the initialization ordering. For
example, initializing standard streams now comes before parsing
PYTHONWARNINGS.
* Py_Main() now handles errors when adding warnings options and
XOptions.
* Add _PyMem_GetDefaultRawAllocator() private function.
* Cleanup _PyMem_Initialize(): remove useless global constants,
moving them into _PyMem_Initialize().
* Call _PyRuntime_Initialize() as soon as possible:
_PyRuntime_Initialize() now returns an error message on failure.
* Add _PyInitError structure and following macros:
* _Py_INIT_OK()
* _Py_INIT_ERR(msg)
* _Py_INIT_USER_ERR(msg): "user" error, don't abort() in that case
* _Py_INIT_FAILED(err)
* Rewrite win_perf_counter() to only use integers internally.
* Add _PyTime_MulDiv() which computes "ticks * mul / div" in two
parts (integer part and remainder) to prevent integer overflow; see
the sketch after this list.
* Clock frequency is checked at initialization for integer overflow.
* Also enhance pymonotonic() to reduce the precision loss on macOS
(mach_absolute_time() clock).
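A minimal sketch of the two-part idea, assuming non-negative inputs (CPython's real _PyTime_MulDiv also deals with signs and rounding):
```c
#include <stdint.h>

/* Compute ticks * mul / div without overflowing the intermediate
   product: split ticks into quotient and remainder first. */
static int64_t
muldiv(int64_t ticks, int64_t mul, int64_t div)
{
    int64_t intpart = ticks / div;
    int64_t remaining = ticks % div;
    /* ticks * mul / div == intpart * mul + remaining * mul / div */
    return intpart * mul + remaining * mul / div;
}
```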
time.clock() and time.perf_counter() now use again C double
internally.
Remove also _PyTime_GetWinPerfCounterWithInfo(): use
_PyTime_GetPerfCounterDoubleWithInfo() instead on Windows.
The concrete PyDict_* API is used to interact with PyInterpreterState.modules in a number of places. This isn't compatible with all dict subclasses, nor with other Mapping implementations. This patch switches the concrete API usage to the corresponding abstract API calls.
We also add a PyImport_GetModule() function (and some other helpers) to reduce a bunch of code duplication.
A bunch of code currently uses PyInterpreterState.modules directly instead of PyImport_GetModuleDict(). This complicates efforts to make changes relative to sys.modules. This patch switches to using PyImport_GetModuleDict() uniformly. Also, a number of related uses of sys.modules are updated for uniformity for the same reason.
Note that this code was already reviewed and merged as part of #1638. I reverted that and am now splitting it up into more focused parts.
PR #1638, for bpo-28411, causes problems in some (very) edge cases. Until that gets sorted out, we're reverting the merge. PR #3506, a fix on top of #1638, is also getting reverted.
* group the (stateful) runtime globals into various topical structs
* consolidate the topical structs under a single top-level _PyRuntimeState struct
* add a check-c-globals.py script that helps identify runtime globals
Other globals are excluded (see globals.txt and check-c-globals.py).
* Rewrite importlib _get_module_lock(): it is now responsible for
holding the imp lock directly.
* _find_and_load() now holds the module lock to check if name is in
sys.modules to prevent a race condition
* bpo-6532: Make the thread id an unsigned integer.
On the C API side, the result type of PyThread_start_new_thread() and
PyThread_get_thread_ident(), the id parameter of
PyThreadState_SetAsyncExc(), and the thread_id field of PyThreadState
changed from "long" to "unsigned long".
* Restore a check in thread_get_ident().
Issue #28915: Replace _PyObject_CallMethodId() with
_PyObject_CallMethodIdObjArgs() in various modules when the format
string was only made of "O" formats (PyObject* arguments).
_PyObject_CallMethodIdObjArgs() avoids the creation of a temporary tuple and
doesn't have to parse a format string.
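A hedged before/after sketch of the substitution (the method name and objects are hypothetical):
```c
#include "Python.h"

static PyObject *
call_update(PyObject *obj, PyObject *a, PyObject *b)
{
    _Py_IDENTIFIER(update);
    /* Before: the "OO" format string is parsed and a temporary
       2-tuple of arguments is built:
       return _PyObject_CallMethodId(obj, &PyId_update, "OO", a, b); */
    /* After: arguments are passed directly, NULL-terminated. */
    return _PyObject_CallMethodIdObjArgs(obj, &PyId_update, a, b, NULL);
}
```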
Issue #28858: The change b9c9691c72c5 introduced a regression. It seems like
_PyObject_CallArg1() uses more stack memory than
PyObject_CallFunctionObjArgs().
* PyObject_CallFunctionObjArgs(func, NULL) => _PyObject_CallNoArg(func)
* PyObject_CallFunctionObjArgs(func, arg, NULL) => _PyObject_CallArg1(func, arg)
PyObject_CallFunctionObjArgs() allocates 40 bytes on the C stack and requires
extra work to "parse" C arguments to build a C array of PyObject*.
_PyObject_CallNoArg() and _PyObject_CallArg1() are simpler and don't allocate
memory on the C stack.
This change is part of the fastcall project. The change on listsort() is
related to the issue #23507.
or builtins for importing submodules or "from import". Fixed a crash
when raising a warning about being unable to resolve the package from
__spec__ or __package__.
* Replace PyUnicode_RPartition() with PyUnicode_FindChar() and
PyUnicode_Substring() to avoid the creation of a temporary tuple.
* Use PyUnicode_FromFormat() to build a string and avoid the single_dot ('.')
singleton
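A hedged sketch of the substitution (the helper and its use are illustrative): find the last '.' with PyUnicode_FindChar() and slice with PyUnicode_Substring(), with no tuple allocated.
```c
/* Illustrative: return everything before the last '.' in name. */
static PyObject *
parent_name(PyObject *name)
{
    Py_ssize_t len = PyUnicode_GET_LENGTH(name);
    /* direction -1: search backwards; returns -1 if absent, -2 on error */
    Py_ssize_t dot = PyUnicode_FindChar(name, '.', 0, len, -1);
    if (dot == -2) {
        return NULL;
    }
    if (dot == -1) {
        return PyUnicode_New(0, 0);  /* no parent: empty string */
    }
    return PyUnicode_Substring(name, 0, dot);
}
```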
Thanks Serhiy Storchaka for your review.
Import now raises ImportError if a relative import is attempted
with no known parent package.
Previously SystemError was raised if the parent package didn't exist
(e.g., __package__ was set to '').
Thanks to Florent Xicluna and Yongzhi Pan for reporting the issue.
In a previous change, __spec__.parent was prioritized over
__package__. That is a backwards-compatibility break, but we do
eventually want __spec__ to be the ground truth for module details. So
this change reverts the change in semantics and instead raises an
ImportWarning when __package__ != __spec__.parent to give people time
to adjust to using spec objects.
Raise ImportWarning when __spec__ or __package__ are
not defined for a relative import.
This is the start of work to try and clean up import semantics to rely
more on a module's spec than on the myriad attributes that get set on
a module. Thanks to Rose Ames for the patch.
Known limitations of the current implementation:
- documentation changes are incomplete
- there's a reference leak I haven't tracked down yet
The leak is most visible by running:
./python -m test -R3:3 test_importlib
However, you can also see it by running:
./python -X showrefcount
Importing the array or _testmultiphase modules, and
then deleting them from both sys.modules and the local
namespace shows significant increases in the total
number of active references each cycle. By contrast,
with _testcapi (which continues to use single-phase
initialisation) the global refcounts stabilise after
a couple of cycles.