Make the Unicode dictionary of interned strings compatible with
subinterpreters.
Remove the INTERN_NAME_STRINGS macro in typeobject.c: names are
now always interned (even if the EXPERIMENTAL_ISOLATED_SUBINTERPRETERS
macro is defined).
_PyUnicode_ClearInterned() now uses PyDict_Next() so that it no longer
allocates memory, which ensures that the interned dictionary is cleared.
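A minimal sketch of the allocation-free pattern this refers to, assuming a dict named "interned"; the loop body is illustrative only, not the actual _PyUnicode_ClearInterned() code:
```
#include <Python.h>

/* PyDict_Next() walks the dict in place and allocates nothing,
   unlike building an intermediate list with PyDict_Keys(). */
static void
clear_interned_sketch(PyObject *interned)
{
    Py_ssize_t pos = 0;
    PyObject *s, *value;

    while (PyDict_Next(interned, &pos, &s, &value)) {
        /* Illustrative: reset each string's interning state here. */
        (void)s;
        (void)value;
    }
    PyDict_Clear(interned);
}
```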
Make the type attribute lookup cache per-interpreter.
Add private _PyType_InitCache() function, called by PyInterpreterState_New().
Continue to share next_version_tag between interpreters, since static
types are still shared by interpreters.
Remove MCACHE macro: the cache is no longer disabled if the
EXPERIMENTAL_ISOLATED_SUBINTERPRETERS macro is defined.
Make _PyUnicode_FromId() function compatible with subinterpreters.
Each interpreter now has an array of identifier objects (interned
strings decoded from UTF-8).
* Add PyInterpreterState.unicode.identifiers: array of identifiers
objects.
* Add _PyRuntimeState.unicode_ids used to allocate unique indexes
to _Py_Identifier.
* Rewrite the _Py_Identifier structure.
Microbenchmark on _PyUnicode_FromId(&PyId_a) with _Py_IDENTIFIER(a):
[ref] 2.42 ns +- 0.00 ns -> [atomic] 3.39 ns +- 0.00 ns: 1.40x slower
This change adds 1 ns per _PyUnicode_FromId() call on average.
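For context, a minimal sketch of how _Py_IDENTIFIER() and _PyUnicode_FromId() are used together; this is the private CPython API measured above, and the attribute name "a" and the helper function are illustrative:
```
#include <Python.h>   /* non-limited API: _Py_IDENTIFIER is private */

static PyObject *
get_attr_a(PyObject *obj)
{
    _Py_IDENTIFIER(a);   /* defines a static _Py_Identifier named PyId_a */

    /* Returns a borrowed reference to the interned string "a",
       or NULL with an exception set on failure. */
    PyObject *name = _PyUnicode_FromId(&PyId_a);
    if (name == NULL) {
        return NULL;
    }
    return PyObject_GetAttr(obj, name);
}
```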
zipimport's _unmarshal_code swallows import errors, so _get_module_code doesn't know the cause of the error and returns the generic, and sometimes incorrect, 'could not find...' message.
Automerge-Triggered-By: GH:brettcannon
* Delete jump instructions that bypass empty blocks
* Add news entry
* Explicitly check for unconditional jump opcodes
Using the is_jump function results in the inclusion of instructions like
returns for which this optimization is not really valid. So, instead
explicitly check that the instruction is an unconditional jump.
* Handle conditional jumps, delete jumps gracefully
* Ensure b_nofallthrough and b_reachable are valid
* Add test for redundant jumps
* Regenerate importlib.h and edit Misc/ACKS
* Fix bad whitespace
* Add _PyAtExit_Call() function and remove pyexitfunc and
pyexitmodule members of PyInterpreterState. The function
logs atexit callback errors using _PyErr_WriteUnraisableMsg().
* Add _PyAtExit_Init() and _PyAtExit_Fini() functions.
* Remove traverse, clear and free functions of the atexit module.
Co-authored-by: Dong-hee Na <donghee.na@python.org>
At Python exit, if a callback registered with atexit.register()
fails, its exception is now logged. Previously, only some exceptions
were logged, and the last exception was always silently ignored.
Add _PyAtExit_Call() function and remove
PyInterpreterState.atexit_func member. call_py_exitfuncs() now calls
directly _PyAtExit_Call().
The atexit module must now always be built as a built-in module.
pymain_run_file() no longer encodes the filename: pass the filename
as an object to the new _PyRun_AnyFileObject() function.
Add new private functions:
* _PyRun_AnyFileObject()
* _PyRun_InteractiveLoopObject()
* _Py_FdIsInteractive()
Fix encoding name when running a ".pyc" file on Windows:
PyRun_SimpleFileExFlags() now uses the correct encoding to decode the
filename.
* Add pyrun_file() subfunction.
* Add pyrun_simple_file() subfunction.
* PyRun_SimpleFileExFlags() now calls _Py_fopen_obj() rather than
_Py_fopen().
Several built-in and standard library types now ensure that their internal result tuples are always tracked by the garbage collector:
- collections.OrderedDict.items
- dict.items
- enumerate
- functools.reduce
- itertools.combinations
- itertools.combinations_with_replacement
- itertools.permutations
- itertools.product
- itertools.zip_longest
- zip
Previously, they could have become untracked by a prior garbage collection.
No longer use deprecated aliases to functions:
* Replace PyObject_MALLOC() with PyObject_Malloc()
* Replace PyObject_REALLOC() with PyObject_Realloc()
* Replace PyObject_FREE() with PyObject_Free()
* Replace PyObject_Del() with PyObject_Free()
* Replace PyObject_DEL() with PyObject_Free()
No longer use deprecated aliases to functions:
* Replace PyMem_MALLOC() with PyMem_Malloc()
* Replace PyMem_REALLOC() with PyMem_Realloc()
* Replace PyMem_FREE() with PyMem_Free()
* Replace PyMem_Del() with PyMem_Free()
* Replace PyMem_DEL() with PyMem_Free()
Modify also the PyMem_DEL() macro to use directly PyMem_Free().
Reduce the memory footprint and improve the performance of loading modules that have many function annotations.
>>> sys.getsizeof({"a":"int","b":"int","return":"int"})
232
>>> sys.getsizeof(("a","int","b","int","return","int"))
88
The tuple is converted into a dict on the fly the first time `func.__annotations__` is accessed.
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Co-authored-by: Inada Naoki <songofacandy@gmail.com>
Use @staticmethod on methods that use @classmethod but don't use their
cls parameter, in the following classes:
* BuiltinImporter
* FrozenImporter
* WindowsRegistryFinder
* PathFinder
Leave methods using @_requires_builtin or @_requires_frozen unchanged,
since these decorators require the wrapped method to have an extra
parameter (cls or self).
Simplify the importlib external bootstrap code:
importlib._bootstrap_external now uses regular imports to import
builtin modules. When it is imported, the builtin __import__()
function is already fully working and so can be used to import
builtin modules like sys.
Convert the _imp extension module to the multi-phase initialization
API (PEP 489).
* Add _PyImport_BootstrapImp() which fixes a bootstrap issue: importing
  the _imp module before importlib is initialized.
* Add create_builtin() sub-function, used by _imp_create_builtin().
* Initialize PyInterpreterState.import_func earlier, in
pycore_init_builtins().
* Remove references to _PyImport_Cleanup(). This function has been
renamed to finalize_modules() and moved to pylifecycle.c.
Remove the undocumented PyOS_InitInterrupts() C function.
* Rename PyOS_InitInterrupts() to _PySignal_Init(). It now installs
other signal handlers, not only SIGINT.
* Rename PyOS_FiniInterrupts() to _PySignal_Fini()
The time.time(), time.perf_counter() and time.monotonic() functions can
no longer fail with a Python fatal error; instead, they raise a regular
Python exception on failure.
Remove _PyTime_Init(): don't check system, monotonic and perf counter
clocks at startup anymore.
On error, _PyTime_GetSystemClock(), _PyTime_GetMonotonicClock() and
_PyTime_GetPerfCounter() now silently ignore the error and return 0.
They cannot fail with a Python fatal error anymore.
Add py_mach_timebase_info() and win_perf_counter_frequency()
sub-functions.
time.perf_counter() on Windows and time.monotonic() on macOS are now
system-wide. Previously, they used an offset computed at startup to
reduce the precision loss caused by the float type. Use
time.perf_counter_ns() and time.monotonic_ns() added in Python 3.7 to
avoid this precision loss.
On Windows, fix a regression in signal handling which prevented
interrupting a program with CTRL+C. The signal handler can run in a
thread other than the Python thread, in which case the test deciding
whether the thread can handle signals is wrong.
On Windows, _PyEval_SignalReceived() now always sets eval_breaker to
1 since it cannot test _Py_ThreadCanHandleSignals(), and
eval_frame_handle_pending() always calls
_Py_ThreadCanHandleSignals() to recompute eval_breaker.
* Call _PyTime_Init() and _PyWarnings_InitState() earlier during the
Python initialization.
* Inline _PyImportHooks_Init() into _PySys_InitCore().
* The _warnings initialization function no longer calls
  _PyWarnings_InitState() to prevent resetting filters_version to 0.
* _PyWarnings_InitState() now returns an int and no longer clears the
  state in case of error (it's done anyway at Python exit).
* Rework init_importlib(), fix refleaks on errors.
Fix _PyConfig_Read() if compute_path_config=0: use values set by
Py_SetPath(), Py_SetPythonHome() and Py_SetProgramName(). Add
compute_path_config parameter to _PyConfig_InitPathConfig().
The following functions now return NULL if called before
Py_Initialize():
* Py_GetExecPrefix()
* Py_GetPath()
* Py_GetPrefix()
* Py_GetProgramFullPath()
* Py_GetProgramName()
* Py_GetPythonHome()
These functions no longer automatically compute the Python Path
Configuration. Moreover, Py_SetPath() no longer computes
program_full_path.
The path configuration is now computed in the "main" initialization.
The core initialization no longer computes it.
* Add _PyConfig_Read() function to read the configuration without
computing the path configuration.
* pyinit_core() no longer computes the path configuration: it is now
computed by init_interp_main().
* The path configuration output members of PyConfig are now optional:
* executable
* base_executable
* prefix
* base_prefix
* exec_prefix
* base_exec_prefix
* _PySys_UpdateConfig() now skips NULL strings in PyConfig.
* _testembed: Rename test_set_config() to test_init_set_config() for
consistency with other tests.
Co-authored-by: Lawrence D’Anna <lawrence_danna@apple.com>
* Add support for macOS 11 and Apple Silicon (aka arm64)
As a side effect of this work use the system copy of libffi on macOS, and remove the vendored copy
* Support building on recent versions of macOS while deploying to older versions
This allows building installers on macOS 11 while still supporting macOS 10.9.
* The AST optimiser wasn't descending into named expressions, so
any constant subexpressions weren't being folded at compile time
* Remove "default:" clauses inside the AST optimiser code to reduce the
risk of similar bugs passing unnoticed in future compiler changes
The PyConfig_Read() function now only parses PyConfig.argv arguments
once: PyConfig.parse_argv is set to 2 after arguments are parsed.
Since Python arguments are stripped from PyConfig.argv, parsing
arguments twice would parse the application options as Python
options.
* Rework the PyConfig documentation.
* Fix _testinternalcapi.set_config() error handling.
* SetConfigTests no longer needs parse_argv=0 when restoring the old
configuration.
* Inline _PyInterpreterState_SetConfig(): replace it with
_PyConfig_Copy().
* Add _PyErr_SetFromPyStatus()
* Add _PyInterpreterState_GetConfigCopy()
* Add a new _PyInterpreterState_SetConfig() function.
* Add a unit test which gets, modifies, and sets the config.
When Py_Initialize() is called twice, the second call now updates
more sys attributes for the configuration, rather than only sys.argv.
* Rename _PySys_InitMain() to _PySys_UpdateConfig().
* _PySys_UpdateConfig() now modifies sys.flags in-place, instead of
creating a new flags object.
* Remove old commented sys.flags flags (unbuffered and skip_first).
* Add private _PySys_GetObject() function.
* When Py_Initialize(), Py_InitializeFromConfig() and
Replace PyModule_AddObject() with PyModule_AddObjectRef() in the
_warnings module to fix a reference leak on error.
Use also PyModule_AddObjectRef() in importdl.c.
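A minimal sketch of why PyModule_AddObjectRef() avoids the leak: it does not steal a reference, so the caller releases its own reference on both the success and error paths. The "version" attribute and helper below are just an example:
```
#include <Python.h>

static int
add_version(PyObject *module)
{
    PyObject *v = PyUnicode_FromString("1.0");   /* example value */
    if (v == NULL) {
        return -1;
    }
    /* PyModule_AddObjectRef() never steals the reference, so a single
       Py_DECREF() is correct whether the call succeeded or failed. */
    int res = PyModule_AddObjectRef(module, "version", v);
    Py_DECREF(v);
    return res;   /* 0 on success, -1 on error */
}
```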
Call _PyAST_Fini() on all interpreters, not only on the main
interpreter. Also, call it earlier to fix a reference leak.
Python types contain a reference to themselves in their
PyTypeObject.tp_mro member. _PyAST_Fini() must be called before the last
GC collection to destroy AST types.
_PyInterpreterState_Clear() now calls _PyAST_Fini(). It now also
calls _PyWarnings_Fini() on subinterpreters, not only on the main
interpreter.
Add an assertion in AST init_types() to ensure that the _ast module
is no longer used after _PyAST_Fini() has been called.
The logging.FileHandler class now keeps a reference to the builtin
open() function to be able to open or reopen the file during Python
finalization.
Fix errors like:
Exception ignored in: (...)
Traceback (most recent call last):
(...)
File ".../logging/__init__.py", line 1463, in error
File ".../logging/__init__.py", line 1577, in _log
File ".../logging/__init__.py", line 1587, in handle
File ".../logging/__init__.py", line 1649, in callHandlers
File ".../logging/__init__.py", line 948, in handle
File ".../logging/__init__.py", line 1182, in emit
File ".../logging/__init__.py", line 1171, in _open
NameError: name 'open' is not defined
The ast module internal state is now per interpreter.
* Rename "astmodulestate" to "struct ast_state"
* Add pycore_ast.h internal header: the ast_state structure is now
declared in pycore_ast.h.
* Add PyInterpreterState.ast (struct ast_state)
* Remove get_ast_state()
* Rename get_global_ast_state() to get_ast_state()
* PyAST_obj2mod() now handles get_ast_state() failures
Enhance the documentation of the Python startup, filesystem encoding
and error handling, locale encoding. Add a new "Python UTF-8 Mode"
section.
* Add "locale encoding" and "filesystem encoding and error handler"
to the glossary
* Remove documentation from Include/cpython/initconfig.h: move it to
Doc/c-api/init_config.rst.
* Doc/c-api/init_config.rst:
* Document command line options and environment variables
* Document default values.
* Add a new "Python UTF-8 Mode" section in Doc/library/os.rst.
* Add warnings to Py_DecodeLocale() and Py_EncodeLocale() docs.
* Document how Python selects the filesystem encoding and error
handler at a single place: PyConfig.filesystem_encoding and
PyConfig.filesystem_errors.
* PyConfig: move the orig_argv member to the right place.
This adds a new function named sys._current_exceptions() which is equivalent to
sys._current_frames() except that it returns the exceptions currently handled
by other threads. It is equivalent to calling sys.exc_info() for each running
thread.
If the nl_langinfo(CODESET) function returns an empty string, Python
now uses UTF-8 as the filesystem encoding.
In May 2010 (commit b744ba1d14), I
modified Python to log a warning and use UTF-8 as the filesystem
encoding (instead of None) if nl_langinfo(CODESET) returns an empty
string.
In August 2020 (commit 94908bbc15), I
modified Python startup to fail with a fatal error and a specific
error message if nl_langinfo(CODESET) returns an empty string. The
intent was to prevent guessing the encoding and also investigate user
configuration where this case happens.
In 10 years (2010 to 2020), I saw zero user report about the error
message related to nl_langinfo(CODESET) returning an empty string.
Today, UTF-8 has become the de facto standard, and it's safe to assume
that the user expects UTF-8. For example,
nl_langinfo(CODESET) can return an empty string on macOS if the
LC_CTYPE locale is not supported, and UTF-8 is the default encoding
on macOS.
While this change is unlikely to affect anyone in practice, it
should make UTF-8 lovers happy ;-)
Rewrite also the documentation explaining how Python selects the
filesystem encoding and error handler.
* Rename _Py_GetLocaleEncoding() to _Py_GetLocaleEncodingObject()
* Add _Py_GetLocaleEncoding() which returns a wchar_t* string to
share code between _Py_GetLocaleEncodingObject()
and config_get_locale_encoding().
* _Py_GetLocaleEncodingObject() now decodes nl_langinfo(CODESET)
from the current locale encoding with surrogateescape,
rather than using UTF-8.
_io.TextIOWrapper no longer calls getpreferredencoding(False) of
_bootlocale to get the locale encoding, but calls
_Py_GetLocaleEncoding() instead.
Add config_get_fs_encoding() sub-function. Reorganize also
config_get_locale_encoding() code.
The last GC collection is now done before clearing builtins and sys
dictionaries. Add also assertions to ensure that gc.collect() is no
longer called after _PyGC_Fini().
Pass also the tstate to PyInterpreterState_Clear() to pass the
correct tstate to _PyGC_CollectNoFail() and _PyGC_Fini().
Move private _PyGC_CollectNoFail() to the internal C API.
Remove the private _PyGC_CollectIfEnabled() which was just an alias
to the public PyGC_Collect() function since Python 3.8.
Rename functions:
* collect() => gc_collect_main()
* collect_with_callback() => gc_collect_with_callback()
* collect_generations() => gc_collect_generations()
* UCD_Check() uses PyModule_Check()
* Simplify the internal _PyUnicode_Name_CAPI structure:
* Remove size and state members
* Remove state and self parameters of getcode() and getname()
functions
* Remove global_module_state
The private _PyUnicode_Name_CAPI structure of the PyCapsule API
unicodedata.ucnhash_CAPI moves to the internal C API. Moreover, the
structure gets a new state member which must be passed to the
getcode() and getname() functions.
* Move Include/ucnhash.h to Include/internal/pycore_ucnhash.h
* unicodedata module is now built with Py_BUILD_CORE_MODULE.
* unicodedata: move hashAPI variable into unicodedata_module_state.
If PyDict_GetItemWithError is only used to check whether the key is in dict,
it is better to use PyDict_Contains instead.
And if it is used in combination with PyDict_SetItem, PyDict_SetDefault can
replace the combination.
These functions are considered unsafe because they suppress all internal errors
and can return a wrong result. PyDict_GetItemString and _PyDict_GetItemId can
also silence the current exception in rare cases. A sketch of the safer pattern
follows this entry.
Remove no longer used _PyDict_GetItemId.
Add _PyDict_ContainsId and rename _PyDict_Contains into
_PyDict_Contains_KnownHash.
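A minimal sketch of the safer membership test mentioned above, assuming a helper that just reports whether a key is present:
```
#include <Python.h>

static PyObject *
has_key(PyObject *dict, PyObject *key)
{
    /* PyDict_Contains() returns 1, 0, or -1 with an exception set:
       errors are propagated instead of being silently swallowed. */
    int res = PyDict_Contains(dict, key);
    if (res < 0) {
        return NULL;
    }
    return PyBool_FromLong(res);
}
```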
Since c19c5a6, AIX builds have defaulted to using dynload_shlib over
dynload_aix when dlopen is available. This function has been available
since AIX 4.3, which went out of support in 2003, the same year the
previously referenced commit was made. It has been nearly 20 years
since a version of AIX has been supported which has not used
dynload_shlib so there's no reason to keep this legacy code around.
When running in a non-UTF-8 locale, if an error occurs while importing a
native Python module (say because a dependent shared library is missing),
the error message string returned may contain non-ASCII code points
causing a UnicodeDecodeError.
PyUnicode_DecodeFSDefault is used for buffers which may contain
filesystem paths. For consistency with os.strerror(),
PyUnicode_DecodeLocale is used for buffers which contain system error
messages. While the shortname parameter is always encoded in ASCII
according to PEP 489, it is left decoded using PyUnicode_FromString to
minimize the changes and since it should not affect the decoding (albeit
_potentially_ slower).
In dynload_hpux, since the error buffer contains a message generated
from a static ASCII string and the module filesystem path,
PyUnicode_DecodeFSDefault is used instead of PyUnicode_DecodeLocale as
is used elsewhere.
* bpo-41894: Fix bugs in dynload error msg handling
For both dynload_aix and dynload_hpux, properly handle the possibility
that decoding strings may return NULL and when such an error happens,
properly decrement any previously decoded strings and return early.
In addition, in dynload_aix, ensure that we pass the decoded string
*object* pathname_ob to PyErr_SetImportError instead of the original
pathname buffer.
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
All references to this dynamic loading method were removed in b9949db,
when support for this method was dropped, but the implementation code
was not dropped (seemingly in oversight).
This API is relatively lightweight and organizationally, given that it's
used by multiple modules, it makes sense to move it to fileutils.
Requires making sure that _posixsubprocess is compiled with the appropriate
Py_BUILD_CORE_BUILTIN macro.
* PyMapping_HasKey() is not safe because it silences all exceptions and can return an incorrect result.
* Informative exceptions from PyMapping_DelItem() are overridden with RuntimeError and
the original exception raised before calling remove_module() is lost.
* There is a race condition between PyMapping_HasKey() and PyMapping_DelItem().
- Use the proper asdl sequence when creating empty arguments
- Remove redundant casts (thanks to new typed asdl_sequences)
- Remove MarshalPrototypeVisitor and some utilities from asdl generator
- Fix the header of `Python/ast.c` (kept from pgen times)
Automerge-Triggered-By: @pablogsal
The hard part was making all the tests pass; there are some subtle issues here, because apparently the future import wasn't tested very thoroughly in previous Python versions.
For example, `inspect.signature()` returned type objects normally (except for forward references), but strings with the future import. We changed it to try and return type objects by calling `typing.get_type_hints()`, but fall back on returning strings if that function fails (which it may do if there are future references in the annotations that require passing in a specific namespace to resolve).
Remove the global _Py_CheckRecursionLimit variable: it has been
replaced by ceval.recursion_limit of the PyInterpreterState
structure.
There is no need to keep the variable for the stable ABI, since
Py_EnterRecursiveCall() and Py_LeaveRecursiveCall() were not usable
in Python 3.8 and older: these macros accessed PyThreadState members,
whereas the PyThreadState structure is opaque in the limited C API.
The new API allows efficiently sending values into native generators
and coroutines, avoiding the use of StopIteration exceptions to signal
returns.
ceval loop now uses this method instead of the old "private"
_PyGen_Send C API. This translates to 1.6x increased performance
of 'await' calls in micro-benchmarks.
Aside from CPython core improvements, this new API will also allow
Cython to generate more efficient code, benefiting high-performance
IO libraries like uvloop.
* Add new capability to the PEG parser to type variable assignments. For instance:
```
| a[asdl_stmt_seq*]=';'.small_stmt+ [';'] NEWLINE { a }
```
* Add new sequence types from the asdl definition (automatically generated)
* Make `asdl_seq` type a generic aliasing pointer type.
* Create a new `asdl_generic_seq` for the generic case using `void*`.
* The old `asdl_seq_GET`/`asdl_seq_SET` macros are now typed.
* New `asdl_seq_GET_UNTYPED`/`asdl_seq_SET_UNTYPED` macros for dealing with generic sequences.
* Changes all possible `asdl_seq` types to use specific versions everywhere.
Partially revert commit ac46eb4ad6662cf6d771b20d8963658b2186c48c:
"bpo-38113: Update the Python-ast.c generator to PEP384 (gh-15957)".
Using a module state per module instance is causing subtle practical
problems.
For example, the Mercurial project replaces the __import__() function
to implement lazy import, whereas Python expected that "import _ast"
would always return a fully initialized _ast module.
Add _PyAST_Fini() to clear the state at exit.
The _ast module has no state (set _astmodule.m_size to 0). Remove
astmodule_traverse(), astmodule_clear() and astmodule_free()
functions.
Fix GCC 9.3 (using -O3) warnings on x86:
initconfig.c: In function ‘init_dump_ascii_wstr’:
initconfig.c:2679:34: warning: format ‘%lc’ expects argument of type
‘wint_t’, but argument 2 has type ‘wchar_t’ {aka ‘long int’}
2679 | PySys_WriteStderr("%lc", ch);
initconfig.c:2682:38: warning: format ‘%x’ expects argument of type
‘unsigned int’, but argument 2 has type ‘wchar_t’ {aka ‘long int’}
2682 | PySys_WriteStderr("\\x%02x", ch);
initconfig.c:2686:38: warning: format ‘%x’ expects argument of type
‘unsigned int’, but argument 2 has type ‘wchar_t’ {aka ‘long int’}
2686 | PySys_WriteStderr("\\U%08x", ch);
initconfig.c:2690:38: warning: format ‘%x’ expects argument of type
‘unsigned int’, but argument 2 has type ‘wchar_t’ {aka ‘long int’}
2690 | PySys_WriteStderr("\\u%04x", ch);
* bpo-41524: fix pointer bug in PyOS_mystr{n}icmp
The existing implementations of PyOS_mystrnicmp and PyOS_mystricmp
can increment pointers beyond the end of a string.
This commit fixes those cases by moving the mutation out of the condition.
* 📜🤖 Added by blurb_it.
* Address comments
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
* Merge gen and frame state variables into one.
* Replace stack pointer with depth in PyFrameObject. Makes code easier to read and saves a word of memory.
Fix a crash in the _ast module: it can no longer be loaded more than
once. It now uses a global state rather than a module state.
* Move _ast module state: use a global state instead.
* Set _astmodule.m_size to -1, so the extension cannot be loaded more
than once.
Rework asdl_c.py to pass the module state to functions in
Python-ast.c, instead of using astmodulestate_global.
Handle also PyState_AddModule() failure in init_types().
Add sys.orig_argv attribute: the list of the original command line
arguments passed to the Python executable.
Rename also PyConfig._orig_argv to PyConfig.orig_argv and
document it.
This commit changes the parsing of f-string expressions with the new parser. The parser gets pre-fed with the location of the expression itself (not the f-string, which was what we were doing before). This allows us to completely skip the shifting of the AST nodes after the parsing is completed.
Always create the empty bytes string singleton.
Optimize PyBytes_FromStringAndSize(str, 0): it no longer has to check
if the empty string singleton was created or not, it is always
available.
Add functions:
* _PyBytes_Init()
* bytes_get_empty(), bytes_new_empty()
* bytes_create_empty_string_singleton()
* unicode_create_empty_string_singleton()
_Py_unicode_state: rename empty structure member to empty_string.
Py_InitializeFromConfig() now always creates the empty tuple
singleton as soon as possible.
Optimize PyTuple_New(0): it no longer has to check whether the empty
tuple was created or not, since it is always created.
* Add tuple_create_empty_tuple_singleton() function.
* Add tuple_get_empty() function.
* Remove state parameter of tuple_alloc().
Each interpreter now has its own MemoryError free list: it is no
longer shared by all interpreters.
Add _Py_exc_state structure and PyInterpreterState.exc_state member.
Move also errnomap into _Py_exc_state.
* Revert "bpo-40521: Make the empty frozenset per interpreter (GH-21068)"
This reverts commit 261cfedf76.
* bpo-40521: Empty frozensets are no longer singletons
* Complete the removal of the frozenset singleton
Each interpreter now has its own empty bytes string and single byte
character singletons.
Replace STRINGLIB_EMPTY macro with STRINGLIB_GET_EMPTY() macro.
Each interpreter now has its own dict free list:
* Move dict free lists into PyInterpreterState.
* Move PyDict_MAXFREELIST define to pycore_interp.h
* Add _Py_dict_state structure.
* Add tstate parameter to _PyDict_ClearFreeList() and _PyDict_Fini().
* In debug mode, ensure that the dict free lists are not used after
_PyDict_Fini() is called.
* Remove "#ifdef EXPERIMENTAL_ISOLATED_SUBINTERPRETERS".
zip() now supports PEP 618's strict parameter, which raises a
ValueError if the arguments are exhausted at different lengths.
Patch by Brandt Bucher.
Co-authored-by: Brandt Bucher <brandtbucher@gmail.com>
Co-authored-by: Ram Rachum <ram@rachum.com>
The PY_SSIZE_T_CLEAN macro must now be defined to use
PyArg_ParseTuple() and Py_BuildValue() "#" formats: "es#", "et#",
"s#", "u#", "y#", "z#", "U#" and "Z#". See the PEP 353.
Update _testcapi.test_buildvalue_issue38913().
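A minimal sketch of the required pattern, assuming an extension function that just returns the length of a bytes argument:
```
#define PY_SSIZE_T_CLEAN      /* must come before including Python.h */
#include <Python.h>

static PyObject *
byte_count(PyObject *self, PyObject *args)
{
    const char *data;
    Py_ssize_t size;          /* with PY_SSIZE_T_CLEAN, "#" fills a Py_ssize_t */

    if (!PyArg_ParseTuple(args, "y#", &data, &size)) {
        return NULL;
    }
    return PyLong_FromSsize_t(size);
}
```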
When a file ends with a line that contains a line continuation character,
the text of the emitted SyntaxError is empty, contrary to the old
parser, where the error text contained the text of the last line.
The C99 functions snprintf() and vsnprintf() are now required
to build Python.
PyOS_snprintf() and PyOS_vsnprintf() no longer call Py_FatalError().
Previously, they called Py_FatalError() on a buffer overflow on platforms
which don't provide vsnprintf().
On Windows, #include "pyerrors.h" no longer defines "snprintf" and
"vsnprintf" macros.
PyOS_snprintf() and PyOS_vsnprintf() should be used to get portable
behavior.
Replace snprintf() calls with PyOS_snprintf() and replace vsnprintf()
calls with PyOS_vsnprintf().
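A minimal sketch of the portable call, assuming a caller-provided buffer; PyOS_snprintf() always NUL-terminates (for size > 0) and never writes past `size` bytes:
```
#include <Python.h>

static void
format_message(char *buf, size_t size, const char *name, long value)
{
    int n = PyOS_snprintf(buf, size, "%s=%ld", name, value);
    if (n < 0 || (size_t)n >= size) {
        /* output error or truncation; buf is still NUL-terminated */
    }
}
```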
In GH-2866, _Py_Bit_Length() was added to pymath.h for lack of a better
location. GH-20518 added a more appropriate header file for bit utilities. It
also shows how to properly use intrinsics. This allows reconsidering bpo-29782.
* Move the function to the new header.
* Changed return type to match __builtin_clzl() and reviewed usage.
* Use intrinsics where available.
* Pick a fallback implementation suitable for inlining.
This commit removes the old parser, the deprecated parser module, the old parser compatibility flags and environment variables and all associated support code and documentation.
The PEP 353, written in 2005, introduced PY_FORMAT_SIZE_T. Python no
longer supports macOS 10.4 and Visual Studio 2010, but requires more
recent macOS and Visual Studio versions. In 2020 with Python 3.10, it
is now safe to use directly "%zu" to format size_t and "%zi" to
format Py_ssize_t.
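A minimal sketch of the simplification, using a hypothetical error helper; Python's own formatting functions such as PyErr_Format() accept "%zd" and "%zu" directly:
```
#include <Python.h>

static PyObject *
length_error(size_t expected, Py_ssize_t got)
{
    /* No PY_FORMAT_SIZE_T needed: "%zu" formats size_t, "%zd" formats Py_ssize_t. */
    return PyErr_Format(PyExc_ValueError,
                        "expected %zu bytes, got %zd", expected, got);
}
```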
Export explicitly the Py_GetArgcArgv() function to the C API and
document the function. Previously, it was exported implicitly which
no longer works since Python is built with -fvisibility=hidden.
* Add PyConfig._orig_argv member.
* Py_InitializeFromConfig() no longer calls _PyConfig_Write() twice.
* PyConfig_Read() no longer initializes Py_GetArgcArgv(): it is now
_PyConfig_Write() responsibility.
* _PyConfig_Write() result type becomes PyStatus instead of void.
* Write a unit test on Py_GetArgcArgv().
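A minimal sketch of calling the now-exported function after initialization; the dump helper is illustrative:
```
#include <Python.h>
#include <stdio.h>

static void
dump_original_argv(void)
{
    int argc;
    wchar_t **argv;

    /* Returns the original command line, before Python removed its own options. */
    Py_GetArgcArgv(&argc, &argv);
    for (int i = 0; i < argc; i++) {
        printf("argv[%d] = %ls\n", i, argv[i]);
    }
}
```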
* Rename pycore_byteswap.h to pycore_bitutils.h.
* Move popcount_digit() to pycore_bitutils.h as _Py_popcount32().
* _Py_popcount32() uses GCC and clang builtin function if available.
* Add unit tests to _Py_popcount32().
* Provide native .files support on SourceFileLoader.
* Add native importlib.resources.files() support to zipimporter. Remove fallback support.
* make regen-all
* 📜🤖 Added by blurb_it.
* Move 'files' into the ResourceReader so it can carry the relevant module name context.
* Create 'importlib.readers' module and add FileReader to it.
* Add zip reader and rely on it for a TraversableResources object on zipimporter.
* Remove TraversableAdapter, no longer needed.
* Update blurb.
* Replace backslashes with forward slashes.
* Incorporate changes from importlib_metadata 2.0, finalizing the interface for extension via get_resource_reader.
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
In debug mode, ensure that free lists are no longer used after being
finalized. Set numfree to -1 in finalization functions
(e.g. _PyList_Fini()), and then check that numfree is not equal to -1
before using a free list (e.g. list_dealloc()).
Each interpreter now has its own context free list:
* Move context free list into PyInterpreterState.
* Add _Py_context_state structure.
* Add tstate parameter to _PyContext_ClearFreeList()
and _PyContext_Fini().
* Pass tstate to clear_freelists().
Each interpreter now has its own asynchronous generator free lists:
* Move async gen free lists into PyInterpreterState.
* Move _PyAsyncGen_MAXFREELIST define to pycore_interp.h
* Add _Py_async_gen_state structure.
* Add tstate parameter to _PyAsyncGen_ClearFreeLists
and _PyAsyncGen_Fini().
Each interpreter now has its own list free list:
* Move list numfree and free_list into PyInterpreterState.
* Add _Py_list_state structure.
* Add tstate parameter to _PyList_ClearFreeList()
and _PyList_Fini().
* Remove "#ifdef EXPERIMENTAL_ISOLATED_SUBINTERPRETERS".
* _PyGC_Fini() clears gcstate->garbage list which can be stored in
the list free list. Call _PyGC_Fini() before _PyList_Fini() to
prevent leaking this list.
Each interpreter now has its own frame free list:
* Move frame free list into PyInterpreterState.
* Add _Py_frame_state structure.
* Add tstate parameter to _PyFrame_ClearFreeList()
and _PyFrame_Fini().
* Remove "#if PyFrame_MAXFREELIST > 0".
* Remove "#ifdef EXPERIMENTAL_ISOLATED_SUBINTERPRETERS".
Each interpreter now has its own float free list:
* Move float numfree and free_list into PyInterpreterState.
* Add _Py_float_state structure.
* Add tstate parameter to _PyFloat_ClearFreeList()
and _PyFloat_Fini().
Each interpreter now has its own tuple free lists:
* Move tuple numfree and free_list arrays into PyInterpreterState.
* Define PyTuple_MAXSAVESIZE and PyTuple_MAXFREELIST macros in
pycore_interp.h.
* Add _Py_tuple_state structure. Pass it explicitly to tuple_alloc().
* Add tstate parameter to _PyTuple_ClearFreeList()
* Each interpreter now has its own empty tuple singleton.
If name is NULL, name is now set to co->co_name.
If qualname is NULL, qualname is now set to name.
qualname must not be NULL: it is used to build error messages.
Cleanup also the code: declare variables where they are initialized.
Rename "name" local variables to "varname" to avoid overriding "name"
parameter.
Since _PyImport_ReInitLock() now calls _PyThread_at_fork_reinit() on
the import lock, the lock is now in a known state: unlocked. It
became safe to acquire it after fork.
PyOS_AfterFork_Child() helper functions now return a PyStatus:
PyOS_AfterFork_Child() is now responsible for handling errors.
* Move _PySignal_AfterFork() to the internal C API
* Add #ifdef HAVE_FORK on _PyGILState_Reinit(), _PySignal_AfterFork()
and _PyInterpreterState_DeleteExceptMain().
* Fix failure of _Py_dg_dtoa to remove trailing zeros
* Add regression test and news entry
* Add explanation about why it's safe to strip trailing zeros
* Make code safer, clean up comments, add change note at top of file
* Nitpick: avoid implicit int-to-float conversion in tests
Previously, the result could have been an instance of a subclass of int.
Also revert bpo-26202 and make the start, stop and step attributes of the
range object have exact type int.
Add a private function _PyNumber_Index() which preserves the old behavior
of PyNumber_Index(); for performance, it is used in conversion functions
like PyLong_AsLong().
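A minimal sketch of the new contract, assuming a helper that needs an exact int; the assert reflects the behavior described above:
```
#include <Python.h>
#include <assert.h>

static PyObject *
exact_index(PyObject *obj)
{
    PyObject *index = PyNumber_Index(obj);   /* new reference, or NULL on error */
    if (index == NULL) {
        return NULL;
    }
    assert(PyLong_CheckExact(index));        /* always an exact int now */
    return index;
}
```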
This updates _PyErr_ChainStackItem() to use _PyErr_SetObject()
instead of _PyErr_ChainExceptions(). This prevents a hang in
certain circumstances because _PyErr_SetObject() performs checks
to prevent cycles in the exception context chain while
_PyErr_ChainExceptions() doesn't.
When an asyncio.Task is cancelled, the exception traceback now
starts with where the task was first interrupted. Previously,
the traceback only had "depth one."
Clarify the zip built-in docstring.
This puts much simpler text up front along with an example.
As it was, the zip built-in docstring was technically correct. But too
technical for the reader who shouldn't _need_ to know about `__next__` and
`StopIteration` as most people do not need to understand the internal
implementation details of the iterator protocol in their daily life.
This is a documentation only change, intended to be backported to 3.8; it is
only tangentially related to PEP-618 which might offer new behavior options
in the future.
Wording based a bit more on enumerate per Brandt's suggestion.
This gets rid of the legacy wording paragraph which seems too tied to
implementation details of the iterator protocol which isn't relevant here.
Co-authored-by: Brandt Bucher <brandtbucher@gmail.com>
This fixes both the traceback.py module and the C code for formatting syntax errors (in Python/pythonrun.c). They now both consistently do the following:
- Suppress caret if it points left of text
- Allow caret pointing just past end of line
- If caret points past end of line, clip to *just* past end of line
The syntax error formatting code in traceback.py was mostly rewritten; small, subtle changes were applied to the C code in pythonrun.c.
There's still a difference when the text contains embedded newlines. Neither handles these very well, and I don't think the case occurs in practice.
Automerge-Triggered-By: @gvanrossum
_Py_hashtable_get_entry_ptr() avoids comparing the entry hash:
it compares keys directly.
Move _Py_hashtable_get_entry_ptr() just after
_Py_hashtable_get_entry_generic().
_Py_hashtable_t values become regular "void *" pointers.
* Add _Py_hashtable_entry_t.data member
* Remove _Py_hashtable_t.data_size member
* Remove _Py_hashtable_t.get_func member. It is no longer needed
to specialize _Py_hashtable_get() for a specific value size, since
all entries now have the same size (void*).
* Remove the following macros:
* _Py_HASHTABLE_GET()
* _Py_HASHTABLE_SET()
* _Py_HASHTABLE_SET_NODATA()
* _Py_HASHTABLE_POP()
* Rename _Py_hashtable_pop() to _Py_hashtable_steal()
* _Py_hashtable_foreach() callback now gets key and value rather than
entry.
* Remove _Py_hashtable_value_destroy_func type. value_destroy_func
callback now only has a single parameter: data (void*).
Rewrite _tracemalloc to store "trace_t*" rather than directly
"trace_t" in traces hash tables. Traces are now allocated on the heap
memory, outside the hash table.
Add tracemalloc_copy_traces() and tracemalloc_copy_domains() helper
functions.
Remove _Py_hashtable_copy() function since there is no API to copy a
key or a value.
Remove also the _Py_hashtable_delete() function which was commented out.
Rewrite _Py_hashtable_t type to always store the key as
a "const void *" pointer. Add an explicit "key" member to
_Py_hashtable_entry_t.
Remove _Py_hashtable_t.key_size member.
hash and compare functions drop their hash table parameter, and their
'key' parameter type becomes "const void *".
Add a new _Py_HashPointerRaw() function which avoids replacing -1
with -2, to micro-optimize hash tables using pointer keys with the
_Py_hashtable_hash_ptr() hash function.
Optimize _Py_hashtable_get() and _Py_hashtable_get_entry() for
pointer keys:
* key_size == sizeof(void*)
* hash_func == _Py_hashtable_hash_ptr
* compare_func == _Py_hashtable_compare_direct
Changes:
* Add get_func and get_entry_func members to _Py_hashtable_t
* Convert _Py_hashtable_get() and _Py_hashtable_get_entry() functions
to static inline functions.
* Add specialized get and get entry for pointer keys.
_Py_hashtable_new() now uses PyMem_Malloc/PyMem_Free allocator by
default, rather than PyMem_RawMalloc/PyMem_RawFree.
PyMem_Malloc is faster than PyMem_RawMalloc for memory blocks smaller
than or equal to 512 bytes.
* Move Modules/hashtable.h to Include/internal/pycore_hashtable.h
* Move Modules/hashtable.c to Python/hashtable.c
* Python is now linked to hashtable.c. _tracemalloc is no longer
linked to hashtable.c. Previously, marshal.c got hashtable.c via
_tracemalloc.c which is built as a builtin module.
In the experimental isolated subinterpreters build mode, the GIL is
now per-interpreter.
Move gil from _PyRuntimeState.ceval to PyInterpreterState.ceval.
new_interpreter() always gets the config from the main interpreter.
In the experimental isolated subinterpreters build mode,
_PyThreadState_GET() gets the autoTSSkey variable and
_PyThreadState_Swap() sets the autoTSSkey variable.
* Add _PyThreadState_GetTSS()
* _PyRuntimeState_GetThreadState() and _PyThreadState_GET()
return _PyThreadState_GetTSS()
* PyEval_SaveThread() sets the autoTSSkey variable to current Python
thread state rather than NULL.
* eval_frame_handle_pending() doesn't check that
_PyThreadState_Swap() result is NULL.
* _PyThreadState_Swap() gets the current Python thread state with
_PyThreadState_GetTSS() rather than
_PyRuntimeGILState_GetThreadState().
* PyGILState_Ensure() no longer checks _PyEval_ThreadsInitialized()
since it cannot access the current interpreter.
_PyErr_ChainExceptions() now ensures that the first parameter is an
exception type, as done by _PyErr_SetObject().
* The following functions now check PyExceptionInstance_Check() in an
  assertion, using a new _PyBaseExceptionObject_cast() helper
  function:
* PyException_GetTraceback(), PyException_SetTraceback()
* PyException_GetCause(), PyException_SetCause()
* PyException_GetContext(), PyException_SetContext()
* PyExceptionClass_Name() now checks PyExceptionClass_Check() with an
assertion.
* Remove XXX comment and add gi_exc_state variable to _gen_throw().
* Remove comment from test_generators
Move recursion_limit member from _PyRuntimeState.ceval to
PyInterpreterState.ceval.
* Py_SetRecursionLimit() now only sets _Py_CheckRecursionLimit
of ceval.c if the current Python thread is part of the main
interpreter.
* Inline _Py_MakeEndRecCheck() into _Py_LeaveRecursiveCall().
* Convert _Py_RecursionLimitLowerWaterMark() macro into a static
inline function.
Add --with-experimental-isolated-subinterpreters build option to
configure: better isolate subinterpreters, experimental build mode.
When used, force the usage of the libc malloc() memory allocator,
since pymalloc relies on the unique global interpreter lock (GIL).
Due to backwards compatibility concerns, because keywords immediately followed by a string without whitespace between them (like in `bg="#d00" if clear else"#fca"`) would fail to parse,
commit 41d5b94af4 has to be reverted.
I can add another commit with the new test case I wrote to verify that the warning was being printed before my change, stopped printing after my change, and that the function does not return null after my change.
Automerge-Triggered-By: @brettcannon
Otherwise we leave a dangling pointer to free'd memory. If we
then initialize a new interpreter in the same process and call
PyImport_ExtendInittab, we will (likely) crash when calling
PyMem_RawRealloc(inittab_copy, ...) since the pointer address
is bogus.
Automerge-Triggered-By: @brettcannon
An isolated subinterpreter cannot spawn threads, spawn a child
process or call os.fork().
* Add private _Py_NewInterpreter(isolated_subinterpreter) function.
* Add isolated=True keyword-only parameter to
_xxsubinterpreters.create().
* Allow again os.fork() in "non-isolated" subinterpreters.
This implements full support for # type: <type> comments, # type: ignore <stuff> comments, and the func_type parsing mode for ast.parse() and compile().
Closes https://github.com/we-like-parsers/cpython/issues/95.
(For now, you need to use the master branch of mypy, since another issue unique to 3.9 had to be fixed there, and there's no mypy release yet.)
The only thing missing is `feature_version=N`, which is being tracked in https://github.com/we-like-parsers/cpython/issues/124.
New PyFrame_GetBack() function: get the frame's next outer frame.
Replace frame->f_back with PyFrame_GetBack(frame) in most code, except
in frameobject.c, ceval.c and genobject.c.
Remove the following functions from the C API:
* PyAsyncGen_ClearFreeLists()
* PyContext_ClearFreeList()
* PyDict_ClearFreeList()
* PyFloat_ClearFreeList()
* PyFrame_ClearFreeList()
* PyList_ClearFreeList()
* PySet_ClearFreeList()
* PyTuple_ClearFreeList()
Make these functions private, move them to the internal C API and
change their return type to void.
Call explicitly PyGC_Collect() to free all free lists.
Note: PySet_ClearFreeList() did nothing.
PyFrame_GetCode(frame): return a borrowed reference to the frame
code.
Replace frame->f_code with PyFrame_GetCode(frame) in most code,
except in frameobject.c, genobject.c and ceval.c.
Also add PyFrame_GetLineNumber() to the limited C API.
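A minimal sketch following the description above (the code object is treated as borrowed here, as stated in the text, so it is not released; the helper is illustrative):
```
#include <Python.h>

static PyObject *
frame_location(PyFrameObject *frame)
{
    PyCodeObject *code = PyFrame_GetCode(frame);   /* borrowed, per the text above */
    int lineno = PyFrame_GetLineNumber(frame);

    /* co_name is only accessible with the non-limited API. */
    return Py_BuildValue("Oi", code->co_name, lineno);
}
```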
This commit also allows passing flags to the new parser in all interfaces and fixes a bug in the parser generator that was causing rules with actions to be inlined, making them disappear.
If _PyCode_InitOpcache() fails in _PyEval_EvalFrameDefault(), use
"goto exit_eval_frame;" rather than "return NULL;" to exit the
function in a consistent state. For example, tstate->frame is now
reset properly.
This is invoked by mypy, using ast.parse(source, "<func_type>", "func_type"). Since the new grammar doesn't yet support the func_type_input start symbol we must use the old compiler in this case to prevent a crash.
https://bugs.python.org/issue40334
* Rename PyConfig.use_peg to _use_peg_parser
* Document PyConfig._use_peg_parser and mark it as deprecated
* Mark -X oldparser option and PYTHONOLDPARSER env var as deprecated
in the documentation.
* Add use_old_parser() and skip_if_new_parser() to test.support
* Remove sys.flags.use_peg: use_old_parser() uses
_testinternalcapi.get_configs() instead.
* Enhance test_embed tests
* subprocess._args_from_interpreter_flags() copies -X oldparser
The constant values of future flags in the __future__ module
are updated in order to prevent collision with compiler flags.
Previously PyCF_ALLOW_TOP_LEVEL_AWAIT was clashing
with CO_FUTURE_DIVISION.
* Replace PY_INT64_T with int64_t
* Replace PY_UINT32_T with uint32_t
* Replace PY_UINT64_T with uint64_t
sha3module.c no longer checks if PY_UINT64_T is defined since it's
always defined and uint64_t is always available on platforms
supported by Python.
Avoid a temporary buffer to create a bytes string: use
PyBytes_FromStringAndSize() to directly allocate a bytes object.
Cleanup also the code: PEP 7 formatting, move variable definitions
closer to where they are used. Fix assertion checking "j" index.
Rename _PyInterpreterState_GET_UNSAFE() to _PyInterpreterState_GET()
for consistency with _PyThreadState_GET() and to have a shorter name
(help to fit into 80 columns).
Add also "assert(tstate != NULL);" to the function.
Don't access PyInterpreterState.config member directly anymore, but
use new functions:
* _PyInterpreterState_GetConfig()
* _PyInterpreterState_SetConfig()
* _Py_GetConfig()
Fix the signal handler: it now always uses the main interpreter,
rather than trying to get the current Python thread state.
The following functions now accept an interpreter, instead of a
Python thread state:
* _PyEval_SignalReceived()
* _Py_ThreadCanHandleSignals()
* _PyEval_AddPendingCall()
* COMPUTE_EVAL_BREAKER()
* SET_GIL_DROP_REQUEST(), RESET_GIL_DROP_REQUEST()
* SIGNAL_PENDING_CALLS(), UNSIGNAL_PENDING_CALLS()
* SIGNAL_PENDING_SIGNALS(), UNSIGNAL_PENDING_SIGNALS()
* SIGNAL_ASYNC_EXC(), UNSIGNAL_ASYNC_EXC()
Py_AddPendingCall() now uses the main interpreter if it fails to get
the current Python thread state.
Convert _PyThreadState_GET() and PyInterpreterState_GET_UNSAFE()
macros to static inline functions.
PyInterpreterState_New() is now responsible to create pending calls,
PyInterpreterState_Delete() now deletes pending calls.
* Rename _PyEval_InitThreads() to _PyEval_InitGIL() and rename
_PyEval_InitGIL() to _PyEval_FiniGIL().
* _PyEval_InitState() and PyEval_FiniState() now create and delete
pending calls. _PyEval_InitState() now returns -1 on memory
allocation failure.
* Add init_interp_create_gil() helper function: code shared by
Py_NewInterpreter() and Py_InitializeFromConfig().
* init_interp_create_gil() now also calls _PyEval_FiniGIL(),
_PyEval_InitGIL() and _PyGILState_Init() in subinterpreters, but
these functions now do nothing when called from a subinterpreter.
Add _PyIndex_Check() function to the internal C API: a fast inlined
version of PyIndex_Check(). A sketch follows this entry.
Add Include/internal/pycore_abstract.h header file.
Replace PyIndex_Check() with _PyIndex_Check() in C files of Objects
and Python subdirectories.
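A minimal sketch of what the fast inlined check looks like; the real _PyIndex_Check() lives in pycore_abstract.h and may differ in detail:
```
#include <Python.h>

static inline int
index_check_sketch(PyObject *obj)
{
    /* An object supports __index__ if its type fills the nb_index slot. */
    PyNumberMethods *nb = Py_TYPE(obj)->tp_as_number;
    return (nb != NULL && nb->nb_index != NULL);
}
```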
Add a private _at_fork_reinit() method to _thread.Lock,
_thread.RLock, threading.RLock and threading.Condition classes:
reinitialize the lock after fork in the child process; reset the lock
to the unlocked state.
Rename also the private _reset_internal_locks() method of
threading.Event to _at_fork_reinit().
* Add _PyThread_at_fork_reinit() private function. It is excluded
from the limited C API.
* threading.Thread._reset_internal_locks() now calls
_at_fork_reinit() on self._tstate_lock rather than creating a new
Python lock object.
The _PyErr_WarnUnawaitedCoroutine() fallback now also sets the
coroutine object as the source of the warning, as done by the Python
implementation warnings._warn_unawaited_coroutine().
Moreover, don't truncate the coroutine name: Python supports
arbitrary string length to format the message.
Add _PySys_Audit() function to the internal C API: similar to
PySys_Audit(), but requires a mandatory tstate parameter.
Cleanup sys_audit_tstate() code: remove code path for NULL tstate,
since the function exits at entry if tstate is NULL. Remove also code
path for NULL tstate->interp: should_audit() now ensures that it is
not NULL (even if tstate->interp cannot be NULL in practice).
PySys_AddAuditHook() now checks if tstate is not NULL to decide if
tstate can be used or not, and tstate is set to NULL if the runtime
is not initialized yet.
Use _PySys_Audit() in sysmodule.c.
Remove two unused imports: _thread and _weakref. Avoid creating a new
winreg builtin module if it's already available in sys.modules.
The winreg module is now stored as "winreg" rather than "_winreg".
PyThreadState.frame is a borrowed reference, not a strong reference:
PyThreadState_Clear() must not call Py_CLEAR(tstate->frame).
Remove test_threading.test_warnings_at_exit(): we cannot guarantee
that the Python thread state of daemon threads is cleared in a
reliable way during Python shutdown.
* Re-add removed classes Suite, slice, Param, AugLoad and AugStore.
* Add docstrings for dummy classes.
* Add docstrings for attribute aliases.
* Set __module__ to "ast" instead of "_ast".
* bpo-22490: Remove "__PYVENV_LAUNCHER__" from the shell environment on macOS
This changeset removes the environment variable "__PYVENV_LAUNCHER__"
during interpreter launch as it is only needed to communicate between
the stub executable in framework installs and the actual interpreter.
Leaving the environment variable present may lead to misbehaviour when
launching other scripts.
* Actually commit the changes for issue 22490...
* Correct typo
Co-Authored-By: Nicola Soranzo <nicola.soranzo@gmail.com>
* Run make patchcheck
Co-authored-by: Jason R. Coombs <jaraco@jaraco.com>
Co-authored-by: Nicola Soranzo <nicola.soranzo@gmail.com>
Remove _PyRuntime.getframe hook and remove _PyThreadState_GetFrame
macro which was an alias to _PyRuntime.getframe. They were only
exposed by the internal C API. Remove also PyThreadFrameGetter type.
If a thread different than the main thread schedules a pending call
(Py_AddPendingCall()), the bytecode evaluation loop is no longer
interrupted at each bytecode instruction to check for pending calls
which cannot be executed. Only the main thread can execute pending
calls.
Previously, the bytecode evaluation loop was interrupted at each
instruction until the main thread executes pending calls.
* Add _Py_ThreadCanHandlePendingCalls() function.
* SIGNAL_PENDING_CALLS() now only sets eval_breaker to 1 if the
current thread can execute pending calls. Only the main thread can
execute pending calls.
COMPUTE_EVAL_BREAKER() now also checks if the Python thread state
belongs to the main interpreter. Don't break the evaluation loop if
there are pending signals but the Python thread state belongs to a
subinterpreter.
* Add _Py_IsMainThread() function.
* Add _Py_ThreadCanHandleSignals() function.
If a thread different than the main thread gets a signal, the
bytecode evaluation loop is no longer interrupted at each bytecode
instruction to check for pending signals which cannot be handled.
Only the main thread of the main interpreter can handle signals.
Previously, the bytecode evaluation loop was interrupted at each
instruction until the main thread handles signals.
Changes:
* COMPUTE_EVAL_BREAKER() and SIGNAL_PENDING_SIGNALS() no longer set
eval_breaker to 1 if the current thread cannot handle signals.
* take_gil() now always recomputes eval_breaker.
If Py_AddPendingCall() is called in a subinterpreter, the function is
now scheduled to be called from the subinterpreter, rather than being
called from the main interpreter.
Each subinterpreter now has its own list of scheduled calls.
* Move pending and eval_breaker fields from _PyRuntimeState.ceval
to PyInterpreterState.ceval.
* new_interpreter() now calls _PyEval_InitThreads() to create
pending calls lock.
* Fix Py_AddPendingCall() for subinterpreters. It now calls
_PyThreadState_GET() which works in a subinterpreter if the
caller holds the GIL, and only falls back on
PyGILState_GetThisThreadState() if _PyThreadState_GET()
returns NULL.
Do not apply AST-based optimizations if 'from __future__ import annotations' is used, in order to
prevent information loss in the final version of the annotations.
bpo-37127, bpo-39984:
* trip_signal() and Py_AddPendingCall() now get the current Python
thread state using PyGILState_GetThisThreadState() rather than
_PyRuntimeState_GetThreadState() to be able to get it even if the
GIL is released.
* _PyEval_SignalReceived() now expects tstate rather than ceval.
* Remove ceval parameter of _PyEval_AddPendingCall(): ceval is now
  obtained from the tstate parameter.
* _PyThreadState_DeleteCurrent() now takes tstate rather than
runtime.
* Add ensure_tstate_not_null() helper to pystate.c.
* Add _PyEval_ReleaseLock() function.
* _PyThreadState_DeleteCurrent() now calls
_PyEval_ReleaseLock(tstate) and frees PyThreadState memory after
this call, not before.
* PyGILState_Release(): rename "tcur" variable to "tstate".
* Rename _PyInterpreterState_Get() to PyInterpreterState_Get() and
  move it to the limited C API.
* Add _PyInterpreterState_Get() alias to PyInterpreterState_Get() for
backward compatibility with Python 3.8.
Replace _PyInterpreterState_Get() function calls with the
_PyInterpreterState_GET_UNSAFE() macro which is more efficient but
doesn't check if tstate or interp is NULL.
_Py_GetConfigsAsDict() now uses _PyThreadState_GET().
* sys.settrace(), sys.setprofile() and _lsprof.Profiler.enable() now
properly report PySys_Audit() error if "sys.setprofile" or
"sys.settrace" audit event is denied.
* Add _PyEval_SetProfile() and _PyEval_SetTrace() function: similar
to PyEval_SetProfile() and PyEval_SetTrace() but take a tstate
parameter and return -1 on error.
* Add _PyObject_FastCallTstate() function.
PyInterpreterState.eval_frame function now requires a tstate (Python
thread state) parameter.
Add private functions to the C API to get and set the frame
evaluation function:
* Add tstate parameter to _PyFrameEvalFunction function type.
* Add _PyInterpreterState_GetEvalFrameFunc() and
_PyInterpreterState_SetEvalFrameFunc() functions.
* Add tstate parameter to _PyEval_EvalFrameDefault().
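A minimal sketch of the updated hook shape, assuming the private declarations are available (CPython internal headers, Py_BUILD_CORE); my_eval_frame simply delegates to the default evaluator:
```
#include <Python.h>
/* Assumed: _PyEval_EvalFrameDefault() and _PyInterpreterState_SetEvalFrameFunc()
   are declared via CPython's internal/private headers. */

static PyObject *
my_eval_frame(PyThreadState *tstate, PyFrameObject *frame, int throwflag)
{
    /* Custom instrumentation could run here, with tstate passed explicitly. */
    return _PyEval_EvalFrameDefault(tstate, frame, throwflag);
}

static void
install_eval_hook(PyInterpreterState *interp)
{
    _PyInterpreterState_SetEvalFrameFunc(interp, my_eval_frame);
}
```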
The 32-bit (49-day) TickCount relied on in EnterNonRecursiveMutex can overflow
in the gap between the 'target' time and the 'now' time WaitForSingleObjectEx
returns, causing the loop to think it needs to wait another 49 days. This is
most likely to happen when the machine is hibernated during
WaitForSingleObjectEx.
This makes acquiring a lock/event/etc from the _thread or threading module
appear to never timeout.
Replace with GetTickCount64() - this is OK now that Python no longer supports XP,
which lacks it, and GetTickCount64() is already in use for time.monotonic().
Co-authored-by: And Clover <and.clover@bromium.com>
* Remove the slice type.
* Make Slice a kind of the expr type instead of the slice type.
* Replace ExtSlice(slices) with Tuple(slices, Load()).
* Replace Index(value) with a value itself.
All non-terminal nodes in AST for expressions are now of the expr type.
Add --with-platlibdir option to the configure script: name of the
platform-specific library directory, stored in the new sys.platlibdir
attribute. It is used to build the path of platform-specific dynamic
libraries and the path of the standard library.
It is equal to "lib" on most platforms. On Fedora and SuSE, it is
equal to "lib64" on 64-bit systems.
Co-Authored-By: Jan Matějek <jmatejek@suse.com>
Co-Authored-By: Matěj Cepl <mcepl@cepl.eu>
Co-Authored-By: Charalampos Stratakis <cstratak@redhat.com>
PyGILState_Ensure() doesn't call PyEval_InitThreads() anymore when a
new Python thread state is created. The GIL has been created by
Py_Initialize() since Python 3.7, so it's no longer necessary to call
PyEval_InitThreads() explicitly.
Add an assertion to ensure that the GIL is already created.
Clear the frames of daemon threads earlier during the Python shutdown to
call objects destructors. So "unclosed file" resource warnings are now
emitted for daemon threads in a more reliable way.
Cleanup _PyThreadState_DeleteExcept() code: rename "garbage" to
"list".
* Remove ceval parameter of take_gil(): get it from tstate.
* Move exit_thread_if_finalizing() call inside take_gil(). Replace
exit_thread_if_finalizing() with tstate_must_exit(): the caller is
now responsible to call PyThread_exit_thread().
* Move is_tstate_valid() assertion inside take_gil(). Remove
is_tstate_valid(): inline code into take_gil().
* Move gil_created() assertion inside take_gil().
* exit_thread_if_finalizing() does now access directly _PyRuntime
variable, rather than using tstate->interp->runtime since tstate
can be a dangling pointer after Py_Finalize() has been called.
* exit_thread_if_finalizing() is now called *before* calling
take_gil(). _PyRuntime.finalizing is an atomic variable,
we don't need to hold the GIL to access it.
* Add ensure_tstate_not_null() function to check that tstate is not
NULL at runtime. Check tstate earlier. take_gil() no longer
checks if tstate is NULL.
Cleanup:
* PyEval_RestoreThread() no longer saves/restores errno: it's already
done inside take_gil().
* PyEval_AcquireLock(), PyEval_AcquireThread(),
PyEval_RestoreThread() and _PyEval_EvalFrameDefault() now check if
tstate is valid with the new is_tstate_valid() function which uses
_PyMem_IsPtrFreed().
The Py_FatalError() function is replaced with a macro which
automatically logs the name of the current function, unless the
Py_LIMITED_API macro is defined.
Changes:
* Add _Py_FatalErrorFunc() function.
* Remove the function name from the message of Py_FatalError() calls
which included the function name.
* Update tests.
Convert _PyRuntimeState.finalizing field to an atomic variable:
* Rename it to _finalizing
* Change its type to _Py_atomic_address
* Add _PyRuntimeState_GetFinalizing() and _PyRuntimeState_SetFinalizing()
functions
* Remove _Py_CURRENTLY_FINALIZING() function: replace it with testing
directly _PyRuntimeState_GetFinalizing() value
Convert _PyRuntimeState_GetThreadState() to static inline function.
The AST "Suite" node is no longer used and it can be removed from the ASDL definition and related structures (compiler, visitors, ...).
Co-Authored-By: Victor Stinner <vstinner@python.org>
Co-authored-by: Brett Cannon <54418+brettcannon@users.noreply.github.com>
Co-authored-by: Pablo Galindo <Pablogsal@gmail.com>
* Add _PyWarnings_InitState() which only initializes the _warnings
module state (tstate->interp->warnings) without creating a module
object
* Py_InitializeFromConfig() now calls _PyWarnings_InitState() instead
of _PyWarnings_Init()
* Also rename private functions of _warnings.c to avoid confusion
between the public C API and the private C API.
In function `_PyEval_EvalFrameDefault`, the PREDICT and PREDICTED macros now share the same identifier-creation scheme, reducing code repetition and ensuring that both macros generate the same identifier.
* Hard reset + cherry-picking the changes.
* 📜🤖 Added by blurb_it.
* Added @vstinner News
* Update Misc/NEWS.d/next/Library/2020-02-11-13-01-38.bpo-38691.oND8Sk.rst
Co-Authored-By: Victor Stinner <vstinner@python.org>
* Hard reset to master
* Hard reset to master + latest changes
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Victor Stinner <vstinner@python.org>
Move the dtoa.h header file to the internal C API as pycore_dtoa.h:
it only contains private functions (prefixed by "_Py").
The math and cmath modules must now be compiled with the
Py_BUILD_CORE macro defined.
gcc -Wcast-qual turns up a number of instances of casting away constness of pointers. Some of these can be safely modified, by either:
Adding the const to the type cast, as in:
- return _PyUnicode_FromUCS1((unsigned char*)s, size);
+ return _PyUnicode_FromUCS1((const unsigned char*)s, size);
or removing the cast entirely, because it is not necessary (but probably was at one time), as in:
- PyDTrace_FUNCTION_ENTRY((char *)filename, (char *)funcname, lineno);
+ PyDTrace_FUNCTION_ENTRY(filename, funcname, lineno);
These changes do not change the generated code, but they make it much easier to check for const-correctness errors.
The bulk of this patch was generated automatically with:
for name in \
PyObject_Vectorcall \
Py_TPFLAGS_HAVE_VECTORCALL \
PyObject_VectorcallMethod \
PyVectorcall_Function \
PyObject_CallOneArg \
PyObject_CallMethodNoArgs \
PyObject_CallMethodOneArg \
;
do
echo $name
git grep -lwz _$name | xargs -0 sed -i "s/\b_$name\b/$name/g"
done
old=_PyObject_FastCallDict
new=PyObject_VectorcallDict
git grep -lwz $old | xargs -0 sed -i "s/\b$old\b/$new/g"
and then cleaned up:
- Revert changes in docs & news
- Revert changes to backcompat defines in headers
- Nudge misaligned comments
Currently, during runtime destruction, `_PyImport_Cleanup` is clearing the interpreter state before clearing out the modules themselves. This leads to a segfault on modules that rely on the module state to clear themselves up.
For example, let's take the small snippet added in the issue by @DinoV :
```
import _struct
class C:
def __init__(self):
self.pack = _struct.pack
def __del__(self):
self.pack('I', -42)
_struct.x = C()
```
The module `_struct` uses the module state to run `pack`. Therefore, the module state has to be alive until after the module has been cleared out to successfully run `C.__del__`. This happens at line 606, when `_PyImport_Cleanup` calls `_PyModule_Clear`. In fact, the loop that calls `_PyModule_Clear` has in its comments:
> Now, if there are any modules left alive, clear their globals to minimize potential leaks. All C extension modules actually end up here, since they are kept alive in the interpreter state.
That means that we can't clear the module state (which is used by C Extensions) before we run that loop.
Moving `_PyInterpreterState_ClearModules` until after it fixes the segfault in the code snippet.
Finally, this updates a test in `io` to correctly assert the error that it now raises (since it now finds the io module state). The affected test is `test_create_at_shutdown_without_encoding`. The fact that this test now passes shows that the module state stays alive even when `__del__` is called at module destruction time, so no new test was added.
https://bugs.python.org/issue38076
Move the following functions from the public C API to the internal C
API:
* _PyDebug_PrintTotalRefs()
* _Py_PrintReferenceAddresses()
* _Py_PrintReferences()
PyThreadState.on_delete is a callback used to notify Python when a
thread completes. _thread._set_sentinel() function creates a lock
which is released when the thread completes. It sets on_delete
callback to the internal release_sentinel() function. This lock is
known as Thread._tstate_lock in the threading module.
The release_sentinel() function uses the Python C API. The problem is
that on_delete is called late in the Python finalization, when the C
API is no longer fully working.
The PyThreadState_Clear() function now calls the
PyThreadState.on_delete callback. Previously, that happened in
PyThreadState_Delete().
The release_sentinel() function is now called when the C API is still
fully working.
Replace a few Py_FatalError() calls if tstate is NULL with
assert(tstate != NULL) in ceval.c.
PyEval_AcquireThread(), PyEval_ReleaseThread() and
PyEval_RestoreThread() must never be called with a NULL tstate.
* Add DICT_UPDATE and DICT_MERGE bytecodes. Use them for ** unpacking (see the dis sketch after this list).
* Remove BUILD_MAP_UNPACK and BUILD_MAP_UNPACK_WITH_CALL, as they are now unused.
* Update magic number for ** unpacking opcodes.
* Update dis.rst to incorporate new bytecodes.
* Add blurb entry.
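As a hedged illustration (CPython 3.9 or later; exact listings differ between versions), dis shows the new opcodes:
```
import dis

# {**a, **b} builds an empty dict and then uses DICT_UPDATE for each
# mapping, instead of BUILD_MAP_UNPACK.
dis.dis(compile("{**a, **b}", "<demo>", "eval"))

# f(**a, **b) uses DICT_MERGE (which also rejects duplicate keyword
# arguments), instead of BUILD_MAP_UNPACK_WITH_CALL.
dis.dis(compile("f(**a, **b)", "<demo>", "eval"))
```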
* Add three new bytecodes: LIST_TO_TUPLE, LIST_EXTEND, SET_UPDATE. Use them to implement star unpacking expressions (see the dis sketch after this list).
* Remove four bytecodes BUILD_LIST_UNPACK, BUILD_TUPLE_UNPACK, BUILD_SET_UNPACK and BUILD_TUPLE_UNPACK_WITH_CALL opcodes as they are now unused.
* Update magic number and dis.rst for new bytecodes.
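As a hedged illustration (CPython 3.9 or later; exact listings differ between versions):
```
import dis

# [*a, *b] builds an empty list and grows it with LIST_EXTEND.
dis.dis(compile("[*a, *b]", "<demo>", "eval"))

# (*a, *b) also goes through a list, then LIST_TO_TUPLE converts it.
dis.dis(compile("(*a, *b)", "<demo>", "eval"))

# {*a, *b} uses SET_UPDATE.
dis.dis(compile("{*a, *b}", "<demo>", "eval"))
```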
* bpo-39336: Allow setattr to fail on modules which aren't assignable
When attaching a child module to a package, if the object in sys.modules raises an AttributeError (e.g. because it is immutable), it used to cause the whole import to fail. This change allows immutable packages to exist: the AttributeError is ignored and an ImportWarning is reported instead.
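As a hedged sketch of what a failing setattr on a module looks like (FrozenModule is a hypothetical example class, not part of the change; the actual behavior change lives in the import machinery):
```
import types

class FrozenModule(types.ModuleType):
    def __setattr__(self, name, value):
        raise AttributeError(f"cannot set {name!r}: module is frozen")

pkg = FrozenModule("pkg")

# After importing "pkg.sub", the import machinery essentially runs
# setattr(pkg, "sub", submodule). With this change, a failure like the
# one below is reported as an ImportWarning instead of aborting the import.
try:
    setattr(pkg, "sub", types.ModuleType("pkg.sub"))
except AttributeError as exc:
    print("setattr failed:", exc)
```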
Python-ast.h contains a macro named Yield that conflicts with the Yield macro
in Windows system headers. While Python-ast.h has an "undef Yield" directive
to prevent this, it means that Python-ast.h must be included before Windows
header files or we run into a re-declaration warning. In commit c96be811fa
an include for pycore_pystate.h was added which indirectly includes Windows
header files. In this commit we re-order the includes to fix this warning.
* Reorder the __aenter__ and __aexit__ checks for async with
* Add assertions for async with body being skipped
* Swap __aexit__ and __aenter__ loading in the documentation
Break up COMPARE_OP into four logically distinct opcodes (see the dis sketch after this list):
* COMPARE_OP for rich comparisons
* IS_OP for 'is' and 'is not' tests
* CONTAINS_OP for 'in' and 'not in' tests
* JUMP_IF_NOT_EXC_MATCH for checking exceptions in 'try-except' statements.
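As a hedged illustration (CPython 3.9 or later; exact listings differ between versions):
```
import dis

dis.dis(compile("a < b", "<demo>", "eval"))       # COMPARE_OP
dis.dis(compile("a is not b", "<demo>", "eval"))  # IS_OP
dis.dis(compile("a not in b", "<demo>", "eval"))  # CONTAINS_OP

# JUMP_IF_NOT_EXC_MATCH replaces the old exception-match comparison
# in the bytecode of try/except clauses.
src = "try:\n    pass\nexcept ValueError:\n    pass\n"
dis.dis(compile(src, "<demo>", "exec"))
```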
This adds a new function named _PyErr_GetExcInfo() that is a variation of the
original PyErr_GetExcInfo() taking a PyThreadState as its first argument.
That function makes it possible to retrieve the exception information
of any Python thread, not only the current one.
The fix changes copy_location() to require an extra node from which to extract the end location, and fixes all 5 call sites.
https://bugs.python.org/issue39235
When producing the bytecode of exception handlers that bind a name (like `except Exception as e`), we need to produce a try-finally block to make sure that the name is deleted after the handler is executed, to prevent cycles in the stack frame objects. The bytecode associated with this try-finally block has no source lines associated with it, which was causing problems when the tracing functionality ran over it.
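For context, a small example of the language behavior that requires the implicit try-finally block (the deletion itself is long-standing behavior, not new in this change):
```
try:
    1 / 0
except ZeroDivisionError as exc:
    print("inside the handler:", exc)

# The compiler wraps the handler body in an implicit
# "finally: exc = None; del exc", so the name is gone here.
try:
    exc
except NameError:
    print("'exc' was deleted after the handler")
```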
All keywords should first be checked for pointer identity. Only
after that failed for all keywords (unlikely) should unicode
equality be used.
The original code would call unicode equality on any non-matching
keyword argument, which meant calling it often, e.g. when a function
has many kwargs but only the last one is provided.
Each Python subinterpreter now has its own "small integer
singletons": numbers in the [-5; 256] range.
It is no longer possible to change the number of small integers at
build time by overriding NSMALLNEGINTS and NSMALLPOSINTS macros:
macros should now be modified manually in pycore_pystate.h header
file.
For now, continue to share _PyLong_Zero and _PyLong_One singletons
between all subinterpreters.
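As a hedged illustration of the small-integer cache (a CPython implementation detail, not a language guarantee); int() is used so the values are computed at runtime instead of being shared as compile-time constants:
```
a = int("100")
b = int("100")
print(a is b)       # True: 100 is served from the small-integer cache

c = int("100000")
d = int("100000")
print(c is d)       # Usually False: larger ints are allocated separately
```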
When parsing an "elif" node, lineno and col_offset of the node now point to the "elif" keyword and not to its condition, making it consistent with the "if" node.
https://bugs.python.org/issue39031
Automerge-Triggered-By: @pablogsal
In Python 3.9.0a1, sys.argv[0] was made an absolute path if a filename
was specified on the command line. Revert this change, since most
users expect sys.argv to be unmodified.