The array of small PyLong objects has been statically declared. Here I also statically initialize them. Consequently they are no longer initialized dynamically during runtime init.
I've also moved them under a new sub-struct in _PyRuntimeState, in preparation for static allocation and initialization of other global objects.
https://bugs.python.org/issue45953
The line numbers for actually calling the decorator functions of
functions and classes were wrong (as opposed to loading them, where they
were already correct before).
Co-authored-by: Łukasz Langa <lukasz@langa.pl>
This defines VPATH differently in PGO instrumentation builds, to account for a different default output directory. It also adds sys._vpath on Windows to make the value available to sysconfig so that it can be used in tests.
When Python is embedded in other applications, it is not easy to determine which version of Python is being used. This change exposes the Python version as part of the API data. Tools like Austin (https://github.com/P403n1x87/austin) can benefit from this data when targeting applications like uWSGI, as the Python version can then be inferred systematically by looking at the exported symbols rather than relying on unreliable pattern matching or other hacks (like remote code execution etc...).
Automerge-Triggered-By: GH:pablogsal
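For consumers, a version packed the way PY_VERSION_HEX is can be decoded as follows. This is an illustration only; the exact symbol and encoding exposed by this change are whatever it documents, and the function and value below are hypothetical.
```
#include <stdio.h>

/* Illustration only: decoding a PY_VERSION_HEX-style value
   (0xAABBCCDD: major, minor, micro, release level/serial). */
static void
print_python_version(unsigned long hexversion)
{
    unsigned int major = (hexversion >> 24) & 0xFF;
    unsigned int minor = (hexversion >> 16) & 0xFF;
    unsigned int micro = (hexversion >> 8) & 0xFF;
    printf("Python %u.%u.%u\n", major, minor, micro);
}

int main(void)
{
    print_python_version(0x030B00F0UL);  /* hypothetical value for 3.11.0 */
    return 0;
}
```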
This change strictly renames and moves code around. It helps in the following ways:
* ensures type-related init functions focus strictly on one of the three aspects (state, objects, types)
* passes in PyInterpreterState * to all those functions, simplifying work on moving types/objects/state to the interpreter
* consistent naming conventions help make what's going on more clear
* keeping API related to a type in the corresponding header file makes it more obvious where to look for it
https://bugs.python.org/issue46008
Previously, basic initialization of PyInterpreterState happened in PyInterpreterState_New() (along with allocation and adding the new interpreter to the runtime state). This prevented us from initializing interpreter states that were allocated separately (e.g. statically or in a free list). We've addressed that here by factoring out a separate function just for initialization. We've done the same for PyThreadState. _PyRuntimeState was sorted out when we added it since _PyRuntime is statically allocated. However, here we update the existing init code to line up with the functions for PyInterpreterState and PyThreadState.
https://bugs.python.org/issue46008
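The shape of the split, with hypothetical names (a sketch of the pattern only, not the actual pystate.c code):
```
#include <stdlib.h>

/* Hypothetical sketch: *_init() only initializes, so it also works for
   statically allocated or free-listed instances, while *_new() keeps
   doing allocation plus initialization. */
typedef struct example_state {
    int initialized;
    /* ... */
} example_state;

static void
example_state_init(example_state *st)
{
    /* initialization only; no allocation here */
    st->initialized = 1;
}

static example_state *
example_state_new(void)
{
    example_state *st = malloc(sizeof(example_state));
    if (st == NULL) {
        return NULL;
    }
    example_state_init(st);  /* same init path as static instances */
    return st;
}
```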
PyInterpreterState_Main() is a plain function exposed in the public C-API. For internal usage we can take the more efficient approach in this PR.
https://bugs.python.org/issue46008
This simplifies new_threadstate(). We also rename _PyThreadState_Init() to _PyThreadState_SetCurrent() to reflect what it actually does.
https://bugs.python.org/issue46008
Doing so allows us to stop assigning various fields to `NULL` and 0. It also more closely matches the behavior of a static initializer.
Automerge-Triggered-By: GH:ericsnowcurrently
This parallels _PyRuntimeState.interpreters. Doing this helps make it more clear what part of PyInterpreterState relates to its threads.
https://bugs.python.org/issue46008
This falls into the category of keeping allocation and initialization separate. It also allows us to use _PyEval_InitState() safely in functions that return void.
https://bugs.python.org/issue46008
* Make generator, coroutine and async gen structs all the same size.
* Store interpreter frame in generator (and coroutine). Reduces the number of allocations needed for a generator from two to one.
The build system now uses a :program:`_bootstrap_python` interpreter for
freezing and deepfreezing again. To speed up the build process, the build
tools :program:`_bootstrap_python` and :program:`_freeze_module` are no
longer built with LTO.
Cross building depends on a build Python interpreter, which must have the
same version and bytecode as the target host Python.
The getpath.py file is frozen at build time and executed as code over a namespace. It is never imported, nor is it meant to be importable or reusable. However, it should be easier to read, modify, and patch than the previous code.
This commit attempts to preserve every previously tested quirk, but these may be changed in the future to better align platforms.
Rename PyConfig.no_debug_ranges to PyConfig.code_debug_ranges and
invert the value.
Document -X no_debug_ranges and PYTHONNODEBUGRANGES env var in
PyConfig.code_debug_ranges documentation.
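For embedders, the same setting is reachable through PyConfig. A minimal sketch, assuming the renamed field and with minimal error handling:
```
#include <Python.h>

/* Sketch: embed Python with column/position tables disabled,
   equivalent to -X no_debug_ranges / PYTHONNODEBUGRANGES. */
int main(void)
{
    PyConfig config;
    PyConfig_InitPythonConfig(&config);
    config.code_debug_ranges = 0;   /* default is 1 (ranges enabled) */

    PyStatus status = Py_InitializeFromConfig(&config);
    PyConfig_Clear(&config);
    if (PyStatus_Exception(status)) {
        Py_ExitStatusException(status);
    }
    /* ... run code ... */
    return Py_FinalizeEx() < 0 ? 1 : 0;
}
```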
* Split exit paths into exceptional and non-exceptional.
* Move exit tracing code to individual bytecodes.
* Wrap all trace entry and exit events in macros to make them clearer and easier to enhance.
* Move return sequence into RETURN_VALUE, YIELD_VALUE and YIELD_FROM. Distinguish between normal trace events and dtrace events.
* Make internal APIs that take PyFrameConstructor take a PyFunctionObject instead.
* Add reference to function to frame, borrow references to builtins and globals.
* Add COPY_FREE_VARS instruction to allow specialization of calls to inner functions.
Implement changes to build with deep-frozen modules on Windows.
Note that we now require Python 3.10 as the "bootstrap" or "host" Python.
This gives a modest startup speedup (around 7%) on Windows.
Unlike the other locks reinitialized by _PyRuntimeState_ReInitThreads,
the "interpreters.main->id_mutex" is not freed by _PyRuntimeState_Fini
and should not force the default raw allocator.
Remove the asyncore and asynchat modules, deprecated in Python
3.6: use the asyncio module instead.
Remove the smtpd module, deprecated in Python 3.6: the aiosmtpd
module can be used instead; it is based on asyncio.
* Remove asyncore, asynchat and smtpd documentation
* Remove test_asyncore, test_asynchat and test_smtpd
* Rename Lib/asynchat.py to Lib/test/support/_asynchat.py
* Rename Lib/asyncore.py to Lib/test/support/_asyncore.py
* Rename Lib/smtpd.py to Lib/test/support/_smtpd.py
* Remove DeprecationWarning from private _asyncore, _asynchat and
_smtpd modules
* _smtpd: remove deprecated properties
This gains 10% or more in startup time for `python -c pass` on UNIX-ish systems.
The Makefile.pre.in generating code builds on Eric's work for bpo-45020, but the .c file generator is new.
Windows version TBD.
Currently custom modules (the array set on PyImport_FrozenModules) replace all the frozen stdlib modules. That can be problematic and is unlikely to be what the user wants. This change treats the custom frozen modules as additions instead. They take precedence over all other frozen modules except for those needed to bootstrap the import system. If the "code" field of an entry in the custom array is NULL then that frozen module is treated as disabled, which allows a custom entry to disable a frozen stdlib module.
This change allows us to get rid of is_essential_frozen_module() and simplifies the logic for which frozen modules should be ignored.
https://bugs.python.org/issue45395
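For example, a custom table installed before Py_Initialize() could now look roughly like this. This is a sketch: the bytecode array is a placeholder, and designated initializers are used because the exact set of struct _frozen fields can vary between versions.
```
#include <Python.h>

/* Sketch: the bytecode normally comes from Programs/_freeze_module;
   this array is just a placeholder. */
static const unsigned char mymod_bytecode[] = {0 /* marshalled code */};

static const struct _frozen custom_frozen_modules[] = {
    {.name = "mymod",
     .code = mymod_bytecode,
     .size = (int)sizeof(mymod_bytecode)},
    {.name = "__hello__", .code = NULL},  /* NULL code: disable this one */
    {0},                                  /* sentinel */
};

/* Install before Py_Initialize(); with this change the entries are
   treated as additions to (and overrides of) the stdlib frozen table. */
static void
install_custom_frozen_modules(void)
{
    PyImport_FrozenModules = custom_frozen_modules;
}
```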
The recently added PyConfig.stdlib_dir was being set with ".." entries. When __file__ was added for frozen modules, this caused a problem on out-of-tree builds. This PR fixes that by normalizing "stdlib_dir" when it is calculated in getpath.c.
https://bugs.python.org/issue45506
Freelists for object structs can now be disabled. A new ``configure``
option ``--without-freelists`` can be used to disable all freelists
except the empty tuple singleton. Internal Py*_MAXFREELIST macros can now
be defined as 0 without causing compiler warnings and segfaults.
Signed-off-by: Christian Heimes <christian@python.org>
Move Include/longobject.h non-limited API to a new
Include/cpython/longobject.h header file.
Move the following definitions to the internal C API:
* _PyLong_DigitValue
* _PyLong_FormatAdvancedWriter()
* _PyLong_FormatWriter()
* Avoid making C calls for most calls to Python functions.
* Change initialize_locals(steal=true) and _PyTuple_FromArraySteal to consume the argument references regardless of whether they succeed or fail.
The default was "off". Switching it to "on" means users get the benefit of frozen stdlib modules without having to do anything. There's a special-case for running-in-source-tree, so contributors don't get surprised when their stdlib changes don't get used.
https://bugs.python.org/issue45020
Remove fallbacks for missing round(), copysign() and hypot() in
Python/pymath.c. Python now requires these functions to build.
These fallbacks were needed on Visual Studio 2012 and older. They are
no longer needed since Visual Studio 2013. Python has been built with
Visual Studio 2017 or newer since Python 3.6.
Add PyThreadState_EnterTracing() and PyThreadState_LeaveTracing()
functions to the limited C API to suspend and resume tracing and
profiling.
Add a unit test on the PyThreadState C API to _testcapi.
Also add the internal _PyThreadState_DisableTracing() and
_PyThreadState_ResetTracing() functions.
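Typical use from a C profiling callback might look like this. A minimal sketch; the callback name is hypothetical and it would be installed with PyEval_SetProfile(my_profile, NULL):
```
#include <Python.h>

/* Sketch of a C profiling callback that suspends tracing/profiling
   while it runs Python code of its own, so that code is not profiled
   in turn. */
static int
my_profile(PyObject *obj, PyFrameObject *frame, int what, PyObject *arg)
{
    PyThreadState *tstate = PyThreadState_Get();

    PyThreadState_EnterTracing(tstate);
    /* ... call helper Python code, record data, etc. ... */
    PyThreadState_LeaveTracing(tstate);
    return 0;
}
```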
Rename Include/namespaceobject.h to
Include/internal/pycore_namespace.h.
The _testmultiphase extension is now built with the
Py_BUILD_CORE_MODULE macro defined to access _PyNamespace_Type.
object.c: remove unused "pycore_context.h" include.
Move classobject.h, context.h, genobject.h and longintrepr.h header
files from Include/ to Include/cpython/.
Remove redundant "#ifndef Py_LIMITED_API" in context.h.
Remove explicit #include "longintrepr.h" in C files. It's not needed,
Python.h already includes it.
Currently frozen modules do not have __file__ set. In their spec, origin is set to "frozen" and they are marked as not having a location. (Similarly, for frozen packages __path__ is set to an empty list.) However, for frozen stdlib modules we are able to extrapolate __file__ as long as we can determine the stdlib directory at runtime. (We now do so since gh-28586.) Having __file__ set is helpful for a number of reasons. Likewise, having a non-empty __path__ means we can import submodules of a frozen package from the filesystem (e.g. we could partially freeze the encodings module).
This change sets __file__ (and adds to __path__) for frozen stdlib modules. It uses sys._stdlibdir (from gh-28586) and the frozen module alias information (from gh-28655). All that work is done in FrozenImporter (in Lib/importlib/_bootstrap.py).
Also, if a frozen module is imported before importlib is bootstrapped (during interpreter initialization) then we fix up that module and its spec during the importlib bootstrapping step (i.e. importlib._bootstrap._setup()) to match what gets set by FrozenImporter, including setting the file info (if the stdlib dir is known). To facilitate this, modules imported using PyImport_ImportFrozenModule() have __origname__ set using the frozen module alias info. __origname__ is popped off during importlib bootstrap.
(To be clear, even with this change the new code to set __file__ during fixups in importlib._bootstrap._setup() doesn't actually get triggered yet. This is because sys._stdlibdir hasn't been set yet in interpreter initialization at the point importlib is bootstrapped. However, we do fix up such modules at that point to otherwise match the result of importing through FrozenImporter, just not the __file__ and __path__ parts. Doing so will require changes in the order in which things happen during interpreter initialization. That can be addressed separately. Once it is, the file-related fixup code from this PR will kick in.)
Here are things this change does not do:
* set __file__ for non-stdlib modules (no way of knowing the parent dir)
* set __file__ if the stdlib dir is not known (nor assume the expense of finding it)
* relatedly, set __file__ if the stdlib is in a zip file
* verify that the filename set to __file__ actually exists (too expensive)
* update __path__ for frozen packages that alias a non-package (since there is no package dir)
Other things this change skips, but we may do later:
* set __file__ on modules imported using PyImport_ImportFrozenModule()
* set co_filename when we unmarshal the frozen code object while importing the module (e.g. in FrozenImporter.exec_module()) -- this would allow tracebacks to show source lines
* implement FrozenImporter.get_filename() and FrozenImporter.get_source()
https://bugs.python.org/issue21736
The change in gh-28586 (bpo-45211) should not have included code to set _Py_path_config.stdlib_dir in Py_SetPythonHome(). We fix that here.
https://bugs.python.org/issue45471
* Move _PyObject_VectorcallTstate() and _PyObject_FastCallTstate() to
pycore_call.h (internal C API).
* Convert PyObject_CallOneArg(), PyObject_Vectorcall(),
_PyObject_FastCall() and PyVectorcall_Function() static inline
functions to regular functions.
* Add _PyVectorcall_FunctionInline() static inline function.
* PyObject_Vectorcall(), _PyObject_FastCall(), and
PyObject_CallOneArg() now call _PyThreadState_GET() rather
than PyThreadState_Get().
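As a reminder of the call shape these functions wrap, here is a minimal vectorcall invocation. A sketch with hypothetical helper and argument names; errors are propagated but otherwise not handled:
```
#include <Python.h>

/* Sketch: calling `func(1, 2)` through the vectorcall API. */
static PyObject *
call_with_two_args(PyObject *func)
{
    PyObject *one = PyLong_FromLong(1);
    PyObject *two = PyLong_FromLong(2);
    if (one == NULL || two == NULL) {
        Py_XDECREF(one);
        Py_XDECREF(two);
        return NULL;
    }
    PyObject *args[] = {one, two};
    /* nargsf is the number of positional arguments; kwnames is NULL
       because there are no keyword arguments. */
    PyObject *result = PyObject_Vectorcall(func, args, 2, NULL);
    Py_DECREF(one);
    Py_DECREF(two);
    return result;
}
```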
Building Python now requires a C99 <math.h> header file providing
isinf(), isnan() and isfinite() functions.
Remove the Py_FORCE_DOUBLE() macro. It was used by the
Py_IS_INFINITY() macro.
Changes:
* Remove Py_IS_NAN(), Py_IS_INFINITY() and Py_IS_FINITE()
in PC/pyconfig.h.
* Remove the _Py_force_double() function.
* configure no longer checks if math.h defines isinf(), isnan() and
isfinite().
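For reference, a small standalone example of the C99 classification functions that are now required:
```
#include <math.h>
#include <stdio.h>

/* Sketch: the C99 <math.h> functions Python now relies on. */
int main(void)
{
    double values[] = {1.0, INFINITY, NAN};
    for (int i = 0; i < 3; i++) {
        printf("%f: isnan=%d isinf=%d isfinite=%d\n",
               values[i], isnan(values[i]) != 0, isinf(values[i]) != 0,
               isfinite(values[i]) != 0);
    }
    return 0;
}
```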
Move Include/pystrhex.h to Include/internal/pycore_strhex.h.
The header file only contains private functions.
The following C extensions are now built with Py_BUILD_CORE_MODULE
macro defined to get access to the internal C API:
* _blake2
* _hashopenssl
* _md5
* _sha1
* _sha3
* _ssl
* binascii
* Never change types' cached keys. It could invalidate inline attribute objects.
* Lazily create object dictionaries.
* Update specialization of LOAD/STORE_ATTR.
* Don't update shared keys version for deletion of value.
* Update gdb support to handle instance values.
* Rename SPLIT_KEYS opcodes to INSTANCE_VALUE.
Redefining the PyThreadState_GET() macro in pycore_pystate.h is
useless since it doesn't affect files not including it. Either use
_PyThreadState_GET() directly, or don't use pycore_pystate.h internal
C API. For example, the _testcapi extension doesn't use the internal C
API, but uses the public PyThreadState_Get() function instead.
Replace PyThreadState_Get() with _PyThreadState_GET(). The
_PyThreadState_GET() macro is more efficient than PyThreadState_Get()
and PyThreadState_GET() function calls, which can fail with a fatal
Python error.
posixmodule.c and _ctypes extension now include <windows.h> before
pycore header files (like pycore_call.h).
_PyTraceback_Add() now uses _PyErr_Fetch()/_PyErr_Restore() instead
of PyErr_Fetch()/PyErr_Restore().
The _decimal and _xxsubinterpreters extensions are now built with the
Py_BUILD_CORE_MODULE macro defined to get access to the internal C
API.
* Move _PyObject_CallNoArgs() to pycore_call.h (internal C API).
* _ssl, _sqlite and _testcapi extensions now call the public
PyObject_CallNoArgs() function, rather than _PyObject_CallNoArgs().
* _lsprof extension is now built with Py_BUILD_CORE_MODULE macro
defined to get access to internal _PyObject_CallNoArgs().
Fix typo in the private _PyObject_CallNoArg() function name: rename
it to _PyObject_CallNoArgs() to be consistent with the public
function PyObject_CallNoArgs().
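For extensions, the public spelling is simply the following (a sketch with a hypothetical wrapper name):
```
#include <Python.h>

/* Sketch: call a callable with no arguments using the public API. */
static PyObject *
invoke_callback(PyObject *callback)
{
    PyObject *result = PyObject_CallNoArgs(callback);
    if (result == NULL) {
        return NULL;  /* exception already set by the call */
    }
    return result;
}
```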
Move the following macros to pycore_pymath.h (internal C API):
* _Py_SET_53BIT_PRECISION_HEADER
* _Py_SET_53BIT_PRECISION_START
* _Py_SET_53BIT_PRECISION_END
PEP 7: add braces to if and "do { ... } while (0)" in these macros.
Move also _Py_get_387controlword() and _Py_set_387controlword()
definitions to pycore_pymath.h. These functions are no longer
exported.
pystrtod.c now includes pycore_pymath.h.
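The `do { ... } while (0)` form mentioned above is the standard PEP 7 idiom for multi-statement macros. A generic illustration (not the actual precision macros):
```
/* Without the wrapper, a multi-statement macro breaks when used as the
   body of an unbraced `if`; do { ... } while (0) makes it behave like
   a single statement. */
#define SWAP_INTS(a, b)     \
    do {                    \
        int tmp_ = (a);     \
        (a) = (b);          \
        (b) = tmp_;         \
    } while (0)

/* Usage:
   if (x > y)
       SWAP_INTS(x, y);   // expands safely, trailing ';' included
*/
```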
Remove the following math macros using the errno variable:
* Py_ADJUST_ERANGE1()
* Py_ADJUST_ERANGE2()
* Py_OVERFLOWED()
* Py_SET_ERANGE_IF_OVERFLOW()
* Py_SET_ERRNO_ON_MATH_ERROR()
Create pycore_pymath.h internal header file.
Rename Py_ADJUST_ERANGE1() and Py_ADJUST_ERANGE2() to
_Py_ADJUST_ERANGE1() and _Py_ADJUST_ERANGE2(), and convert these
macros to static inline functions.
Move the following macros to pycore_pymath.h:
* _Py_IntegralTypeSigned()
* _Py_IntegralTypeMax()
* _Py_IntegralTypeMin()
* _Py_InIntegralTypeRange()
In the list of generated frozen modules at the top of Tools/scripts/freeze_modules.py, you will find that some of the modules have a different name than the module (or .py file) that is actually frozen. Let's call each case an "alias". Aliases do not come into play until we get to the (generated) list of modules in Python/frozen.c. (The tool for freezing modules, Programs/_freeze_module, is only concerned with the source file, not the module it will be used for.)
Knowledge of which frozen modules are aliases (and the identity of the original module) normally isn't important. However, this information is valuable when we go to set __file__ on frozen stdlib modules. This change updates Tools/scripts/freeze_modules.py to map aliases to the original module name (or None if not a stdlib module) in Python/frozen.c. We also add a helper function in Python/import.c to look up a frozen module's alias and add the result of that function to the frozen info returned from find_frozen().
https://bugs.python.org/issue45020
Before this change we end up duplicating effort and throwing away data in FrozenImporter.find_spec(). Now we do the work once in find_spec() and the only thing we do in FrozenImporter.exec_module() is turn the raw frozen data into a code object and then exec it.
We've added _imp.find_frozen(), add an arg to _imp.get_frozen_object(), and updated FrozenImporter. We've also moved some code around to reduce duplication, get a little more consistency in outcomes, and be more efficient.
Note that this change is mostly necessary if we want to set __file__ on frozen stdlib modules. (See https://bugs.python.org/issue21736.)
https://bugs.python.org/issue45324
Add a private C API for deadlines: add _PyDeadline_Init() and
_PyDeadline_Get() functions.
* Add _PyTime_Add() and _PyTime_Mul() functions which compute t1+t2
and t1*t2 and clamp the result on overflow.
* _PyTime_MulDiv() now uses _PyTime_Add() and _PyTime_Mul().
WaitForSingleObject() accepts timeout in milliseconds in the range
[0; 0xFFFFFFFE] (DWORD type). INFINITE value (0xFFFFFFFF) means no
timeout. 0xFFFFFFFE milliseconds is around 49.7 days.
PY_TIMEOUT_MAX is (0xFFFFFFFE * 1000) milliseconds on Windows, around
49.7 days.
Partially revert commit 37b8294d62.
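The clamp-on-overflow behaviour described for _PyTime_Add() amounts to a saturating 64-bit addition. A generic sketch of the idea (not the actual implementation; the function name is hypothetical):
```
#include <stdint.h>

/* Sketch: add two int64 timestamps, clamping to the type's range
   instead of overflowing. */
static int64_t
saturating_add_i64(int64_t t1, int64_t t2)
{
    if (t2 > 0 && t1 > INT64_MAX - t2) {
        return INT64_MAX;
    }
    if (t2 < 0 && t1 < INT64_MIN - t2) {
        return INT64_MIN;
    }
    return t1 + t2;
}
```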
On Unix, if the sem_clockwait() function is available in the C
library (glibc 2.30 and newer), the threading.Lock.acquire() method
now uses the monotonic clock (time.CLOCK_MONOTONIC) for the timeout,
rather than using the system clock (time.CLOCK_REALTIME), to not be
affected by system clock changes.
configure now checks if the sem_clockwait() function is available.
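Roughly, a timed acquire on the monotonic clock looks like this at the C level. A simplified sketch, not the actual thread_pthread.h code; the helper name is hypothetical:
```
#define _GNU_SOURCE   /* for sem_clockwait() on glibc */
#include <semaphore.h>
#include <time.h>

/* Sketch: wait on a semaphore with a timeout measured against
   CLOCK_MONOTONIC, so changes to the system clock do not affect it.
   sem_clockwait() requires glibc 2.30 or newer. */
static int
acquire_with_timeout(sem_t *sem, long long timeout_ns)
{
    struct timespec deadline;
    clock_gettime(CLOCK_MONOTONIC, &deadline);
    deadline.tv_sec  += (time_t)(timeout_ns / 1000000000LL);
    deadline.tv_nsec += (long)(timeout_ns % 1000000000LL);
    if (deadline.tv_nsec >= 1000000000L) {
        deadline.tv_sec += 1;
        deadline.tv_nsec -= 1000000000L;
    }
    return sem_clockwait(sem, CLOCK_MONOTONIC, &deadline);
}
```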
I've added a number of test-only modules. Some of those cases are covered by the recently frozen stdlib modules (and some will be once we add encodings back in). However, I figured we'd play it safe by having a set of modules guaranteed to be there during tests.
https://bugs.python.org/issue45020
PyThread_acquire_lock_timed() now clamps the timeout into the
[_PyTime_MIN; _PyTime_MAX] range (_PyTime_t type) if it is too large,
rather than calling Py_FatalError() which aborts the process.
PyThread_acquire_lock_timed() no longer uses
MICROSECONDS_TO_TIMESPEC() to compute sem_timedwait() argument, but
_PyTime_GetSystemClock() and _PyTime_AsTimespec_truncate().
Fix _thread.TIMEOUT_MAX value on Windows: the maximum timeout is
0x7FFFFFFF milliseconds (around 24.9 days), not 0xFFFFFFFF
milliseconds (around 49.7 days).
Set PY_TIMEOUT_MAX to 0x7FFFFFFF milliseconds, rather than 0xFFFFFFFF
milliseconds.
Fix PY_TIMEOUT_MAX overflow test: replace (us >= PY_TIMEOUT_MAX) with
(us > PY_TIMEOUT_MAX).
Add pytime_add() and pytime_mul() functions to pytime.c to compute
t+t2 and t*k with clamping to [_PyTime_MIN; _PyTime_MAX].
Fix pytime.h: _PyTime_FromTimeval() is not implemented on Windows.
Add the _PyTime_AsTimespec_clamp() function: similar to
_PyTime_AsTimespec(), but clamp to _PyTime_t min/max and don't raise
an exception.
PyThread_acquire_lock_timed() now uses _PyTime_AsTimespec_clamp() to
remove the Py_UNREACHABLE() code path.
* Add _PyTime_AsTime_t() function.
* Add PY_TIME_T_MIN and PY_TIME_T_MAX constants.
* Replace _PyTime_AsTimeval_noraise() with _PyTime_AsTimeval_clamp().
* Add pytime_divide_round_up() function.
* Fix integer overflow in pytime_divide().
* Add pytime_divmod() function.
During runtime startup we figure out the stdlib dir but currently throw that information away. This change preserves it and exposes it via PyConfig.stdlib_dir, _Py_GetStdlibDir(), and sys._stdlib_dir.
https://bugs.python.org/issue45211
This accomplishes 2 things:
* consolidates some common code between getpath.c and getpathp.c
* makes the helpers available to code in other files
FWIW, the signature of the join_relfile() function (in fileutils.c) intentionally mirrors that of Windows' PathCchCombineEx().
Note that this change is mostly moving code around. No behavior is meant to change.
https://bugs.python.org/issue45211
Mark the following thread_nt.h functions as static:
* AllocNonRecursiveMutex()
* FreeNonRecursiveMutex()
* EnterNonRecursiveMutex()
* LeaveNonRecursiveMutex()
py_win_perf_counter_frequency() no longer checks for
QueryPerformanceFrequency() failure. According to the
QueryPerformanceFrequency() documentation, the function can no longer
fail since Windows XP.
On Windows, time.sleep() now uses a waitable timer which has a
resolution of 100 ns (10^-7 sec). Previously, it had a resolution of 1
ms (10^-3 sec).
* On Windows, time.sleep() now calls PyErr_CheckSignals() before
resetting the SIGINT event.
* Add _PyTime_As100Nanoseconds() function.
* Complete and update time.sleep() documentation.
Co-authored-by: Livius <egyszeregy@freemail.hu>
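On the Win32 side, a high-resolution sleep of this kind boils down to a waitable timer armed with a relative due time expressed in 100 ns units. A rough sketch (error handling and SIGINT integration omitted; the use of the high-resolution timer flag is an assumption about the mechanism, and the function name is hypothetical):
```
#include <windows.h>

/* Sketch: sleep using a waitable timer with a relative due time in
   100 ns units.  CREATE_WAITABLE_TIMER_HIGH_RESOLUTION needs a recent
   Windows 10 build. */
static void
sleep_100ns(LONGLONG units_100ns)
{
    HANDLE timer = CreateWaitableTimerExW(
        NULL, NULL, CREATE_WAITABLE_TIMER_HIGH_RESOLUTION,
        TIMER_ALL_ACCESS);
    if (timer == NULL) {
        return;
    }
    LARGE_INTEGER due;
    due.QuadPart = -units_100ns;   /* negative: relative due time */
    if (SetWaitableTimer(timer, &due, 0, NULL, NULL, FALSE)) {
        WaitForSingleObject(timer, INFINITE);
    }
    CloseHandle(timer);
}
```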
The main advantage is that the files will no longer show up in diffs and PRs. That means, for a PR, the number of files / lines changed will more clearly reflect the actual change. (This is essentially an un-revert of gh-28375.)
https://bugs.python.org/issue45020
The main advantage is that the files will no longer show up in diffs and PRs. That means, for a PR, the number of files / lines changed will more clearly reflect the actual change.
https://bugs.python.org/issue45020
Here's one more small cleanup that should have been in PR gh-28319. We eliminate stdout side-effects from importing the frozen __hello__ module, and update tests accordingly. We also move the module's source file into Lib/ from Tools/freeze/flag.py.
https://bugs.python.org/issue45019
Doing this provides significant performance gains for runtime startup (~15% with all the imported modules frozen). We don't yet freeze all the imported modules because there are a few hiccups in the build systems we need to sort out first. (See bpo-45186 and bpo-45188.)
Note that in PR GH-28320 we added a command-line flag (-X frozen_modules=[on|off]) that allows users to opt out of (or into) using frozen modules. The default is still "off" but we will change it to "on" as soon as we can do it in a way that does not cause contributors pain.
https://bugs.python.org/issue45020
Refactor pytime.c:
* Add pytime_from_nanoseconds() and pytime_as_nanoseconds(),
and use explicitly these functions
* Add two empty lines between functions
* PEP 7: add braces { ... }
* C99: declare variables where they are set
* Rename private functions to lowercase
* Rename error_time_t_overflow() to pytime_time_t_overflow()
* Rename win_perf_counter_frequency() to py_win_perf_counter_frequency()
* py_get_monotonic_clock(): add an assertion to detect overflow when
the unsigned uint64_t value of mach_absolute_time() is cast to _PyTime_t
(signed int64_t).
_testcapi: use _PyTime_FromNanoseconds().
Currently we freeze several modules into the runtime. For each of these modules it is essential to bootstrapping the runtime that they be frozen. Any other stdlib module that we later freeze into the runtime is not essential. We can just as well import from the .py file. This PR lets users explicitly choose which should be used, with the new "-X frozen_modules=[on|off]" CLI flag. The default is "off" for now.
https://bugs.python.org/issue45020
There are a few things I missed in gh-27980. This is a follow-up that will make subsequent PRs cleaner. It includes fixes to tests and tools that reference the frozen modules.
https://bugs.python.org/issue45019
* Constructors of subclasses of some builtin classes (e.g. tuple, list,
frozenset) no longer accept arbitrary keyword arguments.
* A subclass of set can now define a __new__() method with additional
keyword parameters without also overriding __init__().
Release the GIL while performing isatty() system calls on arbitrary
file descriptors. In particular, this affects os.isatty(),
os.device_encoding() and io.TextIOWrapper. By extension,
io.open() in text mode is also affected.
Fix PyAiter_Check to only check for the presence of `__anext__` (not
`__aiter__`). Rename `PyAiter_Check()` to `PyAIter_Check()`,
`PyObject_GetAiter()` -> `PyObject_GetAIter()`.
Co-authored-by: Pablo Galindo Salgado <Pablogsal@gmail.com>
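Typical usage of the renamed functions from C might look like this (a sketch; the wrapper name is hypothetical):
```
#include <Python.h>

/* Sketch: get an async iterator and verify it actually supports
   __anext__, using the renamed spellings. */
static PyObject *
get_checked_aiter(PyObject *obj)
{
    PyObject *ait = PyObject_GetAIter(obj);
    if (ait == NULL) {
        return NULL;
    }
    if (!PyAIter_Check(ait)) {   /* now only checks for __anext__ */
        PyErr_SetString(PyExc_TypeError,
                        "__aiter__ returned a non-iterator object");
        Py_DECREF(ait);
        return NULL;
    }
    return ait;
}
```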
The binhex module, deprecated in Python 3.9, is now removed. The
following binascii functions, deprecated in Python 3.9, are now also
removed:
* a2b_hqx(), b2a_hqx();
* rlecode_hqx(), rledecode_hqx().
The binascii.crc_hqx() function remains available.
Fix indentation of <no Python frame> message in a faulthandler
traceback or a Fatal Python error traceback. Example:
Current thread 0x00007f03896fb740 (most recent call first):
Garbage-collecting
<no Python frame>
Frozen modules must be added to several files in order to work properly. Before this change this had to be done manually. Here we add a tool to generate the relevant lines in those files instead. This helps us avoid mistakes and omissions.
https://bugs.python.org/issue45019
Places the locals between the specials and the stack. This is the more "natural" layout for a C struct, makes the code simpler, and gives a slight speedup (~1%).
Implements a two-step check in `importlib._bootstrap._find_and_load()` to avoid locking when the module has already been imported and is ready.
---
Using `importlib.__import__()`, after this, does show a big difference:
Before:
```
$ ./python -c 'import timeit; print(timeit.timeit("__import__(\"timeit\")", setup="from importlib import __import__"))'
15.92248619502061
```
After:
```
$ ./python -c 'import timeit; print(timeit.timeit("__import__(\"timeit\")", setup="from importlib import __import__"))'
1.206068897008663
```
---
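The pattern here is the classic check/lock/re-check fast path: look the module up without the lock, and only take the lock (and re-check) when it is missing or not yet ready. A generic C sketch of the same idea, shown only to illustrate the pattern (the real change is pure Python in importlib._bootstrap; the names below are hypothetical):
```
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Generic sketch of the two-step check: an unlocked fast path, then a
   locked slow path that re-checks before doing the expensive work. */
static atomic_bool module_ready;
static pthread_mutex_t import_lock = PTHREAD_MUTEX_INITIALIZER;

static void
ensure_module_loaded(void (*do_import)(void))
{
    if (atomic_load_explicit(&module_ready, memory_order_acquire)) {
        return;  /* fast path: already imported and ready, no locking */
    }
    pthread_mutex_lock(&import_lock);
    if (!atomic_load_explicit(&module_ready, memory_order_relaxed)) {
        do_import();
        atomic_store_explicit(&module_ready, true, memory_order_release);
    }
    pthread_mutex_unlock(&import_lock);
}
```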
* Refactor dispatch logic to make flow of control clearer. Moves lltrace and dxprofile instrumentation into DISPATCH macro.
* Remove switch in interpreter loop when using computed gotos. There is no need for two nearly-duplicate dispatch tables.
* Generalize cache names for LOAD_ATTR to allow store and delete specializations.
* Factor out specialization of attribute dictionary access.
* Specialize STORE_ATTR.
Fix the os.set_inheritable() function on FreeBSD 14 for file
descriptors opened with the O_PATH flag: ignore the EBADF error from
ioctl(), fallback on the fcntl() implementation.
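The fallback is roughly this shape. A simplified sketch of the described logic, not CPython's actual code; the function name is hypothetical:
```
#include <errno.h>
#include <fcntl.h>
#include <sys/ioctl.h>

/* Sketch: mark a descriptor close-on-exec (non-inheritable).  Try the
   cheap ioctl(FIOCLEX) first; if it fails with EBADF (as happens for
   O_PATH descriptors on FreeBSD 14), fall back to fcntl(). */
static int
set_non_inheritable(int fd)
{
    if (ioctl(fd, FIOCLEX) == 0) {
        return 0;
    }
    if (errno != EBADF) {
        return -1;
    }
    int flags = fcntl(fd, F_GETFD);
    if (flags < 0) {
        return -1;
    }
    return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
}
```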
The threading debug (PYTHONTHREADDEBUG environment variable) is
deprecated in Python 3.10 and will be removed in Python 3.12. This
feature requires a debug build of Python.
* Convert "specials" array to InterpreterFrame struct, adding f_lasti, f_state and other non-debug FrameObject fields to it.
* Refactor calls, pushing the call to the interpreter upward toward _PyEval_Vector.
* Compute f_back when on thread stack, only filling in value when frame object outlives stack invocation.
* Move ownership of InterpreterFrame in generator from frame object to generator object.
* Do not create frame objects for Python calls.
* Do not create frame objects for generators.
This is basically something that I noticed while fixing test runs for another issue. It is really common to have multiline calls, and when they fail the display is kind of weird since we omit the annotations. E.g.:
```
$ ./python t.py
Traceback (most recent call last):
File "/home/isidentical/cpython/cpython/t.py", line 11, in <module>
frame_1()
^^^^^^^^^
File "/home/isidentical/cpython/cpython/t.py", line 5, in frame_1
frame_2(
File "/home/isidentical/cpython/cpython/t.py", line 2, in frame_2
return a / 0 / b / c
~~^~~
ZeroDivisionError: division by zero
```
This patch basically adds support for annotating the rest of the line, if the instruction covers multiple lines (start_line != end_line).
Automerge-Triggered-By: GH:isidentical
The traceback.c and traceback.py mechanisms now utilize the newly added code.co_positions and PyCode_Addr2Location
to print carets on the specific expressions involved in a traceback.
Co-authored-by: Pablo Galindo <Pablogsal@gmail.com>
Co-authored-by: Ammar Askar <ammar@ammaraskar.com>
Co-authored-by: Batuhan Taskaya <batuhanosmantaskaya@gmail.com>
The new resizing system works like this;
```
$ cat t.py
a + a + a + b + c + a + a + a + b + c + a + a + a + b + c + a + a + a + b + c
[repeated 99 more times]
$ ./python t.py
RESIZE: prev len = 32, new len = 66
FINAL SIZE: 56
-----------------------------------------------------
RESIZE: prev len = 32, new len = 66
RESIZE: prev len = 66, new len = 134
RESIZE: prev len = 134, new len = 270
RESIZE: prev len = 270, new len = 542
RESIZE: prev len = 542, new len = 1086
RESIZE: prev len = 1086, new len = 2174
RESIZE: prev len = 2174, new len = 4350
RESIZE: prev len = 4350, new len = 8702
FINAL SIZE: 8004
```
So now we make a considerably lower number of `_PyBytes_Resize` calls.
Automerge-Triggered-By: GH:isidentical
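The underlying strategy is plain geometric growth (roughly doubling the buffer each time it fills up), which keeps the number of resize calls logarithmic in the final size. A generic sketch of the idea, with hypothetical names:
```
#include <stdlib.h>
#include <string.h>

/* Generic sketch: roughly double the buffer whenever it runs out of
   room, so reallocations are O(log n) in the final size. */
typedef struct {
    char *data;
    size_t len;
    size_t cap;
} buffer;

static int
buffer_append(buffer *buf, const char *bytes, size_t n)
{
    if (buf->len + n > buf->cap) {
        size_t new_cap = buf->cap ? buf->cap : 32;
        while (buf->len + n > new_cap) {
            new_cap *= 2;   /* geometric growth */
        }
        char *p = realloc(buf->data, new_cap);
        if (p == NULL) {
            return -1;
        }
        buf->data = p;
        buf->cap = new_cap;
    }
    memcpy(buf->data + buf->len, bytes, n);
    buf->len += n;
    return 0;
}
```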