List lookups become a significant burden for very long lists.
This patch changes the "happy flow" path from two lookups to one.
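For illustration only, a minimal sketch of the general pattern (the helpers below are hypothetical, not the code actually touched by the patch)::

    def find_index_two_lookups(items, value):
        # Old shape of the happy path: a membership test followed by an
        # index lookup, i.e. two O(n) scans of a potentially very long list.
        if value in items:
            return items.index(value)
        return -1

    def find_index_one_lookup(items, value):
        # New shape: a single lookup; the "not found" case is handled by
        # catching ValueError instead of scanning the list a second time.
        try:
            return items.index(value)
        except ValueError:
            return -1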
Automerge-Triggered-By: GH:vsajip
Fix a regression in type() when a metaclass raises an exception. The
C function type_new() must properly report the exception when a
metaclass constructor raises an exception and the winner class is not
the metaclass.
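A rough Python-level sketch of the scenario (the classes are invented for illustration; the fix itself lives in the C function type_new())::

    class PickyMeta(type):
        def __new__(mcls, name, bases, namespace):
            if name != "Base":
                raise RuntimeError(f"refusing to create {name}")
            return super().__new__(mcls, name, bases, namespace)

    class Base(metaclass=PickyMeta):
        pass

    # type() is called with three arguments, but the "winner" metaclass is
    # PickyMeta rather than type itself; its constructor raises, and that
    # exception must be reported cleanly instead of being lost.
    try:
        type("Derived", (Base,), {})
    except RuntimeError as exc:
        print("metaclass error reported:", exc)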
Py_FrozenMain was added to the Limited C API in bpo-42591 (3.10.0a4), but
adding it to the regular C API would have been enough to fix that issue.
The function is undocumented, its tests were only added very recently
(bpo-44131), and, most importantly, it is not present in all builds of
Python, as the linker sometimes omits it as unused.
It should be added back when these issues are fixed.
Note that this does not affect Python's regular C API.
The fix only applies to ``isinstance``; ``issubclass`` is not affected (it was already working correctly), so I also updated the news entry to reflect that.
A previous commit broke a check in sysconfig when building CPython itself.
This caused builds of the standard library modules to search the wrong
location (the installed location rather than the source directory) for
header files, with the net effect that a ``make install``
incorrectly caused all extension modules to be rebuilt,
using incorrect include file paths.
When building Python, we need two distinct "include" directories:
- source .h files
- install target for .h files
Note that this doesn't matter except when building Python from source.
Historically:
- source .h files were in the distutils scheme under 'include'
- the install directory was in the distutils.command.install scheme
under 'headers'
GH-24549 merged these; sysconfig is now the single source of truth and
distutils is derived from it.
This commit introduces a "secret" scheme path, 'headers', which contains
the install target. It is only present when building Python.
The distutils code uses it if present, and falls back to 'include'.
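A minimal sketch of that fallback, assuming sysconfig's path dictionary as the data source (the function name is made up for the example)::

    import sysconfig

    def header_install_dir():
        # 'headers' is only present in the scheme used while building
        # Python itself; every other scheme exposes only the regular
        # 'include' path, so fall back to it.
        paths = sysconfig.get_paths()
        return paths.get("headers", paths["include"])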
Co-authored-by: Ned Deily <nad@python.org>
When the parser does a second pass to check for errors, these rules can
have small side effects, as they may advance the parser beyond the point
reached in the first pass. This can cause the tokenizer to ask for extra
tokens in interactive mode, making it show the prompt instead of failing
instantly.
To avoid this, add a new mode to the tokenizer that is activated in the
second pass and stops it from asking for new tokens once the interactive
line is finished. As parsing should have reached the last line in the
first pass, the second pass should not need to ask for more tokens.
The timeit doc references Tim Peters' introduction to Chapter 18,
Algorithms, of the second edition of the Python Cookbook. The first
edition predates timeit. The third edition instead has Chapter 1, Data
Structures and Algorithms, without Tim's introduction.
The invalid assignment rules are very delicate, since the parser can
easily raise an invalid assignment error when a keyword argument is
provided. As they are very deep in the grammar tree, it is very difficult
to specify in which contexts these rules can be used and in which they can't.
For that, we need to use a different version of the rule that doesn't do
error checking in those situations where we don't want the rule to raise
(keyword arguments and generator expressions).
We also need to check whether we are in a left-recursive rule, as those can try
to eagerly advance the parser even if the parse will fail at the end of
the expression. Failing to do this allows the parser to start parsing a
call as a tuple and incorrectly identify a keyword argument as an
invalid assignment, before it realizes that it was not a tuple after all.
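A hedged illustration of the ambiguity (these snippets are made up to show the shape of the problem, not taken from the actual tests)::

    # Inside a call, "name=" introduces a keyword argument, so the invalid
    # assignment rules must not fire here:
    dict(answer=42)

    # The same applies to a generator expression passed as a call argument:
    sum(n * n for n in range(10))

    # Outside those contexts, a bad target should still be rejected; for
    # example, the following line raises SyntaxError ("cannot assign to ..."):
    #
    #     (x + 1) = 2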
Fix a crash at Python exit when a deallocator function removes the
last strong reference to a heap type.
Don't read type memory after calling basedealloc() since
basedealloc() can deallocate the type and free its memory.
_PyMem_IsPtrFreed() argument is now constant.
* Remove 'zombie' frames. We won't need them once we are allocating fixed-size frames.
* Add co_nlocalplus field to code object to avoid recomputing size of locals + frees + cells.
* Move locals, cells and freevars out of frame object into separate memory buffer.
* Use per-threadstate allocated memory chunks for local variables.
* Move globals and builtins from frame object to per-thread stack.
* Move (slow) locals from frame object to per-thread stack.
* Move internal frame functions to internal header.
Moreover, Py_FrozenMain() relies on Py_InitializeFromConfig() to
handle the PYTHONUNBUFFERED environment variable and to configure C
stdio streams such as stdout (making them unbuffered).