* Convert "specials" array to InterpreterFrame struct, adding f_lasti, f_state and other non-debug FrameObject fields to it.
* Refactor calls, pushing the call to the interpreter upward toward _PyEval_Vector.
* Compute f_back when on thread stack, only filling in value when frame object outlives stack invocation.
* Move ownership of InterpreterFrame in generator from frame object to generator object.
* Do not create frame objects for Python calls.
* Do not create frame objects for generators.
* Remove struct _node from the stable ABI list
This struct was removed along with the old parser in Python 3.9 (PEP 617)
* Stable ABI list: Use the public name "PyFrameObject" rather than "_frame"
* Ensure limited API doesn't contain private names
Names prefixed by an underscore are private by definition.
* Add a blurb
* Specialize LOAD_ATTR with LOAD_ATTR_SLOT and LOAD_ATTR_SPLIT_KEYS
* Move dict-common.h to internal/pycore_dict.h
* Add LOAD_ATTR_WITH_HINT specialized opcode.
* Quicken code in a function if it contains loops.
* Specialize LOAD_ATTR for module attributes.
* Add specialization stats
These were reverted in gh-26530 (commit 17c4edc) due to refleaks.
* 2c1e258 - Compute deref offsets in compiler (gh-25152)
* b2bf2bc - Add new internal code objects fields: co_fastlocalnames and co_fastlocalkinds. (gh-26388)
This change fixes the refleaks.
https://bugs.python.org/issue43693
* Revert "bpo-43693: Compute deref offsets in compiler (gh-25152)"
This reverts commit b2bf2bc1ec.
* Revert "bpo-43693: Add new internal code objects fields: co_fastlocalnames and co_fastlocalkinds. (gh-26388)"
This reverts commit 2c1e2583fd.
These two commits are breaking the refleak buildbots.
A number of places in the code base (notably ceval.c and frameobject.c) rely on mapping variable names to indices in the frame "locals plus" array (AKA fast locals), and thus opargs. Currently the compiler indirectly encodes that information on the code object as the tuples co_varnames, co_cellvars, and co_freevars. At runtime the dependent code must calculate the proper mapping from those, which isn't ideal and impacts performance-sensitive sections. This is something we can easily address in the compiler instead.
This change addresses the situation by replacing internal use of co_varnames, etc. with a single combined tuple of names in locals-plus order, along with a minimal array mapping each to its kind (local vs. cell vs. free). These two new PyCodeObject fields, co_fastlocalnames and co_fastlocalkinds, are not exposed to Python code for now, but co_varnames, etc. are still available with the same values as before (though computed lazily).
Aside from the (mild) performance improvement, there are a number of other benefits:
* there's now a clear, direct relationship between locals-plus and variables
* code that relies on the locals-plus-to-name mapping is simpler
* marshaled code objects are smaller and serialize/de-serialize faster
Also note that we can take this approach further by expanding the possible values in co_fastlocalkinds to include specific argument types (e.g. positional-only, kwargs). Doing so would allow further speed-ups in _PyEval_MakeFrameVector(), which is where args get unpacked into the locals-plus array. It would also allow us to shrink marshaled code objects even further.
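To make the layout concrete, here is a rough Python sketch (not CPython's C implementation) that derives such a combined name tuple and kind array from the legacy co_varnames/co_cellvars/co_freevars tuples. The kind constants and the helper name are illustrative only, and the corner case of arguments that are also cells is ignored.
```
# Hypothetical sketch: build a "locals plus" name tuple plus one kind byte per
# name from the legacy per-kind tuples. Kind values are illustrative only.
CO_FAST_LOCAL, CO_FAST_CELL, CO_FAST_FREE = 0x20, 0x40, 0x80

def fast_locals_layout(code):
    names, kinds = [], []
    for name in code.co_varnames:       # arguments and plain locals
        names.append(name)
        kinds.append(CO_FAST_LOCAL)
    for name in code.co_cellvars:       # locals referenced by inner functions
        names.append(name)
        kinds.append(CO_FAST_CELL)
    for name in code.co_freevars:       # names closed over from outer scopes
        names.append(name)
        kinds.append(CO_FAST_FREE)
    return tuple(names), bytes(kinds)

def outer():
    hidden = 1
    def inner():
        return hidden
    return inner

print(fast_locals_layout(outer.__code__))
print(fast_locals_layout(outer().__code__))   # the free variable shows up here
```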
https://bugs.python.org/issue43693
The invalid assignment rules are very delicate since the parser can
easily raise an invalid assignment when a keyword argument is provided.
As these rules are very deep in the grammar tree, it is very difficult to
specify in which contexts they can be used and in which they can't.
To handle that, we need to use a different version of the rule that doesn't do
error checking in those situations where we don't want the rule to raise
(keyword arguments and generator expressions).
We also need to check whether we are in a left-recursive rule, as those can try
to eagerly advance the parser even if the parse will fail at the end of
the expression. Failing to do this allows the parser to start parsing a
call as a tuple and incorrectly identify a keyword argument as an
invalid assignment, before it realizes that it was not a tuple after all.
* Remove 'zombie' frames. We won't need them once we are allocating fixed-size frames.
* Add co_nlocalplus field to code object to avoid recomputing size of locals + frees + cells.
* Move locals, cells and freevars out of frame object into separate memory buffer.
* Use per-threadstate allocated memory chunks for local variables.
* Move globals and builtins from frame object to per-thread stack.
* Move (slow) locals from frame object to per-thread stack.
* Move internal frame functions to internal header.
The patch from [bpo-44074]() does not account for a possibly non-English locale and blindly greps for "HEAD branch" in a possibly localized text.
Automerge-Triggered-By: GH:pitrou
- Reformat the C API and ABI Versioning page (and extend/clarify a bit)
- Rewrite the stable ABI docs into a general text on C API Compatibility
- Add a list of Limited API contents, and notes for the individual items.
- Replace `Include/README.rst` with a link to a devguide page with the same info
"Zero cost" exception handling.
* Uses a lookup table to determine how to handle exceptions.
* Removes SETUP_FINALLY and POP_TOP block instructions, eliminating (most of) the runtime overhead of try statements.
* Reduces the size of the frame object by about 60%.
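The lookup-table idea can be sketched in a few lines of Python; the table layout and the offsets below are invented for illustration and are not CPython's actual exception-table encoding.
```
import bisect

# Hypothetical table: (start, end, handler) instruction-offset ranges, sorted
# by start. The compiler emits this statically; nothing is pushed or popped
# at runtime while the try body executes.
EXCEPTION_TABLE = [
    (4, 20, 40),    # try body covering offsets [4, 20) -> handler at 40
    (44, 60, 72),   # another protected region
]

def find_handler(offset):
    """Return the handler offset for an exception raised at `offset`,
    or None if the exception should propagate to the caller."""
    starts = [start for start, _, _ in EXCEPTION_TABLE]
    i = bisect.bisect_right(starts, offset) - 1
    if i >= 0:
        start, end, handler = EXCEPTION_TABLE[i]
        if start <= offset < end:
            return handler
    return None

print(find_handler(10))   # 40: raised inside the first protected region
print(find_handler(30))   # None: not covered, propagate to the caller
```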
- Remove HAVE_X509_VERIFY_PARAM_SET1_HOST check
- Update hashopenssl to require OpenSSL 1.1.1
- multissltests only OpenSSL > 1.1.0
- ALPN is always supported
- SNI is always supported
- Remove deprecated NPN code. Python wrappers are no-op.
- ECDH is always supported
- Remove OPENSSL_VERSION_1_1 macro
- Remove locking callbacks
- Drop PY_OPENSSL_1_1_API macro
- Drop HAVE_SSL_CTX_CLEAR_OPTIONS macro
- SSL_CTRL_GET_MAX_PROTO_VERSION is always defined now
- security level is always available now
- get_num_tickets is available with TLS 1.3
- X509_V_ERR MISMATCH is always available now
- Always set SSL_MODE_RELEASE_BUFFERS
- X509_V_FLAG_TRUSTED_FIRST is always available
- get_ciphers is always supported
- SSL_CTX_set_keylog_callback is always available
- Update Modules/Setup with static link example
- Mention PEP in whatsnew
- Drop 1.0.2 and 1.1.0 from GHA tests
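Some of the features that are now assumed to be always present can be observed from Python (a small check, not part of the change itself):
```
import ssl

print(ssl.OPENSSL_VERSION)                      # built against OpenSSL 1.1.1 or newer
print(ssl.HAS_ALPN, ssl.HAS_SNI, ssl.HAS_ECDH)  # all expected to be True
print(ssl.HAS_TLSv1_3)                          # TLS 1.3 support
```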
* Use instruction offset, rather than bytecode offset. Streamlines interpreter dispatch a bit, and removes most EXTENDED_ARGs for jumps.
* Change some uses of PyCode_Addr2Line to PyFrame_GetLineNumber
The stable_abi.py script no longer parses macros. Macro targets can be
static inline functions which are not part of the stable ABI, only
part of the limited C API.
Run "make regen-limited-abi" to exclude PyType_HasFeature from
Doc/data/stable_abi.dat.
- [x] Build OpenSSL 1.1.1k for macOS
- [x] Build OpenSSL 1.1.1k for Windows
I have also updated multissl tester and various CI configurations to use latest OpenSSL. The versions were all over the place.
Signed-off-by: Christian Heimes <christian@python.org>
Automerge-Triggered-By: GH:tiran
Remove the pyarena.h header file with functions:
* PyArena_New()
* PyArena_Free()
* PyArena_Malloc()
* PyArena_AddPyObject()
These functions were undocumented, excluded from the limited C API,
and were only used internally by the compiler.
Add pycore_pyarena.h header. Rename functions:
* PyArena_New() => _PyArena_New()
* PyArena_Free() => _PyArena_Free()
* PyArena_Malloc() => _PyArena_Malloc()
* PyArena_AddPyObject() => _PyArena_AddPyObject()
Remove parser functions using the "struct _mod" type, because the
AST C API was removed:
* PyParser_ASTFromFile()
* PyParser_ASTFromFileObject()
* PyParser_ASTFromFilename()
* PyParser_ASTFromString()
* PyParser_ASTFromStringObject()
These functions were undocumented and excluded from the limited C
API.
Add pycore_parser.h internal header file. Rename functions:
* PyParser_ASTFromFileObject() => _PyParser_ASTFromFile()
* PyParser_ASTFromStringObject() => _PyParser_ASTFromString()
These functions are no longer exported (replace PyAPI_FUNC() with
extern).
Also remove the _PyPegen_run_parser_from_file() function. Update
test_peg_generator to use _PyPegen_run_parser_from_file_pointer()
instead.
Remove the compiler functions using "struct _mod" type, because the
public AST C API was removed:
* PyAST_Compile()
* PyAST_CompileEx()
* PyAST_CompileObject()
* PyFuture_FromAST()
* PyFuture_FromASTObject()
These functions were undocumented and excluded from the limited C API.
Rename functions:
* PyAST_CompileObject() => _PyAST_Compile()
* PyFuture_FromASTObject() => _PyFuture_FromAST()
Moreover, _PyFuture_FromAST() is no longer exported (replace
PyAPI_FUNC() with extern). _PyAST_Compile() remains exported for
test_peg_generator.
Also remove the compatibility functions:
* PyAST_Compile()
* PyAST_CompileEx()
* PyFuture_FromAST()
Rename Include/symtable.h to Include/internal/pycore_symtable.h,
don't export symbols anymore (replace PyAPI_FUNC and PyAPI_DATA with
extern) and rename functions:
* PyST_GetScope() to _PyST_GetScope()
* PySymtable_BuildObject() to _PySymtable_Build()
* PySymtable_Free() to _PySymtable_Free()
Remove PySymtable_Build(), Py_SymtableString() and
Py_SymtableStringObject() functions.
The Py_SymtableString() function was part of the stable ABI by mistake
but it could not be used, since the symtable.h header file was
excluded from the limited C API.
The Python symtable module remains available and is unchanged.
test_peg_generator now defines the _Py_TEST_PEGEN macro when building C
code so that PyAST_Validate() is not called in Parser/pegen.c. Moreover, it
defines Py_BUILD_CORE_MODULE macro to get access to the internal
C API.
Remove "global_ast_state" from Python-ast.c when it's built by
test_peg_generator: always get the AST state from the current interpreter.
Add frozen modules to sys.stdlib_module_names. For example, add
"_frozen_importlib" and "_frozen_importlib_external" names.
Add "list_frozen" command to Programs/_testembed.
Move Include/{odictobject.h,parser_interface.h,picklebufobject.h,pydebug.h,pyfpe.h}
into Include/cpython/.
Parser: peg_api: include Python.h instead of parser_interface.h.
Add a new configure --without-static-libpython option to not build
the libpythonMAJOR.MINOR.a static library and not install the
python.o object file.
Fix smelly.py and stable_abi.py tools when libpython3.10.a is
missing.
* Add to the peg generator a new directive ('&&') that makes it possible to expect
a token and hard-fail the parsing if the token is not found. This
allows syntax errors for missing tokens to be emitted quickly.
* Use the new grammar element to hard-fail if the ':' is missing before
suites.
Add a private list of all stdlib modules: _Py_module_names.
* Add Tools/scripts/generate_module_names.py script.
* Makefile: Add "make regen-module-names" command.
* setup.py: Add --list-module-names option.
* GitHub Actions and Travis CI also run "make regen-module-names",
not only "make regen-all", to ensure that the module names remain
up to date.
The distutils bdist_wininst command, deprecated in Python 3.8, has been
removed. The distutils bdist_wheel command is now recommended to
distribute binary packages on Windows.
* Remove Lib/distutils/command/bdist_wininst.py
* Remove PC/bdist_wininst/ project
* Remove Lib/distutils/command/wininst-*.exe programs
* Remove all references to bdist_wininst
On Fedora 31, gdb is using Python 3.7.9; calling `proxyval` on an instance with a dictionary fails because of the `dict.iteritems` usage. This PR changes the code to be compatible with both py2 and py3.
This change seemed small enough to not need an issue and news blurb; if one is required, please let me know.
Automerge-Triggered-By: GH:benjaminp
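For context, a minimal sketch of the kind of py2/py3-compatible iteration involved; the helper name is hypothetical, not the actual python-gdb.py code:
```
def iter_dict_items(d):
    """Iterate (key, value) pairs on both Python 2 and Python 3."""
    # dict.iteritems() exists only on Python 2; items() works on both
    # (a list on 2, a view on 3 -- either is fine for iteration).
    if hasattr(d, "iteritems"):
        return d.iteritems()
    return iter(d.items())
```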
* Tkinter functions and constructors which need a default root window
now raise a RuntimeError with a descriptive message instead of an obscure
AttributeError or NameError if the root window is not created yet or cannot
be created automatically.
* Add tests for all functions which use default root window.
* Fix import in the pynche script.
The smelly.py script now also checks the Python dynamic library and
extension modules, not only the Python static library. Also make the
script more verbose: it now explains what it does.
The GitHub Action job now builds Python with the libpython dynamic
library.
Fix a race condition in "make regen-all" when the make -jN option is used
to run jobs in parallel. The clinic.py script now only uses atomic
writes to write files. Moreover, generated files are now left
unchanged if the content does not change, so that the file
modification time does not change.
The "make regen-all" command runs "make clinic" and "make
regen-importlib" targets:
* "make regen-importlib" builds object files (ex: Modules/_weakref.o)
from source files (ex: Modules/_weakref.c) and clinic files (ex:
Modules/clinic/_weakref.c.h)
* "make clinic" always rewrites all clinic files
(ex: Modules/clinic/_weakref.c.h)
Since there is no dependency between "clinic" and "regen-importlib"
Makefile targets, these two targets can be run in parallel. Moreover,
half of clinic.py file writes are not atomic and so there is a race
condition when "make regen-all" runs jobs in parallel using make -jN
option (which can be passed in MAKEFLAGS environment variable).
Fix clinic.py to make all file writes atomic:
* Add write_file() function to ensure that all file writes are
atomic: write into a temporary file and then use os.replace().
* Moreover, write_file() doesn't recreate or modify the file if the
content does not change, to avoid altering the file modification
time.
* Update test_clinic to verify these assertions with a functional
test.
* Remove Clinic.force attribute which was no longer used, whereas
Clinic.verify remains useful.
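A minimal sketch of the write_file() pattern described above (simplified, not the actual clinic.py code): write to a temporary file in the same directory, replace the destination atomically, and skip the write entirely when the content is unchanged.
```
import os
import tempfile

def write_file(filename, new_contents):
    # Leave the file (and its modification time) alone if nothing changed.
    try:
        with open(filename, "r", encoding="utf-8") as f:
            if f.read() == new_contents:
                return
    except FileNotFoundError:
        pass
    # Write to a temporary file in the same directory, then atomically
    # replace the destination, so readers never see a half-written file.
    dirname = os.path.dirname(filename) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(new_contents)
        os.replace(tmp, filename)
    except BaseException:
        os.unlink(tmp)
        raise
```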
Adds support to Tools/i18n/pygettext.py for gettext calls in f-strings. This process is done by parsing the f-strings, processing each value, and flagging the ones which contain a gettext call.
Co-authored-by: Batuhan Taskaya <batuhanosmantaskaya@gmail.com>
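A rough illustration (not pygettext's actual code) of how gettext calls inside f-strings can be found by parsing the source and walking the AST:
```
import ast

source = 'msg = f"{_(\'Hello\')} {name}!"'

for node in ast.walk(ast.parse(source)):
    # Formatted values inside an f-string appear as ordinary expressions in
    # the AST, so a gettext-style call shows up as an ast.Call to "_".
    if (isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "_"
            and node.args
            and isinstance(node.args[0], ast.Constant)):
        print("translatable string:", node.args[0].value)   # Hello
```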
Left-recursive rules need to check for errors explicitly, since
even if the rule returns NULL, the parsing might continue and lead
to long-distance failures.
Co-authored-by: Pablo Galindo <Pablogsal@gmail.com>
Use PyLong_FromLong(0) and PyLong_FromLong(1) of the public C API
instead. For Python internals, _PyLong_GetZero() and _PyLong_GetOne()
of pycore_long.h can be used.
* Implement running the parser a second time for the error messages
The first parser run is only responsible for detecting whether
there is a `SyntaxError` or not. If there isn't, the AST gets returned.
Otherwise, the parser is run a second time with all the `invalid_*`
rules enabled so that all the customized error messages get produced.
The original tool wasn't working right and it was simpler to create a new one, partially re-using some of the old code. At this point the tool runs properly on master. (Try: ./python Tools/c-analyzer/c-analyzer.py analyze.) It takes ~40 seconds on my machine to analyze the full CPython code base.
Note that we'll need to iron out some OS-specific stuff (e.g. preprocessor). We're okay though since this tool isn't used yet in our workflow. We will also need to verify the analysis results in detail before activating the check in CI, though I'm pretty sure it's close.
https://bugs.python.org/issue36876
This commit reverts commit ac0333e1e1 as the original links are working again and they provide extended features such as comments and alternative versions.
Remove the global _Py_CheckRecursionLimit variable: it has been
replaced by ceval.recursion_limit of the PyInterpreterState
structure.
There is no need to keep the variable for the stable ABI, since
Py_EnterRecursiveCall() and Py_LeaveRecursiveCall() were not usable
in Python 3.8 and older: these macros accessed PyThreadState members,
whereas the PyThreadState structure is opaque in the limited C API.
* Add new capability to the PEG parser to type variable assignments. For instance:
```
| a[asdl_stmt_seq*]=';'.small_stmt+ [';'] NEWLINE { a }
```
* Add new sequence types from the asdl definition (automatically generated)
* Make `asdl_seq` type a generic aliasing pointer type.
* Create a new `asdl_generic_seq` for the generic case using `void*`.
* The old `asdl_seq_GET`/`asdl_seq_SET` macros are now typed.
* New `asdl_seq_GET_UNTYPED`/`asdl_seq_SET_UNTYPED` macros for dealing with generic sequences.
* Change all possible `asdl_seq` types to use specific versions everywhere.
NuGet automatically includes .props file from the build directory in the
target using the package, but only if the .props file has the correct
name: it must be $(id).props
Rename python.props correspondingly in all the nuspec variants. Also
keep python.props as it was for backward compatibility.
Currently, empty sequences in gather rules make the conditional for
gather rules fail as empty sequences evaluate as "False". We need to
explicitly check for "None" (the failure condition) to avoid false
negatives.
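In plain Python terms, the pitfall looks like this (the parse function is hypothetical):
```
def parse_gather():
    """Hypothetical gather-rule helper: returns a (possibly empty) list on
    success and None on failure."""
    return []            # a successful parse of zero items

seq = parse_gather()
if not seq:              # WRONG: treats an empty (but successful) parse as failure
    print("looks like a failure")
if seq is None:          # RIGHT: only None signals a parse failure
    print("parse failed")
else:
    print("parsed", len(seq), "items")
```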
Prevent installation on Windows 8 and earlier.
Download UCRT on demand when required (non-updated Windows 8.1 only)
Add reference to py launcher to post-install message
Replace MIDL-generated file with manual GUID definition.
Use the same .def file for release and debug builds.
Update setup build to support latest toolset
This commit removes the old parser, the deprecated parser module, the old parser compatibility flags and environment variables and all associated support code and documentation.
Fix :mod:`ssl` code to be compatible with OpenSSL 1.1.x builds that use
``no-deprecated`` and ``--api=1.1.0``.
Note: Tests assume full OpenSSL API and fail with limited API.
Signed-off-by: Christian Heimes <christian@python.org>
Co-authored-by: Mark Wright <gienah@gentoo.org>
Previously, the result could have been an instance of a subclass of int.
Also revert bpo-26202 and make the start, stop and step attributes of the range
object have the exact type int.
Add private function _PyNumber_Index() which preserves the old behavior
of PyNumber_Index() for performance to use it in the conversion functions
like PyLong_AsLong().
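The user-visible effect can be seen through operator.index(), which calls PyNumber_Index(); a small illustration (a DeprecationWarning about __index__ returning an int subclass may also be emitted):
```
import operator

class MyInt(int):
    pass

class Box:
    def __index__(self):
        return MyInt(7)      # an int *subclass*

result = operator.index(Box())
print(type(result))          # <class 'int'>: the result is now an exact int
```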
These are like keywords but they only work in context; they are not reserved except when there is an exact match.
This would enable things like match statements without reserving `match` (which would be bad for the `re.match()` function and probably lots of other places).
Automerge-Triggered-By: @gvanrossum
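For example, with `match` as a soft keyword both of the following coexist in the same module (on a Python that has the match statement):
```
import re

match = re.match(r"(\d+)", "42abc")   # "match" as an ordinary name: still fine
print(match.group(1))                 # 42

def classify(value):
    match value:                      # "match" as a keyword, only in this context
        case 0:
            return "zero"
        case _:
            return "something else"

print(classify(0))                    # zero
```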
The scripts in `Tools/peg_generator/scripts` mostly assume that
`ast.parse` and `compile` use the old parser, since this was the
state of things while we were developing them. They need to be
updated to always use the correct parser. `_peg_parser` is being
extended to support both parsing and compiling with both parsers.
When there are 2 negative lookaheads in the same rule, let's say `!"(" blabla "," !")"`, there will be 2 `FunctionCall`s whose assigned value is None. Currently, when `add_var` is called,
the first one will be ignored, but when the second lookahead's var is sent to dedupe it
will be returned as `None_1`, and this won't be ignored by the declaration generator in `visit_Alt`. This patch adds an explicit check to `add_var` to distinguish whether there is a variable or not.
Create a `make venv` target, that creates a virtual environment
and installs the dependencies in that venv. `make time` and all
the related targets are changed to use the virtual environment
python.
Automerge-Triggered-By: @pablogsal
The following improvements are implemented in this commit:
- `p->error_indicator` is set, in case malloc or realloc fail.
- Avoid memory leaks in the case that realloc fails.
- Call `PyErr_NoMemory()` instead of `PyErr_Format()`, because it requires no memory.
Co-authored-by: Pablo Galindo <Pablogsal@gmail.com>
This is the initial implementation of PEP 615, the zoneinfo module,
ported from the standalone reference implementation (see
https://www.python.org/dev/peps/pep-0615/#reference-implementation for a
link, which has a more detailed commit history).
This includes (hopefully) all functional elements described in the PEP,
but documentation is found in a separate PR. This includes:
1. A pure python implementation of the ZoneInfo class
2. A C accelerated implementation of the ZoneInfo class
3. Tests with 100% branch coverage for the Python code (though C code
coverage is less than 100%).
4. A compile-time configuration option on Linux (though not on Windows)
Differences from the reference implementation:
- The module is arranged slightly differently: the accelerated module is
`_zoneinfo` rather than `zoneinfo._czoneinfo`, which also necessitates
some changes in the test support function. (Suggested by Victor
Stinner and Steve Dower.)
- The tests are arranged slightly differently and do not include the
property tests. The tests live at test/test_zoneinfo/test_zoneinfo.py
rather than test/test_zoneinfo.py or test/test_zoneinfo/__init__.py
because we may do some refactoring in the future that would likely
require this separation anyway; we may:
- include the property tests
- automatically run all the tests against both pure Python and C,
rather than manually constructing C and Python test classes (similar
to the way this works with test_datetime.py, which generates C
and Python test cases from datetimetester.py).
- This includes a compile-time configuration option on Linux (though not
on Windows); added with much help from Thomas Wouters.
- Integration into the CPython build system is obviously different from
building a standalone zoneinfo module wheel.
- This includes configuration to install the tzdata package as part of
CI, though only on the coverage jobs. Introducing a PyPI dependency as
part of the CI build was controversial, and this is seen as less of a
major change, since the coverage jobs already depend on pip and PyPI.
Additional changes that were introduced as part of this PR, most / all of
which were backported to the reference implementation:
- Fixed reference and memory leaks
With much debugging help from Pablo Galindo
- Added smoke tests ensuring that the C and Python modules are built
The import machinery can be somewhat fragile, and the "seamlessly falls
back to pure Python" nature of this module makes it so that a problem
building the C extension or a failure to import the pure Python version
might easily go unnoticed.
- Adjustments to zoneinfo.__dir__
Suggested by Petr Viktorin.
- Slight refactorings as suggested by Steve Dower.
- Removed unnecessary if check on std_abbr
Discovered this because of a missing line in branch coverage.
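A short usage example of the new module (time zone data must be available, either from the system or from the tzdata package mentioned above):
```
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/Los_Angeles")
dt = datetime(2020, 10, 31, 12, tzinfo=tz)
print(dt)                      # 2020-10-31 12:00:00-07:00 (PDT)
print(dt + timedelta(days=1))  # DST has ended: the next day shows -08:00 (PST)
```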
OpenSSL can be built without support for TLS 1.0 and 1.1. The ssl module
now correctly adheres to OPENSSL_NO_TLS1 and OPENSSL_NO_TLS1_1 flags.
Also update multissltest to test with latest OpenSSL and LibreSSL
releases.
Signed-off-by: Christian Heimes <christian@python.org>
Automerge-Triggered-By: @tiran
* 1.0.2u (EOL)
* 1.1.0l (EOL)
* 1.1.1g
* 3.0.0-alpha2 (disabled for now)
Build the FIPS provider and create a FIPS configuration file for OpenSSL
3.0.0.
Signed-off-by: Christian Heimes <christian@python.org>
Automerge-Triggered-By: @tiran
Don't hardcode defining_class parameter name to "cls":
* Define CConverter.set_template_dict(): do nothing by default
* CLanguage.render_function() now calls set_template_dict() on all
converters.
This is for the C generator:
- Disallow rule and variable names starting with `_`
- Rename most local variable names generated by the parser to start with `_`
Exceptions:
- Renaming `p` to `_p` will be a separate PR
- There are still some names that might clash, e.g.
- anything starting with `Py`
- C reserved words (`if` etc.)
- Macros like `EXTRA` and `CHECK`
Module C state is now accessible from C-defined heap type methods (PEP 573).
Patch by Marcel Plch and Petr Viktorin.
Co-authored-by: Marcel Plch <mplch@redhat.com>
Co-authored-by: Victor Stinner <vstinner@python.org>
This commit also makes it possible to pass flags to the new parser in all interfaces and fixes a bug in the parser generator that was causing rules with actions to be inlined, making them disappear.
This is one of the few files that has intimate knowledge of the pyc file
format. Since it lacks tests it tends to become outdated fairly quickly.
At present it has been broken since the introduction of PEP 552.
Previously every test was building an extension module and
loading it into sys.modules. The tearDown function was thus
not able to clean up correctly, resulting in memory leaks.
With this commit, every test function now builds the extension
module and runs the actual test code in a new process
(using assert_python_ok), so that sys.modules stays intact
and no memory gets leaked.
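The pattern looks roughly like this (a simplified sketch; the stdlib array module stands in for the freshly built extension):
```
import unittest
from test.support.script_helper import assert_python_ok

class ExtensionBuildTest(unittest.TestCase):
    def test_import_in_child_process(self):
        # Run the import in a child interpreter so the parent's sys.modules
        # stays intact and nothing leaks into later tests.
        code = "import array; print(array.array('i', [1, 2, 3])[1])"
        rc, out, err = assert_python_ok("-c", code)
        self.assertEqual(out.strip(), b"2")

if __name__ == "__main__":
    unittest.main()
```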
When there is a SyntaxError after reading the last input character from
the tokenizer and if no newline follows it, the error message used to be
`unexpected EOF while parsing`, which is wrong.
python-gdb.py now checks for "take_gil" function name to check if a
frame tries to acquire the GIL, instead of checking for
"pthread_cond_timedwait" which is specific to Linux and can be a
different condition than the GIL.
Break up COMPARE_OP into four logically distinct opcodes:
* COMPARE_OP for rich comparisons
* IS_OP for 'is' and 'is not' tests
* CONTAINS_OP for 'in' and 'not in' tests
* JUMP_IF_NOT_EXC_MATCH for checking exceptions in 'try-except' statements.
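The split is easy to observe with the dis module (the exact output depends on the Python version):
```
import dis

def f(a, b):
    return (a is b), (a in b), (a not in b)

dis.dis(f)   # shows IS_OP and CONTAINS_OP instead of COMPARE_OP for these tests
```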
Add ast.unparse() as a function in the ast module that can be used to unparse an
ast.AST object and produce a string with code that would produce an equivalent ast.AST
object when parsed.
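For example:
```
import ast

tree = ast.parse("x = (1 + 2) * 3")
print(ast.unparse(tree))                       # x = (1 + 2) * 3
print(ast.dump(ast.parse(ast.unparse(tree))) ==
      ast.dump(tree))                          # True: round-trips to an equivalent AST
```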
test_urllib commented since 2007:
commit d9880d07fc
Author: Facundo Batista <facundobatista@gmail.com>
Date: Fri May 25 04:20:22 2007 +0000
Commenting out the tests until find out who can test them in
one of the problematic enviroments.
pynche code commented since 1998 and 2001:
commit ef30092207
Author: Barry Warsaw <barry@python.org>
Date: Tue Dec 15 01:04:38 1998 +0000
Added most of the mechanism to change the strips from color variations
to color constants (i.e. red constant, green constant, blue
constant). But I haven't hooked this up yet because the UI gets more
crowded and the arrows don't reflect the correct values.
Added "Go to Black" and "Go to White" buttons.
commit 741eae0b31
Author: Barry Warsaw <barry@python.org>
Date: Wed Apr 18 03:51:55 2001 +0000
StripWidget.__init__(), update_yourself(): Removed some unused local
variables reported by PyChecker.
__togglegentype(): PyChecker accurately reported that the variable
__gentypevar was unused -- actually this whole method is currently
unused so comment it out.
This is partly a cleanup of the code. It also is preparation for getting the variables from the source (cross-platform) rather than from the symbols.
The change only touches the tool (and its tests).
The "Slot" helper (descriptor) is leaking references due to its caching mechanism. The change includes a partial fix to Slot, but also adds Variable.storage to replace the problematic use of Slot.
https://bugs.python.org/issue38187
This fixes the exception `ValueError: invalid literal for int() with base 10`
raised if `str(gdbval)` returns a hexadecimal value (e.g. '0xa0'). This is the case if
the output-radix is set to 16 in gdb. See
https://sourceware.org/gdb/onlinedocs/gdb/Numbers.html for more information.
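The failure, and one way around it, in plain Python (the actual python-gdb.py fix may differ in detail):
```
text = "0xa0"            # what str(gdbval) returns when gdb's output-radix is 16
# int(text)              # ValueError: invalid literal for int() with base 10
print(int(text, 0))      # 160: base 0 infers the base from the 0x prefix
```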
In ArgumentClinic, value "NULL" should now be used only for unrepresentable default values
(like in the optional third parameter of getattr). "None" should be used if None is accepted
as argument and passing None has the same effect as not passing the argument at all.
Now the fields have names! Much easier to keep straight as a
reader than the elements of an 18-tuple.
Runs about 10-15% slower: from 10.8s to 12.3s, on my laptop.
Fortunately that's perfectly fine for this maintenance script.
bpo-37151: remove special case for PyCFunction from PyObject_Call
Also, make the undocumented function PyCFunction_Call an alias
of PyObject_Call and deprecate it.
Since PEP 393 in Python 3.3, this value is always 0x10ffff, the
maximum codepoint in Unicode; there's no longer such a thing as a
UCS-2 build of Python, which couldn't properly represent some
characters.
There are a couple of spots left where we still condition on the value
of this constant. Take them out.
This is the converse of GH-15353 -- in addition to plenty of
scripts in the tree that are marked with the executable bit
(and so can be directly executed), there are a few that have
a leading `#!` which could let them be executed, but it doesn't
do anything because they don't have the executable bit set.
Here's a command which finds such files and marks them. The
first line finds files in the tree with a `#!` line *anywhere*;
the next-to-last step checks that the *first* line is actually of
that form. In between we filter out files that already have the
bit set, and some files that are meant as fragments to be
consumed by one or another kind of preprocessor.
$ git grep -l '^#!' \
| grep -vxFf <( \
git ls-files --stage \
| perl -lane 'print $F[3] if (!/^100644/)' \
) \
| grep -ve '\.in$' -e '^Doc/includes/' \
| while read f; do
head -c2 "$f" | grep -qxF '#!' \
&& chmod a+x "$f"; \
done
Much like the lower-level logic in commit ef2af1ad4, we had
4 copies of this logic, written in a couple of different ways.
They're all implementing the same standard, so write it just once.
The `expand` option was introduced in 2000 in commit fad27aee1.
It appears to have been always set since it was committed, and
what it does is tell the code to do something essential. So,
just always do that, and cut the option.
Also cut the `linebreakprops` option, which isn't consulted anymore.
There were 10 copies of this, and almost as many distinct versions of
exactly how it was written. They're all implementing the same
standard. Pull them out to the top, so the more interesting logic
that remains becomes easier to read.
Remove sys.getcheckinterval() and sys.setcheckinterval() functions.
They had been deprecated since Python 3.2. Use sys.getswitchinterval()
and sys.setswitchinterval() instead.
Also remove the check_interval field of the PyInterpreterState structure.
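The replacement API in use:
```
import sys

print(sys.getswitchinterval())   # thread switch interval in seconds (default 0.005)
sys.setswitchinterval(0.01)      # ask the interpreter to switch threads every ~10 ms
```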
As it changes the way functions are called, the PEP 590 implementation
skipped the functions that the GDB integration is looking for
(by name) to find function calls.
Looking for the new helper `cfunction_call_varargs` hopefully fixes the
tests, and thus buildbots.
The changed frame number in test_gdb is due to there being fewer
C calls when calling a built-in method.
This commit contains the implementation of PEP 570: Python positional-only parameters.
* Update Grammar/Grammar with new typedarglist and varargslist
* Regenerate grammar files
* Update and regenerate AST related files
* Update code object
* Update marshal.c
* Update compiler and symtable
* Regenerate importlib files
* Update callable objects
* Implement positional-only args logic in ceval.c
* Regenerate frozen data
* Update standard library to account for positional-only args
* Add test file for positional-only args
* Update other test files to account for positional-only args
* Add News entry
* Update inspect module and related tests
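A small example of the resulting syntax:
```
def divide(numerator, denominator, /, *, precise=False):
    # numerator and denominator are positional-only; precise is keyword-only.
    result = numerator / denominator
    return result if precise else round(result, 2)

print(divide(1, 3))                    # 0.33
# divide(numerator=1, denominator=3)   # TypeError: positional-only arguments
```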
Use _PyArg_CheckPositional() and inlined code instead of
PyArg_UnpackTuple() and _PyArg_UnpackStack() if all parameters
are positional and use the "object" converter.
"Include/token.h", "Lib/token.py" (containing now some data moved from
"Lib/tokenize.py") and new files "Parser/token.c" (containing the code
moved from "Parser/tokenizer.c") and "Doc/library/token-list.inc" (included
in "Doc/library/token.rst") are now generated from "Grammar/Tokens" by
"Tools/scripts/generate_token.py". The script overwrites files only if
needed and can be used on the read-only sources tree.
"Lib/symbol.py" is now generated by "Tools/scripts/generate_symbol_py.py"
instead of been executable itself.
Added new make targets "regen-token" and "regen-symbol" which are now
dependencies of "regen-all".
The documentation now contains strings for operators and punctuation tokens.
Fix invalid function cast warnings with gcc 8
for method conventions different from METH_NOARGS, METH_O and
METH_VARARGS in Argument Clinic generated code.
Include/*.h should be the "portable Python API", whereas
Include/cpython/*.h should be the "CPython API": CPython
implementation details.
Changes:
* Create Include/cpython/ subdirectory
* "make install" now creates $prefix/include/cpython and copy
Include/cpython/* to $prefix/include/cpython
* Create Include/cpython/objimpl.h: move objimpl.h code
surrounded by "#ifndef Py_LIMITED_API" to cpython/objimpl.h.
* objimpl.h now includes cpython/objimpl.h
* Windows installer (MSI) now also installs Include/ subdirectories:
Include/cpython/ and Include/internal/.
Two kinds of mistakes:
1. Missing space: after concatenation there is no space between the words.
2. Missing comma: it causes unintentional concatenation in a list of strings.
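Both mistakes come from implicit string concatenation, for example:
```
# 1. Missing space: the two fragments concatenate without a separator.
message = ("Unable to open the file"
           "because it does not exist")
print(message)            # "Unable to open the filebecause it does not exist"

# 2. Missing comma: two list elements silently become one.
names = [
    "alpha",
    "beta"                # <- missing comma
    "gamma",
]
print(len(names), names)  # 2 ['alpha', 'betagamma']
```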
python-gdb.py now handles errors on computing the line number
of a Python frame.
Changes:
* PyFrameObjectPtr.current_line_num() now catches any Exception on
calling addr2line(), instead of failing with a surprising "<class
'TypeError'> 'FakeRepr' object is not subscriptable" error.
* All callers of current_line_num() now handle current_line_num()
returning None.
* PyFrameObjectPtr.current_line() now also catches IndexError on
getting a line from the Python source file.
Add SSLContext.post_handshake_auth and
SSLSocket.verify_client_post_handshake for TLS 1.3 post-handshake
authentication.
Signed-off-by: Christian Heimes <christian@python.org>
https://bugs.python.org/issue34670
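Typical use of the new attribute from Python (a sketch only; a real TLS 1.3 connection is needed for the post-handshake exchange itself):
```
import ssl

ctx = ssl.create_default_context()
print(ctx.post_handshake_auth)       # False by default
ctx.post_handshake_auth = True       # opt in to TLS 1.3 post-handshake auth

# On the server side, after the handshake, a certificate request is triggered
# with the new socket method (needs a connected TLS 1.3 socket):
#     ssl_sock.verify_client_post_handshake()
```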
* Replace "master process" with "parent process"
* Replace "master option mappings" with "main option mappings"
* Replace "master pattern object" with "main pattern object"
* ssl: replace "master" with "server"
* And some other similar changes
* Fix Tools/clinic/clinic_test.py: add missing
FakeClinic.destination_buffers attribute and pass a file argument
to Clinic().
* Rename Tools/clinic/clinic_test.py to Lib/test/test_clinic.py:
add temporary Tools/clinic/ to sys.path to import the clinic
module.
Co-Authored-By: Pablo Galindo <pablogsal@gmail.com>
During development of the limited API support for PySide,
we saw an error in a macro that accessed a type field.
This patch fixes the 7 errors in the Python headers.
Macros whose names were not written in capitals were implemented
as functions.
To do the necessary analysis again, a script was included that
parses all headers and looks for "->tp_" in sections which can
be reached with an active limited API.
It is easily possible to call this script as a test.
Error listing:
../../Include/objimpl.h:243
#define PyObject_IS_GC(o) (PyType_IS_GC(Py_TYPE(o)) && \
(Py_TYPE(o)->tp_is_gc == NULL || Py_TYPE(o)->tp_is_gc(o)))
Action: commented only
../../Include/objimpl.h:362
#define PyType_SUPPORTS_WEAKREFS(t) ((t)->tp_weaklistoffset > 0)
Action: commented only
../../Include/objimpl.h:364
#define PyObject_GET_WEAKREFS_LISTPTR(o) \
((PyObject **) (((char *) (o)) + Py_TYPE(o)->tp_weaklistoffset))
Action: commented only
../../Include/pyerrors.h:143
#define PyExceptionClass_Name(x) \
((char *)(((PyTypeObject*)(x))->tp_name))
Action: implemented function
../../Include/abstract.h:593
#define PyIter_Check(obj) \
((obj)->ob_type->tp_iternext != NULL && \
(obj)->ob_type->tp_iternext != &_PyObject_NextNotImplemented)
Action: implemented function
../../Include/abstract.h:713
#define PyIndex_Check(obj) \
((obj)->ob_type->tp_as_number != NULL && \
(obj)->ob_type->tp_as_number->nb_index != NULL)
Action: implemented function
../../Include/abstract.h:924
#define PySequence_ITEM(o, i)\
( Py_TYPE(o)->tp_as_sequence->sq_item(o, i) )
Action: commented only
Remove the docstring attribute of AST types and restore docstring
expression as a first stmt in their body.
Co-authored-by: INADA Naoki <methane@users.noreply.github.com>
TLS 1.3 behaves slightly differently than TLS 1.2. Session tickets and TLS
client cert auth are now handled after the initial handshake. Tests now
either send/recv data to trigger session tickets and client certs, or
ignore ConnectionResetError / BrokenPipeError on the server side to
handle clients that force-close the socket fd.
To test TLS 1.3, OpenSSL 1.1.1-pre7-dev (git master + OpenSSL PR
https://github.com/openssl/openssl/pull/6340) is required.
Signed-off-by: Christian Heimes <christian@python.org>
Change TLS 1.3 cipher suite settings for compatibility with OpenSSL
1.1.1-pre6 and newer. OpenSSL 1.1.1 will have TLS 1.3 ciphers enabled by
default.
Also update multissltests and Travis config to test with latest OpenSSL.
Signed-off-by: Christian Heimes <christian@python.org>
LibreSSL 2.7 introduced OpenSSL 1.1.0 API. The ssl module now detects
LibreSSL 2.7 and only provides API shims for OpenSSL < 1.1.0 and
LibreSSL < 2.7.
Documentation updates and fixes for failing tests will be provided in
another patch set.
Signed-off-by: Christian Heimes <christian@python.org>
Creating backup files with a ~ suffix can be undesirable in some environments,
such as when building RPM packages. Instead of requiring the user to remove
those files manually, the -n option was added, which simply disables this feature.
-n was selected because 2to3 has the same option with this behavior.
* bpo-32947: OpenSSL 1.1.1-pre1 / TLS 1.3 fixes
Misc fixes and workarounds for compatibility with OpenSSL 1.1.1-pre1 and
TLS 1.3 support. With OpenSSL 1.1.1, Python negotiates TLS 1.3 by
default. Some test cases only apply to TLS 1.2. Other tests currently
fail because the threaded or async test servers stop after failure.
I'm going to address these issues when OpenSSL 1.1.1 reaches beta.
OpenSSL 1.1.1 has added a new option OP_ENABLE_MIDDLEBOX_COMPAT for TLS
1.3. The feature is enabled by default for maximum compatibility with
broken middle boxes. Users should be able to disable the hack and CPython's test suite needs
it to verify default options.
Signed-off-by: Christian Heimes <christian@python.org>
* Fix multiple typos in code comments
* Add spacing in comments (test_logging.py, test_math.py)
* Fix spaces at the beginning of comments in test_logging.py
CPython migrated from CVS to Subversion, to Mercurial, and then to
Git. CVS and Subversion are no longer used to develop CPython.
* platform module: drop support for sys.subversion. The
sys.subversion attribute has been removed in Python 3.3.
* Remove Misc/svnmap.txt
* Remove Tools/scripts/svneol.py
* Remove Tools/scripts/treesync.py
* distutils.config: Use the PyPIRCCommand.realm attribute if set
* turtledemo: wait until macOS osascript command completes to not
create a zombie process
* Tools/scripts/treesync.py: declare 'default_answer' and
'create_files' as globals to modify them with the command line
arguments. Previously, -y, -n, -f and -a options had no effect.
flake8 warning: "F841 local variable 'p' is assigned to but never
used".
kB (*kilo* byte) unit means 1000 bytes, whereas KiB ("kibibyte")
means 1024 bytes. KB was misused: replace kB or KB with KiB when
appropriate.
Same change for MB and GB which become MiB and GiB.
Change the output of Tools/iobench/iobench.py.
Also round the size of the documentation from 5.5 MB to 5 MiB.