Having these tests in a separate file from the one that's named
after the module in the usual way makes them very easy to miss when
looking for tests for these two functions.
(In fact, when working recently on is_normalized, I was surprised to
see no tests for it here and concluded the function had evaded being
tested at all. I'd gone as far as writing up some tests myself
before I spotted this other file.)
Mostly this just means moving all of one file's code into the other,
and moving code from the module toplevel to inside the test class to
keep it tidily separate from the rest of the file's code.
There's one substantive change, which slightly reduces the amount of
code to be moved: we drop the `x > sys.maxunicode` conditional and
all the `RangeError` logic behind it. Now, if that condition ever
occurs, it will cause an error at `chr(x)` and a test failure.
That's the right result because, since PEP 393 in Python 3.3, there
is no longer such a thing as an "unsupported character".
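For context, here's a minimal sketch (not part of the change) of the
behaviour the tests now rely on: chr() itself rejects code points
beyond sys.maxunicode, so the old RangeError path has nothing left
to guard.

    import sys

    # Since PEP 393 (Python 3.3), every code point up to sys.maxunicode
    # (0x10FFFF) is representable; anything above that makes chr() raise
    # ValueError, which would now surface directly as a test failure.
    assert chr(sys.maxunicode) == '\U0010ffff'
    try:
        chr(sys.maxunicode + 1)
    except ValueError:
        pass  # expected: there is no "unsupported character" anymore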
This is the converse of GH-15353 -- in addition to plenty of
scripts in the tree that are marked with the executable bit
(and so can be directly executed), there are a few that have
a leading `#!` that would let them be executed directly, except
that it has no effect because they don't have the executable bit set.
Here's a command which finds such files and marks them executable.
The
first line finds files in the tree with a `#!` line *anywhere*;
the next-to-last step checks that the *first* line is actually of
that form. In between we filter out files that already have the
bit set, and some files that are meant as fragments to be
consumed by one or another kind of preprocessor.
$ git grep -l '^#!' \
    | grep -vxFf <( \
        git ls-files --stage \
          | perl -lane 'print $F[3] if (!/^100644/)' \
      ) \
    | grep -ve '\.in$' -e '^Doc/includes/' \
    | while read f; do
        head -c2 "$f" | grep -qxF '#!' \
          && chmod a+x "$f"; \
      done
* Add Include/cpython/import.h and Include/internal/pycore_import.h
header files.
* Move _PyImport_ReInitLock() to the internal C API. Don't export the
symbol anymore.
The multiprocessing.resource_tracker module replaces the multiprocessing.semaphore_tracker module. In addition to semaphores, resource_tracker also tracks shared_memory segments. Patch by Pierre Glaser.
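A minimal sketch of what that means in practice, using the
shared_memory API added in Python 3.8: a segment that is never
unlink()ed is reported and cleaned up by the resource tracker at
interpreter shutdown.

    from multiprocessing import shared_memory

    # Create a named shared memory block; the resource tracker records it.
    shm = shared_memory.SharedMemory(create=True, size=64)
    shm.buf[:5] = b'hello'

    shm.close()
    shm.unlink()  # without this, resource_tracker warns about a leaked
                  # shared_memory segment when the process exits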
Add a new _testinternalcapi module to test the internal C API.
Move _Py_GetConfigsAsDict() function to the internal C API:
_testembed now uses _testinternalcapi to access the function.
Change the PyAPI_FUNC(type), PyAPI_DATA(type) and PyMODINIT_FUNC
macros of pyport.h when Py_BUILD_CORE_MODULE is defined. The
Py_BUILD_CORE_MODULE define must now be used to build a C extension
as a dynamic library accessing Python internals: it exports the
PyInit_xxx() function in DLL exports on Windows. (A setup.py sketch
follows the list of changes below.)
Changes:
* Py_BUILD_CORE_BUILTIN and Py_BUILD_CORE_MODULE now imply
Py_BUILD_CORE directly in pyport.h.
* ceval.c compilation now fails with an error if Py_BUILD_CORE is not
defined, just to ensure that Python is built with the correct
defines.
* setup.py now compiles _pickle.c with the Py_BUILD_CORE_MODULE
define.
* setup.py compiles _json.c with the Py_BUILD_CORE_MODULE define,
rather than the Py_BUILD_CORE_BUILTIN define.
* PCbuild/pythoncore.vcxproj: Add Py_BUILD_CORE_BUILTIN define.
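A minimal sketch, not the actual setup.py change, of how such a
define can be passed when building an extension that needs the
internal API (module and file names here are hypothetical, and
setuptools is used for brevity):

    from setuptools import Extension, setup

    setup(
        name="example",
        ext_modules=[
            Extension(
                "example",                 # hypothetical extension name
                sources=["example.c"],
                # Opt in to the internal C API, as setup.py now does for
                # _pickle.c and _json.c:
                define_macros=[("Py_BUILD_CORE_MODULE", "1")],
            )
        ],
    )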
* Revert "bpo-36097: Use only public C-API in the_xxsubinterpreters module (adding as necessary). (#12003)"
This reverts commit bcfa450f21.
* Revert "bpo-33608: Simplify ceval's DISPATCH by hoisting eval_breaker ahead of time. (gh-12062)"
This reverts commit bda918bf65.
* Revert "bpo-33608: Use _Py_AddPendingCall() in _PyCrossInterpreterData_Release(). (gh-12024)"
This reverts commit b05b711a2c.
* Revert "bpo-33608: Factor out a private, per-interpreter _Py_AddPendingCall(). (GH-11617)"
This reverts commit ef4ac967e2.
Pgen is the oldest piece of technology in the CPython repository; building it requires various #if[n]def PGEN hacks in other parts of the code, and it depends more and more on CPython internals. This commit removes the old pgen C code and replaces it with a new version implemented in pure Python. This is a modified and adapted version of lib2to3/pgen2 that can generate grammar files compatible with the current parser.
This commit also eliminates all the #ifdefs and code branches related to pgen, simplifying the code and making it more maintainable. The regen-grammar step now uses $(PYTHON_FOR_REGEN), which can be any version of the interpreter, so the new pgen code maintains compatibility with older versions of the interpreter (this also allows regenerating the grammar with the current CI setup, which uses Python 3.5). The new pgen Python module also makes use of the Grammar/Tokens file that holds the token specification, so it is always kept in sync and avoids having to maintain duplicate token definitions.