* gcmodule.c reuses _Py_AS_GC(op) for AS_GC()
* Move gcmodule.c FROM_GC() implementation to a new _Py_FROM_GC()
static inline function in pycore_gc.h.
* _PyObject_IS_GC(): only get the type once
* gc_is_finalized() and PyObject_GC_IsFinalized() use
_PyGC_FINALIZED(), instead of _PyGCHead_FINALIZED().
* Remove _Py_CAST() in pycore_gc.h: this header file is not built
with C++.
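For context on the gc_is_finalized() item above: gc.is_finalized() (and PyObject_GC_IsFinalized()) reports whether an object's finalizer has already run. A minimal, illustrative Python sketch (not code from this change):

```python
import gc

class Lazarus:
    handle = None
    def __del__(self):
        # Resurrect the instance so it can be inspected after finalization.
        Lazarus.handle = self

obj = Lazarus()
print(gc.is_finalized(obj))             # False: __del__ has not run yet
del obj
print(gc.is_finalized(Lazarus.handle))  # True: the finalizer already ran
```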
When Python is built in debug mode (if the Py_REF_DEBUG macro is
defined), the Py_INCREF() and Py_DECREF() functions are now always
implemented as opaque functions to avoid leaking implementation
details like the "_Py_RefTotal" variable or the
_Py_DecRefTotal_DO_NOT_USE_THIS() function.
* Remove _Py_IncRefTotal_DO_NOT_USE_THIS() and
_Py_DecRefTotal_DO_NOT_USE_THIS() from the stable ABI.
* Remove _Py_NegativeRefcount() from limited C API.
The _xxsubinterpreters module was meant to only use public API. Some internal C-API usage snuck in over the last few years (e.g. gh-28969). This fixes that.
This avoids the problematic race in drop_gil() by skipping the FORCE_SWITCHING code there for finalizing threads.
(The idea for this approach came out of discussions with @markshannon.)
Remove functions in the C API:
* PyEval_AcquireLock()
* PyEval_ReleaseLock()
* PyEval_InitThreads()
* PyEval_ThreadsInitialized()
But keep these functions in the stable ABI.
Mention "make regen-limited-abi" in "make regen-all".
* refcounts.dat:
* Remove Py_UNICODE functions.
* Replace Py_UNICODE argument type with wchar_t.
* _PyUnicode_ToLowercase(), _PyUnicode_ToUppercase() and
_PyUnicode_ToTitlecase() are no longer marked as deprecated in comments.
The deprecation comment is no longer needed since these functions now use
the Py_UCS4 type rather than the deprecated Py_UNICODE type.
* gdb: Remove unused char_width() method.
Remove the following old functions to configure the Python
initialization, deprecated in Python 3.11:
* PySys_AddWarnOptionUnicode()
* PySys_AddWarnOption()
* PySys_AddXOption()
* PySys_HasWarnOptions()
* PySys_SetArgvEx()
* PySys_SetArgv()
* PySys_SetPath()
* Py_SetPath()
* Py_SetProgramName()
* Py_SetPythonHome()
* Py_SetStandardStreamEncoding()
* _Py_SetProgramFullPath()
Most of these functions are kept in the stable ABI, except:
* Py_SetStandardStreamEncoding()
* _Py_SetProgramFullPath()
Update Doc/extending/embedding.rst and Doc/extending/extending.rst to
use the new PyConfig API.
_testembed.c:
* check_stdio_details() now sets stdio_encoding and stdio_errors
of PyConfig.
* Add definitions of functions removed from the API but kept in the
stable ABI.
* test_init_from_config() and test_init_read_set() now use
PyConfig_SetString() instead of PyConfig_SetBytesString().
Remove _Py_ClearStandardStreamEncoding() internal function.
Deprecate the old Py_UNICODE and PY_UNICODE_TYPE types in the C API:
use wchar_t instead.
Replace Py_UNICODE with wchar_t in multiple C files.
Co-authored-by: Inada Naoki <songofacandy@gmail.com>
In gh-103912 we added tp_bases and tp_mro to each PyInterpreterState.types.builtins entry. However, doing so ignored the fact that both PyTypeObject fields are public API, and not documented as internal (as opposed to tp_subclasses). We address that here by reverting back to shared objects, making them immortal in the process.
This commit replaces the Python implementation of the tokenize module with an implementation
that reuses the real C tokenizer via a private extension module. The tokenize module now implements
a compatibility layer that transforms tokens from the C tokenizer into Python tokenize tokens for backward
compatibility.
As the C tokenizer does not emit some tokens that the Python tokenizer provides (such as comments and non-semantic newlines), a new special mode has been added to the C tokenizer that is currently only used via the extension module that exposes it to the Python layer. This new mode forces the C tokenizer to emit these extra tokens and attach the metadata needed to match the old Python implementation.
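As a quick illustration of the backward compatibility being preserved (a hypothetical snippet, not part of this change), the pure-Python entry points still yield COMMENT and NL tokens even though the tokens now originate in the C tokenizer:

```python
import io
import tokenize

source = "x = 1  # a comment\n"
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))
```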
Co-authored-by: Pablo Galindo <pablogsal@gmail.com>
This PR updates `math.nextafter` to add a new `steps` argument. The behaviour is as though `math.nextafter` had been called `steps` times in succession.
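For example (a small illustration of those semantics):

```python
import math

x = 1.0
one = math.nextafter(x, math.inf)      # one step toward +inf
two = math.nextafter(one, math.inf)    # two steps
three = math.nextafter(two, math.inf)  # three steps
assert math.nextafter(x, math.inf, steps=3) == three
```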
---------
Co-authored-by: Mark Dickinson <mdickinson@enthought.com>
This implements PEP 695, Type Parameter Syntax. It adds support for:
- Generic functions (def func[T](): ...)
- Generic classes (class X[T](): ...)
- Type aliases (type X = ...)
- New scoping when the new syntax is used within a class body
- Compiler and interpreter changes to support the new syntax and scoping rules
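A short sketch of the new syntax (illustrative only):

```python
# Type alias with a type parameter
type Vector[T] = list[T]

# Generic function
def first[T](items: list[T]) -> T:
    return items[0]

# Generic class
class Stack[T]:
    def __init__(self) -> None:
        self._items: list[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)
```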
Co-authored-by: Marc Mueller <30130371+cdce8p@users.noreply.github.com>
Co-authored-by: Eric Traut <eric@traut.com>
Co-authored-by: Larry Hastings <larry@hastings.org>
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
When monitoring LINE events, instrument all instructions that can have a predecessor on a different line.
Then, in the instrumentation code, check that a new line has been hit.
This brings the behavior closer to that of 3.11, simplifying implementation and porting of tools.
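For illustration, a minimal sketch of consuming LINE events through the sys.monitoring API (PEP 669) that this instrumentation backs; the tool name and demo function are made up:

```python
import sys

TOOL = sys.monitoring.DEBUGGER_ID
sys.monitoring.use_tool_id(TOOL, "line-logger")

def on_line(code, line_number):
    # Called whenever execution reaches a new line in instrumented code.
    print(f"{code.co_filename}:{line_number}")

sys.monitoring.register_callback(TOOL, sys.monitoring.events.LINE, on_line)
sys.monitoring.set_events(TOOL, sys.monitoring.events.LINE)

def demo():
    a = 1
    b = a + 1
    return b

demo()

sys.monitoring.set_events(TOOL, 0)
sys.monitoring.free_tool_id(TOOL)
```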
This PR removes `_Py_dg_stdnan` and `_Py_dg_infinity` in favour of
using the standard `NAN` and `INFINITY` macros provided by C99.
This change has the side-effect of fixing a bug on MIPS where the
hard-coded value used by `_Py_dg_stdnan` gave a signalling NaN
rather than a quiet NaN.
---------
Co-authored-by: Mark Dickinson <dickinsm@gmail.com>
This is the culmination of PEP 684 (and of my 8-year long multi-core Python project)!
Each subinterpreter may now be created with its own GIL (via Py_NewInterpreterFromConfig()). If not so configured then the interpreter will share with the main interpreter--the status quo since subinterpreters were added decades ago. The main interpreter always has its own GIL and subinterpreters from Py_NewInterpreter() will always share with the main interpreter.
We also add PyInterpreterState.ceval.own_gil to record if the interpreter actually has its own GIL.
Note that for now we don't actually respect own_gil; all interpreters still share the one GIL. However, PyInterpreterState.ceval.own_gil does reflect PyInterpreterConfig.own_gil. That lie is a temporary one that we will fix when the GIL really becomes per-interpreter.
Here we are doing no more than adding the value for Py_mod_multiple_interpreters and using it for stdlib modules. We will start checking for it in gh-104206 (once PyInterpreterState.ceval.own_gil is added in gh-104204).
In preparation for a per-interpreter GIL, we add PyInterpreterState.ceval.gil, set it to the shared GIL for each interpreter, and use that rather than using _PyRuntime.ceval.gil directly. Note that _PyRuntime.ceval.gil is still the actual GIL.
This function no longer makes sense, since its runtime parameter is
no longer used. Use _PyThreadState_GET() and _PyInterpreterState_GET()
directly instead.
We also expose PyInterpreterConfig. This is part of the PEP 684 (per-interpreter GIL) implementation. We will add docs as soon as we can.
FYI, I'm adding the new config field for per-interpreter GIL in gh-99114.
This involves moving tp_dict, tp_bases, and tp_mro to PyInterpreterState, in the same way we did for tp_subclasses. Those three fields are effectively const for builtin static types (unlike tp_subclasses). In theory we only need to make their values immortal, along with their contents. However, that isn't such a simple proposition. (See gh-103823.) In the meantime the simplest solution is to move the fields into the interpreter.
One alternative is to statically allocate the values, but that's its own can of worms.
This is strictly about moving the "obmalloc" runtime state from
`_PyRuntimeState` to `PyInterpreterState`. Doing so improves isolation
between interpreters, specifically most of the memory (incl. objects)
allocated for each interpreter's use. This is important for a
per-interpreter GIL, but such isolation is valuable even without it.
FWIW, a per-interpreter obmalloc is the proverbial
canary-in-the-coalmine when it comes to the isolation of objects between
interpreters. Any object that leaks (unintentionally) to another
interpreter is highly likely to cause a crash (on debug builds at
least). That's a useful thing to know, relative to interpreter
isolation.
Core static types will continue to use the global value. All other types
will use the per-interpreter value. They all share the same range, where
the global types use values < 2^16 and each interpreter uses values
higher than that.
This speeds up `super()` (by around 85%, for a simple one-level
`super().meth()` microbenchmark) by avoiding allocation of a new
single-use `super()` object on each use.
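The pattern that benefits is the common zero-argument form:

```python
class Base:
    def meth(self):
        return "base"

class Child(Base):
    def meth(self):
        # Zero-argument super(): this call no longer has to allocate a
        # throwaway super object on every invocation.
        return super().meth() + " + child"

Child().meth()
```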
We replace _PyRuntime.tstate_current with a thread-local variable. As part of this change, we add a _Py_thread_local macro in pyport.h (only for the core runtime) to smooth out the compiler differences. The main motivation here is in support of a per-interpreter GIL, but this change also provides some performance improvement opportunities.
Note that we do not provide a fallback to the thread-local, either falling back to the old tstate_current or to thread-specific storage (PyThread_tss_*()). If that proves problematic then we can circle back. I consider it unlikely, but will run the buildbots to double-check.
Also note that this does not change any of the code related to the GILState API, where it uses a thread state stored in thread-specific storage. I suspect we can combine that with _Py_tss_tstate (from here). However, that can be addressed separately and is not urgent (nor critical).
(While this change was mostly done independently, I did take some inspiration from earlier (~2020) work by @markshannon (main...markshannon:threadstate_in_tls) and @vstinner (#23976).)
This is the implementation of PEP 683.
Motivation:
The PR introduces the ability to immortalize instances in CPython, which bypasses reference counting. Tagging objects as immortal allows us to skip certain operations when we know that the object will be around for the entire execution of the runtime.
Note that this by itself will bring a performance regression to the runtime due to the extra reference count checks. However, it brings the ability to have truly immutable objects that are useful in other contexts, such as immutable data sharing between sub-interpreters.
* The majority of the monitoring code is in instrumentation.c
* The new instrumentation bytecodes are in bytecodes.c
* legacy_tracing.c adapts the new API to the old sys.settrace and sys.setprofile APIs
The function is like Py_AtExit() but for a single interpreter. This is a companion to the atexit module's register() function, taking a C callback instead of a Python one.
We also update the _xxinterpchannels module to use _Py_AtExit(), which is the motivating case. (This is inspired by pain points felt while working on gh-101660.)
Several platforms don't define the static_assert macro despite having
compiler support for the _Static_assert keyword. The macro needs to be
defined since it is used unconditionally in the Python code. So it
should always be safe to define it if undefined and not in C++11 (or
later) mode.
Hence, remove the checks for particular platforms or libc versions,
and just define static_assert anytime it needs to be defined but isn't.
That way, all platforms that need the fix will get it, regardless of
whether someone specifically thought of them.
Also document that certain macOS versions are among the platforms that
need this.
The C2x draft (currently expected to become C23) makes static_assert
a keyword to match C++. So only define the macro for up to C17.
Co-authored-by: Victor Stinner <vstinner@python.org>
Sharing mutable (or non-immortal) objects between interpreters is generally not safe. We can work around that but not easily.
There are two restrictions that are critical for objects that break interpreter isolation.
The first is that the object's state be guarded by a global lock. For now the GIL meets this requirement, but a granular global lock is needed once we have a per-interpreter GIL.
The second restriction is that the object (and, for a container, its items) be deallocated/resized only when the interpreter in which it was allocated is the current one. This is because every interpreter has (or will have, see gh-101660) its own object allocator. Deallocating an object with a different allocator can cause crashes.
The dict for the cache of module defs is completely internal, which simplifies what we have to do to meet those requirements. To do so, we do the following:
* add a mechanism for re-using a temporary thread state tied to the main interpreter in an arbitrary thread
* add _PyRuntime.imports.extensions.main_tstate
* add _PyThreadState_InitDetached() and _PyThreadState_ClearDetached() (pystate.c)
* add _PyThreadState_BindDetached() and _PyThreadState_UnbindDetached() (pystate.c)
* make sure the cache dict (_PyRuntime.imports.extensions.dict) and its items are all owned by the main interpreter
* add a placeholder for a granular global lock
Note that the cache is only used for legacy extension modules and not for multi-phase init modules.
https://github.com/python/cpython/issues/100227
We can revisit the options for keeping it global later, if desired. For now the approach seems quite complex, so we've gone with the simpler isolation solution in the meantime.
https://github.com/python/cpython/issues/100227
This reverts commit 87be8d9.
This approach to keeping the interned strings safe is turning out to be too complex for my taste (due to obmalloc isolation). For now I'm going with the simpler solution, making the dict per-interpreter. We can revisit that later if we want a sharing solution.
This is effectively two changes. The first (the bulk of the change) is where we add _Py_AddToGlobalDict() (and _PyRuntime.cached_objects.main_tstate, etc.). The second (much smaller) change is where we update PyUnicode_InternInPlace() to use _Py_AddToGlobalDict() instead of calling PyDict_SetDefault() directly.
Basically, _Py_AddToGlobalDict() is a wrapper around PyDict_SetDefault() that should be used whenever we need to add a value to a runtime-global dict object (in the few cases where we are leaving the container global rather than moving it to PyInterpreterState, e.g. the interned strings dict). _Py_AddToGlobalDict() does all the necessary work to make sure the target global dict is shared safely between isolated interpreters. This is especially important as we move the obmalloc state to each interpreter (gh-101660), as well as, potentially, the GIL (PEP 684).
https://github.com/python/cpython/issues/100227
* Eliminate all remaining uses of Py_SIZE and Py_SET_SIZE on PyLongObject, adding asserts.
* Change layout of size/sign bits in longobject to support future addition of immortal ints and tagged medium ints.
* Add functions to hide some internals of long object, and for setting sign and digit count.
* Replace uses of IS_MEDIUM_VALUE macro with _PyLong_IsCompact().
Moving it is valuable with a per-interpreter GIL. However, it is also useful without one, since it allows us to identify refleaks within a single interpreter or where references are escaping an interpreter. This becomes more important as we move the obmalloc state to PyInterpreterState.
https://github.com/python/cpython/issues/102304
Prior to this change, errors in _Py_NewInterpreterFromConfig() were always fatal. Instead, callers should be able to handle such errors and keep going. That's what this change supports. (This was an oversight in the original implementation of _Py_NewInterpreterFromConfig().) Note that the existing [fatal] behavior of the public Py_NewInterpreter() is preserved.
https://github.com/python/cpython/issues/98608
This essentially eliminates the global variable, with the associated benefits. This is also a precursor to isolating this bit of state to PyInterpreterState.
Folks that currently read _Py_RefTotal directly would have to start using _Py_GetGlobalRefTotal() instead.
https://github.com/python/cpython/issues/102304
This deprecates `st_ctime` fields on Windows, with the intent to change them to contain the correct value in 3.14. For now, they should keep returning the creation time as they always have.
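A hedged sketch of how Windows callers can migrate off the deprecated field (assuming st_birthtime as the replacement for creation time, with a fallback where it is unavailable; the file name is made up):

```python
import os
import sys

st = os.stat("example.txt")
if sys.platform == "win32":
    # st_ctime still returns the creation time on Windows for now, but is
    # deprecated there; prefer st_birthtime when it is available.
    created = getattr(st, "st_birthtime", st.st_ctime)
else:
    created = st.st_ctime  # metadata-change time on POSIX
print(created)
```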
It doesn't make sense to use multi-phase init for these modules. Using a per-interpreter "m_copy" (instead of PyModuleDef.m_base.m_copy) makes this work okay. (This came up while working on gh-101660.)
Note that we might instead end up disallowing re-load for sys/builtins since they are so special.
https://github.com/python/cpython/issues/102660
When __getattr__ is defined, Python will first try to find the attribute using _PyObject_GenericGetAttrWithDict.
Finding nothing there is expected in that case, so we don't need to raise an exception; creating one only hurts performance.
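The code shape that benefits looks like this (illustrative example):

```python
class Config:
    def __init__(self):
        self._data = {"host": "localhost", "port": 8080}

    def __getattr__(self, name):
        # Called only after normal attribute lookup fails; with this change
        # the failed generic lookup no longer materializes an AttributeError
        # first, so this path gets cheaper.
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name) from None

cfg = Config()
print(cfg.host)   # falls through to __getattr__
```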
Add `MS_WINDOWS_DESKTOP`, `MS_WINDOWS_APPS`, `MS_WINDOWS_SYSTEM` and `MS_WINDOWS_GAMES` preprocessor definitions to allow switching off functionality missing from particular API partitions ("partitions" are used in Windows to identify overlapping subsets of APIs).
CPython only officially supports `MS_WINDOWS_DESKTOP` and `MS_WINDOWS_SYSTEM` (APPS is included by normal desktop builds, but APPS without DESKTOP is not covered). Other configurations are a convenience for people building their own runtimes.
`MS_WINDOWS_GAMES` is for the Xbox subset of the Windows API, which is also available on client OS, but is restricted compared to `MS_WINDOWS_DESKTOP`. These restrictions may change over time, as they relate to the build headers rather than the OS support, and so we assume that Xbox builds will use the latest available version of the GDK.
Specific changes:
* move the import lock to PyInterpreterState
* move the "find_and_load" diagnostic state to PyInterpreterState
Note that the import lock exists to keep multiple imports of the same module in the same interpreter (but in different threads) from stomping on each other. Independently, we use a distinct global lock to protect globally shared import state, especially related to loaded extension modules. For now we can rely on the GIL as that lock but with a per-interpreter GIL we'll need a new global lock.
The remaining state in _PyRuntimeState.imports will (probably) continue being global.
https://github.com/python/cpython/issues/100227
Some incompatible changes had gone in, and the "ignore" lists weren't properly updated. This change fixes that. It's necessary prior to enabling test_check_c_globals, which I hope to do soon.
Note that this does include moving last_resort_memory_error to PyInterpreterState.
https://github.com/python/cpython/issues/90110
Since #101826 was merged, the internal macro `_Py_InIntegralTypeRange` is unused, as are its supporting macros `_Py_IntegralTypeMax` and `_Py_IntegralTypeMin`. This PR removes them.
Note that `_Py_InIntegralTypeRange` doesn't actually work as advertised - it's not a safe way to avoid undefined behaviour in an integer to double conversion.
We're adding the function back, only for the stable ABI symbol and not as any form of API. I had removed it yesterday.
This undocumented "private" function was added with the implementation for PEP 3121 (3.0, 2007) for internal use and later moved out of the limited API (3.6, 2016) and then into the internal API (3.9, 2019). I removed it completely yesterday, including from the stable ABI manifest (where it was added because the symbol happened to be exported). It's unlikely that anyone is using _PyState_AddModule(), especially any stable ABI extensions built against 3.2-3.5, but we're playing it safe.
https://github.com/python/cpython/issues/101758
* fileutils: handle non-blocking pipe IO on Windows
Handle errors from operations on non-blocking pipes by reading the _doserrno code.
Limit the size of writes on non-blocking pipes when they would be too large.
* Support blocking functions on Windows
Use the GetNamedPipeHandleState and SetNamedPipeHandleState Win32 API functions to add support for os.get_blocking and os.set_blocking.
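A small sketch of the Python-level behavior this enables on Windows (illustrative only):

```python
import os

r, w = os.pipe()
# With this change, os.get_blocking() and os.set_blocking() also work for
# pipes on Windows rather than being Unix-only.
os.set_blocking(r, False)
print(os.get_blocking(r))   # False
os.set_blocking(r, True)
print(os.get_blocking(r))   # True
```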
Enforcing (optionally) the restriction set by PEP 489 makes sense. Furthermore, this sets the stage for a potential restriction related to a per-interpreter GIL.
This change includes the following:
* add tests for extension module subinterpreter compatibility
* add _PyInterpreterConfig.check_multi_interp_extensions
* add Py_RTFLAGS_MULTI_INTERP_EXTENSIONS
* add _PyImport_CheckSubinterpIncompatibleExtensionAllowed()
* fail iff the module does not implement multi-phase init and the current interpreter is configured to check
https://github.com/python/cpython/issues/98627
This change is almost entirely moving code around and hiding import state behind internal API. We introduce no changes to behavior, nor to non-internal API. (Since there was already going to be a lot of churn, I took this as an opportunity to re-organize import.c into topically-grouped sections of code.) The motivation is to simplify a number of upcoming changes.
Specific changes:
* move existing import-related code to import.c, wherever possible
* add internal API for interacting with import state (both global and per-interpreter)
* use only that API outside of import.c (to limit churn there when changing the location, etc.)
* consolidate the import-related state of PyInterpreterState into a single struct field (this changes layout slightly)
* add macros for import state in import.c (to simplify changing the location)
* group code in import.c into sections
* remove _PyState_AddModule()
https://github.com/python/cpython/issues/101758
* Make sure that the current exception is always normalized.
* Remove redundant type and traceback fields for the current exception.
* Add new API functions: PyErr_GetRaisedException, PyErr_SetRaisedException
* Add new API functions: PyException_GetArgs, PyException_SetArgs
A PyThreadState can be in one of many states in its lifecycle, represented by some status value. Those statuses haven't been particularly clear, so we're addressing that here. Specifically:
* made the distinct lifecycle statuses clear on PyThreadState
* identified expectations of how various lifecycle-related functions relate to status
* noted the various places where those expectations don't match the actual behavior
At some point we'll need to address the mismatches.
(This change also includes some cleanup.)
https://github.com/python/cpython/issues/59956
`warnings.warn()` gains the ability to skip stack frames based on code
filename prefix rather than only a numeric `stacklevel=` via a new
`skip_file_prefixes=` keyword argument.
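For example (a hypothetical helper module):

```python
import os
import warnings

_HELPER_DIR = os.path.dirname(__file__)

def deprecated_helper():
    # Frames from files under _HELPER_DIR are skipped when attributing the
    # warning, so it points at the external caller rather than this module.
    warnings.warn("deprecated_helper() is deprecated",
                  DeprecationWarning,
                  skip_file_prefixes=(_HELPER_DIR,))
```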
To use this, ensure that clang support was selected in Visual Studio Installer, then set the PlatformToolset environment variable to "ClangCL" and build as normal from the command line.
It remains unsupported, but at least is possible now for experimentation.
We've factored out a struct from the two PyThreadState fields. This accomplishes two things:
* make it clear that the trashcan-related code doesn't need any other parts of PyThreadState
* allows us to use the trashcan mechanism even when there isn't a "current" thread state
We still expect the caller to hold the GIL.
https://github.com/python/cpython/issues/59956
This PR fixes object allocation in long_subtype_new to ensure that there's at least one digit in all cases, and makes sure that the value of that digit is copied over from the source long.
Needs backport to 3.11, but not any further: the change to require at least one digit was only introduced for Python 3.11.
Fixes #101037.
The objective of this change is to help make the GILState-related code easier to understand. This mostly involves moving code around and some semantically equivalent refactors. However, there are also a small number of slight changes in structure and behavior:
* tstate_current is moved out of _PyRuntimeState.gilstate
* autoTSSkey is moved out of _PyRuntimeState.gilstate
* autoTSSkey is initialized earlier
* autoTSSkey is re-initialized (after fork) earlier
https://github.com/python/cpython/issues/59956
When executing the BUILD_LIST opcode, steal the references from the stack,
in a manner similar to the BUILD_TUPLE opcode. Implement this by offloading
the logic to a new private API, _PyList_FromArraySteal(), that works similarly
to _PyTuple_FromArraySteal().
This way, instead of performing multiple stack pointer adjustments while the
list is being initialized, the stack is adjusted only once and a fast memory
copy operation is performed in one fell swoop.
This is a best-effort warning, not a comprehensive one: on some platforms there are cases where existing threads cannot be detected by this code. macOS (when API permissions allow) and Linux (with a readable /proc procfs present) are the currently supported cases where a warning should show up reliably.
Starting with a DeprecationWarning for now, it is less disruptive than something like RuntimeWarning and most likely to only be seen in people's CI tests - a good place to start with this messaging.
* move _PyRuntime.global_objects.interned to _PyRuntime.cached_objects.interned_strings (and use _Py_CACHED_OBJECT())
* rename _PyRuntime.global_objects to _PyRuntime.static_objects
(This also relates to gh-96075.)
https://github.com/python/cpython/issues/90111
The Py_CLEAR(), Py_SETREF() and Py_XSETREF() macros now only evaluate
their arguments once. If an argument has side effects, these side
effects are no longer duplicated.
Use temporary variables to avoid duplicating side effects of macro
arguments. If available, use _Py_TYPEOF() to avoid type punning.
Otherwise, use memcpy() for the assignment to prevent a
miscompilation with strict aliasing caused by type punning.
Add _Py_TYPEOF() macro: __typeof__() on GCC and clang.
Add test_py_clear() and test_py_setref() unit tests to _testcapi.
* Add API to allow extensions to set callback function on creation and destruction of PyCodeObject
Co-authored-by: Ye11ow-Flash <janshah@cs.stonybrook.edu>
Convert macros to static inline functions to avoid macro pitfalls,
like duplication of side effects:
* _PyObject_SIZE()
* _PyObject_VAR_SIZE()
The result type is size_t (unsigned).
* Change _PyDict_KeysSize() and shared_keys_usable_size() return type
from signed (Py_ssize_t) to unsigned (size_t) type.
* new_values() argument type is now unsigned (size_t).
* init_inline_values() now uses size_t rather than int for the 'i'
iterator variable.
* type.__sizeof__() implementation now uses unsigned (size_t) type.
The following macros are modified to use _Py_RVALUE(), so they can no
longer be used as l-value:
* DK_LOG_SIZE()
* _PyCode_CODE()
* _PyList_ITEMS()
* _PyTuple_ITEMS()
* _Py_SLIST_HEAD()
* _Py_SLIST_ITEM_NEXT()
_PyCode_CODE() is private and other macros are part of the internal
C API.
Convert macros to static inline functions to avoid macro pitfalls,
like duplication of side effects:
* DK_ENTRIES()
* DK_UNICODE_ENTRIES()
* PyCode_GetNumFree()
* PyFloat_AS_DOUBLE()
* PyInstanceMethod_GET_FUNCTION()
* PyMemoryView_GET_BASE()
* PyMemoryView_GET_BUFFER()
* PyMethod_GET_FUNCTION()
* PyMethod_GET_SELF()
* PySet_GET_SIZE()
* _PyHeapType_GET_MEMBERS()
Changes:
* PyCode_GetNumFree() casts the code object's co_nfreevars member from
int to Py_ssize_t, to be future proof and because Py_ssize_t is
commonly used in the C API.
* PyCode_GetNumFree() doesn't cast its argument: the replaced macro
already required the exact type PyCodeObject*.
* Add assertions in some functions using "CAST" macros to check
the arguments type when Python is built with assertions
(debug build).
* Remove an outdated comment in unicodeobject.h.
Newly supported interpreter definition syntax:
- `op(NAME, (input_stack_effects -- output_stack_effects)) { ... }`
- `macro(NAME) = OP1 + OP2;`
Also some other random improvements:
- Convert `WITH_EXCEPT_START` to use stack effects
- Fix lexer to balk at unrecognized characters, e.g. `@`
- Fix moved output names; support object pointers in cache
- Introduce `error()` method to print errors
- Introduce read_uint16(p) as equivalent to `*p`
Co-authored-by: Brandt Bucher <brandtbucher@gmail.com>
The ``structmember.h`` header is deprecated, though it continues to be available
and there are no plans to remove it. There are no deprecation warnings. Old code
can stay unchanged (unless the extra include and non-namespaced macros bother
you greatly). Specifically, no uses in CPython are updated -- that would just be
unnecessary churn.
The ``structmember.h`` header is deprecated, though it continues to be
available and there are no plans to remove it.
Its contents are now available just by including ``Python.h``,
with a ``Py`` prefix added if it was missing:
- `PyMemberDef`, `PyMember_GetOne` and `PyMember_SetOne`
- Type macros like `Py_T_INT`, `Py_T_DOUBLE`, etc.
(previously ``T_INT``, ``T_DOUBLE``, etc.)
- The flags `Py_READONLY` (previously ``READONLY``) and
`Py_AUDIT_READ` (previously all uppercase)
Several items are not exposed from ``Python.h``:
- `T_OBJECT` (use `Py_T_OBJECT_EX`)
- `T_NONE` (previously undocumented, and pretty quirky)
- The macro ``WRITE_RESTRICTED`` which does nothing.
- The macros ``RESTRICTED`` and ``READ_RESTRICTED``, equivalents of
`Py_AUDIT_READ`.
- In some configurations, ``<stddef.h>`` is not included from ``Python.h``.
It should be included manually when using ``offsetof()``.
The deprecated header continues to provide its original
contents under the original names.
Your old code can stay unchanged, unless the extra include and non-namespaced
macros bother you greatly.
There is discussion on the issue to rename `T_PYSSIZET` to `PY_T_SSIZE` or
similar. I chose not to do that -- users will probably copy/paste that with any
spelling, and not renaming it makes migration docs simpler.
Co-Authored-By: Alexander Belopolsky <abalkin@users.noreply.github.com>
Co-Authored-By: Matthias Braun <MatzeB@users.noreply.github.com>
This is part of the effort to consolidate global variables, to make them easier to manage (and make it easier to later move some of them to PyInterpreterState).
https://github.com/python/cpython/issues/81057