Prepare for module state:
- Add "get state by defining class" and "get state by module def" stubs
- Add AC defining class when needed
- Add state pointer to connection context
- Pass state as argument to utility functions
Automerge-Triggered-By: GH:encukou
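
A minimal sketch of the two state-lookup stubs, with assumed names (module_state, get_state, get_state_by_cls); the real helpers differ in detail but follow the same pattern:

    #include "Python.h"

    typedef struct {
        PyObject *Error;    /* example of data that used to be a C static */
    } module_state;

    /* "Get state by module def/object": for module-level functions and slots. */
    static inline module_state *
    get_state(PyObject *module)
    {
        return (module_state *)PyModule_GetState(module);
    }

    /* "Get state by defining class": methods using the METH_METHOD calling
       convention (Argument Clinic's defining_class) receive the defining
       class and reach the module state through it. */
    static inline module_state *
    get_state_by_cls(PyTypeObject *cls)
    {
        return (module_state *)PyType_GetModuleState(cls);
    }

The state pointer obtained this way is then passed explicitly to the utility functions instead of being looked up from globals.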
This implies that SQLite now takes care of destroying the callback
context (the PyObject callable it has been passed), so we can strip the
collation dict from the connection object.
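
This rests on the xDestroy argument of sqlite3_create_collation_v2(). A hedged sketch of the idea (helper names are made up, and the real destructor must also acquire the GIL):

    #include "Python.h"
    #include <sqlite3.h>

    /* SQLite invokes this when the collation is overridden or the
       connection is closed, so no per-connection dict is needed. */
    static void
    collation_destroy(void *ctx)
    {
        /* Real code must hold the GIL here, e.g. via PyGILState_Ensure(). */
        Py_DECREF((PyObject *)ctx);
    }

    static int
    set_collation(sqlite3 *db, const char *name, PyObject *callable,
                  int (*compare)(void *, int, const void *, int, const void *))
    {
        Py_INCREF(callable);
        int rc = sqlite3_create_collation_v2(db, name, SQLITE_UTF8, callable,
                                             compare, collation_destroy);
        if (rc != SQLITE_OK) {
            /* xDestroy is not called when registration fails. */
            Py_DECREF(callable);
        }
        return rc;
    }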
* Convert "specials" array to InterpreterFrame struct, adding f_lasti, f_state and other non-debug FrameObject fields to it.
* Refactor calls, pushing the call into the interpreter upward toward _PyEval_Vector.
* Compute f_back on demand while the frame is on the thread stack, only filling in the value when the frame object outlives the stack invocation.
* Move ownership of InterpreterFrame in generator from frame object to generator object.
* Do not create frame objects for Python calls.
* Do not create frame objects for generators.
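
For illustration only (not the actual CPython layout; field names beyond f_lasti and f_state are assumptions), the shape of such a struct, with the interpreter-owned data living outside any PyFrameObject and a lazily created frame object hanging off it:

    #include "Python.h"

    typedef struct interpreter_frame {
        PyCodeObject *code;
        PyObject *globals;
        PyObject *builtins;
        PyObject *locals;
        struct interpreter_frame *previous;  /* on the thread stack; replaces eager f_back */
        PyFrameObject *frame_obj;            /* created only if Python code asks for the frame */
        int f_lasti;                         /* offset of the last executed instruction */
        int f_state;                         /* created / executing / suspended / ... */
        int stackdepth;
        PyObject *localsplus[1];             /* former "specials" + locals + value stack */
    } InterpreterFrame;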
* Remove code that checks Py_TPFLAGS_HAVE_VERSION_TAG
The field is always present in the type struct, as explained
in the added comment.
* Remove Py_TPFLAGS_HAVE_AM_SEND
The flag is not needed, and since it was added in 3.10 it can be removed now.
With this, all sqlite3 static globals have been moved to the global state.
There are a couple of global static strings left, but there should be no need for adding them to the state.
https://bugs.python.org/issue42064
* Renamed assertLeadingPadding function to match logic
* Added a separate error message for discontinuous padding
* Updated the tests for discontinuous padding
binascii.a2b_base64 gains a strict_mode= parameter. When enabled it will raise an
error on input that deviates from the base64 spec in any way. The default remains
False for backward compatibility.
Code reviews and minor tweaks by: Gregory P. Smith <greg@krypto.org> [Google]
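
The "discontinuous padding" case is input like b"ab=c=". A sketch of the kind of check strict mode performs (illustrative only, not the Modules/binascii.c code):

    #include <stdbool.h>
    #include <stddef.h>

    /* Once a '=' has been seen, every remaining character must also be '=';
       otherwise the padding is discontinuous and strict mode raises an error. */
    static bool
    padding_is_contiguous(const unsigned char *data, size_t len)
    {
        bool seen_padding = false;
        for (size_t i = 0; i < len; i++) {
            if (data[i] == '=') {
                seen_padding = true;
            }
            else if (seen_padding) {
                return false;   /* data after padding: reject in strict mode */
            }
        }
        return true;
    }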
Both `executescript` methods contain the same docstring typo:
_"Executes a multiple SQL statements at once."_ => _"Executes multiple SQL statements at once."_
Automerge-Triggered-By: GH:pablogsal
Fix incorrect handling of exceptions when interpreting dialect objects in
the csv module. Not clearing exceptions between calls to
PyObject_GetAttrString() causes assertion failures in pydebug mode (or with
assertions enabled).
Add a minimal test that would've caught this (passing None as dialect, or
any object that isn't a csv.Dialect subclass, which the csv module allows
and caters to, even though it is not documented). In pydebug mode, the test
triggers the assertion failure in the old code.
Contributed-By: T. Wouters [Google]
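
The fix pattern, sketched with a hypothetical helper: the csv module falls back to a default when a dialect attribute is missing, so the AttributeError has to be cleared before the next C-API call.

    #include "Python.h"

    static PyObject *
    get_dialect_attr(PyObject *dialect, const char *name, PyObject *default_value)
    {
        PyObject *value = PyObject_GetAttrString(dialect, name);
        if (value == NULL) {
            if (!PyErr_ExceptionMatches(PyExc_AttributeError)) {
                return NULL;          /* propagate unexpected errors */
            }
            PyErr_Clear();            /* fall back to the default silently */
            Py_INCREF(default_value);
            value = default_value;
        }
        return value;
    }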
* zlib uses a UINT32_MAX sliding window for the output buffer
These functions have an initial output buffer size parameter:
- zlib.decompress(data, /, wbits=MAX_WBITS, bufsize=DEF_BUF_SIZE)
- zlib.Decompress.flush([length])
If the initial size > UINT32_MAX, use a UINT32_MAX sliding window instead of clamping to UINT32_MAX.
Speed up the case where the initial size equals the actual size.
This fixes a memory consumption and copying performance regression in earlier 3.10 beta releases if someone used an output buffer larger than 4GiB with zlib.decompress.
Reviewed-by: Gregory P. Smith
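
A sketch of the sliding-window idea, under stated assumptions (not the zlibmodule.c code): z_stream.avail_out is a 32-bit uInt, so a buffer larger than UINT32_MAX is filled in at most UINT32_MAX-sized passes rather than being clamped up front.

    #include <stdint.h>
    #include <zlib.h>

    /* Caller has already set up zst->next_in / zst->avail_in. */
    static int
    inflate_into(z_stream *zst, Bytef *out, size_t out_len)
    {
        size_t written = 0;
        while (written < out_len) {
            size_t room = out_len - written;
            zst->next_out = out + written;
            zst->avail_out = room > (size_t)UINT32_MAX ? UINT32_MAX : (uInt)room;
            uInt before = zst->avail_out;
            int err = inflate(zst, Z_NO_FLUSH);
            written += before - zst->avail_out;
            if (err != Z_OK) {
                return err;   /* Z_STREAM_END on success, or an error code */
            }
        }
        return Z_OK;
    }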
Remove useless calls to PyThread_exit_thread() in two unit tests of
_testcapi and _testembed modules.
Co-authored-by: Alexey Izbyshev <izbyshev@ispras.ru>
_thread.start_new_thread() no longer calls PyThread_exit_thread()
explicitly at the thread exit, the call was redundant.
On Linux with glibc, pthread_cancel() dynamically loads the
libgcc_s.so.1 library. dlopen() can fail if no file descriptor is
available to open the file. In this case, the process aborts with the
error message:
"libgcc_s.so.1 must be installed for pthread_cancel to work"
pthread_cancel() unwinds back to the thread's wrapping function that
calls the thread entry point.
The unwind function is dynamically loaded from the libgcc_s library
since it is tightly coupled to the C compiler (GCC). The unwinder
depends on DWARF, the compiler generates DWARF, so the unwinder
belongs to the compiler.
Thanks Florian Weimer and Carlos O'Donell for their help on
investigating this issue.
* Make functools types immutable
* Multibyte codec types are now immutable
* pyexpat.xmlparser is now immutable
* array.arrayiterator is now immutable
* _thread types are now immutable
* _csv types are now immutable
* _queue.SimpleQueue is now immutable
* mmap.mmap is now immutable
* unicodedata.UCD is now immutable
* sqlite3 types are now immutable
* _lsprof.Profiler is now immutable
* _overlapped.Overlapped is now immutable
* _operator types are now immutable
* _winapi.Overlapped is now immutable
* _lzma types are now immutable
* _bz2 types are now immutable
* _dbm.dbm and _gdbm.gdbm are now immutable
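
These conversions boil down to adding Py_TPFLAGS_IMMUTABLETYPE (new in 3.10) to each heap type's spec. A minimal sketch with a made-up type name:

    #include "Python.h"

    static PyType_Slot example_slots[] = {
        {0, NULL},
    };

    /* With Py_TPFLAGS_IMMUTABLETYPE, assigning attributes on the resulting
       type from Python raises TypeError, as it does for static types. */
    static PyType_Spec example_spec = {
        .name = "example.Example",
        .basicsize = sizeof(PyObject),
        .flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_IMMUTABLETYPE,
        .slots = example_slots,
    };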
* Move connection type to global state
* Move cursor type to global state
* Move prepare protocol type to global state
* Move row type to global state
* Move statement type to global state
* ADD_TYPE takes a pointer
* pysqlite_get_state is now static inline
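
A sketch of the resulting shape, with assumed names (the real pysqlite_state and ADD_TYPE differ in detail): the heap type pointers live in the per-module state, and the ADD_TYPE-style helper stores through a pointer into it.

    #include "Python.h"

    typedef struct {
        PyTypeObject *ConnectionType;
        PyTypeObject *CursorType;
        PyTypeObject *PrepareProtocolType;
        PyTypeObject *RowType;
        PyTypeObject *StatementType;
    } pysqlite_state;

    static int
    add_type(PyObject *module, PyTypeObject **slot, PyType_Spec *spec)
    {
        /* Create the heap type bound to this module and keep the pointer
           in the module state instead of a C static. */
        *slot = (PyTypeObject *)PyType_FromModuleAndSpec(module, spec, NULL);
        if (*slot == NULL) {
            return -1;
        }
        return PyModule_AddType(module, *slot);
    }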
Change the behaviour of `math.pow(0.0, -math.inf)` and `math.pow(-0.0, -math.inf)` to return positive infinity instead of raising `ValueError`. This makes `math.pow` consistent with the built-in `pow` (and the `**` operator) for this particular special case, and brings the `math.pow` special-case handling into compliance with IEEE 754.
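
For reference, this matches the C library behaviour required by IEEE 754 / C99 Annex F:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* pow(+/-0, -inf) is +inf under IEEE 754; math.pow now agrees
           instead of raising ValueError. */
        printf("%f\n", pow(0.0, -INFINITY));   /* inf */
        printf("%f\n", pow(-0.0, -INFINITY));  /* inf */
        return 0;
    }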