Previously, an AttributeError was always raised internally when an attribute
was not found. This commit skips raising AttributeError when `tp_getattro`
is `PyObject_GenericGetAttr`, which makes hasattr() and getattr() about
4x faster when the attribute is not found.
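For illustration, a minimal sketch of the calls this speeds up; the class below is made up, and the fast path only applies to types whose `tp_getattro` is the default `PyObject_GenericGetAttr` (no custom `__getattr__` or `__getattribute__`):

```python
class Point:
    """A plain class: attribute lookup uses PyObject_GenericGetAttr."""
    def __init__(self):
        self.x = 1

p = Point()

# Both calls probe for a missing attribute. With this change the
# AttributeError is no longer created internally on the "not found"
# path, which is what makes these about 4x faster.
print(hasattr(p, "y"))          # False
print(getattr(p, "y", None))    # None (the default); no exception escapes
```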
* Add the _Py_GetLocaleconvNumeric() function: decode the decimal_point and
thousands_sep fields of localeconv() from the LC_NUMERIC encoding,
rather than from the LC_CTYPE encoding.
* Modify locale.localeconv() and the "n" formatter of str.format() (for
int, float and complex) to use _Py_GetLocaleconvNumeric()
internally.
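For context, a short usage sketch of the affected code paths; the locale name is an assumption and may not be installed everywhere:

```python
import locale

# Assumed locale name; availability varies by system.
locale.setlocale(locale.LC_NUMERIC, "de_DE.UTF-8")

conv = locale.localeconv()
print(conv["decimal_point"], conv["thousands_sep"])   # e.g. ',' and '.'

# The "n" presentation type uses the LC_NUMERIC decimal point and
# thousands separator, which are now decoded from the LC_NUMERIC
# encoding rather than the LC_CTYPE encoding.
print(format(1234567, "n"))   # e.g. '1.234.567'
print(format(3.14, "n"))      # e.g. '3,14'
```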
Modify locale.localeconv(), time.tzname, os.strerror() and other
functions to ignore the UTF-8 Mode: always use the current locale
encoding. A short usage sketch follows the list of changes below.
Changes:
* Add _Py_DecodeLocaleEx() and _Py_EncodeLocaleEx(). On decoding or
encoding error, they return the position of the error and an error
message which are used to raise Unicode errors in
PyUnicode_DecodeLocale() and PyUnicode_EncodeLocale().
* Replace _Py_DecodeCurrentLocale() with _Py_DecodeLocaleEx().
* PyUnicode_DecodeLocale() now uses _Py_DecodeLocaleEx() in all
cases, including the strict error handler.
* Add _Py_DecodeUTF8Ex(): returns more information on decoding
errors and supports the strict error handler.
* Rename _Py_EncodeUTF8_surrogateescape() to _Py_EncodeUTF8Ex().
* Replace _Py_EncodeCurrentLocale() with _Py_EncodeLocaleEx().
* Ignore the UTF-8 mode when encoding/decoding localeconv(),
strerror() and the time zone name.
* PyUnicode_DecodeLocale(), PyUnicode_DecodeLocaleAndSize()
and PyUnicode_EncodeLocale() now ignore the UTF-8 mode: always use
the "current" locale.
* Remove _PyUnicode_DecodeCurrentLocale(),
_PyUnicode_DecodeCurrentLocaleAndSize() and
_PyUnicode_EncodeCurrentLocale().
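A minimal sketch of the observable behaviour, assuming a non-UTF-8 locale such as the one below is installed (the locale name is an assumption):

```python
import locale
import os
import time

# Even with the UTF-8 Mode enabled (python -X utf8), these values are now
# decoded with the current locale encoding instead of being forced to UTF-8.
locale.setlocale(locale.LC_ALL, "fr_FR.ISO8859-1")   # assumed locale name

print(repr(locale.localeconv()["thousands_sep"]))
print(os.strerror(2))    # message text typically follows the LC_MESSAGES locale
print(time.tzname)       # time zone names are decoded with the locale encoding
```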
`os.path.is*()` can return False if the file can't be accessed.
The behaviour is documented in detail in `os.path.exists()`.
Link to `os.path.exists()` from `os.path.is*()`.
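For illustration, a small hedged sketch of the documented behaviour; the path is hypothetical and assumes the process cannot traverse it:

```python
import os.path

# Hypothetical path under a directory the current user cannot traverse.
path = "/root/.ssh/config"

# Like os.path.exists(), the is*() helpers swallow OSError (including
# PermissionError) and report False instead of raising.
print(os.path.isfile(path))   # False if the file cannot be accessed
print(os.path.isdir(path))    # False as well
print(os.path.exists(path))   # False, as documented for exists()
```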
PyMemoryView_FromMemory() created a memoryview referring to
the internal data of the string. When the string is destroyed,
the memoryview ends up referring to freed memory.
In the lexical analysis reference documentation, the internal link to
the string literal concatenation section was written as `.. _string-catenation:`.
Changed that to `.. _string-concatenation:`.
when serializing into a memory buffer with the C pickle implementation.
This optimization is already performed when serializing into memory
with the Python pickle implementation, or into a file with both
implementations.
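A minimal sketch of the code path this covers: serializing a large bytes object into memory with the C pickler under protocol 4, where framing applies. The payload size is arbitrary:

```python
import pickle

payload = b"x" * (10 * 1024 * 1024)   # a large bytes object

# pickle.dumps() uses the C implementation and serializes into an
# in-memory buffer -- the path affected by this change.
data = pickle.dumps(payload, protocol=4)
assert pickle.loads(data) == payload
```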
Add new functions ignoring the UTF-8 mode:
* _Py_DecodeCurrentLocale()
* _Py_EncodeCurrentLocale()
* _PyUnicode_DecodeCurrentLocaleAndSize()
* _PyUnicode_EncodeCurrentLocale()
Modify the readline module to use these functions.
Re-enable test_readline.test_nonascii().
- primary change is to add a new default filter entry for
'default::DeprecationWarning:__main__' (see the example after this list)
- secondary change is an internal one to cope with plain
strings in the warnings module's internal filter list
(this avoids the need to create a compiled regex object
early on during interpreter startup)
- assorted documentation updates, including many more
examples of configuring the warnings settings
- additional tests to ensure that both the pure Python and
the C accelerated warnings modules have the expected
default configuration
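A quick way to check the resulting defaults (run without -W options or PYTHONWARNINGS set; the deprecated message is made up):

```python
import warnings

# With the new default entry 'default::DeprecationWarning:__main__',
# a DeprecationWarning triggered by code in __main__ is displayed ...
warnings.warn("old_api() is deprecated", DeprecationWarning)

# ... while DeprecationWarning raised from imported modules is still
# ignored by the 'ignore::DeprecationWarning' entry later in the list.
print(warnings.filters[:3])   # inspect the head of the default filter list
```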
Third party projects may wish to hide their own internal machinery in
order to present more comprehensible tracebacks to end users
(e.g. Jinja2 and Trio both do this).
Previously such projects have had to rely on ctypes to do so:
fe3dadacdf/jinja2/debug.py (L345)
1e86b1aee8/trio/_core/_multierror.py (L296)
This provides a Python-level API for creating and modifying real
traceback objects, allowing tracebacks to be edited at runtime.
Patch by Nathaniel Smith.
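A sketch of the kind of editing this enables, assuming traceback objects now have a writable `tb_next`; the `__tracebackhide__` global checked here is a made-up convention, not part of any API:

```python
def _strip_hidden(tb):
    """Drop leading frames whose module sets a (made-up) __tracebackhide__ flag."""
    while tb is not None and tb.tb_frame.f_globals.get("__tracebackhide__"):
        tb = tb.tb_next
    if tb is not None:
        tb.tb_next = _strip_hidden(tb.tb_next)   # tb_next is now writable
    return tb

try:
    1 / 0
except ZeroDivisionError as exc:
    exc.__traceback__ = _strip_hidden(exc.__traceback__)
    raise
```

Without this API, the same effect required ctypes hacks like the ones linked above.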
It's more trouble than it's worth, since AppVeyor only checks the HEAD commit of a PR rather than the full diff against the base branch to decide which files changed.
The picklers no longer allocate temporary memory when dumping large
bytes and str objects into a file object. Instead the data is
streamed directly into the underlying file object.
Previously the C implementation would buffer all content and issue a
single call to file.write() at the end of the dump. With protocol 4
this behavior has changed to issue one call to file.write() per frame.
The Python pickler with protocol 4 now dumps each frame's content as a
memoryview of a BytesIO instance that is never reused, and the
memoryview is no longer released after the call to write(). This makes
it possible for the file object to delay access to the memoryviews of
previous frames without forcing any additional memory copy, as was
already possible with the C pickler.
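A short sketch that observes the streaming behaviour; the counting file object is made up purely for illustration:

```python
import pickle

class CountingWriter:
    """Minimal file-like object that counts write() calls."""
    def __init__(self):
        self.calls = 0
        self.size = 0
    def write(self, data):            # data may be bytes or a memoryview
        self.calls += 1
        self.size += len(data)
        return len(data)

big = b"x" * (64 * 1024 * 1024)       # large enough to span several frames

out = CountingWriter()
pickle.dump(big, out, protocol=4)
# Several smaller write() calls rather than one giant buffered write;
# the large payload is streamed directly without a temporary copy.
print(out.calls, out.size)
```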
Add a new argument "-m" to the pdb module to allow
users to run `python -m pdb -m my_module_name`.
This relies on private APIs in the runpy module to work,
but we can get away with that since they're both part of
the standard library and can be updated together if
the runpy internals get refactored.
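A usage sketch; the module name and its contents are made up:

```python
# my_module_name.py -- a hypothetical module to debug.
# Start it under the debugger with:
#     python -m pdb -m my_module_name
# pdb stops before the first statement of the module, just as it does
# when debugging a script with `python -m pdb script.py`.

def main():
    total = sum(range(10))
    print("total =", total)

if __name__ == "__main__":
    main()
```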