unintentionally caused them to get written in text mode under Windows.
As a result, when .pyc files were later read (in binary mode), the
magic number was always wrong (note that .pyc magic numbers deliberately
include \r and \n characters, so this was "good" breakage, 100% across
all .pyc files, not random corruption in a subset). Fixed that.
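For illustration, a minimal sketch (file and helper names hypothetical) of
why the mode matters when reading the magic number back:

    #include <stdio.h>
    #include "Python.h"
    #include "marshal.h"

    /* Hypothetical: the .pyc must be opened in binary mode ("rb").  In text
       mode, Windows would translate the \r and \n bytes deliberately embedded
       in the magic number, so the check would fail for every .pyc file. */
    static long
    read_magic(const char *path)
    {
        FILE *fp = fopen(path, "rb");
        long magic;
        if (fp == NULL)
            return -1;
        magic = PyMarshal_ReadLongFromFile(fp);
        fclose(fp);
        return magic;
    }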
Add definitions of INT_MAX and LONG_MAX to pyport.h.
Remove includes of limits.h and conditional definitions of INT_MAX
and LONG_MAX elsewhere.
This closes SourceForge patch #101659 and bug #115323.
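For reference, a sketch of the kind of fallback pyport.h can now centralize
(the guards and values shown are illustrative, assuming a 32-bit int and an
at-least-32-bit long):

    #ifndef INT_MAX
    #define INT_MAX 2147483647
    #endif

    #ifndef LONG_MAX
    #define LONG_MAX 0X7FFFFFFFL
    #endif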
Add three new convenience functions to the PyModule_*() family:
PyModule_AddObject(), PyModule_AddIntConstant(), PyModule_AddStringConstant().
This closes SourceForge patch #101233.
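Illustrative use in a module init function (module name, method table, and
constants are made up for the example):

    #include "Python.h"

    static PyMethodDef example_methods[] = {
        {NULL, NULL, 0, NULL}
    };

    void
    initexample(void)
    {
        PyObject *m = Py_InitModule("example", example_methods);
        if (m == NULL)
            return;
        PyModule_AddIntConstant(m, "MAX_SIZE", 1024);
        PyModule_AddStringConstant(m, "version", "1.0");
        PyModule_AddObject(m, "error",
                           PyErr_NewException("example.error", NULL, NULL));
    }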
"s#" will now return a pointer to the default encoded string data
of the Unicode object instead of a pointer to the raw UTF-16
data.
The latter is still available via PyObject_AsReadBuffer().
The patch also adds an optimization for string objects which is
based on the fact that string objects return the raw character data
for getreadbuffer access and are always single-segment.
which implements the automatic conversion from Unicode to a string
object using the default encoding.
The new API is then put to use to have eval() and exec accept
Unicode objects as the code parameter. This closes bugs #110924
and #113890.
As a side effect, the traditional C APIs PyString_Size() and
PyString_AsString() will also accept Unicode objects as
parameters.
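A hedged sketch of what this means for an extension function using "s#"
(the function itself is made up): a Unicode argument now arrives as its
default-encoded string data.

    #include "Python.h"

    /* Illustrative only: a Unicode object passed here is delivered as
       default-encoded string data; the raw buffer remains reachable via
       PyObject_AsReadBuffer(). */
    static PyObject *
    example_echo(PyObject *self, PyObject *args)
    {
        char *data;
        int len;
        if (!PyArg_ParseTuple(args, "s#", &data, &len))
            return NULL;
        return PyString_FromStringAndSize(data, len);
    }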
When reading a short, sign-extend on platforms where shorts are
bigger than 16 bits.
When reading a long, repair the unportable sign extension that was
being done for 64-bit machines (it assumed that signed right shift
sign-extends).
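A minimal sketch of the portable approach (helper name hypothetical):
reconstruct the value from its bytes and sign-extend explicitly instead of
relying on signed right shift.

    /* Hypothetical helper: sign-extend a 16-bit value read from a
       little-endian stream, without assuming that short is exactly 16 bits
       or that signed right shift sign-extends. */
    static long
    read_short16(const unsigned char *p)
    {
        long x = (long)p[0] | ((long)p[1] << 8);
        if (x & 0x8000L)        /* high bit set: value is negative */
            x -= 0x10000L;      /* extend the sign portably */
        return x;
    }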
I can't test this, so I'm just checking it in with blind faith in Andy.
I've tested that it doesn't break a non-Pth build on Linux.
Changes include:
- There's a --with-pth configure option.
- Instead of _GNU_PTH, we test for HAVE_PTH.
- Better signal handling.
- (The config.h.in file is regenerated in a slightly different order.)
can cause it to get called by multiple threads simultaneously.
Ditto for PyInterpreterState_Delete.
Of the former, the docs say "The interpreter lock need not be held, but may
be held if it is necessary to serialize calls to this function". This
kinda implies it both is and isn't thread-safe.
Of the latter, the docs merely say "The interpreter lock need not be
held.", and the clause about serializing is absent.
I expect it was *believed* these are both thread-safe, and the bit about
serializing via the global lock was meant as a permission rather than a
caution.
I also expect we've never seen a problem here because the Python core
(prior to the _PyPclose fix) only calls these functions once per run.
The Py_NewInterpreter subsystem exposed by the C API (but not used by
Python itself) also calls them, but that subsystem appears to be very
rarely used.
Whatever, they're both thread-safe now.
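For the record, a hypothetical sketch of the serialization pattern (the lock
name and helper are made up): a private lock guards the shared state that
both the New and Delete routines touch.

    #include "Python.h"
    #include "pythread.h"

    static PyThread_type_lock head_lock;    /* created once at startup */

    static void
    example_new_state(void)
    {
        PyThread_acquire_lock(head_lock, 1);    /* 1 == WAIT_LOCK */
        /* ... allocate the new state and link it into the shared list ... */
        PyThread_release_lock(head_lock);
    }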
ceval.c:
define recursion_limit (static), default value is 2500
define Py_GetRecursionLimit and Py_SetRecursionLimit
raise RuntimeError if limit is exceeded
PC/config.h:
remove plat-specific definition
sysmodule.c:
add sys.(get|set)recursionlimit
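A minimal sketch of the mechanism (the evaluator fragment and message wording
are illustrative, not verbatim from ceval.c):

    #include "Python.h"

    static int recursion_limit = 2500;      /* default */

    int Py_GetRecursionLimit(void)
    {
        return recursion_limit;
    }

    void Py_SetRecursionLimit(int new_limit)
    {
        recursion_limit = new_limit;
    }

    /* Hypothetical: the check performed on entry to the frame evaluator. */
    static int
    example_enter_frame(PyThreadState *tstate)
    {
        if (++tstate->recursion_depth > recursion_limit) {
            --tstate->recursion_depth;
            PyErr_SetString(PyExc_RuntimeError,
                            "maximum recursion depth exceeded");
            return -1;
        }
        return 0;
    }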
how 'import' was called with a compile-time mechanism: create either a tuple
of the import arguments, or None (in the case of a normal import), add it to
the code-block constants, and load it onto the stack before calling
IMPORT_NAME.
PyRun_FileEx(). These are the same as their non-Ex counterparts but
have an extra argument, a flag telling them to close the file when
done.
Then this is used by Py_Main() and execfile() to close the file after
it is parsed but before it is executed.
Adding APIs seems strange given the feature freeze but it's the only
way I see to close the bug report without incompatible changes.
[ Bug #110616 ] source file stays open after parsing is done (PR#209)
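A hedged usage sketch, assuming PyRun_SimpleFileEx() is among the new Ex
variants (the script name is illustrative):

    #include <stdio.h>
    #include "Python.h"

    static void
    run_script(void)
    {
        FILE *fp = fopen("script.py", "r");
        if (fp == NULL)
            return;
        /* closeit == 1: the Ex variant closes fp once parsing is done,
           so the file is not held open while the code executes. */
        PyRun_SimpleFileEx(fp, "script.py", 1);
    }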
(This fix is a bit broken, just as the test already was: the tests for
testlist and listmaker are always done, whereas the test for exprlist and
the actual abort() are only done if Py_DEBUG is defined. Suggestions
welcome, I guess ;)
Add the EXTENDED_ARG opcode to the virtual machine, allowing 32-bit
arguments to opcodes instead of being forced to stick to the 16-bit
limit. This is especially useful for machine-generated code, which
can be too long for the SET_LINENO parameter to fit into 16 bits.
This closes the implementation portion of SourceForge patch #100893.
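A hypothetical standalone decoder (not the eval loop's actual code) showing
how EXTENDED_ARG widens an argument: the EXTENDED_ARG instruction carries the
high 16 bits, and the following opcode's own argument carries the low 16 bits.

    #include "Python.h"
    #include "opcode.h"

    static long
    read_instruction(const unsigned char **pp, int *opcode)
    {
        const unsigned char *p = *pp;
        long oparg = 0;

        *opcode = *p++;
        if (*opcode >= HAVE_ARGUMENT) {
            oparg = p[0] | (p[1] << 8);        /* little-endian 16-bit arg */
            p += 2;
            if (*opcode == EXTENDED_ARG) {     /* high half; real op follows */
                *opcode = *p++;
                oparg = (oparg << 16) | (p[0] | (p[1] << 8));
                p += 2;
            }
        }
        *pp = p;
        return oparg;
    }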
- Fix bug in thread_pthread.h::PyThread_get_thread_ident() where
  sizeof(pthread_t) < sizeof(long) (see the sketch after this list).
- Add 'configure' for:
- SIZEOF_PTHREAD if pthread_t can be included via <pthread.h>
- setting Monterey system name
- appropriate CC, LINKCC, LDSHARED, OPT, and CCSHARED for Monterey
- Add section in README for Monterey build
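For the thread-ident fix, a hypothetical sketch of the portability issue
(the SIZEOF_* macros stand in for values determined by configure):

    #include <pthread.h>

    /* When pthread_t is narrower than long, reinterpreting its storage as
       a long picks up garbage bytes, so the value is cast directly instead. */
    long
    example_get_thread_ident(void)
    {
        volatile pthread_t id = pthread_self();
    #if SIZEOF_PTHREAD_T <= SIZEOF_LONG
        return (long) id;                       /* fits: plain cast */
    #else
        return (long) *(volatile long *) &id;   /* wider: take the leading word */
    #endif
    }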