Call error_ret() in decode_str(). It was already called in some other
places, but not consistently. It is safe to call PyTokenizer_Free() after
calling error_ret().
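To make that invariant concrete, here is a minimal, self-contained C
sketch of the pattern; the struct and function names are illustrative
stand-ins, not the real Parser/tokenizer.c code. The error helper frees
the buffer and NULLs the pointer, so the normal cleanup routine remains
safe to call afterwards.

    /* Illustrative sketch only -- mirrors the relationship between
     * error_ret() and PyTokenizer_Free(), but is not the real code. */
    #include <stdlib.h>

    struct tok_sketch {        /* stand-in for struct tok_state */
        char *buf;
        int decoding_erred;
    };

    static char *
    error_ret_sketch(struct tok_sketch *tok)
    {
        tok->decoding_erred = 1;
        free(tok->buf);
        tok->buf = NULL;       /* the cleanup below will now see NULL */
        return NULL;           /* callers treat NULL as "decoding failed" */
    }

    static void
    tokenizer_free_sketch(struct tok_sketch *tok)
    {
        free(tok->buf);        /* free(NULL) is a no-op, so this stays safe */
        free(tok);
    }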
This change implements a new bytecode compiler, based on a
transformation of the parse tree to an abstract syntax defined in
Parser/Python.asdl.
The compiler implementation is not complete, but it is stable enough to
run the entire test suite, except for two disabled tests.
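As a rough sketch of how the new pipeline hangs together from C: source
is parsed, the parse tree is transformed into the ASDL-defined abstract
syntax, and the result is fed to the new bytecode compiler. The names
below (PyParser_ASTFromString, PyAST_Compile, PyArena_New) are the ones
that ended up in the released 2.5 headers; the API at the time of this
commit may have differed in detail.

    #include <Python.h>
    #include <Python-ast.h>    /* mod_ty and the ASDL-generated node types */

    static PyCodeObject *
    compile_source_sketch(const char *source)
    {
        PyArena *arena = PyArena_New();
        mod_ty mod;
        PyCodeObject *co;

        if (arena == NULL)
            return NULL;

        /* tokenize + parse, then transform the parse tree into the AST */
        mod = PyParser_ASTFromString(source, "<sketch>", Py_file_input,
                                     NULL, arena);
        if (mod == NULL) {
            PyArena_Free(arena);
            return NULL;
        }

        /* run the new AST-based bytecode compiler */
        co = PyAST_Compile(mod, "<sketch>", NULL, arena);
        PyArena_Free(arena);   /* the AST nodes live in the arena */
        return co;
    }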
- SF Bug #772896, unknown encoding results in MemoryError, which is not helpful
I will only backport the segfault fix. I'll let Anthony decide if he wants
the other changes backported. I will do the backport if asked.
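For context, the symptom is easiest to reproduce by compiling a string
that carries an unrecognized coding cookie. A minimal sketch using the
public embedding API (the source text and file name here are made up):

    /* Sketch: an unknown coding cookie used to surface as a MemoryError
     * from the tokenizer's NULL return; with the fix it is reported as a
     * proper SyntaxError.  Assumes an initialized interpreter. */
    #include <Python.h>

    int
    main(void)
    {
        PyObject *code;

        Py_Initialize();
        code = Py_CompileString("# -*- coding: bogus -*-\nx = 1\n",
                                "<sketch>", Py_file_input);
        if (code == NULL)
            PyErr_Print();     /* should show SyntaxError, not MemoryError */
        Py_XDECREF(code);
        Py_Finalize();
        return 0;
    }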
because (essentially) I didn't realise that Py_BEGIN/END_ALLOW_THREADS
actually expand to nothing under a no-threads build, so if you somehow
NULLed out the thread state (e.g. by calling PyEval_SaveThread()) it would
stay NULLed when you return to Python. Argh!
Backport candidate.
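For reference, the macro pair expands roughly like this (paraphrased from
Include/ceval.h, not quoted verbatim), which is why a no-threads build
silently drops the save/restore of the thread state:

    #ifdef WITH_THREAD
    #define Py_BEGIN_ALLOW_THREADS {      \
            PyThreadState *_save;         \
            _save = PyEval_SaveThread();
    #define Py_END_ALLOW_THREADS          \
            PyEval_RestoreThread(_save);  \
            }
    #else
    /* no-threads build: the pair expands to an empty block, so nothing
     * restores a thread state that was NULLed out by other means */
    #define Py_BEGIN_ALLOW_THREADS {
    #define Py_END_ALLOW_THREADS }
    #endif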
The input encoding is now honored in interactive modes just like in
non-interactive modes. This allows non-latin-1 users to write unicode
strings directly and sets Japanese users free from weird manual
escaping <wink> in shift_jis environments.
(Reviewed by Martin v. Loewis)
This closes patch [ 960406 ] "unblock signals in threads", although the
changes do not correspond exactly to any patch attached to that report.
Non-main threads no longer have all signals masked.
A different interface to readline is used.
The handling of signals inside calls to PyOS_Readline is now rather
different.
These changes are all a bit scary! Review and cross-platform testing
much appreciated.
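To make the first point concrete, the per-thread signal mask is what
pthread_sigmask() controls. The sketch below shows the general mechanism
(a thread that blocks every signal versus one that leaves the mask
alone); it is not the actual thread_pthread.h code.

    #include <pthread.h>
    #include <signal.h>
    #include <stddef.h>

    /* Old behaviour (roughly): each non-main thread masked every signal. */
    static void *
    thread_body_old(void *arg)
    {
        sigset_t all;
        (void)arg;
        sigfillset(&all);
        pthread_sigmask(SIG_BLOCK, &all, NULL);  /* no signals delivered here */
        return NULL;
    }

    /* New behaviour (roughly): the mask is left alone, so the thread can
     * receive signals like the main thread does. */
    static void *
    thread_body_new(void *arg)
    {
        (void)arg;
        return NULL;
    }

    int
    main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, thread_body_old, NULL);
        pthread_create(&t2, NULL, thread_body_new, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }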
A change from Duncan Booth, to deal with changes in the way pgen gets
built. Note that graminit.[ch] aren't normally built on Windows (they're
obtained from CVS).
Removing that line broke the build, so I'm restoring it. I'm not sure
what Neal's intent was, since the line following the one he removed was
"REQN(i, 1)", so i is obviously used. ;)
Compiling source that ended with an indented code block but no trailing
newline would raise SyntaxError.
This would have been a four-line change in parsetok.c... Except
codeop.py depends on this behavior, so a compilation flag had to be
invented that causes the tokenizer to revert to the old behavior;
this required extra changes to 2 .h files, 2 .c files, and 2 .py
files. (Fixes SF bug #501622.)
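A hedged C illustration of the two behaviours, assuming the flag ended up
spelled PyCF_DONT_IMPLY_DEDENT as in later pythonrun.h; codeop-style
callers pass it to keep the old behaviour so they can still detect
incomplete interactive input.

    #include <Python.h>

    int
    main(void)
    {
        const char *src = "if 1:\n    x = 1";  /* note: no final newline */
        PyCompilerFlags cf;
        PyObject *code;

        Py_Initialize();

        /* new behaviour: compiles fine despite the missing newline */
        code = Py_CompileString(src, "<sketch>", Py_file_input);
        Py_XDECREF(code);

        /* old behaviour on request: NULL with a SyntaxError set */
        cf.cf_flags = PyCF_DONT_IMPLY_DEDENT;
        code = Py_CompileStringFlags(src, "<sketch>", Py_file_input, &cf);
        if (code == NULL)
            PyErr_Clear();
        Py_XDECREF(code);

        Py_Finalize();
        return 0;
    }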
(see Patch 512981). I changed stdin to sys_stdin in the body of the
function, but did not change stderr to sys_stdout, though I suspect that may
be the correct course. I don't know the code involved well enough to judge.
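For orientation, since patch 512981 the readline entry points take
explicit FILE * arguments instead of touching the C-level stdin/stdout
directly. Below is a simplified sketch of a function body with that
post-patch signature; it is not the real Parser/myreadline.c code, and
the stderr-vs-sys_stdout choice in it is exactly the open question above.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static char *
    readline_sketch(FILE *sys_stdin, FILE *sys_stdout, char *prompt)
    {
        char buf[1024];
        char *result;

        (void)sys_stdout;
        if (prompt != NULL) {
            fprintf(stderr, "%s", prompt);  /* or sys_stdout?  see above */
            fflush(stderr);
        }
        if (fgets(buf, sizeof(buf), sys_stdin) == NULL)
            return NULL;                    /* EOF */
        result = malloc(strlen(buf) + 1);
        if (result != NULL)
            strcpy(result, buf);
        return result;                      /* caller frees */
    }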
-- replace them with the slightly faster PyObject_Call(o, a, NULL). (The
difference is that the latter requires a to be a tuple; the former
allows other values and wraps them in a tuple if necessary; it
involves two more levels of C function calls to accomplish all that.)
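A small hedged example of the substitution being described, assuming the
call site already holds a real tuple of arguments (the precondition that
makes the cheaper call safe):

    /* Sketch: when args is known to be a tuple, the two calls are
     * equivalent, but PyObject_Call() skips the wrapping logic. */
    #include <Python.h>

    static PyObject *
    call_with_tuple_sketch(PyObject *callable, PyObject *args /* a tuple */)
    {
        /* before: return PyObject_CallObject(callable, args); */
        return PyObject_Call(callable, args, NULL);  /* slightly faster */
    }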