* use %r instead of backticks since backticks are going away in Py3k (a short sketch follows this list)
* PyArena_Malloc() already sets PyErr_NoMemory so we don't need to do it again
* the signature for ast2obj_int incorrectly used a bool rather than a long
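A minimal sketch of the %r substitution from the first item (hedged; the real change is in the AST code generator, and the variable below is illustrative):

# %r calls repr() on its argument, so it is a portable replacement for
# the Python-2-only backtick syntax.
value = "sample"
# old style, Python 2 only:  msg = "got " + `value`
msg = "got %r" % (value,)    # -> "got 'sample'" in both 2.x and 3.x
print(msg)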
is specified at the top of the file. Also add a note that Python/Python-ast.c
needs to be committed separately after a change to the AST grammar to capture
the revision number of the change (which is what __version__ is set to).
Prevent an invalid memory read from test_coding in case the done flag is set.
In that case, the loop isn't entered. I wonder whether, rather than setting
the done flag in the cases before the loop, they should just exit early.
This code looks like it should be refactored.
Backport candidate (also the early break above if decoding_fgets fails)
mismatches. At least I hope this fixes them all.
This reverts part of my change from yesterday that converted everything
in Parser/*.c to use the PyObject_* API. The encoding doesn't really need
to use PyMem_*; however, it uses new_string(), which must return PyMem_*
to handle the result of PyOS_Readline(), which returns PyMem_* memory.
If there were two versions of new_string(), one that returned PyMem_*
for tokens and one that returned PyObject_* for encodings, that could
also fix this problem. I'm not sure which version would be clearer.
This seems to fix both Guido's and Phillip's problems, so it's good enough
for now. After this change, it would be good to review Parser/*.c
for consistent use of the 2 memory APIs.
malloc/realloc type functions, as well as renaming one variable called 'new'
in tokenizer.c. Still lots more to be done; I'm going to be checking in one
chunk at a time or the patch will be massively huge. Still compiles ok with
gcc.
This was the result of inconsistent use of PyMem_* and PyObject_* allocators.
By changing to use PyObject_* allocator almost everywhere, this removes
the inconsistency.
tracing/line number table in except blocks.
Reflow long lines introduced by col_offset changes. Update test_ast
to handle new fields in excepthandler.
As the note in Python.asdl says, we might want to rethink how attributes
are handled. Perhaps they should be the same as other fields, with
the primary difference being how they are defined for all types within
a sum.
Also fix asdl_c so that constructors with int fields don't fail when
passed a zero value.
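A hedged Python model of the check involved (the real constructors are generated C; the names below are purely illustrative):

# Only object (pointer-like) fields should be validated for presence;
# an int field must accept 0 as a legitimate value.
def make_handler(body, lineno):       # hypothetical generated constructor
    if body is None:                  # object field: presence check is fine
        raise ValueError("field 'body' is required")
    # NOTE: no 'if not lineno' test here -- lineno == 0 must be accepted.
    return {"body": body, "lineno": lineno}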
objimpl.h, pymem.h: Stop mapping PyMem_{Del, DEL} and PyMem_{Free, FREE}
to PyObject_{Free, FREE} in a release build. They're aliases for the
system free() now.
_subprocess.c/sp_handle_dealloc(): Since the memory was originally
obtained via PyObject_NEW, it must be released via PyObject_FREE (or
_DEL).
pythonrun.c, tokenizer.c, parsermodule.c: I lost count of the number of
PyObject vs PyMem mismatches in these -- it's like the specific
function called at each site was picked at random, sometimes even with
memory obtained via PyMem getting released via PyObject. Changed most
to use PyObject uniformly, since the blobs allocated are predictably
small in most cases, and obmalloc is generally faster than system
mallocs for such sizes.
If extension modules in real life prove as sloppy as Python's front
end, we'll have to revert the objimpl.h + pymem.h part of this patch.
Note that no problems will show up in a debug build (all calls still go
thru obmalloc then). Problems will show up only in a release build, most
likely segfaults.
This will hopefully get rid of some Coverity warnings, be a hint to
developers, and be marginally faster.
Some asserts were added where the type is known locally but depends
on values from another function.
but without a specified encoding: decoding_fgets() (and decoding_feof()) can
return NULL and fiddle with the 'tok' struct, making tok->buf NULL. This is
okay in the other cases of calls to decoding_*(), it seems, but not in this
one.
This should get a test added somewhere, but the test suite doesn't seem to
test encoding anywhere (although plenty of tests use it).
It seems to me that decoding errors in other places in the code (like at the
start of a token, instead of in the middle of one) make the code end up
adding small integers to NULL pointers, but happen to check for error states
before using the calculated new pointers. I haven't been able to trigger any
other crashes, in any case.
I would nominate this file for a complete rewrite for Py3k. The whole
decoding trick is too bolted-on for my tastes.
- IMPORT_NAME takes an extra argument from the stack: the relativeness of
the import. Only passed to __import__ when it's not -1.
- __import__() takes an optional 5th argument for the same thing; it
defaults to -1 (old semantics: try relative, then absolute)
- 'from . import name' imports name (be it module or regular attribute)
from the current module's *package*. Likewise, 'from .module import name'
will import name from a sibling to the current module.
- Importing from outside a package is not allowed; 'from . import sys' in a
toplevel module will not work, nor will 'from .. import sys' in a
(single-level) package.
- 'from __future__ import absolute_import' will turn on the new semantics
for import and from-import: imports will be absolute, except for
from-import with dots.
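A sketch of these rules, assuming a hypothetical layout pkg/__init__.py,
pkg/mod.py, pkg/sibling.py (where sibling defines 'name'); the file below
is pkg/mod.py:

from __future__ import absolute_import   # bare imports become absolute

from . import sibling          # the sibling module from this module's package
from .sibling import name      # an attribute (or module) from a sibling

import string                  # absolute: always the stdlib module
# In a top-level (non-package) module, 'from . import sys' is an error.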
Includes tests for regular imports and importhooks, parser changes and a
NEWS item, but no compiler-package changes or documentation changes.
This was started by Mike Bland and completed by Guido
(with help from Neal).
This still needs a __future__ statement added;
Thomas is working on Michael's patch for that aspect.
There's a small amount of code cleanup and refactoring
in ast.c, compile.c and ceval.c (I fixed the lltrace
behavior when EXT_POP is used -- however I had to make
lltrace a static global).
breaks the parser module, because it adds the if/else construct as well as
two new grammar rules for backward compatibility. If no one else fixes
parsermodule, I guess I'll go ahead and fix it later this week.
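For reference, the construct in question is the PEP 308 conditional expression:

x = 5
label = "big" if x > 3 else "small"   # evaluates to "big"
print(label)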
The TeX code was checked with texcheck.py, but not rendered. There is
actually a slight incompatibility:
>>> (x for x in lambda:0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: iteration over non-sequence
changes into
>>> (x for x in lambda: 0)
File "<stdin>", line 1
(x for x in lambda: 0)
^
SyntaxError: invalid syntax
Since there's no way the former version can be useful, it's probably a
bugfix ;)
Expand set of errors caught in set_context(). Some new errors, some
old error messages changed for consistency.
Fixed error checking in generator expression code. The first set of
tests was an impossible condition given the grammar. In general, the
ast code uses REQ() for those sanity checks.
Fix some error handling for augmented assignments. As comments in the
code explain, set_context() ought to work here, but I got unexpected
crashes when I tried it. Should come back to this.
Add note to Grammar that yield expression is a special case.
Add doctest cases for SyntaxErrors raised by ast.c.
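For example, assignments like these are now rejected up front (messages approximate):

>>> f() = 1
SyntaxError: can't assign to function call
>>> x + 1 = 1
SyntaxError: can't assign to operator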
comment based on 'sys.argv[0]' does not depend on the path. For Python
builds from a remote directory ("/path/to/configure; make") the previous
logic used to include the "/path/to" portion in Python-ast.h. Then svn
would consider this file to be locally modified.
Strip off leading dots and slash so the generated files are the same regardless
of whether you configure in the checkout directory or build.
If anyone configures in a different directory, we might want a cleaner
approach using os.path.*(). Hopefully this is good enough.
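A hedged sketch of the intended normalization (helper name hypothetical; the real change lives in the generator script):

def strip_build_prefix(path):
    # './Include/Python-ast.h' and 'Include/Python-ast.h' should produce
    # the same embedded comment wherever configure was run.
    while path.startswith(('./', '../')):
        path = path.split('/', 1)[1]
    return path.lstrip('/')

print(strip_build_prefix('../Include/Python-ast.h'))  # Include/Python-ast.h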
Call error_ret() in decode_str(). It was called in some other places,
but seemed inconsistent. It is safe to call PyTokenizer_Free() after
calling error_ret().
This change implements a new bytecode compiler, based on a
transformation of the parse tree to an abstract syntax defined in
Parser/Python.asdl.
The compiler implementation is not complete, but it is in stable
enough shape to run the entire test suite excepting two disabled
tests.
- SF Bug #772896, unknown encoding results in MemoryError, which is not helpful
I will only backport the segfault fix. I'll let Anthony decide if he wants
the other changes backported. I will do the backport if asked.
because (essentially) I didn't realise that Py_BEGIN/END_ALLOW_THREADS
actually expanded to nothing under a no-threads build, so if you somehow
NULLed out the threadstate (e.g. by calling PyThread_SaveThread) it would
stay NULLed when you return to Python. Argh!
Backport candidate.
modes like non-interactive modes. This allows for non-latin-1 users
to write unicode strings directly and sets Japanese users free from
weird manual escaping <wink> in shift_jis environments.
(Reviewed by Martin v. Loewis)
[ 960406 ] unblock signals in threads
although the changes do not correspond exactly to any patch attached to
that report.
Non-main threads no longer have all signals masked.
A different interface to readline is used.
The handling of signals inside calls to PyOS_Readline is now rather
different.
These changes are all a bit scary! Review and cross-platform testing
much appreciated.
A change from Duncan Booth, to deal with changes in the way pgen gets
built. Note that graminit.[ch] aren't normally built on Windows (they're
obtained from CVS).
the build, so I'm restoring it. I'm not sure what Neal's intent was,
since the line following the one he removed was "REQN(i, 1)" so i is
obviously used. ;)
with an indented code block but no newline would raise SyntaxError.
This would have been a four-line change in parsetok.c... Except
codeop.py depends on this behavior, so a compilation flag had to be
invented that causes the tokenizer to revert to the old behavior;
this required extra changes to 2 .h files, 2 .c files, and 2 .py
files. (Fixes SF bug #501622.)
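The behavior in question, sketched (the compatibility flag itself is internal to the tokenizer):

# Before the fix, compile() raised SyntaxError on an indented block with
# no trailing newline; afterwards it compiles and runs.
code = compile("if 1:\n    x = 1", "<test>", "exec")
ns = {}
exec code in ns          # Python 2 spelling, matching this era of the code
assert ns["x"] == 1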
(see Patch 512981). I changed stdin to sys_stdin in the body of the
function, but did not change stderr to sys_stdout, though I suspect that may
be the correct course. I don't know the code involved well enough to judge.
-- replace them with the slightly faster PyObject_Call(o,a,NULL). (The
difference is that the latter requires a to be a tuple; the former
allows other values and wraps them in a tuple if necessary; it
involves two more levels of C function calls to accomplish all that.)
Change the parser and compiler to use PyMalloc.
Only the files implementing processes that will request memory
allocations small enough for PyMalloc to be a win have been
changed, namely:
- Python/compile.c
- Parser/acceler.c
- Parser/node.c
- Parser/parsetok.c
This augments the aggressive overallocation strategy implemented by
Tim Peters in PyNode_AddChild() [Parser/node.c], in reducing the
impact of platform malloc()/realloc()/free() corner case behaviour.
Such corner cases are known to be triggered by test_longexp and
test_import.
Jeremy Hylton, in accepting this patch, recommended this as a
bugfix candidate for 2.2. While the changes to Python/compile.c
and Parser/node.c backport easily (and could go in), the changes
to Parser/acceler.c and Parser/parsetok.c require other not
insignificant changes as a result of the differences in the memory
APIs between 2.3 and 2.2, which I'm not in a position to work
through at the moment. This is a pity, as the Parser/parsetok.c
changes are the most important after the Parser/node.c changes, due
to the size of the memory requests involved and their frequency.
disaster too, so this change is here to stay. Beefed up the comments
and added some stats Andrew reported. Also a small change to the
macro body, to make it obvious how XXXROUNDUP(0) ends up returning 0.
See SF patch 578297 for context.
Not a bugfix candidate, as the functional changes here have already
been backported to the 2.2 line (this patch just improves clarity).
This gets us closer to consistent Ctrl+C behaviour on NT and Win9x. NT now
reliably generates KeyboardInterrupt exceptions when a file I/O operation is
aborted. Bugfix candidate.
more trivial lexical helper macros so that uses of these guys expand
to nothing at all when they're not enabled. This should help sub-
standard compilers that can't do a good job of optimizing away the
previous "(void)0" expressions.
Py_DECREF: There's only one definition of this now. Yay! That
was the last one in the family defined multiple times in an #ifdef
maze.
Py_FatalError(): Changed the char* signature to const char*.
_Py_NegativeRefcount(): New helper function for the Py_REF_DEBUG
expansion of Py_DECREF. Calling an external function cuts down on
the volume of generated code. The previous inline expansion of abort()
didn't work as intended on Windows (the program often kept going, and
the error msg scrolled off the screen unseen). _Py_NegativeRefcount
calls Py_FatalError instead, which captures our best knowledge of
how to abort effectively across platforms.
children gets large, to avoid severe platform realloc() degeneration
in extreme cases (like test_longexp).
Bugfix candidate.
This was doing extremely timid over-allocation, just rounding up to the
nearest multiple of 3. Now so long as the number of children is <= 128,
it rounds up to a multiple of 4 but via a much faster method. When the
number of children exceeds 128, though, and more space is needed, it
doubles the capacity. This is aggressive over-allocation.
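A Python model of that arithmetic (the real code is a C macro plus a helper; this just shows how the sizes grow):

def xxx_roundup(n):
    if n <= 1:
        return n
    if n <= 128:
        return (n + 3) & ~3        # round up to a multiple of 4, cheaply
    result = 256                   # past 128 children, round up to the
    while result < n:              # nearest power of two, i.e. capacity
        result <<= 1               # doubles as the node keeps growing
    return result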
SF patch <http://www.python.org/sf/578297> has Andrew MacIntyre using
PyMalloc in the parser to overcome platform malloc problems in
test_longexp on OS/2 EMX. Jack Jansen notes there that it didn't help
him on the Mac, because the Mac has problems with frequent ever-growing
reallocs, not just with gazillions of teensy mallocs. Win98 has no
visible problems with test_longexp, but I tried boosting the test-case
size and soon got "senseless" MemoryErrors out of it, and soon after
crashed the OS: as I've seen in many other contexts before, while the
Win98 realloc remains zippy in bad cases, it leads to extreme
fragmentation of user address space, to the point that the OS barfs.
I don't yet know whether this fixes Jack's Mac problems, but it does cure
Win98's problems when boosting the test case size. It also speeds
test_longexp in its unaltered state.
Highlights: import and friends will understand any of \r, \n and \r\n
as end of line. Python file input will do the same if you use mode 'U'.
Everything can be disabled by configuring with --without-universal-newlines.
See PEP 278 for details.
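For example (assumes a file with mixed line endings exists):

f = open('somefile.txt', 'U')    # 'U' requests universal newline mode
for line in f:
    assert '\r' not in line      # \r and \r\n arrive normalized to \n
f.close()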
This introduces:
- A new operator // that means floor division (the kind of division
where 1/2 is 0).
- The "future division" statement ("from __future__ import division)
which changes the meaning of the / operator to implement "true
division" (where 1/2 is 0.5).
- New overloadable operators __truediv__ and __floordiv__.
- New slots in the PyNumberMethods struct for true and floor division,
new abstract APIs for them, new opcodes, and so on.
I emphasize that without the future division statement, the semantics
of / will remain unchanged until Python 3.0.
Not yet implemented are warnings (default off) when / is used with int
or long arguments.
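In short:

from __future__ import division   # per-module opt-in to true division

print(1 / 2)     # 0.5 -- true division under the future statement
print(1 // 2)    # 0   -- the new floor division operator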
This has been on display since 7/31 as SF patch #443474.
Flames to /dev/null.
This is really stupid because it cannot be suppressed or altered using
the warning framework; that's because the warning framework is built
on Python interpreter internals, and the parser generator doesn't have
access to any of those (you cannot use anything of type PyObject * in
the parser).
But it's better than nothing, and implementing a proper check for this
appears to require modifying compile.c in a dozen places, for which I
don't have the stamina today. I promise we'll do better in 2.2a2.
At least it tells you the filename and line number (unlike the first
hack I considered :-).
that 'yield' is a keyword. This doesn't help test_generators at all! I
don't know why not. These things do work now (and didn't before this
patch):
1. "from __future__ import generators" now works in a native shell.
2. Similarly "python -i xxx.py" now has generators enabled in the
shell if xxx.py had them enabled.
3. This program (which was my doctest proxy) works fine:
from __future__ import generators
source = """\
def f():
    yield 1
"""
exec compile(source, "", "single") in globals()
print type(f())