If PyTokenizer_FromString() is called with an empty string, the
tokenizer's line_start member never gets initialized. Later, it is
compared with the token pointer 'a' in parsetok.c:193, and that
comparison is undefined behavior.
Added checks for integer overflows, contributed by Google. Some are
only enabled when asserts are left in the code, in cases where they
can't be triggered from Python code.
Rather than sprinkle casts throughout the code, change Py_CHARMASK to
always cast its result to an unsigned char. This should ensure we
do the right thing when accessing an array with the result.
The new PyParser_*Ex() functions are based on Neal's suggestion and
initial patch. The new __future__ feature makes all '' and r''
literals unicode strings; b'' and br'' literals stay (byte) strings.
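A quick sketch of the effect at the Python level, assuming this refers
to the unicode_literals future import of Python 2.6 (the entry above
does not name the feature):
    from __future__ import unicode_literals
    s = ''        # plain literals are now unicode strings
    r = r'\d+'    # raw literals too
    b = b''       # byte literals keep their (byte) string type
    assert isinstance(s, unicode) and isinstance(r, unicode)
    assert isinstance(b, str)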
This work is substantially Anthony Baxter's, from issue
1633807. I just freshened it, made a few minor tweaks,
and added the test cases. I also created issue 2412,
which is to check 2to3's behavior with the print
function. I also added myself to ACKS.
Added 0b and 0o literals to tokenizer.
Modified PyOS_strtoul to support 0b and 0o inputs.
Modified PyLong_FromString to support guessing 0b and 0o inputs.
Renamed test_hexoct.py to test_int_literal.py and added binary tests.
Added uppercase and lowercase 0b, 0o, and 0x tests to test_int_literal.py
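At the Python level the result looks like this (an illustrative
sketch; the C-level entry points are PyOS_strtoul and
PyLong_FromString):
    # New binary and octal literals, in either prefix case:
    assert 0b1010 == 0B1010 == 10
    assert 0o17 == 0O17 == 15
    # int() with base 0 now guesses 0b/0o prefixes as well as 0x,
    # mirroring the PyLong_FromString change:
    assert int('0b1010', 0) == 10
    assert int('0o17', 0) == 15
    assert int('0x1F', 0) == 31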
Event alloc_fn: Called allocation function "metacompile" [model]
Event var_assign: Assigned variable "gr" to storage returned from "metacompile"
gr = metacompile(n);
Event pass_arg: Variable "gr" not freed or pointed-to in function "maketables" [model]
g = maketables(gr);
translatelabels(g);
addfirstsets(g);
Event leaked_storage: Returned without freeing storage "gr"
return g;
* use %r instead of backticks since backticks are going away in Py3k
* PyArena_Malloc() already sets PyErr_NoMemory so we don't need to do it again
* the signature for ast2obj_int incorrectly used a bool, rather than a long
is specified at the top of the file. Also add a note that Python/Python-ast.c
needs to be committed separately after a change to the AST grammar to capture
the revision number of the change (which is what __version__ is set to).
Prevent an invalid memory read from test_coding in case the done flag is set.
In that case, the loop isn't entered. I wonder whether, rather than
setting the done flag in the cases before the loop, they should just exit early.
This code looks like it should be refactored.
Backport candidate (also the early break above if decoding_fgets fails)
This should fix the PyMem_*/PyObject_* allocator mismatches. At least
I hope this fixes them all.
This reverts part of my change from yesterday that converted everything
in Parser/*.c to use the PyObject_* API. The encoding doesn't really need
to use PyMem_*; however, it uses new_string(), which must return PyMem_*
memory to handle the result of PyOS_Readline(), which returns PyMem_* memory.
If there were 2 versions of new_string(), one that returned PyMem_*
memory for tokens and one that returned PyObject_* memory for
encodings, that could also fix this problem. I'm not sure which
approach would be clearer.
This seems to fix both Guido's and Phillip's problems, so it's good enough
for now. After this change, it would be good to review Parser/*.c
for consistent use of the 2 memory APIs.
malloc/realloc type functions, as well as renaming one variable called 'new'
in tokenizer.c. Still lots more to be done; going to be checking in one
chunk at a time or the patch will be massively huge. Still compiles ok with
gcc.
This was the result of inconsistent use of PyMem_* and PyObject_* allocators.
By changing to the PyObject_* allocator almost everywhere, this removes
the inconsistency.
tracing/line number table in except blocks.
Reflow long lines introduced by col_offset changes. Update test_ast
to handle new fields in excepthandler.
As the note in Python.asdl says, we might want to rethink how attributes
are handled. Perhaps they should be the same as other fields, with
the primary difference being how they are defined for all types within
a sum.
Also fix asdl_c so that constructors with int fields don't fail when
passed a zero value.
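A quick way to see the new excepthandler location fields from Python
(a sketch using the ast module spelling; the field names follow the
updated Python.asdl):
    import ast
    tree = ast.parse("try:\n    pass\nexcept ValueError:\n    pass\n")
    handler = tree.body[0].handlers[0]
    # The handler now carries its own location attributes:
    assert hasattr(handler, "lineno") and hasattr(handler, "col_offset")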
objimpl.h, pymem.h: Stop mapping PyMem_{Del, DEL} and PyMem_{Free, FREE}
to PyObject_{Free, FREE} in a release build. They're aliases for the
system free() now.
_subprocess.c/sp_handle_dealloc(): Since the memory was originally
obtained via PyObject_NEW, it must be released via PyObject_FREE (or
_DEL).
pythonrun.c, tokenizer.c, parsermodule.c: I lost count of the number of
PyObject vs PyMem mismatches in these -- it's like the specific
function called at each site was picked at random, sometimes even with
memory obtained via PyMem getting released via PyObject. Changed most
to use PyObject uniformly, since the blobs allocated are predictably
small in most cases, and obmalloc is generally faster than system
mallocs then.
If extension modules in real life prove as sloppy as Python's front
end, we'll have to revert the objimpl.h + pymem.h part of this patch.
Note that no problems will show up in a debug build (all calls still go
thru obmalloc then). Problems will show up only in a release build, most
likely segfaults.
This will hopefully get rid of some Coverity warnings, be a hint to
developers, and be marginally faster.
Some asserts were added where the type is currently known but depends
on values from another function.
but without a specified encoding: decoding_fgets() (and decoding_feof()) can
return NULL and fiddle with the 'tok' struct, making tok->buf NULL. This is
okay in the other cases of calls to decoding_*(), it seems, but not in this
one.
This should get a test added somewhere, but the testsuite doesn't seem
to test encoding anywhere (although plenty of tests use it).
It seems to me that decoding errors in other places in the code (like at the
start of a token, instead of in the middle of one) make the code end up
adding small integers to NULL pointers, but happen to check for error states
before using the calculated new pointers. I haven't been able to trigger any
other crashes, in any case.
I would nominate this file for a complete rewrite for Py3k. The whole
decoding trick is too bolted-on for my tastes.
- IMPORT_NAME takes an extra argument from the stack: the relativeness of
the import. Only passed to __import__ when it's not -1.
- __import__() takes an optional 5th argument for the same thing; it
defaults to -1 (old semantics: try relative, then absolute)
- 'from . import name' imports name (be it module or regular attribute)
from the current module's *package*. Likewise, 'from .module import name'
will import name from a sibling to the current module.
- Importing from outside a package is not allowed; 'from . import sys' in a
toplevel module will not work, nor will 'from .. import sys' in a
(single-level) package.
- 'from __future__ import absolute_import' will turn on the new semantics
for import and from-import: imports will be absolute, except for
from-import with dots (see the sketch after this list).
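A small sketch of these semantics, using a hypothetical package layout
(pkg/__init__.py, pkg/main.py, pkg/sibling.py); it is illustrative and
not part of the patch:
    # Inside pkg/main.py:
    from __future__ import absolute_import
    from . import sibling        # pkg.sibling, from the current package
    from .sibling import name    # name (module or attribute) from a sibling
    import string                # absolute under the future import: the stdlib module
    # 'from .. import anything' fails here because pkg has no parent
    # package, just as 'from . import sys' fails in a top-level module.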
Includes tests for regular imports and importhooks, parser changes and a
NEWS item, but no compiler-package changes or documentation changes.
This was started by Mike Bland and completed by Guido
(with help from Neal).
This still needs a __future__ statement added;
Thomas is working on Michael's patch for that aspect.
There's a small amount of code cleanup and refactoring
in ast.c, compile.c and ceval.c (I fixed the lltrace
behavior when EXT_POP is used -- however I had to make
lltrace a static global).
breaks the parser module, because it adds the if/else construct as well as
two new grammar rules for backward compatibility. If no one else fixes
parsermodule, I guess I'll go ahead and fix it later this week.
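For reference, the if/else construct in question appears to be the
conditional expression syntax:
    condition = True
    x = 1 if condition else 2    # the new if/else construct in expression position
    assert x == 1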
The TeX code was checked with texcheck.py, but not rendered. There is
actually a slight incompatibility:
>>> (x for x in lambda:0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: iteration over non-sequence
changes into
>>> (x for x in lambda: 0)
File "<stdin>", line 1
(x for x in lambda: 0)
^
SyntaxError: invalid syntax
Since there's no way the former version can be useful, it's probably a
bugfix ;)
Expand set of errors caught in set_context(). Some new errors, some
old error messages changed for consistency.
Fixed error checking in generator expression code. The first set of
tests checked conditions that are impossible given the grammar. In
general, the
ast code uses REQ() for those sanity checks.
Fix some error handling for augmented assignments. As comments in the
code explain, set_context() ought to work here, but I got unexpected
crashes when I tried it. Should come back to this.
Add note to Grammar that yield expression is a special case.
Add doctest cases for SyntaxErrors raised by ast.c.
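For example, the kind of errors set_context() reports (output trimmed;
the exact messages vary between versions):
    >>> (x for x in range(3)) = 1
    SyntaxError: can't assign to generator expression
    >>> f() = 1
    SyntaxError: can't assign to function call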