that 'yield' is a keyword. This doesn't help test_generators at all! I
don't know why not. These things do work now (and didn't before this
patch):
1. "from __future__ import generators" now works in a native shell.
2. Similarly "python -i xxx.py" now has generators enabled in the
shell if xxx.py had them enabled.
3. This program (which was my doctest proxy) works fine:
from __future__ import generators
source = """\
def f():
    yield 1
"""
exec compile(source, "", "single") in globals()
print type(f())
that info to code dynamically compiled *by* code compiled with generators
enabled. Doesn't yet work because there's still no way to tell the parser
that "yield" is OK (unlike nested_scopes, the parser has its fingers in
this too).
Replaced PyEval_GetNestedScopes by a more-general
PyEval_MergeCompilerFlags. Perhaps I should not have? I doubted it was
*intended* to be part of the public API, so just did.
the yield statement. I figure we have to have this in before I can
release 2.2a1 on Wednesday.
Note: test_generators is currently broken, I'm counting on Tim to fix
this.
Probable fix (the bug report doesn't have enough info to say for sure).
find_init_module(): Insist on a case-sensitive match for __init__ files.
Given __INIT__.PY instead, find_init_module() thought that was fine, but
the later attempt to do find_module("__INIT__.PY") didn't and its caller
silently suppressed the resulting ImportError. Now find_init_module()
refuses to accept __INIT__.PY to begin with.
Bugfix candidate; specific to platforms with case-insensitive filesystems.
path (with no profile/trace function) through eval_code2() and
eval_frame() avoids several checks.
In the common cases of calls, returns, and exception propagation,
eval_code2() and eval_frame() used to test two values in the
thread-state: the profiling function and the tracing function. With
this change, a flag is set in the thread-state if either of these is
active, allowing a single check to suffice when both are NULL. This
also simplifies the code needed when either function is in use but is
already active (to avoid profiling/tracing the profiler/tracer); the
flag is set to 0 when the profile/trace code is entered, allowing the
same check to suffice for "already in the tracer" for call/return/
exception events.
"return expr" instances in generators (which latter may be generators
due to otherwise invisible "yield" stmts hiding in "if 0" blocks).
This was fun the first time, but this has gotten truly ugly now.
Python interpreter.
This change adds two new C-level APIs: PyEval_SetProfile() and
PyEval_SetTrace(). These can be used to install profile and trace
functions implemented in C, which can operate at much higher speeds
than Python-based functions. The overhead for calling a C-based
profile function is a very small fraction of a percent of the overhead
involved in calling a Python-based function.
The machinery required to call a Python-based profile or trace
function has been moved to sysmodule.c, where sys.setprofile() and
sys.settrace() simply become users of the new interface.
As a side effect, SF bug #436058 is fixed; there is no longer a
_PyTrace_Init() function to declare.
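For illustration only, a minimal Python-level sketch of the hook this
machinery feeds; the names profiler and square are made up:
import sys
def profiler(frame, event, arg):
    # invoked for 'call', 'return' and 'exception' events
    print event, frame.f_code.co_name
def square(x):
    return x * x
sys.setprofile(profiler)
square(3)
sys.setprofile(None)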
Implement sys.maxunicode.
Explicitly wrap around upper/lower computations for wide Py_UNICODE.
When decoding large characters with UTF-8, represent expected test
results using the \U notation.
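A quick illustration at the Python level (results depend on whether the
build uses a narrow or wide Py_UNICODE):
import sys
# 0xffff on narrow (UCS-2) builds, 0x10ffff on wide (UCS-4) builds
print hex(sys.maxunicode)
# \U notation spells characters outside the BMP; a narrow build stores
# this as a surrogate pair, a wide build as a single code point
print len(u"\U00010000")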
- the correct range for the error message is range(0x110000);
- put the 4-byte Unicode-size code inside the same else branch as the
2-byte code, rather than generating unreachable code in the 2-byte case.
- Don't hide the 'else' behind the '}'.
(I would prefer that in 4-byte mode, any value should be accepted, but
reasonable people can argue about that, so I'll put that off.)
Add configure option --enable-unicode.
Add config.h macros Py_USING_UNICODE, PY_UNICODE_TYPE, Py_UNICODE_SIZE,
SIZEOF_WCHAR_T.
Define Py_UCS2.
Encode and decode large UTF-8 characters into single Py_UNICODE values
for wide Unicode types; likewise for UTF-16.
Remove test whether sizeof Py_UNICODE is two.
Iterators list and Python-Dev; e.g., these all pass now:
def g1():
    try:
        return
    except:
        yield 1
assert list(g1()) == []
def g2():
    try:
        return
    finally:
        yield 1
assert list(g2()) == [1]
def g3():
    for i in range(3):
        yield None
    yield None
assert list(g3()) == [None] * 4
compile.c: compile_funcdef and com_return_stmt: Just van Rossum's patch
to compile the same code for "return" regardless of function type (this
goes back to the previous scheme of returning Py_None).
ceval.c: gen_iternext: take a return (but not a yield) of Py_None as
meaning the generator is exhausted.
the next free valuestack slot, not to the base (in America, stacks push
and pop at the top -- they mutate at the bottom in Australia <wink>).
eval_frame(): assert that f_stacktop isn't NULL upon entry.
frame_dealloc(): avoid ordered pointer comparisons involving f_stacktop
when f_stacktop is NULL.
reference to f_back when it's really needed. Do a little whitespace
normalization as well. This whole file is a big war between tabs and spaces
but now is probably not the time to reindent everything.
NeilS, please check! This came from staring at your genbug.py, but I'm
not sure it plugs all possible holes. Without this, I caught a
frameobject refcount going negative, and it was also the cause (in debug
build) of _Py_ForgetReference's attempt to forget an object with already-
NULL _ob_prev and _ob_next pointers -- although I'm still not entirely
sure how! Part of the difficulty is that frameobjects are stored on a
free list that gets recycled very quickly, so if there's a stray pointer
to one of them it never looks like an insane frameobject (never goes
through the free() mangling MS debug forces, etc).
and trace functions lazily, which incurs extra argument pushing and checks
in the C overhead for profiling/tracing, create the strings semi-lazily
when the Python code first registers a profile or trace function. This
simplifies the trampoline into the profile/trace functions.
Armin Rigo pointed out that the way the line-# table got built didn't work
for lines generating more than 255 bytes of bytecode. Fixed as he
suggested, plus corresponding changes to pyassem.py, plus added some
long overdue docs about this subtle table to compile.c.
Bugfix candidate (line numbers may be off in tracebacks under -O).
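Roughly, the table is a string of (bytecode delta, line delta) byte pairs,
and deltas that don't fit in one byte must be split across several pairs;
a hedged sketch of a decoder (the helper name decode_lnotab is mine):
def decode_lnotab(code):
    # co_lnotab pairs up bytecode-offset deltas with line-number deltas
    tab = code.co_lnotab
    addr, line = 0, code.co_firstlineno
    pairs = [(addr, line)]
    for i in range(0, len(tab), 2):
        addr = addr + ord(tab[i])
        line = line + ord(tab[i + 1])
        pairs.append((addr, line))
    return pairs
def f():
    x = 1
    y = 2
    return x + y
print decode_lnotab(f.func_code)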
that should be used to cache an interned version of the event
string passed to the profile/trace function. call_trace() will
create interned strings and cache them in the storage
specified by this additional parameter, avoiding a lot of string
object creation at runtime when using the profiling or tracing
functions.
All call sites are modified to pass the additional parameter, and four
static PyObject* variables are allocated to cache the interned string
objects.
This closes SF patch #431257.
in release builds. Suggested by Martin v. Loewis.
I'm half tempted to macroize PyErr_Occurred too, as the whole thing could
collapse to just
_PyThreadState_Current->curexc_type
In the default branch, keep three ifs that are used if level == 0, the
most common case. Note that the first if here is a slight optimization
for the 'O' format.
Second part of SF patch 426072.
Note that lots of code was re-indented.
Replace two-step of convertsimple() and convertsimple1() with
convertsimple() and helper converterr(), which is called to format
error messages when convertsimple() fails. The old code did all the
real work in convertsimple1(), but deferred error message formatting
to convertsimple(). The result was paying the price of a second
function call on every call just to format error messages in the
failure cases.
Factor the buffer-handling code out of convertsimple() and package
it as convertbuffer().
Add two macros to ease readability of Unicode conversions,
UNICODE_DEFAULT_ENCODING() and CONV_UNICODE, an error string.
The convertsimple() routine had awful indentation problems, primarily
because there were two tabs between the case line and the body of the
case statements. This patch reformats the entire function to have a
single tab between case line and case body, which makes the code
easier to read (and consistent with ceval). The introduction of
converterr() exacerbated the problem and prompted this fix.
Also, eliminate non-standard whitespace after opening paren and before
closing paren in a few if statements.
(This checkin is part of SF patch 426072.)
Keyword arguments passed to builtin functions that don't take them are
ignored.
>>> {}.clear(x=2)
>>>
instead of
>>> {}.clear(x=2)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: clear() takes no keyword arguments
If we have a PyCFunction (builtin) and it is METH_VARARGS only, load
the args and dispatch to call_cfunction() directly. This provides a
small speedup for perhaps the most common function calls -- builtins.
Store floats and doubles to full precision in marshal.
Test that floats read from .pyc/.pyo closely match those read from .py.
Declare PyFloat_AsString() in floatobject header file.
Add new PyFloat_AsReprString() API function.
Document the functions declared in floatobject.h.
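A quick Python-level check of the intent (illustrative only):
import marshal
x = 3.141592653589793
y = marshal.loads(marshal.dumps(x))
print repr(x), repr(y), x == y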
Check for free in class and method only if nested scopes are enabled.
Add assertion to verify that no free variables occur when nested
scopes are disabled.
XXX When should nested scopes be made non-optional on the trunk?
NEEDS DOC CHANGES.
More AttributeErrors transmuted into TypeErrors, in test_b2.py, and,
again, this strikes me as a good thing.
This checkin completes the iterator generalization work that obviously
needed to be done. Can anyone think of others that should be changed?
NEEDS DOC CHANGES.
Possibly contentious: The first time s.next() yields StopIteration (for
a given map argument s) is the last time map() *tries* s.next(). That
is, if other sequence args are longer, s will never again contribute
anything but None values to the result, even if trying s.next() again
could yield another result. This is the same behavior map() used to have
wrt IndexError, so it's the only way to be wholly backward-compatible.
I'm not a fan of letting StopIteration mean "try again later" anyway.
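For example (the first argument below is an iterator that is exhausted
after one item; map() then fills in None for its remaining slots instead
of calling next() again):
print map(None, iter([1]), [10, 20, 30])
# -> [(1, 10), (None, 20), (None, 30)]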
Directory containing
Spam.py
spam/__init__.py
Then "import Spam" caused a SystemError, because code checking for
the existence of "Spam/__init__.py" finds it on a case-insensitive
filesystem, but then bails because the directory it finds it in
doesn't match case, and then old code assumed that was still an error
even though it isn't anymore. Changed the code to just continue
looking in this case (instead of calling it an error). So
import Spam
and
import spam
both work now.
Also a 2.1 bugfix candidate (am I supposed to do something with those?).
Took away map()'s insistence that sequences support __len__, and cleaned
up the convoluted code that made it *look* like it really cared about
__len__ (in fact the old ->len field was only *used* as a flag bit, as
the main loop only looked at its sign bit, setting the field to -1 when
IndexError got raised; renamed the field to ->saw_IndexError instead).
The new test case demonstrates the bug. Be more careful in
symtable_resolve_free() to add a var to cells or frees only if it
won't be added under some other rule.
XXX Add new assertion that will catch this bug.
sees it (test_iter.py is unchanged).
- Added a tp_iternext slot, which calls the iterator's next() method;
this is much faster for built-in iterators over built-in types
such as lists and dicts, speeding up pybench's ForLoop by about
25% compared to Python 2.1. (Now there's a good argument for
iterators. ;-)
- Renamed the built-in sequence iterator SeqIter, affecting the C API
functions for it. (This frees up the PyIter prefix for generic
iterator operations.)
- Added PyIter_Check(obj), which checks that obj's type has a
tp_iternext slot and that the proper feature flag is set.
- Added PyIter_Next(obj) which calls the tp_iternext slot. It has a
somewhat complex return condition due to the need for speed: when it
returns NULL, it may not have set an exception condition, meaning
the iterator is exhausted; when the exception StopIteration is set
(or a derived exception class), it means the same thing; any other
exception means some other error occurred.
new slot tp_iter in type object, plus new flag Py_TPFLAGS_HAVE_ITER
new C API PyObject_GetIter(), calls tp_iter
new builtin iter(), with two forms: iter(obj), and iter(function, sentinel)
new internal object types iterobject and calliterobject
new exception StopIteration
new opcodes for "for" loops, GET_ITER and FOR_ITER (also supported by dis.py)
new magic number for .pyc files
new special method for instances: __iter__() returns an iterator
iteration over dictionaries: "for x in dict" iterates over the keys
iteration over files: "for x in file" iterates over lines
TODO:
documentation
test suite
decide whether to use a different way to spell iter(function, sentinel)
decide whether "for key in dict" is a good idea
use iterators in map/filter/reduce, min/max, and elsewhere (in/not in?)
speed tuning (make next() a slot tp_next???)
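A few Python-level illustrations of the new machinery (names like step
and counter are made up):
it = iter([1, 2, 3])
print it.next(), it.next(), it.next()
# iter(function, sentinel) calls function() until it returns the sentinel
counter = [0]
def step():
    counter[0] = counter[0] + 1
    return counter[0]
for x in iter(step, 4):
    print x,
print
# "for x in dict" iterates over the keys
d = {"a": 1, "b": 2}
for key in d:
    print key, d[key]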
now raises NameError instead of UnboundLocalError, because the var in
question is definitely not local. (This affects test_scope.py)
Also update the recent fix by Ping using get_func_name(). Replace
tests of get_func_name() return value with call to get_func_desc() to
match all the other uses.
Calling an unbound method on a C extension class without providing
an instance can yield a segfault. Try "Exception.__init__()" or
"ValueError.__init__()".
This is a simple fix. The error-reporting bits in call_method
mistakenly treat the misleadingly-named variable "func" as a
function, when in fact it is a method.
If we let get_func_name take care of the work, all is fine.
Fix based on patch #414750 by Michael Hudson.
New functions get_func_name() and get_func_desc() return reasonable
names and descriptions for all objects. XXX Even objects that aren't
actually callable.
pickle.py
The code implicitly assumed that all ints fit in 4 bytes, causing all
sorts of mischief (from nonsense results to corrupted pickles).
Repaired that.
marshal.c
The int marshaling code assumed that right shifts of signed longs
sign-extend. Repaired that.
Jeffery Collins pointed out that filterstring decrefs a character object
before it's done using it. This works by accident today because another
module always happens to have an active reference too at the time. The
accident doesn't work after his Pippy modifications, and since it *is*
an accident even in the mainline Python, it should work by design there too.
The patch accomplishes that.
but apparently he had to go to school, so I am checking it in for him.
This makes PyRun_HandleSystemExit() a static instead, called
handle_system_exit(), and lets it use the current exception rather than
passing in an exception. This slightly simplifies the code.
Update docstring and library reference section on 'sys' module.
New API PyErr_Display, just for displaying errors, called by excepthook.
Uncaught exceptions now call sys.excepthook; if that fails, we fall back
to calling PyErr_Display directly.
Also comes with sys.__excepthook__ and sys.__displayhook__.
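A minimal sketch of the Python-level hook (the handler name quiet_hook
is made up):
import sys
def quiet_hook(exc_type, exc_value, exc_tb):
    # last-chance handler for otherwise uncaught exceptions
    print >> sys.stderr, "unhandled %s: %s" % (exc_type.__name__, exc_value)
sys.excepthook = quiet_hook
# sys.__excepthook__ still refers to the original handler, so the default
# behavior can be restored with: sys.excepthook = sys.__excepthook__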
If a module has a future statement enabling nested scopes, they are
also enabled for the exec statement and the functions compile() and
execfile() if they occur in the module.
If Python is run with the -i option, which enters interactive mode
after executing a script, and the script it runs enables nested
scopes, they are also enabled in interactive mode.
XXX The use of -i with -c "from __future__ import nested_scopes" is
not supported. What's the point?
To support these changes, many function variants have been added to
pythonrun.c. All the variant names end with Flags and they take an
extra PyCompilerFlags * argument. It is possible that this complexity
will be eliminated in a future version of the interpreter in which
nested scopes are not optional.
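A small sketch of the behavior described above; the module's future
statement now also governs code compiled by exec (make_adder is just an
example name):
from __future__ import nested_scopes
source = """
def make_adder(n):
    def add(x):
        return x + n        # free variable; needs nested scopes
    return add
"""
exec source in globals()
print make_adder(3)(4)       # 7; a NameError without nested scopes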
frees. Note there doesn't seem to be any way to test LocalsToFast(),
because the instructions that trigger it are illegal in nested scopes
with free variables.
Fix allocation strategy for cells that are also formal parameters.
Instead of emitting LOAD_FAST / STORE_DEREF pairs for each parameter,
have the argument handling code in eval_code2() do the right thing.
A side-effect of this change is that cell variables that are also
arguments are listed at the front of co_cellvars in the order they
appear in the argument list.
has a binding for the name. The fix is in two places:
- in symtable_update_free_vars, ignore a global stmt in a class scope
- in symtable_load_symbols, add extra handling for names that are
defined at class scope and free in a method
Closes SF bug 407800
with free variables. Thanks to Martin v. Loewis for finding two of
the problems. This fixes SF bug 405583.
There is also a C API change: PyFrame_New() is reverting to its
pre-2.1 signature. The change introduced by nested scopes was a
mistake. XXX Is this okay between beta releases?
cell_clear(), the GC helper, must decref its reference to break
cycles.
frame_dealloc() must dealloc all cell vars and free vars in addition
to locals.
eval_code2() setup code must INCREF cells it copies out of the
closure.
The STORE_DEREF opcode implementation must DECREF the object it passes
to PyCell_Set().
Made sure that the warnings issued by symtable_check_unoptimized()
(about import * and exec) contain the proper filename and line number,
and are transformed into SyntaxError exceptions with -Werror.
(Also remove warning about module-level global decl, because we can't
distinguish from code passed to exec.)
Define a PyCompilerFlags type that contains a single element,
cf_nested_scopes, that is true if a nested scopes future statement has
been entered at the interactive prompt.
New API functions:
PyNode_CompileFlags()
PyRun_InteractiveOneFlags()
-- same as their non-Flags counterparts except that they take an
optional PyCompilerFlags pointer
compile.c: In jcompile() use PyCompilerFlags argument. If
cf_nested_scopes is true, compile code with nested scopes. If it
is false, but the code has a valid future nested scopes statement,
set it to true.
pythonrun.c: Create a new PyCompilerFlags object in
PyRun_InteractiveLoop() and thread it through to
PyRun_InteractiveOneFlags().
the more recent versions of that platform, so we use the value (time_t)(-1)
as the error value. This is the type used in the OpenVMS documentation:
http://www.openvms.compaq.com/commercial/c/5763p048.htm#inde
This closes SF tracker bug #404240.
Also clean up an exception message when detecting overflow of time_t values
beyond 4 bytes.
from __future__ import nested_scopes
x=7
def f():
    x=1
    def g():
        global x
        def i():
            def h():
                return x
            return h()
        return i()
    return g()
print f()
print x
This kind of code didn't work correctly because x was treated as free
in i, leading to an attempt to load x in g to make a closure for i.
Solution is to make global decl apply to nested scopes unless there is
an assignment. Thus, x in h is global.
described in PEP 227.
symtable_check_unoptimized() warns about import * and exec with "in"
when it is used in a function that contains a nested function with
free variables. Warnings are issued unless nested scopes are in
effect, in which case these are SyntaxErrors.
symtable_check_shadow() warns about assignments in a function scope
that shadow free variables defined in a nested scope. This will
always generate a warning -- and will behave differently with nested
scopes than without.
Restore full checking for free vars in children, even when nested
scopes are not enabled. This is needed to support warnings for
shadowing.
Change symtable_warn() to return an int-- the return value of
PyErr_WarnExplicit.
Sundry cleanup: Remove commented out code. Break long lines.
global after assign / use.
Note: I'm not updating the PyErr_Warn() call for import * / exec
combined with a function, because I can't trigger it with an example.
Jeremy, just follow the example of the call to PyErr_WarnExplicit()
that I *did* include.
for errors raised in future.c.
Move some helper functions from compile.c to errors.c and make them
API functions: PyErr_SyntaxLocation() and PyErr_ProgramText().
raised by the compiler.
XXX For now, text entered into the interactive interpreter is not
printed in the traceback.
Inspired by a patch from Roman Sulzhyk
compile.c:
Add helper fetch_program_text() that opens a file and reads until it
finds the specified line number. The code is a near duplicate of
similar code in traceback.c.
Modify com_error() to pass two arguments to SyntaxError constructor,
where the second argument contains the offending text when possible.
Modify set_error_location(), now used only by the symtable pass, to
set the text attribute on existing exceptions.
pythonrun.c:
Change parse_syntax_error() to continue if the offset attribute of a
SyntaxError is None. In this case, it sets offset to -1.
Move code from PyErr_PrintEx() into helper function
print_error_text(). In the helper, only print the caret for a
SyntaxError if offset > 0.
http://python.sourceforge.net/peps/pep-0235.html
Renamed check_case to case_ok. Substantial code rearrangement to get
this stuff in one place in the file. Innermost loop of find_module()
now much simpler and #ifdef-free, and I want to keep it that way (it's
bad enough that the innermost loop is itself still in an #ifdef!).
Windows semantics tested and are fine.
Jason, Cygwin *should* be fine if and only if what you did before "worked"
for case_ok.
Jack, the semantics on your flavor of Mac have definitely changed (see
the PEP), and need to be tested. The intent is that your flavor of Mac
now work the same as everything else in the "lower left" box, including
respecting PYTHONCASEOK.
Steven, sorry, you did the most work here so far but you got screwed the
worst. Happy to work with you on repairing it, but I don't understand
anything about all your Mac variants. We need to add another branch (or
two, three, ...?) inside case_ok. But we should not need to change
anything else.
XXX still need to integrate into symtable API
compile.h: Remove ff_n_simple_stmt; obsolete.
Add ff_found_docstring used internally to skip one and only
one string at the beginning of a module.
compile.c: Add check for from __future__ imports too far into the file.
In symtable_global() check for -1 returned from
symtable_lookup(), which signifies name not defined.
Add missing DECREF in symtable_add_def.
Free c->c_future.
future.c: Add special handling for multiple statements joined on a
single line using one or more semicolons; this form can
include an illegal future statement that would otherwise be
hard to detect.
Add support for detecting and skipping doc strings.
Makefile.pre.in: add target future.o
Include/compile.h: define PyFutureFeatures and PyNode_Future()
add c_future slot to struct compiling
Include/symtable.h: add st_future slot to struct symtable
Python/future.c: implementation of PyNode_Future()
Python/compile.c: use PyNode_Future() for nested_scopes support
Python/symtable.c: include compile.h to pick up PyFutureFeatures decl
compile.h: #define NESTED_SCOPES_DEFAULT 0 for Python 2.1
__future__ feature name: "nested_scopes"
symtable.h: Add st_nested_scopes slot. Define flags to track exec and
import star.
Lib/test/test_scope.py: requires nested scopes
compile.c: Fiddle with error messages.
Reverse the sense of ste_optimized flag on
PySymtableEntryObjects. If it is true, there is an optimization
conflict.
Modify get_ref_type to respect st_nested_scopes flags.
Refactor symtable_load_symbols() into several smaller functions,
which use struct symbol_info to share variables. In new function
symtable_update_flags(), raise an error or warning for import * or
bare exec that conflicts with nested scopes. Also, modify handling
of free variables to respect the st_nested_scopes flag.
In symtable_init() assign st_nested_scopes flag to
NESTED_SCOPES_DEFAULT (defined in compile.h).
Add preliminary and often incorrect implementation of
symtable_check_future().
Add symtable_lookup() helper for future use.
Two different but related problems:
1. PySymtable_Free() must explicitly DECREF(st->st_cur), which should
always point to the global symtable entry. This entry is setup by the
first enter_scope() call, but there is never a corresponding
exit_scope() call.
Since each entry has a reference to scopes defined within it, the
missing DECREF caused all symtable entries to be leaked.
2. The leak here masked a separate problem with
PySymtableEntry_New(). When the requested entry was found in
st->st_symbols, the entry was returned without doing an INCREF.
And problem c) The ste_children slot was getting two copies of each
child entry, because it was populating the slot on the first and
second passes. Now only populate on the first pass.
save the __builtin__ module in a static variable. But this doesn't
work across Py_Finalize()/Py_Initialize()! It also doesn't work when
using multiple interpreter states created with PyInterpreterState_New().
So I'm ripping out this small optimization.
This was probably broken since PyImport_Import() was introduced in
1997! We really need a better test suite for multiple interpreter
states and repeatedly initializing.
This fixes the problems Barry reported in Demo/embed/loop.c.
the symbol table pass. These blocks were already ignored by the code
gen pass. Both passes must visit the same set of blocks in the same
order.
Fixes SF bug 132820
They're actually complaining about something more specific, an assignment
in a lambda as an actual argument, so that Python parses the
lambda as if it were a keyword argument. Like f(lambda x: x[0]=42).
The "lambda x: x[0]" part gets parsed as if it were a keyword, being
bound to 42, and the resulting error msg didn't make much sense.
_testcapimodule.c
make sure PyList_Reverse doesn't blow up again
getargs.c
assert args isn't NULL at the top of vgetargs1 instead of
waiting for a NULL-pointer dereference at the end
Bug was introduced by tricks played to make .pyc files executable
via cmdline arg. Then again, -x worked via a trick to begin with.
If anyone can think of a portable way to test -x, be my guest!
create an empty dictionary if it is called without keyword args. Just
pass NULL.
XXX I had believed that this caused weird errors, but the test suite
runs cleanly.
of nested functions. Either is allowed in a function if it contains
no defs or lambdas or the defs and lambdas it contains have no free
variables. If a function is itself nested and has free variables,
either is illegal.
Revise the symtable to use a PySymtableEntryObject, which holds all
the relevant information for a scope, rather than using a bunch of
st_cur_XXX pointers in the symtable struct. The changes simplify the
internal management of the current symtable scope and of the stack.
Added new C source file: Python/symtable.c. (Does the Windows build
process need to be updated?)
As part of these changes, the initial _symtable module interface
introduced in 2.1a2 is replaced. A dictionary of
PySymtableEntryObjects are returned.
hooks to take over the Python import machinery at a very early stage
in the Python startup phase.
If there are still places in the Python interpreter which need to
bypass the __import__ hook, these places must now use
PyImport_ImportModuleEx() instead. So far no other places than in
the import mechanism itself have been identified.
symtable.h, so that they can be used by external modules.
Improve error handling in symtable_enter_scope(), which returns an
error code that went unchecked by most callers. XXX The error handling
in symtable code is sloppy in general.
Modify symtable to record the line number that begins each scope.
This can help to identify which code block is being referred to when
multiple blocks are bound to the same name.
Add st_scopes dict that is used to preserve scope info when
PyNode_CompileSymtable() is called. Otherwise, this information is
tossed as soon as it is no longer needed.
Add Py_SymtableString() to pythonrun; analogous to Py_CompileString().
discussion on python-dev. 'from mod import *' is still banned except
at the module level.
Fix value for special NOOPT entry in symtable. Initialize to 0 instead
of None, so that later uses of PyInt_AS_LONG() are valid. (Bug
reported by Donn Cave.)
replace local REPR macros with PyObject_REPR in object.h
reference manual but not checked: Names bound by import statements may
not occur in global statements in the same scope. The from ... import *
form may only occur in a module scope.
I guess these changes could break code, but the reference manual
warned about them.
Several other small changes
If a variable is declared global in the nearest enclosing scope of a
free variable, then treat it is a global in the nested scope too.
Get rid of com_mangle and symtable_mangle functions and call mangle
directly.
If errors occur during symtable table creation, return -1 from
symtable_build().
Do not increment st_errors in assignment to lambda, because exception
is not set.
Add extra argument to symtable_assign(); the argument, flag, is ORed
with DEF_LOCAL for each symtable_add_def() call.
This change eliminates an extra malloc/free when a frame with free
variables is created. Any cell vars or free vars are stored in
f_localsplus after the locals and before the stack.
eval_code2() fills in the appropriate values after handling
initialization of locals.
To track the size, the frame has an f_size member that tracks the total
size of f_localsplus. It used to be implicitly f_nlocals + f_stacksize.
They're named as if public, so I did a Bad Thing by changing
PyMarshal_ReadObjectFromFile() to suck up the remainder of the file in one
gulp: anyone who counted on that leaving the file pointer merely at the
end of the next object would be screwed. So restored
PyMarshal_ReadObjectFromFile() to its earlier state, renamed the new greedy
code to PyMarshal_ReadLastObjectFromFile(), and changed Python internals to
call the latter instead.
SF patch http://sourceforge.net/patch/?func=detailpatch&patch_id=103453&group_id=5470
PyMember_Set of T_CHAR always raises exception.
Unfortunately, this is a use of a C API function that Python itself never makes, so
there's no .py test I can check in to verify this stays fixed. But the fault in the
code is obvious, and Dave Cole's patch just as obviously fixes it.
The majority of the changes are in the compiler. The mainloop changes
primarily to implement the new opcodes and to pass a function's
closure to eval_code2(). Frames and functions got new slots to hold
the closure.
Include/compile.h
Add co_freevars and co_cellvars slots to code objects.
Update PyCode_New() to take freevars and cellvars as arguments
Include/funcobject.h
Add func_closure slot to function objects.
Add GetClosure()/SetClosure() functions (and corresponding
macros) for getting at the closure.
Include/frameobject.h
PyFrame_New() now takes a closure.
Include/opcode.h
Add four new opcodes: MAKE_CLOSURE, LOAD_CLOSURE, LOAD_DEREF,
STORE_DEREF.
Remove comment about old requirement for opcodes to fit in 7
bits.
compile.c
Implement changes to code objects for co_freevars and co_cellvars.
Modify symbol table to use st_cur_name (string object for the name
of the current scope) and st_cur_children (list of nested blocks).
Also define st_nested, which might more properly be called
st_cur_nested. Add several DEF_XXX flags to track def-use
information for free variables.
New or modified functions of note:
com_make_closure(struct compiling *, PyCodeObject *)
Emit LOAD_CLOSURE opcodes as needed to pass cells for free
variables into nested scope.
com_addop_varname(struct compiling *, int, char *)
Emits opcodes for LOAD_DEREF and STORE_DEREF.
get_ref_type(struct compiling *, char *name)
Return NAME_CLOSURE if ref type is FREE or CELL
symtable_load_symbols(struct compiling *)
Decides what variables are cell or free based on def-use info.
Can now raise SyntaxError if nested scopes are mixed with
exec or from blah import *.
make_scope_info(PyObject *, PyObject *, int, int)
Helper functions for symtable scope stack.
symtable_update_free_vars(struct symtable *)
After a code block has been analyzed, it must check each of
its children for free variables that are not defined in the
block. If a variable is free in a child and not defined in
the parent, then it is defined by the block enclosing the
current one, or it is a global. This does the right logic.
symtable_add_use() is now a macro for symtable_add_def()
symtable_assign(struct symtable *, node *)
Use goto instead of for (;;)
Fixed bug in symtable where name of keyword argument in function
call was treated as assignment in the scope of the call site. Ex:
def f():
    g(a=2) # a was considered a local of f
ceval.c
eval_code2() now takes one more argument, a closure.
Implement LOAD_CLOSURE, LOAD_DEREF, STORE_DEREF, MAKE_CLOSURE.
Also: When a name error occurs for a global variable, report that the
name was global in the error message.
Objects/frameobject.c
Initialize f_closure to be a tuple containing space for cellvars
and freevars. f_closure is NULL if neither are present.
Objects/funcobject.c
Add support for func_closure.
Python/import.c
Change the magic number.
Python/marshal.c
Track changes to code objects.
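For illustration, a small Python-level example of what the new opcodes
and slots support, once nested scopes are enabled via the future
statement (names are made up):
from __future__ import nested_scopes
def make_counter(start):
    box = [start]             # box is free in bump(), so it lives in a cell
    def bump():
        box[0] = box[0] + 1
        return box[0]
    return bump               # bump is built with MAKE_CLOSURE
c = make_counter(10)
print c(), c()                # 11 12
print c.func_closure          # the tuple holding the cell for box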
parameters that contained both anonymous tuples and *arg or **arg. Ex:
def f(a, (b, c), *d): pass
Fix the symtable_params() to generate names in the right order for
co_varnames slot of code object. Consider *arg and **arg before the
"complex" names introduced by anonymous tuples.
module__doc__: Document the Warning subclass hierarchy.
make_class(): Added a "goto finally" so that if populate_methods()
fails, the return status will be -1 (failure) instead of 0 (success).
fini_exceptions(): When decref'ing the static pointers to the
exception classes, clear out their dictionaries too. This breaks a
cycle from class->dict->method->class and allows the classes with
unbound methods to be reclaimed. This plugs a large memory leak in a
common Py_Initialize()/dosomething/Py_Finalize() loop.
pythonrun.c: In Py_Finalize, don't reset the initialized flag until after
the exit funcs have run.
atexit.py: in _run_exitfuncs, mutate the list of pending calls in a
threadsafe way. This wasn't a contributor to bug 128475, it just burned
my eyeballs when looking at that bug.
symbol table for each top-level compilation unit. The information in
the symbol table allows the elimination of the later optimize() pass;
the bytecode generation emits the correct opcodes.
The current version passes the complete regression test, but may still
contain some bugs. It's a fairly substantial revision. The current
code adds an assert() and a test that may lead to a Py_FatalError().
I expect to remove these before 2.1 beta 1.
The symbol table (struct symtable) is described in comments in the
code.
The changes affect the several com_XXX() functions that were used to
emit LOAD_NAME and its ilk. The primary interface for this bytecode
is now com_addop_varname() which takes a kind and a name, where kind
is one of VAR_LOAD, VAR_STORE, or VAR_DELETE.
There are many other smaller changes:
- The name mangling code is no longer contained in ifdefs. There are
two functions that expose the mangling logic: com_mangle() and
symtable_mangle().
- The com_error() function can accept NULL for its first argument;
this is useful when is_constant_false() is called during symbol
table generation.
- The loop index names used by list comprehensions have been changed
from __1__ to [1], so that they can not be accessed by Python code.
- in com_funcdef(), com_argdefs() is now called before the body of the
function is compiled. This provides consistency with com_lambdef()
and symtable_funcdef().
- Helpers do_pad(), dump(), and DUMP() are added to aid in debugging
the compiler.
except that it always returns Unicode objects.
A new C API PyObject_Unicode() is also provided.
This closes patch #101664.
Written by Marc-Andre Lemburg. Copyright assigned to Guido van Rossum.
- Use PyObject_RichCompare*() where possible: when comparing
keyword arguments, in _PyEval_SliceIndex(), and of course in
cmp_outcome().
Unrelated stuff:
- Removed all trailing whitespace.
- Folded some long lines.
message, and tries to make the messages more consistent and helpful when
the wrong number of arguments or duplicate keyword arguments are supplied.
Comes with more tests for test_extcall.py and an update to an error
message in test/output/test_pyexpat.
re-initializing Python (Py_Finalize() followed by Py_Initialize()) to
blow up quickly. With the DECREF removed I can't get it to fail any
more. (Except it still leaks, but that's probably a separate issue.)
1) "from M import X" now works even if M is not a real module; it's
basically a getattr() operation with AttributeError exceptions
changed into ImportError.
2) "from M import *" now looks for M.__all__ to decide which names to
import; if M.__all__ doesn't exist, it uses M.__dict__.keys() but
filters out names starting with '_' as before. Whether or not
__all__ exists, there's no restriction on the type of M.
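A sketch of the __all__ filtering in 2); the module is faked up at run
time only to keep the example self-contained, and all names are made up:
import new, sys
mod = new.module("fake_mod")
mod.__all__ = ["greet"]
mod.greet = lambda: "hello"
mod._helper = lambda: None
sys.modules["fake_mod"] = mod
from fake_mod import *
print greet()                   # bound, because it is listed in __all__
print "_helper" in dir()        # 0 -- filtered out of the star-import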
- Make error messages from issubclass() and isinstance() a bit more
descriptive (Ping, modified by Guido)
- Couple of tiny fixes to other docstrings (Ping)
- Get rid of trailing whitespace (Guido)
Cygwin Python DLL and Shared Extension Patch). Add module.dll as a
valid extension.
jlt63 writes: Note that his change essentially backs out the fix for
bug #115973. Should ".pyd" be retained instead for posterity?
an empty keywords dictionary (via apply() or the extended call syntax),
the keywords dict should be ignored. If the keywords dict is not empty,
TypeError should be raised. (Between the restructuring of the call
machinery and this patch, an empty dict in this situation would trigger
a SystemError via PyErr_BadInternalCall().)
Added regression tests to detect errors for this.
More revision still needed.
Much of the code that was in the mainloop was moved to a series of
helper functions. PyEval_CallObjectWithKeywords was split into two
parts. The first part now only does argument handling. The second
part is now named call_object and delegates the call to a
call_(function,method,etc.) helper.
XXX The call_XXX helper functions should be replaced with tp_call
functions for the respective types.
The CALL_FUNCTION implementation contains three kinds of optimization:
1. fast_cfunction and fast_function are called when the arguments on
the stack can be passed directly to eval_code2() without copying
them into a tuple.
2. PyCFunction objects are dispatched immediately, because they are
presumed to occur more often than anything else.
3. Bound methods are dispatched inline. The method object contains a
pointer to the function object that will be called. The function
is called from within the mainloop, which may allow optimization #1
to be used, too.
The extended call implementation -- f(*args) and f(**kw) -- is
implemented as a separate case in the mainloop. This allows the
common case of normal function calls to execute without wasting time
on checks for extended calls, although it does introduce a small
amount of code duplication.
Also, the unused final argument of eval_code2() was removed. This is
probably the last trace of the access statement :-).
"..." in "from M import ..." was never DECREFed. Leak reported by
James Slaughter and nailed by Barry, who also provided an earlier
version of this patch.
the bug report (for details, look at it), but agree there's no need for Python
to declare atof itself: we #include stdlib.h, and ANSI C sez atof is declared
there already.
regardless of whether the system getopt() does what we want. This avoids the
hassle with prototypes and externs, and the check to see if the system
getopt() does what we want. Prefix optind, optarg and opterr with _PyOS_ to
avoid name clashes. Add new include file to define the right symbols. Fix
Demo/pyserv/pyserv.c to include getopt.h itself, instead of relying on
Python to provide it.
When a method is called with no regular arguments and * args, defer
the first-arg-is-subclass check until after the * args have been
expanded.
N.B. The CALL_FUNCTION implementation is getting really hairy; should
review it to see if it can be simplified.
by making the DUP_TOPX code utterly straightforward. This also gets rid
of all normal-case internal DUP_TOPX if/branches, and allows replacing one
POP() with TOP() in each case, so is a good idea regardless.
Do not assume that all platforms using a MetroWorks compiler can use
POSIX threads; the assumption breaks on BeOS. This fix only helps
for BeOS.
This closes SourceForge patch #101772.
unintentionally caused them to get written in text mode under Windows.
As a result, when .pyc files were later read-- in binary mode --the
magic number was always wrong (note that .pyc magic numbers deliberately
include \r and \n characters, so this was "good" breakage, 100% across
all .pyc files, not random corruption in a subset). Fixed that.
Add definitions of INT_MAX and LONG_MAX to pyport.h.
Remove includes of limits.h and conditional definitions of INT_MAX
and LONG_MAX elsewhere.
This closes SourceForge patch #101659 and bug #115323.
Add three new convenience functions to the PyModule_*() family:
PyModule_AddObject(), PyModule_AddIntConstant(), PyModule_AddStringConstant().
This closes SourceForge patch #101233.
"s#" will now return a pointer to the default encoded string data
of the Unicode object instead of a pointer to the raw UTF-16
data.
The latter is still available via PyObject_AsReadBuffer().
The patch also adds an optimization for string objects which is
based on the fact that string objects return the raw character data
for getreadbuffer access and are always single-segment.
which implements the automatic conversion from Unicode to a string
object using the default encoding.
The new API is then put to use to have eval() and exec accept
Unicode objects as code parameter. This closes bugs #110924
and #113890.
As side-effect, the traditional C APIs PyString_Size() and
PyString_AsString() will also accept Unicode objects as
parameters.
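The Python-level effect, in short (illustrative only):
print eval(u"6 * 7")            # eval() now accepts a Unicode code argument
exec u"greeting = u'hello'"     # so does exec
print greeting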
When reading a short, sign-extend on platforms where shorts are
bigger than 16 bits.
When reading a long, repair the unportable sign extension that was
being done for 64-bit machines (it assumed that signed right shift
sign-extends).
I can't test this, so I'm just checking it in with blind faith in Andy.
I've tested that it doesn't break a non-Pth build on Linux.
Changes include:
- There's a --with-pth configure option.
- Instead of _GNU_PTH, we test for HAVE_PTH.
- Better signal handling.
- (The config.h.in file is regenerated in a slightly different order.)
can cause it to get called by multiple threads simultaneously.
Ditto for PyInterpreterState_Delete.
Of the former, the docs say "The interpreter lock need not be held, but may
be held if it is necessary to serialize calls to this function". This
kinda implies it both is and isn't thread-safe.
Of the latter, the docs merely say "The interpreter lock need not be
held.", and the clause about serializing is absent.
I expect it was *believed* these are both thread-safe, and the bit about
serializing via the global lock was meant as a permission rather than a
caution.
I also expect we've never seen a problem here because the Python core
(prior to the _PyPclose fix) only calls these functions once per run.
The Py_NewInterpreter subsystem exposed by the C API (but not used by
Python itself) also calls them, but that subsystem appears to be very
rarely used.
Whatever, they're both thread-safe now.
ceval.c:
define recursion_limit (static), default value is 2500
define Py_GetRecursionLimit and Py_SetRecursionLimit
raise RuntimeError if limit is exceeded
PC/config.h:
remove plat-specific definition
sysmodule.c:
add sys.(get|set)recursionlimit
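At the Python level this looks like the following (the function name
dive is made up; the restore value matches the stated default):
import sys
print sys.getrecursionlimit()       # 2500 by default
def dive(n):
    return dive(n + 1)
sys.setrecursionlimit(50)
try:
    dive(0)
except RuntimeError, msg:
    print "caught:", msg
sys.setrecursionlimit(2500)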
how 'import' was called with a compiletime mechanism: create either a tuple
of the import arguments, or None (in the case of a normal import), add it to
the code-block constants, and load it onto the stack before calling
IMPORT_NAME.
PyRun_FileEx(). These are the same as their non-Ex counterparts but
have an extra argument, a flag telling them to close the file when
done.
Then this is used by Py_Main() and execfile() to close the file after
it is parsed but before it is executed.
Adding APIs seems strange given the feature freeze but it's the only
way I see to close the bug report without incompatible changes.
[ Bug #110616 ] source file stays open after parsing is done (PR#209)
(This fix is a bit broken, just as the test already was: the test for
testlist and listmaker are done always, whereas the test for exprlist and
the actual abort() are only done if Py_DEBUG is defined. Suggestions
welcome, I guess ;)
Add the EXTENDED_ARG opcode to the virtual machine, allowing 32-bit
arguments to opcodes instead of being forced to stick to the 16-bit
limit. This is especially useful for machine-generated code, which
can be too long for the SET_LINENO parameter to fit into 16 bits.
This closes the implementation portion of SourceForge patch #100893.
- Fix bug in thread_pthread.h::PyThread_get_thread_ident() where
sizeof(pthread) < sizeof(long).
- Add 'configure' for:
- SIZEOF_PTHREAD if pthread_t can be included via <pthread.h>
- setting Monterey system name
- appropriate CC,LINKCC,LDSHARED,OPT, and CCSHARED for Monterey
- Add section in README for Monterey build
eval_code2(): Implement new bytecodes PRINT_ITEM_TO and
PRINT_NEWLINE_TO, as per accepted SF patch #100970.
Also update graminit.c based on related Grammar/Grammar changes.
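These opcodes back the extended print statement, for example:
import sys
print >> sys.stderr, "warning:", "something happened"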
trying hard enough to find out what the arguments to an import were. There
is no test-case for this bug, yet, but this is what it looked like:
from encodings import cp1006, cp1026
ImportError: cannot import name cp1026
'__import__' was called with only the first name in the 'arguments' list.
load mod.submod as m, or mod as m ? Both can be achieved differently, and
unambiguously. Also attempt to document this restriction (editor
appreciated!)
Note that this is an artificial check during compile, because incorporating
this in the grammar is hard, and then adjusting the compiler to do the right
thing with the right nodes is harder.
scope. Previously, s_buffer[] was defined inside the
PyUnicode_Check() scope, but referred to in the outer scope via
assignment to s. This quiets an Insure portability warning.
name as n'. By doing some twists and turns, "as" is not a reserved word.
There is a slight change in semantics for 'from module import name' (it will
now honour the 'global' keyword) but only in cases that are explicitly
undocumented.
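For example:
import string as s
from string import upper as up
print s.lower("ABC"), up("abc")     # abc ABC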
First, the allocated buffer was never freed after using it to create
the PyString object. Second, it was possible that have_filename would
be false (meaning that filename was not a PyString object), but that
the code would still try to PyString_GET_SIZE() it.
in binascii.c (only on platforms with signed chars -- although Py_CHARMASK
is documented as returning an int, it only does so on platforms with
signed chars).
returning a pointer to the start of the file's "base" name;
similar to os.path.basename().
SyntaxError__str__(): Use my_basename() to keep the length of the
file name included in the exception message short.