backwards compatibility. When using the class of the first base as
the metaclass, use its __class__ attribute in preference over its
ob_type slot. This ensures that we can still use classic classes as
metaclasses, as shown in the original "Metaclasses" essay. This also
makes all the examples in Demo/metaclasses/ work again (maybe these
should be turned into a test suite?).
parameter for the return string (as unix pathnames are not limited
by the 255 char pstring limit).
Implemented the function for MachO-Python, where it returns unix pathnames.
by bbrox@bbrox.org / lionel.ulmer@free.fr.
This adds a configure check and if all goes well turns on the
PTHREAD_SCOPE_SYSTEM thread attribute for new threads.
This should remove the need to add tiny sleeps at the start of threads
to allow other threads to be scheduled.
Reported by Fredrik Lundh on python-dev.
The convertsimple() code that handles Unicode arguments and converts
them to the default encoding now calls converterr() with the original
Unicode argument instead of the NULL returned by the failed encoding
attempt.
com_factor(): when a unary minus is attached to a float or imaginary zero,
don't optimize the UNARY_MINUS opcode away: the const dict can't
distinguish between +0.0 and -0.0, so it ended up treating both like the
first one added to it. Optimizing UNARY_PLUS away isn't a problem.
(BTW, I already uploaded the 2.2a3 Windows installer, and this isn't
important enough to delay the release.)
of PyMapping_Keys because we know we have a real dict. Tolerate that
objects may have an attr named "__dict__" that's not a dict (Py_None
popped up during testing).
test_descr.py, test_dir(): Test the new classic-class behavior; beef up
the new-style class test similarly.
test_pyclbr.py, checkModule(): dir(C) is no longer a synonym for
C.__dict__.keys() when C is a classic class (looks like the same thing
that burned distutils! -- should it be *made* a synonym again? Then it
would be inconsistent with new-style class behavior.).
bag. It's clearly wrong for classic classes, at heart because a classic
class doesn't have a __class__ attribute, and I'm unclear on whether
that's a feature or a bug. I'll repair this once I find out (in the
meantime, dir() applied to classic classes won't find the base classes,
while dir() applied to a classic-class instance *will* find the base
classes but not *their* base classes).
Please give the new dir() a try and see whether you love it or hate it.
The new dir([]) behavior is something I could come to love. Here's
something to hate:
>>> class C:
...     pass
...
>>> c = C()
>>> dir(c)
['__doc__', '__module__']
>>>
The idea that an instance has a __doc__ attribute is jarring (of course
it's really c.__class__.__doc__ == C.__doc__; likewise for __module__).
OTOH, the code already has too many special cases, and dir(x) doesn't
have a compelling or clear purpose when x isn't a module.
PEP 238. Changes:
- add a new flag variable Py_DivisionWarningFlag, declared in
pydebug.h, defined in object.c, set in main.c, and used in
{int,long,float,complex}object.c. When this flag is set, the
classic division operator issues a DeprecationWarning message.
- add a new API PyRun_SimpleStringFlags() to match
PyRun_SimpleString(). The main() function calls this so that
commands run with -c can also benefit from -Dnew.
- While I was at it, I changed the usage message in main() somewhat:
alphabetized the options, split it in *four* parts to fit in under
512 bytes (not that I still believe this is necessary -- doc strings
elsewhere are much longer), and perhaps most visibly, don't display
the full list of options on each command line error. Instead, the
full list is only displayed when -h is used, and otherwise a brief
reminder of -h is displayed. When -h is used, write to stdout so
that you can do `python -h | more'.
Notes:
- I don't want to use the -W option to control whether the classic
division warning is issued or not, because the machinery to decide
whether to display the warning or not is very expensive (it involves
calling into the warnings.py module). You can use -Werror to turn
the warnings into exceptions though.
- The -Dnew option doesn't select future division for all of the
program -- only for the __main__ module. I don't know if I'll ever
change this -- it would require changes to the .pyc file magic
number to do it right, and a more global notion of compiler flags.
- You can usefully combine -Dwarn and -Dnew: this gives the __main__
module new division, and warns about classic division everywhere
else.
pyport.h: typedef a new Py_intptr_t type.
DELICATE ASSUMPTION: That HAVE_UINTPTR_T implies intptr_t is
available as well as uintptr_t. If that turns out not to be
true, things must get uglier (C99 wants both, so I think it's
an assumption we're *likely* to get away with).
thread_nt.h, PyThread_start_new_thread: MS _beginthread is documented
as returning unsigned long; no idea why uintptr_t was being used.
Others: Always use Py_[u]intptr_t, never [u]intptr_t directly.
Check return value from future_parse() in for loop for file_input to
accommodate multiple future statements on separate lines.
Add several comments explaining how the code works.
Remove outdated XXX comment.
Change to get/set/del slice operations so that if the object doesn't
support slicing, *or* if either of the slice arguments is not an int
or long, we construct a slice object and call the get/set/del item
operation instead. This makes it possible to design classes that
support slice arguments of non-integral types.
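A minimal sketch of the new behavior (the class name Span is illustrative
only, and this assumes the Python 2.x of this era):

import types

class Span:
    # with this change, non-integral bounds such as s[1.5:2.5] arrive
    # packaged in a slice object instead of raising TypeError
    def __getitem__(self, index):
        if type(index) is types.SliceType:
            return (index.start, index.stop)
        return index

s = Span()
print s[1.5:2.5]     # (1.5, 2.5)
print s[2]           # 2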
builtin_eval wasn't merging in the compiler flags from the current frame;
I suppose we never noticed this before because future division is the
first future-feature that can affect expressions (nested_scopes and
generators had only statement-level effects).
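A small illustration of the fix (a sketch, assuming a build where the
division future statement is available):

from __future__ import division

# eval() now merges the calling frame's compiler flags, so the string
# below is compiled with true division too and yields 0.5, not 0
print eval("1/2")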
CO_FUTURE_DIVISION flag. Redid this to use Jeremy's PyCF_MASK #define
instead, so we don't have to remember to fiddle individual feature names
here again.
pythonrun.h: Also #define a PyCF_MASK_OBSOLETE mask. This isn't used
yet, but will be as part of the PEP 264 implementation (compile() mustn't
raise an error just because old code uses a flag name that's become
obsolete; a warning may be appropriate, but not an error; so compile() has
to know about obsolete flags too, but nobody is going to remember to
update compile() with individual obsolete flag names across releases either
-- i.e., this is the flip side of PyEval_MergeCompilerFlags's oversight).
- Do not compile unicodeobject, unicodectype, and unicodedata if Unicode is disabled
- check for Py_USING_UNICODE in all places that use Unicode functions
- disable Unicode literals and the associated builtin functions
- add the types.StringTypes list
- remove Unicode literals from most tests.
When code is compiled and compiler flags are passed in, be sure to
update cf_flags with any features defined by future statements in the
compiled code.
_PyImport_FixupExtension() on the exceptions module. Now
reload(exceptions) acts just like reload(sys) instead of raising
an ImportError.
This closes SF bug #422004.
The descr changes moved the dispatch for calling objects from
call_object() in ceval.c to PyObject_Call() in abstract.c.
call_object() and the many functions it used in ceval.c were no longer
used, but were not removed.
Rename meth_call() as PyCFunction_Call() so that it can be called by
the CALL_FUNCTION opcode in ceval.c.
Also, fix error message that referred to PyEval_EvalCodeEx() by its
old name eval_code2(). (I'll probably refer to it by its old name,
too.)
Revised version of Fred's patch, including support for ~ operator.
If the unary +, -, or ~ operator is applied to a constant, don't
generate a UNARY_xxx opcode. Just store the appropriate value as a
constant. If the value is negative, extend the string containing the
constant and insert a minus sign at position 0.
For ~, compute the inverse of int and longs and use them directly, but
be prepared to generate code for all other possibilities (invalid
numbers, floats, complex).
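A quick way to see the effect (a sketch; the function name f is
illustrative only):

import dis

def f():
    return -5

# with the folding in place, the disassembly shows a single LOAD_CONST
# of -5 rather than LOAD_CONST 5 followed by UNARY_NEGATIVE
dis.dis(f)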
same module twice, which apparently crashes Python. I could not test the
error condition, but in normal life it seems to have no adverse effects.
Also removed an unused variable, and corrected 2 glaring errors (missing
'case' in front of a label).
Replace uses of PyCF_xxx with CO_xxx.
Replace individual feature slots in PyFutureFeatures with single
bitmask ff_features.
When flags must be transferred among the three parts of the interpreter
that care about them -- the pythonrun layer, the compiler, and the
future feature parser -- they can simply be or'd (|) together.
with functionality needed for both unix-Python and MacPython and a
new smaller ./Mac/Python/macglue.c which contains MacPython stuff only.
pymactoolbox.h has moved to ./Include from ./Mac/Include and now also
contains the relevant stuff from macglue.h.
The net effect of this is that the ./Mac subdirectory is not needed
anymore for building the unix-Python core on MacOSX (it is needed
for building the extension modules).
This introduces:
- A new operator // that means floor division (the kind of division
where 1/2 is 0).
- The "future division" statement ("from __future__ import division)
which changes the meaning of the / operator to implement "true
division" (where 1/2 is 0.5).
- New overloadable operators __truediv__ and __floordiv__.
- New slots in the PyNumberMethods struct for true and floor division,
new abstract APIs for them, new opcodes, and so on.
I emphasize that without the future division statement, the semantics
of / will remain unchanged until Python 3.0.
Not yet implemented are warnings (default off) when / is used with int
or long arguments.
This has been on display since 7/31 as SF patch #443474.
Flames to /dev/null.
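The user-visible semantics, sketched for a module that opts in to the
future statement (assuming a build of this era):

from __future__ import division

print 1 / 2        # 0.5 -- true division under the future statement
print 1 // 2       # 0   -- the new floor-division operator
print 7.0 // 2     # 3.0 -- // works on floats as well

# without the future statement, 1 / 2 still yields 0 (classic division)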
- Add an explicit call to PyType_Ready(&PyList_Type) to pythonrun.c
(just for the heck of it, really -- we should either explicitly
ready all types, or none).
Python warning which can be caught by means of the Python warning
framework.
It also adds two new APIs which hopefully make it easier for Python
to switch to buffer overflow safe [v]snprintf() APIs for error
reporting et al. The two new APIs are PyOS_snprintf() and
PyOS_vsnprintf() and work just like the standard ones in many
C libs. On platforms which have snprintf(), the native APIs are used,
on all others an emulation tries to do its best.
Fix suggested by Michael Hudson: Raise TypeError if attribute name
passed to getattr() is not a string or Unicode. There is some
unfortunate duplication of code between builtin_getattr() and
PyObject_GetAttr(), but it appears to be unavoidable.
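A minimal demonstration of the new check (a sketch; exact message text
may differ):

obj = []
print getattr(obj, "append")     # a string name is fine

try:
    getattr(obj, 1)              # a non-string name now raises TypeError
except TypeError, err:
    print "TypeError:", err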
And remove all the extern decls in the middle of .c files.
Apparently, it was excluded from the header file because it is
intended for internal use by the interpreter. It's still intended for
internal use and documented as such in the header file.
exception in the execution of bar, ensure that foo.bar exists.
(Previously, while sys.modules['foo.bar'] would exist, foo.bar would
only be created upon successful execution of bar. This is
inconvenient; some would say wrong. :-)
that 'yield' is a keyword. This doesn't help test_generators at all! I
don't know why not. These things do work now (and didn't before this
patch):
1. "from __future__ import generators" now works in a native shell.
2. Similarly "python -i xxx.py" now has generators enabled in the
shell if xxx.py had them enabled.
3. This program (which was my doctest proxy) works fine:
from __future__ import generators
source = """\
def f():
    yield 1
"""
exec compile(source, "", "single") in globals()
print type(f())
that info to code dynamically compiled *by* code compiled with generators
enabled. Doesn't yet work because there's still no way to tell the parser
that "yield" is OK (unlike nested_scopes, the parser has its fingers in
this too).
Replaced PyEval_GetNestedScopes by a more-general
PyEval_MergeCompilerFlags. Perhaps I should not have? I doubted it was
*intended* to be part of the public API, so I just did.
the yield statement. I figure we have to have this in before I can
release 2.2a1 on Wednesday.
Note: test_generators is currently broken, I'm counting on Tim to fix
this.
Probable fix (the bug report doesn't have enough info to say for sure).
find_init_module(): Insist on a case-sensitive match for __init__ files.
Given __INIT__.PY instead, find_init_module() thought that was fine, but
the later attempt to do find_module("__INIT__.PY") didn't and its caller
silently suppressed the resulting ImportError. Now find_init_module()
refuses to accept __INIT__.PY to begin with.
Bugfix candidate; specific to platforms with case-insensitive filesystems.
path (with no profile/trace function) through eval_code2() and
eval_frame() avoids several checks.
In the common cases of calls, returns, and exception propagation,
eval_code2() and eval_frame() used to test two values in the
thread-state: the profiling function and the tracing function. With
this change, a flag is set in the thread-state if either of these is
active, allowing a single check to suffice when both are NULL. This
also simplifies the code needed when either function is in use but is
already active (to avoid profiling/tracing the profiler/tracer); the
flag is set to 0 when the profile/trace code is entered, allowing the
same check to suffice for "already in the tracer" for call/return/
exception events.
"return expr" instances in generators (which latter may be generators
due to otherwise invisible "yield" stmts hiding in "if 0" blocks).
This was fun the first time, but this has gotten truly ugly now.
Python interpreter.
This change adds two new C-level APIs: PyEval_SetProfile() and
PyEval_SetTrace(). These can be used to install profile and trace
functions implemented in C, which can operate at much higher speeds
than Python-based functions. The overhead for calling a C-based
profile function is a very small fraction of a percent of the overhead
involved in calling a Python-based function.
The machinery required to call a Python-based profile or trace
function has been moved to sysmodule.c, where sys.setprofile() and
sys.settrace() simply become users of the new interface.
As a side effect, SF bug #436058 is fixed; there is no longer a
_PyTrace_Init() function to declare.
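A short Python-level sketch of the interface that sys.setprofile() exposes
(the C-level PyEval_SetProfile()/PyEval_SetTrace() install the same kind of
callback without the Python call overhead; the names profiler and work are
illustrative only):

import sys

def profiler(frame, event, arg):
    # event is a string such as 'call', 'return' or 'exception'
    print event, frame.f_code.co_name

def work():
    return 42

sys.setprofile(profiler)
work()
sys.setprofile(None)     # uninstall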
Implement sys.maxunicode.
Explicitly wrap around upper/lower computations for wide Py_UNICODE.
When decoding large characters with UTF-8, represent expected test
results using the \U notation.
- the correct range for the error message is range(0x110000);
- put the 4-byte Unicode-size code inside the same else branch as the
2-byte code, rather than generating unreachable code in the 2-byte case.
- Don't hide the 'else' behind the '}'.
(I would prefer that in 4-byte mode, any value should be accepted, but
reasonable people can argue about that, so I'll put that off.)
Add configure option --enable-unicode.
Add config.h macros Py_USING_UNICODE, PY_UNICODE_TYPE, Py_UNICODE_SIZE,
SIZEOF_WCHAR_T.
Define Py_UCS2.
Encode and decode large UTF-8 characters into single Py_UNICODE values
for wide Unicode types; likewise for UTF-16.
Remove test whether sizeof Py_UNICODE is two.
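A small probe of the new configuration (a sketch; the values depend on how
the interpreter was configured):

import sys

print sys.maxunicode              # 65535 (UCS-2 build) or 1114111 (UCS-4 build)

if sys.maxunicode > 0xFFFF:
    print len(u"\U00010000")      # 1: a single Py_UNICODE on a wide build
else:
    print len(u"\U00010000")      # 2: a surrogate pair on a narrow build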
Iterators list and Python-Dev; e.g., these all pass now:
def g1():
    try:
        return
    except:
        yield 1
assert list(g1()) == []

def g2():
    try:
        return
    finally:
        yield 1
assert list(g2()) == [1]

def g3():
    for i in range(3):
        yield None
    yield None
assert list(g3()) == [None] * 4
compile.c: compile_funcdef and com_return_stmt: Just van Rossum's patch
to compile the same code for "return" regardless of function type (this
goes back to the previous scheme of returning Py_None).
ceval.c: gen_iternext: take a return (but not a yield) of Py_None as
meaning the generator is exhausted.
the next free valuestack slot, not to the base (in America, stacks push
and pop at the top -- they mutate at the bottom in Australia <wink>).
eval_frame(): assert that f_stacktop isn't NULL upon entry.
frame_dealloc(): avoid ordered pointer comparisons involving f_stacktop
when f_stacktop is NULL.
reference to f_back when it's really needed. Do a little whitespace
normalization as well. This whole file is a big war between tabs and spaces
but now is probably not the time to reindent everything.
NeilS, please check! This came from staring at your genbug.py, but I'm
not sure it plugs all possible holes. Without this, I caught a
frameobject refcount going negative, and it was also the cause (in debug
build) of _Py_ForgetReference's attempt to forget an object with already-
NULL _ob_prev and _ob_next pointers -- although I'm still not entirely
sure how! Part of the difficulty is that frameobjects are stored on a
free list that gets recycled very quickly, so if there's a stray pointer
to one of them it never looks like an insane frameobject (never goes
through the free() mangling MS debug forces, etc).
and trace functions lazily, which incurs extra argument pushing and checks
in the C overhead for profiling/tracing, create the strings semi-lazily
when the Python code first registers a profile or trace function. This
simplifies the trampoline into the profile/trace functions.
Armin Rigo pointed out that the way the line-# table got built didn't work
for lines generating more than 255 bytes of bytecode. Fixed as he
suggested, plus corresponding changes to pyassem.py, plus added some
long overdue docs about this subtle table to compile.c.
Bugfix candidate (line numbers may be off in tracebacks under -O).
that should be used to cache an interned version of the event
string passed to the profile/trace function. call_trace() will
create interned strings and cache them using the storage
specified by this additional parameter, avoiding a lot of string
object creation at runtime when using the profiling or tracing
functions.
All call sites are modified to pass the additional parameter, and four
static PyObject* variables are allocated to cache the interned string
objects.
This closes SF patch #431257.
in release builds. Suggested by Martin v. Loewis.
I'm half tempted to macroize PyErr_Occurred too, as the whole thing could
collapse to just
_PyThreadState_Current->curexc_type
In the default branch, keep three ifs that are used if level == 0, the
most common case. Note that the first if here is a slight optimization
for the 'O' format.
Second part of SF patch 426072.
Note that lots of code was re-indented.
Replace two-step of convertsimple() and convertsimple1() with
convertsimple() and helper converterr(), which is called to format
error messages when convertsimple() fails. The old code did all the
real work in convertsimple1(), but deferred error message formatting
to convertsimple(). The result was paying the price of a second
function call on every call just to format error messages in the
failure cases.
Factor the buffer-handling code out of convertsimple() and package
it as convertbuffer().
Add two macros to ease readability of Unicode conversions,
UNICODE_DEFAULT_ENCODING() and CONV_UNICODE, an error string.
The convertsimple() routine had awful indentation problems, primarily
because there were two tabs between the case line and the body of the
case statements. This patch reformats the entire function to have a
single tab between case line and case body, which makes the code
easier to read (and consistent with ceval). The introduction of
converterr() exacerbated the problem and prompted this fix.
Also, eliminate non-standard whitespace after opening paren and before
closing paren in a few if statements.
(This checkin is part of SF patch 426072.)
Keyword arguments passed to builtin functions that don't take them are
ignored.
>>> {}.clear(x=2)
>>>
instead of
>>> {}.clear(x=2)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: clear() takes no keyword arguments
If we have a PyCFunction (builtin) and it is METH_VARARGS only, load
the args and dispatch to call_cfunction() directly. This provides a
small speedup for perhaps the most common function calls -- builtins.
Store floats and doubles to full precision in marshal.
Test that floats read from .pyc/.pyo closely match those read from .py.
Declare PyFloat_AsString() in floatobject header file.
Add new PyFloat_AsReprString() API function.
Document the functions declared in floatobject.h.
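A sketch of the effect at the Python level:

import marshal

x = 1.0 / 3.0
y = marshal.loads(marshal.dumps(x))
# floats now survive a marshal round trip (and hence .pyc files)
# at full repr() precision instead of str()'s shorter form
print repr(x)
print repr(y)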
Check for free in class and method only if nested scopes are enabled.
Add assertion to verify that no free variables occur when nested
scopes are disabled.
XXX When should nested scopes be made non-optional on the trunk?
NEEDS DOC CHANGES.
More AttributeErrors transmuted into TypeErrors, in test_b2.py, and,
again, this strikes me as a good thing.
This checkin completes the iterator generalization work that obviously
needed to be done. Can anyone think of others that should be changed?
NEEDS DOC CHANGES.
Possibly contentious: The first time s.next() yields StopIteration (for
a given map argument s) is the last time map() *tries* s.next(). That
is, if other sequence args are longer, s will never again contribute
anything but None values to the result, even if trying s.next() again
could yield another result. This is the same behavior map() used to have
wrt IndexError, so it's the only way to be wholly backward-compatible.
I'm not a fan of letting StopIteration mean "try again later" anyway.
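An illustrative sketch of that rule, using a hand-written iterator (the
class name Once is hypothetical, not from the patch):

class Once:
    # an iterator that yields a single value; after it first raises
    # StopIteration, map() never calls its next() method again
    def __init__(self):
        self.done = 0
    def __iter__(self):
        return self
    def next(self):
        if self.done:
            raise StopIteration
        self.done = 1
        return "x"

print map(None, Once(), [1, 2, 3])   # [('x', 1), (None, 2), (None, 3)]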
Directory containing
    Spam.py
    spam/__init__.py
Then "import Spam" caused a SystemError, because code checking for
the existence of "Spam/__init__.py" finds it on a case-insensitive
filesystem, but then bails because the directory it finds it in
doesn't match case, and then old code assumed that was still an error
even though it isn't anymore. Changed the code to just continue
looking in this case (instead of calling it an error). So
import Spam
and
import spam
both work now.
Also a 2.1 bugfix candidate (am I supposed to do something with those?).
Took away map()'s insistence that sequences support __len__, and cleaned
up the convoluted code that made it *look* like it really cared about
__len__ (in fact the old ->len field was only *used* as a flag bit, as
the main loop only looked at its sign bit, setting the field to -1 when
IndexError got raised; renamed the field to ->saw_IndexError instead).
The new test case demonstrates the bug. Be more careful in
symtable_resolve_free() to add a var to cells or frees only if it
won't be added under some other rule.
XXX Add new assertion that will catch this bug.
sees it (test_iter.py is unchanged).
- Added a tp_iternext slot, which calls the iterator's next() method;
this is much faster for built-in iterators over built-in types
such as lists and dicts, speeding up pybench's ForLoop by about
25% compared to Python 2.1. (Now there's a good argument for
iterators. ;-)
- Renamed the built-in sequence iterator SeqIter, affecting the C API
functions for it. (This frees up the PyIter prefix for generic
iterator operations.)
- Added PyIter_Check(obj), which checks that obj's type has a
tp_iternext slot and that the proper feature flag is set.
- Added PyIter_Next(obj) which calls the tp_iternext slot. It has a
somewhat complex return condition due to the need for speed: when it
returns NULL, it may not have set an exception condition, meaning
the iterator is exhausted; when the exception StopIteration is set
(or a derived exception class), it means the same thing; any other
exception means some other error occurred.
new slot tp_iter in type object, plus new flag Py_TPFLAGS_HAVE_ITER
new C API PyObject_GetIter(), calls tp_iter
new builtin iter(), with two forms: iter(obj) and iter(function, sentinel)
  (usage sketched after the TODO list below)
new internal object types iterobject and calliterobject
new exception StopIteration
new opcodes for "for" loops, GET_ITER and FOR_ITER (also supported by dis.py)
new magic number for .pyc files
new special method for instances: __iter__() returns an iterator
iteration over dictionaries: "for x in dict" iterates over the keys
iteration over files: "for x in file" iterates over lines
TODO:
documentation
test suite
decide whether to use a different way to spell iter(function, sentinel)
decide whether "for key in dict" is a good idea
use iterators in map/filter/reduce, min/max, and elsewhere (in/not in?)
speed tuning (make next() a slot tp_next???)
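A few usage sketches of the new protocol (assuming the behavior listed
above; the class name Countdown is illustrative only):

# iter(obj): sequences, dictionaries and files are now iterable
d = {"a": 1, "b": 2}
for key in d:                      # "for x in dict" iterates over the keys
    print key, d[key]

# iter(callable, sentinel): call the callable until it returns the sentinel
data = [0, 1, 2, 3]
for item in iter(data.pop, 0):
    print item                     # 3, then 2, then 1

# __iter__() lets instances plug into the same machinery
class Countdown:
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return self
    def next(self):
        if self.n == 0:
            raise StopIteration
        self.n = self.n - 1
        return self.n

for i in Countdown(3):
    print i                        # 2, 1, 0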
now raises NameError instead of UnboundLocalError, because the var in
question is definitely not local. (This affects test_scope.py)
Also update the recent fix by Ping using get_func_name(). Replace
tests of get_func_name() return value with call to get_func_desc() to
match all the other uses.
Calling an unbound method on a C extension class without providing
an instance can yield a segfault. Try "Exception.__init__()" or
"ValueError.__init__()".
This is a simple fix. The error-reporting bits in call_method
mistakenly treat the misleadingly-named variable "func" as a
function, when in fact it is a method.
If we let get_func_name take care of the work, all is fine.
Fix based on patch #414750 by Michael Hudson.
New functions get_func_name() and get_func_desc() return reasonable
names and descriptions for all objects. XXX Even objects that aren't
actually callable.
pickle.py
The code implicitly assumed that all ints fit in 4 bytes, causing all
sorts of mischief (from nonsense results to corrupted pickles).
Repaired that.
marshal.c
The int marshaling code assumed that right shifts of signed longs
sign-extend. Repaired that.
Jeffery Collins pointed out that filterstring decrefs a character object
before it's done using it. This works by accident today because another
module always happens to have an active reference too at the time. The
accident doesn't work after his Pippy modifications, and since it *is*
an accident even in the mainline Python, it should work by design there too.
The patch accomplishes that.
but apparently he had to go to school, so I am checking it in for him.
This makes PyRun_HandleSystemExit() a static instead, called
handle_system_exit(), and lets it use the current exception rather than
passing in an exception. This slightly simplifies the code.
Update docstring and library reference section on 'sys' module.
New API PyErr_Display, just for displaying errors, called by excepthook.
Uncaught exceptions now call sys.excepthook; if that fails, we fall back
to calling PyErr_Display directly.
Also comes with sys.__excepthook__ and sys.__displayhook__.
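A sketch of the new hook from Python code (the handler name is
illustrative only):

import sys

def my_excepthook(exctype, value, tb):
    # called for any exception that reaches the top level uncaught
    print >> sys.stderr, "uncaught %s: %s" % (exctype.__name__, value)

sys.excepthook = my_excepthook

# the original handler is kept in sys.__excepthook__, so it can be restored:
sys.excepthook = sys.__excepthook__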
If a module has a future statement enabling nested scopes, they are
also enabled for the exec statement and the functions compile() and
execfile() if they occur in the module.
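For example (a sketch, assuming a module that enables the feature; the
names adder/add are illustrative only):

from __future__ import nested_scopes

source = """
def adder(n):
    def add(x):
        return x + n      # the free variable n needs nested scopes
    return add
print adder(3)(4)         # 7
"""

# the exec statement inherits the module's future statement,
# so the nested function above compiles and runs under nested scopes
exec source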
If Python is run with the -i option, which enters interactive mode
after executing a script, and the script it runs enables nested
scopes, they are also enabled in interactive mode.
XXX The use of -i with -c "from __future__ import nested_scopes" is
not supported. What's the point?
To support these changes, many function variants have been added to
pythonrun.c. All the variant names end with Flags and they take an
extra PyCompilerFlags * argument. It is possible that this complexity
will be eliminated in a future version of the interpreter in which
nested scopes are not optional.
frees. Note there doesn't seem to be any way to test LocalsToFast(),
because the instructions that trigger it are illegal in nested scopes
with free variables.
Fix allocation strategy for cells that are also formal parameters.
Instead of emitting LOAD_FAST / STORE_DEREF pairs for each parameter,
have the argument handling code in eval_code2() do the right thing.
A side-effect of this change is that cell variables that are also
arguments are listed at the front of co_cellvars in the order they
appear in the argument list.
has a binding for the name. The fix is in two places:
- in symtable_update_free_vars, ignore a global stmt in a class scope
- in symtable_load_symbols, add extra handling for names that are
defined at class scope and free in a method
Closes SF bug 407800
with free variables. Thanks to Martin v. Loewis for finding two of
the problems. This fixes SF bug 405583.
There is also a C API change: PyFrame_New() is reverting to its
pre-2.1 signature. The change introduced by nested scopes was a
mistake. XXX Is this okay between beta releases?
cell_clear(), the GC helper, must decref its reference to break
cycles.
frame_dealloc() must dealloc all cell vars and free vars in addition
to locals.
eval_code2() setup code must INCREF cells it copies out of the
closure.
The STORE_DEREF opcode implementation must DECREF the object it passes
to PyCell_Set().
Made sure that the warnings issued by symtable_check_unoptimized()
(about import * and exec) contain the proper filename and line number,
and are transformed into SyntaxError exceptions with -Werror.
(Also remove warning about module-level global decl, because we can't
distinguish from code passed to exec.)
Define a PyCompilerFlags type that contains a single element,
cf_nested_scopes, which is true if a nested scopes future statement has
been entered at the interactive prompt.
New API functions:
PyNode_CompileFlags()
PyRun_InteractiveOneFlags()
-- same as their non-Flags counterparts except that they take an
optional PyCompilerFlags pointer
compile.c: In jcompile() use PyCompilerFlags argument. If
cf_nested_scopes is true, compile code with nested scopes. If it
is false, but the code has a valid future nested scopes statement,
set it to true.
pythonrun.c: Create a new PyCompilerFlags object in
PyRun_InteractiveLoop() and thread it through to
PyRun_InteractiveOneFlags().
the more recent versions of that platform, so we use the value (time_t)(-1)
as the error value. This is the type used in the OpenVMS documentation:
http://www.openvms.compaq.com/commercial/c/5763p048.htm#inde
This closes SF tracker bug #404240.
Also clean up an exception message when detecting overflow of time_t values
beyond 4 bytes.
from __future__ import nested_scopes

x = 7
def f():
    x = 1
    def g():
        global x
        def i():
            def h():
                return x
            return h()
        return i()
    return g()

print f()
print x
This kind of code didn't work correctly because x was treated as free
in i, leading to an attempt to load x in g to make a closure for i.
Solution is to make global decl apply to nested scopes unless there is
an assignment. Thus, x in h is global.
described in PEP 227.
symtable_check_unoptimized() warns about import * and exec with "in"
when it is used in a function that contains a nested function with
free variables. Warnings are issued unless nested scopes are in
effect, in which case these are SyntaxErrors.
symtable_check_shadow() warns about assignments in a function scope
that shadow free variables defined in a nested scope. This will
always generate a warning -- and will behave differently with nested
scopes than without.
Restore full checking for free vars in children, even when nested
scopes are not enabled. This is needed to support warnings for
shadowing.
Change symtable_warn() to return an int-- the return value of
PyErr_WarnExplicit.
Sundry cleanup: Remove commented out code. Break long lines.
global after assign / use.
Note: I'm not updating the PyErr_Warn() call for import * / exec
combined with a function, because I can't trigger it with an example.
Jeremy, just follow the example of the call to PyErr_WarnExplicit()
that I *did* include.
for errors raised in future.c.
Move some helper functions from compile.c to errors.c and make them
API functions: PyErr_SyntaxLocation() and PyErr_ProgramText().
raised by the compiler.
XXX For now, text entered into the interactive interpreter is not
printed in the traceback.
Inspired by a patch from Roman Sulzhyk
compile.c:
Add helper fetch_program_text() that opens a file and reads until it
finds the specified line number. The code is a near duplicate of
similar code in traceback.c.
Modify com_error() to pass two arguments to SyntaxError constructor,
where the second argument contains the offending text when possible.
Modify set_error_location(), now used only by the symtable pass, to
set the text attribute on existing exceptions.
pythonrun.c:
Change parse_syntax_error() to continue if the offset attribute of a
SyntaxError is None. In this case, it sets offset to -1.
Move code from PyErr_PrintEx() into helper function
print_error_text(). In the helper, only print the caret for a
SyntaxError if offset > 0.
http://python.sourceforge.net/peps/pep-0235.html
Renamed check_case to case_ok. Substantial code rearrangement to get
this stuff in one place in the file. Innermost loop of find_module()
now much simpler and #ifdef-free, and I want to keep it that way (it's
bad enough that the innermost loop is itself still in an #ifdef!).
Windows semantics tested and are fine.
Jason, Cygwin *should* be fine if and only if what you did before "worked"
for case_ok.
Jack, the semantics on your flavor of Mac have definitely changed (see
the PEP), and need to be tested. The intent is that your flavor of Mac
now work the same as everything else in the "lower left" box, including
respecting PYTHONCASEOK.
Steven, sorry, you did the most work here so far but you got screwed the
worst. Happy to work with you on repairing it, but I don't understand
anything about all your Mac variants. We need to add another branch (or
two, three, ...?) inside case_ok. But we should not need to change
anything else.
XXX still need to integrate into symtable API
compile.h: Remove ff_n_simple_stmt; obsolete.
Add ff_found_docstring used internally to skip one and only
one string at the beginning of a module.
compile.c: Add check for from __future__ imports too far into the file.
In symtable_global() check for -1 returned from
symtable_lookup(), which signifies name not defined.
Add missing DECREF in symtable_add_def.
Free c->c_future.
future.c: Add special handling for multiple statements joined on a
single line using one or more semicolons; this form can
include an illegal future statement that would otherwise be
hard to detect.
Add support for detecting and skipping doc strings.
Makefile.pre.in: add target future.o
Include/compile.h: define PyFutureFeatures and PyNode_Future()
add c_future slot to struct compiling
Include/symtable.h: add st_future slot to struct symtable
Python/future.c: implementation of PyNode_Future()
Python/compile.c: use PyNode_Future() for nested_scopes support
Python/symtable.c: include compile.h to pick up PyFutureFeatures decl
compile.h: #define NESTED_SCOPES_DEFAULT 0 for Python 2.1
__future__ feature name: "nested_scopes"
symtable.h: Add st_nested_scopes slot. Define flags to track exec and
import star.
Lib/test/test_scope.py: requires nested scopes
compile.c: Fiddle with error messages.
Reverse the sense of ste_optimized flag on
PySymtableEntryObjects. If it is true, there is an optimization
conflict.
Modify get_ref_type to respect st_nested_scopes flags.
Refactor symtable_load_symbols() into several smaller functions,
which use struct symbol_info to share variables. In new function
symtable_update_flags(), raise an error or warning for import * or
bare exec that conflicts with nested scopes. Also, modify handling
of free variables to respect the st_nested_scopes flag.
In symtable_init() assign st_nested_scopes flag to
NESTED_SCOPES_DEFAULT (defined in compile.h).
Add preliminary and often incorrect implementation of
symtable_check_future().
Add symtable_lookup() helper for future use.
Two different but related problems:
1. PySymtable_Free() must explicitly DECREF(st->st_cur), which should
always point to the global symtable entry. This entry is setup by the
first enter_scope() call, but there is never a corresponding
exit_scope() call.
Since each entry has a reference to scopes defined within it, the
missing DECREF caused all symtable entries to be leaked.
2. The leak here masked a separate problem with
PySymtableEntry_New(). When the requested entry was found in
st->st_symbols, the entry was returned without doing an INCREF.
And problem c) The ste_children slot was getting two copies of each
child entry, because it was populating the slot on the first and
second passes. Now only populate on the first pass.
save the __builtin__ module in a static variable. But this doesn't
work across Py_Finalize()/Py_Initialize()! It also doesn't work when
using multiple interpreter states created with PyInterpreterState_New().
So I'm ripping out this small optimization.
This was probably broken since PyImport_Import() was introduced in
1997! We really need a better test suite for multiple interpreter
states and repeatedly initializing.
This fixes the problems Barry reported in Demo/embed/loop.c.
the symbol table pass. These blocks were already ignored by the code
gen pass. Both passes must visit the same set of blocks in the same
order.
Fixes SF bug 132820
They're actually complaining about something more specific, an assignment
in a lambda as an actual argument, so that Python parses the
lambda as if it were a keyword argument. Like f(lambda x: x[0]=42).
The "lambda x: x[0]" part gets parsed as if it were a keyword, being
bound to 42, and the resulting error msg didn't make much sense.
_testcapimodule.c
make sure PyList_Reverse doesn't blow up again
getargs.c
assert args isn't NULL at the top of vgetargs1 instead of
waiting for a NULL-pointer dereference at the end
Bug was introduced by tricks played to make .pyc files executable
via cmdline arg. Then again, -x worked via a trick to begin with.
If anyone can think of a portable way to test -x, be my guest!
create an empty dictionary if it is called without keyword args. Just
pass NULL.
XXX I had believed that this caused weird errors, but the test suite
runs cleanly.
of nested functions. Either is allowed in a function if it contains
no defs or lambdas or the defs and lambdas it contains have no free
variables. If a function is itself nested and has free variables,
either is illegal.
Revise the symtable to use a PySymtableEntryObject, which holds all
the relevant information for a scope, rather than using a bunch of
st_cur_XXX pointers in the symtable struct. The changes simplify the
internal management of the current symtable scope and of the stack.
Added new C source file: Python/symtable.c. (Does the Windows build
process need to be updated?)
As part of these changes, the initial _symtable module interface
introduced in 2.1a2 is replaced. A dictionary of
PySymtableEntryObjects is returned.
hooks to take over the Python import machinery at a very early stage
in the Python startup phase.
If there are still places in the Python interpreter which need to
bypass the __import__ hook, these places must now use
PyImport_ImportModuleEx() instead. So far no other places than in
the import mechanism itself have been identified.
symtable.h, so that they can be used by external modules.
Improve error handling in symtable_enter_scope(), which returns an
error code that went unchecked by most callers. XXX The error handling
in symtable code is sloppy in general.
Modify symtable to record the line number that begins each scope.
This can help to identify which code block is being referred to when
multiple blocks are bound to the same name.
Add st_scopes dict that is used to preserve scope info when
PyNode_CompileSymtable() is called. Otherwise, this information is
tossed as soon as it is no longer needed.
Add Py_SymtableString() to pythonrun; analogous to Py_CompileString().
discussion on python-dev. 'from mod import *' is still banned except
at the module level.
Fix value for special NOOPT entry in symtable. Initialize to 0 instead
of None, so that later uses of PyInt_AS_LONG() are valid. (Bug
reported by Donn Cave.)
replace local REPR macros with PyObject_REPR in object.h
reference manual but not checked: Names bound by import statements may
not occur in global statements in the same scope. The from ... import *
form may only occur in a module scope.
I guess these changes could break code, but the reference manual
warned about them.
Several other small changes
If a variable is declared global in the nearest enclosing scope of a
free variable, then treat it is a global in the nested scope too.
Get rid of com_mangle and symtable_mangle functions and call mangle
directly.
If errors occur during symtable table creation, return -1 from
symtable_build().
Do not increment st_errors in assignment to lambda, because exception
is not set.
Add extra argument to symtable_assign(); the argument, flag, is ORed
with DEF_LOCAL for each symtable_add_def() call.
This change eliminates an extra malloc/free when a frame with free
variables is created. Any cell vars or free vars are stored in
f_localsplus after the locals and before the stack.
eval_code2() fills in the appropriate values after handling
initialization of locals.
To track the size the frame has an f_size member that tracks the total
size of f_localsplus. It used to be implicitly f_nlocals + f_stacksize.
They're named as if public, so I did a Bad Thing by changing
PyMarshal_ReadObjectFromFile() to suck up the remainder of the file in one
gulp: anyone who counted on that leaving the file pointer merely at the
end of the next object would be screwed. So restored
PyMarshal_ReadObjectFromFile() to its earlier state, renamed the new greedy
code to PyMarshal_ReadLastObjectFromFile(), and changed Python internals to
call the latter instead.
SF patch http://sourceforge.net/patch/?func=detailpatch&patch_id=103453&group_id=5470
PyMember_Set of T_CHAR always raises exception.
Unfortunately, this is a use of a C API function that Python itself never makes, so
there's no .py test I can check in to verify this stays fixed. But the fault in the
code is obvious, and Dave Cole's patch just as obviously fixes it.
The majority of the changes are in the compiler. The mainloop changes
primarily to implement the new opcodes and to pass a function's
closure to eval_code2(). Frames and functions got new slots to hold
the closure.
Include/compile.h
Add co_freevars and co_cellvars slots to code objects.
Update PyCode_New() to take freevars and cellvars as arguments
Include/funcobject.h
Add func_closure slot to function objects.
Add GetClosure()/SetClosure() functions (and corresponding
macros) for getting at the closure.
Include/frameobject.h
PyFrame_New() now takes a closure.
Include/opcode.h
Add four new opcodes: MAKE_CLOSURE, LOAD_CLOSURE, LOAD_DEREF,
STORE_DEREF.
Remove comment about old requirement for opcodes to fit in 7
bits.
compile.c
Implement changes to code objects for co_freevars and co_cellvars.
Modify symbol table to use st_cur_name (string object for the name
of the current scope) and st_cur_children (list of nested blocks).
Also define st_nested, which might more properly be called
st_cur_nested. Add several DEF_XXX flags to track def-use
information for free variables.
New or modified functions of note:
com_make_closure(struct compiling *, PyCodeObject *)
Emit LOAD_CLOSURE opcodes as needed to pass cells for free
variables into nested scope.
com_addop_varname(struct compiling *, int, char *)
Emits opcodes for LOAD_DEREF and STORE_DEREF.
get_ref_type(struct compiling *, char *name)
Return NAME_CLOSURE if ref type is FREE or CELL
symtable_load_symbols(struct compiling *)
Decides what variables are cell or free based on def-use info.
Can now raise SyntaxError if nested scopes are mixed with
exec or from blah import *.
make_scope_info(PyObject *, PyObject *, int, int)
Helper functions for symtable scope stack.
symtable_update_free_vars(struct symtable *)
After a code block has been analyzed, it must check each of
its children for free variables that are not defined in the
block. If a variable is free in a child and not defined in
the parent, then it is defined by the block enclosing the
current one or it is a global. This does the right logic.
symtable_add_use() is now a macro for symtable_add_def()
symtable_assign(struct symtable *, node *)
Use goto instead of for (;;)
Fixed bug in symtable where name of keyword argument in function
call was treated as assignment in the scope of the call site. Ex:
def f():
    g(a=2)   # a was considered a local of f
ceval.c
eval_code2() now take one more argument, a closure.
Implement LOAD_CLOSURE, LOAD_DEREF, STORE_DEREF, MAKE_CLOSURE.
Also: When name error occurs for global variable, report that the
name was global in the error message.
Objects/frameobject.c
Initialize f_closure to be a tuple containing space for cellvars
and freevars. f_closure is NULL if neither are present.
Objects/funcobject.c
Add support for func_closure.
Python/import.c
Change the magic number.
Python/marshal.c
Track changes to code objects.
parameters that contained both anonymous tuples and *arg or **arg. Ex:
def f(a, (b, c), *d): pass
Fix the symtable_params() to generate names in the right order for
co_varnames slot of code object. Consider *arg and **arg before the
"complex" names introduced by anonymous tuples.
module__doc__: Document the Warning subclass hierarchy.
make_class(): Added a "goto finally" so that if populate_methods()
fails, the return status will be -1 (failure) instead of 0 (success).
fini_exceptions(): When decref'ing the static pointers to the
exception classes, clear out their dictionaries too. This breaks a
cycle from class->dict->method->class and allows the classes with
unbound methods to be reclaimed. This plugs a large memory leak in a
common Py_Initialize()/dosomething/Py_Finalize() loop.