In C++, it's an error to pass a string literal to a char * function
without a const_cast. Rather than require every C++ extension
module to put a cast around string literals, fix the API to state the
const-ness.
I focused on parts of the API where people usually pass literals:
PyArg_ParseTuple() and friends, Py_BuildValue(), PyMethodDef, the type
slots, etc. Predictably, a large set of functions needed to be
fixed as a result of these changes. The most pervasive change was
making the keyword-args list passed to PyArg_ParseTupleAndKeywords()
a const char *kwlist[].
One cast was required as a result of the changes: A type object
mallocs the memory for its tp_doc slot and later frees it.
PyTypeObject says that tp_doc is const char *; but if the type was
created by type_new(), we know it is safe to cast to char *.
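A minimal sketch of an extension function after the change (the function
and its arguments are hypothetical); the kwlist no longer needs a cast:

    static PyObject *
    example(PyObject *self, PyObject *args, PyObject *kwds)
    {
        /* declared const per the change described above */
        static const char *kwlist[] = {"name", "flags", NULL};
        const char *name;
        int flags = 0;
        if (!PyArg_ParseTupleAndKeywords(args, kwds, "s|i", kwlist,
                                         &name, &flags))
            return NULL;
        Py_RETURN_NONE;
    }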
This change implements a new bytecode compiler, based on a
transformation of the parse tree to an abstract syntax defined in
Parser/Python.asdl.
The compiler implementation is not complete, but it is in stable
enough shape to run the entire test suite excepting two disabled
tests.
Improve signal handling, especially when using threads, by forcing an early
re-execution of the "periodic" code in PyEval_EvalFrame() when things_to_do
is not cleared by Py_MakePendingCalls().
A high-level error message was stomping on useful, detailed messages from
lower-level routines.
The new approach is to augment string error messages returned by the
low-level routines, providing both high- and low-level information. If
the exception value is not a string, no changes are made.
To see the improved messages in action, type:
    import random
    class R(random): pass
    class B(bool): pass
hack: it would resize *interned* strings in-place! This occurred because
their reference counts do not have their expected value -- stringobject.c
hacks them. Mea culpa.
have differing refcount semantics. If anyone sees a prettier way to
achieve the same ends, then please go for it.
I think this is the first time I've ever used Py_XINCREF.
* Fixes an incorrect variable passed to PyDict_CheckExact().
* Allow general mapping locals arguments for the execfile() function
and exec statement.
* Add tests.
[ 960406 ] unblock signals in threads
(although the changes do not correspond exactly to any patch attached to
that report).
Non-main threads no longer have all signals masked.
A different interface to readline is used.
The handling of signals inside calls to PyOS_Readline is now rather
different.
These changes are all a bit scary! Review and cross-platform testing
much appreciated.
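For context, "all signals masked" refers to the old thread-bootstrap
behaviour, which looked roughly like this (a sketch, not the removed code):

    /* Old behaviour, now removed: each new thread blocked every
       signal, so only the main thread could ever receive one. */
    #include <pthread.h>
    #include <signal.h>

    static void
    mask_all_signals(void)
    {
        sigset_t all;
        sigfillset(&all);
        pthread_sigmask(SIG_BLOCK, &all, NULL);
    }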
The builtin eval() function now accepts any mapping for the locals argument.
Time-sensitive steps are guarded by PyDict_CheckExact() to keep from
slowing down the normal case. My timings show no measurable impact.
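The guard is essentially this pattern (a sketch of the technique, not the
exact diff):

    /* Fast path for the common case of a real dict; any other
       mapping goes through the abstract object protocol. */
    if (PyDict_CheckExact(locals))
        err = PyDict_SetItem(locals, key, value);
    else
        err = PyObject_SetItem(locals, key, value);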
Add a more informative message for the common user mistake of subclassing
from a module name rather than another class (e.g. random instead of
random.Random).
pre-increment forms to post-increment forms. Post-incrementing
also eliminates the need for negative array indices for oparg fetches.
* In exception handling code, check for class based exceptions before
the older string based exceptions.
BINARY_SUBSCR:
* invert test for normal case fall through
* eliminate err handling code by jumping to slow_case
LOAD_LOCALS:
* invert test for normal case fall through
* continue instead of break for the non-error case
STORE_NAME and DELETE_NAME:
* invert test for normal case fall through
LOAD_NAME:
* continue instead of break for the non-error case
DELETE_FAST:
* invert test for normal case fall through
LOAD_DEREF:
* invert test for normal case fall through
* continue instead of break for the non-error case
tests of "why" against WHY_YIELD became useless. This patch removes them,
but assert()s that why != WHY_YIELD everywhere such a test was removed.
The test suite ran fine under a debug build (i.e., the asserts never
triggered).
* Defer error handling for wrong number of arguments to the
unpack_iterable() function. Cuts the code size almost in half.
* Replace function calls to PyList_Size() and PyTuple_Size() with
their smaller and faster macro counterparts.
* Move the constant structure references outside of the inner loops.
(Contributed by Andrew I MacIntyre.)
Disables opcode prediction when dynamic execution
profiling is in effect, so that the profiling counters at
the top of the main interpreter loop in eval_frame()
are updated for each opcode.
Simplified version of Neal Norwitz's patch which adds gotos for
opcodes that set "why". This skips a number of tests where the
outcome of the test is known in advance.
A new API (only accessible from C) to interrupt a thread by sending it
an exception. This is not always effective, but might help some people.
Requested by Just van Rossum and Alex Martelli. It is intentional
that you have to write your own C extension to call it from Python.
Docs will have to wait.
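Assuming the entry refers to PyThreadState_SetAsyncExc() (the log does not
name the function), a call from C looks like this:

    /* thread_id is hypothetical; the return value is the number of
       thread states modified, so a result > 1 means the request hit
       more than one thread and should be revoked with NULL. */
    static void
    interrupt_thread(long thread_id)
    {
        int n = PyThreadState_SetAsyncExc(thread_id,
                                          PyExc_KeyboardInterrupt);
        if (n > 1)
            PyThreadState_SetAsyncExc(thread_id, NULL);
    }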
* Can now test for basic blocks.
* Optimize inverted comparisons.
* Optimize UNARY_NOT followed by a conditional jump (example below).
* Added a new opcode, NOP, to keep code size constant.
* Applied NOP to previous transformations where appropriate.
Note, the NOP would not be necessary if other functions were
added to re-target jump addresses and update the co_lnotab mapping.
That would yield slightly faster and cleaner bytecode at the
expense of optimizer simplicity and of keeping it decoupled
from the line-numbering structure.
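For example, the UNARY_NOT transformation keeps the code size constant by
padding with the new opcode (valid because both jump paths immediately pop
the tested value):

    UNARY_NOT                    NOP
    JUMP_IF_FALSE  target   -->  JUMP_IF_TRUE  target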
Added two predictions:
GET_ITER --> FOR_ITER
FOR_ITER --> STORE_FAST or UNPACK_SEQUENCE
Improves timings on pybench and timeit.py. Pystone results are neutral.
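For reference, the prediction machinery amounts to a pair of macros along
these lines (and it is disabled under dynamic execution profiling, per the
entry above, so the counters stay exact):

    #ifdef DYNAMIC_EXECUTION_PROFILE
    #define PREDICT(op)    if (0) goto PRED_##op
    #else
    #define PREDICT(op)    if (*next_instr == op) goto PRED_##op
    #endif
    #define PREDICTED(op)  PRED_##op: next_instr++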
Applied to common cases:
COMPARE_OP is often followed by a JUMP_IF.
JUMP_IF is usually followed by POP_TOP.
Shows improved timings on PyStone, PyBench, and specific tests
using timeit.py:
    python timeit.py -s "x=1" "if x==1: pass"
    python timeit.py -s "x=1" "if x==2: pass"
    python timeit.py -s "x=1" "if x: pass"
    python timeit.py -s "x=100" "while x!=1: x-=1"
Potential future candidates:
GET_ITER predicts FOR_ITER
FOR_ITER predicts STORE_FAST or UNPACK_SEQUENCE
Also, applied missing goto fast_next_opcode to DUP_TOPX.
My previous patches should have used fast_next_opcode
in a few places instead of continue.
Also, applied one PyInt_AS_LONG macro in a place where
the type had already been checked.
-DCALL_PROFILE: Count the number of function calls executed.
When this symbol is defined, the ceval mainloop and helper functions
count the number of function calls made. It keeps detailed statistics
about what kind of object was called and whether the call hit any of
the special fast paths in the code.
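A sketch of what the instrumentation might look like (the counter names
are hypothetical):

    #ifdef CALL_PROFILE
    enum { PCALL_ALL, PCALL_FUNCTION, PCALL_FAST, PCALL_NUM };
    static int pcall[PCALL_NUM];         /* one counter per call kind */
    #define PCALL(kind)  pcall[kind]++   /* bumped at each call site */
    #else
    #define PCALL(kind)                  /* compiles away when disabled */
    #endif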
Optimization:
When we take the fast_function() path, which seems to be taken for
most function calls, and there is minimal frame setup to do, avoid
calling PyEval_EvalCodeEx(). That function does a lot of
work to handle keyword args and star args, free variables,
generators, etc. The inlined version simply allocates the frame and
copies the argument values into the frame.
The optimization gets a little help from compile.c which adds a
CO_NOFREE flag to code objects that don't have free variables or cell
variables. This change allows fast_function() to get into the fast
path with fewer tests.
I measure a couple of percent speedup in pystone with this change, but
there's surely more that can be done.
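A sketch of the fast-path test (the variable names are assumed, not
quoted from fast_function()):

    /* Inline only when setup is trivial: exact positional arg count,
       no keyword args, no defaults, and CO_NOFREE set on the code
       object (no free or cell variables). */
    if ((co->co_flags & CO_NOFREE) &&
        argdefs == NULL && nk == 0 && co->co_argcount == n) {
        /* allocate a frame, copy the n argument values in, and
           run the eval loop on it directly */
    }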
Make the code slightly shorter, faster, and easier to
read.
* Eliminate unused DUP_TOPX code for x==1.
compile.c always generates DUP_TOP instead.
* Since only two cases remain for DUP_TOPX, replace
the switch-case with if-elseif.
* The in-lined integer compare does a CheckExact on
both arguments. Since the second is a little more
likely to fail, test it first (sketch below).
* The switch-case for IS/IS_NOT and IN/NOT_IN can
separate the regular and inverted cases with no
additional work. For all four paths, saves a test and
jump.
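A condensed sketch of the in-lined compare, testing w (the top item, and
the one slightly more likely to fail the check) first:

    if (PyInt_CheckExact(w) && PyInt_CheckExact(v)) {
        long a = PyInt_AS_LONG(v);
        long b = PyInt_AS_LONG(w);
        int res;
        switch (oparg) {
        case PyCmp_LT: res = a <  b; break;
        case PyCmp_LE: res = a <= b; break;
        /* ... remaining comparison ops ... */
        default: goto slow_compare;
        }
        x = res ? Py_True : Py_False;
        Py_INCREF(x);
    }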
The two are semantically equivalent, but the first triggered a compiler
warning about an unused variable. Note, the preceding steps had already
accessed and decreffed the variable, so the reference counts were fine.
parameter being either four or five. Currently, compile.c does not
generate calls with a parameter higher than three.
May have to be reverted if the second alpha or beta shakes out some
other tool generating this opcode with a parameter of four or five.
Replaced groups of pushes and pops with indexed access to the stack and
a single adjustment (if needed) to the stack level.
Avoids scores of unnecessary increments and decrements to the stack pointer.
Removes unnecessary sequential dependencies so that the compiler has more
freedom for optimizations. Frees the processor for more parallel and
pipelined execution by using mostly read-only access and having few pointer
adjustments just prior to a read or write.
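An illustration of the rewrite, using the indexed stack macros from
ceval.c:

    /* Before: three stack-pointer adjustments */
    w = POP();
    v = POP();
    PUSH(x);

    /* After: indexed access and a single adjustment */
    w = TOP();
    v = SECOND();
    SET_SECOND(x);
    STACKADJ(-1);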
all along. Before instr_lb tended to be too high.
I don't think this actually makes any difference, given what the compiler
produces, but it makes me a bit happier.
Patch #617312, applied both on the trunk and the 22-maint branch.
Also added a test case, and ported the test_trace I wrote for HEAD
to 2.2.2 (with all those horrible extra 'line' events ;-).
than when this interval was first established. Checking too frequently just
adds needless overhead because most of the time there is nothing to do and
no other threads ready to run.
globals, _Py_Ticker and _Py_CheckInterval. This also implements Jeremy's
shortcut in Py_AddPendingCall that zeroes out _Py_Ticker, so the
main loop need only test a single value.
The gory details are at
http://python.org/sf/602191
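The resulting main-loop check has roughly this shape (Py_AddPendingCall
zeroes _Py_Ticker, so a pending call is noticed on the very next trip):

    if (--_Py_Ticker < 0) {
        _Py_Ticker = _Py_CheckInterval;
        if (things_to_do) {
            if (Py_MakePendingCalls() < 0)
                goto on_error;
        }
        /* ... other periodic work (e.g. releasing the GIL) ... */
    }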
Use a slightly different strategy to determine when not to call the line
trace function. This removes the need for the RETURN_NONE opcode, so
that's gone again. Update docs and comments to match.
Thanks to Neal and Armin!
Also add a test suite. This should have come with the original patch...
in LOAD_GLOBAL. Besides saving a C function call, it saves checks
whether f_globals and f_builtins are dicts, and extracting and testing
the string object's hash code is done only once. We bail out of the
inlining if the name is not exactly a string, or when its hash is -1;
because of interning, neither should ever happen. I believe interning
guarantees that the hash code is set, and I believe that the 'names'
tuple of a code object always contains interned strings, but I'm not
assuming that -- I'm simply testing hash != -1.
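A condensed sketch of the inlining (field and slot names as in this
era's string and dict objects):

    if (PyString_CheckExact(w)) {
        long hash = ((PyStringObject *)w)->ob_shash;
        if (hash != -1) {           /* interning should guarantee this */
            PyDictObject *d = (PyDictObject *)f->f_globals;
            x = d->ma_lookup(d, w, hash)->me_value;
            if (x == NULL) {
                d = (PyDictObject *)f->f_builtins;
                x = d->ma_lookup(d, w, hash)->me_value;
            }
            /* fall back to the generic path if still NULL */
        }
    }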
On my home machine, this makes a pystone variant with new-style
classes and slots run at the same speed as classic pystone! (With
new-style classes but without slots, it is still a lot slower.)
Also, don't handle METH_OLDARGS on the fast path. All the interesting
builtins have been converted to use METH_NOARGS, METH_O, or
METH_VARARGS.
Result is another 1-2% speedup. If I can cobble together 10 of these,
it might make a difference.
This makes the code much easier to read, because it is at a sane
indentation level. On my box this shows a 1-2% speedup, which means
nothing, except that I'm not going to worry about the performance
effects of the change.
nothing special done if keyword arguments were present, so test for
that earlier and fall through to the normal case if there are any.
This ought to slow down CFunction calls with keyword args, but I don't
care; it's a tiny (1%) improvement for pystone.
[ 587993 ] SET_LINENO killer
Remove SET_LINENO. Tracing is now supported by inspecting co_lnotab.
Many sundry changes to document and adapt to this change.
The staticforward define was needed to support certain broken C
compilers (notably SCO ODT 3.0, perhaps early AIX as well) that botched
the static keyword when it was used with a forward declaration of a
static initialized structure. Standard C allows the forward declaration
with static, and we've decided to stop catering to broken C compilers.
(In fact, we expect that the compilers have all been fixed eight years
later.)
I'm leaving staticforward and statichere defined in object.h as
static. This is only for backwards compatibility with C extensions
that might still use them.
XXX I haven't updated the documentation.
This was a simple typo. Strange that the compiler didn't catch it!
Instead of WHY_CONTINUE, two tests used CONTINUE_LOOP, which isn't a
why_code at all, but an opcode; but even though 'why' is declared as
an enum, comparing it to an int is apparently not even worth a
warning -- not in gcc, and not in VC++. :-(
Will fix in 2.2 too.
[ 558249 ] softspace vs --disable-unicode
And #endif was in the wrong place.
Bugfix candidate, almost surely.
I think I will embark on squashing test failures in --disable-unicode builds --
a Real Bug was hiding under them.
SF bug 535905 (Evil Trashcan and GC interaction).
The SETLOCAL() macro should not DECREF the local variable in-place and
then store the new value; it should copy the old value to a temporary
value, then store the new value, and then DECREF the temporary value.
This is because it is possible that during the DECREF the frame is
accessed by other code (e.g. a __del__ method or gc.collect()) and the
variable would be pointing to already-freed memory.
BUGFIX CANDIDATE!
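After the fix, the macro follows the standard safe-replace idiom:

    #define SETLOCAL(i, value)  do { PyObject *tmp = GETLOCAL(i); \
                                     GETLOCAL(i) = (value); \
                                     Py_XDECREF(tmp); } while (0)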
This fixes the symptom, but PRINT_ITEM has no way to know what (if
anything) PyFile_WriteObject() writes unless the object being printed
is a string. When the object isn't a string, this fix retains the
guess that softspace should be set after PyFile_WriteObject().
We might want to say that it's the job of filelike-object write methods
to leave the file's softspace in the correct state. That would probably
be better -- but everyone relies on PRINT_ITEM to guess for them now.
eval_frame(): Under -Qnew, INPLACE_DIVIDE wasn't getting handed off to
INPLACE_TRUE_DIVIDE (like BINARY_DIVIDE was getting handed off to
BINARY_TRUE_DIVIDE).
Bugfix candidate.
Based on the patch from Danny Yoo. The fix is in exec_statement() in
ceval.c.
There are also changes to introduce use of PyCode_GetNumFree() in
several places.
Had nothing to do with rich comparisons -- some stack cleanup code was
lost as a result of merging in Neil Schemenauer's generators patch.
Reinserted the stack cleanup code, skipping it when yielding.
leak when a class defined a __metaclass__. This fixes the problem
reported on python-dev by Ping; I dunno if it's the same as SF bug
#489669 (since that mentions Unicode).
Big Hammer to implement -Qnew as PEP 238 says it should work (a global
option affecting all instances of "/").
pydebug.h, main.c, pythonrun.c: define a private _Py_QnewFlag flag, true
iff -Qnew is passed on the command line. This should go away (as the
comments say) when true division becomes The Rule. This is
deliberately not exposed to runtime inspection or modification: it's
a one-way one-shot switch to pretend you're using Python 3.
ceval.c: when _Py_QnewFlag is set, treat BINARY_DIVIDE as
BINARY_TRUE_DIVIDE.
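Concretely, the BINARY_DIVIDE case falls through (a sketch using the
usual stack macros):

    case BINARY_DIVIDE:
        if (!_Py_QnewFlag) {
            w = POP();
            v = TOP();
            x = PyNumber_Divide(v, w);    /* classic division */
            Py_DECREF(v);
            Py_DECREF(w);
            SET_TOP(x);
            break;
        }
        /* -Qnew is in effect: fall through */
    case BINARY_TRUE_DIVIDE:
        w = POP();
        v = TOP();
        x = PyNumber_TrueDivide(v, w);
        Py_DECREF(v);
        Py_DECREF(w);
        SET_TOP(x);
        break;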
test_{descr, generators, zipfile}.py: fiddle so these pass under
-Qnew too. This was just a matter of s!/!//! in test_generators and
test_zipfile. test_descr was trickier, as testbinop() is passed
assumptions that "/" is the same as calling a "__div__" method; put
a temporary hack there to call "__truediv__" instead when the method
name is "__div__" and 1/2 evaluates to 0.5.
Three standard tests still fail under -Qnew (on Windows; somebody
please try the Linux tests with -Qnew too! Linux runs a whole bunch
of tests Windows doesn't):
test_augassign
test_class
test_coercion
I can't stay awake longer to stare at this (be my guest). Offhand
cures weren't obvious, nor was it even obvious that cures are possible
without major hackery.
Question: when -Qnew is in effect, should calls to __div__ magically
change into calls to __truediv__? See "major hackery" at tail end of
last paragraph <wink>.
PyEval_EvalCodeEx(): increment tstate->recursion_depth around the
decref of the frame, because the C stack for this call is still in
use and the decref can lead to __del__ methods getting called.
While this gives tstate->recursion_depth a value proportional to the
depth of the C stack (instead of a small constant no matter how
deeply __del__s recurse), it's not enough to stop the reported crash
when using the default recursion limit on Windows.
Bugfix candidate.
This patch boosts performance for comparing identical string objects
by some 20% on my machine, while not causing any noticeable slow-down
for other operations (according to tests done with pybench).
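The shortcut is presumably in string_richcompare(): identical objects can
answer most comparisons without looking at the bytes. A sketch:

    if (a == b) {      /* same object, so contents are trivially equal */
        switch (op) {
        case Py_EQ: case Py_LE: case Py_GE:
            result = Py_True;
            break;
        case Py_NE: case Py_LT: case Py_GT:
            result = Py_False;
            break;
        }
        Py_INCREF(result);
        return result;
    }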