Commit Graph

399 Commits

Author SHA1 Message Date
Raymond Hettinger c8aa08b172 Some (but not all) of the WHY-code bitfield tests ran faster as
separate equality tests.  Each test is now set to whichever form timed best.
2004-04-11 14:59:33 +00:00
Raymond Hettinger 5bed456056 Revert 2.393, elimination of pre-decrementing, which
did not stand up to additional timings.
2004-04-10 23:34:17 +00:00
Raymond Hettinger 7eddd78a15 Use continue instead of break wherever possible. 2004-04-07 14:38:08 +00:00
Raymond Hettinger d3b836d202 * Improve readability and remove data dependencies by converting
pre-increment forms to post-increment forms.  Post-incrementing
also eliminates the need for negative array indices for oparg fetches.

* In exception handling code, check for class based exceptions before
  the older string based exceptions.
2004-04-07 13:17:27 +00:00
Raymond Hettinger 467a698bd2 Small code improvements for readability, code size, and/or speed.
BINARY_SUBSCR:
    * invert test for normal case fall through
    * eliminate err handling code by jumping to slow_case

LOAD_LOCALS:
    * invert test for normal case fall through
    * continue instead of break for the non-error case

STORE_NAME and DELETE_NAME:
    * invert test for normal case fall through

LOAD_NAME:
    * continue instead of break for the non-error case

DELETE_FAST:
    * invert test for normal case fall through

LOAD_DEREF:
    * invert test for normal case fall through
    * continue instead of break for the non-error case
2004-04-07 11:39:21 +00:00
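The two recurring patterns in the entry above, sketched on a LOAD_LOCALS-style case (illustrative only, not the exact ceval.c diff):

    /* Before: error case tested first; both paths leave via break,
       which falls into the per-opcode error-check code. */
    case LOAD_LOCALS:
        if ((x = f->f_locals) == NULL) {
            PyErr_SetString(PyExc_SystemError, "no locals");
            break;
        }
        Py_INCREF(x);
        PUSH(x);
        break;

    /* After: test inverted so the common case falls straight through,
       and continue returns to the top of the dispatch loop, skipping
       the error checks entirely on the non-error path. */
    case LOAD_LOCALS:
        if ((x = f->f_locals) != NULL) {
            Py_INCREF(x);
            PUSH(x);
            continue;
        }
        PyErr_SetString(PyExc_SystemError, "no locals");
        break;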
Raymond Hettinger 7c9586545e Simplify previous checkin (bitfields for WHY codes).
Restores the self-documenting enum declaration.
2004-04-06 10:11:10 +00:00
Raymond Hettinger 06032cb664 Coded WHY flags as bitfields (taking inspiration from tp_flags).
This allows multiple flags to be tested in a single compare
which eliminates unnecessary compares and saves a few bytes.
2004-04-06 09:37:35 +00:00
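A minimal sketch of the bitfield scheme described above, with illustrative values (the real enum lives in ceval.c):

    enum why_code {
        WHY_NOT       = 0x0001, /* No error */
        WHY_EXCEPTION = 0x0002, /* Exception occurred */
        WHY_RERAISE   = 0x0004, /* Exception re-raised by 'finally' */
        WHY_RETURN    = 0x0008, /* 'return' statement */
        WHY_BREAK     = 0x0010, /* 'break' statement */
        WHY_CONTINUE  = 0x0020, /* 'continue' statement */
        WHY_YIELD     = 0x0040  /* 'yield' operator */
    };

    /* Because each value is a distinct bit, several WHY conditions can be
       checked with one mask test instead of a chain of equality compares: */
    if (why & (WHY_RETURN | WHY_BREAK | WHY_CONTINUE)) {
        /* unwind the block stack */
    }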
Tim Peters 8a5c3c76be Since the fast_yield branch target was introduced, it appears that most
tests of "why" against WHY_YIELD became useless.  This patch removes them,
but assert()s that why != WHY_YIELD everywhere such a test was removed.
The test suite ran fine under a debug build (i.e., the asserts never
triggered).
2004-04-05 19:36:21 +00:00
Nicholas Bastin e5662aedef Changed assorted calls to PyThreadState_Get() to use the macro 2004-03-24 22:22:12 +00:00
Nicholas Bastin c69ebe8d50 Enable the profiling of C functions (builtins and extensions) 2004-03-24 21:57:10 +00:00
Armin Rigo bf57a14522 Fix SF bug #765624. 2004-03-22 19:24:58 +00:00
Armin Rigo 9dbf9084e8 Cancelled checkin, sorry. 2004-03-20 21:50:13 +00:00
Armin Rigo 1515fc2a01 A 2% speed improvement with gcc on little-endian machines. My guess is that this
new pattern for NEXTARG() is detected and optimized into a single (short)
load.
2004-03-20 20:03:17 +00:00
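The kind of pattern meant, as a sketch (the exact macro text may differ): the two argument bytes are combined low-byte-first, which gcc can fold into a single 16-bit load on a little-endian machine.

    /* Bytecode arguments are stored low byte first, so this expression
       is equivalent to reading one unaligned short at next_instr - 2. */
    #define NEXTARG()  (next_instr += 2, \
                        (next_instr[-1] << 8) + next_instr[-2])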
Raymond Hettinger fba1cfc49a LIST_APPEND is predictably followed by JUMP_ABSOLUTE.
Reduces loop overhead by an additional 10%.
2004-03-12 16:33:17 +00:00
Raymond Hettinger 2d783e9b16 Move the code for BREAK_LOOP and CONTINUE_LOOP to be near FOR_ITER.
Makes it more likely that all loop operations are in the cache
at the same time.
2004-03-12 09:12:22 +00:00
Raymond Hettinger db0de9e7ca Speed up for-loops by inlining PyIter_Next().  Saves duplicate tests
and a function call, resulting in a 15% reduction in total loop overhead
(as measured by timeit.Timer('pass')).
2004-03-12 08:41:36 +00:00
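Roughly what the inlining amounts to; a sketch of the FOR_ITER case, not the verbatim change:

    case FOR_ITER:
        /* The compiler guarantees TOS is an iterator, so instead of
           calling PyIter_Next() -- which re-checks that and then makes
           the same indirect call -- invoke the tp_iternext slot directly. */
        v = TOP();
        x = (*v->ob_type->tp_iternext)(v);
        if (x != NULL) {
            PUSH(x);
            continue;
        }
        if (PyErr_Occurred()) {
            if (!PyErr_ExceptionMatches(PyExc_StopIteration))
                break;
            PyErr_Clear();          /* StopIteration just ends the loop */
        }
        /* Iterator exhausted: pop it and jump past the loop body. */
        x = v = POP();
        Py_DECREF(v);
        JUMPBY(oparg);
        continue;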
Raymond Hettinger f114a3ae63 Refactor and optimize code for UNPACK_SEQUENCE.
* Defer error handling for wrong number of arguments to the
  unpack_iterable() function.  Cuts the code size almost in half.

* Replace function calls to PyList_Size() and PyTuple_Size() with
  their smaller and faster macro counterparts.

* Move the constant structure references outside of the inner loops.
2004-03-08 23:25:30 +00:00
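The macro substitution from the second bullet, in sketch form (v here stands for the sequence being unpacked, already known to be a tuple or list):

    int items;

    /* Function call: type-checks its argument and costs a call. */
    items = PyTuple_Size(v);

    /* Macro counterpart: expands to a direct ob_size read; safe here
       because UNPACK_SEQUENCE has already verified the type. */
    items = PyTuple_GET_SIZE(v);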
Raymond Hettinger dd80f76265 SF patch #910929: Optimize list comprehensions
Add a new opcode, LIST_APPEND, and apply it to the code generation for
list comprehensions.  Reduces the per-loop overhead by about a third.
2004-03-07 07:31:06 +00:00
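A sketch of what the new opcode's handler amounts to (using the usual ceval.c stack macros; not necessarily the exact code):

    case LIST_APPEND:
        w = POP();                /* value produced by the loop body */
        v = POP();                /* the list being built            */
        err = PyList_Append(v, w);
        Py_DECREF(v);
        Py_DECREF(w);
        if (err == 0)
            continue;             /* normal case: back to the top    */
        break;                    /* error: fall into the error code */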
Skip Montanaro 786ea6bc23 Add pystack definition to Misc/gdbinit with some explanation of its behavior
and add flag comments to ceval.c and main.c alerting people to the coupling
between pystack and the layout of those files.
2004-03-01 15:44:05 +00:00
Michael W. Hudson ecfeb7f095 This is my patch #876198 plus a NEWS entry and a header frob.
Remove the ability to use (from C) arbitrary objects supporting the
read buffer interface as the co_code member of code objects.
2004-02-12 15:28:27 +00:00
Skip Montanaro 7befb9966e Remove support for missing ANSI C header files (limits.h, stddef.h, etc.). 2004-02-10 16:50:21 +00:00
Raymond Hettinger a72169871d SF patch #884022: dynamic execution profiling vs opcode prediction
(Contributed by Andrew I MacIntyre.)

Disables opcode prediction when dynamic execution
profiling is in effect, so that the profiling counters at
the top of the main interpreter loop in eval_frame()
are updated for each opcode.
2004-02-08 19:59:27 +00:00
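In sketch form, the patch boils down to a conditional definition of the prediction macro (the macro itself is introduced further down this log):

    #ifdef DYNAMIC_EXECUTION_PROFILE
    /* Profiling must count every trip around the main loop, so the
       prediction shortcut is compiled out. */
    #define PREDICT(op)   if (0) goto PRED_##op
    #else
    #define PREDICT(op)   if (*next_instr == op) goto PRED_##op
    #endif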
Raymond Hettinger 1dd8309246 SF patch #864059: optimize eval_frame
Simplified version of Neal Norwitz's patch which adds gotos for
opcodes that set "why".  This skips a number of tests whose
outcome is known in advance.
2004-02-06 18:32:33 +00:00
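A sketch of the kind of change meant, using BREAK_LOOP as an example (illustrative, not the exact diff):

    /* Before: set why, fall out of the switch, and let the generic code
       re-discover that why != WHY_NOT before unwinding the block stack. */
    case BREAK_LOOP:
        why = WHY_BREAK;
        break;

    /* After: the outcome of those tests is already known, so jump
       straight to the block-unwinding code. */
    case BREAK_LOOP:
        why = WHY_BREAK;
        goto fast_block_end;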
Jack Jansen eddc1449ba Getting rid of all the code inside #ifdef macintosh too. 2003-11-20 01:44:59 +00:00
Jeremy Hylton 904ed86a77 Make undetected error on stack unwind a fatal error. 2003-11-05 17:29:35 +00:00
Armin Rigo 2b3eb4062c Deleting cyclic object comparison.
SF patch 825639
http://mail.python.org/pipermail/python-dev/2003-October/039445.html
2003-10-28 12:05:48 +00:00
Armin Rigo 1d313ab9d1 oh dear. Wrong manipulation. Committed a version of ceval.c from my
no-cyclic-comparison patch at the same time as errors.c.

Reverting ceval.c to the previous revision.
2003-10-25 14:33:09 +00:00
Armin Rigo 092381a979 Made function declaration a proper C prototype 2003-10-25 14:29:27 +00:00
Raymond Hettinger 8ae4689657 Simplify and speed up uses of Py_BuildValue():
* Py_BuildValue("(OOO)",a,b,c)  -->  PyTuple_Pack(3,a,b,c)
* Py_BuildValue("()",a)         -->  PyTuple_New(0)
* Py_BuildValue("O", a)         -->  Py_INCREF(a)
2003-10-12 19:09:37 +00:00
Neal Norwitz c5131bc256 Fix SF #762455, segfault when sys.stdout is changed in getattr
Will backport.
2003-06-29 14:48:32 +00:00
Guido van Rossum b8b6d0c2c6 Add PyThreadState_SetAsyncExc(long, PyObject *).
A new API (only accessible from C) to interrupt a thread by sending it
an exception.  This is not always effective, but might help some people.
Requested by Just van Rossum and Alex Martelli.  It is intentional
that you have to write your own C extension to call it from Python.

Docs will have to wait.
2003-06-28 21:53:52 +00:00
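A usage sketch from a C extension (interrupt_thread is a hypothetical wrapper; thread_id is the target thread's id as returned by thread.get_ident()):

    #include <Python.h>

    /* Ask the interpreter to raise exc in the thread identified by
       thread_id the next time that thread resumes executing bytecode.
       Returns the number of thread states modified (normally 0 or 1);
       passing NULL for exc clears a previously pending exception. */
    static int
    interrupt_thread(long thread_id, PyObject *exc)
    {
        return PyThreadState_SetAsyncExc(thread_id, exc);
    }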
Neil Schemenauer c4b570f218 Use fast_next_opcode shortcut for forward jump opcodes (it's safe and
gives a small speedup).
2003-06-01 19:21:12 +00:00
Raymond Hettinger 40174c358f SF bug #733667: kwargs handled incorrectly
The fast_function() inlining optimization only
applies when there are zero keyword arguments.
2003-05-31 07:04:16 +00:00
Neil Schemenauer ca2a2f11d0 Don't use fast_next_opcode for JUMP_* opcodes. This fixes the problem
reported by Kurt B. Kaiser.
2003-05-30 23:59:44 +00:00
Michael W. Hudson 58ee2af48e Armin Rigo's fix & test for
[ 729622 ] line tracing hook errors

with massaging from me to integrate test into test suite.
2003-04-29 16:18:47 +00:00
Raymond Hettinger f4cf76dd5e Revert the previous enhancement to the bytecode optimizer.
The additional code complexity and new NOP opcode were not worth it.
2003-04-24 05:45:23 +00:00
Raymond Hettinger 060641d511 Improved the bytecode optimizer.
* Can now test for basic blocks.
* Optimize inverted comparisons.
* Optimize UNARY_NOT followed by a conditional jump.
* Added a new opcode, NOP, to keep code size constant.
* Applied NOP to previous transformations where appropriate.

Note, the NOP would not be necessary if other functions were
added to re-target jump addresses and update the co_lnotab mapping.
That would yield slightly faster and cleaner bytecode at the
expense of optimizer simplicity and of keeping it decoupled
from the line-numbering structure.
2003-04-22 06:49:11 +00:00
Mark Hammond 8d98d2cb95 New PyGILState_ API - implements PEP 311, from patch #684256. 2003-04-19 15:41:53 +00:00
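A short usage sketch of the new API from a callback that runs on a non-Python thread (assumes the interpreter and thread support are already initialized):

    #include <Python.h>

    static void
    callback_from_foreign_thread(void)
    {
        /* Acquire the GIL and a thread state for this thread, creating
           the thread state if one does not exist yet. */
        PyGILState_STATE gstate = PyGILState_Ensure();

        PyRun_SimpleString("print 'called back into Python'");

        /* Restore the previous GIL / thread-state situation. */
        PyGILState_Release(gstate);
    }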
Guido van Rossum a12fe4e81f - New function sys.call_tracing() allows pdb to debug code
recursively.
- pdb has a new command, "debug", which lets you step through
  arbitrary code from the debugger's (pdb) prompt.
2003-04-09 19:06:21 +00:00
Raymond Hettinger 7dc52212aa Eliminate data dependency in predict macro.
Added two predictions:
  GET_ITER --> FOR_ITER
  FOR_ITER --> STORE_FAST or UNPACK_SEQUENCE

Improves timings on pybench and timeit.py. Pystone results are neutral.
2003-03-16 20:14:44 +00:00
Raymond Hettinger ac2072920d Fix comment and whitespace. 2003-03-16 15:41:11 +00:00
Raymond Hettinger f606f87b31 Introduced macros for a simple opcode prediction protocol.
Applied to common cases:
    COMPARE_OP is often followed by a JUMP_IF.
    JUMP_IF is usually followed by POP_TOP.

Shows improved timings on PyStone, PyBench, and specific tests
using timeit.py:
    python timeit.py -s "x=1" "if x==1: pass"
    python timeit.py -s "x=1" "if x==2: pass"
    python timeit.py -s "x=1" "if x: pass"
    python timeit.py -s "x=100" "while x!=1: x-=1"

Potential future candidates:
    GET_ITER predicts FOR_ITER
    FOR_ITER predicts STORE_FAST or UNPACK_SEQUENCE

Also, applied missing goto fast_next_opcode to DUP_TOPX.
2003-03-16 03:11:04 +00:00
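A sketch of the protocol (macro bodies are illustrative; see the real definitions in ceval.c), followed by how a pair of opcodes uses it:

    /* If the next opcode is the predicted one, jump straight to its body,
       bypassing the top-of-loop bookkeeping and the switch dispatch. */
    #define PREDICT(op)             if (*next_instr == op) goto PRED_##op
    #define PREDICTED(op)           PRED_##op: next_instr++
    #define PREDICTED_WITH_ARG(op)  PRED_##op: \
                                        oparg = (next_instr[2] << 8) + \
                                                next_instr[1]; \
                                        next_instr += 3

    /* Typical use inside the dispatch switch:
     *
     *     case COMPARE_OP:
     *         ... push the comparison result ...
     *         PREDICT(JUMP_IF_FALSE);
     *         PREDICT(JUMP_IF_TRUE);
     *         continue;
     *
     *     PREDICTED_WITH_ARG(JUMP_IF_FALSE);
     *     case JUMP_IF_FALSE:
     *         ...
     */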
Raymond Hettinger 080cb3268f SF patch #701907: More use of fast_next_opcode
My previous patches should have used fast_next_opcode
in a few places instead of continue.

Also, applied one PyInt_AS_LONG macro in a place where
the type had already been checked.
2003-03-14 01:37:42 +00:00
Guido van Rossum c9fbb72ba5 Added implementation notes for [re]set_exc_info(). 2003-03-01 03:36:33 +00:00
Michael W. Hudson e46d1559c9 In the process of adding all the extended slice support I attempted to
change _PyEval_SliceIndex to round massively negative longs up to
-INT_MAX instead of 0, but botched it.  Get it right.

Thx to Armin for the report.
2003-02-27 14:50:34 +00:00
Raymond Hettinger 21012b8235 Micro-optimizations.
* List/Tuple checkexact is faster for the common case.
* Testing for Py_True and Py_False can be inlined for faster looping.
2003-02-26 18:11:50 +00:00
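A sketch of the second bullet as it might look in a conditional-jump opcode (stack macros as usual in ceval.c; not the exact code):

    case JUMP_IF_FALSE:
        w = TOP();
        if (w == Py_True) {          /* common case, no call needed */
            continue;
        }
        if (w == Py_False) {         /* common case, no call needed */
            JUMPBY(oparg);
            continue;
        }
        err = PyObject_IsTrue(w);    /* general fallback */
        if (err > 0)
            err = 0;                 /* true: fall through to next op */
        else if (err == 0)
            JUMPBY(oparg);           /* false: take the jump          */
        else
            break;                   /* error                         */
        continue;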
Guido van Rossum 6297a7a9fb - PyEval_GetFrame() is now declared to return a PyFrameObject *
instead of a plain PyObject *.  (SF patch #686601 by Ben Laurie.)
2003-02-19 15:53:17 +00:00
Just van Rossum 3aaf42c613 patch #683515: "Add unicode support to compile(), eval() and exec"
Incorporated nnorwitz's comment re. Py_USING_UNICODE.
2003-02-10 08:21:10 +00:00
Jeremy Hylton 985eba53f5 Small function call optimization and special build option for call stats.
-DCALL_PROFILE: Count the number of function calls executed.

When this symbol is defined, the ceval mainloop and helper functions
count the number of function calls made.  It keeps detailed statistics
about what kind of object was called and whether the call hit any of
the special fast paths in the code.

Optimization:

When we take the fast_function() path, which seems to be taken for
most function calls, and there is minimal frame setup to do, avoid
calling PyEval_EvalCodeEx().  That function does a lot of
work to handle keyword args and star args, free variables,
generators, etc.  The inlined version simply allocates the frame and
copies the argument values into the frame.

The optimization gets a little help from compile.c which adds a
CO_NOFREE flag to code objects that don't have free variables or cell
variables.  This change allows fast_function() to get into the fast
path with fewer tests.

I measure a couple of percent speedup in pystone with this change, but
there's surely more that can be done.
2003-02-05 23:13:00 +00:00
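A rough sketch of the inlined fast path (simplified; co, n, nk, tstate, globals and stack_pointer stand for the values the real fast_function() works with, and the real code also checks that no default arguments are needed):

    /* Only taken when the frame needs no special setup: exact positional
       argument count, no keyword arguments, and a code object flagged
       CO_NOFREE (no free or cell variables) that is not a generator. */
    if (co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE) &&
        co->co_argcount == n && nk == 0) {
        PyFrameObject *f;
        PyObject **fastlocals, **stack;
        PyObject *retval;
        int i;

        f = PyFrame_New(tstate, co, globals, NULL);
        if (f == NULL)
            return NULL;

        /* Copy the arguments, already sitting on the caller's value
           stack, directly into the new frame's fast-locals array. */
        fastlocals = f->f_localsplus;
        stack = stack_pointer - n;   /* stack_pointer: caller's stack top */
        for (i = 0; i < n; i++) {
            Py_INCREF(*stack);
            fastlocals[i] = *stack++;
        }
        retval = eval_frame(f);      /* ceval.c's internal frame evaluator */
        Py_DECREF(f);
        return retval;
    }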
Raymond Hettinger 4bad9ba282 SF patch #670367: Micro-optimizations for ceval.c
Make the code slightly shorter, faster, and easier to
read.

* Eliminate unused DUP_TOPX code for x==1.
compile.c always generates DUP_TOP instead.

* Since only two cases remain for DUP_TOPX, replace
the switch-case with if-elseif.

* The in-lined integer compare does a CheckExact on
both arguments. Since the second is a little more
likely to fail, test it first.

* The switch-case for IS/IS_NOT and IN/NOT_IN can
separate the regular and inverted cases with no
additional work.  For all four paths, this saves a test and a
jump.
2003-01-19 05:08:13 +00:00
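The second bullet, roughly as it reads after the change (duplicate the top two or three stack entries; any other argument is treated as corrupt bytecode):

    case DUP_TOPX:
        if (oparg == 2) {
            x = TOP();
            Py_INCREF(x);
            w = SECOND();
            Py_INCREF(w);
            STACKADJ(2);
            SET_TOP(x);
            SET_SECOND(w);
            continue;
        }
        else if (oparg == 3) {
            x = TOP();
            Py_INCREF(x);
            w = SECOND();
            Py_INCREF(w);
            v = THIRD();
            Py_INCREF(v);
            STACKADJ(3);
            SET_TOP(x);
            SET_SECOND(w);
            SET_THIRD(v);
            continue;
        }
        Py_FatalError("invalid argument to DUP_TOPX"
                      " (bytecode corruption?)");
        break;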