happen in 2.3, but nobody noticed that it was still being generated (the
warning was disabled by default). OverflowWarning and
PyExc_OverflowWarning should be removed for 2.5; notes saying so have been
left all over.
* Perform the code length check earlier.
* Eliminate the extra PyMem_Free() upon hitting an EXTENDED_ARG.
* Assert that the NOP count used in jump retargeting matches the NOPs
eliminated in the final step.
* Add an XXX note to indicate that more work needs to be done to
handle the line number table (lnotab) with intervals > 255.
* Make a pass to eliminate NOPs, producing code that is more readable,
more compact, and a tiny bit faster. This makes the peepholer more flexible
in the scope of allowable transformations.
* With Guido's okay, bumped up the magic number so that this patch gets
widely exercised before the alpha goes out.
files and not re-optimized upon import. Saves a bit of startup time while
still remaining decoupled from the rest of the compiler.
As a side benefit, handcoded bytecode is not run through the optimizer
when new code objects are created. Hopefully, a handcoder has already
created exactly what they want to have run.
(Idea suggested by Armin Rigo and Michael Hudson. Initially avoided
because of worries about compiler coupling; however, only the nexus
point needed to be moved so there won't be a conflict when the AST
branch is loaded.)
[ 1009560 ] Fix @decorator evaluation order
From the description:
Changes in this patch:
- Change Grammar/Grammar to require newlines between adjacent decorators.
- Fix the order of evaluation of decorators in the C (compile.c) and Python
  (Lib/compiler/pycodegen.py) compilers.
- Add a better order-of-evaluation check to test_decorators.py
  (test_eval_order).
- Update the decorator documentation in the reference manual (improve the
  description of evaluation order and update the syntax description).
and the comment:
Used Brett's evaluation order (see
http://mail.python.org/pipermail/python-dev/2004-August/047835.html)
(I'm checking this in for Anthony who was having problems getting SF to
talk to him)
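A small sketch of the resulting evaluation order (the deco() helper and the
order list are illustrative only, not part of the patch): decorator
expressions are evaluated top to bottom, and the resulting decorators are
applied bottom up.

    order = []

    def deco(tag):
        order.append(tag)                     # expression evaluated, top to bottom
        def wrap(func):
            order.append(tag + "-applied")    # decorator applied, bottom up
            return func
        return wrap

    @deco("first")
    @deco("second")
    def f():
        pass

    assert order == ["first", "second", "second-applied", "first-applied"]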
because GNU/k*BSD uses GNU pth to provide pthreads, but it will also happen on any
system that does the same.
Python fails to build because it doesn't detect GNU pth in pthread
emulation. See the C comments in the patch for details.
patch taken from http://bugs.debian.org/264315
[ 1005248 ] new.code() not cleanly checking its arguments
using the result of new.code() can still destroy the sun, but merely
calling the function shouldn't any more.
I also rewrote the existing tests of new.code() to use vastly less
bogus arguments, and added tests for the previous insane behaviours.
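A hedged sketch of the new behaviour (Python 2's new module assumed; the
argument values here are deliberately bogus): with the stricter checking,
arguments like these should fail with a clean TypeError rather than
producing a code object that can crash the interpreter.

    import new

    try:
        # the bytecode argument is an int instead of a string -- bogus on purpose
        new.code(0, 0, 0, 0, 12345, (), (), (), "<fake>", "fake", 1, "")
    except TypeError:
        print("bad arguments to new.code() are rejected cleanly")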
hack: it would resize *interned* strings in-place! This occurred because
their reference counts do not have their expected value -- stringobject.c
hacks them. Mea culpa.
interning were not clear here -- a subclass could be mutable, for
example -- and had bugs. Explicitly interning a subclass of string
via intern() will raise a TypeError. Internal operations that attempt
to intern a string subclass will have no effect.
Added a few tests to test_builtin that include the old buggy code and
verify that calls like PyObject_SetAttr() don't fail. Perhaps these
tests should have gone in test_string.
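A minimal illustration of the new rule (Python 2's builtin intern()
assumed):

    class MyStr(str):
        pass

    try:
        intern(MyStr("example"))       # explicit interning of a subclass
    except TypeError:
        print("intern() refuses str subclasses")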
[ 991812 ] PyArg_ParseTuple can miss errors with warnings as exceptions
as suggested in the report.
This is definitely a 2.3 candidate (as are most of the checkins I've
made in the last month...)
have differing refcount semantics. If anyone sees a prettier way to
achieve the same ends, then please go for it.
I think this is the first time I've ever used Py_XINCREF.
* Fixes an incorrect variable in a PyDict_CheckExact.
* Allow general mapping locals arguments for the execfile() function
and exec statement.
* Add tests.
PyImport_ReloadModule(): restore the module to sys.modules in error cases.
load_package(): semantic-neutral refactoring from an earlier stab at
this patch; giving it a common error exit made the code
easier to follow, so retaining that part.
_RemoveModule(): new little utility to delete a key from sys.modules.
__oct__, and __hex__. Raise TypeError if an invalid type is
returned. Note that PyNumber_Int and PyNumber_Long can still
return ints or longs. Fixes SF bug #966618.
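A hedged sketch of the check described above (Python 2's hex()/__hex__
protocol assumed):

    class Broken(object):
        def __hex__(self):
            return 42                  # invalid: __hex__ must return a string

    try:
        hex(Broken())
    except TypeError:
        print("non-string result from __hex__ is rejected")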
The preceding case statement was missing a terminating "break" stmt,
so fell into the new code by mistake. This caused uncaught out-of-bounds
accesses to the "names" tuple, leading to a variety of insane behaviors.
expected entrypoint. The unlinking will crash the application if the
module contained ObjC code. The price of this is small: a little wasted
memory, and only in a case that isn't expected to occur often.
[ 984722 ] Py_BuildValue loses reference counts on error
I'm ever-so-slightly uneasy at the amount of work this can do with an
exception pending, but I don't think that this can result in anything
more serious than a strange error message.
[ 960406 ] unblock signals in threads
although the changes do not correspond exactly to any patch attached to
that report.
Non-main threads no longer have all signals masked.
A different interface to readline is used.
The handling of signals inside calls to PyOS_Readline is now rather
different.
These changes are all a bit scary! Review and cross-platform testing
much appreciated.
table' of the dll, to make sure that the dll really was built for the
correct Python version. It does this by looking for an entry
'pythonXY.dll' (X.Y is the Python version number).
The code now checks the size of the dll's import table before reading
entries from it. Before this patch, the code crashed trying to read
the import table when the size was zero (as in Win2k's wmi.dll, for
example).
Look for imports of 'pythonXY_d.dll' in a debug build instead of
'pythonXY.dll'.
Fixes SF 951851: Crash when reading "import table" of certain windows dlls.
Already backported to the 2.3 branch.
The builtin eval() function now accepts any mapping for the locals argument.
Time-sensitive steps are guarded by PyDict_CheckExact() to keep from slowing
down the normal case. My timings show no measurable impact.
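A quick sketch of the feature (the Namespace class is illustrative only):
any mapping-like object can now serve as the locals argument, and its
__getitem__ is used for name lookup.

    class Namespace(dict):             # simplest possible "mapping": a dict subclass
        def __getitem__(self, key):
            return 42                  # every name resolves to 42

    print(eval("spam + eggs", {}, Namespace()))    # -> 84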
Add a more informative message for the common user mistake of subclassing
from a module name rather than another class (i.e. random instead of
random.random).
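The mistake in question, sketched (the exact wording of the message is
whatever the interpreter emits):

    import random

    try:
        class Dice(random):            # oops: "random" is the module, not a class
            pass
    except TypeError:
        print("the TypeError now hints that a module was used as a base class")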
(Code contributed by Jiwon Seo.)
The documentation portion of the patch is being re-worked and will be
checked-in soon. Likewise, PEP 289 will be updated to reflect Guido's
rationale for the design decisions on binding behavior (as described in
his patch comments and in discussions on python-dev).
The test file, test_genexps.py, is written in doctest format and is
meant to exercise all aspects of the patch. Further additions are
welcome from everyone. Please stress test this new feature as much as
possible before the alpha release.
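A small doctest-style example of the feature and of the binding behaviour
referred to above: per PEP 289, the outermost iterable is evaluated
immediately, while the rest of the expression is evaluated lazily.

    data = [1, 2, 3, 4]
    squares = (x * x for x in data)    # nothing is consumed yet
    data = []                          # rebinding the name has no effect; the
                                       # original list was captured eagerly
    print(sum(squares))                # -> 30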
pre-increment forms to post-increment forms. Post-incrementing
also eliminates the need for negative array indices for oparg fetches.
* In exception handling code, check for class-based exceptions before
the older string-based exceptions.
BINARY_SUBSCR:
* invert test for normal case fall through
* eliminate err handling code by jumping to slow_case
LOAD_LOCALS:
* invert test for normal case fall through
* continue instead of break for the non-error case
STORE_NAME and DELETE_NAME:
* invert test for normal case fall through
LOAD_NAME:
* continue instead of break for the non-error case
DELETE_FAST:
* invert test for normal case fall through
LOAD_DEREF:
* invert test for normal case fall through
* continue instead of break for the non-error case
tests of "why" against WHY_YIELD became useless. This patch removes them,
but assert()s that why != WHY_YIELD everywhere such a test was removed.
The test suite ran fine under a debug build (i.e., the asserts never
triggered).
* Defer error handling for wrong number of arguments to the
unpack_iterable() function. Cuts the code size almost in half.
* Replace function calls to PyList_Size() and PyTuple_Size() with
their smaller and faster macro counterparts.
* Move the constant structure references outside of the inner loops.
(Contributed by Andrew I MacIntyre.)
disables opcode prediction when dynamic execution
profiling is in effect, so the profiling counters at
the top of the main interpreter loop in eval_frame()
are updated for each opcode.
Simplified version of Neal Norwitz's patch which adds gotos for
opcodes that set "why". This skips a number of tests where the
outcome of the tests is known in advance.
comments about why both calls to cyclic gc here can cause problems.
I'll backport to 2.3 maint. Since the calls were introduced in 2.3,
that will be the end of it.
and left shifts. (Thanks to Kalle Svensson for SF patch 849227.)
This addresses most of the remaining semantic changes promised by
PEP 237, except for repr() of a long, which still shows the trailing
'L'. The PEP appears to promise warnings for operations that
changed semantics compared to Python 2.3, but this is not
implemented; we've suffered through enough warnings related to
hex/oct literals and I think it's best to be silent now.
* Install the unittests, docs, newsitem, include file, and makefile update.
* Exercise the new functions wherever sets.py was being used.
Includes the docs for libfuncs.tex. Separate docs for the types are
forthcoming.
The embed2.diff patch solves the user's problem by exporting the missing
symbols from the Python core so Python can be embedded in another Cygwin
application (well, at least vim).
Cygwin's pthread_sigmask() implementation appears to be buggy. This
patch works around this problem by using sigprocmask() instead.
This patch is implemented in a general way so it could be used by other
platforms too. If this approach is deemed too risky, then I can work up
a patch that just hacks Python/thread_pthread.h for Cygwin.
Note that I tested this patch against 2.3c1 under Red Hat Linux 8.0 too.
[snip]
And finally, I need someone to regenerate pyconfig.h.in and configure
with the same versions of the autotools that are normally used by
Python.
Neal kindly regenerated pyconfig.h.in and configure for me.
If the initial import of warnings fails, clear the error. When the module
is actually needed, if the original import failed, see if it has managed
to find its way to sys.modules yet and if so, remember it.
Fixes for three related bugs, including errors that caused a script to
be ignored without printing an error message. The key problem was a bad
interaction between syntax warnings and syntax errors. If an
exception was already set when a warning was issued, the warning could
clobber the exception.
The PyErr_Occurred() check in issue_warning() isn't entirely
satisfying (the caller should know whether there was already an
error), but a better solution isn't immediately obvious.
Bug fix candidate.
behavior, creating many threads very quickly. A long debugging session
revealed that the Windows implementation of PyThread_start_new_thread()
was choked with "laziness" errors:
1. It checked MS _beginthread() for a failure return, but when that
happened it returned heap trash as the function result, instead of
an id of -1 (the proper error-return value).
2. It didn't consider that the Win32 CreateSemaphore() can fail.
3. When creating a great many threads very quickly, it's quite possible
that any particular bootstrap call can take virtually any amount of
time to return. But the code waited for a maximum of 5 seconds, and
didn't check to see whether the semaphore it was waiting for got
signaled. If it in fact timed out, the function could again return
heap trash as the function result. This is actually what confused
the test program, as the heap trash usually turned out to be 0, and
then multiple threads all got id 0 simultaneously, confusing the
hell out of threading.py's _active dict (mapping id to thread
object). A variety of baffling behaviors followed from that.
WRT #1 and #2, error returns are checked now, and "thread.error: can't
start new thread" gets raised now if a new thread (or new semaphore)
can't be created. WRT #3, we now wait for the semaphore without a
timeout.
Also removed useless local variables, folded long lines, and changed callobj
to a stack auto (it was going through malloc/free instead, for no discernible
reason).
Bugfix candidate.
A new API (only accessible from C) to interrupt a thread by sending it
an exception. This is not always effective, but might help some people.
Requested by Just van Rossum and Alex Martelli. It is intentional
that you have to write your own C extension to call it from Python.
Docs will have to wait.
It depended on the previously removed basic block checker to
prevent a jump into the middle of the transformed block.
Clears SF 757818: tuple assignment -- SystemError: unknown opcode
The compiler was resetting the list comprehension tmpname counter for each
function, but the symtable was using the same counter for the entire module.
Repaired by moving tmpname into the symtable entry.
Bugfix candidate.
[ 708901 ] Lineno calculation sometimes broken
A one line patch to compile.c and a rather-more-than-one-line patch
to test_dis. Hey ho.
Possibly a backport candidate -- tho' lnotab is less used in 2.2...
* Can now test for basic blocks.
* Optimize inverted comparisons.
* Optimize unary_not followed by a conditional jump.
* Added a new opcode, NOP, to keep code size constant.
* Applied NOP to previous transformations where appropriate.
Note, the NOP would not be necessary if other functions were
added to re-target jump addresses and update the co_lnotab mapping.
That would yield slightly faster and cleaner bytecode at the
expense of optimizer simplicity and of keeping it decoupled
from the line-numbering structure.
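One way to eyeball these transformations (no output is asserted here, since
the exact bytecode depends on the build): disassemble code that uses "not"
in front of a conditional jump and look for the inverted jump and the NOP
padding.

    import dis

    def check(x):
        if not x:
            return "empty"
        return "non-empty"

    dis.dis(check)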
particular leaving the traceback object (and everything reachable
from it) alive throughout shutdown. The patch is mostly from Guido.
Bugfix candidate.
new line.
New private API function _Py_PrintReferenceAddresses(): Prints only the
addresses and refcnts of the live objects. This is always safe to call,
because it has no dependence on Python's C API.
Py_Finalize(): If envar PYTHONDUMPREFS is set, call (the new)
_Py_PrintReferenceAddresses() right before dumping final pymalloc stats.
We can't print the reprs of the objects here because too much of the
interpreter has been shut down. You need to correlate the addresses
displayed here with the object reprs printed by the earlier
PYTHONDUMPREFS call to _Py_PrintReferences().
New functions:
unsigned long PyInt_AsUnsignedLongMask(PyObject *);
unsigned PY_LONG_LONG PyInt_AsUnsignedLongLongMask(PyObject *);
unsigned long PyLong_AsUnsignedLongMask(PyObject *);
unsigned PY_LONG_LONG PyLong_AsUnsignedLongLongMask(PyObject *);
New and changed format codes:
b    unsigned char        0..UCHAR_MAX
B    unsigned char        none **
h    unsigned short       0..USHRT_MAX
H    unsigned short       none **
i    int                  INT_MIN..INT_MAX
I *  unsigned int         0..UINT_MAX
l    long                 LONG_MIN..LONG_MAX
k *  unsigned long        none
L    long long            LLONG_MIN..LLONG_MAX
K *  unsigned long long   none
Notes:
* New format codes.
** Changed from previous "range-and-a-half" to "none"; the
range-and-a-half checking wasn't particularly useful.
New test test_getargs2.py, to verify all this.
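Conceptually, the "Mask" converters (and the format codes listed above with
range "none") do no range checking at all; they just keep the low bits. A
plain-Python sketch of that behaviour, with a hypothetical helper and a
32-bit unsigned long assumed:

    def as_unsigned_long_mask(value, bits=32):
        return value & ((1 << bits) - 1)

    print(as_unsigned_long_mask(-1))          # -> 4294967295
    print(as_unsigned_long_mask(1 << 40))     # high bits are simply dropped -> 0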
PYTHONDUMPREFS output after most teardown. Attempts to use
PYTHONDUMPREFS with the Zope3 test suite died with Py_FatalError(),
since _Py_PrintReferences() can end up executing arbitrary Python code
(for objects that override __repr__), and that requires an intact
interpreter.
even farther down, to just before the call to
_PyObject_DebugMallocStats(). This required the following changes:
- pystate.c, PyThreadState_GetDict(): changed not to raise an
exception or issue a fatal error when no current thread state is
available, but simply return NULL without raising an exception
(ever).
- object.c, Py_ReprEnter(): when PyThreadState_GetDict() returns NULL,
don't raise an exception but return 0. This means that when
printing a container that's recursive, printing will go on and on
and on. But that shouldn't happen in the case we care about (see
first bullet).
- Updated Misc/NEWS and Doc/api/init.tex to reflect changes to
PyThreadState_GetDict() definition.
the code erroneously decrefed the istep argument in an error case. This
caused a co_consts tuple to lose a float constant prematurely, which
eventually caused gc to try executing static data in floatobject.c (don't
ask <wink>). So reworked this extensively to ensure refcount correctness.
- range() now works even if the arguments are longs with magnitude
larger than sys.maxint, as long as the total length of the sequence
fits. E.g., range(2**100, 2**101, 2**100) is the following list:
[1267650600228229401496703205376L]. (SF patch #707427.)
return. Setting an exception can mess with the exception state, and
continuing is definitely wrong (since type is dereferenced later on).
Some code that calls this seems to be prepared for a NULL exception
type, so let's be safe rather than sorry and simply assume there's
nothing to normalize in this case.
Adds a single function to improve generated bytecode. Has a single line
attachment point, so it is completely de-coupled from both the compiler
and ceval.c.
Makes three simple transforms that do not require a basic block analysis
or re-ordering of code. Gives improved timings on pystone, pybench,
and any code using either "while 1" or "x,y=y,x".
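A quick way to see one of the transforms in action (exact output varies by
build, so none is asserted): with the optimizer in place, the tuple
pack/unpack generated for "x, y = y, x" collapses into a ROT_TWO.

    import dis

    def swap(x, y):
        x, y = y, x
        return x, y

    dis.dis(swap)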
Arranged that all the objects exposed by __builtin__ appear in the list
of all objects. I basically peed away two days tracking down a mystery
leak in sys.gettotalrefcount() in a ZODB app (== tons of code), because
the object leaking the references didn't appear in the sys.getobjects(0)
list. The object happened to be False. Now False is in the list, along
with other popular & previously missing leak candidates (like None).
Alas, we still don't have a choke point covering *all* Python objects,
so the list of all objects may still be incomplete.
variables to store internal data. As a result, any attempts to use the
unicode system with multiple active interpreters, or successive
interpreter executions, would fail.
Now that information is stored into members of the PyInterpreterState
structure.
Added two predictions:
GET_ITER --> FOR_ITER
FOR_ITER --> STORE_FAST or UNPACK_SEQUENCE
Improves timings on pybench and timeit.py. Pystone results are neutral.
Applied to common cases:
COMPARE_OP is often followed by a JUMP_IF.
JUMP_IF is usually followed by POP_TOP.
Shows improved timings on PyStone, PyBench, and specific tests
using timeit.py:
python timeit.py -s "x=1" "if x==1: pass"
python timeit.py -s "x=1" "if x==2: pass"
python timeit.py -s "x=1" "if x: pass"
python timeit.py -s "x=100" "while x!=1: x-=1"
Potential future candidates:
GET_ITER predicts FOR_ITER
FOR_ITER predicts STORE_FAST or UNPACK_SEQUENCE
Also, applied missing goto fast_next_opcode to DUP_TOPX.
My previous patches should have used fast_next_opcode
in a few places instead of continue.
Also, applied one PyInt_AS_LONG macro in a place where
the type had already been checked.
rarely needed, but can sometimes be useful to release objects
referenced by the traceback held in sys.exc_info()[2]. (SF patch
#693195.) Thanks to Kevin Jacobs!
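Assuming the function referred to here is sys.exc_clear(), a minimal usage
sketch (Python 2 only):

    import sys

    def handler():
        try:
            open("/nonexistent-path")
        except IOError:
            pass
        # the traceback (and every frame it references) is still reachable
        # through sys.exc_info() at this point; exc_clear() drops it explicitly
        sys.exc_clear()

    handler()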
import warnings.py _after_ site.py has run. This ensures that site.py
is again the first .py to be imported, giving it back full control over
sys.path.
with an indented code block but no newline would raise SyntaxError.
This would have been a four-line change in parsetok.c... Except
codeop.py depends on this behavior, so a compilation flag had to be
invented that causes the tokenizer to revert to the old behavior;
this required extra changes to 2 .h files, 2 .c files, and 2 .py
files. (Fixes SF bug #501622.)
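The case being fixed, sketched (Python 2 exec-statement form used): source
that ends inside an indented block, with no trailing newline, now compiles
instead of raising SyntaxError.

    code = compile("if 1:\n    x = 42", "<example>", "exec")   # note: no final newline
    ns = {}
    exec code in ns
    print(ns["x"])      # -> 42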
mostly from SF patch #683257, but I had to change unlock_import() to
return an error value to avoid a fatal error.
Should this be backported? The patch requested this, but it's a new
feature.
"Unsigned" (i.e., positive-looking, but really negative) hex/oct
constants with a leading minus sign are once again properly negated.
The micro-optimization for negated numeric constants did the wrong
thing for such hex/oct constants. The patch avoids the optimization
for all hex/oct constants.
This needs to be backported to Python 2.2!
object is not a real str or unicode but an instance
of a subclass, construct the output via looping
over __getitem__. This guarantees that the result
is the same for function==None and function==lambda x:x
This doesn't happen for tuples, because filtertuple()
uses PyTuple_GetItem().
(This was discussed on SF bug #665835).
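A sketch of the behaviour described above (Python 2 filter() semantics
assumed; the Doubler class is illustrative only): because the result is
built by indexing, an overridden __getitem__ is honoured whether or not a
filter function is supplied.

    class Doubler(str):
        def __getitem__(self, index):
            return 2 * str.__getitem__(self, index)

    s = Doubler("abc")
    print(filter(None, s))              # built via __getitem__ -> "aabbcc"
    print(filter(lambda ch: True, s))   # same result, by construction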
-DCALL_PROFILE: Count the number of function calls executed.
When this symbol is defined, the ceval mainloop and helper functions
count the number of function calls made. It keeps detailed statistics
about what kind of object was called and whether the call hit any of
the special fast paths in the code.
Optimization:
When we take the fast_function() path, which seems to be taken for
most function calls, and there is minimal frame setup to do, avoid
calling PyEval_EvalCodeEx(). That function does a lot of
work to handle keyword args and star args, free variables,
generators, etc. The inlined version simply allocates the frame and
copies the argument values into the frame.
The optimization gets a little help from compile.c which adds a
CO_NOFREE flag to code objects that don't have free variables or cell
variables. This change allows fast_function() to get into the fast
path with fewer tests.
I measure a couple of percent speedup in pystone with this change, but
there's surely more that can be done.