PyEval_EvalCodeEx(): increment tstate->recursion_depth around the
decref of the frame, because the C stack for this call is still in
use and the decref can lead to __del__ methods getting called.
While this gives tstate->recursion_depth a value proportional to the
depth of the C stack (instead of a small constant no matter how
deeply __del__s recurse), it's not enough to stop the reported crash
when using the default recursion limit on Windows.
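A minimal sketch of the pattern described above, using the ceval.c
names mentioned in the log (the shape of the change, not the literal
diff):

    /* The decref can run arbitrary __del__ methods on the same C
       stack, so charge that work against the recursion limit. */
    ++tstate->recursion_depth;
    Py_DECREF(f);
    --tstate->recursion_depth;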
Bugfix candidate.
tb_displayline(): the sprintf format was choking off the file name, but
used plain %s for the function name (which can be arbitrarily long).
Limit both to 500 chars max.
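The fix boils down to bounding both conversions with a precision; a
sketch (the buffer name and exact format string are assumed, not
quoted from traceback.c):

    char linebuf[2000];
    /* %.500s caps each string at 500 bytes, so long file or function
       names can no longer overrun the fixed-size buffer. */
    sprintf(linebuf, "  File \"%.500s\", line %d, in %.500s\n",
            filename, lineno, name);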
uninitialized memory reads reported in bug #478001.
Note that this doesn't address the following larger issues:
- Error conditions are not documented for PyOS_*sig() in the C API.
- Nothing that calls PyOS_*sig() in the core interpreter and
extension modules actually /checks/ the return value of the call.
Fixing those is left as an exercise for a later day.
This patch boosts performance for comparing identical string objects
by some 20% on my machine while not causing any noticeable slowdown
for other operations (according to tests done with pybench).
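The win presumably comes from an identity test before the character
data is compared; a hedged sketch (full_compare is a hypothetical
stand-in for the existing comparison path):

    static int
    string_compare(PyStringObject *a, PyStringObject *b)
    {
        if (a == b)
            return 0;    /* same object: equal, skip the memcmp */
        return full_compare(a, b);   /* usual length/memcmp path */
    }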
routines. As of 10.1 using Carbon will crash Python if no window server is
available (ssh connection, console mode, MacOSX Server). This fixes bug
#466907.
A result of this mod is that the default 8-bit encoding on OSX is now ASCII,
for the time being. Also, the extension modules that need the Carbon
framework now explicitly include it in setup.py.
+ Squash another potential buffer overrun.
+ Simplify the keyword-arg loop by decrementing the count of keywords
remaining instead of incrementing Yet Another Variable; also break
out early if the number of keyword args remaining hits 0.
Since I hit the function's closing curly brace with this patch, that's
enough of this for now <wink>.
The "need" for this was probably removed by an earlier patch that stopped
the loop right before it from passing NULL to a dict lookup routine.
I still haven't convinced myself that the next loop is correct, so am
leaving the next mysterious PyErr_Clear() call in for now.
+ Generally test nkeywords against 0 instead of keywords against NULL
(saves a little work if an empty keywords dict is passed, and is
conceptually more on-target regardless).
+ When a call erroneously specifies a keyword argument both by position
and by keyword name:
- It was easy to provoke this routine into an internal buffer overrun
by using a long argument name. Now uses PyErr_Format instead (which
computes a safe buffer size); see the sketch after this list.
- Improved the error msg.
+ Got rid of now-redundant dict typecheck.
+ Renamed nkwds to nkwlist. Now all the "counting" variables have names
related to the things they're counting in an obvious way.
+ Renamed argslen to nargs.
+ Renamed kwlen to nkeywords. This one was especially confusing because
kwlen wasn't the length of the kwlist argument, but of the keywords
argument.
+ Removed now-redundant tuple typecheck.
+ Renamed "tplen" local to "argslen" (it's the length of the "args"
argument; I suppose "tp" was for "Tim Peters should rename me
someday <wink>).
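For the position-and-name error above, the safe pattern is to let
PyErr_Format size the buffer; a sketch (the message wording and the
kwlist indexing are assumed):

    /* PyErr_Format computes the buffer size it needs, so an
       arbitrarily long keyword name can't overrun anything. */
    PyErr_Format(PyExc_TypeError,
                 "keyword parameter '%s' was given by position "
                 "and by name", kwlist[i]);
    return 0;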
introduced this bug just a little while ago, when *adding* internal error
checks).
vgetargskeywords: Rewrote the section that crawls over the format string.
+ Added block comment so it won't take the next person 15 minutes to
reverse-engineer what it's doing.
+ Lined up the "else" clauses.
+ Rearranged the ifs in decreasing order of likelihood (for speed).
and raise an error if they're insane.
vgetargskeywords: the same, except that since this is an internal routine,
just assert that the arguments are sane.
the kwlist vector whenever there was a mix of positional and keyword
arguments, and the number of positional arguments exceeded the length
of the kwlist vector. If there was just one more positional arg than
keyword, the kwlist-terminating NULL got passed to PyMapping_HasKeyString,
which set an internal error that vgetargskeywords() then squashed (but
it's impossible to say whether it knew it was masking an error). If
there was more than one extra positional argument, it went on to pass random trash
to PyMapping_HasKeyString, which is why the example at the start
happened to kill the process.
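A sketch of the kind of sanity check that prevents the overrun
(variable names hypothetical; the real fix validates the counts
before the loop runs):

    /* Never index kwlist past its NULL terminator. */
    int len = 0;
    while (kwlist[len] != NULL)
        len++;
    if (nargs > len) {
        PyErr_SetString(PyExc_TypeError,
                        "more positional arguments than format allows");
        return 0;
    }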
Pure bugfix candidate.
to call the corresponding methods. This is not a performance improvement
since the times are still swamped by disk I/O, but cleans up the code just
a little.
In Include/, marshal.h declares both PyMarshal_ReadLongFromFile() and
PyMarshal_ReadShortFromFile(), but the second is missing from marshal.c.
[Shouldn't the return type be declared as 'short' instead of 'int'?
But 'int' is what was in marshal.h all those years... --Guido]
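The missing routine is a thin wrapper over marshal.c's internal
reader; a sketch of what was presumably added (RFILE and r_short()
are marshal.c internals):

    int
    PyMarshal_ReadShortFromFile(FILE *fp)
    {
        RFILE rf;
        rf.fp = fp;
        return r_short(&rf);    /* 'int' return, matching marshal.h */
    }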
This fixes the behavior reported by SF bug #404545, where a file
x.y.py could be imported by the statement "import x.y" when there's a
frozen package x (I believe even if x.y also exists as a frozen
module). :-)
Add a test that prevents the __hello__ bytecode from going stale
unnoticed again.
The test also tests the loophole noted in SF bug #404545. This test
will fail right now; I'll check in the fix in a minute.
The symbol table pass didn't have an explicit case for the list_iter
node which is used only for a nested list comprehension. As a result,
the target of the list comprehension was treated as a use instead of
an assignment. The fix is to add a case to symtable_node() to handle
list_iter.
Also, rework and document a couple of the subtler implementation
issues in the symbol table pass. The symtable_node() switch statement
depends on falling through the last several cases, in order to handle
some of the more complicated nodes like atom. Add a comment
explaining the behavior before the first fall through case. Add a
comment /* fall through */ at the end of each such case so that it is
explicitly marked as such.
Move the for_stmt case out of the fall through logic, which simplifies
both for_stmt and default. (The default used the local variable start
to skip the first three nodes of a for_stmt when it fell through.)
Rename the flag argument to symtable_assign() to def_flag and add a
comment explaining its use:
The third argument to symtable_assign() is a flag to be passed to
symtable_add_def() if it is eventually called. The flag is useful
to specify the particular type of assignment that should be
recorded, e.g. an assignment caused by import.
Also minor tweaks to internal routines.
Use PyCF_MASK instead of explicit list of flags.
For the MAKE_CLOSURE opcode, the number of items popped off the stack
depends on both the oparg and the number of free variables for the
code object. Fix the code so it accounts for the free variables.
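A sketch of the corrected accounting (pops/pushes are hypothetical
bookkeeping variables; the real compile.c code differs in detail, but
PyCode_GetNumFree is the standard macro):

    case MAKE_CLOSURE:
        /* Pops the code object, one cell per free variable of that
           code object, and oparg default values; pushes the new
           function.  So the pop count depends on the code object,
           not just on oparg. */
        pops = 1 + PyCode_GetNumFree(co) + oparg;
        pushes = 1;
        break;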
In com_classdef(), record an extra pop to account for the STORE call
after the BUILD_CLASS.
Get rid of some commented out debugging code in com_push() and
com_pop().
Factor string resize logic into helper routine com_check_size().
In com_addbyte(), remove redundant if statement after assert. (They
test the same condition.)
In several routines, use string macros instead of string functions.
This changes Pythread_start_thread() to return the thread ID, or -1
for an error. (It's technically an incompatible API change, but I
doubt anyone calls it.)
When an extension imports another extension in its
initXXX() function, the variable _Py_PackageContext is
prematurely reset to NULL. If the outer extension then
calls Py_InitModule(), the extension is installed in
sys.modules without its package name. The
manifestation of this bug is a "SystemError:
_PyImport_FixupExtension: module <package>.<extension>
not loaded".
To fix this, importdl.c just needs to retain the old
value of _Py_PackageContext and restore it after the
initXXX() method is called. The attached patch does this.
This patch applies to Python 2.1.1 and the current CVS.
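The save/restore is just a few lines around the init call; a sketch
of the importdl.c pattern (local names assumed):

    char *oldcontext;

    /* Keep the package context alive across a nested import, so the
       outer extension's Py_InitModule() still sees it. */
    oldcontext = _Py_PackageContext;
    _Py_PackageContext = packagecontext;
    (*initfunc)();
    _Py_PackageContext = oldcontext;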
"for <var> in <testlist> may no longer be a single test followed by
a comma. This solves SF bug #431886. Note that if the testlist
contains more than one test, a trailing comma is still allowed, for
maximum backward compatibility; but this example is not:
    [(x, y) for x in range(10), for y in range(10)]
                              ^
The fix involved creating a new nonterminal 'testlist_safe' whose
definition doesn't allow the trailing comma if there's only one test:
    testlist_safe: test [(',' test)+ [',']]
This patch changes the logic to:
    if env.var. set and non-empty:
        if env.var. is an integer:
            set flag to that integer
        if flag is zero:  # [actually, <= 0 --GvR]
            set flag to 1
Under this patch, anyone currently using
PYTHONVERBOSE=yes will get the same output as before.
PYTHONVERBNOSE=2 will generate more verbosity than
before.
The only unusual case is that the following three are
still all equivalent:
PYTHONVERBOSE=yespleas
PYTHONVERBOSE=1
PYTHONVERBOSE=0
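In C the logic amounts to something like this sketch (placement and
surrounding code assumed; Py_VerboseFlag is the real flag):

    char *p = getenv("PYTHONVERBOSE");
    if (p != NULL && *p != '\0') {
        int v = atoi(p);   /* "yespleas" and "0" both parse to <= 0 */
        Py_VerboseFlag = (v > 0) ? v : 1;   /* ... and so become 1 */
    }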
This patch updates Python/thread_pthread.h to mask all
signals for any thread created. This will keep all
signals masked for any thread that isn't the initial
thread. For Solaris and Linux, the two platforms I was
able to test it on, it solves bug #465673 (pthreads
need signal protection) and probably will solve bug
#219772 (Interactive InterPreter+ Thread -> core dump
at exit).
It'd be great if this could get some testing on other
platforms, especially HP-UX pre 11.00 and post 11.00,
as I had to make some guesses for the DCE thread case.
AIX is also a concern as I saw some mention of using
sigthreadmask() as a pthread_sigmask() equivalent, but
this patch doesn't use sigthreadmask(). I don't have
access to AIX.
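The masking pattern is the standard one: block everything in the
creating thread, create the child (which inherits the mask), then
restore. A sketch using the plain pthreads calls (the real header may
wrap these in macros for the DCE-thread case; t_bootstrap/boot stand
for the existing helpers there):

    sigset_t oldmask, newmask;

    sigfillset(&newmask);
    pthread_sigmask(SIG_BLOCK, &newmask, &oldmask);   /* mask all */
    status = pthread_create(&th, NULL, t_bootstrap, (void *)&boot);
    pthread_sigmask(SIG_SETMASK, &oldmask, NULL);     /* restore */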
The new profiler event stream includes a "return" event even when an
exception is being propagated, but the machinery that called the profile
hook did not save & restore the exception. In debug mode, the exception
was detected during the execution of the profile callback, which did not
have the proper internal flags set for the exception. Saving & restoring
the exception state solves the problem.
The profiler does not need to know anything about the exception state,
so we no longer call it when an exception is raised. We do, however,
make sure we *always* call the profiler when we exit a frame. This
ensures that timing events are more easily isolated by a profiler and
finally clauses that do a lot of work don't have their time
mis-allocated.
When an exception is propagated out of the frame, the C callback for
the profiler now receives a PyTrace_RETURN event with an arg of NULL;
the Python-level profile hook function will see a 'return' event with
an arg of None. This means that from Python it is impossible for the
profiler to determine if the frame exited with an exception or if it
returned None, but this doesn't matter for profiling. A C-based
profiler could tell the difference, but this doesn't seem important.
ceval.c:eval_frame(): Simplify the code in two places so that the
    profiler is called for every exit from a frame and not for
    exceptions.
sysmodule.c:profile_trampoline(): Make sure we don't expose Python
    code to NULL; use None instead.
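The trampoline guard is a two-line substitution; a sketch:

    /* The C-level arg is NULL for a PyTrace_RETURN event during
       exception propagation; Python-level hooks get None instead. */
    if (arg == NULL)
        arg = Py_None;
    Py_INCREF(arg);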
Unknown whether this fixes it.
- stringobject.c, PyString_FromFormatV: don't assume that va_list is of
a type that can be copied via an initializer.
- errors.c, PyErr_Format: add a va_end() to balance the va_start().
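Both items are standard varargs hygiene; a self-contained sketch
(va_copy is the C99 spelling; 2001-era code would have used memcpy or
__va_copy behind #ifdefs):

    #include <stdarg.h>
    #include <stdio.h>

    static void
    report(const char *format, ...)
    {
        va_list args, copy;

        va_start(args, format);
        /* Don't copy a va_list with an initializer or assignment:
           on some platforms it's an array type. */
        va_copy(copy, args);
        vfprintf(stderr, format, copy);
        va_end(copy);
        va_end(args);   /* the balance PyErr_Format was missing */
    }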
If a new exception occurs while an exception instance is being
created, try harder to make sure there is a traceback. If the
original exception had a traceback associated with it and the new
exception does not, keep the old exception.
Of course, callers to PyErr_NormalizeException() must still be
prepared to have tb set to NULL.
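In code, "prepared" means a NULL check after normalizing; a sketch
using the public API:

    PyObject *exc, *val, *tb;

    PyErr_Fetch(&exc, &val, &tb);
    PyErr_NormalizeException(&exc, &val, &tb);
    if (tb == NULL) {
        /* No traceback, e.g. instantiating the exception failed
           before any Python code ran; don't dereference tb. */
    }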
XXX This isn't an ideal solution, but it's better than no traceback at
all. It occurs if, for example, the exception occurs when the call to
the constructor fails before any Python code is executed. Guido
suggests that if there is Python code that was about to be executed
-- but wasn't, say, because it was called with the wrong number of
arguments -- then we should point at the first line of the code object
anyway.
It's possible for PyErr_NormalizeException() to set the traceback
pointer to NULL. I'm not sure how to provoke this directly from
Python, although it may be possible. The error occurs when an
exception is set using PyErr_SetObject() and another exception occurs
while PyErr_NormalizeException() is creating the exception instance.
XXX As a result of this change, it's possible for an exception to
occur but sys.last_traceback to be left undefined. Not sure if this
is a problem.
popped frame-block. What an embarrassing bug! Especially for Jeremy, since
he accepted the patch :-)
This fixes SF bugs #463359 and #462937, and possibly other, *very* obscure
bugs with very deeply nested loops that continue the loop and then break out
of it or raise an exception.
compatibility, this required changing every declaration of a "struct
memberlist" array that is referenced from a type's tp_members slot to
use PyMemberDef instead; "struct memberlist" is now only used by old
code that still calls
PyMember_Get/Set. The code in PyObject_GenericGetAttr/SetAttr now
calls the new APIs PyMember_GetOne/SetOne, which take a PyMemberDef
argument.
As examples, I added actual docstrings to the attributes of a few
types: file, complex, instance method, super, and xxsubtype.spamlist.
Also converted the symtable to new style getattr.
Renamed the 'readonly' field to 'flags' and defined some new flag
bits: READ_RESTRICTED and WRITE_RESTRICTED, as well as a shortcut
RESTRICTED that means both.
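A member table under the new scheme, with a docstring and one of the
new flag bits (the struct layout, member types, and offsets here are
illustrative):

    #include "Python.h"
    #include "structmember.h"

    typedef struct {
        PyObject_HEAD
        PyObject *name;
        int count;
    } SpamObject;

    static PyMemberDef spam_members[] = {
        {"name", T_OBJECT, offsetof(SpamObject, name),
         READ_RESTRICTED, "the spam's name (read-restricted)"},
        {"count", T_INT, offsetof(SpamObject, count),
         0, "how many spams have been seen"},
        {NULL}      /* sentinel */
    };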
backwards compatibility. When using the class of the first base as
the metaclass, use its __class__ attribute in preference over its
ob_type slot. This ensures that we can still use classic classes as
metaclasses, as shown in the original "Metaclasses" essay. This also
makes all the examples in Demo/metaclasses/ work again (maybe these
should be turned into a test suite?).
parameter for the return string (as unix pathnames are not limited
by the 255 char pstring limit).
Implemented the function for MachO-Python, where it returns unix pathnames.
by bbrox@bbrox.org / lionel.ulmer@free.fr.
This adds a configure check and if all goes well turns on the
PTHREAD_SCOPE_SYSTEM thread attribute for new threads.
This should remove the need to add tiny sleeps at the start of threads
to allow other threads to be scheduled.
Reported by Fredrik Lundh on python-dev.
The conversimple() code that handles Unicode arguments and converts
them to the default encoding now calls converterr() with the original
Unicode argument instead of the NULL returned by the failed encoding
attempt.
com_factor(): when a unary minus is attached to a float or imaginary zero,
don't optimize the UNARY_MINUS opcode away: the const dict can't
distinguish between +0.0 and -0.0, so it ended up treating both like the
first one added to it. Optimizing UNARY_PLUS away isn't a problem.
(BTW, I already uploaded the 2.2a3 Windows installer, and this isn't
important enough to delay the release.)
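A sketch of the test (hypothetical helper shape; com_factor() works
on the parse tree rather than on objects like this):

    /* Decide whether "-v" may be constant-folded: the co_consts
       dict keys on equality, and +0.0 == -0.0 compare equal, so a
       float or complex zero must keep its UNARY_MINUS opcode. */
    static int
    can_fold_negation(PyObject *v)
    {
        if (PyFloat_Check(v) && PyFloat_AS_DOUBLE(v) == 0.0)
            return 0;
        if (PyComplex_Check(v) && PyComplex_ImagAsDouble(v) == 0.0)
            return 0;
        return 1;
    }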
of PyMapping_Keys because we know we have a real dict. Tolerate that
objects may have an attr named "__dict__" that's not a dict (Py_None
popped up during testing).
test_descr.py, test_dir(): Test the new classic-class behavior; beef up
the new-style class test similarly.
test_pyclbr.py, checkModule(): dir(C) is no longer a synonym for
C.__dict__.keys() when C is a classic class (looks like the same thing
that burned distutils! -- should it be *made* a synonym again? Then it
would be inconsistent with new-style class behavior.).
bag. It's clearly wrong for classic classes, at heart because a classic
class doesn't have a __class__ attribute, and I'm unclear on whether
that's a feature or a bug. I'll repair this once I find out (in the
meantime, dir() applied to classic classes won't find the base classes,
while dir() applied to a classic-class instance *will* find the base
classes but not *their* base classes).
Please give the new dir() a try and see whether you love it or hate it.
The new dir([]) behavior is something I could come to love. Here's
something to hate:
>>> class C:
... pass
...
>>> c = C()
>>> dir(c)
['__doc__', '__module__']
>>>
The idea that an instance has a __doc__ attribute is jarring (of course
it's really c.__class__.__doc__ == C.__doc__; likewise for __module__).
OTOH, the code already has too many special cases, and dir(x) doesn't
have a compelling or clear purpose when x isn't a module.
PEP 238. Changes:
- add a new flag variable Py_DivisionWarningFlag, declared in
pydebug.h, defined in object.c, set in main.c, and used in
{int,long,float,complex}object.c. When this flag is set, the
classic division operator issues a DeprecationWarning message (see
the sketch after this list).
- add a new API PyRun_SimpleStringFlags() to match
PyRun_SimpleString(). The main() function calls this so that
commands run with -c can also benefit from -Dnew.
- While I was at it, I changed the usage message in main() somewhat:
alphabetized the options, split it in *four* parts to fit in under
512 bytes (not that I still believe this is necessary -- doc strings
elsewhere are much longer), and perhaps most visibly, don't display
the full list of options on each command line error. Instead, the
full list is only displayed when -h is used, and otherwise a brief
reminder of -h is displayed. When -h is used, write to stdout so
that you can do `python -h | more'.
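The per-type hookup from the first bullet looks roughly like this
sketch, placed at the top of a classic division slot (intobject.c and
friends; the exact message is assumed):

    if (Py_DivisionWarningFlag &&
        PyErr_Warn(PyExc_DeprecationWarning,
                   "classic int division") < 0)
        return NULL;    /* warning became an exception (-Werror) */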
Notes:
- I don't want to use the -W option to control whether the classic
division warning is issued or not, because the machinery to decide
whether to display the warning or not is very expensive (it involves
calling into the warnings.py module). You can use -Werror to turn
the warnings into exceptions though.
- The -Dnew option doesn't select future division for all of the
program -- only for the __main__ module. I don't know if I'll ever
change this -- it would require changes to the .pyc file magic
number to do it right, and a more global notion of compiler flags.
- You can usefully combine -Dwarn and -Dnew: this gives the __main__
module new division, and warns about classic division everywhere
else.