- Reenable modules on x64 that had been disabled aeons ago for Itanium.
- Cleared up confusion about compilers for 64-bit Windows. There are only Itanium and x64. Added macros MS_WINI64 and MS_WINX64 for those rare cases where it matters, such as the disabling of modules above.
- Set the target platform (_WIN32_WINNT and WINVER) to 0x0501 (XP) for x64, and 0x0400 (NT 4.0) otherwise, which are the targeted minimum platforms (see the sketch below).
- Fixed thread_nt.h. The emulated InterlockedCompareExchange function didn't work on x64, probably due to the lack of a "volatile" specifier. Anyway, Win95 is no longer a target platform.
- The itertools module used the wrong constant to check for overflow in count().
- PyInt_AsSsize_t couldn't deal with an AttributeError when accessing the __long__ member.
- PyLong_FromSsize_t() incorrectly specified that the operand was unsigned.
With these changes, the x64 build passes the test suite for those modules that are present.
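
A minimal sketch of what the new platform macros and minimum-version defines amount to, assuming the usual MSVC predefined symbols (_M_IA64, _M_X64, _M_AMD64); this is not the literal PC/pyconfig.h text:

    /* Hypothetical sketch, not the actual PC/pyconfig.h contents. */
    #ifdef _WIN64
    #  if defined(_M_IA64)
    #    define MS_WINI64              /* Itanium */
    #  elif defined(_M_X64) || defined(_M_AMD64)
    #    define MS_WINX64              /* x64 / AMD64 */
    #  endif
    #endif

    /* Minimum targeted platform: XP for x64, NT 4.0 otherwise. */
    #ifndef _WIN32_WINNT
    #  ifdef MS_WINX64
    #    define _WIN32_WINNT 0x0501
    #  else
    #    define _WIN32_WINNT 0x0400
    #  endif
    #endif
    #ifndef WINVER
    #  define WINVER _WIN32_WINNT
    #endif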
The file should now follow PEP 7, except that it uses 4-space indents
(in the style of Py3k). This particular code would be really hard to
read with the regular tab indents.
Other changes:
- reflow long lines
- change multi-line conditionals to have the test at the end of the line
* lines too long
* wrong indentation
* space after a function name
* wrong function name in error string
* simplifying some logic
Also add an error check to PyDict_SetItemString.
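
The kind of check being added looks roughly like this (the wrapper function and key name are illustrative, not taken from the patch):

    /* Illustrative only: propagate a failure from PyDict_SetItemString. */
    static int
    add_constant(PyObject *dict, PyObject *value)
    {
        if (PyDict_SetItemString(dict, "example_key", value) < 0)
            return -1;      /* exception already set by PyDict_SetItemString */
        return 0;
    }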
Patch #1591665: implement the __dir__() special function lookup in PyObject_Dir.
Had to change a few bits of the patch because classobjs and __methods__ are still
in Py2.6.
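
A hedged sketch of the special-method hook being added; the helper name is made up, and the real lookup rules (type vs. instance, sorting of the result) are not shown here:

    /* Sketch: let an object customize dir() by supplying __dir__(). */
    static PyObject *
    call_dunder_dir(PyObject *obj)
    {
        PyObject *dirfunc, *result;

        dirfunc = PyObject_GetAttrString(obj, "__dir__");
        if (dirfunc == NULL) {
            PyErr_Clear();      /* no hook: caller falls back to the default path */
            return NULL;        /* NULL with no exception set means "use default" */
        }
        result = PyObject_CallObject(dirfunc, NULL);
        Py_DECREF(dirfunc);
        if (result != NULL && !PyList_Check(result)) {
            PyErr_SetString(PyExc_TypeError, "__dir__() must return a list");
            Py_CLEAR(result);
        }
        return result;
    }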
exception if the -i command line option or PYTHONINSPECT environment
variable is given, but break into the interactive interpreter just like
on other exceptions or normal program exit.
(backport)
* use %r instead of backticks since backticks are going away in Py3k
* PyArena_Malloc() already sets PyErr_NoMemory so we don't need to do it again
* the signature for ast2obj_int incorrectly used a bool, rather than a long
I can't think of an easy way to test this behavior. It only occurs
when the file system default encoding and the interpreter default
encoding are different, such that you can open the file but not decode
its name.
is specified at the top of the file. Also add a note that Python/Python-ast.c
needs to be committed separately after a change to the AST grammar to capture
the revision number of the change (which is what __version__ is set to).
The next step of PEP 352 (for 2.6) causes raising a string exception to trigger
a TypeError. Trying to catch a string exception raises a DeprecationWarning.
References to string exceptions have been removed from the docs since they are
now just an error.
When running the interpreter in an environment that would cause it to set
stdout/stderr/stdin's encoding, having a sitecustomize that would replace
them with something other than PyFile objects would crash the interpreter.
Fix it by simply ignoring the encoding-setting for non-files.
This could do with a test, but I can think of no maintainable and portable
way to test this bug, short of adding a sitecustomize.py to the build system
and having it always run with it (hmmm....)
It seems like this should be a different error than SystemError, but
I don't have any great ideas and SystemError was raised in 2.4 and earlier.
Will backport.
some warnings from Klocwork. They verify the assumptions about the format
of svn version output.
The assert in the thread module helped debug a problem on HP-UX.
* unified the way intobject, longobject and mystrtoul handle
values around -sys.maxint-1.
* in general, trying to entirely avoid overflows in any computation
involving signed ints or longs is extremely involved. Fixed a few
simple cases where a compiler might be too clever (but that's all
guesswork).
* more overflow checks against bad data in marshal.c.
* 2.5 specific: fixed a number of places that were still confusing int
and Py_ssize_t. Some of them could potentially have caused
"real-world" breakage.
* list.pop(x): fixing overflow issues on x was messy. I just reverted
to PyArg_ParseTuple("n"), which does the right thing (see the sketch
after this list). (An obscure test was trying to give a Decimal to
list.pop()... doesn't make sense any more IMHO)
* trying to write a few tests...
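
A minimal sketch of the "n" pattern mentioned above (the function name is made up; this is not the actual listobject.c code):

    /* Parse an optional index as Py_ssize_t; "n" rejects overflow for us. */
    static PyObject *
    pop_sketch(PyObject *self, PyObject *args)
    {
        Py_ssize_t index = -1;              /* default: last element */

        if (!PyArg_ParseTuple(args, "|n:pop", &index))
            return NULL;                    /* overflow or bad type: error set */
        /* ... bounds-check index and pop the element here ... */
        Py_RETURN_NONE;
    }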
The compiler was checking that there was something on the fblock
stack, but not that there was a loop on the stack. Fixed that and
added a test for the specific syntax error.
Bug fix candidate.
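
A hedged sketch of the missing check; the compiler-unit field names and the LOOP constant are assumptions about the internals, not quotes from the patch:

    /* Sketch: walk the fblock stack and require an actual loop entry,
       instead of accepting any enclosing block (e.g. a bare try/finally). */
    static int
    enclosing_loop_exists(struct compiler *c)
    {
        int i;
        for (i = 0; i < c->u->u_nfblocks; i++) {
            if (c->u->u_fblock[i].fb_type == LOOP)
                return 1;
        }
        return 0;       /* no loop: this is the new syntax error case */
    }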
Building with HP's cc on HP-UX turned up a couple of problems.
_PyGILState_NoteThreadState was declared as static inconsistently.
Make it static as it's not necessary outside of this module.
Some tests failed because errno was reset to 0. (I think the tests
that failed were at least: test_fcntl and test_mailbox).
Ensure that errno doesn't change after a call to Py_END_ALLOW_THREADS.
This only affected debug builds.
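
A sketch of the pattern that keeps errno stable across the macro; the wrapper function is illustrative and may not match where the real fix landed:

    #include "Python.h"
    #include <errno.h>
    #include <unistd.h>

    /* Illustrative: save errno before Py_END_ALLOW_THREADS and restore it
       afterwards, so callers see what the system call actually set. */
    static ssize_t
    write_keeping_errno(int fd, const void *buf, size_t len)
    {
        ssize_t result;
        int saved_errno;

        Py_BEGIN_ALLOW_THREADS
        result = write(fd, buf, len);
        saved_errno = errno;
        Py_END_ALLOW_THREADS
        errno = saved_errno;
        return result;
    }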
generator expressions (x for x, in ...) work again.
Sigh, I only fixed for loops the first time, not list comps and genexprs too.
I couldn't find any more unpacking cases where there is a similar bug lurking.
This code should be refactored to eliminate the duplication. I'm sure
the listcomp/genexpr code can be refactored. I'm not sure if the for loop
can re-use any of the same code though.
Will backport to 2.5 (the only place it matters).
I modified this patch somewhat by fixing style, adding some error checking, and adding
XXX comments. This patch requires review and some changes are to be expected.
I'm checking in now to get the greatest possible review and establish a
baseline for moving forward. I don't want this to hold up release if possible.
However, there was no error checking that PyFloat_FromDouble returned
a valid pointer. I believe this change is correct as it seemed
to follow other code in the area.
Klocwork # 292.
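
The added check amounts to the usual NULL test on the allocation; the names below are illustrative:

    /* Illustrative: PyFloat_FromDouble can return NULL on memory
       exhaustion, so check it before handing the result onwards. */
    static int
    append_float(PyObject *list, double value)
    {
        PyObject *f = PyFloat_FromDouble(value);
        if (f == NULL)
            return -1;                  /* error already set */
        if (PyList_Append(list, f) < 0) {
            Py_DECREF(f);
            return -1;
        }
        Py_DECREF(f);
        return 0;
    }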
there was no verification that privateobj was a PyString. If it wasn't
a string, this could have allowed a NULL pointer to creep in below and crash.
I wonder if this should be PyString_CheckExact? Must identifiers be strings
or can they be subclasses?
Klocwork #275
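
A paraphrased sketch of the guard described above (not the actual mangling code):

    /* Sketch of the guard in a name-mangling helper (paraphrased). */
    static PyObject *
    mangle_sketch(PyObject *privateobj, PyObject *ident)
    {
        /* Refuse to mangle unless the private class name is really a
           string; otherwise a NULL char pointer could creep in below. */
        if (privateobj == NULL || !PyString_Check(privateobj)) {
            Py_INCREF(ident);
            return ident;       /* hand the identifier back unmangled */
        }
        /* ... otherwise build "_ClassName__ident" from the two strings ... */
        Py_INCREF(ident);
        return ident;
    }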
This is the first batch of fixes that should be easy to verify based on context.
This fixes problem numbers: 220 (ast), 323-324 (symtable),
321-322 (structseq), 215 (array), 210 (hotshot), 182 (codecs), 209 (etree).
PyThreadState_SetAsyncExc(): internal correctness changes wrt
refcount safety and deadlock avoidance. Also added a basic test
case (relying on ctypes) and repaired the docs.
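
For reference, the C-level calling convention of the API being hardened (the wrapper and its policy are illustrative; the new test drives the same API through ctypes):

    /* Illustrative: ask another thread to raise an exception.  The return
       value is the number of thread states modified; 0 means the id was
       not found, and anything greater than 1 should be undone. */
    static int
    interrupt_thread(long thread_id)
    {
        int nmod = PyThreadState_SetAsyncExc(thread_id,
                                             PyExc_KeyboardInterrupt);
        if (nmod > 1) {
            PyThreadState_SetAsyncExc(thread_id, NULL);    /* undo */
            return -1;
        }
        return nmod;
    }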
on each iteration. I'm not positive this is the best way to handle
this. I'm also not sure that there aren't other cases where
the lnotab is generated incorrectly. It would be great if people
that use pdb or tracing could test heavily.
Also:
* Remove dead/duplicated code that wasn't used/necessary
because we already handled the docstring prior to entering the loop.
* add some debugging code into the compiler (#if 0'd out).
This provides the proper warning for struct.pack().
PyErr_Warn() is now deprecated in favor of PyErr_WarnEx().
As mentioned by Tim Peters on python-dev.
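
The replacement call takes an explicit stack level; a minimal illustration (the warning text here is made up, not struct's actual message):

    /* PyErr_WarnEx(category, message, stack_level) returns -1 when the
       warning has been turned into an exception by the warning filters. */
    static int
    warn_deprecated(void)
    {
        return PyErr_WarnEx(PyExc_DeprecationWarning,
                            "this behaviour is deprecated",  /* made-up text */
                            2);  /* attribute the warning to the caller's caller */
    }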
with PEP 302. This was fixed by adding an ``imp.NullImporter`` type that is
used in ``sys.path_importer_cache`` to cache non-directory paths and avoid
excessive filesystem operations during imports.
In general, C doesn't define anything about what happens when
an operation on a signed integral type overflows, and PyOS_strtol()
did several formally undefined things of that nature on signed
longs. Some version of gcc apparently tries to exploit that now,
and PyOS_strtol() could fail to detect overflow then.
Tried to repair all that, although it seems at least as likely to me
that we'll get screwed by bad platform definitions for LONG_MIN
and/or LONG_MAX now. For that reason, I don't recommend backporting
this.
Note that I have no box on which this makes a lick of difference --
can't really test it, except to note that it didn't break anything
on my boxes.
Silent change: PyOS_strtol() used to return the hard-coded 0x7fffffff
in case of overflow. Now it returns LONG_MAX. They're the same only on
32-bit boxes (although C doesn't guarantee that either ...).
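
A hedged sketch of the well-defined way to detect the overflow: accumulate in unsigned arithmetic and compare against a precomputed limit before each step. This is not the actual PyOS_strtol() code; the names and the decimal-only restriction are for illustration:

    #include <limits.h>
    #include <ctype.h>

    static long
    strtol_sketch(const char *s, int *overflow)
    {
        unsigned long result = 0;
        const unsigned long limit = (unsigned long)LONG_MAX;

        *overflow = 0;
        while (isdigit((unsigned char)*s)) {
            unsigned long digit = (unsigned long)(*s - '0');
            /* Well-defined check: would result * 10 + digit exceed limit? */
            if (result > (limit - digit) / 10) {
                *overflow = 1;
                return LONG_MAX;        /* mirror the new overflow result */
            }
            result = result * 10 + digit;
            s++;
        }
        return (long)result;
    }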
Moved the code for _PyThread_CurrentFrames() up, so it's no longer
in a huge "#ifdef WITH_THREAD" block (I didn't realize it /was/ in
one).
Changed test_sys's test_current_frames() so it passes with or without
thread support compiled in.
Note that test_sys fails when Python is compiled without threads,
but for an unrelated reason (the old test_exit() fails with an
indirect ImportError on the `thread` module). There are also
other unrelated compilation failures without threads, in extension
modules (like ctypes); at least the core compiles again.
Do we really support --without-threads? If so, there are several
problems remaining.
v2 can be NULL if exception2 is NULL. I don't think that condition can happen,
but I'm not sure it can't either. Now the code will protect against either
being NULL.
omit a default "error" argument for NULL pointer. This allows
the parser to take a codec from cjkcodecs again.
(Reported by Taewook Kang and reviewed by Walter Doerwald)
Heavily revised, comprising revisions:
46640 - original trunk revision (backed out in r46655)
46647 - markup fix (backed out in r46655)
46692:46918 merged from branch aimacintyre-sf1454481
branch tested on buildbots (Windows buildbots had problems
not related to these changes).
exact maximum size someone guesses is needed. In this case, if
we're really worried about extreme integers, then "cp%d" can
actually need 14 bytes (2 for "cp" + 1 for \0 at the end +
11 for -(2**31-1)). So reserve 128 bytes instead -- nothing is
actually saved by making a stack-local buffer tiny.
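
For scale, the formatting in question is just one snprintf into a generously sized local buffer (names are illustrative):

    #include <stdio.h>

    static void
    format_codepage(int code_page)
    {
        /* 128 bytes dwarfs the true 14-byte worst case:
           "cp" (2) + "-2147483647" (11) + '\0' (1). */
        char encoding[128];
        snprintf(encoding, sizeof(encoding), "cp%d", code_page);
        (void)encoding;     /* ... use the name ... */
    }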
46640 Patch #1454481: Make thread stack size runtime tunable.
46647 Markup fix
The first is causing many buildbots to fail test runs, and there
are multiple causes with seemingly no immediate prospects for
repairing them. See python-dev discussion.
Note that a branch can (and should) be created for resolving these
problems, like
svn copy svn+ssh://svn.python.org/python/trunk -r46640 svn+ssh://svn.python.org/python/branches/NEW_BRANCH
followed by merging rev 46647 to the new branch.
set_exc_info(), reset_exc_info(): By exploiting the
likely (who knows?) invariant that when an exception's
`type` is NULL, its `value` and `traceback` are also NULL,
save some cycles in heavily-executed code.
This is a "a kronar saved is a kronar earned" patch: the
speedup isn't reliably measurable, but it obviously does
reduce the operation count in the normal (no exception
raised) path through PyEval_EvalFrameEx().
The tim-exc_sanity branch tries to push this harder, but
is still blowing up (at least in part due to pre-existing
subtle bugs that appear to have no other visible
consequences!).
Not a bugfix candidate.
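
A hedged sketch of the shortcut (not the actual set_exc_info()/reset_exc_info() code, though the PyThreadState field names are real):

    /* If exc_type is NULL, rely on the invariant that exc_value and
       exc_traceback are NULL too, and skip the whole clearing dance. */
    static void
    clear_exc_sketch(PyThreadState *tstate)
    {
        if (tstate->exc_type == NULL)
            return;
        Py_CLEAR(tstate->exc_type);
        Py_CLEAR(tstate->exc_value);
        Py_CLEAR(tstate->exc_traceback);
    }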
invalid file paths for the built-in import machinery which leads to
fewer open calls on startup.
Also fix an issue with PEP 302 style import hooks which led to more open()
calls than necessary.
both mystrtoul.c and longobject.c. Share the table instead. Also
cut its size by 64 entries (they had been used for an inscrutable
trick originally, but the code no longer tries to use that trick).
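
The shared table maps characters to digit values; the function below is an illustrative stand-in for one lookup (the real thing is a single array shared by both files, and the exact sentinel value is an assumption):

    /* Illustrative equivalent of one table lookup: map a character to its
       digit value, or a sentinel larger than any legal base (<= 36). */
    static int
    digit_value(unsigned char ch)
    {
        if (ch >= '0' && ch <= '9')
            return ch - '0';
        if (ch >= 'a' && ch <= 'z')
            return ch - 'a' + 10;
        if (ch >= 'A' && ch <= 'Z')
            return ch - 'A' + 10;
        return 37;      /* not a digit in any supported base */
    }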
In rare cases of strings specifying true values near sys.maxint,
and oddball bases (not decimal or a power of 2), int(string, base)
could deliver insane answers. This repairs all such problems, and
also speeds string->int significantly. On my box, here are %
speedups for decimal strings of various lengths:
length speedup
------ -------
1 12.4%
2 15.7%
3 20.6%
4 28.1%
5 33.2%
6 37.5%
7 41.9%
8 46.3%
9 51.2%
10 19.5%
11 19.9%
12 23.9%
13 23.7%
14 23.3%
15 24.9%
16 25.3%
17 28.3%
18 27.9%
19 35.7%
Note that the difference between 9 and 10 is the difference between
short and long Python ints on a 32-bit box. The patch doesn't
actually do anything to speed conversion to long: the speedup is
due to detecting "unsigned long" overflow more quickly.
This is a bugfix candidate, but it's a non-trivial patch and it
would be painful to separate the "bug fix" from the "speed up" parts.
discussion.
There are two places in the documentation that still mention __context__:
Doc/lib/libstdtypes.tex -- I wasn't quite sure how to rewrite that without
spending a whole lot of time thinking about it; and whatsnew, which Andrew
usually likes to change himself.
- Warn (raise ImportWarning) when importing would have picked up a directory
as a package, if only it had had an __init__.py. This swaps two tests (for
case-ness and __init__-ness), but the case test is not really more expensive,
and it's not in a speed-critical section.
- Test for the new warning by importing a common non-package directory on
sys.path: site-packages
- In regrtest.py, silence warnings generated by the build-environment
because Modules/ (which is added to sys.path for Setup-created modules)
has 'zlib' and '_ctypes' directories without __init__.py's.
MAXPATHLEN-sized buffers for various output-buffers (like to realpath()),
and that's correct on BSD platforms, but not Linux (which uses PATH_MAX, and
does not define MAXPATHLEN.) Cursory googling suggests Linux is following a
newer standard than BSD, but in cases like this, who knows. Using the
greater of PATH_MAX and 1024 as a fallback for MAXPATHLEN seems to be the
most portable solution.
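
The fallback described above can be written as a preprocessor guard along these lines (a sketch; the actual header and placement may differ):

    #include <limits.h>     /* may provide PATH_MAX on Linux */

    /* Sketch: prefer the platform's MAXPATHLEN; otherwise use the greater
       of PATH_MAX and 1024. */
    #ifndef MAXPATHLEN
    #  if defined(PATH_MAX) && PATH_MAX > 1024
    #    define MAXPATHLEN PATH_MAX
    #  else
    #    define MAXPATHLEN 1024
    #  endif
    #endif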
exceptions that can't be raised any further, because (for instance) they
occur in __del__ methods. The coroutine tests in test_generators were
triggering this leak. Remove the leakers' testcase, and add a simpler
testcase that explicitly tests this leak to test_generators.
test_generators now no longer leaks at all, on my machine. This fix may also
solve other leaks, but my full refleakhunting run is still busy, so who
knows?
using a custom, nearly-identical macro. This probably changes how some of
these functions are compiled, which may result in fractionally slower (or
faster) execution. Considering the nature of traversal, visiting much of the
address space in unpredictable patterns, I'd argue that the code readability and
maintainability are well worth it ;P
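
For reference, a traverse function written with Py_VISIT looks like this (the type and its fields are made up):

    #include "Python.h"

    /* Illustrative container type with two object fields. */
    typedef struct {
        PyObject_HEAD
        PyObject *first;
        PyObject *second;
    } ExampleObject;

    /* Py_VISIT expands to the visit-call-plus-early-return boilerplate
       that the custom macro spelled out by hand. */
    static int
    example_traverse(ExampleObject *self, visitproc visit, void *arg)
    {
        Py_VISIT(self->first);
        Py_VISIT(self->second);
        return 0;
    }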
passing a string. Martin already fixed the actual crash by ensuring
Py_UNICODE is unsigned. As discussed on python-dev, this fix
removes the possibility of creating a unicode string from a raw buffer.
There is an outstanding question of how to fix the crash in 2.4.