http://www.python.org/sf/444708
This adds the optional argument for str.strip
to unicode.strip too and makes it possible
to call str.strip with a unicode argument
and unicode.strip with a str argument.
+ Redirect PyMem_{Del, DEL} to the object allocator's free() when
pymalloc is enabled. Needed so old extensions can continue to
mix PyObject_New with PyMem_DEL.
+ This implies that pgen needs to be able to see the PyObject_XYZ
declarations too. pgenheaders.h now includes Python.h. An
implication is that I expect obmalloc.o needs to get linked into
pgen on non-Windows boxes.
+ When PYMALLOC_DEBUG is defined, *all* Py memory API functions
now funnel through the debug allocator wrapper around pymalloc.
This is the default in a debug build.
+ That caused compile.c to fail: it indirectly mixed PyMem_Malloc
with raw platform free() in one place. This is verboten.
+ Continued looping until n bytes in the buffer have been filled, not
just when n bytes have been read from the file. This repairs the
bug that f.readlines() only sucked up the first 8192 bytes of the file
on Windows when universal newlines was enabled and f was opened in
U mode (see Python-Dev -- this was the ultimate cause of the
test_inspect.py failure).
+ Changed prototype to take a char* buffer (void* doesn't make much sense).
+ Squashed size_t vs int mismatches (in particular, besides the unsigned
vs signed distinction, size_t may be larger than int).
+ Gets out under all error conditions now (it's possible for fread() to
suffer an error even if it returns a number larger than 0 -- any
"short read" is an error or EOF condition).
+ Rearranged and simplified declarations.
Highlights: import and friends will understand any of \r, \n and \r\n
as end of line. Python file input will do the same if you use mode 'U'.
Everything can be disabled by configuring with --without-universal-newlines.
See PEP 278 for details.
Added code to call this when PYMALLOC_DEBUG is enabled, and envar
PYTHONMALLOCSTATS is set, whenever a new arena is obtained and once
late in the Python shutdown process.
PyMem_{Del, DEL} doesn't work yet (compilation problems).
pyport.h: _PyMem_EXTRA is gone.
pymem.h: Repaired comments. PyMem_{Malloc, MALLOC} and
PyMem_{Realloc, REALLOC} now make the same x-platform guarantees when
asking for 0 bytes, and when passing a NULL pointer to the latter.
object.c: PyMem_{Malloc, Realloc} just call their macro versions
now, since the latter take care of the x-platform 0 and NULL stuff
by themselves now.
pypcre.c, grow_stack(): So sue me. On two lines, this called
PyMem_RESIZE to grow a "const" area. It's not legit to realloc a
const area, so the compiler warned given the new expansion of
PyMem_RESIZE. It would have gotten the same warning before if it
had used PyMem_Resize() instead; the older macro version, but not the
function version, silently cast away the constness. IMO that was a wrong
thing to do, and the docs say the macro versions of PyMem_xyz are
deprecated anyway. If somebody else is resizing const areas with the
macro spelling, they'll get a warning when they recompile now too.
Allow PyObject_Del, PyObject_Free, and PyObject_GC_Del to be used as
function designators. Provide source compatibility macros.
Make PyObject_GC_Track and PyObject_GC_UnTrack functions instead of
trivial macros wrapping functions.
PEP 285. Everything described in the PEP is here, and there is even
some documentation. I had to fix 12 unit tests; all but one of these
were printing Boolean outcomes that changed from 0/1 to False/True.
(The exception is test_unicode.py, which did a type(x) == type(y)
style comparison. I could've fixed that with a single line using
issubtype(x, type(y)), but instead chose to be explicit about those
places where a bool is expected.)
Still to do: perhaps more documentation; change standard library
modules to return False/True from predicates.
This displays stats about the # of arenas, pools, blocks and bytes, to
stderr, both used and reserved but unused.
CAUTION: Because PYMALLOC_DEBUG is on, the debug malloc routine adds
16 bytes to each request. This makes each block appear two size classes
higher than it would be if PYMALLOC_DEBUG weren't on.
So far, playing with this confirms the obvious: there's a lot of activity
in the "small dict" size class, but nothing in the core makes any use of
the 8-byte or 16-byte classes.
descriptor, as used for the tp_methods slot of a type. These new flag
bits are both optional, and mutually exclusive. Most methods will not
use either. These flags are used to create special method types which
exist in the same namespace as normal methods without having to use
tedious construction code to insert the new special method objects in
the type's tp_dict after PyType_Ready() has been called.
If METH_CLASS is specified, the method will represent a class method
like that returned by the classmethod() built-in.
If METH_STATIC is specified, the method will represent a static method
like that returned by the staticmethod() built-in.
These flags may not be used in the PyMethodDef table for modules since
these special method types are not meaningful in that case; a
ValueError will be raised if these flags are found in that context.
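(Illustration, not from the checkin: a hypothetical tp_methods table
using the new flags; the spam_* names are made up.)
    static PyObject *
    spam_fromstring(PyObject *cls, PyObject *args)
    {
        /* METH_CLASS: the first argument is the type object itself */
        const char *s;
        if (!PyArg_ParseTuple(args, "s:fromstring", &s))
            return NULL;
        return PyObject_CallFunction(cls, "s", s);
    }

    static PyObject *
    spam_version(PyObject *ignored, PyObject *args)
    {
        /* METH_STATIC: the first argument is NULL, as for a plain function */
        return PyString_FromString("1.0");
    }

    static PyMethodDef spam_methods[] = {
        {"fromstring", spam_fromstring, METH_VARARGS | METH_CLASS,
         "fromstring(s) -> new instance (a class method)"},
        {"version", spam_version, METH_VARARGS | METH_STATIC,
         "version() -> str (a static method)"},
        {NULL, NULL}
    };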
When WITH_PYMALLOC is defined, define PYMALLOC_DEBUG to enable the debug
allocator. This can be done independent of build type (release or debug).
A debug build automatically defines PYMALLOC_DEBUG when pymalloc is
enabled. It's a detected error to define PYMALLOC_DEBUG when pymalloc
isn't enabled.
Two debugging entry points defined only under PYMALLOC_DEBUG:
+ _PyMalloc_DebugCheckAddress(const void *p) can be used (e.g., from gdb)
to sanity-check a memory block obtained from pymalloc. It sprays
info to stderr (see next) and dies via Py_FatalError if the block is
detectably damaged.
+ _PyMalloc_DebugDumpAddress(const void *p) can be used to spray info
about a debug memory block to stderr.
A tiny start at implementing "API family" checks; it isn't good for
anything yet.
_PyMalloc_DebugRealloc() has been optimized to do little when the new
size is <= old size. However, if the new size is larger, it really
can't call the underlying realloc() routine without either violating its
contract, or knowing something non-trivial about how the underlying
realloc() works. A memcpy is always done in this case.
This was a disaster for one (and only one) of the std tests: test_bufio
creates single text file lines up to a million characters long. On
Windows, fileobject.c's get_line() uses the horridly funky
getline_via_fgets(), which keeps growing and growing a string object
hoping to find a newline. It grew the string object 1000 bytes each
time, so for a million-character string it took approximately forever
(I gave up after a few minutes).
So, also:
fileobject.c, getline_via_fgets(): When a single line is outrageously
long, grow the string object at a mildly exponential rate, instead of
just 1000 bytes at a time.
That's enough so that a debug-build test_bufio finishes in about 5 seconds
on my Win98SE box. I'm curious to try this on Win2K, because it has very
different memory behavior than Win9X, and test_bufio always took a factor
of 10 longer to complete on Win2K. It *could* be that the endless
reallocs were simply killing it on Win2K even in the release build.
and not pymalloc. Add the functions PyMalloc_New, PyMalloc_NewVar, and
PyMalloc_Del that will use pymalloc if it's enabled. If pymalloc is
not enabled then they use the standard malloc (PyMem_*).
Windows some modules are considered (by me, and I don't care what anyone
else thinks about this <wink>) to be part of "the core" despite that they
happen to be compiled into separate DLLs (the "to DLL or not to DLL?"
question on Windows is nearly arbitrary). Making the pymalloc entry
points available to them allows the Windows build to complete without
incident when WITH_PYMALLOC is #define'd.
Note that this isn't unprecedented. Other "private API" functions we
export include _PySequence_IterSearch, _PyEval_SliceIndex, _PyCodec_Lookup,
_Py_ZeroStruct, _Py_TrueStruct, _PyLong_New and _PyModule_Clear.
Another year in the quest to out-guess random C behavior.
Added macros Py_ADJUST_ERANGE1(X) and Py_ADJUST_ERANGE2(X, Y). The latter
is useful for functions with complex results. Two corrections to errno-
after-libm-call are attempted:
1. If the platform set errno to ERANGE due to underflow, clear errno.
Some unknown subset of libm versions and link options do this. It's
allowed by C89, but I never figured anyone would do it.
2. If the platform did not set errno but overflow occurred, force
errno to ERANGE. C89 required setting errno to ERANGE, but C99
doesn't. Some unknown subset of libm versions and link options do
it the C99 way now.
Bugfix candidate, but hold off until some Linux people actually try it,
with and without -lieee. I'll send a help plea to Python-Dev.
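(Sketch of the intended call pattern around a one-argument libm call;
checked_exp is a made-up wrapper.)
    #include "Python.h"
    #include <math.h>
    #include <errno.h>

    static double
    checked_exp(double v, int *overflowed)
    {
        double x;
        errno = 0;
        x = exp(v);
        Py_ADJUST_ERANGE1(x);   /* normalize errno to the C89 story */
        *overflowed = (errno == ERANGE);
        return x;
    }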
platform realloc(p, 0) returns NULL, so MALLOC_ZERO_RETURNS_NULL can
be correctly undefined yet realloc(p, 0) can return NULL anyway.
Prevent realloc(p, 0) doing free(p) and returning NULL via a different
hack. Would probably be better to get rid of MALLOC_ZERO_RETURNS_NULL
entirely.
Bugfix candidate.
alignment gimmick. David Abrahams notes that the standard "long double"
actually requires stricter alignment than "double" on some Tru64 box.
On my box and yours <wink>, it's the same, so no harm done on most
boxes.
the first 3 characters of this string in several places, so for as long
as they remain "2.2" it confuses the heck out of attempts to build 2.3
stuff using distutils.
Removed the ancient "#define ANY void".
Bugfix candidate? Hard call. The bug report claims the existence of
this #define creates conflicts with other packages, which is easy to
believe. OTOH, some extension authors may still be relying on its
presence. I'm afraid you can't win on this one.
PyDict_UpdateFromSeq2(): removed it.
PyDict_MergeFromSeq2(): made it public and documented it.
PyDict_Merge() docs: updated to reveal <wink> that the second
argument can be any mapping object.
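(Sketch of the now-public call; "pairs" stands for any iterable of
2-sequences.)
    PyObject *d = PyDict_New();
    if (d == NULL)
        return NULL;
    /* override != 0: later pairs win on duplicate keys */
    if (PyDict_MergeFromSeq2(d, pairs, 1) < 0) {
        Py_DECREF(d);
        return NULL;
    }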
type.__module__ behavior.
This adds the module name and a dot in front of the type name in every
type object initializer, except for built-in types (and those that
already had this). Note that it touches lots of Mac modules -- I have
no way to test these but the changes look right. Apologies if they're
not. This also touches the weakref docs, which contain a sample type
object initializer. It also touches the mmap test output, because the
mmap type's repr is included in that output. It touches object.h to
put the correct description in a comment.
Big Hammer to implement -Qnew as PEP 238 says it should work (a global
option affecting all instances of "/").
pydebug.h, main.c, pythonrun.c: define a private _Py_QnewFlag flag, true
iff -Qnew is passed on the command line. This should go away (as the
comments say) when true division becomes The Rule. This is
deliberately not exposed to runtime inspection or modification: it's
a one-way one-shot switch to pretend you're using Python 3.
ceval.c: when _Py_QnewFlag is set, treat BINARY_DIVIDE as
BINARY_TRUE_DIVIDE.
test_{descr, generators, zipfile}.py: fiddle so these pass under
-Qnew too. This was just a matter of s!/!//! in test_generators and
test_zipfile. test_descr was trickier, as testbinop() embeds the
assumption that "/" is the same as calling a "__div__" method; put
a temporary hack there to call "__truediv__" instead when the method
name is "__div__" and 1/2 evaluates to 0.5.
Three standard tests still fail under -Qnew (on Windows; somebody
please try the Linux tests with -Qnew too! Linux runs a whole bunch
of tests Windows doesn't):
test_augassign
test_class
test_coercion
I can't stay awake longer to stare at this (be my guest). Offhand
cures weren't obvious, nor was it even obvious that cures are possible
without major hackery.
Question: when -Qnew is in effect, should calls to __div__ magically
change into calls to __truediv__? See "major hackery" at tail end of
last paragraph <wink>.
There's now a new structmember code, T_OBJECT_EX, which is used for
all __slot__ variables (except __weakref__, which has special behavior
anyway). This new code raises AttributeError when the variable is
NULL rather than converting NULL to None.
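(A hypothetical PyMemberDef entry using the new code, for
illustration.)
    #include "structmember.h"

    typedef struct {
        PyObject_HEAD
        PyObject *payload;      /* NULL until first assignment */
    } MyObject;

    static PyMemberDef my_members[] = {
        /* T_OBJECT_EX: a NULL slot raises AttributeError on access,
           where plain T_OBJECT would silently yield None */
        {"payload", T_OBJECT_EX, offsetof(MyObject, payload), 0,
         "made-up example member"},
        {NULL}
    };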
use wrappers on all platforms, to make this as consistent as possible x-
platform (in particular, make sure there's at least one \0 byte in
the output buffer). Also document more of the truth about what these do.
getargs.c, seterror(): Three computations of remaining buffer size were
backwards, thus telling PyOS_snprintf the buffer is larger than it
actually is. This matters a lot now that PyOS_snprintf ensures there's a
trailing \0 byte (because it didn't get the truth about the buffer size,
it was storing \0 beyond the true end of the buffer).
sysmodule.c, mywrite(): Simplify, now that PyOS_vsnprintf guarantees to
produce a \0 byte.
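(The guarantee in one sketch; "expected" and "found" stand for any C
strings.)
    char buf[64];
    PyOS_snprintf(buf, sizeof(buf), "expected %s, found %s",
                  expected, found);
    /* buf has a trailing \0 even when the message was truncated */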
const char* instead of char*. The change is conceptually correct, and
indirectly fixes a compiler warning introduced when somebody else innocently
passed a const char* to this function.
RISCOS/Makefile:
include structseq and weakrefobject;
changes to keep command line length below 2048
RISCOS/Modules/riscosmodule.c:
typos from the stat structseq patch
Include/pyport.h:
don't re-#define __attribute__(__x) on RISC OS as it is already defined in the C library
object.h: Added PyType_CheckExact macro.
typeobject.c, type_new():
+ Use the new macro.
+ Assert that the arguments have the right types rather than do incomplete
runtime checks "sometimes".
+ If this isn't the 1-argument flavor of type(), and there aren't 3 args
  total, produce a "type() takes 1 or 3 args" msg before
PyArg_ParseTupleAndKeywords produces a "takes exactly 3" msg.
PyObject_CallFunctionObArgs() and PyObject_CallMethodObArgs() have the
advantage that no format strings need to be parsed. The CallMethod
variant also avoids creating a new string object in order to retrieve
a method from an object.
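(Sketch; note that released Pythons spell these
PyObject_CallFunctionObjArgs and PyObject_CallMethodObjArgs.)
    /* call func(a, b) with no format string to parse */
    PyObject *res = PyObject_CallFunctionObjArgs(func, a, b, NULL);
    if (res == NULL)
        return NULL;            /* exception already set */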
outer level, the iterator protocol is used for memory-efficiency (the
outer sequence may be very large if fully materialized); at the inner
level, PySequence_Fast() is used for time-efficiency (these should
always be sequences of length 2).
dictobject.c, new functions PyDict_{Merge,Update}FromSeq2. These are
wholly analogous to PyDict_{Merge,Update}, but process a sequence-of-2-
sequences argument instead of a mapping object. For now, I left these
functions file static, so no corresponding doc changes. It's tempting
to change dict.update() to allow a sequence-of-2-seqs argument too.
Also changed the name of dictionary's keyword argument from "mapping"
to "x". Got a better name? "mapping_or_sequence_of_pairs" isn't
attractive, although more so than "mosop" <wink>.
abstract.h, abstract.tex: Added new PySequence_Fast_GET_SIZE function,
much faster than going thru the all-purpose PySequence_Size.
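(The inner-level pattern, roughly as the new code uses it; error
handling trimmed.)
    PyObject *fast = PySequence_Fast(item, "sequence element is not a pair");
    if (fast == NULL)
        return -1;
    if (PySequence_Fast_GET_SIZE(fast) == 2) {
        PyObject *key = PySequence_Fast_GET_ITEM(fast, 0);
        PyObject *value = PySequence_Fast_GET_ITEM(fast, 1);
        /* ... PyDict_SetItem(d, key, value) ... */
    }
    Py_DECREF(fast);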
libfuncs.tex:
- Document dictionary().
- Fiddle tuple() and list() to admit that their argument is optional.
- The long-winded repetition of "a sequence, a container that supports
  iteration, or an iterator object" is getting to be a PITA. Many
months ago I suggested factoring this out into "iterable object",
where the definition of that could include being explicit about
generators too (as is, I'm not sure a reader outside of PythonLabs
could guess that "an iterator object" includes a generator call).
- Please check my curly braces -- I'm going blind <0.9 wink>.
abstract.c, PySequence_Tuple(): When PyObject_GetIter() fails, leave
its error msg alone now (the msg it produces has improved since
PySequence_Tuple was generalized to accept iterable objects, and
PySequence_Tuple was also stomping on the msg in cases it shouldn't
have even before PyObject_GetIter grew a better msg).
STRICT_SYSV_CURSES when compiling curses module on HP/UX. Generalize
access to _flags on systems where WINDOW is opaque. Fixes bugs
#432497, #422265, and the curses parts of #467145 and #473150.
'slotdef' structure typedef and 'struct wrapperbase'. By adding the
wrapper docstrings to the slotdef structure, the slotdefs array can
serve as the data structure that drives add_operators(); the wrapper
descriptor contains a pointer to slotdef structure. This replaces
lots of custom code from add_operators() by a loop over the slotdefs
array, and does away with all the tab_xxx tables.
This patch implements what we have discussed on python-dev late in
September: str(obj) and unicode(obj) should behave similarly, while
the old behaviour is retained for unicode(obj, encoding, errors).
The patch also adds a new feature with which objects can provide
unicode(obj) with input data: the __unicode__ method. Currently no
new tp_unicode slot is implemented; this is left as option for the
future.
Note that PyUnicode_FromEncodedObject() no longer accepts Unicode
objects as input. The API name already suggests that Unicode
objects do not belong in the list of acceptable objects and the
functionality was only needed because
PyUnicode_FromEncodedObject() was being used directly by
unicode(). The latter was changed in the discussed way:
* unicode(obj) calls PyObject_Unicode()
* unicode(obj, encoding, errors) calls PyUnicode_FromEncodedObject()
One thing left open to discussion is whether to leave the
PyUnicode_FromObject() API as a thin API extension on top of
PyUnicode_FromEncodedObject() or to turn it into a (macro) alias
for PyObject_Unicode() and deprecate it. Doing so would have some
surprising consequences though, e.g. u"abc" + 123 would turn out
as u"abc123"...
[Marc-Andre didn't have time to check this in before the deadline. I
hope this is OK, Marc-Andre! You can still make changes and commit
them on the trunk after the branch has been made, but then please mail
Barry a context diff if you want the change to be merged into the
2.2b1 release branch. GvR]
This is a big one, touching lots of files. Some of the platforms
aren't tested yet. Briefly, this changes the return value of the
os/posix functions stat(), fstat(), statvfs(), fstatvfs(), and the
time functions localtime(), gmtime(), and strptime() from tuples into
pseudo-sequences. When accessed as a sequence, they behave exactly as
before. But they also have attributes like st_mtime or tm_year. The
stat return value, moreover, has a few platform-specific attributes
that are not available through the sequence interface (because
everybody expects the sequence to have a fixed length, these couldn't
be added there). If your platform's struct stat doesn't define
st_blksize, st_blocks or st_rdev, they won't be accessible from Python
either.
(Still missing is a documentation update.)
This changes Pythread_start_thread() to return the thread ID, or -1
for an error. (It's technically an incompatible API change, but I
doubt anyone calls it.)
"for <var> in <testlist> may no longer be a single test followed by
a comma. This solves SF bug #431886. Note that if the testlist
contains more than one test, a trailing comma is still allowed, for
maximum backward compatibility; but this example is not:
[(x, y) for x in range(10), for y in range(10)]
^
The fix involved creating a new nonterminal 'testlist_safe' whose
definition doesn't allow the trailing comma if there's only one test:
testlist_safe: test [(',' test)+ [',']]
The platform requires 8-byte alignment for doubles, but the GC header
was 12 bytes and that threw off the natural alignment of the double
members of a subtype of complex. The fix puts the GC header into a
union with a double as the other member, to force no-looser-than
double alignment of GC headers. On boxes that require 8-byte alignment
for doubles, this may add pad bytes to the GC header accordingly; ditto
for platforms that *prefer* 8-byte alignment for doubles. On platforms
that don't care, it shouldn't change the memory layout (because the
size of the old GC header is certainly greater than the size of a double
on all platforms, so unioning with a double shouldn't change size or
alignment on such boxes).
is a list of weak references to types (new-style classes). Make this
accessible to Python as the function __subclasses__ which returns a
list of types -- we don't want Python programmers to be able to
manipulate the raw list.
In order to make this possible, I also had to add weak reference
support to type objects.
This will eventually be used together with a trap on attribute
assignment for dynamic classes for a major speed-up without losing the
dynamic properties of types: when a __foo__ method is added to a
class, the class and all its subclasses will get an appropriate tp_foo
slot function.
This simplifies the rounding in _PyObject_VAR_SIZE, allows to restore the
pre-rounding calling sequence, and allows some nice little simplifications
in its callers. I'm still making it return a size_t, though.
As Guido suggested, this makes the new subclassing code substantially
simpler. But the mechanics of doing it w/ C macro semantics are a mess,
and _PyObject_VAR_SIZE has a new calling sequence now.
Question: The PyObject_NEW_VAR macro appears to be part of the public API.
Regardless of what it expands to, the notion that it has to round up the
memory it allocates is new, and extensions containing the old
_PyObject_VAR_SIZE macro expansion (which was embedded in the
PyObject_NEW_VAR expansion) won't do this rounding. But the rounding
isn't actually *needed* except for new-style instances with dict pointers
after a variable-length blob of embedded data. So my guess is that we do
not need to bump the API version for this (as the rounding isn't needed
for anything an extension can do unless it's recompiled anyway). What's
your guess?
pad memory to properly align the __dict__ pointer in all cases.
gcmodule.c/objimpl.h, _PyObject_GC_Malloc:
+ Added a "padding" argument so that this flavor of malloc can allocate
enough bytes for alignment padding (it can't know this is needed, but
its callers do).
typeobject.c, PyType_GenericAlloc:
+ Allocated enough bytes to align the __dict__ pointer.
+ Sped and simplified the round-up-to-PTRSIZE logic.
+ Added blank lines so I could parse the if/else blocks <0.7 wink>.
test dramatically:
class T(tuple): __dynamic__ = 1
t = T(range(1000))
for i in range(1000): tt = tuple(t)
The speedup was about 5x compared to the previous state of CVS (1.7
vs. 8.8, in arbitrary time units). But it's still more than twice as
slow as the same test with __dynamic__ = 0 (0.8).
I'm not sure that I really want to go through the trouble of this kind
of speedup for every slot. Even doing it just for the most popular
slots will be a major effort (the new slot_sq_item is 40+ lines, while
the old one was one line with a powerful macro -- unfortunately the
speedup comes from expanding the macro and doing things in a way
specific to the slot signature).
An alternative that I'm currently considering is sketched in PLAN.txt:
trap setattr on type objects. But this will require keeping track of
all derived types using weak references.
instances).
Also added GC support to various auxiliary types: super, property,
descriptors, wrappers, dictproxy. (Only type objects have a tp_clear
field; the other types don't.)
One change was necessary to the GC infrastructure. We have statically
allocated type objects that don't have a GC header (and can't easily
be given one) and heap-allocated type objects that do have a GC
header. Giving these different metatypes would be really ugly: I
tried, and I had to modify pickle.py, cPickle.c, copy.py, invent a
new name for the new metatype and make it a built-in, change
affected tests... In short, a mess. So instead, we add a new type
slot tp_is_gc, which is a simple Boolean function that determines
whether a particular instance has GC headers or not. This slot is
only relevant for types that have the (new) GC flag bit set. If the
tp_is_gc slot is NULL (by far the most common case), all instances of
the type are deemed to have GC headers. This slot is called by the
PyObject_IS_GC() macro (which is only used twice, both times in
gcmodule.c).
I also changed the extern declarations for a bunch of GC-related
functions (_PyObject_GC_Del etc.): these always exist but objimpl.h
only declared them when WITH_CYCLE_GC was defined, but I needed to be
able to reference them without #ifdefs. (When WITH_CYCLE_GC is not
defined, they do the same as their non-GC counterparts anyway.)
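(The slot in a sketch, much as the type metatype implements it: only
heap-allocated types carry GC headers.)
    static int
    type_is_gc(PyTypeObject *type)
    {
        /* statically allocated type objects have no GC header */
        return type->tp_flags & Py_TPFLAGS_HEAPTYPE;
    }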
The patch repaired internal gcc compiler errors on BeOS.
This checkin repairs them in a simpler way, by explicitly casting the
platform INFINITY to double.
no backwards compatibility to worry about, so I just pushed the
'closure' struct member to the back -- it's never used in the current
code base (I may eliminate it, but that's more work because the getter
and setter signatures would have to change.)
As examples, I added actual docstrings to the getset attributes of a
few types: file.closed, xxsubtype.spamdict.state.
compatibility, this required all places where an array of "struct
memberlist" structures was declared that is referenced from a type's
tp_members slot to change the type of the structure to PyMemberDef;
"struct memberlist" is now only used by old code that still calls
PyMember_Get/Set. The code in PyObject_GenericGetAttr/SetAttr now
calls the new APIs PyMember_GetOne/SetOne, which take a PyMemberDef
argument.
As examples, I added actual docstrings to the attributes of a few
types: file, complex, instance method, super, and xxsubtype.spamlist.
Also converted the symtable to new style getattr.
hack, and it's even more disgusting than a PyInstance_Check() call.
If the tp_compare slot is the slot used for overrides in Python,
it's always called.
Add some tests that show what should work too.
Renamed the 'readonly' field to 'flags' and defined some new flag
bits: READ_RESTRICTED and WRITE_RESTRICTED, as well as a shortcut
RESTRICTED that means both.
tuple(i) repaired to return a true tuple when i is an instance of a
tuple subclass.
Added PyTuple_CheckExact macro.
PySequence_Tuple(): if a tuple-like object isn't exactly a tuple, it's
not safe to return the object as-is -- make a new tuple of it instead.
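(The pattern in PySequence_Tuple(), roughly:)
    if (PyTuple_CheckExact(v)) {
        Py_INCREF(v);           /* a real tuple: safe to share */
        return v;
    }
    if (PyTuple_Check(v))       /* a tuple subclass: copy it instead */
        return PyTuple_GetSlice(v, 0, PyTuple_GET_SIZE(v));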
Given an immutable type M, and an instance I of a subclass of M, the
constructor call M(I) was just returning I as-is; but it should return a
new instance of M. This fixes it for M in {int, long}. Strings, floats
and tuples remain to be done.
Added new macros PyInt_CheckExact and PyLong_CheckExact, to more easily
distinguish between "is" and "is a" (i.e., only an int passes
PyInt_CheckExact, while any subclass of int passes PyInt_Check).
Added private API function _PyLong_Copy.
iterable object. I'm not sure how that got overlooked before!
Got rid of the internal _PySequence_IterContains, introduced a new
internal _PySequence_IterSearch, and rewrote all the iteration-based
"count of", "index of", and "is the object in it or not?" routines to
just call the new function. I suppose it's slower this way, but the
code duplication was getting depressing.
While not even documented, they were clearly part of the C API,
there's no great difficulty to support them, and it has the cool
effect of not requiring any changes to ExtensionClass.c.
requires that errno ever get set, and it looks like glibc is already
playing that game. New rules:
+ Never use HUGE_VAL. Use the new Py_HUGE_VAL instead.
+ Never believe errno. If overflow is the only thing you're interested in,
use the new Py_OVERFLOWED(x) macro. If you're interested in any libm
errors, use the new Py_SET_ERANGE_IF_OVERFLOW(x) macro, which attempts
to set errno the way C89 said it worked.
Unfortunately, none of these are reliable, but they work on Windows and I
*expect* under glibc too.
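(The new rules, sketched around a pow() call:)
    errno = 0;
    r = pow(x, y);
    Py_SET_ERANGE_IF_OVERFLOW(r);   /* fake the C89 errno behavior */
    if (errno == ERANGE && Py_OVERFLOWED(r)) {
        PyErr_SetString(PyExc_OverflowError, "math range error");
        return NULL;
    }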
but will be the foundation for Good Things:
+ Speed PyLong_AsDouble.
+ Give PyLong_AsDouble the ability to detect overflow.
+ Make true division of long/long nearly as accurate as possible (no
spurious infinities or NaNs).
+ Return non-insane results from math.log and math.log10 when passing a
long that can't be approximated by a double better than HUGE_VAL.
PEP 238. Changes:
- add a new flag variable Py_DivisionWarningFlag, declared in
pydebug.h, defined in object.c, set in main.c, and used in
{int,long,float,complex}object.c. When this flag is set, the
classic division operator issues a DeprecationWarning message.
- add a new API PyRun_SimpleStringFlags() to match
PyRun_SimpleString(). The main() function calls this so that
commands run with -c can also benefit from -Dnew.
- While I was at it, I changed the usage message in main() somewhat:
alphabetized the options, split it in *four* parts to fit in under
512 bytes (not that I still believe this is necessary -- doc strings
elsewhere are much longer), and perhaps most visibly, don't display
the full list of options on each command line error. Instead, the
full list is only displayed when -h is used, and otherwise a brief
reminder of -h is displayed. When -h is used, write to stdout so
that you can do `python -h | more'.
Notes:
- I don't want to use the -W option to control whether the classic
division warning is issued or not, because the machinery to decide
whether to display the warning or not is very expensive (it involves
calling into the warnings.py module). You can use -Werror to turn
the warnings into exceptions though.
- The -Dnew option doesn't select future division for all of the
program -- only for the __main__ module. I don't know if I'll ever
change this -- it would require changes to the .pyc file magic
number to do it right, and a more global notion of compiler flags.
- You can usefully combine -Dwarn and -Dnew: this gives the __main__
module new division, and warns about classic division everywhere
else.
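(Sketch of the new Flags API from an embedding app, assuming the
CO_FUTURE_DIVISION bit described elsewhere in these notes:)
    PyCompilerFlags cf;
    cf.cf_flags = CO_FUTURE_DIVISION;   /* compile "/" as true division */
    PyRun_SimpleStringFlags("print 1/2\n", &cf);    /* prints 0.5 */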
the old flag to still compile. Remove the PyType_BASICSIZE and
PyType_SET_BASICSIZE macros. Add PyObject_GC_New, PyObject_GC_NewVar,
PyObject_GC_Resize, PyObject_GC_Del, PyObject_GC_Track,
PyObject_GC_UnTrack. Part of SF patch #421893.
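(Typical allocation pattern with the renamed API; MyObject and
MyObject_Type are placeholders.)
    MyObject *op = PyObject_GC_New(MyObject, &MyObject_Type);
    if (op == NULL)
        return NULL;
    op->child = NULL;                   /* initialize all fields first */
    PyObject_GC_Track((PyObject *)op);  /* then hand it to the collector */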
pyport.h: typedef a new Py_intptr_t type.
DELICATE ASSUMPTION: That HAVE_UINTPTR_T implies intptr_t is
available as well as uintptr_t. If that turns out not to be
true, things must get uglier (C99 wants both, so I think it's
an assumption we're *likely* to get away with).
thread_nt.h, PyThread_start_new_thread: MS _beginthread is documented
as returning unsigned long; no idea why uintptr_t was being used.
Others: Always use Py_[u]intptr_t, never [u]intptr_t directly.
PyErr_Format() these new C API methods can be used instead of
sprintf()'s into hardcoded char* buffers. This allows us to fix
many situation where long package, module, or class names get
truncated in reprs.
PyString_FromFormat() is the varargs variety.
PyString_FromFormatV() is the va_list variety.
Original PyErr_Format() code was modified to allow %p and %ld
expansions.
Many reprs were converted to this, checkins coming soon. Not
changed: complex_repr(), float_repr(), float_print(), float_str(),
int_repr(). There may be other candidates not yet converted.
Closes patch #454743.
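(Typical conversion, sketched; no fixed-size buffer left to truncate
a long type name:)
    return PyString_FromFormat("<%s object at %p>",
                               self->ob_type->tp_name, self);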
CO_FUTURE_DIVISION flag. Redid this to use Jeremy's PyCF_MASK #define
instead, so we dont have to remember to fiddle individual feature names
here again.
pythonrun.h: Also #define a PyCF_MASK_OBSOLETE mask. This isn't used
yet, but will be as part of the PEP 264 implementation (compile() mustn't
raise an error just because old code uses a flag name that's become
obsolete; a warning may be appropriate, but not an error; so compile() has
to know about obsolete flags too, but nobody is going to remember to
update compile() with individual obsolete flag names across releases either
-- i.e., this is the flip side of PyEval_MergeCompilerFlags's oversight).
- Do not compile unicodeobject, unicodectype, and unicodedata if Unicode is disabled
- check for Py_USING_UNICODE in all places that use Unicode functions
- disables unicode literals, and the builtin functions
- add the types.StringTypes list
- remove Unicode literals from most tests.
The descr changes moved the dispatch for calling objects from
call_object() in ceval.c to PyObject_Call() in abstract.c.
call_object() and the many functions it used in ceval.c were no longer
used, but were not removed.
Rename meth_call() as PyCFunction_Call() so that it can be called by
the CALL_FUNCTION opcode in ceval.c.
Also, fix error message that referred to PyEval_EvalCodeEx() by its
old name eval_code2(). (I'll probably refer to it by its old name,
too.)
Replace individual slots in PyFutureFeatures with a single bitmask
with one field per feature. The flags for this bitmask are the same
as the flags used in the co_flags slot of a code object.
XXX This means we waste several bits, because they are used
for co_flags but have no meaning for future statements. Don't
think this is an issue.
Remove the NESTED_SCOPES_DEFAULT define and others. Not sure what
they were for anyway.
Remove all the PyCF_xxx flags, but define PyCF_MASK in terms of the
CO_xxx flags that are relevant for this release.
Change definition of PyCompilerFlags so that cf_flags matches
co_flags.
Removed all instances of Py_UCS2 from the codebase, and so also (I hope)
the last remaining reliance on the platform having an integral type
with exactly 16 bits.
PyUnicode_DecodeUTF16() and PyUnicode_EncodeUTF16() now read and write
one byte at a time.
with functionality needed for both unix-Python and MacPython and a
new smaller ./Mac/Python/macglue.c which contains MacPython stuff only.
pymactoolbox.h has moved to ./Include from ./Mac/Include and now also
contains the relevant stuff from macglue.h.
The net effect of this is that the ./Mac subdirectory is not needed
anymore for building the unix-Python core on MacOSX (it is needed
for building the extension modules).
This introduces:
- A new operator // that means floor division (the kind of division
where 1/2 is 0).
- The "future division" statement ("from __future__ import division)
which changes the meaning of the / operator to implement "true
division" (where 1/2 is 0.5).
- New overloadable operators __truediv__ and __floordiv__.
- New slots in the PyNumberMethods struct for true and floor division,
new abstract APIs for them, new opcodes, and so on.
I emphasize that without the future division statement, the semantics
of / will remain unchanged until Python 3.0.
Not yet implemented are warnings (default off) when / is used with int
or long arguments.
This has been on display since 7/31 as SF patch #443474.
Flames to /dev/null.
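(The new abstract APIs, in a sketch:)
    PyObject *q = PyNumber_FloorDivide(a, b);   /* a // b */
    PyObject *t = PyNumber_TrueDivide(a, b);    /* a / b, future-style */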
- Add an explicit call to PyType_Ready(&PyList_Type) to pythonrun.c
(just for the heck of it, really -- we should either explicitly
ready all types, or none).
Python warning which can be caught by means of the Python warning
framework.
It also adds two new APIs which hopefully make it easier for Python
to switch to buffer overflow safe [v]snprintf() APIs for error
reporting et al. The two new APIs are PyOS_snprintf() and
PyOS_vsnprintf() and work just like the standard ones in many
C libs. On platforms which have snprintf(), the native APIs are used,
on all others an emulation tries to do its best.
And remove all the extern decls in the middle of .c files.
Apparently, it was excluded from the header file because it is
intended for internal use by the interpreter. It's still intended for
internal use and documented as such in the header file.
that 'yield' is a keyword. This doesn't help test_generators at all! I
don't know why not. These things do work now (and didn't before this
patch):
1. "from __future__ import generators" now works in a native shell.
2. Similarly "python -i xxx.py" now has generators enabled in the
shell if xxx.py had them enabled.
3. This program (which was my doctest proxy) works fine:
from __future__ import generators
source = """\
def f():
yield 1
"""
exec compile(source, "", "single") in globals()
print type(f())
that info to code dynamically compiled *by* code compiled with generators
enabled. Doesn't yet work because there's still no way to tell the parser
that "yield" is OK (unlike nested_scopes, the parser has its fingers in
this too).
Replaced PyEval_GetNestedScopes by a more-general
PyEval_MergeCompilerFlags. Perhaps I should not have? I doubted it was
*intended* to be part of the public API, so just did.
the yield statement. I figure we have to have this in before I can
release 2.2a1 on Wednesday.
Note: test_generators is currently broken, I'm counting on Tim to fix
this.
Although this is a one-character change, more work needs to be done:
the compiler can get rid of a lot of non-nested-scopes code, the
documentation needs to be updated, the future statement can be
simplified, etc.
But this change enables the nested scope semantics everywhere, and
that's the important part -- we can now test code for compatibility
with nested scopes by default.
path (with no profile/trace function) through eval_code2() and
eval_frame() avoids several checks.
In the common cases of calls, returns, and exception propagation,
eval_code2() and eval_frame() used to test two values in the
thread-state: the profiling function and the tracing function. With
this change, a flag is set in the thread-state if either of these is
active, allowing a single check to suffice when both are NULL. This
also simplifies the code needed when either function is in use but is
already active (to avoid profiling/tracing the profiler/tracer); the
flag is set to 0 when the profile/trace code is entered, allowing the
same check to suffice for "already in the tracer" for call/return/
exception events.
Python interpreter.
This change adds two new C-level APIs: PyEval_SetProfile() and
PyEval_SetTrace(). These can be used to install profile and trace
functions implemented in C, which can operate at much higher speeds
than Python-based functions. The overhead for calling a C-based
profile function is a very small fraction of a percent of the overhead
involved in calling a Python-based function.
The machinery required to call a Python-based profile or trace
function has been moved to sysmodule.c, where sys.setprofile() and
sys.settrace() simply become users of the new interface.
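(A minimal C-level profile function, for illustration; ncalls is a
made-up counter, and PyFrameObject needs frameobject.h.)
    static long ncalls = 0;

    static int
    my_profile(PyObject *obj, PyFrameObject *frame, int what, PyObject *arg)
    {
        if (what == PyTrace_CALL)
            ncalls++;
        return 0;               /* nonzero would report an error */
    }

    /* installed with: PyEval_SetProfile(my_profile, NULL); */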
Implement sys.maxunicode.
Explicitly wrap around upper/lower computations for wide Py_UNICODE.
When decoding large characters with UTF-8, represent expected test
results using the \U notation.
Add configure option --enable-unicode.
Add config.h macros Py_USING_UNICODE, PY_UNICODE_TYPE, Py_UNICODE_SIZE,
SIZEOF_WCHAR_T.
Define Py_UCS2.
Encode and decode large UTF-8 characters into single Py_UNICODE values
for wide Unicode types; likewise for UTF-16.
Remove test whether sizeof Py_UNICODE is two.
unicodeobject.h, which forces sizeof(Py_UNICODE) == sizeof(Py_UCS4).
(this may be good enough for platforms that don't have a 16-bit
type. The UTF-16 codecs don't work, though)
the next free valuestack slot, not to the base (in America, stacks push
and pop at the top -- they mutate at the bottom in Australia <wink>).
eval_frame(): assert that f_stacktop isn't NULL upon entry.
frame_dealloc(): avoid ordered pointer comparisons involving f_stacktop
when f_stacktop is NULL.
Gave Python linear-time repr() implementations for dicts, lists, strings.
This means, e.g., that repr(range(50000)) is no longer 50x slower than
pprint.pprint() in 2.2 <wink>.
I don't consider this a bugfix candidate, as it's a performance boost.
Added _PyString_Join() to the internal string API. If we want that in the
public API, fine, but then it requires runtime error checks instead of
asserts.
_PyLong_FromByteArray
_PyLong_AsByteArray
Untested and probably buggy -- they compile OK, but nothing calls them
yet. Will soon be called by the struct module, to implement x-platform
'q' and 'Q'.
If other people have uses for them, we could move them into the public API.
See longobject.h for usage details.
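(Sketch of the intended use:)
    unsigned char buf[8];
    memset(buf, 0xff, sizeof(buf));     /* needs <string.h> */
    /* little_endian=1, is_signed=1: this is the Python long -1 */
    PyObject *n = _PyLong_FromByteArray(buf, sizeof(buf), 1, 1);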
UTF-16 codec will now interpret and remove a *leading* BOM mark.
Subsequent BOM characters are no longer interpreted and removed.
UTF-16-LE and -BE pass through all BOM mark characters.
These changes should get the UTF-16 codec more in line with what
the Unicode FAQ recommends w/r to BOM marks.
and introduces a new method .decode().
The major change is that strg.encode() will no longer try to convert
Unicode returns from the codec into a string, but instead pass along
the Unicode object as-is. The same is now true for all other codec
return types. The underlying C APIs were changed accordingly.
Note that even though this does have the potential of breaking
existing code, the chances are low since conversion from Unicode
previously took place using the default encoding which is normally
set to ASCII, rendering this auto-conversion mechanism useless for
most Unicode encodings.
The good news is that you can now use .encode() and .decode() with
much greater ease and that the door was opened for better accessibility
of the builtin codecs.
As demonstration of the new feature, the patch includes a few new
codecs which allow string to string encoding and decoding (rot13,
hex, zip, uu, base64).
Written by Marc-Andre Lemburg. Copyright assigned to the PSF.
Store floats and doubles to full precision in marshal.
Test that floats read from .pyc/.pyo closely match those read from .py.
Declare PyFloat_AsString() in floatobject header file.
Add new PyFloat_AsReprString() API function.
Document the functions declared in floatobject.h.
safely together and don't duplicate logic (the common logic was factored
out into new private API function _PySequence_IterContains()).
Visible change:
some_complex_number in some_instance
no longer blows up if some_instance has __getitem__ but neither
__contains__ nor __iter__. test_iter changed to ensure that remains true.
NEEDS DOC CHANGES.
This one surprised me! While I expected tuple() to be a no-brainer, turns
out it's actually dripping with consequences:
1. It will *allow* the popular PySequence_Fast() to work with any iterable
object (code for that not yet checked in, but should be trivial).
2. It caused two std tests to fail. This is because some places used
   PySequence_Tuple() (the C spelling of tuple()) as an indirect way to test
whether something *is* a sequence. But tuple() code only looked for the
existence of sq->item to determine that, and e.g. an instance passed
that test whether or not it supported the other operations tuple()
needed (e.g., __len__). So some things the tests *expected* to fail
with an AttributeError now fail with a TypeError instead. This looks
like an improvement to me; e.g., test_coercion used to produce 559
TypeErrors and 2 AttributeErrors, and now they're all TypeErrors. The
error details are more informative too, because the places calling this
were *looking* for TypeErrors in order to replace the generic tuple()
"not a sequence" msg with their own more specific text, and
AttributeErrors snuck by that.
patch for sharing single character Unicode objects.
Martin's patch had to be reworked in a number of ways to take Unicode
resizing into consideration as well. Here's what the updated patch
implements:
* Single character Unicode strings in the Latin-1 range are shared
(not only ASCII chars as in Martin's original patch).
* The ASCII and Latin-1 codecs make use of this optimization,
providing a noticeable speedup for single character strings. Most
Unicode methods can use the optimization as well (by virtue
of using PyUnicode_FromUnicode()).
* Some code cleanup was done (replacing memcpy with Py_UNICODE_COPY).
* PyUnicode_Resize() can now also handle the case of resizing
unicode_empty which previously resulted in an error.
* Modified the internal API _PyUnicode_Resize() and
the public PyUnicode_Resize() API to handle references to
shared objects correctly. The _PyUnicode_Resize() signature
changed due to this.
* Callers of PyUnicode_FromUnicode() may now only modify the Unicode
object contents of the returned object in case they called the API
with NULL as content template.
Note that even though this patch passes the regression tests, there
may still be subtle bugs in the sharing code.
sees it (test_iter.py is unchanged).
- Added a tp_iternext slot, which calls the iterator's next() method;
this is much faster for built-in iterators over built-in types
such as lists and dicts, speeding up pybench's ForLoop by about
25% compared to Python 2.1. (Now there's a good argument for
iterators. ;-)
- Renamed the built-in sequence iterator SeqIter, affecting the C API
functions for it. (This frees up the PyIter prefix for generic
iterator operations.)
- Added PyIter_Check(obj), which checks that obj's type has a
tp_iternext slot and that the proper feature flag is set.
- Added PyIter_Next(obj) which calls the tp_iternext slot. It has a
somewhat complex return condition due to the need for speed: when it
returns NULL, it may not have set an exception condition, meaning
the iterator is exhausted; when the exception StopIteration is set
(or a derived exception class), it means the same thing; any other
exception means some other error occurred.
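(The canonical C-level loop, given those rules:)
    PyObject *it = PyObject_GetIter(seq);
    PyObject *item;
    if (it == NULL)
        return NULL;
    while ((item = PyIter_Next(it)) != NULL) {
        /* ... use item ... */
        Py_DECREF(item);
    }
    Py_DECREF(it);
    if (PyErr_Occurred())
        return NULL;    /* a real error */
    /* else the iterator was simply exhausted */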
new slot tp_iter in type object, plus new flag Py_TPFLAGS_HAVE_ITER
new C API PyObject_GetIter(), calls tp_iter
new builtin iter(), with two forms: iter(obj), and iter(function, sentinel)
new internal object types iterobject and calliterobject
new exception StopIteration
new opcodes for "for" loops, GET_ITER and FOR_ITER (also supported by dis.py)
new magic number for .pyc files
new special method for instances: __iter__() returns an iterator
iteration over dictionaries: "for x in dict" iterates over the keys
iteration over files: "for x in file" iterates over lines
TODO:
documentation
test suite
decide whether to use a different way to spell iter(function, sentinel)
decide whether "for key in dict" is a good idea
use iterators in map/filter/reduce, min/max, and elsewhere (in/not in?)
speed tuning (make next() a slot tp_next???)
Update docstring and library reference section on 'sys' module.
New API PyErr_Display, just for displaying errors, called by excepthook.
Uncaught exceptions now call sys.excepthook; if that fails, we fall back
to calling PyErr_Display directly.
Also comes with sys.__excepthook__ and sys.__displayhook__.
must now initialize the extra field used by the weak-ref machinery to
NULL themselves, to avoid having to require PyObject_INIT() to check
if the type supports weak references and do it there. This causes less
work to be done for all objects (the type object does not need to be
consulted to check for the Py_TPFLAGS_HAVE_WEAKREFS bit).
If a module has a future statement enabling nested scopes, they are
also enabled for the exec statement and the functions compile() and
execfile() if they occur in the module.
If Python is run with the -i option, which enters interactive mode
after executing a script, and the script it runs enables nested
scopes, they are also enabled in interactive mode.
XXX The use of -i with -c "from __future__ import nested_scopes" is
not supported. What's the point?
To support these changes, many function variants have been added to
pythonrun.c. All the variants' names end with Flags, and they take an
extra PyCompilerFlags * argument. It is possible that this complexity
will be eliminated in a future version of the interpreter in which
nested scopes are not optional.
with free variables. Thanks to Martin v. Loewis for finding two of
the problems. This fixes SF bug #405583.
There is also a C API change: PyFrame_New() is reverting to its
pre-2.1 signature. The change introduced by nested scopes was a
mistake. XXX Is this okay between beta releases?
cell_clear(), the GC helper, must decref its reference to break
cycles.
frame_dealloc() must dealloc all cell vars and free vars in addition
to locals.
eval_code2() setup code must INCREF cells it copies out of the
closure.
The STORE_DEREF opcode implementation must DECREF the object it passes
to PyCell_Set().
(Also remove warning about module-level global decl, because we can't
distinguish from code passed to exec.)
Define a PyCompilerFlags type that contains a single element,
cf_nested_scopes, that is true if a nested scopes future statement has
been entered at the interactive prompt.
New API functions:
PyNode_CompileFlags()
PyRun_InteractiveOneFlags()
-- same as their non-Flags counterparts except that they take an
optional PyCompilerFlags pointer
compile.c: In jcompile() use PyCompilerFlags argument. If
cf_nested_scopes is true, compile code with nested scopes. If it
is false, but the code has a valid future nested scopes statement,
set it to true.
pythonrun.c: Create a new PyCompilerFlags object in
PyRun_InteractiveLoop() and thread it through to
PyRun_InteractiveOneFlags().
for errors raised in future.c.
Move some helper functions from compile.c to errors.c and make them
API functions: PyErr_SyntaxLocation() and PyErr_ProgramText().
XXX still need to integrate into symtable API
compile.h: Remove ff_n_simple_stmt; obsolete.
Add ff_found_docstring used internally to skip one and only
one string at the beginning of a module.
compile.c: Add check for from __future__ imports too far into the file.
In symtable_global() check for -1 returned from
symtable_lookup(), which signifies name not defined.
Add missing DECREF in symtable_add_def.
Free c->c_future.
future.c: Add special handling for multiple statements joined on a
single line using one or more semicolons; this form can
include an illegal future statement that would otherwise be
hard to detect.
Add support for detecting and skipping doc strings.
Makefile.pre.in: add target future.o
Include/compile.h: define PyFutureFeatures and PyNode_Future()
add c_future slot to struct compiling
Include/symtable.h: add st_future slot to struct symtable
Python/future.c: implementation of PyNode_Future()
Python/compile.c: use PyNode_Future() for nested_scopes support
Python/symtable.c: include compile.h to pick up PyFutureFeatures decl
compile.h: #define NESTED_SCOPES_DEFAULT 0 for Python 2.1
__future__ feature name: "nested_scopes"
symtable.h: Add st_nested_scopes slot. Define flags to track exec and
import star.
Lib/test/test_scope.py: requires nested scopes
compile.c: Fiddle with error messages.
Reverse the sense of ste_optimized flag on
PySymtableEntryObjects. If it is true, there is an optimization
conflict.
Modify get_ref_type to respect st_nested_scopes flags.
Refactor symtable_load_symbols() into several smaller functions,
which use struct symbol_info to share variables. In new function
symtable_update_flags(), raise an error or warning for import * or
bare exec that conflicts with nested scopes. Also, modify handling
of free variables to respect st_nested_scopes flag.
In symtable_init() assign st_nested_scopes flag to
NESTED_SCOPES_DEFAULT (defined in compile.h).
Add preliminary and often incorrect implementation of
symtable_check_future().
Add symtable_lookup() helper for future use.
release the interned string dictionary. This is useful for memory
use debugging because it eliminates a huge source of noise from the
reports. Only defined when INTERN_STRINGS is defined.
of nested functions. Either is allowed in a function if it contains
no defs or lambdas or the defs and lambdas it contains have no free
variables. If a function is itself nested and has free variables,
either is illegal.
Revise the symtable to use a PySymtableEntryObject, which holds all
the relevant information for a scope, rather than using a bunch of
st_cur_XXX pointers in the symtable struct. The changes simplify the
internal management of the current symtable scope and of the stack.
Added new C source file: Python/symtable.c. (Does the Windows build
process need to be updated?)
As part of these changes, the initial _symtable module interface
introduced in 2.1a2 is replaced. A dictionary of
PySymtableEntryObjects is returned.
symtable.h, so that they can be used by external module.
Improve error handling in symtable_enter_scope(), which returns an
error code that went unchecked by most callers. XXX The error handling
in symtable code is sloppy in general.
Modify symtable to record the line number that begins each scope.
This can help to identify which code block is being referred to when
multiple blocks are bound to the same name.
Add st_scopes dict that is used to preserve scope info when
PyNode_CompileSymtable() is called. Otherwise, this information is
tossed as soon as it is no longer needed.
Add Py_SymtableString() to pythonrun; analogous to Py_CompileString().
discussion on python-dev. 'from mod import *' is still banned except
at the module level.
Fix value for special NOOPT entry in symtable. Initialize to 0 instead
of None, so that later uses of PyInt_AS_LONG() are valid. (Bug
reported by Donn Cave.)
replace local REPR macros with PyObject_REPR in object.h
This change eliminates an extra malloc/free when a frame with free
variables is created. Any cell vars or free vars are stored in
f_localsplus after the locals and before the stack.
eval_code2() fills in the appropriate values after handling
initialization of locals.
To track the size the frame has an f_size member that tracks the total
size of f_localsplus. It used to be implicitly f_nlocals + f_stacksize.
They're named as if public, so I did a Bad Thing by changing
PyMarshal_ReadObjectFromFile() to suck up the remainder of the file in one
gulp: anyone who counted on that leaving the file pointer merely at the
end of the next object would be screwed. So restored
PyMarshal_ReadObjectFromFile() to its earlier state, renamed the new greedy
code to PyMarshal_ReadLastObjectFromFile(), and changed Python internals to
call the latter instead.
The majority of the changes are in the compiler. The mainloop changes
primarily to implement the new opcodes and to pass a function's
closure to eval_code2(). Frames and functions got new slots to hold
the closure.
Include/compile.h
Add co_freevars and co_cellvars slots to code objects.
Update PyCode_New() to take freevars and cellvars as arguments
Include/funcobject.h
Add func_closure slot to function objects.
Add GetClosure()/SetClosure() functions (and corresponding
macros) for getting at the closure.
Include/frameobject.h
PyFrame_New() now takes a closure.
Include/opcode.h
Add four new opcodes: MAKE_CLOSURE, LOAD_CLOSURE, LOAD_DEREF,
STORE_DEREF.
Remove comment about old requirement for opcodes to fit in 7
bits.
compile.c
Implement changes to code objects for co_freevars and co_cellvars.
Modify symbol table to use st_cur_name (string object for the name
of the current scope) and st_cur_children (list of nested blocks).
Also define st_nested, which might more properly be called
st_cur_nested. Add several DEF_XXX flags to track def-use
information for free variables.
New or modified functions of note:
com_make_closure(struct compiling *, PyCodeObject *)
Emit LOAD_CLOSURE opcodes as needed to pass cells for free
variables into nested scope.
com_addop_varname(struct compiling *, int, char *)
Emits opcodes for LOAD_DEREF and STORE_DEREF.
get_ref_type(struct compiling *, char *name)
Return NAME_CLOSURE if ref type is FREE or CELL
symtable_load_symbols(struct compiling *)
Decides what variables are cell or free based on def-use info.
Can now raise SyntaxError if nested scopes are mixed with
exec or from blah import *.
make_scope_info(PyObject *, PyObject *, int, int)
Helper functions for symtable scope stack.
symtable_update_free_vars(struct symtable *)
After a code block has been analyzed, it must check each of
its children for free variables that are not defined in the
block. If a variable is free in a child and not defined in
the parent, then it is defined by the block enclosing the
current one or it is a global. This does the right logic.
symtable_add_use() is now a macro for symtable_add_def()
symtable_assign(struct symtable *, node *)
Use goto instead of for (;;)
Fixed bug in symtable where name of keyword argument in function
call was treated as assignment in the scope of the call site. Ex:
def f():
g(a=2) # a was considered a local of f
ceval.c
eval_code2() now takes one more argument, a closure.
Implement LOAD_CLOSURE, LOAD_DEREF, STORE_DEREF, MAKE_CLOSURE.
Also: When a name error occurs for a global variable, report that the
name was global in the error message.
Objects/frameobject.c
Initialize f_closure to be a tuple containing space for cellvars
and freevars. f_closure is NULL if neither are present.
Objects/funcobject.c
Add support for func_closure.
Python/import.c
Change the magic number.
Python/marshal.c
Track changes to code objects.
supposed to be declared in system include files (with a proper prototype.)
Should be moved to a platform-specific block if anyone finds out which
broken platforms need it :-)
implementation details inside the ucnhash module.
also cleaned up the unicode copyright blurb a little; Secret Labs'
internal revision history isn't that interesting...
except that it always returns Unicode objects.
A new C API PyObject_Unicode() is also provided.
This closes patch #101664.
Written by Marc-Andre Lemburg. Copyright assigned to Guido van Rossum.
Closes SF patch #103123.
funcobject.h:
PyFunctionObject: add the func_dict slot.
funcobject.c:
PyFunction_New(): Initialize the func_dict slot to NULL.
func_getattr(): Rename to func_getattro() and change the
signature. It's more efficient to use attro methods and dig the C
string out than it is to re-convert a C string to a PyString.
Also, add support for getting the __dict__ (a.k.a. func_dict)
attribute, and for getting an arbitrary function attribute.
func_setattr(): Rename to func_setattro() and change the signature
for the same reason. Also add support for setting __dict__
(a.k.a. func_dict) and any arbitrary function attribute.
func_dealloc(): Be sure to DECREF the func_dict slot.
func_traverse(): Be sure to traverse func_dict too.
PyFunction_Type: make the necessary func_?etattro() changes.
classobject.c:
instancemethod_memberlist: Add __dict__
instancemethod_setattro(): New method to set arbitrary attributes
on methods (really the underlying im_func). Raise TypeError when
the instance is bound or when you're trying to set one of the
reserved im_* attributes.
instancemethod_getattr(): Renamed to instancemethod_getattro()
since that's what it really is. Also, added support for getting
arbitrary attributes through the im_func.
PyMethod_Type: Do the ?etattr{,o} dance.
regardless of whether the system getopt() does what we want. This avoids the
hassle with prototypes and externs, and the check to see if the system
getopt() does what we want. Prefix optind, optarg and opterr with _PyOS_ to
avoid name clashes. Add new include file to define the right symbols. Fix
Demo/pyserv/pyserv.c to include getopt.h itself, instead of relying on
Python to provide it.
#define'd to an unreasonable value (several recent gcc systems have
misdefined it, causing bogus overflows in integer multiplication). Nuke
CHAR_BIT entirely.
Add definitions of INT_MAX and LONG_MAX to pyport.h.
Remove includes of limits.h and conditional definitions of INT_MAX
and LONG_MAX elsewhere.
This closes SourceForge patch #101659 and bug #115323.
Add three new convenience functions to the PyModule_*() family:
PyModule_AddObject(), PyModule_AddIntConstant(), PyModule_AddStringConstant().
This closes SourceForge patch #101233.
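For example, a module init function might use them like this (the module
and constant names here are made up):

    #include "Python.h"

    static PyMethodDef spam_methods[] = {
        {NULL, NULL}            /* sentinel */
    };

    void
    initspam(void)
    {
        PyObject *m = Py_InitModule("spam", spam_methods);
        /* Each helper returns 0 on success, -1 with an exception
           set on failure.  PyModule_AddObject steals a reference
           to its value on success. */
        PyModule_AddIntConstant(m, "MAX_SPAM", 42);
        PyModule_AddStringConstant(m, "VERSION", "1.0");
        PyModule_AddObject(m, "default_timeout", PyFloat_FromDouble(30.0));
    }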
Note a curious extension to the std C rules: x, X and o formatting can never produce
a sign character in C, so the '+' and ' ' flags are meaningless for them. But
unbounded ints *can* produce a sign character under these conversions (no fixed-
width bitstring is wide enough to hold all negative values in 2's-comp form). So
these flags become meaningful in Python when formatting a Python long which is too
big to fit in a C long. This required shuffling around existing code, which hacked
x and X conversions to death when both the '#' and '0' flags were specified: the
hacks weren't strong enough to deal with the simultaneous possibility of the ' ' or
'+' flags too, since signs were always meaningless before for x and X conversions.
Isomorphic shuffling was required in unicodeobject.c.
Also added dozens of non-trivial new unbounded-int test cases to test_format.py.
which implements the automatic conversion from Unicode to a string
object using the default encoding.
The new API is then put to use to have eval() and exec accept
Unicode objects as code parameter. This closes bugs #110924
and #113890.
As side-effect, the traditional C APIs PyString_Size() and
PyString_AsString() will also accept Unicode objects as
parameters.
I can't test this, so I'm just checking it in with blind faith in Andy.
I've tested that it doesn't break a non-Pth build on Linux.
Changes include:
- There's a --with-pth configure option.
- Instead of _GNU_PTH, we test for HAVE_PTH.
- Better signal handling.
- (The config.h.in file is regenerated in a slightly different order.)
It's hard to sort out what the bug was, exactly. So, Big Hammer:
1. Python shouldn't be in the business of #define'ing NULL, period.
2. Users of the Python C API shouldn't be in the business of not including
Python.h, period.
Hence:
1. Removed all #define's of NULL in Python source code (pyport.h and
object.h).
2. Since we're *relying* on stdio.h defining NULL, put an #error in
Python.h after its #include of stdio.h if NULL isn't defined then.
PyRun_FileEx(). These are the same as their non-Ex counterparts but
have an extra argument, a flag telling them to close the file when
done.
Then this is used by Py_Main() and execfile() to close the file after
it is parsed but before it is executed.
Adding APIs seems strange given the feature freeze but it's the only
way I see to close the bug report without incompatible changes.
[ Bug #110616 ] source file stays open after parsing is done (PR#209)
Add the EXTENDED_ARG opcode to the virtual machine, allowing 32-bit
arguments to opcodes instead of being forced to stick to the 16-bit
limit. This is especially useful for machine-generated code, which
can be too long for the SET_LINENO parameter to fit into 16 bits.
This closes the implementation portion of SourceForge patch #100893.
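The encoding is simple: EXTENDED_ARG carries the high 16 bits and the
opcode that follows carries the low 16. A minimal decoding sketch
(assuming the layout of one opcode byte followed by a little-endian
16-bit argument; this is not the actual ceval.c loop):

    #include "Python.h"
    #include "opcode.h"     /* for EXTENDED_ARG */

    static unsigned long
    full_oparg(unsigned char *pc)
    {
        unsigned long arg = pc[1] | (pc[2] << 8);
        if (pc[0] == EXTENDED_ARG) {
            pc += 3;        /* the real opcode follows the prefix */
            arg = (arg << 16) | (pc[1] | (pc[2] << 8));
        }
        return arg;
    }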
by the following.
typedef in a portable way the Python name for the C9X uintptr_t type.
The latter is the most portable way to spell an integral type to
which a void* can be cast and back again without losing
information. A parallel checkin hacks configure to check whether the
platform/compiler supports the C9X name.
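The guarantee, as a trivial sketch:

    #include "Python.h"
    #include <assert.h>

    static void
    check_roundtrip(void *p)
    {
        Py_uintptr_t u = (Py_uintptr_t)p;   /* pointer -> integer */
        assert((void *)u == p);             /* ... and back, losslessly */
    }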
name as n'. By doing some twists and turns, "as" is not a reserved word.
There is a slight change in semantics for 'from module import name' (it will
now honour the 'global' keyword) but only in cases that are explicitly
undocumented.
This was a misleading bug -- the true "bug" was that hash(x) gave an error
return when x is an infinity. Fixed that. Added new Py_IS_INFINITY macro to
pyport.h. Rearranged code to reduce growing duplication in hashing of float and
complex numbers, pushing Trent's earlier stab at that to a logical conclusion.
Fixed exceedingly rare bug where hashing of floats could return -1 even if there
wasn't an error (didn't waste time trying to construct a test case, it was simply
obvious from the code that it *could* happen). Improved complex hash so that
hash(complex(x, y)) doesn't systematically equal hash(complex(y, x)) anymore.
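A sketch of the kind of asymmetric combination involved (the multiplier
is illustrative, not necessarily the constant actually used):

    #include "Python.h"

    static long
    complex_hash_sketch(double real, double imag)
    {
        long hashreal = _Py_HashDouble(real);
        long hashimag = _Py_HashDouble(imag);
        /* Scaling one component breaks the old hash(complex(x, y))
           == hash(complex(y, x)) symmetry. */
        long combined = hashreal + 1000003L * hashimag;
        if (combined == -1)
            combined = -2;      /* -1 is reserved for error returns */
        return combined;
    }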
did the same anyway.
I'm not sure what to do with Tools/compiler/compiler/* -- that isn't part of
distutils, is it? Should it try to be compatible with the old bytecode version?
the Python Unicode implementation.
The internal buffer used for implementing the buffer protocol
is renamed to defenc to make this change visible. It now holds the
default encoded version of the Unicode object and is calculated
on demand (NULL otherwise).
Since the default encoding defaults to ASCII, this will mean that
Unicode objects which hold non-ASCII characters will no longer
work on C APIs using the "s" or "t" parser markers. C APIs must now
explicitly provide Unicode support via the "u", "U" or "es"/"es#"
parser markers in order to work with non-ASCII Unicode strings.
(Note: this patch will also have to be applied to the 1.6 branch
of the CVS tree.)
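For example, an extension function that must accept non-ASCII Unicode
strings would now spell its argument parsing with "u#" rather than
"s#" (a minimal sketch; the function name is made up):

    #include "Python.h"

    static PyObject *
    unicode_length(PyObject *self, PyObject *args)
    {
        Py_UNICODE *u;
        int len;
        if (!PyArg_ParseTuple(args, "u#", &u, &len))
            return NULL;    /* not a Unicode (or coercible) argument */
        return PyInt_FromLong((long)len);
    }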
This doesn't change the copyright status for these files -- just the
markings! Doing it on the main branch for these three files for which
the HEAD revision was pushed back into 1.6.
for systems that are missing those declarations from system include files.
Start by moving some pointy-haired ones from their previous locations to the
new section.
(The gethostname() one, for instance, breaks on several systems, because
some define it as (char *, size_t) and some as (char *, int).)
I purposely decided not to include the summary of used #defines like Tim did
in the first section of pyport.h. In my opinion, the number of #defines
likely to be used by this section would make such an overview unwieldy. I
would suggest documenting the non-obvious ones, though.
handlers "return void", according to ANSI C.
Removed the new Py_RETURN_FROM_SIGNAL_HANDLER macro.
Left RETSIGTYPE in the config stuff, because it's not clear to
me that others aren't relying on it (e.g., extension modules).
good C practice hasn't been available to everything all along.
Added Py_SAFE_DOWNCAST(VALUE, WIDE, NARROW) macro to pyport.h; this
just casts VALUE from type WIDE to type NARROW, but assert-fails if
Py_DEBUG is defined and info is lost due to casting.
Replaced a line in Fredrik's fix to marshal.c to use the new macro.
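Typical use, as a sketch: narrowing a size_t byte count for an API that
takes int.

    #include "Python.h"

    static int
    byte_count_as_int(size_t nbytes)
    {
        /* Just a cast in a release build; with Py_DEBUG defined it
           assert-fails if the value doesn't survive the narrowing. */
        return Py_SAFE_DOWNCAST(nbytes, size_t, int);
    }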
#if RETSIGTYPE != void
That isn't C, and MSVC properly refuses to compile it.
Introduced new Py_RETURN_FROM_SIGNAL_HANDLER macro in pyport.h
to expand to the correct thing based on RETSIGTYPE. However,
only void is ANSI! Do we still have platforms that return int?
The Unix config mess appears to #define RETSIGTYPE by magic
without being asked to, so I assume it's "a problem" across
Unices still.
comments, docstrings or error messages. I fixed two minor things in
test_winreg.py ("didn't" -> "Didn't" and "Didnt" -> "Didn't").
There is a minor style issue involved: Guido seems to have preferred English
grammar (behaviour, honour) in a couple places. This patch changes that to
American, which is the more prominent style in the source. I prefer English
myself, so if English is preferred, I'd be happy to supply a patch myself ;)
used for indentation related errors. This patch includes Ping's
improvements for indentation-related error messages.
Closes SourceForge patches #100734 and #100856.
This was a convenient excuse to create the pyport.h file recently
discussed!
Please use new Py_ARITHMETIC_RIGHT_SHIFT when right-shifting a
signed int and you *need* sign-extension. This is #define'd in
pyport.h, keying off new config symbol SIGNED_RIGHT_SHIFT_ZERO_FILLS.
If you're running on a platform that needs that symbol #define'd,
the std tests never would have worked for you (in particular,
at least test_long would have failed).
The autoconfig stuff got added to Python after my Unix days, so
I don't know how that works. Would someone please look into doing
& testing an auto-config of the SIGNED_RIGHT_SHIFT_ZERO_FILLS
symbol? It needs to be defined if & only if, e.g., (-1) >> 3 is
not -1.
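Usage sketch for the macro described above:

    #include "Python.h"

    static long
    shift_right_3(long v)
    {
        /* Behaves like v >> 3 with guaranteed sign extension, even
           on platforms whose compilers zero-fill when right-shifting
           a negative signed value. */
        return Py_ARITHMETIC_RIGHT_SHIFT(long, v, 3);
    }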
Stein -- thanks!). Incidentally removed all the Py_PROTO macros
from object.h, as they prevented my editor from magically finding
the definitions of the "coercion", "cmpfunc" and "reprfunc"
typedefs that were being redundantly applied in longobject.c.
match the ones in the Unicode implementation, but were extended
to be able to reuse the existing Unicode codecs for string
purposes too.
Conversion from string to Unicode and back are done using the
default encoding.
to switch on support for BSD and SysV on platforms which use glibc
such as Linux.
These #defines are documented in e.g. the file /usr/include/features.h
on Linux platforms and the SUSv2 docs.
implementation. This was really to test whether my new CVS+SSH
setup is more usable than the old one -- and turns out it is (for
whatever reason, it was impossible to do a commit before that
involved more than one directory).
which are true for alphabetic and alphanumeric characters, respectively.
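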
The macros are currently implemented using the existing is* tables
but will have to be updated to meet the Unicode standard definitions
(add tables for non-cased letters and letter modifiers).
errors in some of the hash algorithms. For example, in float_hash and
complex_hash a certain part of the value is not included in the hash
calculation. See Tim's, Guido's, and my discussion of this on
python-dev in May under the title "fix float_hash and complex_hash for
64-bit *nix"
(2) The hash algorithms that use pointers (e.g. func_hash, code_hash)
are universally not correct on Win64 (they assume that sizeof(long) ==
sizeof(void*))
As well, this patch significantly cleans up the hash code. It adds the
two functions _Py_HashDouble and _PyHash_VoidPtr that the various
hashing routines are changed to use.
These help maintain the hash function invariant: (a==b) =>
(hash(a)==hash(b)). I have added Lib/test/test_hash.py and
Lib/test/output/test_hash to test this for some cases.
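The invariant in miniature: since 3 == 3.0, their hashes must agree,
which is what routing both paths through the shared helper buys (this
sketch assumes that small ints hash to themselves, so hash(3) == 3):

    #include "Python.h"

    static int
    invariant_holds(void)
    {
        /* (a == b) implies (hash(a) == hash(b)), across types too */
        return _Py_HashDouble(3.0) == 3L;
    }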
This patch modifies the type structures of objects that
participate in GC. The object's tp_basicsize is increased when
GC is enabled. GC information is prefixed to the object to
maintain binary compatibility. GC objects also define the
tp_flag Py_TPFLAGS_GC.
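A sketch of the layout trick (names are illustrative; the real
definitions live in objimpl.h): the collector's header precedes the
PyObject in memory, so a PyObject * looks exactly as it always did.

    #include "Python.h"

    typedef struct gc_head {
        struct gc_head *gc_next;    /* linked list of collectable objects */
        struct gc_head *gc_prev;
    } gc_head;

    /* Convert between the collector's view and the object's view. */
    #define GC_HEAD_TO_OBJ(g)  ((PyObject *)((gc_head *)(g) + 1))
    #define OBJ_TO_GC_HEAD(o)  ((gc_head *)(o) - 1)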
the number of children of a node exceeds the max possible value for
the short that is used to count them. The Python runtime converts
this parser error into the SyntaxError "expression too long."
or fini of the builtin module.
_PyBuiltin_Init_1 => _PyBuiltin_Init
_PyBuiltin_Init_2 removed
_PyBuiltin_Fini_1 removed
_PyBuiltin_Fini_2 removed
These functions are used to initialize the _exceptions module.
init_exceptions added
fini_exceptions added
For more comments, read the patches@python.org archives.
For documentation read the comments in mymalloc.h and objimpl.h.
(This is not exactly what Vladimir posted to the patches list; I've
made a few changes, and Vladimir sent me a fix in private email for a
problem that only occurs in debug mode. I'm also holding back on his
change to main.c, which seems unnecessary to me.)
Improvements:
- no longer needs any extra memory
- has no relationship to tstate
- works in debug mode
- can easily be modified for free threading (hi Greg:)
Side effects:
Trashcan does change the order of object destruction.
Preventing that would be quite an immense effort, as
my attempts have shown. This version always behaves
the same, with or without debug mode. The slightly
changed destruction order should therefore be no problem.
Algorithm:
While the old idea of delaying the destruction of some
objects at a certain recursion level was kept, we now
no longer allocate an object to hold these objects.
The delayed objects are instead chained together
via their ob_type field. The type is encoded via
ob_refcnt. When it comes to the destruction of the
chain of waiting objects, the topmost object is popped
off the chain and revived with type and refcount 1,
then it gets a normal Py_DECREF.
I am confident that this solution is near optimum
for minimizing side effects and code bloat.
Changed PyUnicode_Splitlines() maxsplit argument to keepends.
The maxsplit functionality was replaced by the keepends
functionality which allows keeping the line end markers together
with the string.
his copy of test_contains.py seems to be broken -- the lines he
deleted were already absent). Checkin messages:
New Unicode support for int(), float(), complex() and long().
- new APIs PyInt_FromUnicode() and PyLong_FromUnicode()
- added support for Unicode to PyFloat_FromString()
- new encoding API PyUnicode_EncodeDecimal() which converts
Unicode to a decimal char* string (used in the above new
APIs)
- shortcuts for calls like int(<int object>) and float(<float obj>)
- tests for all of the above
Unicode compares and contains checks:
- comparing Unicode and non-string types now works; TypeErrors
are masked, all other errors such as ValueError during
Unicode coercion are passed through (note that PyUnicode_Compare
does not implement the masking -- PyObject_Compare does this)
- contains now works for non-string types too; TypeErrors are
masked and 0 returned; all other errors are passed through
Better testing support for the standard codecs.
Misc minor enhancements, such as an alias dbcs for the mbcs codec.
Changes:
- PyLong_FromString() now applies the same error checks as
does PyInt_FromString(): trailing garbage is reported
as an error and no longer silently ignored. The only characters
which may be trailing the digits are 'L' and 'l' -- these
are still silently ignored.
- string.ato?() now directly interfaces to int(), long() and
float(). The error strings are now a little different, but
the type still remains the same. These functions are now
ready to get declared obsolete ;-)
- PyNumber_Int() now also does a check for embedded NULL chars
in the input string; PyNumber_Long() already did this (and
still does)
Followed by:
Looks like I've gone a step too far there... (and test_contains.py
seems to have a bug too).
I've changed back to reporting all errors in PyUnicode_Contains()
and added a few more test cases to test_contains.py (plus corrected
the join() NameError).
executive summary:
Instead of typing 'apply(f, args, kwargs)' you can type 'f(*args, **kwargs)'.
Some file-by-file details follow.
Grammar/Grammar:
simplify varargslist, replacing '*' '*' with '**'
add * & ** options to arglist
Include/opcode.h & Lib/dis.py:
define three new opcodes
CALL_FUNCTION_VAR
CALL_FUNCTION_KW
CALL_FUNCTION_VAR_KW
Python/ceval.c:
extend TypeError "keyword parameter redefined" message to include
the name of the offending keyword
reindent CALL_FUNCTION using four spaces
add handling of sequences and dictionaries using extend calls
fix function import_from to use PyErr_Format
The attached patch set includes a workaround to get Python with
Unicode compile on BSDI 4.x (courtesy Thomas Wouters; the cause
is a bug in the BSDI wchar.h header file) and Python interfaces
for the MBCS codec donated by Mark Hammond.
Also included are some minor corrections w/r to the docs of
the new "es" and "es#" parser markers (use PyMem_Free() instead
of free(); thanks to Mark Hammond for finding these).
The unicodedata tests are now in a separate file
(test_unicodedata.py) to avoid problems if the module cannot
be found.
/* More standard operations (at end for binary compatibility) */
should now be:
/* More standard operations (here for binary compatibility) */
since they're no longer at the end!
Attached you find an update of the Unicode implementation.
The patch is against the current CVS version. I would appreciate
if someone with CVS checkin permissions could check the changes
in.
The patch contains all bugs and patches sent this week and also
fixes a leak in the codecs code and a bug in the free list code
for Unicode objects (which only shows up when compiling Python
with Py_DEBUG; thanks to MarkH for spotting this one).
Added wrapping macros to dictobject.c, listobject.c, tupleobject.c,
frameobject.c, traceback.c that safely prevents core dumps
on stack overflow. Macros and functions in object.c, object.h.
The method is an "elevator destructor" that turns cascading
deletes into tail recursive behavior when some limit is hit.
a new proc type (objobjproc), a new slot sq_contains to
PySequenceMethods, and a new flag Py_TPFLAGS_HAVE_SEQUENCE_IN to
Py_TPFLAGS_DEFAULT. More to follow.
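A sketch of a type opting in (the sequence type itself is hypothetical):

    #include "Python.h"

    /* objobjproc: return 1 if v is found, 0 if not, -1 on error. */
    static int
    myseq_contains(PyObject *self, PyObject *v)
    {
        return 0;   /* a real type would search its elements here */
    }

    static PySequenceMethods myseq_as_sequence = {
        0,              /* sq_length */
        0,              /* sq_concat */
        0,              /* sq_repeat */
        0,              /* sq_item */
        0,              /* sq_slice */
        0,              /* sq_ass_item */
        0,              /* sq_ass_slice */
        myseq_contains, /* sq_contains */
    };
    /* The type's tp_flags must include Py_TPFLAGS_HAVE_SEQUENCE_IN
       (now part of Py_TPFLAGS_DEFAULT) for the slot to be trusted. */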
Introduce a new builtin exception, UnboundLocalError, raised when ceval.c
tries to retrieve or delete a local name that isn't bound to a value.
Currently raises NameError, which makes this behavior a FAQ since the same
error is raised for "missing" global names too: when the user has a global
of the same name as the unbound local, NameError makes no sense to them.
Even in the absence of shadowing, knowing whether a bogus name is local or
global is a real aid to quick understanding.
Example:
D:\src\PCbuild>type local.py
x = 42
def f():
print x
x = 13
return x
f()
D:\src\PCbuild>python local.py
Traceback (innermost last):
File "local.py", line 8, in ?
f()
File "local.py", line 4, in f
print x
UnboundLocalError: x
D:\src\PCbuild>
Note that UnboundLocalError is a subclass of NameError, for compatibility
with existing class-exception code that may be trying to catch this as a
NameError. Unfortunately, I see no way to make this wholly compatible
with -X (see comments in bltinmodule.c): under -X, [UnboundLocalError
is an alias for NameError --GvR].
[The ceval.c patch differs slightly from the second version that Tim
submitted; I decided not to raise UnboundLocalError for DELETE_NAME,
only for DELETE_LOCAL. DELETE_NAME is only generated at the module
level, and since at that level a NameError is raised for referencing
an undefined name, it should also be raised for deleting one.]
indicate to those that are using the CVS access that they are using a
newer-than-1.2.5 version, without committing to a particular version
number or patch level.
PycStringIO_IMPORT. While arguably the name used in the documentation
is more consistent, I think it's probably safer not to change the
macro definition and instead fix the doco.
Add a new member to the PyBufferProcs struct, bf_getcharbuffer. For
backward compatibility, this member should only be used (this includes
testing for NULL!) when the flag Py_TPFLAGS_HAVE_GETCHARBUFFER is set
in the type structure, below. Note that if its flag is not set, we
may be looking at an extension module compiled for 1.5.1, which will
have garbage at the bf_getcharbuffer member (because the struct wasn't
as long then). Even if the flag is set, the pointer may still be NULL.
The function found at this member is used in a similar manner as
bf_getreadbuffer, but it is known to point to 8-bit character data.
(See discussion in getargs.c checked in later.)
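The safe access pattern described above, as a sketch:

    #include "Python.h"

    static int
    has_char_buffer(PyObject *o)
    {
        PyBufferProcs *pb = o->ob_type->tp_as_buffer;
        if (pb == NULL)
            return 0;
        /* Without the flag we may be looking at a 1.5.1-era struct
           that simply ends before bf_getcharbuffer. */
        if (!PyType_HasFeature(o->ob_type, Py_TPFLAGS_HAVE_GETCHARBUFFER))
            return 0;
        return pb->bf_getcharbuffer != NULL;
    }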
As a general feature for extending the type structure and the various
structures that (may) hang off it in a backwards compatible way, we
rename the tp_xxx4 "spare" slot to tp_flags. In 1.5.1 and before,
this slot was always zero. In 1.5.2, it may contain various flags
indicating extra fields that weren't present in 1.5.1. The only flag
defined so far is for the bf_getcharbuffer member of the PyBufferProcs
struct.
Note that the new spares (tp_xxx5 - tp_xxx8), once they become used,
should also be protected by a flag (or flags) in tp_flags.
The MS compiler doesn't call it 'long long'; it uses __int64,
so a new #define, LONG_LONG, has been added and all occurrences
of 'long long' are replaced with it.
clear_carefully() used to do in import.c. Differences: leave only
__builtins__ alone in the 2nd pass; and don't clear the dictionary (on
the theory that as long as there are references left to the
dictionary, those might be destructors that might expect __builtins__
to be alive when they run; and __builtins__ can't normally be part of
a cycle).
embedders to force a different PYTHONHOME.
- Add new interface PyErr_PrintEx(flag); same as PyErr_Print() but
flag determines whether sys.last_* are set or not. PyErr_Print()
now simply calls PyErr_PrintEx(1).
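In other words (a small sketch; the function name is made up):

    #include "Python.h"

    static void
    report_error_quietly(void)
    {
        /* PyErr_Print() is now just PyErr_PrintEx(1); passing 0
           skips setting sys.last_type and friends. */
        if (PyErr_Occurred())
            PyErr_PrintEx(0);
    }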
signal handlers in a fork()ed child process when Python is compiled with
thread support. The bug was reported by Scott <scott@chronis.icgroup.com>.
What happens is that after a fork(), the variables used by the signal
module to determine whether this is the main thread or not are bogus,
and it decides that no thread is the main thread, so no signals will
be delivered.
The solution is the addition of PyOS_AfterFork(), which fixes the signal
module's variables. A dummy version of the function is present in the
intrcheck.c source file which is linked when the signal module is not
used.
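An extension or embedding application that forks should therefore do
something like this (sketch):

    #include <unistd.h>
    #include "Python.h"

    static pid_t
    fork_and_fix(void)
    {
        pid_t pid = fork();
        if (pid == 0)
            PyOS_AfterFork();   /* child: repair the signal module's
                                   idea of the main thread */
        return pid;
    }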
is like PyImport_ImportModule(name) but receives the globals and locals
dict and the fromlist arguments as well. (The name is a char*; the
others are PyObject*s).
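A usage sketch, roughly the C spelling of 'from string import split'
(assuming NULL is acceptable for globals and locals, as it is for
__import__):

    #include "Python.h"

    static PyObject *
    import_with_fromlist(void)
    {
        PyObject *fromlist = Py_BuildValue("[s]", "split");
        PyObject *m;
        if (fromlist == NULL)
            return NULL;
        m = PyImport_ImportModuleEx("string", NULL, NULL, fromlist);
        Py_DECREF(fromlist);
        return m;   /* the module object, or NULL with an exception set */
    }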