This patch complies with the following request found
near the top of configure.in:
# This is for stuff that absolutely must end up in pyconfig.h.
# Please use pyport.h instead, if possible.
I tested this patch under Cygwin, Win32, and Red
Hat Linux. Python built and ran successfully on
each of these platforms.
PEP 285. Everything described in the PEP is here, and there is even
some documentation. I had to fix 12 unit tests; all but one of these
were printing Boolean outcomes that changed from 0/1 to False/True.
(The exception is test_unicode.py, which did a type(x) == type(y)
style comparison. I could've fixed that with a single line using
issubclass(type(x), type(y)), but instead chose to be explicit about
those places where a bool is expected.)
Still to do: perhaps more documentation; change standard library
modules to return False/True from predicates.
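Where a C-level predicate returns a proper bool, the pattern looks
roughly like this -- a minimal sketch using PyBool_FromLong() (added by
the PEP 285 work); is_empty is a hypothetical example, not part of this
checkin:

    #include <Python.h>

    /* Return a real bool object instead of the ints 0/1. */
    static PyObject *
    is_empty(PyObject *self, PyObject *obj)
    {
        Py_ssize_t len = PyObject_Length(obj);
        if (len < 0)
            return NULL;                    /* propagate the error */
        return PyBool_FromLong(len == 0);   /* True/False, not 1/0 */
    }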
When WITH_PYMALLOC is defined, define PYMALLOC_DEBUG to enable the debug
allocator. This can be done independent of build type (release or debug).
A debug build automatically defines PYMALLOC_DEBUG when pymalloc is
enabled. It's a detected error to define PYMALLOC_DEBUG when pymalloc
isn't enabled.
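A minimal sketch of those #define relationships (the real logic lives
in CPython's headers and may differ in detail):

    #if defined(Py_DEBUG) && defined(WITH_PYMALLOC) && !defined(PYMALLOC_DEBUG)
    #define PYMALLOC_DEBUG          /* debug builds get it automatically */
    #endif

    #if defined(PYMALLOC_DEBUG) && !defined(WITH_PYMALLOC)
    #error "PYMALLOC_DEBUG requires WITH_PYMALLOC"
    #endif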
Two debugging entry points defined only under PYMALLOC_DEBUG:
+ _PyMalloc_DebugCheckAddress(const void *p) can be used (e.g., from gdb)
to sanity-check a memory block obtained from pymalloc. It sprays
info to stderr (see next) and dies via Py_FatalError if the block is
detectably damaged.
+ _PyMalloc_DebugDumpAddress(const void *p) can be used to spray info
about a debug memory block to stderr.
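A hedged sketch of exercising both entry points from C (from gdb the
equivalent is "call _PyMalloc_DebugDumpAddress(p)"); check_block is a
hypothetical helper, not part of this checkin:

    #ifdef PYMALLOC_DEBUG
    static void
    check_block(const void *p)
    {
        _PyMalloc_DebugCheckAddress(p);  /* Py_FatalError's if damaged */
        _PyMalloc_DebugDumpAddress(p);   /* sprays block info to stderr */
    }
    #endif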
A tiny start at implementing "API family" checks; it isn't good for
anything yet.
_PyMalloc_DebugRealloc() has been optimized to do little when the new
size is <= old size. However, if the new size is larger, it really
can't call the underlying realloc() routine without either violating its
contract, or knowing something non-trivial about how the underlying
realloc() works. A memcpy is always done in this case.
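Roughly, the growing path has to look like this -- a sketch with
illustrative names (debug_malloc/debug_free stand in for the real
debug hooks, which bracket each block with pad bytes):

    #include <stdlib.h>
    #include <string.h>

    static void *debug_malloc(size_t n) { return malloc(n); }
    static void  debug_free(void *p)    { free(p); }

    static void *
    debug_realloc(void *p, size_t old_size, size_t new_size)
    {
        void *q;
        if (new_size <= old_size)
            return p;                   /* shrinking: do little */
        q = debug_malloc(new_size);     /* new block, fresh pad bytes */
        if (q != NULL) {
            memcpy(q, p, old_size);     /* the unavoidable copy */
            debug_free(p);
        }
        return q;
    }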
That always-copy behavior was a disaster for one (and only one) of the
std tests: test_bufio
creates single text file lines up to a million characters long. On
Windows, fileobject.c's get_line() uses the horridly funky
getline_via_fgets(), which keeps growing and growing a string object
hoping to find a newline. It grew the string object 1000 bytes each
time, so for a million-character string it took approximately forever
(I gave up after a few minutes).
So, also:
fileobject.c, getline_via_fgets(): When a single line is outrageously
long, grow the string object at a mildly exponential rate, instead of
just 1000 bytes at a time.
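A sketch of the growth policy (the exact increment fileobject.c uses
may differ):

    #include <stddef.h>

    /* Grow by ~25% per pass instead of a flat 1000 bytes, so a
       million-character line needs dozens of resizes, not a thousand. */
    static size_t
    next_buffer_size(size_t total_v_size)
    {
        size_t increment = total_v_size >> 2;   /* mildly exponential */
        if (increment < 1000)
            increment = 1000;                   /* never worse than before */
        return total_v_size + increment;
    }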
That's enough so that a debug-build test_bufio finishes in about 5 seconds
on my Win98SE box. I'm curious to try this on Win2K, because it has very
different memory behavior than Win9X, and test_bufio always took a factor
of 10 longer to complete on Win2K. It *could* be that the endless
reallocs were simply killing it on Win2K even in the release build.
that info to code dynamically compiled *by* code compiled with generators
enabled. Doesn't yet work because there's still no way to tell the parser
that "yield" is OK (unlike nested_scopes, the parser has its fingers in
this too).
Replaced PyEval_GetNestedScopes by a more general
PyEval_MergeCompilerFlags. Perhaps I should not have, but I doubted it
was *intended* to be part of the public API, so I just did.
new slot tp_iter in type object, plus new flag Py_TPFLAGS_HAVE_ITER
new C API PyObject_GetIter(), calls tp_iter
new builtin iter(), with two forms: iter(obj), and iter(function, sentinel)
new internal object types iterobject and calliterobject
new exception StopIteration
new opcodes for "for" loops, GET_ITER and FOR_ITER (also supported by dis.py)
new magic number for .pyc files
new special method for instances: __iter__() returns an iterator
iteration over dictionaries: "for x in dict" iterates over the keys
iteration over files: "for x in file" iterates over lines
TODO:
documentation
test suite
decide whether to use a different way to spell iter(function, sentinel)
decide whether "for key in dict" is a good idea
use iterators in map/filter/reduce, min/max, and elsewhere (in/not in?)
speed tuning (make next() a slot tp_next???)
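A hedged sketch of driving the new protocol from C: PyObject_GetIter()
is the API added here, and PyIter_Next() (assumed for brevity) wraps
the iterator's next step; print_all is a hypothetical helper:

    #include <Python.h>

    static int
    print_all(PyObject *seq)
    {
        PyObject *it = PyObject_GetIter(seq);   /* calls tp_iter */
        PyObject *item;

        if (it == NULL)
            return -1;
        while ((item = PyIter_Next(it)) != NULL) {
            PyObject_Print(item, stdout, 0);
            fprintf(stdout, "\n");
            Py_DECREF(item);
        }
        Py_DECREF(it);
        /* NULL from PyIter_Next means exhaustion *or* error. */
        return PyErr_Occurred() ? -1 : 0;
    }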
Add definitions of INT_MAX and LONG_MAX to pyport.h.
Remove includes of limits.h and conditional definitions of INT_MAX
and LONG_MAX elsewhere.
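The pyport.h fallbacks amount to this shape (a sketch; the values shown
assume 32-bit int and long, which is not true everywhere):

    #ifndef INT_MAX
    #define INT_MAX  2147483647
    #endif

    #ifndef LONG_MAX
    #define LONG_MAX 2147483647L
    #endif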
This closes SourceForge patch #101659 and bug #115323.
I can't test this, so I'm just checking it in with blind faith in Andy.
I've tested that it doesn't break a non-Pth build on Linux.
Changes include:
- There's a --with-pth configure option.
- Instead of _GNU_PTH, we test for HAVE_PTH.
- Better signal handling.
- (The config.h.in file is regenerated in a slightly different order.)
It's hard to sort out what the bug was, exactly. So, Big Hammer:
1. Python shouldn't be in the business of #define'ing NULL, period.
2. Users of the Python C API shouldn't be in the business of not including
Python.h, period.
Hence:
1. Removed all #define's of NULL in Python source code (pyport.h and
object.h).
2. Since we're *relying* on stdio.h defining NULL, put an #error in
Python.h after its #include of stdio.h if NULL isn't defined then.
ANSI C requires stdio.h to define NULL, so it's hard to believe such
good C practice hasn't been available to everything all along.
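The guard amounts to this (the #error should never fire on a
conforming platform):

    #include <stdio.h>

    #ifndef NULL
    #error "Python.h requires that stdio.h define NULL."
    #endif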
Added Py_SAFE_DOWNCAST(VALUE, WIDE, NARROW) macro to pyport.h; this
just casts VALUE from type WIDE to type NARROW, but assert-fails if
Py_DEBUG is defined and info is lost due to casting.
Replaced a line in Fredrik's fix to marshal.c to use the new macro.
This was a convenient excuse to create the pyport.h file recently
discussed!
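The macro's shape, sketched (the pyport.h version may differ in
detail):

    #include <assert.h>

    #ifdef Py_DEBUG
    /* Checked cast: widening the narrowed value back must give the
       original, i.e. nothing was lost. */
    #define Py_SAFE_DOWNCAST(VALUE, WIDE, NARROW) \
        (assert((WIDE)(NARROW)(VALUE) == (VALUE)), (NARROW)(VALUE))
    #else
    #define Py_SAFE_DOWNCAST(VALUE, WIDE, NARROW) (NARROW)(VALUE)
    #endif

    /* e.g.  int n = Py_SAFE_DOWNCAST(num_bytes, long, int); */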
Please use new Py_ARITHMETIC_RIGHT_SHIFT when right-shifting a
signed int and you *need* sign-extension. This is #define'd in
pyport.h, keying off new config symbol SIGNED_RIGHT_SHIFT_ZERO_FILLS.
If you're running on a platform that needs that symbol #define'd,
the std tests never would have worked for you (in particular,
at least test_long would have failed).
The autoconfig stuff got added to Python after my Unix days, so
I don't know how that works. Would someone please look into doing
& testing an auto-config of the SIGNED_RIGHT_SHIFT_ZERO_FILLS
symbol? It needs to be defined if & only if, e.g., (-1) >> 3 is
not -1.
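A sketch of both halves -- the property the symbol captures, and what
the macro has to do when it's defined (illustrative, not necessarily
pyport.h's exact text):

    #include <stdio.h>

    #ifdef SIGNED_RIGHT_SHIFT_ZERO_FILLS
    /* Zero-fill platform: repair the sign by hand. */
    #define Py_ARITHMETIC_RIGHT_SHIFT(TYPE, I, J) \
        ((I) < 0 ? -1 - ((-1 - (I)) >> (J)) : (I) >> (J))
    #else
    #define Py_ARITHMETIC_RIGHT_SHIFT(TYPE, I, J) ((I) >> (J))
    #endif

    int main(void)
    {
        /* SIGNED_RIGHT_SHIFT_ZERO_FILLS must be defined iff this
           prints something other than -1. */
        printf("(-1) >> 3 == %d\n", (-1) >> 3);
        return 0;
    }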
to switch on support for BSD and SysV on platforms which use glibc
such as Linux.
These #defines are documented in e.g. the file /usr/include/features.h
on Linux platforms and the SUSv2 docs.
his copy of test_contains.py seems to be broken -- the lines he
deleted were already absent). Checkin messages:
New Unicode support for int(), float(), complex() and long().
- new APIs PyInt_FromUnicode() and PyLong_FromUnicode()
- added support for Unicode to PyFloat_FromString()
- new encoding API PyUnicode_EncodeDecimal() which converts
Unicode to a decimal char* string (used in the above new
APIs)
- shortcuts for calls like int(<int object>) and float(<float obj>)
- tests for all of the above
Unicode compares and contains checks:
- comparing Unicode and non-string types now works; TypeErrors
are masked, all other errors such as ValueError during
Unicode coercion are passed through (note that PyUnicode_Compare
does not implement the masking -- PyObject_Compare does this)
- contains now works for non-string types too; TypeErrors are
masked and 0 returned; all other errors are passed through
Better testing support for the standard codecs.
Misc minor enhancements, such as an alias dbcs for the mbcs codec.
Changes:
- PyLong_FromString() now applies the same error checks as
does PyInt_FromString(): trailing garbage is reported
as an error and no longer silently ignored. The only characters
which may trail the digits are 'L' and 'l' -- these
are still silently ignored.
- the string.ato?() functions now directly interface to int(), long() and
float(). The error strings are now a little different, but
the type still remains the same. These functions are now
ready to get declared obsolete ;-)
- PyNumber_Int() now also does a check for embedded NULL chars
in the input string; PyNumber_Long() already did this (and
still does)
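For reference, the stricter behavior sketched through the C API
(strict_long is a hypothetical helper, not part of this checkin;
PyLong_FromString()'s pend argument reports where parsing stopped):

    #include <Python.h>

    static PyObject *
    strict_long(const char *s)
    {
        char *end;
        PyObject *v = PyLong_FromString(s, &end, 10);

        if (v != NULL && *end != '\0') {     /* trailing garbage */
            Py_DECREF(v);
            PyErr_SetString(PyExc_ValueError,
                            "trailing garbage in integer literal");
            return NULL;
        }
        return v;
    }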
Followed by:
Looks like I've gone a step too far there... (and test_contains.py
seems to have a bug too).
I've changed back to reporting all errors in PyUnicode_Contains()
and added a few more test cases to test_contains.py (plus corrected
the join() NameError).
Introduce truly separate (sub)interpreter objects. For now, these
must be used by separate threads, created from C. See Demo/pysvr for
an example of how to use this. This also rationalizes Python's
initialization and finalization behavior:
Py_Initialize() -- initialize the whole interpreter
Py_Finalize() -- finalize the whole interpreter
tstate = Py_NewInterpreter() -- create a new (sub)interpreter
Py_EndInterpreter(tstate) -- delete a (sub)interpreter
There are also new interfaces relating to threads and the interpreter
lock, which can be used to create new threads, and sometimes have to
be used to manipulate the interpreter lock when creating or deleting
sub-interpreters. These are only defined when WITH_THREAD is defined:
PyEval_AcquireLock() -- acquire the interpreter lock
PyEval_ReleaseLock() -- release the interpreter lock
PyEval_AcquireThread(tstate) -- acquire the lock and make the thread current
PyEval_ReleaseThread(tstate) -- release the lock and make NULL current
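A hedged sketch of the single-threaded lifecycle (error handling
trimmed; the thread-state juggling around Py_EndInterpreter() is the
easy part to get wrong):

    #include <Python.h>

    int main(void)
    {
        PyThreadState *main_tstate, *sub_tstate;

        Py_Initialize();                    /* whole interpreter up */
        main_tstate = PyThreadState_Get();  /* remember the main state */

        sub_tstate = Py_NewInterpreter();   /* new (sub)interpreter
                                               becomes current */
        if (sub_tstate != NULL) {
            PyRun_SimpleString("print('hello from a subinterpreter')");
            Py_EndInterpreter(sub_tstate);  /* deletes it; no thread
                                               state is current after */
        }

        PyThreadState_Swap(main_tstate);    /* make main current again */
        Py_Finalize();                      /* whole interpreter down */
        return 0;
    }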
Other administrative changes:
- The header file bltinmodule.h is deleted.
- The init functions for Import, Sys and Builtin are now internal and
declared in pythonrun.h.
- Py_Setup() and Py_Cleanup() are no longer declared.
- The interpreter state and thread state structures are now linked
together in a chain (the chain of interpreters is a static variable
in pythonrun.c).
- Some members of the interpreter and thread structures have new,
shorter, more consistent, names.
- Added declarations for _PyImport_{Find,Fixup}Extension() to import.h.