At the outer level, the iterator protocol is used for memory-efficiency (the
outer sequence may be very large if fully materialized); at the inner
level, PySequence_Fast() is used for time-efficiency (these should
always be sequences of length 2).
dictobject.c, new functions PyDict_{Merge,Update}FromSeq2. These are
wholly analogous to PyDict_{Merge,Update}, but process a sequence-of-2-
sequences argument instead of a mapping object. For now, I left these
functions file static, so no corresponding doc changes. It's tempting
to change dict.update() to allow a sequence-of-2-seqs argument too.
Also changed the name of dictionary's keyword argument from "mapping"
to "x". Got a better name? "mapping_or_sequence_of_pairs" isn't
attractive, although more so than "mosop" <wink>.
abstract.h, abstract.tex: Added new PySequence_Fast_GET_SIZE function,
much faster than going thru the all-purpose PySequence_Size.
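For concreteness, here's a minimal sketch of the outer-iterator / inner-PySequence_Fast
split described above, using the new PySequence_Fast_GET_SIZE.  Illustrative names and
error messages only -- this is not the actual PyDict_MergeFromSeq2 code:

    #include "Python.h"

    /* Sketch: iterate the (possibly huge) outer object lazily, but
       snapshot each (presumably tiny) inner pair with PySequence_Fast
       so the length check and item fetches are cheap. */
    static int
    merge_pairs_sketch(PyObject *d, PyObject *seq2)
    {
        PyObject *it = PyObject_GetIter(seq2);  /* outer: iterator protocol */
        PyObject *item;

        if (it == NULL)
            return -1;
        while ((item = PyIter_Next(it)) != NULL) {
            /* inner: PySequence_Fast_GET_SIZE/GET_ITEM are O(1) */
            PyObject *fast = PySequence_Fast(item,
                                 "argument must be a sequence of pairs");
            Py_DECREF(item);
            if (fast == NULL)
                goto Fail;
            if (PySequence_Fast_GET_SIZE(fast) != 2) {
                PyErr_SetString(PyExc_ValueError,
                                "sequence element is not a pair");
                Py_DECREF(fast);
                goto Fail;
            }
            if (PyDict_SetItem(d,
                               PySequence_Fast_GET_ITEM(fast, 0),
                               PySequence_Fast_GET_ITEM(fast, 1)) < 0) {
                Py_DECREF(fast);
                goto Fail;
            }
            Py_DECREF(fast);
        }
        Py_DECREF(it);
        return PyErr_Occurred() ? -1 : 0;
    Fail:
        Py_DECREF(it);
        return -1;
    }

PySequence_Fast returns the object itself (incref'ed) when it's already a list or
tuple, and a new tuple otherwise, which is why the per-pair overhead stays small.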
libfuncs.tex:
- Document dictionary().
- Fiddle tuple() and list() to admit that their argument is optional.
- The long-winded repetitions of "a sequence, a container that supports
iteration, or an iterator object" are getting to be a PITA. Many
months ago I suggested factoring this out into "iterable object",
where the definition of that could include being explicit about
generators too (as is, I'm not sure a reader outside of PythonLabs
could guess that "an iterator object" includes a generator call).
- Please check my curly braces -- I'm going blind <0.9 wink>.
abstract.c, PySequence_Tuple(): When PyObject_GetIter() fails, leave
its error msg alone now (the msg it produces has improved since
PySequence_Tuple was generalized to accept iterable objects, and
PySequence_Tuple was also stomping on the msg in cases it shouldn't
have even before PyObject_GetIter grew a better msg).
Replace some tortuous code that was trying to be clever but forgot to
DECREF the key and value, by more longwinded but obviously correct
code.
(Inspired by but not copying the fix from SF patch #475033.)
The C code in fileobject.readinto(buffer) that parses
the arguments assumes that size_t is interchangeable
with int:
    char *ptr;
    size_t ntodo, ndone, nnow;

    if (f->f_fp == NULL)
        return err_closed();
    if (!PyArg_Parse(args, "w#", &ptr, &ntodo))  /* but "w#" writes an int through &ntodo */
        return NULL;
This causes a problem on Alpha / Tru64 / OSF1 v5.1
where size_t is a long and sizeof(long) != sizeof(int).
The patch I'm proposing declares ntodo as an int. An
alternative might be to redefine w# to expect size_t.
[We can't change w# because there are probably third party modules
relying on it. GvR]
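A sketch of what the proposed fix boils down to (hypothetical helper, Python 2-era
API; the point is just that "w#" stores an int, which can then be widened to size_t):

    #include "Python.h"

    /* sketch: parse into an int (what "w#" writes), widen afterwards */
    static PyObject *
    parse_writable_buffer(PyObject *args)
    {
        char *ptr;
        int nbytes;                  /* not size_t: "w#" stores an int */
        size_t ntodo;

        if (!PyArg_Parse(args, "w#", &ptr, &nbytes))
            return NULL;
        ntodo = (size_t)nbytes;      /* safe to widen on LP64 boxes like Tru64 */
        return PyLong_FromLong((long)ntodo);
    }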
1. configure doesn't handle HP-UX release numbers
(e.g., B.11.00), resulting in MACHDEP = "hpuxB".
2. After checking for wchar.h, configure doesn't
include it when checking the size of wchar_t.
(Python 2.2b1 on HP-UX 11.00)
This adds unsetenv to posix, and uses it in the __delitem__ method of
os.environ.
(XXX Should we change the preferred name for putenv to setenv, for
consistency?)
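A minimal sketch of the C side, assuming the platform provides unsetenv(3)
(simplified; the real module also has to cope with platforms that lack it):

    #include "Python.h"
    #include <stdlib.h>              /* unsetenv(3) */

    /* sketch of a posix.unsetenv() wrapper */
    static PyObject *
    posix_unsetenv(PyObject *self, PyObject *args)
    {
        char *name;

        if (!PyArg_ParseTuple(args, "s:unsetenv", &name))
            return NULL;
        unsetenv(name);
        Py_INCREF(Py_None);
        return Py_None;
    }

os.environ's __delitem__ can then drop the key from its internal dict and call
this, so deleting a key really removes the variable from the process environment.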
This is a big one, touching lots of files. Some of the platforms
aren't tested yet. Briefly, this changes the return value of the
os/posix functions stat(), fstat(), statvfs(), fstatvfs(), and the
time functions localtime(), gmtime(), and strptime() from tuples into
pseudo-sequences. When accessed as a sequence, they behave exactly as
before. But they also have attributes like st_mtime or tm_year. The
stat return value, moreover, has a few platform-specific attributes
that are not available through the sequence interface (because
everybody expects the sequence to have a fixed length, these couldn't
be added there). If your platform's struct stat doesn't define
st_blksize, st_blocks or st_rdev, they won't be accessible from Python
either.
(Still missing is a documentation update.)
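In Python terms, that means os.stat(path).st_mtime and os.stat(path)[8] now name
the same value.  A hypothetical C-level illustration of the same dual behavior
(index 8 is the traditional st_mtime slot; the file name is just an example):

    #include "Python.h"

    /* hypothetical: the new result objects answer both the sequence
       protocol and attribute access */
    static void
    stat_both_ways(void)
    {
        PyObject *os = PyImport_ImportModule("os");
        PyObject *st, *by_index, *by_name;

        if (os == NULL)
            return;
        st = PyObject_CallMethod(os, "stat", "s", "setup.py");
        if (st != NULL) {
            by_index = PySequence_GetItem(st, 8);              /* st[8] */
            by_name = PyObject_GetAttrString(st, "st_mtime");  /* st.st_mtime */
            Py_XDECREF(by_index);
            Py_XDECREF(by_name);
            Py_DECREF(st);
        }
        Py_DECREF(os);
    }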
This changes PyThread_start_new_thread() to return the thread ID, or -1
for an error. (It's technically an incompatible API change, but I
doubt anyone calls it.)
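Callers that care can now check the result, roughly like this (a sketch; assumes
pythread.h and the usual void-function-of-void-pointer worker):

    #include "Python.h"
    #include "pythread.h"

    static void
    worker(void *arg)
    {
        /* thread body goes here */
    }

    /* sketch: the return value is now the new thread's ID, or -1 */
    static int
    spawn_worker(void *arg)
    {
        long ident = PyThread_start_new_thread(worker, arg);

        if (ident == -1)
            return -1;               /* could not start the thread */
        return 0;
    }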
This patch changes the logic to:
    if env.var. set and non-empty:
        if env.var. is an integer:
            set flag to that integer
        if flag is zero:    # [actually, <= 0 --GvR]
            set flag to 1
Under this patch, anyone currently using
PYTHONVERBOSE=yes will get the same output as before.
PYTHONVERBOSE=2 will generate more verbosity than
before.
The only unusual case is that the following three are
still all equivalent:
    PYTHONVERBOSE=yespleas
    PYTHONVERBOSE=1
    PYTHONVERBOSE=0
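A sketch of that logic as a C helper (hypothetical name; the real code lives in
the interpreter's startup flag handling):

    #include <stdlib.h>

    /* sketch: turn an environment variable into a flag value using the
       rules above */
    static int
    flag_from_env(const char *name, int flag)
    {
        const char *s = getenv(name);

        if (s == NULL || *s == '\0')
            return flag;             /* unset or empty: leave the flag alone */
        flag = atoi(s);              /* non-integers such as "yes" yield 0 */
        if (flag <= 0)
            flag = 1;                /* set but not a positive integer: means 1 */
        return flag;
    }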
call, or via setting an instance or class vrbl.
Rewrote the calibration docs.
Modern boxes are so friggin' fast, and a profiler event does so much work
anyway, that the cost of looking up an instance vrbl (the bias constant)
per profile event just isn't a big deal.
The new calibration code times an actual run of the profiler, instead of
timing a simplified simulation of
part of what the profiler does. It computes a constant about 60% higher
on my Win98SE box than the old method, and the new constant appears much
more realistic. Deleted the undocumented simple(), instrumented(), and
profiler_simulation() methods (which existed only to support the previous
calibration method).
from Tim Hochberg. Also mucho fiddling to change the way doctest
determines whether a thing is a function, module or class. Under 2.2,
this really requires the functions in inspect.py (e.g., types.ClassType
is close to meaningless now, if not outright misleading).
Generalize PyLong_AsLongLong to accept int arguments too. The real point
is so that PyArg_ParseTuple's 'L' code does too. That code was
undocumented (AFAICT), so documented it.
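A sketch of a function using the 'L' code (hypothetical; 'L' fills in a C long
long, spelled PY_LONG_LONG in the CPython headers), which now accepts a Python
int as well as a Python long:

    #include "Python.h"

    /* sketch: 'L' now converts Python ints as well as Python longs */
    static PyObject *
    echo_longlong(PyObject *self, PyObject *args)
    {
        long long value;             /* PY_LONG_LONG in the headers */

        if (!PyArg_ParseTuple(args, "L", &value))
            return NULL;
        return PyLong_FromLongLong(value);
    }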
- property() now takes 4 keyword arguments: fget, fset, fdel, doc.
Note that the real purpose of the 'f' prefix is to make fdel fit in
('del' is a keyword, so it can't be used as a keyword argument name).
- These map to visible readonly attributes 'fget', 'fset', 'fdel',
and '__doc__' in the property object.
- fget/fset/fdel weren't discoverable from Python before.
- __doc__ is new, and makes it possible to associate a docstring with a property.
iterable object. I'm not sure how that got overlooked before!
Got rid of the internal _PySequence_IterContains, introduced a new
internal _PySequence_IterSearch, and rewrote all the iteration-based
"count of", "index of", and "is the object in it or not?" routines to
just call the new function. I suppose it's slower this way, but the
code duplication was getting depressing.
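The consolidation amounts to something like the following (a sketch with made-up
names, not the actual _PySequence_IterSearch):

    #include "Python.h"

    /* Sketch: one iteration-based search, parameterized by what the
       caller wants back (count, first index, or a contains test). */
    #define SKETCH_COUNT    1
    #define SKETCH_INDEX    2
    #define SKETCH_CONTAINS 3

    static long
    iter_search_sketch(PyObject *seq, PyObject *obj, int operation)
    {
        long n = 0;          /* matches seen (for COUNT) */
        long i = 0;          /* current position (for INDEX) */
        long result = -1;    /* -1 means error unless set below */
        PyObject *it = PyObject_GetIter(seq);
        PyObject *item;

        if (it == NULL)
            return -1;
        while ((item = PyIter_Next(it)) != NULL) {
            int eq = PyObject_RichCompareBool(obj, item, Py_EQ);
            Py_DECREF(item);
            if (eq < 0)
                goto Done;                   /* comparison raised */
            if (eq > 0) {
                if (operation == SKETCH_CONTAINS) {
                    result = 1;
                    goto Done;
                }
                if (operation == SKETCH_INDEX) {
                    result = i;
                    goto Done;
                }
                ++n;                         /* SKETCH_COUNT */
            }
            ++i;
        }
        if (PyErr_Occurred())
            goto Done;                       /* iteration raised */
        if (operation == SKETCH_COUNT)
            result = n;
        else if (operation == SKETCH_CONTAINS)
            result = 0;
        else
            PyErr_SetString(PyExc_ValueError, "value not found in sequence");
    Done:
        Py_DECREF(it);
        return result;
    }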
Curious: the MS docs say stati64 etc are supported even on Win95, but
Win95 doesn't support a filesystem that allows partitions > 2 GB.
test_largefile: This was opening its test file in text mode. I have no
idea how that worked under Win64, but it sure needs binary mode on Win98.
BTW, on Win98 test_largefile runs quickly (under a second).
requires that errno ever get set, and it looks like glibc is already
playing that game. New rules:
+ Never use HUGE_VAL. Use the new Py_HUGE_VAL instead.
+ Never believe errno. If overflow is the only thing you're interested in,
use the new Py_OVERFLOWED(x) macro. If you're interested in any libm
errors, use the new Py_SET_ERANGE_IF_OVERFLOW(x) macro, which attempts
to set errno the way C89 said it worked.
Unfortunately, none of these are reliable, but they work on Windows and I
*expect* under glibc too.
getting Infs, NaNs, or nonsense in 2.1 and before; in yesterday's CVS we
were getting OverflowError; but these functions always make good sense
for positive arguments, no matter how large).
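For illustration, an overflow check written to the new rules might look like this
(hypothetical wrapper; Python.h is assumed to supply Py_HUGE_VAL, Py_OVERFLOWED
and Py_SET_ERANGE_IF_OVERFLOW):

    #include "Python.h"
    #include <math.h>
    #include <errno.h>

    /* sketch: don't trust errno directly; ask Py_OVERFLOWED about the
       result (use Py_SET_ERANGE_IF_OVERFLOW(r) instead if you want
       errno itself to look the way C89 promised) */
    static PyObject *
    exp_checked(double x)
    {
        double r;

        errno = 0;
        r = exp(x);
        if (Py_OVERFLOWED(r)) {
            PyErr_SetString(PyExc_OverflowError, "math range error");
            return NULL;
        }
        return PyFloat_FromDouble(r);
    }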