This adds unsetenv to posix, and uses it in the __delitem__ method of
os.environ.
(XXX Should we change the preferred name for putenv to setenv, for
consistency?)
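For instance (a hedged sketch; the unsetenv() half only kicks in on platforms
that provide it):

    import os

    os.environ['SPAM'] = '1'    # exported to the process environment via putenv()
    del os.environ['SPAM']      # now also calls unsetenv(), not just dict deletion
    assert os.environ.get('SPAM') is None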
This is a big one, touching lots of files. Some of the platforms
aren't tested yet. Briefly, this changes the return value of the
os/posix functions stat(), fstat(), statvfs(), fstatvfs(), and the
time functions localtime(), gmtime(), and strptime() from tuples into
pseudo-sequences. When accessed as a sequence, they behave exactly as
before. But they also have attributes like st_mtime or tm_year. The
stat return value, moreover, has a few platform-specific attributes
that are not available through the sequence interface (because
everybody expects the sequence to have a fixed length, these couldn't
be added there). If your platform's struct stat doesn't define
st_blksize, st_blocks or st_rdev, they won't be accessible from Python
either.
(Still missing is a documentation update.)
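A quick illustration of the idea (hedged; which extra attributes exist depends
on the platform):

    import os, time

    st = os.stat('.')                # stat the current directory
    assert st[8] == st.st_mtime      # index 8 and st_mtime are the same field
    t = time.localtime()
    assert t[0] == t.tm_year         # ditto for the time-tuple lookalikes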
This changes Pythread_start_thread() to return the thread ID, or -1
for an error. (It's technically an incompatible API change, but I
doubt anyone calls it.)
call, or via setting an instance or class variable.
Rewrote the calibration docs.
Modern boxes are so friggin' fast, and a profiler event does so much work
anyway, that the cost of looking up an instance variable (the bias constant)
per profile event just isn't a big deal.
actual run of the profiler, instead of timing a simplified simulation of
part of what the profiler does. It computes a constant about 60% higher
on my Win98SE box than the old method, and the new constant appears much
more realistic. Deleted the undocumented simple(), instrumented(), and
profiler_simulation() methods (which existed only to support the previous
calibration method).
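Roughly how the pieces fit together now (a hedged sketch: it assumes
calibrate() hands back the measured constant, and the round count below is
arbitrary):

    import profile

    pr = profile.Profile()
    my_bias = pr.calibrate(10000)        # timed run of the profiler itself
    profile.Profile.bias = my_bias       # set it as a class variable, or ...
    pr2 = profile.Profile(bias=my_bias)  # ... pass it to the constructor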
from Tim Hochberg. Also mucho fiddling to change the way doctest
determines whether a thing is a function, module or class. Under 2.2,
this really requires the functions in inspect.py (e.g., types.ClassType
is close to meaningless now, if not outright misleading).
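The flavor of check involved, under 2.2 (a sketch, not doctest's actual code):

    import inspect

    def describe(obj):
        # comparing against types.ClassType etc. no longer cuts it, since
        # new-style classes aren't of that type; inspect's predicates cover
        # both old- and new-style objects
        if inspect.ismodule(obj):
            return 'module'
        elif inspect.isclass(obj):
            return 'class'
        elif inspect.isfunction(obj):
            return 'function'
        return 'something else'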
Generalize PyLong_AsLongLong to accept int arguments too. The real point
is so that PyArg_ParseTuple's 'L' code does too. That code was
undocumented (AFAICT), so I documented it.
- property() now takes 4 keyword arguments: fget, fset, fdel, doc.
Note that the real purpose of the 'f' prefix is to make fdel fit in
('del' is a keyword, so it can't be used as a keyword argument name).
- These map to visible readonly attributes 'fget', 'fset', 'fdel',
and '__doc__' in the property object.
- fget/fset/fdel weren't discoverable from Python before.
- __doc__ is new, and makes it possible to associate a docstring with a property.
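A minimal sketch of how it hangs together (the Rect class here is made up):

    class Rect(object):
        def __init__(self, w):
            self._w = w
        def getw(self):
            return self._w
        def setw(self, w):
            self._w = w
        def delw(self):
            del self._w
        width = property(fget=getw, fset=setw, fdel=delw,
                         doc="width of the rectangle")

    r = Rect(3)
    assert r.width == 3                          # routed through getw
    r.width = 4                                  # routed through setw
    assert Rect.width.fget.__name__ == 'getw'    # fget/fset/fdel are visible now
    assert Rect.width.__doc__ == "width of the rectangle"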
iterable object. I'm not sure how that got overlooked before!
Got rid of the internal _PySequence_IterContains, introduced a new
internal _PySequence_IterSearch, and rewrote all the iteration-based
"count of", "index of", and "is the object in it or not?" routines to
just call the new function. I suppose it's slower this way, but the
code duplication was getting depressing.
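For reference, the Python-visible faces of those routines (a hedged sketch;
the change above is about shared C internals, not new Python-level behavior):

    import operator

    assert 'b' in iter('abcb')                         # "is the object in it or not?"
    assert operator.countOf(iter('abcb'), 'b') == 2    # "count of"
    assert operator.indexOf(iter('abcb'), 'c') == 2    # "index of"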
Curious: the MS docs say stati64 etc are supported even on Win95, but
Win95 doesn't support a filesystem that allows partitions > 2 Gb.
test_largefile: This was opening its test file in text mode. I have no
idea how that worked under Win64, but it sure needs binary mode on Win98.
BTW, on Win98 test_largefile runs quickly (under a second).
requires that errno ever get set, and it looks like glibc is already
playing that game. New rules:
+ Never use HUGE_VAL. Use the new Py_HUGE_VAL instead.
+ Never believe errno. If overflow is the only thing you're interested in,
use the new Py_OVERFLOWED(x) macro. If you're interested in any libm
errors, use the new Py_SET_ERANGE_IF_OVERFLOW(x) macro, which attempts
to set errno the way C89 said it worked.
Unfortunately, none of these are reliable, but they work on Windows and I
*expect* under glibc too.
getting Infs, NaNs, or nonsense in 2.1 and before; in yesterday's CVS we
were getting OverflowError; but these functions always make good sense
for positive arguments, no matter how large).
the fiddling is simply because no caller of PyLong_AsDouble ever
checked for failure (so that's fixing old bugs). PyLong_AsDouble is much
faster for big inputs now too, but that's more of a happy consequence
than a design goal.
bag. It's clearly wrong for classic classes, at heart because a classic
class doesn't have a __class__ attribute, and I'm unclear on whether
that's a feature or a bug. I'll repair this once I find out (in the
meantime, dir() applied to classic classes won't find the base classes,
while dir() applied to a classic-class instance *will* find the base
classes but not *their* base classes).
Please give the new dir() a try and see whether you love it or hate it.
The new dir([]) behavior is something I could come to love. Here's
something to hate:
>>> class C:
... pass
...
>>> c = C()
>>> dir(c)
['__doc__', '__module__']
>>>
The idea that an instance has a __doc__ attribute is jarring (of course
it's really c.__class__.__doc__ == C.__doc__; likewise for __module__).
OTOH, the code already has too many special cases, and dir(x) doesn't
have a compelling or clear purpose when x isn't a module.
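For balance, the part that's easy to love (hedged; the 1's are just how a
pre-bool Python spells true):

>>> 'append' in dir([])     # methods of builtin types show up now
1
>>> '__add__' in dir([])    # so do the special methods
1
>>>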
Stephen Hansen reported via email that he didn't finish the port to
Borland C, so remove the old item saying it worked and add a new item
saying what I know; I've asked Stephen for more details.
- Do not compile unicodeobject, unicodectype, and unicodedata if Unicode is
  disabled.
- Check for Py_USING_UNICODE in all places that use Unicode functions.
- Disable unicode literals and the builtin Unicode functions.
- Add the types.StringTypes list (see the sketch after this list).
- Remove Unicode literals from most tests.
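A hedged sketch of the StringTypes idea (the helper name is made up):

    import types

    def is_string(x):
        # StringTypes is (StringType, UnicodeType) in a Unicode build,
        # just (StringType,) otherwise, so this works either way
        return isinstance(x, types.StringTypes)

    assert is_string("abc")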
__dict__ attribute. Deleting it, or setting it to a non-dictionary,
results in a TypeError. Note that getting it the first time magically
initializes it to an empty dict so that func.__dict__ will always
appear to be a dictionary (never None).
Closes SF bug #446645.
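At the Python level it looks roughly like this (a hedged sketch):

    def f():
        pass

    assert f.__dict__ == {}        # first access creates the empty dict
    f.answer = 42                  # arbitrary attributes land in __dict__
    assert f.__dict__ == {'answer': 42}
    try:
        f.__dict__ = None          # non-dictionary: rejected
    except TypeError:
        pass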
(Tim & I should agree on where to add new additions: I add them at the
top, Tim adds them at the bottom. I like the top better because folks
who occasionally check out the NEWS file will see the latest news
first.)