* Replace "time.clock on windows, or time.time" with time.perf_counter()
* profile module: use only time.process_time() instead of trying several
  functions that provide the process time
* timeit module: use time.perf_counter() by default; time.time() and
  time.clock() can still be selected with the --time and --clock options
* pybench program: use time.perf_counter() by default, add support for
  the new time.process_time() and time.perf_counter() functions, but stay
  backward compatible. Also use time.get_clock_info() to display information
  about the timer (see the sketch below).
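A minimal sketch of how these clocks behave; the APIs are the documented ones
from the time module, but the workload and printed values are illustrative
only:

    import time

    # Wall-clock benchmark timer with the best available resolution.
    start = time.perf_counter()
    sum(range(10**6))
    wall = time.perf_counter() - start

    # CPU time of the current process only (time spent sleeping is excluded).
    start = time.process_time()
    sum(range(10**6))
    cpu = time.process_time() - start

    # get_clock_info() describes the clock backing a given timer.
    info = time.get_clock_info("perf_counter")
    print(wall, cpu, info.implementation, info.resolution)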
dict.keys(), .items(), and .values() now return dict views.
The dict views aren't fully functional yet; in particular, they can't
be compared to sets yet, but they are useful as "iterator wells".
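For illustration only (not part of this checkpoint), this is how the views
behave in released Python 3; the set comparisons still missing here were
added later:

    d = {"a": 1, "b": 2}
    keys = d.keys()          # a dict view: not a list, not a one-shot iterator
    print(list(keys))        # ['a', 'b']
    d["c"] = 3
    print(list(keys))        # ['a', 'b', 'c'] -- the view tracks the dict
    print(sorted(keys))      # views can be re-iterated as often as needed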
There are still 27 failing unit tests; I expect that many of these
have fairly trivial fixes, but there are so many that I could use help.
There's one major and one minor category still unfixed:
doctests are the major category (and I hope to be able to augment the
refactoring tool to refactor bona fide doctests soon);
other code generating print statements in strings is the minor category.
(Oh, and I don't know if the compiler package works.)
Not all code has been fixed yet; this is just a checkpoint...
The C API still has PyDict_HasKey() and _HasKeyString(); not sure
if I want to change those just yet.
call, or via setting an instance or class variable.
Rewrote the calibration docs.
Modern boxes are so friggin' fast, and a profiler event does so much work
anyway, that the cost of looking up an instance variable (the bias constant)
per profile event just isn't a big deal.
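A minimal sketch of the two ways to supply the bias constant, using the
profile module's documented interface (the value shown is illustrative, not
measured):

    import profile

    my_bias = 4.2e-06        # illustrative value; measure it with calibrate()

    # Via the constructor call:
    p = profile.Profile(bias=my_bias)

    # Or via a class (or instance) variable, picked up by every new Profile:
    profile.Profile.bias = my_bias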
actual run of the profiler, instead of timing a simplified simulation of
part of what the profiler does. It computes a constant about 60% higher
on my Win98SE box than the old method, and the new constant appears much
more realistic. Deleted the undocumented simple(), instrumented(), and
profiler_simulation() methods (which existed only to support the previous
calibration method).
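And a short sketch of the calibration entry point that replaces the deleted
helpers; the repeat count and the number of runs are assumptions, not values
from the source:

    import profile

    # calibrate() times real profiler runs and returns the measured
    # per-event overhead; repeat it and reuse a stable value as the bias.
    runs = [profile.Profile().calibrate(10000) for _ in range(5)]
    profile.Profile.bias = sorted(runs)[len(runs) // 2]   # take the median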
seriously wrong. This started out by just fixing the docs, but then it
occurred to me that the doc confusion propagated into misleading variable
names too, so I also renamed those to match reality. As a result, IMO the time
computations are much easier to understand now (within the limitations of
vast quantities of 3-character names <wink>).
it deals correctly with some anomalous cases; according to this test
suite, I've fixed it right.
The anomalous cases had to do with 'exception' events: these aren't
generated when they would be most helpful, and the profiler has to
work hard to recover the right information. The problems occur when C
code (such as hasattr(), which is used as the example here) calls back
into Python code and clears an exception raised by that Python code.
Consider this example:
    def foo():
        hasattr(obj, "bar")

where obj is an instance of a class like this:

    class C:
        def __getattr__(self, name):
            raise AttributeError
The profiler sees the following sequence of events:
    call (foo)
    call (__getattr__)
    exception (in __getattr__)
    return (from foo)
Previously, the profiler would assume the return event returned from
__getattr__. An if statement checking for this condition and raising
an exception was commented out... This version does the right thing.
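A self-contained way to reproduce the scenario described above (a hypothetical
snippet, not the actual test suite):

    import profile

    class C:
        def __getattr__(self, name):
            raise AttributeError(name)

    obj = C()

    def foo():
        # hasattr() calls back into __getattr__ and swallows the
        # AttributeError it raises, producing the event sequence above.
        hasattr(obj, "bar")

    profile.run("foo()")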