The strerror attribute contained only partial information about the
exception and produced some very confusing error messages. Passing err
(the exception object itself) and letting it convert itself to a string
produces clearer, more complete messages.
operators per line or statement are now on by default, and -m turns
these warnings off.
- Change the way multiple / operators are reported; a regular
recommendation is always emitted after the warning.
- Report ambiguous warnings (both int|long and float|complex used for
the same operator).
- Update the doc string again to clarify all this and describe the
possible messages more precisely.
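To make the int|long vs. float|complex classification concrete, here is a
made-up example (not taken from the tool's output) of a single / occurrence
that would be reported as ambiguous, because it is exercised with both
integer and floating-point operands during one run:

    # Made-up example: the same / operator sees int operands in one call and
    # float operands in another, so its classification is ambiguous.
    def halve(x):
        return x / 2

    halve(7)      # int / int   -> would be classified as int|long
    halve(7.0)    # float / int -> would be classified as float|complex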
percolated out, and some general cleanup. The output is still the
same, except it now prints "Index: <file>" instead of "Processing:
<file>", so that the output can be used as input for patch (but only
the diff-style parts of it).
Cater to that.
+ Major speed boost from not reading more of each file than necessary. This
was no slouch before; now it screams.
+ Improve the message emitted when giving up on a goofy future statement.
If multiple header files that include each other are processed in the same
run, the corresponding generated modules import each other. Specifically, if
h2py is invoked with sys/types.h first, the modules generated for later header
files won't contain the complete contents of TYPES.py; they import it instead.
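For illustration only (module and constant names below are made up), a module
generated from a header that includes <sys/types.h> now begins with a
star-import of the earlier output instead of a copy of it:

    # Hypothetical excerpt of a generated module whose header did
    # "#include <sys/types.h>"; the sys/types.h constants live in TYPES.py
    # and are pulled in by reference rather than duplicated.
    from TYPES import *

    FOO_MAGIC = 0x1234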
I published it on the web as http://www.python.org/2.1/md5sum.py
so I thought I might as well check it in.
Works with Python 1.5.2 and later.
Works like the Linux tool ``md5sum file ...'' except it doesn't take
any options or read stdin.
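For readers without the script handy, the core idea fits in a few lines; this
is a modern sketch using hashlib (the checked-in script targets Python 1.5.2
and the old md5 module), not the script itself:

    # Minimal modern sketch: print "<hexdigest> <filename>" for each argument.
    import hashlib
    import sys

    def md5sum(path, bufsize=8192):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(bufsize), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        for name in sys.argv[1:]:
            print(md5sum(name), name)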
codec files to codecs.py and added logic so that multi mappings
in the decoding maps now result in mappings to None (undefined mapping)
in the encoding maps.
Assertion error message had typos in arguments to string format.
.cover files for modules in packages are now put in the right place.
The code that generates .cover files seemed to prepend a "./" to many
absolute paths, causing them to fail. The code now checks explicitly
for absolute paths and leaves them alone.
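A minimal sketch of the check described above (the helper name is an
assumption, not the script's actual code):

    import os

    def cover_path(filename, ext=".cover"):
        # Hypothetical helper: leave absolute paths alone instead of
        # prefixing them with "./".
        if not os.path.isabs(filename):
            filename = "./" + filename
        return filename + ext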
In trace/coverage code, recover from the case where a module has no __name__
attribute, e.g. when it is executed by PyRun_String(). In this case,
assign modulename to None and hope for the best. There isn't anywhere
to write out coverage data for this code anyway.
Also, replace several sys.stderr.write() calls with print >> sys.stderr.
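A sketch of the defensive lookup described above (names are assumptions; this
is not the script's code):

    def module_name(module):
        # Code run via PyRun_String() can live in a module object without a
        # __name__ attribute; fall back to None instead of raising.
        return getattr(module, "__name__", None)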
New features:
-C/--coverdir dir: Generate .cover files in specified directory
instead of in the directory where the .py file is.
-s: Print a short summary of files covered (# lines, % coverage,
name)
(Yes, this is a new feature right before the 2.1 release. No, I can't
imagine this would seriously break anybody's code. In fact, most
users of this script are probably *happy* to see this addition.)
This just copies the __name__=='__main__' logic from pydoc.py.
?!ng can decide whether he wants to create a main() in pydoc, or rip
it out of pydoc.py completely.
Guido told me to do this <wink>.
Greatly expanded docstrings, and fleshed out with examples.
New std test.
Added new get_close_matches() function for ESR.
Needs docs, but LaTeXification of the module docstring is all it needs.
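For reference, a quick example of the new function (matches come back
best-first; the import assumes this refers to the difflib module):

    import difflib

    # Classic spelling-correction style use: find the list entries that look
    # most like the (misspelled) word.
    print(difflib.get_close_matches("appel", ["ape", "apple", "peach", "puppy"]))
    # ['apple', 'ape']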
					real code
##					unused code
					real code
via untabifying and shifting the real code left. Semantically the
same but made the intent of the commented-out-in-column-0 unused code
unclear. The exact same unused code appears to have gotten copied from
file to file over the years.
codec to not apply Latin-1 mappings for keys which are not found
in the mapping dictionaries, but instead treat them as undefined
mappings.
The patch was originally written by Martin v. Loewis with some
additional (cosmetic) changes and an updated test script
by Marc-Andre Lemburg.
The standard codecs were recreated from the most current files
available at the Unicode.org site using the Tools/scripts/gencodec.py
tool.
This patch closes bugs #116285 and #119960.
mislabeled.
(Using -c and then -e rearranges some comments, so I won't check that
in -- but it's a good test anyway.
Note that pindent is not perfect -- e.g. it doesn't know about
triple-quoted strings!)
Problem:
A Python program can be completed and reformatted using
Tools/scripts/pindent.py. Unfortunately there is no option for removal
of the generated "# end"-tags. Although a few Python commands or a
"grep -v '# end '" can do wonders here, there are two drawbacks:
- not everyone has grep/time to write a Python script
- it is not checked whether the "# end"-tags were used validly
Solution:
add extra option "-e" (eliminate) to pindent.py
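For context, the tags in question look like this (a made-up snippet;
pindent.py inserts the "# end" comments, and the new -e option strips them
out again):

    # Made-up example of pindent-style block-closing tags.
    def f(x):
        if x > 0:
            return 1
        else:
            return 0
        # end if
    # end def f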
"""
If the filename being complained about contains a space, enclose the
filename in quotes.
The reason is simply that when I try and parse tabnanny's output, filenames
with spaces make it very difficult to determine where the filename stops
and the line number begins!
"""
Tim approves.
I slightly changed the patch (use 'in' instead of string.find()) and
arbitrarily bumped the __version__ variable up to 6.
I should have waited overnight <wink/sigh>. Nothing wrong with the one I
sent, but I couldn't resist going on to add new -r1 / -r2 cmdline options
for recreating the original files from ndiff's output. That's attached, if
you're game! Us Windows guys don't usually have a sed sitting around
<wink>.
Attached is a cleaned-up version of ndiff (added useful module
docstring, now echo'ed in case of cmd line mistake); added -q option
to suppress initial file identification lines; + other minor cleanups,
& a slightly faster match engine.
"""
the NEWS file of Python 1.5.2a2 inspired me to look at
Tools/scripts/untabify.py. I wonder why it accepts a -t argument
but ignores it. The following patch tries to make it somewhat useful
(i.e., to override the tabsize=8 setting). Is that agreeable?
"""
.mirrorinfo. Fix by me to call string.lstrip(filename) to cope with a
bug in strop.strip() in Python 1.4. Additionally, I changed all print
statements that print filenames etc. to put them in backquotes so that
it will be more obvious when there's a funny character on one of them
(such as a space...).
The 1.5.1 tabnanny.py suffers an assert error if fed a script whose last
line is both indented and lacks a newline:
if 1:
    print 'oh fudge' # no newline here:
The attached version repairs that.
This patch must hold the world record for living in my inbox:
From: John Ehresman <jehresma@dsg.harvard.edu>
Date: Wed, 23 Aug 1995 16:07:11 -0400
He provided a fix for the version that comes with Python 1.3:
ftpmirror.py revision 1.1... And it was still relevant!
is used), do a recursive delete. Use -r with even more caution!
Also changed usage message into a doc string, added a comment or two,
and rearranged a long line.
will compare equal even if the master file uses only \n to terminate
lines (this is by far the most common situation). Also, check for the
case where the master file is missing, and print the time difference
in seconds when the slave file appears newer than the master (for
debugging).
removed from .mirrorinfo. Now they are (even if -r is not specified
-- the files are not removed, just their .mirrorinfo entry).
Added a feature: the -s pattern option is also used to skip local
files when removing (i.e. -r won't remove local files matching the -s
patterns).
navigation links for HTML 3 version.
Forced a blank line above the footnotes separator for HTML 2; at
least one page did not get this spaced correctly.
print section titles even when the debugging output is not enabled.
Added -3 option to generate HTML 3.0 constructs where meaningful.
Removed repetitive garbage generation: the old version added simple
descriptive comments after every datadesc/funcdesc/*desc entry:
    function(args) -- function of module xxxx
        Description....
These comments are no longer generated:
    function(args)
        Description....
to 4 spaces per level (no longer 8).
(Makefile): Use .pyc versions of partparse.py and texi2html.py to generate
converted documentation formats. This reduces the startup costs;
probably doesn't affect anyone but me in reality, but helps when
working on the docs.