This replaces string module functions with string methods for the
stuff in the Tools directory (with one small bugfix in
bgen/bgen/scantools.py). Several uses of string.letters etc. are
still remaining.
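A small before/after illustration of the kind of conversion involved
(the snippet is illustrative only, not taken from the Tools scripts):

    s = "  Hello, World  "

    # Before: string module functions (Python 2 era), e.g.
    #   string.strip(s), string.split(s, ","), string.join(parts, "-")
    # After: the equivalent string methods.
    parts = s.strip().split(",")
    print("-".join(p.strip() for p in parts))    # Hello-World

    # Constants such as string.letters have no method equivalent, which
    # is why those uses are still remaining.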
[ 587993 ] SET_LINENO killer
Remove SET_LINENO. Tracing is now supported by inspecting co_lnotab.
Many sundry changes to document and adapt to this change.
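A minimal sketch of recovering line numbers from a code object without
SET_LINENO; dis.findlinestarts() (added later) decodes the same
per-code-object line-number table (co_lnotab in interpreters of that
era) that the tracing machinery now inspects:

    import dis

    def sample(a, b):
        total = a + b
        total *= 2
        return total

    # Map bytecode offsets to source line numbers from the code object's
    # line-number table; no SET_LINENO opcodes are involved.
    for offset, lineno in dis.findlinestarts(sample.__code__):
        print("offset %4d -> line %d" % (offset, lineno))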
* globaltrace_lt - handle case where inspect.getmodulename doesn't return
anything useful
* localtrace_trace - handle case where inspect.getframeinfo doesn't return
any context info
I think both of the last two are caused by exec'd or eval'd code
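A hedged sketch of the kind of defensive handling described; the
function names and fallbacks below are illustrative, not the actual
trace.py code:

    import inspect

    def safe_modulename(frame):
        # inspect.getmodulename() returns None for exec'd/eval'd code
        # whose co_filename is something like "<string>"; fall back
        # gracefully instead of failing.
        modulename = inspect.getmodulename(frame.f_code.co_filename)
        return modulename if modulename is not None else "<unknown>"

    def safe_context_line(frame):
        # inspect.getframeinfo() may come back with no source context at
        # all for the same reason; guard before indexing into it.
        info = inspect.getframeinfo(frame, context=1)
        if info.code_context:
            return info.code_context[0].rstrip()
        return "<no source line available>"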
1. BUGFIX: In function makefile(), strip blanks from the nodename.
This is necessary to match the behavior of parser.makeref() and
parser.do_node().
2. BUGFIX: fixed KeyError in end_ifset (well, I may have just made
it go away, rather than fix it)
3. BUGFIX: allow @menu and menu items inside @ifset or @ifclear
4. Support added for:
@uref URL reference
@image image file reference (see note below)
@multitable output an HTML table
@vtable
5. Partial support for accents, to match MAKEINFO output
6. I added a new command-line option, '-H basename', to specify
HTML Help output. This will cause three files to be created
in the current directory:
`basename`.hhp HTML Help Workshop project file
`basename`.hhc Contents file for the project
`basename`.hhk Index file for the project
When fed into HTML Help Workshop, the resulting file will be
named `basename`.chm.
7. A new class, HTMLHelp, to accomplish item 6 (a rough sketch of the
idea follows this list).
8. Various calls to HTMLHelp functions.
A NOTE ON IMAGES: Just as 'outputdirectory' must exist before
running this program, all referenced images must already exist
in outputdirectory.
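A rough, hypothetical sketch of item 7, as promised above: a class that
derives the three output names from `basename` and writes minimal
stubs. The class name, method, and skeletal [OPTIONS] content are
assumptions for illustration, not the actual HTMLHelp class:

    class HtmlHelpStub:
        # Hypothetical stand-in for the HTMLHelp class of item 7.

        def __init__(self, basename):
            # All three files are created in the current directory.
            self.hhp = basename + ".hhp"   # HTML Help Workshop project file
            self.hhc = basename + ".hhc"   # contents file
            self.hhk = basename + ".hhk"   # index file
            self.chm = basename + ".chm"   # what HTML Help Workshop produces

        def write(self, html_files):
            with open(self.hhp, "w") as f:
                f.write("[OPTIONS]\n")
                f.write("Compiled file=%s\n" % self.chm)
                f.write("Contents file=%s\n" % self.hhc)
                f.write("Index file=%s\n" % self.hhk)
                f.write("\n[FILES]\n")
                for name in html_files:
                    f.write(name + "\n")
            # Real contents/index files hold an HTML sitemap of the nodes;
            # empty placeholders are enough for this sketch.
            for name in (self.hhc, self.hhk):
                open(name, "w").close()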
FLD: wrapped some long lines.
The strerror attribute contained only partial information about the
exception and produced some very confusing error messages. By passing
err (the exception object itself) and letting it convert itself to a
string, the error messages are better.
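A small generic illustration of the difference (this is not the
script's actual message format):

    import os

    try:
        os.remove("/no/such/file")
    except OSError as err:
        # err.strerror is only the bare message text; letting the
        # exception convert itself to a string also includes the errno
        # and, for OSError, the offending filename.
        print("partial: %s" % err.strerror)   # No such file or directory
        print("full:    %s" % err)            # [Errno 2] ...: '/no/such/file'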
- Warnings about multiple / operators per line or statement are now on
by default, and -m turns these warnings off.
- Change the way multiple / operators are reported; a regular
recommendation is always emitted after the warning.
- Report ambiguous warnings (both int|long and float|complex used for
the same operator).
- Update the doc string again to clarify all this and describe the
possible messages more precisely.
percolated out, and some general cleanup. The output is still the
same, except it now prints "Index: <file>" instead of "Processing:
<file>", so that the output can be used as input for patch (but only
the diff-style parts of it).
Cater to that.
+ Major speed boost via not reading more of files than necessary. This
was no slouch before; now it screams.
+ Improve msg when giving up on a goofy future statement.
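A hedged sketch of the general speed trick, not the script's actual
scanner: read the file lazily and stop as soon as the interesting
header region is over, so the bulk of a large file is never read at
all.

    def read_header_lines(path):
        header = []
        with open(path) as f:
            for line in f:                # the file is read incrementally
                if line.startswith(("def ", "class ")):
                    break                 # past the header; stop reading here
                header.append(line)
        return header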
If multiple header files that include each other are processed
simultaneously, the corresponding modules import each other.
Specifically, if h2py
is invoked with sys/types.h first, later header files won't contain the
complete contents of TYPES.py.
I published it on the web as http://www.python.org/2.1/md5sum.py
so I thought I might as well check it in.
Works with Python 1.5.2 and later.
Works like the Linux tool ``md5sum file ...'' except it doesn't take
any options or read stdin.
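A minimal modern sketch of the same idea, using hashlib (which
postdates the 1.5.2-compatible md5 module the script itself targets):

    import hashlib
    import sys

    def md5sum(path, blocksize=64 * 1024):
        # Read in blocks so large files need not fit in memory.
        h = hashlib.md5()
        with open(path, "rb") as f:
            while True:
                block = f.read(blocksize)
                if not block:
                    break
                h.update(block)
        return h.hexdigest()

    if __name__ == "__main__":
        for name in sys.argv[1:]:
            print("%s  %s" % (md5sum(name), name))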
codec files to codecs.py and added logic so that multi mappings
in the decoding maps now result in mappings to None (undefined mapping)
in the encoding maps.
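codecs.make_encoding_map() captures exactly this rule; a tiny
demonstration with a made-up decoding map:

    import codecs

    decoding_map = {
        0x41: 0x00C4,   # two byte values decode to the same character ...
        0x42: 0x00C4,
        0x43: 0x00D6,
    }
    encoding_map = codecs.make_encoding_map(decoding_map)

    # ... so the reverse mapping for U+00C4 is None (undefined mapping),
    # while the unambiguous U+00D6 still round-trips.
    print(encoding_map[0x00C4])   # None
    print(encoding_map[0x00D6])   # 67 (0x43)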
Assertion error message had typos in arguments to string format.
.cover files for modules in packages are now put in the right place.
The code that generates .cover files seemed to prepend a "./" to many
absolute paths, causing them to fail. The code now checks explicitly
for absolute paths and leaves them alone.
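A hedged sketch of the path logic described (names are illustrative,
not the actual trace.py code; the coverdir parameter anticipates the
-C option mentioned below):

    import os

    def cover_filename(py_filename, coverdir=None):
        base = py_filename[:-3] if py_filename.endswith(".py") else py_filename
        if coverdir is not None:
            return os.path.join(coverdir, os.path.basename(base) + ".cover")
        # Prepending "./" would break absolute paths such as
        # /usr/lib/..., so leave those alone.
        if os.path.isabs(base):
            return base + ".cover"
        return os.path.join(os.curdir, base + ".cover")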
In trace/coverage code, recover from case where module has no __name__
attribute, when e.g. it is executed by PyRun_String(). In this case,
assign modulename to None and hope for the best. There isn't anywhere
to write out coverage data for this code anyway.
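A one-function sketch of that recovery (illustrative, not the actual
code):

    import sys

    def module_name_of(frame):
        # Code executed via PyRun_String() can run in a globals dict
        # with no __name__ at all; fall back to None instead of a
        # KeyError.
        return frame.f_globals.get("__name__", None)

    print(module_name_of(sys._getframe()))   # '__main__' when run as a script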
Also, replace several sys.stderr.writes with print >> sys.stderr.
New features:
-C/--coverdir dir: Generate .cover files in specified directory
instead of in the directory where the .py file is.
-s: Print a short summary of files covered (# lines, % coverage,
name)
(Yes, this is a new feature right before the 2.1 release. No, I can't
imagine this would seriously break anybody's code. In fact, most
users of this script are probably *happy* to see this addition.)
This just copies the __name__=='__main__' logic from pydoc.py.
?!ng can decide whether he wants to create a main() in pydoc, or rip
it out of pydoc.py completely.
Guido told me to do this <wink>.
Greatly expanded docstrings, and fleshed out with examples.
New std test.
Added new get_close_matches() function for ESR.
Needs docs, but LaTeXification of the module docstring is all it needs.
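get_close_matches() is in difflib; a quick usage example:

    import difflib

    words = ["apple", "ape", "appeal", "peach"]
    # Up to n matches scoring at least cutoff (0.0-1.0), best first.
    print(difflib.get_close_matches("appel", words, n=3, cutoff=0.6))
    # -> ['appeal', 'apple', 'ape']  ('peach' falls below the cutoff)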
\t\t\t\t\treal code
##\t\t\t\t\tunused code
\t\t\t\t\treal code
via untabifying and shifting the real code left. Semantically the
same but made the intent of the commented-out-in-column-0 unused code
unclear. The exact same unused code appears to have gotten copied from
file to file over the years.
codec to not apply Latin-1 mappings for keys which are not found
in the mapping dictionaries, but instead treat them as undefined
mappings.
The patch was originally written by Martin v. Loewis with some
additional (cosmetic) changes and an updated test script
by Marc-Andre Lemburg.
The standard codecs were recreated from the most current files
available at the Unicode.org site using the Tools/scripts/gencodec.py
tool.
This patch closes the bugs #116285 and #119960.
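A small illustration of the resulting behavior, using
codecs.charmap_decode() directly with a made-up mapping rather than
one of the generated codecs:

    import codecs

    # Only 0x41 is defined; 0x42 is explicitly undefined, 0x43 is absent.
    decoding_map = {0x41: 0x00C4, 0x42: None}

    print(codecs.charmap_decode(b"\x41", "strict", decoding_map))  # ('Ä', 1)

    for raw in (b"\x42", b"\x43"):
        try:
            codecs.charmap_decode(raw, "strict", decoding_map)
        except UnicodeDecodeError as exc:
            # No silent fall-through to Latin-1: undefined keys are errors.
            print("undefined:", exc)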
mislabeled.
(Using -c and then -e rearranges some comments, so I won't check that
in -- but it's a good test anyway.
Note that pindent is not perfect -- e.g. it doesn't know about
triple-quoted strings!)
Problem:
A Python program can be completed and reformatted using
Tools/scripts/pindent.py. Unfortunately there is no option for removal
of the generated "# end"-tags. Although a few Python commands or a
"grep -v '# end '" can do wonders here, there are two drawbacks:
- not everyone has grep/time to write a Python script
- it is not checked whether the "# end"-tags were used validly
Solution:
add extra option "-e" (eliminate) to pindent.py
"""
If the filename being complained about contains a space, enclose the
filename in quotes.
The reason is simply that when I try to parse tabnanny's output, filenames
with spaces make it very difficult to determine where the filename stops
and the line number begins!
"""
Tim approves.
I slightly changed the patch (use 'in' instead of string.find()) and
arbitrarily bumped the __version__ variable up to 6.
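Both changes are tiny; a hedged sketch (not the literal tabnanny.py
code):

    def format_offender(filename, lineno):
        # A filename containing a space makes "<file> <line>" output
        # ambiguous for anything parsing it, so wrap such names in quotes.
        if " " in filename:        # membership test instead of string.find()
            filename = '"' + filename + '"'
        return "%s %d" % (filename, lineno)

    print(format_offender("my script.py", 42))   # "my script.py" 42
    print(format_offender("clean.py", 7))        # clean.py 7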
I should have waited overnight <wink/sigh>. Nothing wrong with the one I
sent, but I couldn't resist going on to add new -r1 / -r2 cmdline options
for recreating the original files from ndiff's output. That's attached, if
you're game! Us Windows guys don't usually have a sed sitting around
<wink>.
Attached is a cleaned-up version of ndiff (added useful module
docstring, now echo'ed in case of cmd line mistake); added -q option
to suppress initial file identification lines; + other minor cleanups,
& a slightly faster match engine.
"""
The NEWS file of Python 1.5.2a2 inspired me to look at
Tools/scripts/untabify.py. I wonder why it accepts a -t argument
but ignores it. The following patch tries to make it somewhat useful
(i.e., to override the tabsize=8 setting). Is that agreeable?
"""