Found them using::
find . -name '*.py' | while read i ; do grep 'def[^(]*( ' $i /dev/null ; done
find . -name '*.py' | while read i ; do grep ' ):' $i /dev/null ; done
(I was doing this all over my own code anyway, because I'd been using spaces in
all defs, so I thought I'd make a run on the Python code as well. If you need
to do such fixes in your own code, you can use xx-rename or parenregu.el within
emacs.)
If the content type contains options, drop them to get the major/minor content type.
Modified from the supplied patch to support more whitespace variation.
Closes SF patch #613605.
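A hedged sketch of the content-type trimming described above (the helper name is made up, not the patch's code):

    def strip_options(content_type):
        # "text/html; charset=iso-8859-1" -> "text/html"
        if ';' in content_type:
            content_type = content_type.split(';', 1)[0]
        return content_type.strip().lower()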
(with one small bugfix in bgen/bgen/scantools.py)
This replaces string module functions with string methods
for the stuff in the Tools directory. Several uses of
string.letters etc. still remain.
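The flavor of the conversion, as an illustration rather than the actual diff:

    import string

    line = "  spam, eggs ,ham  "
    # before: string module functions
    words = string.split(string.strip(line), ",")
    # after: the equivalent string methods
    words = line.strip().split(",")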
The cause seems to be that when a file URL doesn't exist,
urllib.urlopen() raises OSError instead of IOError. Simply add this
to the except clause. Not elegant, but effective. :-)
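In the Python 2 idiom of the time, the fix amounts to roughly this (a sketch, not the exact patch):

    import urllib

    url = "file:/no/such/file"        # example URL
    try:
        f = urllib.urlopen(url)
    except (IOError, OSError), msg:
        # a file: URL for a missing file surfaces as OSError here
        print "Cannot open", url, "--", msg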
<christopher.mccafferty@csg.ch>:
Add javascript: and telnet: to the types of URLs we ignore.
Add support for several additional URL-valued attributes on the BODY,
FRAME, IFRAME, LINK, OBJECT, and SCRIPT elements.
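One way to picture the coverage described above; the table below is a hypothetical summary, not the checker's actual data structure:

    # tag -> attributes whose values are URLs worth checking
    URL_ATTRIBUTES = {
        'a':      ('href',),
        'area':   ('href',),
        'body':   ('background',),
        'frame':  ('src',),
        'iframe': ('src',),
        'img':    ('src',),
        'link':   ('href',),
        'object': ('data', 'codebase'),
        'script': ('src',),
    }

    # URL schemes that are recorded but never followed
    IGNORED_SCHEMES = ('mailto', 'news', 'javascript', 'telnet')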
- forced new done origins to set errors if they're in self.bad (fixes a
bug where, in some circumstances, only the first of several erroneous
references to a link is reported)
- suppressed adding duplicates to the self.todo list (cleans up the
printout in the wcgui details window)
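A hedged sketch of the duplicate suppression; the method name newtodolink() is assumed here and may not match the real code:

    def newtodolink(self, url, origin):
        # remember every origin, but list the URL itself only once
        if url in self.todo:
            if origin not in self.todo[url]:
                self.todo[url].append(origin)
        else:
            self.todo[url] = [origin]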
Turn all global variables into instance variables. Make all global functions methods, for easy
overriding. Restructure getpage() for easy overriding. Add
save_pickle() method and load_pickle() global function to make it
easier for other programs to emulate the toplevel interface.
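The pickle round trip is presumably along these lines (only the names save_pickle and load_pickle come from the entry; the rest is a sketch):

    import pickle

    def save_pickle(checker, dumpfile):
        # in the real code this is a Checker method
        f = open(dumpfile, "wb")
        pickle.dump(checker, f)
        f.close()

    def load_pickle(dumpfile):
        # module-level, so other programs can restore a Checker without
        # going through the command-line front end
        f = open(dumpfile, "rb")
        checker = pickle.load(f)
        f.close()
        return checker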
- Change the code that looks for robots.txt to always look in /, even
if the "root" path is somewhere deep down below.
- Add link processing in <AREA> tags.
- Change safeclose() to avoid crashing when the file has no geturl()
method.
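The always-at-the-root robots.txt lookup from the first item above comes down to something like this (a sketch using urlparse, not the checker's literal code):

    import urlparse     # Python 2 module name of the era

    def robots_url(root):
        # even if root is http://host/deep/down/below/, ask the server root
        scheme, netloc = urlparse.urlparse(root)[:2]
        return urlparse.urlunparse((scheme, netloc, "/robots.txt", "", "", ""))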
Links are now either in 'todo' or 'done', and ext links
are handled more like local links, except that no further
links are gathered (and sometimes they aren't checked,
e.g. for mailto and news URLs). The -x option reverses
its meaning: it disables checking of ext links (they are
moved to 'done' without checking). A new 'errors' table
collects pages with bad links as we go -- redundant,
but useful for the GUI version, which needs to report
this immediately. Some new methods, including reset().
New checkpoint format.
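The shape of the 'errors' table is presumably something like this (a guess at the layout, not the actual code):

    # page URL -> [(offending link, reason), ...], filled in as checking
    # proceeds so the GUI can report bad pages immediately
    errors = {}

    def record_error(errors, page_url, link, reason):
        # hypothetical helper
        errors.setdefault(page_url, []).append((link, reason))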
Adapted the GUI to the changes in the Checker class.
Added Quit and "Start over" buttons, and a checkbox
to disable checking external links. The details
window now also shows bad links emanating from the
selected page. Miscellaneous small changes.
- Faster HTML parser derived from SGMLParser (Fred Gansevles).
- All manipulations of todo, done, ext, bad are done via methods, so a
derived class can override. Also moved the 'done' marking to
dopage(), so run() is much simpler.
- Added a method status() which returns a string containing the
summary counts; added a "total" count.
- Drop the guessing of the file type before opening the document -- we
still need to check those links for validity!
- Added a subroutine to close a connection which first slurps up the
remaining data when it's an ftp URL -- apparently closing an ftp
connection without reading till the end makes it hang.
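A hedged sketch of that close helper:

    def safeclose(f):
        # drain whatever is left on an ftp: connection first; closing it
        # mid-stream apparently makes it hang (see above)
        if hasattr(f, "geturl") and f.geturl()[:4] == "ftp:":
            while f.read(8192):
                pass
        f.close()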
- Added -n option to skip running (only useful with -R).
- The Checker object now has an instance variable that is set to 1
whenever the checker's state changes. This flag is not pickled.
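One way to keep such a flag out of the pickle (a sketch of the idea; the attribute name 'changed' and the __getstate__ approach are assumptions, not necessarily what the code does):

    class Checker:
        def __init__(self):
            self.changed = 0            # set to 1 whenever state changes

        def __getstate__(self):
            # drop the volatile flag before pickling
            state = self.__dict__.copy()
            del state['changed']
            return state

        def __setstate__(self, state):
            self.__dict__.update(state)
            self.changed = 0            # freshly loaded, nothing changed yet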
Store the error messages in the 'bad' dictionary (sanitize them so they
are picklable; the sanitation code is now a subroutine); don't check
mailto: URLs; omit the colon in the Error message.
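The sanitation subroutine is presumably along these lines (a guess at the idea, not the actual code):

    def sanitize(msg):
        # exception objects can drag along sockets or file objects that
        # refuse to pickle; keep only plain strings in the 'bad' table
        if isinstance(msg, IOError):
            return tuple(map(str, msg.args))
        return msg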