Merged revisions 53623-53858 via svnmerge from
svn+ssh://pythondev@svn.python.org/python/trunk

r53624 | peter.astrand | 2007-02-02 20:06:36 +0100 (Fri, 02 Feb 2007) | 1 line: We had several if statements checking the value of a fd. This is unsafe, since valid fds might be zero. We should check for not None instead.
r53635 | kurt.kaiser | 2007-02-05 07:03:18 +0100 (Mon, 05 Feb 2007) | 2 lines: Add 'raw' support to configHandler. Patch 1650174 Tal Einat.
r53641 | kurt.kaiser | 2007-02-06 00:02:16 +0100 (Tue, 06 Feb 2007) | 5 lines: 1. Calltips now 'handle' tuples in the argument list (display '<tuple>' :) Suggested solution by Christos Georgiou, Bug 791968. 2. Clean up tests, were not failing when they should have been. 4. Remove some camelcase and an unneeded try/except block.
r53644 | kurt.kaiser | 2007-02-06 04:21:40 +0100 (Tue, 06 Feb 2007) | 2 lines: Clean up ModifiedInterpreter.runcode() structure
r53646 | peter.astrand | 2007-02-06 16:37:50 +0100 (Tue, 06 Feb 2007) | 1 line: Applied patch 1124861.3.patch to solve bug #1124861: Automatically create pipes on Windows, if GetStdHandle fails. Will backport.
r53648 | lars.gustaebel | 2007-02-06 19:38:13 +0100 (Tue, 06 Feb 2007) | 4 lines: Patch #1652681: create nonexistent files in append mode and allow appending to empty files.
r53649 | kurt.kaiser | 2007-02-06 20:09:43 +0100 (Tue, 06 Feb 2007) | 4 lines: Updated patch (CodeContext.061217.patch) to [ 1362975 ] CodeContext - Improved text indentation Tal Einat 16Dec06
r53650 | kurt.kaiser | 2007-02-06 20:21:19 +0100 (Tue, 06 Feb 2007) | 2 lines: narrow exception per [ 1540849 ] except too broad
r53653 | kurt.kaiser | 2007-02-07 04:39:41 +0100 (Wed, 07 Feb 2007) | 4 lines: [ 1621265 ] Auto-completion list placement: Move AC window below input line unless not enough space, then put it above. Patch: Tal Einat
r53654 | kurt.kaiser | 2007-02-07 09:07:13 +0100 (Wed, 07 Feb 2007) | 2 lines: Handle AttributeError during calltip lookup
r53656 | raymond.hettinger | 2007-02-07 21:08:22 +0100 (Wed, 07 Feb 2007) | 3 lines: SF #1615701: make d.update(m) honor __getitem__() and keys() in dict subclasses
r53658 | raymond.hettinger | 2007-02-07 22:04:20 +0100 (Wed, 07 Feb 2007) | 1 line: SF: 1397711 Set docs conflated immutable and hashable
r53660 | raymond.hettinger | 2007-02-07 22:42:17 +0100 (Wed, 07 Feb 2007) | 1 line: Check for a common user error with defaultdict().
r53662 | raymond.hettinger | 2007-02-07 23:24:07 +0100 (Wed, 07 Feb 2007) | 1 line: Bug #1575169: operator.isSequenceType() now returns False for subclasses of dict.
r53664 | raymond.hettinger | 2007-02-08 00:49:03 +0100 (Thu, 08 Feb 2007) | 1 line: Silence compiler warning
r53666 | raymond.hettinger | 2007-02-08 01:07:32 +0100 (Thu, 08 Feb 2007) | 1 line: Do not let overflows in enumerate() and count() pass silently.
r53668 | raymond.hettinger | 2007-02-08 01:50:39 +0100 (Thu, 08 Feb 2007) | 1 line: Bypass set specific optimizations for set and frozenset subclasses.
r53670 | raymond.hettinger | 2007-02-08 02:42:35 +0100 (Thu, 08 Feb 2007) | 1 line: Fix docstring bug
r53671 | martin.v.loewis | 2007-02-08 10:13:36 +0100 (Thu, 08 Feb 2007) | 3 lines: Bug #1653736: Complain about keyword arguments to time.isoformat. Will backport to 2.5.
r53679 | kurt.kaiser | 2007-02-08 23:58:18 +0100 (Thu, 08 Feb 2007) | 6 lines: Corrected some bugs in AutoComplete. Also, Page Up/Down in ACW implemented; mouse and cursor selection in ACWindow implemented; double Tab inserts current selection and closes ACW (similar to double-click and Return); scroll wheel now works in ACW. Added AutoComplete instructions to IDLE Help.
r53689 | martin.v.loewis | 2007-02-09 13:19:32 +0100 (Fri, 09 Feb 2007) | 3 lines: Bug #1653736: Properly discard third argument to slot_nb_inplace_power. Will backport.
r53691 | martin.v.loewis | 2007-02-09 13:36:48 +0100 (Fri, 09 Feb 2007) | 4 lines: Bug #1600860: Search for shared python library in LIBDIR, not lib/python/config, on "linux" and "gnu" systems. Will backport.
r53693 | martin.v.loewis | 2007-02-09 13:58:49 +0100 (Fri, 09 Feb 2007) | 2 lines: Update broken link. Will backport to 2.5.
r53697 | georg.brandl | 2007-02-09 19:48:41 +0100 (Fri, 09 Feb 2007) | 2 lines: Bug #1656078: typo in in profile docs.
r53731 | brett.cannon | 2007-02-11 06:36:00 +0100 (Sun, 11 Feb 2007) | 3 lines: Change a very minor inconsistency (that is purely cosmetic) in the AST definition.
r53735 | skip.montanaro | 2007-02-11 19:24:37 +0100 (Sun, 11 Feb 2007) | 1 line: fix trace.py --ignore-dir
r53741 | brett.cannon | 2007-02-11 20:44:41 +0100 (Sun, 11 Feb 2007) | 3 lines: Check in changed Python-ast.c from a cosmetic change to Python.asdl (in r53731).
r53751 | brett.cannon | 2007-02-12 04:51:02 +0100 (Mon, 12 Feb 2007) | 5 lines: Modify Parser/asdl_c.py so that the __version__ number for Python/Python-ast.c is specified at the top of the file. Also add a note that Python/Python-ast.c needs to be committed separately after a change to the AST grammar to capture the revision number of the change (which is what __version__ is set to).
r53752 | lars.gustaebel | 2007-02-12 10:25:53 +0100 (Mon, 12 Feb 2007) | 3 lines: Bug #1656581: Point out that external file objects are supposed to be at position 0.
r53754 | martin.v.loewis | 2007-02-12 13:21:10 +0100 (Mon, 12 Feb 2007) | 3 lines: Patch 1463026: Support default namespace in XMLGenerator. Fixes #847665. Will backport.
r53757 | armin.rigo | 2007-02-12 17:23:24 +0100 (Mon, 12 Feb 2007) | 4 lines: Fix the line to what is my guess at the original author's meaning. (The line has no effect anyway, but is present because it's customary call the base class __init__).
r53763 | martin.v.loewis | 2007-02-13 09:34:45 +0100 (Tue, 13 Feb 2007) | 3 lines: Patch #685268: Consider a package's __path__ in imputil. Will backport.
r53765 | martin.v.loewis | 2007-02-13 10:49:38 +0100 (Tue, 13 Feb 2007) | 2 lines: Patch #698833: Support file decryption in zipfile.
r53766 | martin.v.loewis | 2007-02-13 11:10:39 +0100 (Tue, 13 Feb 2007) | 3 lines: Patch #1517891: Make 'a' create the file if it doesn't exist. Fixes #1514451.
r53767 | martin.v.loewis | 2007-02-13 13:08:24 +0100 (Tue, 13 Feb 2007) | 3 lines: Bug #1658794: Remove extraneous 'this'. Will backport to 2.5.
r53769 | martin.v.loewis | 2007-02-13 13:14:19 +0100 (Tue, 13 Feb 2007) | 3 lines: Patch #1657276: Make NETLINK_DNRTMSG conditional. Will backport.
r53771 | lars.gustaebel | 2007-02-13 17:09:24 +0100 (Tue, 13 Feb 2007) | 4 lines: Patch #1647484: Renamed GzipFile's filename attribute to name. The filename attribute is still accessible as a property that emits a DeprecationWarning.
r53772 | lars.gustaebel | 2007-02-13 17:24:00 +0100 (Tue, 13 Feb 2007) | 3 lines: Strip the '.gz' extension from the filename that is written to the gzip header.
r53774 | martin.v.loewis | 2007-02-14 11:07:37 +0100 (Wed, 14 Feb 2007) | 2 lines: Patch #1432399: Add HCI sockets.
r53775 | martin.v.loewis | 2007-02-14 12:30:07 +0100 (Wed, 14 Feb 2007) | 2 lines: Update 1432399 to removal of _BT_SOCKADDR_MEMB.
r53776 | martin.v.loewis | 2007-02-14 12:30:56 +0100 (Wed, 14 Feb 2007) | 3 lines: Ignore directory time stamps when considering whether to rerun libffi configure.
r53778 | lars.gustaebel | 2007-02-14 15:45:12 +0100 (Wed, 14 Feb 2007) | 4 lines: A missing binary mode in AppendTest caused failures in Windows Buildbot.
r53782 | martin.v.loewis | 2007-02-15 10:51:35 +0100 (Thu, 15 Feb 2007) | 2 lines: Patch #1397848: add the reasoning behind no-resize-on-shrinkage.
r53783 | georg.brandl | 2007-02-15 11:37:59 +0100 (Thu, 15 Feb 2007) | 2 lines: Make functools.wraps() docs a bit clearer.
r53785 | georg.brandl | 2007-02-15 12:29:04 +0100 (Thu, 15 Feb 2007) | 2 lines: Patch #1494140: Add documentation for the new struct.Struct object.
r53787 | georg.brandl | 2007-02-15 12:29:55 +0100 (Thu, 15 Feb 2007) | 2 lines: Add missing \versionadded.
r53800 | brett.cannon | 2007-02-15 23:54:39 +0100 (Thu, 15 Feb 2007) | 11 lines: Update the encoding package's search function to use absolute imports when calling __import__. This helps make the expected search locations for encoding modules be more explicit. One could use an explicit value for __path__ when making the call to __import__ to force the exact location searched for encodings. This would give the most strict search path possible if one is worried about malicious code being imported. The unfortunate side-effect of that is that if __path__ was modified on 'encodings' on purpose in a safe way it would not be picked up in future __import__ calls.
r53801 | brett.cannon | 2007-02-16 20:33:01 +0100 (Fri, 16 Feb 2007) | 2 lines: Make the __import__ call in encodings.__init__ absolute with a level 0 call.
r53809 | vinay.sajip | 2007-02-16 23:36:24 +0100 (Fri, 16 Feb 2007) | 1 line: Minor fix for currentframe (SF #1652788).
r53818 | raymond.hettinger | 2007-02-19 03:03:19 +0100 (Mon, 19 Feb 2007) | 3 lines: Extend work on revision 52962: Eliminate redundant calls to PyObject_Hash().
r53820 | raymond.hettinger | 2007-02-19 05:08:43 +0100 (Mon, 19 Feb 2007) | 1 line: Add merge() function to heapq.
r53821 | raymond.hettinger | 2007-02-19 06:28:28 +0100 (Mon, 19 Feb 2007) | 1 line: Add tie-breaker count to preserve sort stability.
r53822 | raymond.hettinger | 2007-02-19 07:59:32 +0100 (Mon, 19 Feb 2007) | 1 line: Use C heapreplace() instead of slower _siftup() in pure python.
r53823 | raymond.hettinger | 2007-02-19 08:30:21 +0100 (Mon, 19 Feb 2007) | 1 line: Add test for merge stability
r53824 | raymond.hettinger | 2007-02-19 10:14:10 +0100 (Mon, 19 Feb 2007) | 1 line: Provide an example of defaultdict with non-zero constant factory function.
r53825 | lars.gustaebel | 2007-02-19 10:54:47 +0100 (Mon, 19 Feb 2007) | 2 lines: Moved misplaced news item.
r53826 | martin.v.loewis | 2007-02-19 11:55:19 +0100 (Mon, 19 Feb 2007) | 3 lines: Patch #1490190: posixmodule now includes os.chflags() and os.lchflags() functions on platforms where the underlying system calls are available.
r53827 | raymond.hettinger | 2007-02-19 19:15:04 +0100 (Mon, 19 Feb 2007) | 1 line: Fixup docstrings for merge().
r53829 | raymond.hettinger | 2007-02-19 21:44:04 +0100 (Mon, 19 Feb 2007) | 1 line: Fixup set/dict interoperability.
r53837 | raymond.hettinger | 2007-02-21 06:20:38 +0100 (Wed, 21 Feb 2007) | 1 line: Add itertools.izip_longest().
r53838 | raymond.hettinger | 2007-02-21 18:22:05 +0100 (Wed, 21 Feb 2007) | 1 line: Remove filler struct item and fix leak.
parent 63eecc7eee
commit cf297e46b8
@@ -33,7 +33,7 @@ Optional \var{mangle_from_} is a flag that, when \code{True}, puts a
line. This is the only guaranteed portable way to avoid having such
lines be mistaken for a \UNIX{} mailbox format envelope header separator (see
\ulink{WHY THE CONTENT-LENGTH FORMAT IS BAD}
{http://home.netscape.com/eng/mozilla/2.0/relnotes/demo/content-length.html}
{http://www.jwz.org/doc/content-length.html}
for details). \var{mangle_from_} defaults to \code{True}, but you
might want to set this to \code{False} if you are not writing \UNIX{}
mailbox format files.
@@ -311,16 +311,20 @@ languages):
When a letter is first encountered, it is missing from the mapping, so the
\member{default_factory} function calls \function{int()} to supply a default
count of zero. The increment operation then builds up the count for each
letter. This technique makes counting simpler and faster than an equivalent
technique using \method{dict.get()}:
letter.

The function \function{int()} which always returns zero is just a special
case of constant functions. A faster and more flexible way to create
constant functions is to use \function{itertools.repeat()} which can supply
any constant value (not just zero):

\begin{verbatim}
>>> d = {}
>>> for k in s:
        d[k] = d.get(k, 0) + 1

>>> d.items()
[('i', 4), ('p', 2), ('s', 4), ('m', 1)]
>>> def constant_factory(value):
...     return itertools.repeat(value).next
>>> d = defaultdict(constant_factory('<missing>'))
>>> d.update(name='John', action='ran')
>>> '%(name)s %(action)s to %(object)s' % d
'John ran to <missing>'
\end{verbatim}

Setting the \member{default_factory} to \class{set} makes the
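
For quick reference, a minimal usage sketch of the two idioms the revised text describes: counting with int() as the default factory, and a constant factory built on itertools.repeat(). The sample string and values are illustrative only; this snippet is not part of the patch.

    from collections import defaultdict
    import itertools

    s = 'mississippi'

    # int() as default_factory: a missing key starts at zero.
    d = defaultdict(int)
    for k in s:
        d[k] += 1
    print(sorted(d.items()))    # [('i', 4), ('m', 1), ('p', 2), ('s', 4)]

    # A constant factory: repeat(value).next returns value on every call.
    def constant_factory(value):
        return itertools.repeat(value).next

    d = defaultdict(constant_factory('<missing>'))
    d.update(name='John', action='ran')
    print('%(name)s %(action)s to %(object)s' % d)   # John ran to <missing>
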
@@ -66,15 +66,16 @@ two:

\begin{funcdesc}{update_wrapper}
{wrapper, wrapped\optional{, assigned}\optional{, updated}}
Update a wrapper function to look like the wrapped function. The optional
arguments are tuples to specify which attributes of the original
Update a \var{wrapper} function to look like the \var{wrapped} function.
The optional arguments are tuples to specify which attributes of the original
function are assigned directly to the matching attributes on the wrapper
function and which attributes of the wrapper function are updated with
the corresponding attributes from the original function. The default
values for these arguments are the module level constants
\var{WRAPPER_ASSIGNMENTS} (which assigns to the wrapper function's name,
module and documentation string) and \var{WRAPPER_UPDATES} (which
updates the wrapper function's instance dictionary).
\var{WRAPPER_ASSIGNMENTS} (which assigns to the wrapper function's
\var{__name__}, \var{__module__} and \var{__doc__}, the documentation string)
and \var{WRAPPER_UPDATES} (which updates the wrapper function's \var{__dict__},
i.e. the instance dictionary).

The main intended use for this function is in decorator functions
which wrap the decorated function and return the wrapper. If the

@@ -98,6 +99,7 @@ as a function decorator when defining a wrapper function. For example:
...
>>> @my_decorator
... def example():
...     """Docstring"""
...     print 'Called example function'
...
>>> example()

@@ -105,9 +107,12 @@ as a function decorator when defining a wrapper function. For example:
Called example function
>>> example.__name__
'example'
>>> example.__doc__
'Docstring'
\end{verbatim}
Without the use of this decorator factory, the name of the example
function would have been \code{'wrapper'}.
function would have been \code{'wrapper'}, and the docstring of the
original \function{example()} would have been lost.
\end{funcdesc}
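
As a quick illustration of the behaviour the clarified text describes, a small sketch (illustrative only, not from the patch) using functools.wraps(), which applies update_wrapper() with the default WRAPPER_ASSIGNMENTS and WRAPPER_UPDATES:

    import functools

    def my_decorator(f):
        @functools.wraps(f)              # copies __name__, __module__, __doc__
        def wrapper(*args, **kwds):      # and updates wrapper.__dict__
            print('Calling decorated function')
            return f(*args, **kwds)
        return wrapper

    @my_decorator
    def example():
        """Docstring"""
        print('Called example function')

    example()
    print(example.__name__)    # 'example', not 'wrapper'
    print(example.__doc__)     # 'Docstring' is preserved as well
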
@@ -88,7 +88,18 @@ True
>>>
\end{verbatim}

The module also offers two general purpose functions based on heaps.
The module also offers three general purpose functions based on heaps.

\begin{funcdesc}{merge}{*iterables}
Merge multiple sorted inputs into a single sorted output (for example, merge
timestamped entries from multiple log files). Returns an iterator over
over the sorted values.

Similar to \code{sorted(itertools.chain(*iterables))} but returns an iterable,
does not pull the data into memory all at once, and assumes that each of the
input streams is already sorted (smallest to largest).
\versionadded{2.6}
\end{funcdesc}

\begin{funcdesc}{nlargest}{n, iterable\optional{, key}}
Return a list with the \var{n} largest elements from the dataset defined

@@ -110,7 +121,7 @@ Equivalent to: \samp{sorted(iterable, key=key)[:n]}
\versionchanged[Added the optional \var{key} argument]{2.5}
\end{funcdesc}

Both functions perform best for smaller values of \var{n}. For larger
The latter two functions perform best for smaller values of \var{n}. For larger
values, it is more efficient to use the \function{sorted()} function. Also,
when \code{n==1}, it is more efficient to use the builtin \function{min()}
and \function{max()} functions.
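
A short usage sketch of the new merge() function documented above (the input lists are illustrative only):

    import heapq, itertools

    a, b, c = [1, 3, 5, 7], [0, 2, 4, 8], [5, 10, 15, 20]

    # merge() yields the values lazily, already in sorted order.
    print(list(heapq.merge(a, b, c)))
    # [0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20]

    # Same result, but sorted() pulls everything into memory first.
    print(sorted(itertools.chain(a, b, c)))
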
@@ -302,6 +302,33 @@ by functions or loops that truncate the stream.
don't care about trailing, unmatched values from the longer iterables.
\end{funcdesc}

\begin{funcdesc}{izip_longest}{*iterables\optional{, fillvalue}}
Make an iterator that aggregates elements from each of the iterables.
If the iterables are of uneven length, missing values are filled-in
with \var{fillvalue}. Iteration continues until the longest iterable
is exhausted. Equivalent to:

\begin{verbatim}
def izip_longest(*args, **kwds):
    fillvalue = kwds.get('fillvalue')
    def sentinel(counter = ([fillvalue]*(len(args)-1)).pop):
        yield counter()     # yields the fillvalue, or raises IndexError
    fillers = repeat(fillvalue)
    iters = [chain(it, sentinel(), fillers) for it in args]
    try:
        for tup in izip(*iters):
            yield tup
    except IndexError:
        pass
\end{verbatim}

If one of the iterables is potentially infinite, then the
\function{izip_longest()} function should be wrapped with something
that limits the number of calls (for example \function{islice()} or
\function{take()}).
\versionadded{2.6}
\end{funcdesc}

\begin{funcdesc}{repeat}{object\optional{, times}}
Make an iterator that returns \var{object} over and over again.
Runs indefinitely unless the \var{times} argument is specified.
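
A brief usage sketch of izip_longest() as documented above; the islice() wrapper shows the recommended way to bound a potentially infinite input (values are illustrative, not from the patch):

    from itertools import izip_longest, islice, count

    print(list(izip_longest('abc', [1, 2], fillvalue='-')))
    # [('a', 1), ('b', 2), ('c', '-')]

    # count() never ends, so limit the number of tuples explicitly.
    print(list(islice(izip_longest('abc', count(), fillvalue='-'), 5)))
    # [('a', 0), ('b', 1), ('c', 2), ('-', 3), ('-', 4)]
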
@@ -481,7 +481,7 @@ The case conversion functions in the
locale settings. When a call to the \function{setlocale()} function
changes the \constant{LC_CTYPE} settings, the variables
\code{string.lowercase}, \code{string.uppercase} and
\code{string.letters} are recalculated. Note that this code that uses
\code{string.letters} are recalculated. Note that code that uses
these variable through `\keyword{from} ... \keyword{import} ...',
e.g.\ \code{from string import letters}, is not affected by subsequent
\function{setlocale()} calls.
@@ -758,6 +758,26 @@ Availability: Macintosh, \UNIX, Windows.
\versionadded{2.3}
\end{funcdesc}

\begin{funcdesc}{chflags}{path, flags}
Set the flags of \var{path} to the numeric \var{flags}.
\var{flags} may take a combination (bitwise OR) of the following values
(as defined in the \module{stat} module):
\begin{itemize}
\item \code{UF_NODUMP}
\item \code{UF_IMMUTABLE}
\item \code{UF_APPEND}
\item \code{UF_OPAQUE}
\item \code{UF_NOUNLINK}
\item \code{SF_ARCHIVED}
\item \code{SF_IMMUTABLE}
\item \code{SF_APPEND}
\item \code{SF_NOUNLINK}
\item \code{SF_SNAPSHOT}
\end{itemize}
Availability: Macintosh, \UNIX.
\versionadded{2.6}
\end{funcdesc}

\begin{funcdesc}{chroot}{path}
Change the root directory of the current process to \var{path}.
Availability: Macintosh, \UNIX.

@@ -804,6 +824,13 @@ and \var{gid}. To leave one of the ids unchanged, set it to -1.
Availability: Macintosh, \UNIX.
\end{funcdesc}

\begin{funcdesc}{lchflags}{path, flags}
Set the flags of \var{path} to the numeric \var{flags}, like
\function{chflags()}, but do not follow symbolic links.
Availability: \UNIX.
\versionadded{2.6}
\end{funcdesc}

\begin{funcdesc}{lchown}{path, uid, gid}
Change the owner and group id of \var{path} to the numeric \var{uid}
and gid. This function will not follow symbolic links.
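
A hedged sketch of the new chflags()/lchflags() calls described above. The flag constants come from the stat module, the path is a placeholder, and the calls only succeed on platforms (for example the BSDs and Mac OS X) that provide the underlying system call:

    import os, stat

    path = '/tmp/flags-demo.txt'        # illustrative path
    open(path, 'w').close()

    # Mark the file append-only and exclude it from dump(8) backups.
    os.chflags(path, stat.UF_APPEND | stat.UF_NODUMP)

    # Clear the user flags again so the file can be modified or removed.
    os.chflags(path, 0)

    # os.lchflags(path, flags) does the same but does not follow symlinks.
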
@@ -44,8 +44,8 @@ file type and creator codes will not be correct.
\end{funcdesc}

\begin{funcdesc}{copystat}{src, dst}
Copy the permission bits, last access time, and last modification
time from \var{src} to \var{dst}. The file contents, owner, and
Copy the permission bits, last access time, last modification time,
and flags from \var{src} to \var{dst}. The file contents, owner, and
group are unaffected. \var{src} and \var{dst} are path names given
as strings.
\end{funcdesc}
@@ -1212,7 +1212,7 @@ Notes:
\label{types-set}}
\obindex{set}

A \dfn{set} object is an unordered collection of immutable values.
A \dfn{set} object is an unordered collection of distinct hashable objects.
Common uses include membership testing, removing duplicates from a sequence,
and computing mathematical operations such as intersection, union, difference,
and symmetric difference.
@@ -29,6 +29,15 @@ The module defines the following exception and functions:
exactly.
\end{funcdesc}

\begin{funcdesc}{pack_into}{fmt, buffer, offset, v1, v2, \moreargs}
Pack the values \code{\var{v1}, \var{v2}, \textrm{\ldots}} according to the given
format, write the packed bytes into the writable \var{buffer} starting at
\var{offset}.
Note that the offset is not an optional argument.

\versionadded{2.5}
\end{funcdesc}

\begin{funcdesc}{unpack}{fmt, string}
Unpack the string (presumably packed by \code{pack(\var{fmt},
\textrm{\ldots})}) according to the given format. The result is a

@@ -37,6 +46,16 @@ The module defines the following exception and functions:
(\code{len(\var{string})} must equal \code{calcsize(\var{fmt})}).
\end{funcdesc}

\begin{funcdesc}{unpack_from}{fmt, buffer\optional{,offset \code{= 0}}}
Unpack the \var{buffer} according to tthe given format.
The result is a tuple even if it contains exactly one item. The
\var{buffer} must contain at least the amount of data required by the
format (\code{len(buffer[offset:])} must be at least
\code{calcsize(\var{fmt})}).

\versionadded{2.5}
\end{funcdesc}

\begin{funcdesc}{calcsize}{fmt}
Return the size of the struct (and hence of the string)
corresponding to the given format.

@@ -208,3 +227,43 @@ in effect; standard size and alignment does not enforce any alignment.
\seemodule{array}{Packed binary storage of homogeneous data.}
\seemodule{xdrlib}{Packing and unpacking of XDR data.}
\end{seealso}

\subsection{Struct Objects \label{struct-objects}}

The \module{struct} module also defines the following type:

\begin{classdesc}{Struct}{format}
Return a new Struct object which writes and reads binary data according to
the format string \var{format}. Creating a Struct object once and calling
its methods is more efficient than calling the \module{struct} functions
with the same format since the format string only needs to be compiled once.

\versionadded{2.5}
\end{classdesc}

Compiled Struct objects support the following methods and attributes:

\begin{methoddesc}[Struct]{pack}{v1, v2, \moreargs}
Identical to the \function{pack()} function, using the compiled format.
(\code{len(result)} will equal \member{self.size}.)
\end{methoddesc}

\begin{methoddesc}[Struct]{pack_into}{buffer, offset, v1, v2, \moreargs}
Identical to the \function{pack_into()} function, using the compiled format.
\end{methoddesc}

\begin{methoddesc}[Struct]{unpack}{string}
Identical to the \function{unpack()} function, using the compiled format.
(\code{len(string)} must equal \member{self.size}).
\end{methoddesc}

\begin{methoddesc}[Struct]{unpack_from}{buffer\optional{,offset
\code{= 0}}}
Identical to the \function{unpack_from()} function, using the compiled format.
(\code{len(buffer[offset:])} must be at least \member{self.size}).
\end{methoddesc}

\begin{memberdesc}[Struct]{format}
The format string used to construct this Struct object.
\end{memberdesc}
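
A compact usage sketch of the Struct object documented above; the format string and values are illustrative, not from the patch:

    import struct, ctypes

    # Compile the format once, reuse it for many pack/unpack calls.
    record = struct.Struct('<IH2s')     # little-endian: uint32, uint16, 2 bytes
    print(record.size)                  # 8
    print(record.format)                # '<IH2s'

    packed = record.pack(1, 2, 'ok')
    print(record.unpack(packed))        # (1, 2, 'ok')

    # pack_into()/unpack_from() work on a writable buffer at a given offset.
    buf = ctypes.create_string_buffer(16)
    record.pack_into(buf, 4, 7, 8, 'hi')
    print(record.unpack_from(buf, 4))   # (7, 8, 'hi')
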
@@ -36,7 +36,8 @@ Some facts and figures:
\lineii{'r:'}{Open for reading exclusively without compression.}
\lineii{'r:gz'}{Open for reading with gzip compression.}
\lineii{'r:bz2'}{Open for reading with bzip2 compression.}
\lineii{'a' or 'a:'}{Open for appending with no compression.}
\lineii{'a' or 'a:'}{Open for appending with no compression. The file
is created if it does not exist.}
\lineii{'w' or 'w:'}{Open for uncompressed writing.}
\lineii{'w:gz'}{Open for gzip compressed writing.}
\lineii{'w:bz2'}{Open for bzip2 compressed writing.}

@@ -48,8 +49,8 @@ Some facts and figures:
avoid this. If a compression method is not supported,
\exception{CompressionError} is raised.

If \var{fileobj} is specified, it is used as an alternative to
a file object opened for \var{name}.
If \var{fileobj} is specified, it is used as an alternative to a file
object opened for \var{name}. It is supposed to be at position 0.

For special purposes, there is a second format for \var{mode}:
\code{'filemode|[compression]'}. \function{open()} will return a

@@ -160,6 +161,7 @@ tar archive several times. Each archive member is represented by a

If \var{fileobj} is given, it is used for reading or writing data.
If it can be determined, \var{mode} is overridden by \var{fileobj}'s mode.
\var{fileobj} will be used from position 0.
\begin{notice}
\var{fileobj} is not closed, when \class{TarFile} is closed.
\end{notice}
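
A small sketch of the appending behaviour described above: with mode 'a', a missing archive is now created instead of raising an error. The file names are placeholders and assume those files exist in the working directory:

    import tarfile

    # First call: archive.tar does not exist yet, so 'a' creates it.
    tar = tarfile.open('archive.tar', 'a')
    tar.add('setup.py')                 # any existing file
    tar.close()

    # Second call: the same mode simply appends another member.
    tar = tarfile.open('archive.tar', 'a')
    tar.add('README.txt')
    tar.close()

    print(tarfile.open('archive.tar').getnames())
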
@@ -17,8 +17,10 @@ understanding of the format, as defined in
{PKZIP Application Note}.

This module does not currently handle ZIP files which have appended
comments, or multi-disk ZIP files. It can handle ZIP files that use the
ZIP64 extensions (that is ZIP files that are more than 4 GByte in size).
comments, or multi-disk ZIP files. It can handle ZIP files that use
the ZIP64 extensions (that is ZIP files that are more than 4 GByte in
size). It supports decryption of encrypted files in ZIP archives, but
it cannot currently create an encrypted file.

The available attributes of this module are:

@@ -99,6 +101,8 @@ cat myzip.zip >> python.exe
\end{verbatim}

also works, and at least \program{WinZip} can read such files.
If \var{mode} is \code{a} and the file does not exist at all,
it is created.
\var{compression} is the ZIP compression method to use when writing
the archive, and should be \constant{ZIP_STORED} or
\constant{ZIP_DEFLATED}; unrecognized values will cause

@@ -112,6 +116,9 @@ cat myzip.zip >> python.exe
ZIP file would require ZIP64 extensions. ZIP64 extensions are disabled by
default because the default \program{zip} and \program{unzip} commands on
\UNIX{} (the InfoZIP utilities) don't support these extensions.

\versionchanged[If the file does not exist, it is created if the
mode is 'a']{2.6}
\end{classdesc}

\begin{methoddesc}{close}{}

@@ -138,9 +145,18 @@ cat myzip.zip >> python.exe
Print a table of contents for the archive to \code{sys.stdout}.
\end{methoddesc}

\begin{methoddesc}{read}{name}
\begin{methoddesc}{setpassword}{pwd}
Set \var{pwd} as default password to extract encrypted files.
\versionadded{2.6}
\end{methoddesc}

\begin{methoddesc}{read}{name\optional{, pwd}}
Return the bytes of the file in the archive. The archive must be
open for read or append.
open for read or append. \var{pwd} is the password used for encrypted
files and, if specified, it will override the default password set with
\method{setpassword()}.

\versionchanged[\var{pwd} was added]{2.6}
\end{methoddesc}

\begin{methoddesc}{testzip}{}
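
A hedged sketch of the decryption support documented above; the archive name, member name, and password are placeholders, and creating encrypted archives is still unsupported:

    import zipfile

    zf = zipfile.ZipFile('encrypted.zip')       # an existing, encrypted archive

    # Either set a default password once...
    zf.setpassword('secret')
    data = zf.read('member.txt')

    # ...or pass a per-call password, which overrides the default.
    data = zf.read('member.txt', 'secret')

    zf.close()
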
@@ -1,4 +1,4 @@
/* File automatically generated by Parser/asdl_c.py */
/* File automatically generated by Parser/asdl_c.py. */

#include "asdl.h"
@@ -100,12 +100,15 @@ PyAPI_FUNC(int) PyDict_DelItem(PyObject *mp, PyObject *key);
PyAPI_FUNC(void) PyDict_Clear(PyObject *mp);
PyAPI_FUNC(int) PyDict_Next(
    PyObject *mp, Py_ssize_t *pos, PyObject **key, PyObject **value);
PyAPI_FUNC(int) _PyDict_Next(
    PyObject *mp, Py_ssize_t *pos, PyObject **key, PyObject **value, long *hash);
PyAPI_FUNC(PyObject *) PyDict_Keys(PyObject *mp);
PyAPI_FUNC(PyObject *) PyDict_Values(PyObject *mp);
PyAPI_FUNC(PyObject *) PyDict_Items(PyObject *mp);
PyAPI_FUNC(Py_ssize_t) PyDict_Size(PyObject *mp);
PyAPI_FUNC(PyObject *) PyDict_Copy(PyObject *mp);
PyAPI_FUNC(int) PyDict_Contains(PyObject *mp, PyObject *key);
PyAPI_FUNC(int) _PyDict_Contains(PyObject *mp, PyObject *key, long hash);

/* PyDict_Update(mp, other) is equivalent to PyDict_Merge(mp, other, 1). */
PyAPI_FUNC(int) PyDict_Update(PyObject *mp, PyObject *other);
@@ -487,7 +487,7 @@ def localcontext(ctx=None):
    28
    >>> with localcontext():
    ...     ctx = getcontext()
    ...     ctx.prec() += 2
    ...     ctx.prec += 2
    ...     print(ctx.prec)
    ...
    30
@@ -185,9 +185,7 @@ class build_ext (Command):

        # for extensions under Cygwin and AtheOS Python's library directory must be
        # appended to library_dirs
        if sys.platform[:6] == 'cygwin' or sys.platform[:6] == 'atheos' or \
           ((sys.platform.startswith('linux') or sys.platform.startswith('gnu')) and
            sysconfig.get_config_var('Py_ENABLE_SHARED')):
        if sys.platform[:6] == 'cygwin' or sys.platform[:6] == 'atheos':
            if string.find(sys.executable, sys.exec_prefix) != -1:
                # building third party extensions
                self.library_dirs.append(os.path.join(sys.prefix, "lib",

@@ -197,6 +195,17 @@ class build_ext (Command):
                # building python standard extensions
                self.library_dirs.append('.')

        # for extensions under Linux with a shared Python library,
        # Python's library directory must be appended to library_dirs
        if (sys.platform.startswith('linux') or sys.platform.startswith('gnu')) \
                and sysconfig.get_config_var('Py_ENABLE_SHARED'):
            if string.find(sys.executable, sys.exec_prefix) != -1:
                # building third party extensions
                self.library_dirs.append(sysconfig.get_config_var('LIBDIR'))
            else:
                # building python standard extensions
                self.library_dirs.append('.')

        # The argument parsing will result in self.define being a string, but
        # it has to be a list of 2-tuples. All the preprocessor symbols
        # specified by the 'define' option will be set to '1'. Multiple
@@ -93,8 +93,10 @@ def search_function(encoding):
        if not modname or '.' in modname:
            continue
        try:
            mod = __import__('encodings.' + modname,
                             globals(), locals(), _import_tail)
            # Import is absolute to prevent the possibly malicious import of a
            # module with side-effects that is not in the 'encodings' package.
            mod = __import__('encodings.' + modname, fromlist=_import_tail,
                             level=0)
        except ImportError:
            pass
        else:
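
For reference, a tiny sketch of what the absolute form of __import__ used above does; the utf_8 module name is just an example and is not part of the patch:

    # fromlist makes __import__ return the submodule itself rather than the
    # top-level 'encodings' package; level=0 forces an absolute import.
    mod = __import__('encodings.utf_8', fromlist=['*'], level=0)
    print(mod.getregentry().name)       # 'utf-8'
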
Lib/gzip.py (18 changed lines)
@@ -106,7 +106,7 @@ class GzipFile:
        self._new_member = True
        self.extrabuf = ""
        self.extrasize = 0
        self.filename = filename
        self.name = filename
        # Starts small, scales exponentially
        self.min_readsize = 100

@@ -127,14 +127,20 @@ class GzipFile:
        if self.mode == WRITE:
            self._write_gzip_header()

    @property
    def filename(self):
        import warnings
        warnings.warn("use the name attribute", DeprecationWarning)
        if self.mode == WRITE and self.name[-3:] != ".gz":
            return self.name + ".gz"
        return self.name

    def __repr__(self):
        s = repr(self.fileobj)
        return '<gzip ' + s[1:-1] + ' ' + hex(id(self)) + '>'

    def _init_write(self, filename):
        if filename[-3:] != '.gz':
            filename = filename + '.gz'
        self.filename = filename
        self.name = filename
        self.crc = zlib.crc32("")
        self.size = 0
        self.writebuf = []

@@ -143,7 +149,9 @@ class GzipFile:
    def _write_gzip_header(self):
        self.fileobj.write('\037\213')             # magic header
        self.fileobj.write('\010')                 # compression method
        fname = self.filename[:-3]
        fname = self.name
        if fname.endswith(".gz"):
            fname = fname[:-3]
        flags = 0
        if fname:
            flags = FNAME
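
A small sketch of the renamed attribute: name is the new spelling, and filename survives as a property that warns. The file name is a placeholder and the snippet assumes the patched module:

    import gzip, warnings

    f = gzip.open('data.txt.gz', 'wb')
    f.write('hello')
    print(f.name)                       # 'data.txt.gz'

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter('always')
        old = f.filename                # still works, but...
    print(caught[0].category.__name__)  # DeprecationWarning

    f.close()
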
Lib/heapq.py (42 changed lines)
@ -126,8 +126,8 @@ Believe me, real good tape sorts were quite spectacular to watch!
|
|||
From all times, sorting has always been a Great Art! :-)
|
||||
"""
|
||||
|
||||
__all__ = ['heappush', 'heappop', 'heapify', 'heapreplace', 'nlargest',
|
||||
'nsmallest']
|
||||
__all__ = ['heappush', 'heappop', 'heapify', 'heapreplace', 'merge',
|
||||
'nlargest', 'nsmallest']
|
||||
|
||||
from itertools import islice, repeat, count, imap, izip, tee
|
||||
from operator import itemgetter, neg
|
||||
|
@ -308,6 +308,41 @@ try:
|
|||
except ImportError:
|
||||
pass
|
||||
|
||||
def merge(*iterables):
|
||||
'''Merge multiple sorted inputs into a single sorted output.
|
||||
|
||||
Similar to sorted(itertools.chain(*iterables)) but returns an iterable,
|
||||
does not pull the data into memory all at once, and assumes that each of
|
||||
the input streams is already sorted (smallest to largest).
|
||||
|
||||
>>> list(merge([1,3,5,7], [0,2,4,8], [5,10,15,20], [], [25]))
|
||||
[0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20, 25]
|
||||
|
||||
'''
|
||||
_heappop, _heapreplace, _StopIteration = heappop, heapreplace, StopIteration
|
||||
|
||||
h = []
|
||||
h_append = h.append
|
||||
for itnum, it in enumerate(map(iter, iterables)):
|
||||
try:
|
||||
next = it.next
|
||||
h_append([next(), itnum, next])
|
||||
except _StopIteration:
|
||||
pass
|
||||
heapify(h)
|
||||
|
||||
while 1:
|
||||
try:
|
||||
while 1:
|
||||
v, itnum, next = s = h[0] # raises IndexError when h is empty
|
||||
yield v
|
||||
s[0] = next() # raises StopIteration when exhausted
|
||||
_heapreplace(h, s) # restore heap condition
|
||||
except _StopIteration:
|
||||
_heappop(h) # remove empty iterator
|
||||
except IndexError:
|
||||
return
|
||||
|
||||
# Extend the implementations of nsmallest and nlargest to use a key= argument
|
||||
_nsmallest = nsmallest
|
||||
def nsmallest(n, iterable, key=None):
|
||||
|
@ -341,3 +376,6 @@ if __name__ == "__main__":
|
|||
while heap:
|
||||
sort.append(heappop(heap))
|
||||
print(sort)
|
||||
|
||||
import doctest
|
||||
doctest.testmod()
|
||||
|
|
|
@ -10,13 +10,14 @@ HIDE_SEQUENCES = ("<FocusOut>", "<ButtonPress>")
|
|||
KEYPRESS_VIRTUAL_EVENT_NAME = "<<autocompletewindow-keypress>>"
|
||||
# We need to bind event beyond <Key> so that the function will be called
|
||||
# before the default specific IDLE function
|
||||
KEYPRESS_SEQUENCES = ("<Key>", "<Key-BackSpace>", "<Key-Return>",
|
||||
"<Key-Up>", "<Key-Down>", "<Key-Home>", "<Key-End>")
|
||||
KEYPRESS_SEQUENCES = ("<Key>", "<Key-BackSpace>", "<Key-Return>", "<Key-Tab>",
|
||||
"<Key-Up>", "<Key-Down>", "<Key-Home>", "<Key-End>",
|
||||
"<Key-Prior>", "<Key-Next>")
|
||||
KEYRELEASE_VIRTUAL_EVENT_NAME = "<<autocompletewindow-keyrelease>>"
|
||||
KEYRELEASE_SEQUENCE = "<KeyRelease>"
|
||||
LISTUPDATE_SEQUENCE = "<ButtonRelease>"
|
||||
LISTUPDATE_SEQUENCE = "<B1-ButtonRelease>"
|
||||
WINCONFIG_SEQUENCE = "<Configure>"
|
||||
DOUBLECLICK_SEQUENCE = "<Double-ButtonRelease>"
|
||||
DOUBLECLICK_SEQUENCE = "<B1-Double-ButtonRelease>"
|
||||
|
||||
class AutoCompleteWindow:
|
||||
|
||||
|
@ -49,6 +50,8 @@ class AutoCompleteWindow:
|
|||
# event ids
|
||||
self.hideid = self.keypressid = self.listupdateid = self.winconfigid \
|
||||
= self.keyreleaseid = self.doubleclickid = None
|
||||
# Flag set if last keypress was a tab
|
||||
self.lastkey_was_tab = False
|
||||
|
||||
def _change_start(self, newstart):
|
||||
i = 0
|
||||
|
@ -118,11 +121,6 @@ class AutoCompleteWindow:
|
|||
i = 0
|
||||
while i < len(lts) and i < len(selstart) and lts[i] == selstart[i]:
|
||||
i += 1
|
||||
previous_completion = self.completions[cursel - 1]
|
||||
while cursel > 0 and selstart[:i] <= previous_completion:
|
||||
i += 1
|
||||
if selstart == previous_completion:
|
||||
break # maybe we have a duplicate?
|
||||
newstart = selstart[:i]
|
||||
self._change_start(newstart)
|
||||
|
||||
|
@ -206,7 +204,7 @@ class AutoCompleteWindow:
|
|||
self.keyrelease_event)
|
||||
self.widget.event_add(KEYRELEASE_VIRTUAL_EVENT_NAME,KEYRELEASE_SEQUENCE)
|
||||
self.listupdateid = listbox.bind(LISTUPDATE_SEQUENCE,
|
||||
self.listupdate_event)
|
||||
self.listselect_event)
|
||||
self.winconfigid = acw.bind(WINCONFIG_SEQUENCE, self.winconfig_event)
|
||||
self.doubleclickid = listbox.bind(DOUBLECLICK_SEQUENCE,
|
||||
self.doubleclick_event)
|
||||
|
@ -215,24 +213,34 @@ class AutoCompleteWindow:
|
|||
if not self.is_active():
|
||||
return
|
||||
# Position the completion list window
|
||||
text = self.widget
|
||||
text.see(self.startindex)
|
||||
x, y, cx, cy = text.bbox(self.startindex)
|
||||
acw = self.autocompletewindow
|
||||
self.widget.see(self.startindex)
|
||||
x, y, cx, cy = self.widget.bbox(self.startindex)
|
||||
acw.wm_geometry("+%d+%d" % (x + self.widget.winfo_rootx(),
|
||||
y + self.widget.winfo_rooty() \
|
||||
-acw.winfo_height()))
|
||||
|
||||
acw_width, acw_height = acw.winfo_width(), acw.winfo_height()
|
||||
text_width, text_height = text.winfo_width(), text.winfo_height()
|
||||
new_x = text.winfo_rootx() + min(x, max(0, text_width - acw_width))
|
||||
new_y = text.winfo_rooty() + y
|
||||
if (text_height - (y + cy) >= acw_height # enough height below
|
||||
or y < acw_height): # not enough height above
|
||||
# place acw below current line
|
||||
new_y += cy
|
||||
else:
|
||||
# place acw above current line
|
||||
new_y -= acw_height
|
||||
acw.wm_geometry("+%d+%d" % (new_x, new_y))
|
||||
|
||||
def hide_event(self, event):
|
||||
if not self.is_active():
|
||||
return
|
||||
self.hide_window()
|
||||
|
||||
def listupdate_event(self, event):
|
||||
def listselect_event(self, event):
|
||||
if not self.is_active():
|
||||
return
|
||||
self.userwantswindow = True
|
||||
self._selection_changed()
|
||||
cursel = int(self.listbox.curselection()[0])
|
||||
self._change_start(self.completions[cursel])
|
||||
|
||||
def doubleclick_event(self, event):
|
||||
# Put the selected completion in the text, and close the list
|
||||
|
@ -248,7 +256,8 @@ class AutoCompleteWindow:
|
|||
state = event.mc_state
|
||||
else:
|
||||
state = 0
|
||||
|
||||
if keysym != "Tab":
|
||||
self.lastkey_was_tab = False
|
||||
if (len(keysym) == 1 or keysym in ("underscore", "BackSpace")
|
||||
or (self.mode==AutoComplete.COMPLETE_FILES and keysym in
|
||||
("period", "minus"))) \
|
||||
|
@ -330,13 +339,21 @@ class AutoCompleteWindow:
|
|||
self.listbox.select_clear(cursel)
|
||||
self.listbox.select_set(newsel)
|
||||
self._selection_changed()
|
||||
self._change_start(self.completions[newsel])
|
||||
return "break"
|
||||
|
||||
elif (keysym == "Tab" and not state):
|
||||
# The user wants a completion, but it is handled by AutoComplete
|
||||
# (not AutoCompleteWindow), so ignore.
|
||||
self.userwantswindow = True
|
||||
return
|
||||
if self.lastkey_was_tab:
|
||||
# two tabs in a row; insert current selection and close acw
|
||||
cursel = int(self.listbox.curselection()[0])
|
||||
self._change_start(self.completions[cursel])
|
||||
self.hide_window()
|
||||
return "break"
|
||||
else:
|
||||
# first tab; let AutoComplete handle the completion
|
||||
self.userwantswindow = True
|
||||
self.lastkey_was_tab = True
|
||||
return
|
||||
|
||||
elif any(s in keysym for s in ("Shift", "Control", "Alt",
|
||||
"Meta", "Command", "Option")):
|
||||
|
|
|
@ -3,7 +3,9 @@
|
|||
Call Tips are floating windows which display function, class, and method
|
||||
parameter and docstring information when you type an opening parenthesis, and
|
||||
which disappear when you type a closing parenthesis.
|
||||
|
||||
"""
|
||||
import re
|
||||
import sys
|
||||
import types
|
||||
|
||||
|
@ -89,6 +91,8 @@ class CallTips:
|
|||
two unrelated modules are being edited some calltips in the current
|
||||
module may be inoperative if the module was not the last to run.
|
||||
|
||||
To find methods, fetch_tip must be fed a fully qualified name.
|
||||
|
||||
"""
|
||||
try:
|
||||
rpcclt = self.editwin.flist.pyshell.interp.rpcclt
|
||||
|
@ -108,7 +112,7 @@ class CallTips:
|
|||
namespace.update(__main__.__dict__)
|
||||
try:
|
||||
return eval(name, namespace)
|
||||
except:
|
||||
except (NameError, AttributeError):
|
||||
return None
|
||||
|
||||
def _find_constructor(class_ob):
|
||||
|
@ -124,39 +128,37 @@ def _find_constructor(class_ob):
|
|||
|
||||
def get_arg_text(ob):
|
||||
"""Get a string describing the arguments for the given object"""
|
||||
argText = ""
|
||||
arg_text = ""
|
||||
if ob is not None:
|
||||
argOffset = 0
|
||||
arg_offset = 0
|
||||
if type(ob) in (types.ClassType, types.TypeType):
|
||||
# Look for the highest __init__ in the class chain.
|
||||
fob = _find_constructor(ob)
|
||||
if fob is None:
|
||||
fob = lambda: None
|
||||
else:
|
||||
argOffset = 1
|
||||
arg_offset = 1
|
||||
elif type(ob)==types.MethodType:
|
||||
# bit of a hack for methods - turn it into a function
|
||||
# but we drop the "self" param.
|
||||
fob = ob.im_func
|
||||
argOffset = 1
|
||||
arg_offset = 1
|
||||
else:
|
||||
fob = ob
|
||||
# Try and build one for Python defined functions
|
||||
# Try to build one for Python defined functions
|
||||
if type(fob) in [types.FunctionType, types.LambdaType]:
|
||||
try:
|
||||
realArgs = fob.func_code.co_varnames[argOffset:fob.func_code.co_argcount]
|
||||
defaults = fob.func_defaults or []
|
||||
defaults = list(map(lambda name: "=%s" % repr(name), defaults))
|
||||
defaults = [""] * (len(realArgs)-len(defaults)) + defaults
|
||||
items = map(lambda arg, dflt: arg+dflt, realArgs, defaults)
|
||||
if fob.func_code.co_flags & 0x4:
|
||||
items.append("...")
|
||||
if fob.func_code.co_flags & 0x8:
|
||||
items.append("***")
|
||||
argText = ", ".join(items)
|
||||
argText = "(%s)" % argText
|
||||
except:
|
||||
pass
|
||||
argcount = fob.func_code.co_argcount
|
||||
real_args = fob.func_code.co_varnames[arg_offset:argcount]
|
||||
defaults = fob.func_defaults or []
|
||||
defaults = list(map(lambda name: "=%s" % repr(name), defaults))
|
||||
defaults = [""] * (len(real_args) - len(defaults)) + defaults
|
||||
items = map(lambda arg, dflt: arg + dflt, real_args, defaults)
|
||||
if fob.func_code.co_flags & 0x4:
|
||||
items.append("...")
|
||||
if fob.func_code.co_flags & 0x8:
|
||||
items.append("***")
|
||||
arg_text = ", ".join(items)
|
||||
arg_text = "(%s)" % re.sub("\.\d+", "<tuple>", arg_text)
|
||||
# See if we can use the docstring
|
||||
doc = getattr(ob, "__doc__", "")
|
||||
if doc:
|
||||
|
@ -164,10 +166,10 @@ def get_arg_text(ob):
|
|||
pos = doc.find("\n")
|
||||
if pos < 0 or pos > 70:
|
||||
pos = 70
|
||||
if argText:
|
||||
argText += "\n"
|
||||
argText += doc[:pos]
|
||||
return argText
|
||||
if arg_text:
|
||||
arg_text += "\n"
|
||||
arg_text += doc[:pos]
|
||||
return arg_text
|
||||
|
||||
#################################################
|
||||
#
|
||||
|
@ -181,16 +183,18 @@ if __name__=='__main__':
|
|||
def t4(*args): "(...)"
|
||||
def t5(a, *args): "(a, ...)"
|
||||
def t6(a, b=None, *args, **kw): "(a, b=None, ..., ***)"
|
||||
def t7((a, b), c, (d, e)): "(<tuple>, c, <tuple>)"
|
||||
|
||||
class TC:
|
||||
"(a=None, ...)"
|
||||
def __init__(self, a=None, *b): "(a=None, ...)"
|
||||
class TC(object):
|
||||
"(ai=None, ...)"
|
||||
def __init__(self, ai=None, *b): "(ai=None, ...)"
|
||||
def t1(self): "()"
|
||||
def t2(self, a, b=None): "(a, b=None)"
|
||||
def t3(self, a, *args): "(a, ...)"
|
||||
def t2(self, ai, b=None): "(ai, b=None)"
|
||||
def t3(self, ai, *args): "(ai, ...)"
|
||||
def t4(self, *args): "(...)"
|
||||
def t5(self, a, *args): "(a, ...)"
|
||||
def t6(self, a, b=None, *args, **kw): "(a, b=None, ..., ***)"
|
||||
def t5(self, ai, *args): "(ai, ...)"
|
||||
def t6(self, ai, b=None, *args, **kw): "(ai, b=None, ..., ***)"
|
||||
def t7(self, (ai, b), c, (d, e)): "(<tuple>, c, <tuple>)"
|
||||
|
||||
def test(tests):
|
||||
ct = CallTips()
|
||||
|
@ -198,15 +202,20 @@ if __name__=='__main__':
|
|||
for t in tests:
|
||||
expected = t.__doc__ + "\n" + t.__doc__
|
||||
name = t.__name__
|
||||
arg_text = ct.fetch_tip(name)
|
||||
# exercise fetch_tip(), not just get_arg_text()
|
||||
try:
|
||||
qualified_name = "%s.%s" % (t.im_class.__name__, name)
|
||||
except AttributeError:
|
||||
qualified_name = name
|
||||
arg_text = ct.fetch_tip(qualified_name)
|
||||
if arg_text != expected:
|
||||
failed.append(t)
|
||||
print("%s - expected %s, but got %s" % (t, expected,
|
||||
get_arg_text(entity)))
|
||||
fmt = "%s - expected %s, but got %s"
|
||||
print(fmt % (t.__name__, expected, get_arg_text(t)))
|
||||
print("%d of %d tests failed" % (len(failed), len(tests)))
|
||||
|
||||
tc = TC()
|
||||
tests = (t1, t2, t3, t4, t5, t6,
|
||||
TC, tc.t1, tc.t2, tc.t3, tc.t4, tc.t5, tc.t6)
|
||||
tests = (t1, t2, t3, t4, t5, t6, t7,
|
||||
TC, tc.t1, tc.t2, tc.t3, tc.t4, tc.t5, tc.t6, tc.t7)
|
||||
|
||||
test(tests)
|
||||
|
|
|
@ -10,6 +10,7 @@ not open blocks are not shown in the context hints pane.
|
|||
|
||||
"""
|
||||
import Tkinter
|
||||
from Tkconstants import TOP, LEFT, X, W, SUNKEN
|
||||
from configHandler import idleConf
|
||||
import re
|
||||
from sys import maxint as INFINITY
|
||||
|
@ -24,7 +25,6 @@ getspacesfirstword =\
|
|||
|
||||
class CodeContext:
|
||||
menudefs = [('options', [('!Code Conte_xt', '<<toggle-code-context>>')])]
|
||||
|
||||
context_depth = idleConf.GetOption("extensions", "CodeContext",
|
||||
"numlines", type="int", default=3)
|
||||
bgcolor = idleConf.GetOption("extensions", "CodeContext",
|
||||
|
@ -54,66 +54,33 @@ class CodeContext:
|
|||
|
||||
def toggle_code_context_event(self, event=None):
|
||||
if not self.label:
|
||||
# The following code attempts to figure out the required border
|
||||
# width and vertical padding required for the CodeContext widget
|
||||
# to be perfectly aligned with the text in the main Text widget.
|
||||
# This is done by retrieving the appropriate attributes from the
|
||||
# editwin.text and editwin.text_frame widgets.
|
||||
# Calculate the border width and horizontal padding required to
|
||||
# align the context with the text in the main Text widget.
|
||||
#
|
||||
# All values are passed through int(str(<value>)), since some
|
||||
# values may be pixel objects, which can't simply be added added
|
||||
# to ints.
|
||||
#
|
||||
# This code is considered somewhat unstable since it relies on
|
||||
# some of Tk's inner workings. However its effect is merely
|
||||
# cosmetic; failure will only cause the CodeContext text to be
|
||||
# somewhat misaligned with the text in the main Text widget.
|
||||
#
|
||||
# To avoid possible errors, all references to the inner workings
|
||||
# of Tk are executed inside try/except blocks.
|
||||
|
||||
widgets_for_width_calc = self.editwin.text, self.editwin.text_frame
|
||||
|
||||
# calculate the required vertical padding
|
||||
# values may be pixel objects, which can't simply be added to ints.
|
||||
widgets = self.editwin.text, self.editwin.text_frame
|
||||
# Calculate the required vertical padding
|
||||
padx = 0
|
||||
for widget in widgets_for_width_calc:
|
||||
try:
|
||||
# retrieve the "padx" attribte from widget's pack info
|
||||
padx += int(str( widget.pack_info()['padx'] ))
|
||||
except:
|
||||
pass
|
||||
try:
|
||||
# retrieve the widget's "padx" attribte
|
||||
padx += int(str( widget.cget('padx') ))
|
||||
except:
|
||||
pass
|
||||
|
||||
# calculate the required border width
|
||||
border_width = 0
|
||||
for widget in widgets_for_width_calc:
|
||||
try:
|
||||
# retrieve the widget's "border" attribte
|
||||
border_width += int(str( widget.cget('border') ))
|
||||
except:
|
||||
pass
|
||||
|
||||
for widget in widgets:
|
||||
padx += int(str( widget.pack_info()['padx'] ))
|
||||
padx += int(str( widget.cget('padx') ))
|
||||
# Calculate the required border width
|
||||
border = 0
|
||||
for widget in widgets:
|
||||
border += int(str( widget.cget('border') ))
|
||||
self.label = Tkinter.Label(self.editwin.top,
|
||||
text="\n" * (self.context_depth - 1),
|
||||
anchor="w", justify="left",
|
||||
anchor=W, justify=LEFT,
|
||||
font=self.textfont,
|
||||
bg=self.bgcolor, fg=self.fgcolor,
|
||||
width=1, #don't request more than we get
|
||||
padx=padx, #line up with text widget
|
||||
border=border_width, #match border width
|
||||
relief="sunken",
|
||||
)
|
||||
|
||||
# CodeContext's label widget is packed before and above the
|
||||
# text_frame widget, thus ensuring that it will appear directly
|
||||
# above it.
|
||||
self.label.pack(side="top", fill="x", expand=False,
|
||||
padx=padx, border=border,
|
||||
relief=SUNKEN)
|
||||
# Pack the label widget before and above the text_frame widget,
|
||||
# thus ensuring that it will appear directly above text_frame
|
||||
self.label.pack(side=TOP, fill=X, expand=False,
|
||||
before=self.editwin.text_frame)
|
||||
|
||||
else:
|
||||
self.label.destroy()
|
||||
self.label = None
|
||||
|
@ -190,7 +157,6 @@ class CodeContext:
|
|||
stopindent)
|
||||
self.info.extend(lines)
|
||||
self.topvisible = new_topvisible
|
||||
|
||||
# empty lines in context pane:
|
||||
context_strings = [""] * max(0, self.context_depth - len(self.info))
|
||||
# followed by the context hint lines:
|
||||
|
|
|
@ -209,7 +209,7 @@ class IOBinding:
|
|||
# gets set to "not modified" at every new prompt.
|
||||
try:
|
||||
interp = self.editwin.interp
|
||||
except:
|
||||
except AttributeError:
|
||||
interp = None
|
||||
if not self.filename and self.get_saved() and not interp:
|
||||
self.editwin.flist.open(filename, self.loadfile)
|
||||
|
|
|
@ -3,6 +3,19 @@ What's New in IDLE 2.6a1?
|
|||
|
||||
*Release date: XX-XXX-200X*
|
||||
|
||||
- Corrected some bugs in AutoComplete. Also, Page Up/Down in ACW implemented;
|
||||
mouse and cursor selection in ACWindow implemented; double Tab inserts
|
||||
current selection and closes ACW (similar to double-click and Return); scroll
|
||||
wheel now works in ACW. Added AutoComplete instructions to IDLE Help.
|
||||
|
||||
- AutoCompleteWindow moved below input line, will move above if there
|
||||
isn't enough space. Patch 1621265 Tal Einat
|
||||
|
||||
- Calltips now 'handle' tuples in the argument list (display '<tuple>' :)
|
||||
Suggested solution by Christos Georgiou, Bug 791968.
|
||||
|
||||
- Add 'raw' support to configHandler. Patch 1650174 Tal Einat.
|
||||
|
||||
- Avoid hang when encountering a duplicate in a completion list. Bug 1571112.
|
||||
|
||||
- Patch #1362975: Rework CodeContext indentation algorithm to
|
||||
|
|
|
@ -706,34 +706,36 @@ class ModifiedInterpreter(InteractiveInterpreter):
|
|||
debugger = self.debugger
|
||||
try:
|
||||
self.tkconsole.beginexecuting()
|
||||
try:
|
||||
if not debugger and self.rpcclt is not None:
|
||||
self.active_seq = self.rpcclt.asyncqueue("exec", "runcode",
|
||||
(code,), {})
|
||||
elif debugger:
|
||||
debugger.run(code, self.locals)
|
||||
else:
|
||||
exec(code, self.locals)
|
||||
except SystemExit:
|
||||
if not self.tkconsole.closing:
|
||||
if tkMessageBox.askyesno(
|
||||
"Exit?",
|
||||
"Do you want to exit altogether?",
|
||||
default="yes",
|
||||
master=self.tkconsole.text):
|
||||
raise
|
||||
else:
|
||||
self.showtraceback()
|
||||
else:
|
||||
if not debugger and self.rpcclt is not None:
|
||||
self.active_seq = self.rpcclt.asyncqueue("exec", "runcode",
|
||||
(code,), {})
|
||||
elif debugger:
|
||||
debugger.run(code, self.locals)
|
||||
else:
|
||||
exec(code, self.locals)
|
||||
except SystemExit:
|
||||
if not self.tkconsole.closing:
|
||||
if tkMessageBox.askyesno(
|
||||
"Exit?",
|
||||
"Do you want to exit altogether?",
|
||||
default="yes",
|
||||
master=self.tkconsole.text):
|
||||
raise
|
||||
except:
|
||||
if use_subprocess:
|
||||
# When run w/o subprocess, both user and IDLE errors
|
||||
# are printed here; skip message in that case.
|
||||
print("IDLE internal error in runcode()", file=self.tkconsole.stderr)
|
||||
else:
|
||||
else:
|
||||
raise
|
||||
except:
|
||||
if use_subprocess:
|
||||
print("IDLE internal error in runcode()",
|
||||
file=self.tkconsole.stderr)
|
||||
self.showtraceback()
|
||||
if use_subprocess:
|
||||
self.tkconsole.endexecuting()
|
||||
self.tkconsole.endexecuting()
|
||||
else:
|
||||
if self.tkconsole.canceled:
|
||||
self.tkconsole.canceled = False
|
||||
print("KeyboardInterrupt", file=self.tkconsole.stderr)
|
||||
else:
|
||||
self.showtraceback()
|
||||
finally:
|
||||
if not use_subprocess:
|
||||
try:
|
||||
|
|
|
@ -39,22 +39,19 @@ class IdleConfParser(ConfigParser):
|
|||
self.file=cfgFile
|
||||
ConfigParser.__init__(self,defaults=cfgDefaults)
|
||||
|
||||
def Get(self, section, option, type=None, default=None):
|
||||
def Get(self, section, option, type=None, default=None, raw=False):
|
||||
"""
|
||||
Get an option value for given section/option or return default.
|
||||
If type is specified, return as type.
|
||||
"""
|
||||
if type=='bool':
|
||||
getVal=self.getboolean
|
||||
elif type=='int':
|
||||
getVal=self.getint
|
||||
else:
|
||||
getVal=self.get
|
||||
if self.has_option(section,option):
|
||||
#return getVal(section, option, raw, vars, default)
|
||||
return getVal(section, option)
|
||||
else:
|
||||
if not self.has_option(section, option):
|
||||
return default
|
||||
if type=='bool':
|
||||
return self.getboolean(section, option)
|
||||
elif type=='int':
|
||||
return self.getint(section, option)
|
||||
else:
|
||||
return self.get(section, option, raw=raw)
|
||||
|
||||
def GetOptionList(self,section):
|
||||
"""
|
||||
|
@ -219,7 +216,7 @@ class IdleConf:
|
|||
return userDir
|
||||
|
||||
def GetOption(self, configType, section, option, default=None, type=None,
|
||||
warn_on_default=True):
|
||||
warn_on_default=True, raw=False):
|
||||
"""
|
||||
Get an option value for given config type and given general
|
||||
configuration section/option or return a default. If type is specified,
|
||||
|
@ -233,9 +230,11 @@ class IdleConf:
|
|||
|
||||
"""
|
||||
if self.userCfg[configType].has_option(section,option):
|
||||
return self.userCfg[configType].Get(section, option, type=type)
|
||||
return self.userCfg[configType].Get(section, option,
|
||||
type=type, raw=raw)
|
||||
elif self.defaultCfg[configType].has_option(section,option):
|
||||
return self.defaultCfg[configType].Get(section, option, type=type)
|
||||
return self.defaultCfg[configType].Get(section, option,
|
||||
type=type, raw=raw)
|
||||
else: #returning default, print warning
|
||||
if warn_on_default:
|
||||
warning = ('\n Warning: configHandler.py - IdleConf.GetOption -\n'
|
||||
|
|
|
@ -44,6 +44,10 @@ Edit Menu:
|
|||
Find in Files... -- Open a search dialog box for searching files
|
||||
Replace... -- Open a search-and-replace dialog box
|
||||
Go to Line -- Ask for a line number and show that line
|
||||
Show Calltip -- Open a small window with function param hints
|
||||
Show Completions -- Open a scroll window allowing selection keywords
|
||||
and attributes. (see '*TIPS*', below)
|
||||
Show Parens -- Highlight the surrounding parenthesis
|
||||
Expand Word -- Expand the word you have typed to match another
|
||||
word in the same buffer; repeat to get a
|
||||
different expansion
|
||||
|
@ -91,6 +95,7 @@ Options Menu:
|
|||
Code Context -- Open a pane at the top of the edit window which
|
||||
shows the block context of the section of code
|
||||
which is scrolling off the top of the window.
|
||||
(Not present in Shell window.)
|
||||
|
||||
Windows Menu:
|
||||
|
||||
|
@ -138,8 +143,11 @@ Basic editing and navigation:
|
|||
Control-left/right Arrow moves by words in a strange but useful way.
|
||||
Home/End go to begin/end of line.
|
||||
Control-Home/End go to begin/end of file.
|
||||
Some useful Emacs bindings (Control-a, Control-e, Control-k, etc.)
|
||||
are inherited from Tcl/Tk.
|
||||
Some useful Emacs bindings are inherited from Tcl/Tk:
|
||||
Control-a beginning of line
|
||||
Control-e end of line
|
||||
Control-k kill line (but doesn't put it in clipboard)
|
||||
Control-l center window around the insertion point
|
||||
Standard Windows bindings may work on that platform.
|
||||
Keybindings are selected in the Settings Dialog, look there.
|
||||
|
||||
|
@ -155,6 +163,52 @@ Automatic indentation:
|
|||
|
||||
See also the indent/dedent region commands in the edit menu.
|
||||
|
||||
Completions:
|
||||
|
||||
Completions are supplied for functions, classes, and attributes of
|
||||
classes, both built-in and user-defined. Completions are also provided
|
||||
for filenames.
|
||||
|
||||
The AutoCompleteWindow (ACW) will open after a predefined delay
|
||||
(default is two seconds) after a '.' or (in a string) an os.sep is
|
||||
typed. If after one of those characters (plus zero or more other
|
||||
characters) you type a Tab the ACW will open immediately if a possible
|
||||
continuation is found.
|
||||
|
||||
If there is only one possible completion for the characters entered, a
|
||||
Tab will supply that completion without opening the ACW.
|
||||
|
||||
'Show Completions' will force open a completions window. In an empty
|
||||
string, this will contain the files in the current directory. On a
|
||||
blank line, it will contain the built-in and user-defined functions and
|
||||
classes in the current namespaces, plus any modules imported. If some
|
||||
characters have been entered, the ACW will attempt to be more specific.
|
||||
|
||||
If a string of characters is typed, the ACW selection will jump to the
|
||||
entry most closely matching those characters. Entering a Tab will cause
|
||||
the longest non-ambiguous match to be entered in the Edit window or
|
||||
Shell. Two Tabs in a row will supply the current ACW selection, as
|
||||
will Return or a double click. Cursor keys, Page Up/Down, mouse
|
||||
selection, and the scrollwheel all operate on the ACW.
|
||||
|
||||
'Hidden' attributes can be accessed by typing the beginning of a hidden
|
||||
name after a '.', e.g. '_'. This allows access to modules with
|
||||
'__all__' set, or to class-private attributes.
|
||||
|
||||
Completions and the 'Expand Word' facility can save a lot of typing!
|
||||
|
||||
Completions are currently limited to those in the namespaces. Names in
|
||||
an Edit window which are not accessible via __main__ or sys.modules will not be
|
||||
found. Run the module once with your imports to correct this
|
||||
situation. Note that IDLE itself places quite a few modules in
|
||||
sys.modules, so much can be found by default, e.g. the re module.
|
||||
|
||||
If you don't like the ACW popping up unbidden, simply make the delay
|
||||
longer or disable the extension. OTOH, you could make the delay zero.
|
||||
|
||||
You could also switch off the CallTips extension. (We will be adding
|
||||
a delay to the call tip window.)
|
||||
|
||||
Python Shell window:
|
||||
|
||||
Control-c interrupts executing command.
|
||||
|
@ -165,7 +219,7 @@ Python Shell window:
|
|||
|
||||
Alt-p retrieves previous command matching what you have typed.
|
||||
Alt-n retrieves next.
|
||||
(These are Control-p, Control-n on the Mac)
|
||||
(These are Control-p, Control-n on the Mac)
|
||||
Return while cursor is on a previous command retrieves that command.
|
||||
Expand word is also useful to reduce typing.
|
||||
|
||||
|
@ -196,7 +250,7 @@ Other preferences:
|
|||
be changed using the Settings dialog.
|
||||
|
||||
Command line usage:
|
||||
|
||||
|
||||
Enter idle -h at the command prompt to get a usage message.
|
||||
|
||||
Running without a subprocess:
|
||||
|
@ -211,3 +265,18 @@ Running without a subprocess:
|
|||
re-import any specific items (e.g. from foo import baz) if the changes
|
||||
are to take effect. For these reasons, it is preferable to run IDLE
|
||||
with the default subprocess if at all possible.
|
||||
|
||||
Extensions:
|
||||
|
||||
IDLE contains an extension facility. See the beginning of
|
||||
config-extensions.def in the idlelib directory for further information.
|
||||
The default extensions are currently:
|
||||
|
||||
FormatParagraph
|
||||
AutoExpand
|
||||
ZoomHeight
|
||||
ScriptBinding
|
||||
CallTips
|
||||
ParenMatch
|
||||
AutoComplete
|
||||
CodeContext
|
||||
|
|
|
@ -552,6 +552,10 @@ class _FilesystemImporter(Importer):
|
|||
# This method is only used when we look for a module within a package.
|
||||
assert parent
|
||||
|
||||
for submodule_path in parent.__path__:
|
||||
code = self._import_pathname(_os_path_join(submodule_path, modname), fqname)
|
||||
if code is not None:
|
||||
return code
|
||||
return self._import_pathname(_os_path_join(parent.__pkgdir__, modname),
|
||||
fqname)
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
# Copyright 2001-2005 by Vinay Sajip. All Rights Reserved.
|
||||
# Copyright 2001-2007 by Vinay Sajip. All Rights Reserved.
|
||||
#
|
||||
# Permission to use, copy, modify, and distribute this software and its
|
||||
# documentation for any purpose and without fee is hereby granted,
|
||||
|
@ -21,7 +21,7 @@ comp.lang.python, and influenced by Apache's log4j system.
|
|||
Should work under Python versions >= 1.5.2, except that source line
|
||||
information is not available unless 'sys._getframe()' is.
|
||||
|
||||
Copyright (C) 2001-2004 Vinay Sajip. All Rights Reserved.
|
||||
Copyright (C) 2001-2007 Vinay Sajip. All Rights Reserved.
|
||||
|
||||
To use, simply 'import logging' and log away!
|
||||
"""
|
||||
|
@ -41,8 +41,8 @@ except ImportError:
|
|||
|
||||
__author__ = "Vinay Sajip <vinay_sajip@red-dove.com>"
|
||||
__status__ = "production"
|
||||
__version__ = "0.5.0.1"
|
||||
__date__ = "09 January 2007"
|
||||
__version__ = "0.5.0.2"
|
||||
__date__ = "16 February 2007"
|
||||
|
||||
#---------------------------------------------------------------------------
|
||||
# Miscellaneous module data
|
||||
|
@ -68,7 +68,7 @@ def currentframe():
|
|||
except:
|
||||
return sys.exc_traceback.tb_frame.f_back
|
||||
|
||||
if hasattr(sys, '_getframe'): currentframe = sys._getframe
|
||||
if hasattr(sys, '_getframe'): currentframe = lambda: sys._getframe(3)
|
||||
# done filching
|
||||
|
||||
# _srcfile is only used in conjunction with sys._getframe().
|
||||
|
|
|
@ -60,13 +60,15 @@ def copymode(src, dst):
|
|||
os.chmod(dst, mode)
|
||||
|
||||
def copystat(src, dst):
|
||||
"""Copy all stat info (mode bits, atime and mtime) from src to dst"""
|
||||
"""Copy all stat info (mode bits, atime, mtime, flags) from src to dst"""
|
||||
st = os.stat(src)
|
||||
mode = stat.S_IMODE(st.st_mode)
|
||||
if hasattr(os, 'utime'):
|
||||
os.utime(dst, (st.st_atime, st.st_mtime))
|
||||
if hasattr(os, 'chmod'):
|
||||
os.chmod(dst, mode)
|
||||
if hasattr(os, 'chflags') and hasattr(st, 'st_flags'):
|
||||
os.chflags(dst, st.st_flags)
|
||||
|
||||
|
||||
def copy(src, dst):
|
||||
|
|
Lib/stat.py
|
@ -84,3 +84,16 @@ S_IRWXO = 00007
|
|||
S_IROTH = 00004
|
||||
S_IWOTH = 00002
|
||||
S_IXOTH = 00001
|
||||
|
||||
# Names for file flags
|
||||
|
||||
UF_NODUMP = 0x00000001
|
||||
UF_IMMUTABLE = 0x00000002
|
||||
UF_APPEND = 0x00000004
|
||||
UF_OPAQUE = 0x00000008
|
||||
UF_NOUNLINK = 0x00000010
|
||||
SF_ARCHIVED = 0x00010000
|
||||
SF_IMMUTABLE = 0x00020000
|
||||
SF_APPEND = 0x00040000
|
||||
SF_NOUNLINK = 0x00100000
|
||||
SF_SNAPSHOT = 0x00200000
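[Illustrative aside, not part of the patch] The constants above pair with the chflags()/lchflags() support added elsewhere in this merge. A minimal sketch, assuming a platform whose os module provides chflags() and using a hypothetical path:

    import os, stat

    path = '/tmp/example.log'            # hypothetical file, for illustration only
    if hasattr(os, 'chflags'):
        os.chflags(path, stat.UF_APPEND) # mark the file append-only at the filesystem level
        os.chflags(path, 0)              # clear the flags again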
|
||||
|
|
|
@ -593,14 +593,30 @@ class Popen(object):
|
|||
c2pread, c2pwrite,
|
||||
errread, errwrite)
|
||||
|
||||
if p2cwrite:
|
||||
# On Windows, you cannot just redirect one or two handles: You
|
||||
# either have to redirect all three or none. If the subprocess
|
||||
# user has only redirected one or two handles, we are
|
||||
# automatically creating PIPEs for the rest. We should close
|
||||
# these after the process is started. See bug #1124861.
|
||||
if mswindows:
|
||||
if stdin is None and p2cwrite is not None:
|
||||
os.close(p2cwrite)
|
||||
p2cwrite = None
|
||||
if stdout is None and c2pread is not None:
|
||||
os.close(c2pread)
|
||||
c2pread = None
|
||||
if stderr is None and errread is not None:
|
||||
os.close(errread)
|
||||
errread = None
|
||||
|
||||
if p2cwrite is not None:
|
||||
self.stdin = os.fdopen(p2cwrite, 'wb', bufsize)
|
||||
if c2pread:
|
||||
if c2pread is not None:
|
||||
if universal_newlines:
|
||||
self.stdout = os.fdopen(c2pread, 'rU', bufsize)
|
||||
else:
|
||||
self.stdout = os.fdopen(c2pread, 'rb', bufsize)
|
||||
if errread:
|
||||
if errread is not None:
|
||||
if universal_newlines:
|
||||
self.stderr = os.fdopen(errread, 'rU', bufsize)
|
||||
else:
|
||||
|
@ -669,7 +685,9 @@ class Popen(object):
|
|||
|
||||
if stdin is None:
|
||||
p2cread = GetStdHandle(STD_INPUT_HANDLE)
|
||||
elif stdin == PIPE:
|
||||
if p2cread is not None:
|
||||
pass
|
||||
elif stdin is None or stdin == PIPE:
|
||||
p2cread, p2cwrite = CreatePipe(None, 0)
|
||||
# Detach and turn into fd
|
||||
p2cwrite = p2cwrite.Detach()
|
||||
|
@ -683,7 +701,9 @@ class Popen(object):
|
|||
|
||||
if stdout is None:
|
||||
c2pwrite = GetStdHandle(STD_OUTPUT_HANDLE)
|
||||
elif stdout == PIPE:
|
||||
if c2pwrite is not None:
|
||||
pass
|
||||
elif stdout is None or stdout == PIPE:
|
||||
c2pread, c2pwrite = CreatePipe(None, 0)
|
||||
# Detach and turn into fd
|
||||
c2pread = c2pread.Detach()
|
||||
|
@ -697,7 +717,9 @@ class Popen(object):
|
|||
|
||||
if stderr is None:
|
||||
errwrite = GetStdHandle(STD_ERROR_HANDLE)
|
||||
elif stderr == PIPE:
|
||||
if errwrite is not None:
|
||||
pass
|
||||
elif stderr is None or stderr == PIPE:
|
||||
errread, errwrite = CreatePipe(None, 0)
|
||||
# Detach and turn into fd
|
||||
errread = errread.Detach()
|
||||
|
@ -987,29 +1009,29 @@ class Popen(object):
|
|||
# Child
|
||||
try:
|
||||
# Close parent's pipe ends
|
||||
if p2cwrite:
|
||||
if p2cwrite is not None:
|
||||
os.close(p2cwrite)
|
||||
if c2pread:
|
||||
if c2pread is not None:
|
||||
os.close(c2pread)
|
||||
if errread:
|
||||
if errread is not None:
|
||||
os.close(errread)
|
||||
os.close(errpipe_read)
|
||||
|
||||
# Dup fds for child
|
||||
if p2cread:
|
||||
if p2cread is not None:
|
||||
os.dup2(p2cread, 0)
|
||||
if c2pwrite:
|
||||
if c2pwrite is not None:
|
||||
os.dup2(c2pwrite, 1)
|
||||
if errwrite:
|
||||
if errwrite is not None:
|
||||
os.dup2(errwrite, 2)
|
||||
|
||||
# Close pipe fds. Make sure we don't close the same
|
||||
# fd more than once, or standard fds.
|
||||
if p2cread and p2cread not in (0,):
|
||||
if p2cread is not None and p2cread not in (0,):
|
||||
os.close(p2cread)
|
||||
if c2pwrite and c2pwrite not in (p2cread, 1):
|
||||
if c2pwrite is not None and c2pwrite not in (p2cread, 1):
|
||||
os.close(c2pwrite)
|
||||
if errwrite and errwrite not in (p2cread, c2pwrite, 2):
|
||||
if errwrite is not None and errwrite not in (p2cread, c2pwrite, 2):
|
||||
os.close(errwrite)
|
||||
|
||||
# Close all other fds, if asked for
|
||||
|
@ -1042,11 +1064,11 @@ class Popen(object):
|
|||
|
||||
# Parent
|
||||
os.close(errpipe_write)
|
||||
if p2cread and p2cwrite:
|
||||
if p2cread is not None and p2cwrite is not None:
|
||||
os.close(p2cread)
|
||||
if c2pwrite and c2pread:
|
||||
if c2pwrite is not None and c2pread is not None:
|
||||
os.close(c2pwrite)
|
||||
if errwrite and errread:
|
||||
if errwrite is not None and errread is not None:
|
||||
os.close(errwrite)
|
||||
|
||||
# Wait for exec to fail or succeed; possibly raising exception
|
||||
|
|
|
@ -1062,6 +1062,10 @@ class TarFile(object):
|
|||
self.mode = {"r": "rb", "a": "r+b", "w": "wb"}[mode]
|
||||
|
||||
if not fileobj:
|
||||
if self._mode == "a" and not os.path.exists(self.name):
|
||||
# Create nonexistent files in append mode.
|
||||
self._mode = "w"
|
||||
self.mode = "wb"
|
||||
fileobj = _open(self.name, self.mode)
|
||||
self._extfileobj = False
|
||||
else:
|
||||
|
@ -1095,7 +1099,8 @@ class TarFile(object):
|
|||
self.fileobj.seek(0)
|
||||
break
|
||||
if tarinfo is None:
|
||||
self.fileobj.seek(- BLOCKSIZE, 1)
|
||||
if self.offset > 0:
|
||||
self.fileobj.seek(- BLOCKSIZE, 1)
|
||||
break
|
||||
|
||||
if self._mode in "aw":
|
||||
|
@ -1122,7 +1127,7 @@ class TarFile(object):
|
|||
'r:' open for reading exclusively uncompressed
|
||||
'r:gz' open for reading with gzip compression
|
||||
'r:bz2' open for reading with bzip2 compression
|
||||
'a' or 'a:' open for appending
|
||||
'a' or 'a:' open for appending, creating the file if necessary
|
||||
'w' or 'w:' open for writing without compression
|
||||
'w:gz' open for writing with gzip compression
|
||||
'w:bz2' open for writing with bzip2 compression
|
||||
|
|
|
@ -1767,6 +1767,11 @@ class TestTime(HarmlessMixedComparison):
|
|||
self.assertEqual(t.isoformat(), "00:00:00.100000")
|
||||
self.assertEqual(t.isoformat(), str(t))
|
||||
|
||||
def test_1653736(self):
|
||||
# verify it doesn't accept extra keyword arguments
|
||||
t = self.theclass(second=1)
|
||||
self.assertRaises(TypeError, t.isoformat, foo=3)
|
||||
|
||||
def test_strftime(self):
|
||||
t = self.theclass(1, 2, 3, 4)
|
||||
self.assertEqual(t.strftime('%H %M %S'), "01 02 03")
|
||||
|
|
|
@ -47,6 +47,7 @@ class TestDefaultDict(unittest.TestCase):
|
|||
self.assertEqual(err.args, (15,))
|
||||
else:
|
||||
self.fail("d2[15] didn't raise KeyError")
|
||||
self.assertRaises(TypeError, defaultdict, 1)
|
||||
|
||||
def test_missing(self):
|
||||
d1 = defaultdict()
|
||||
|
@ -60,10 +61,10 @@ class TestDefaultDict(unittest.TestCase):
|
|||
self.assertEqual(repr(d1), "defaultdict(None, {})")
|
||||
d1[11] = 41
|
||||
self.assertEqual(repr(d1), "defaultdict(None, {11: 41})")
|
||||
d2 = defaultdict(0)
|
||||
self.assertEqual(d2.default_factory, 0)
|
||||
d2 = defaultdict(int)
|
||||
self.assertEqual(d2.default_factory, int)
|
||||
d2[12] = 42
|
||||
self.assertEqual(repr(d2), "defaultdict(0, {12: 42})")
|
||||
self.assertEqual(repr(d2), "defaultdict(<type 'int'>, {12: 42})")
|
||||
def foo(): return 43
|
||||
d3 = defaultdict(foo)
|
||||
self.assert_(d3.default_factory is foo)
|
||||
|
|
|
@ -2093,7 +2093,7 @@ def inherits():
|
|||
__slots__ = ['prec']
|
||||
def __init__(self, value=0.0, prec=12):
|
||||
self.prec = int(prec)
|
||||
float.__init__(value)
|
||||
float.__init__(self, value)
|
||||
def __repr__(self):
|
||||
return "%.*g" % (self.prec, self)
|
||||
vereq(repr(precfloat(1.1)), "1.1")
|
||||
|
|
|
@ -182,6 +182,14 @@ class DictTest(unittest.TestCase):
|
|||
|
||||
self.assertRaises(ValueError, {}.update, [(1, 2, 3)])
|
||||
|
||||
# SF #1615701: make d.update(m) honor __getitem__() and keys() in dict subclasses
|
||||
class KeyUpperDict(dict):
|
||||
def __getitem__(self, key):
|
||||
return key.upper()
|
||||
d.clear()
|
||||
d.update(KeyUpperDict.fromkeys('abc'))
|
||||
self.assertEqual(d, {'a':'A', 'b':'B', 'c':'C'})
|
||||
|
||||
def test_fromkeys(self):
|
||||
self.assertEqual(dict.fromkeys('abc'), {'a':None, 'b':None, 'c':None})
|
||||
d = {}
|
||||
|
|
|
@ -153,6 +153,13 @@ class TestGzip(unittest.TestCase):
|
|||
self.assertEqual(f.myfileobj.mode, 'rb')
|
||||
f.close()
|
||||
|
||||
def test_1647484(self):
|
||||
for mode in ('wb', 'rb'):
|
||||
f = gzip.GzipFile(self.filename, mode)
|
||||
self.assert_(hasattr(f, "name"))
|
||||
self.assertEqual(f.name, self.filename)
|
||||
f.close()
|
||||
|
||||
def test_main(verbose=None):
|
||||
test_support.run_unittest(TestGzip)
|
||||
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
"""Unittests for heapq."""
|
||||
|
||||
from heapq import heappush, heappop, heapify, heapreplace, nlargest, nsmallest
|
||||
from heapq import heappush, heappop, heapify, heapreplace, merge, nlargest, nsmallest
|
||||
import random
|
||||
import unittest
|
||||
from test import test_support
|
||||
|
@ -103,6 +103,29 @@ class TestHeap(unittest.TestCase):
|
|||
heap_sorted = [heappop(heap) for i in range(size)]
|
||||
self.assertEqual(heap_sorted, sorted(data))
|
||||
|
||||
def test_merge(self):
|
||||
inputs = []
|
||||
for i in xrange(random.randrange(5)):
|
||||
row = sorted(random.randrange(1000) for j in range(random.randrange(10)))
|
||||
inputs.append(row)
|
||||
self.assertEqual(sorted(chain(*inputs)), list(merge(*inputs)))
|
||||
self.assertEqual(list(merge()), [])
|
||||
|
||||
def test_merge_stability(self):
|
||||
class Int(int):
|
||||
pass
|
||||
inputs = [[], [], [], []]
|
||||
for i in range(20000):
|
||||
stream = random.randrange(4)
|
||||
x = random.randrange(500)
|
||||
obj = Int(x)
|
||||
obj.pair = (x, stream)
|
||||
inputs[stream].append(obj)
|
||||
for stream in inputs:
|
||||
stream.sort()
|
||||
result = [i.pair for i in merge(*inputs)]
|
||||
self.assertEqual(result, sorted(result))
|
||||
|
||||
def test_nsmallest(self):
|
||||
data = [(random.randrange(2000), i) for i in range(1000)]
|
||||
for f in (None, lambda x: x[0] * 547 % 2000):
|
||||
|
|
|
@ -55,8 +55,7 @@ class TestBasicOps(unittest.TestCase):
|
|||
self.assertEqual(take(2, lzip('abc',count(3))), [('a', 3), ('b', 4)])
|
||||
self.assertRaises(TypeError, count, 2, 3)
|
||||
self.assertRaises(TypeError, count, 'a')
|
||||
c = count(sys.maxint-2) # verify that rollover doesn't crash
|
||||
c.next(); c.next(); c.next(); c.next(); c.next()
|
||||
self.assertRaises(OverflowError, list, islice(count(sys.maxint-5), 10))
|
||||
c = count(3)
|
||||
self.assertEqual(repr(c), 'count(3)')
|
||||
c.next()
|
||||
|
@ -203,6 +202,51 @@ class TestBasicOps(unittest.TestCase):
|
|||
ids = map(id, list(izip('abc', 'def')))
|
||||
self.assertEqual(len(dict.fromkeys(ids)), len(ids))
|
||||
|
||||
def test_iziplongest(self):
|
||||
for args in [
|
||||
['abc', range(6)],
|
||||
[range(6), 'abc'],
|
||||
[range(1000), range(2000,2100), range(3000,3050)],
|
||||
[range(1000), range(0), range(3000,3050), range(1200), range(1500)],
|
||||
[range(1000), range(0), range(3000,3050), range(1200), range(1500), range(0)],
|
||||
]:
|
||||
target = map(None, *args)
|
||||
self.assertEqual(list(izip_longest(*args)), target)
|
||||
self.assertEqual(list(izip_longest(*args, **{})), target)
|
||||
target = [tuple((e is None and 'X' or e) for e in t) for t in target] # Replace None fills with 'X'
|
||||
self.assertEqual(list(izip_longest(*args, **dict(fillvalue='X'))), target)
|
||||
|
||||
self.assertEqual(take(3,izip_longest('abcdef', count())), list(zip('abcdef', range(3)))) # take 3 from infinite input
|
||||
|
||||
self.assertEqual(list(izip_longest()), list(zip()))
|
||||
self.assertEqual(list(izip_longest([])), list(zip([])))
|
||||
self.assertEqual(list(izip_longest('abcdef')), list(zip('abcdef')))
|
||||
|
||||
self.assertEqual(list(izip_longest('abc', 'defg', **{})), map(None, 'abc', 'defg')) # empty keyword dict
|
||||
self.assertRaises(TypeError, izip_longest, 3)
|
||||
self.assertRaises(TypeError, izip_longest, range(3), 3)
|
||||
|
||||
for stmt in [
|
||||
"izip_longest('abc', fv=1)",
|
||||
"izip_longest('abc', fillvalue=1, bogus_keyword=None)",
|
||||
]:
|
||||
try:
|
||||
eval(stmt, globals(), locals())
|
||||
except TypeError:
|
||||
pass
|
||||
else:
|
||||
self.fail('Did not raise TypeError in: ' + stmt)
|
||||
|
||||
# Check tuple re-use (implementation detail)
|
||||
self.assertEqual([tuple(list(pair)) for pair in izip_longest('abc', 'def')],
|
||||
list(zip('abc', 'def')))
|
||||
self.assertEqual([pair for pair in izip_longest('abc', 'def')],
|
||||
list(zip('abc', 'def')))
|
||||
ids = map(id, izip_longest('abc', 'def'))
|
||||
self.assertEqual(min(ids), max(ids))
|
||||
ids = map(id, list(izip_longest('abc', 'def')))
|
||||
self.assertEqual(len(dict.fromkeys(ids)), len(ids))
|
||||
|
||||
def test_repeat(self):
|
||||
self.assertEqual(lzip(xrange(3),repeat('a')),
|
||||
[(0, 'a'), (1, 'a'), (2, 'a')])
|
||||
|
@ -616,6 +660,15 @@ class TestVariousIteratorArgs(unittest.TestCase):
|
|||
self.assertRaises(TypeError, izip, N(s))
|
||||
self.assertRaises(ZeroDivisionError, list, izip(E(s)))
|
||||
|
||||
def test_iziplongest(self):
|
||||
for s in ("123", "", range(1000), ('do', 1.2), xrange(2000,2200,5)):
|
||||
for g in (G, I, Ig, S, L, R):
|
||||
self.assertEqual(list(izip_longest(g(s))), list(zip(g(s))))
|
||||
self.assertEqual(list(izip_longest(g(s), g(s))), list(zip(g(s), g(s))))
|
||||
self.assertRaises(TypeError, izip_longest, X(s))
|
||||
self.assertRaises(TypeError, izip_longest, N(s))
|
||||
self.assertRaises(ZeroDivisionError, list, izip_longest(E(s)))
|
||||
|
||||
def test_imap(self):
|
||||
for s in (range(10), range(0), range(100), (7,11), xrange(20,50,5)):
|
||||
for g in (G, I, Ig, S, L, R):
|
||||
|
|
|
@ -210,6 +210,8 @@ class OperatorTestCase(unittest.TestCase):
|
|||
self.failUnless(operator.isSequenceType(xrange(10)))
|
||||
self.failUnless(operator.isSequenceType('yeahbuddy'))
|
||||
self.failIf(operator.isSequenceType(3))
|
||||
class Dict(dict): pass
|
||||
self.failIf(operator.isSequenceType(Dict()))
|
||||
|
||||
def test_lshift(self):
|
||||
self.failUnlessRaises(TypeError, operator.lshift)
|
||||
|
|
|
@ -192,6 +192,18 @@ class PosixTester(unittest.TestCase):
|
|||
posix.utime(test_support.TESTFN, (int(now), int(now)))
|
||||
posix.utime(test_support.TESTFN, (now, now))
|
||||
|
||||
def test_chflags(self):
|
||||
if hasattr(posix, 'chflags'):
|
||||
st = os.stat(test_support.TESTFN)
|
||||
if hasattr(st, 'st_flags'):
|
||||
posix.chflags(test_support.TESTFN, st.st_flags)
|
||||
|
||||
def test_lchflags(self):
|
||||
if hasattr(posix, 'lchflags'):
|
||||
st = os.stat(test_support.TESTFN)
|
||||
if hasattr(st, 'st_flags'):
|
||||
posix.lchflags(test_support.TESTFN, st.st_flags)
|
||||
|
||||
def test_main():
|
||||
test_support.run_unittest(PosixTester)
|
||||
|
||||
|
|
|
@ -216,7 +216,44 @@ def test_xmlgen_ns():
|
|||
('<ns1:doc xmlns:ns1="%s"><udoc></udoc></ns1:doc>' %
|
||||
ns_uri)
|
||||
|
||||
# ===== XMLFilterBase
|
||||
def test_1463026_1():
|
||||
result = StringIO()
|
||||
gen = XMLGenerator(result)
|
||||
|
||||
gen.startDocument()
|
||||
gen.startElementNS((None, 'a'), 'a', {(None, 'b'):'c'})
|
||||
gen.endElementNS((None, 'a'), 'a')
|
||||
gen.endDocument()
|
||||
|
||||
return result.getvalue() == start+'<a b="c"></a>'
|
||||
|
||||
def test_1463026_2():
|
||||
result = StringIO()
|
||||
gen = XMLGenerator(result)
|
||||
|
||||
gen.startDocument()
|
||||
gen.startPrefixMapping(None, 'qux')
|
||||
gen.startElementNS(('qux', 'a'), 'a', {})
|
||||
gen.endElementNS(('qux', 'a'), 'a')
|
||||
gen.endPrefixMapping(None)
|
||||
gen.endDocument()
|
||||
|
||||
return result.getvalue() == start+'<a xmlns="qux"></a>'
|
||||
|
||||
def test_1463026_3():
|
||||
result = StringIO()
|
||||
gen = XMLGenerator(result)
|
||||
|
||||
gen.startDocument()
|
||||
gen.startPrefixMapping('my', 'qux')
|
||||
gen.startElementNS(('qux', 'a'), 'a', {(None, 'b'):'c'})
|
||||
gen.endElementNS(('qux', 'a'), 'a')
|
||||
gen.endPrefixMapping('my')
|
||||
gen.endDocument()
|
||||
|
||||
return result.getvalue() == start+'<my:a xmlns:my="qux" b="c"></my:a>'
|
||||
|
||||
# ===== Xmlfilterbase
|
||||
|
||||
def test_filter_basic():
|
||||
result = StringIO()
|
||||
|
|
|
@ -26,6 +26,14 @@ class ReprWrapper:
|
|||
def __repr__(self):
|
||||
return repr(self.value)
|
||||
|
||||
class HashCountingInt(int):
|
||||
'int-like object that counts the number of times __hash__ is called'
|
||||
def __init__(self, *args):
|
||||
self.hash_count = 0
|
||||
def __hash__(self):
|
||||
self.hash_count += 1
|
||||
return int.__hash__(self)
|
||||
|
||||
class TestJointOps(unittest.TestCase):
|
||||
# Tests common to both set and frozenset
|
||||
|
||||
|
@ -273,6 +281,18 @@ class TestJointOps(unittest.TestCase):
|
|||
fo.close()
|
||||
os.remove(test_support.TESTFN)
|
||||
|
||||
def test_do_not_rehash_dict_keys(self):
|
||||
n = 10
|
||||
d = dict.fromkeys(map(HashCountingInt, xrange(n)))
|
||||
self.assertEqual(sum(elem.hash_count for elem in d), n)
|
||||
s = self.thetype(d)
|
||||
self.assertEqual(sum(elem.hash_count for elem in d), n)
|
||||
s.difference(d)
|
||||
self.assertEqual(sum(elem.hash_count for elem in d), n)
|
||||
if hasattr(s, 'symmetric_difference_update'):
|
||||
s.symmetric_difference_update(d)
|
||||
self.assertEqual(sum(elem.hash_count for elem in d), n)
|
||||
|
||||
class TestSet(TestJointOps):
|
||||
thetype = set
|
||||
|
||||
|
|
|
@ -305,6 +305,61 @@ class WriteTest(BaseTest):
|
|||
self.assertEqual(self.dst.getnames(), [], "added the archive to itself")
|
||||
|
||||
|
||||
class AppendTest(unittest.TestCase):
|
||||
# Test append mode (cp. patch #1652681).
|
||||
|
||||
def setUp(self):
|
||||
self.tarname = tmpname()
|
||||
if os.path.exists(self.tarname):
|
||||
os.remove(self.tarname)
|
||||
|
||||
def _add_testfile(self, fileobj=None):
|
||||
tar = tarfile.open(self.tarname, "a", fileobj=fileobj)
|
||||
tar.addfile(tarfile.TarInfo("bar"))
|
||||
tar.close()
|
||||
|
||||
def _create_testtar(self):
|
||||
src = tarfile.open(tarname())
|
||||
t = src.getmember("0-REGTYPE")
|
||||
t.name = "foo"
|
||||
f = src.extractfile(t)
|
||||
tar = tarfile.open(self.tarname, "w")
|
||||
tar.addfile(t, f)
|
||||
tar.close()
|
||||
|
||||
def _test(self, names=["bar"], fileobj=None):
|
||||
tar = tarfile.open(self.tarname, fileobj=fileobj)
|
||||
self.assert_(tar.getnames() == names)
|
||||
|
||||
def test_non_existing(self):
|
||||
self._add_testfile()
|
||||
self._test()
|
||||
|
||||
def test_empty(self):
|
||||
open(self.tarname, "wb").close()
|
||||
self._add_testfile()
|
||||
self._test()
|
||||
|
||||
def test_empty_fileobj(self):
|
||||
fobj = StringIO.StringIO()
|
||||
self._add_testfile(fobj)
|
||||
fobj.seek(0)
|
||||
self._test(fileobj=fobj)
|
||||
|
||||
def test_fileobj(self):
|
||||
self._create_testtar()
|
||||
data = open(self.tarname, "rb").read()
|
||||
fobj = StringIO.StringIO(data)
|
||||
self._add_testfile(fobj)
|
||||
fobj.seek(0)
|
||||
self._test(names=["foo", "bar"], fileobj=fobj)
|
||||
|
||||
def test_existing(self):
|
||||
self._create_testtar()
|
||||
self._add_testfile()
|
||||
self._test(names=["foo", "bar"])
|
||||
|
||||
|
||||
class Write100Test(BaseTest):
|
||||
# The name field in a tar header stores strings of at most 100 chars.
|
||||
# If a string is shorter than 100 chars it has to be padded with '\0',
|
||||
|
@ -711,6 +766,7 @@ def test_main():
|
|||
ReadAsteriskTest,
|
||||
ReadStreamAsteriskTest,
|
||||
WriteTest,
|
||||
AppendTest,
|
||||
Write100Test,
|
||||
WriteSize0Test,
|
||||
WriteStreamTest,
|
||||
|
|
|
@ -307,6 +307,28 @@ class PyZipFileTests(unittest.TestCase):
|
|||
|
||||
|
||||
class OtherTests(unittest.TestCase):
|
||||
def testCreateNonExistentFileForAppend(self):
|
||||
if os.path.exists(TESTFN):
|
||||
os.unlink(TESTFN)
|
||||
|
||||
filename = 'testfile.txt'
|
||||
content = 'hello, world. this is some content.'
|
||||
|
||||
try:
|
||||
zf = zipfile.ZipFile(TESTFN, 'a')
|
||||
zf.writestr(filename, content)
|
||||
zf.close()
|
||||
except IOError:
|
||||
self.fail('Could not append data to a non-existent zip file.')
|
||||
|
||||
self.assert_(os.path.exists(TESTFN))
|
||||
|
||||
zf = zipfile.ZipFile(TESTFN, 'r')
|
||||
self.assertEqual(zf.read(filename), content)
|
||||
zf.close()
|
||||
|
||||
os.unlink(TESTFN)
|
||||
|
||||
def testCloseErroneousFile(self):
|
||||
# This test checks that the ZipFile constructor closes the file object
|
||||
# it opens if there's an error in the file. If it doesn't, the traceback
|
||||
|
@ -349,8 +371,49 @@ class OtherTests(unittest.TestCase):
|
|||
# and report that the first file in the archive was corrupt.
|
||||
self.assertRaises(RuntimeError, zipf.testzip)
|
||||
|
||||
|
||||
class DecryptionTests(unittest.TestCase):
|
||||
# This test checks that ZIP decryption works. Since the library does not
|
||||
# support encryption at the moment, we use a pre-generated encrypted
|
||||
# ZIP file
|
||||
|
||||
data = (
|
||||
'PK\x03\x04\x14\x00\x01\x00\x00\x00n\x92i.#y\xef?&\x00\x00\x00\x1a\x00'
|
||||
'\x00\x00\x08\x00\x00\x00test.txt\xfa\x10\xa0gly|\xfa-\xc5\xc0=\xf9y'
|
||||
'\x18\xe0\xa8r\xb3Z}Lg\xbc\xae\xf9|\x9b\x19\xe4\x8b\xba\xbb)\x8c\xb0\xdbl'
|
||||
'PK\x01\x02\x14\x00\x14\x00\x01\x00\x00\x00n\x92i.#y\xef?&\x00\x00\x00'
|
||||
'\x1a\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x01\x00 \x00\xb6\x81'
|
||||
'\x00\x00\x00\x00test.txtPK\x05\x06\x00\x00\x00\x00\x01\x00\x01\x006\x00'
|
||||
'\x00\x00L\x00\x00\x00\x00\x00' )
|
||||
|
||||
plain = 'zipfile.py encryption test'
|
||||
|
||||
def setUp(self):
|
||||
fp = open(TESTFN, "wb")
|
||||
fp.write(self.data)
|
||||
fp.close()
|
||||
self.zip = zipfile.ZipFile(TESTFN, "r")
|
||||
|
||||
def tearDown(self):
|
||||
self.zip.close()
|
||||
os.unlink(TESTFN)
|
||||
|
||||
def testNoPassword(self):
|
||||
# Reading the encrypted file without a password
|
||||
# must raise a RuntimeError exception
|
||||
self.assertRaises(RuntimeError, self.zip.read, "test.txt")
|
||||
|
||||
def testBadPassword(self):
|
||||
self.zip.setpassword("perl")
|
||||
self.assertRaises(RuntimeError, self.zip.read, "test.txt")
|
||||
|
||||
def testGoodPassword(self):
|
||||
self.zip.setpassword("python")
|
||||
self.assertEquals(self.zip.read("test.txt"), self.plain)
|
||||
|
||||
def test_main():
|
||||
run_unittest(TestsWithSourceFile, TestZip64InSmallFiles, OtherTests, PyZipFileTests)
|
||||
run_unittest(TestsWithSourceFile, TestZip64InSmallFiles, OtherTests,
|
||||
PyZipFileTests, DecryptionTests)
|
||||
#run_unittest(TestZip64InSmallFiles)
|
||||
|
||||
if __name__ == "__main__":
|
||||
|
|
|
@ -587,7 +587,7 @@ class Trace:
|
|||
"""
|
||||
if why == 'call':
|
||||
code = frame.f_code
|
||||
filename = code.co_filename
|
||||
filename = frame.f_globals.get('__file__', None)
|
||||
if filename:
|
||||
# XXX modname() doesn't work right for packages, so
|
||||
# the ignore support won't work right for packages
|
||||
|
|
|
@ -100,6 +100,17 @@ class XMLGenerator(handler.ContentHandler):
|
|||
else:
|
||||
self._out.write(text.encode(self._encoding, _error_handling))
|
||||
|
||||
def _qname(self, name):
|
||||
"""Builds a qualified name from a (ns_url, localname) pair"""
|
||||
if name[0]:
|
||||
# The name is in a non-empty namespace
|
||||
prefix = self._current_context[name[0]]
|
||||
if prefix:
|
||||
# If it is not the default namespace, prepend the prefix
|
||||
return prefix + ":" + name[1]
|
||||
# Return the unqualified name
|
||||
return name[1]
|
||||
|
||||
# ContentHandler methods
|
||||
|
||||
def startDocument(self):
|
||||
|
@ -125,29 +136,21 @@ class XMLGenerator(handler.ContentHandler):
|
|||
self._write('</%s>' % name)
|
||||
|
||||
def startElementNS(self, name, qname, attrs):
|
||||
if name[0] is None:
|
||||
# if the name was not namespace-scoped, use the unqualified part
|
||||
name = name[1]
|
||||
else:
|
||||
# else try to restore the original prefix from the namespace
|
||||
name = self._current_context[name[0]] + ":" + name[1]
|
||||
self._write('<' + name)
|
||||
self._write('<' + self._qname(name))
|
||||
|
||||
for pair in self._undeclared_ns_maps:
|
||||
self._write(' xmlns:%s="%s"' % pair)
|
||||
for prefix, uri in self._undeclared_ns_maps:
|
||||
if prefix:
|
||||
self._out.write(' xmlns:%s="%s"' % (prefix, uri))
|
||||
else:
|
||||
self._out.write(' xmlns="%s"' % uri)
|
||||
self._undeclared_ns_maps = []
|
||||
|
||||
for (name, value) in attrs.items():
|
||||
name = self._current_context[name[0]] + ":" + name[1]
|
||||
self._write(' %s=%s' % (name, quoteattr(value)))
|
||||
self._write(' %s=%s' % (self._qname(name), quoteattr(value)))
|
||||
self._write('>')
|
||||
|
||||
def endElementNS(self, name, qname):
|
||||
if name[0] is None:
|
||||
name = name[1]
|
||||
else:
|
||||
name = self._current_context[name[0]] + ":" + name[1]
|
||||
self._write('</%s>' % name)
|
||||
self._write('</%s>' % self._qname(name))
|
||||
|
||||
def characters(self, content):
|
||||
self._write(escape(content))
|
||||
|
|
|
@ -296,6 +296,65 @@ class ZipInfo (object):
|
|||
extra = extra[ln+4:]
|
||||
|
||||
|
||||
class _ZipDecrypter:
|
||||
"""Class to handle decryption of files stored within a ZIP archive.
|
||||
|
||||
ZIP supports a password-based form of encryption. Even though known
|
||||
plaintext attacks have been found against it, it is still useful
|
||||
for low-level security.
|
||||
|
||||
Usage:
|
||||
zd = _ZipDecrypter(mypwd)
|
||||
plain_char = zd(cypher_char)
|
||||
plain_text = map(zd, cypher_text)
|
||||
"""
|
||||
|
||||
def _GenerateCRCTable():
|
||||
"""Generate a CRC-32 table.
|
||||
|
||||
ZIP encryption uses the CRC32 one-byte primitive for scrambling some
|
||||
internal keys. We noticed that a direct implementation is faster than
|
||||
relying on binascii.crc32().
|
||||
"""
|
||||
poly = 0xedb88320
|
||||
table = [0] * 256
|
||||
for i in range(256):
|
||||
crc = i
|
||||
for j in range(8):
|
||||
if crc & 1:
|
||||
crc = ((crc >> 1) & 0x7FFFFFFF) ^ poly
|
||||
else:
|
||||
crc = ((crc >> 1) & 0x7FFFFFFF)
|
||||
table[i] = crc
|
||||
return table
|
||||
crctable = _GenerateCRCTable()
|
||||
|
||||
def _crc32(self, ch, crc):
|
||||
"""Compute the CRC32 primitive on one byte."""
|
||||
return ((crc >> 8) & 0xffffff) ^ self.crctable[(crc ^ ord(ch)) & 0xff]
|
||||
|
||||
def __init__(self, pwd):
|
||||
self.key0 = 305419896
|
||||
self.key1 = 591751049
|
||||
self.key2 = 878082192
|
||||
for p in pwd:
|
||||
self._UpdateKeys(p)
|
||||
|
||||
def _UpdateKeys(self, c):
|
||||
self.key0 = self._crc32(c, self.key0)
|
||||
self.key1 = (self.key1 + (self.key0 & 255)) & 4294967295
|
||||
self.key1 = (self.key1 * 134775813 + 1) & 4294967295
|
||||
self.key2 = self._crc32(chr((self.key1 >> 24) & 255), self.key2)
|
||||
|
||||
def __call__(self, c):
|
||||
"""Decrypt a single character."""
|
||||
c = ord(c)
|
||||
k = self.key2 | 2
|
||||
c = c ^ (((k * (k^1)) >> 8) & 255)
|
||||
c = chr(c)
|
||||
self._UpdateKeys(c)
|
||||
return c
|
||||
|
||||
class ZipFile:
|
||||
""" Class with methods to open, read, write, close, list zip files.
|
||||
|
||||
|
@ -330,13 +389,21 @@ class ZipFile:
|
|||
self.filelist = [] # List of ZipInfo instances for archive
|
||||
self.compression = compression # Method of compression
|
||||
self.mode = key = mode.replace('b', '')[0]
|
||||
self.pwd = None
|
||||
|
||||
# Check if we were passed a file-like object
|
||||
if isinstance(file, basestring):
|
||||
self._filePassed = 0
|
||||
self.filename = file
|
||||
modeDict = {'r' : 'rb', 'w': 'wb', 'a' : 'r+b'}
|
||||
self.fp = open(file, modeDict[mode])
|
||||
try:
|
||||
self.fp = open(file, modeDict[mode])
|
||||
except IOError:
|
||||
if mode == 'a':
|
||||
mode = key = 'w'
|
||||
self.fp = open(file, modeDict[mode])
|
||||
else:
|
||||
raise
|
||||
else:
|
||||
self._filePassed = 1
|
||||
self.fp = file
|
||||
|
@ -461,7 +528,11 @@ class ZipFile:
|
|||
"""Return the instance of ZipInfo given 'name'."""
|
||||
return self.NameToInfo[name]
|
||||
|
||||
def read(self, name):
|
||||
def setpassword(self, pwd):
|
||||
"""Set default password for encrypted files."""
|
||||
self.pwd = pwd
|
||||
|
||||
def read(self, name, pwd=None):
|
||||
"""Return file bytes (as a string) for name."""
|
||||
if self.mode not in ("r", "a"):
|
||||
raise RuntimeError, 'read() requires mode "r" or "a"'
|
||||
|
@ -469,6 +540,13 @@ class ZipFile:
|
|||
raise RuntimeError, \
|
||||
"Attempt to read ZIP archive that was already closed"
|
||||
zinfo = self.getinfo(name)
|
||||
is_encrypted = zinfo.flag_bits & 0x1
|
||||
if is_encrypted:
|
||||
if not pwd:
|
||||
pwd = self.pwd
|
||||
if not pwd:
|
||||
raise RuntimeError, "File %s is encrypted, " \
|
||||
"password required for extraction" % name
|
||||
filepos = self.fp.tell()
|
||||
|
||||
self.fp.seek(zinfo.header_offset, 0)
|
||||
|
@ -489,6 +567,18 @@ class ZipFile:
|
|||
zinfo.orig_filename, fname)
|
||||
|
||||
bytes = self.fp.read(zinfo.compress_size)
|
||||
# Go with decryption
|
||||
if is_encrypted:
|
||||
zd = _ZipDecrypter(pwd)
|
||||
# The first 12 bytes in the cypher stream is an encryption header
|
||||
# used to strengthen the algorithm. The first 11 bytes are
|
||||
# completely random, while the 12th contains the MSB of the CRC,
|
||||
# and is used to check the correctness of the password.
|
||||
h = map(zd, bytes[0:12])
|
||||
if ord(h[11]) != ((zinfo.CRC>>24)&255):
|
||||
raise RuntimeError, "Bad password for file %s" % name
|
||||
bytes = "".join(map(zd, bytes[12:]))
|
||||
# Go with decompression
|
||||
self.fp.seek(filepos, 0)
|
||||
if zinfo.compress_type == ZIP_STORED:
|
||||
pass
|
||||
|
|
|
@ -377,6 +377,7 @@ Luc Lefebvre
|
|||
Kip Lehman
|
||||
Joerg Lehmann
|
||||
Marc-Andre Lemburg
|
||||
Mark Levinson
|
||||
William Lewis
|
||||
Robert van Liere
|
||||
Martin Ligr
|
||||
|
|
|
@ -1259,8 +1259,14 @@ defdict_init(PyObject *self, PyObject *args, PyObject *kwds)
|
|||
newargs = PyTuple_New(0);
|
||||
else {
|
||||
Py_ssize_t n = PyTuple_GET_SIZE(args);
|
||||
if (n > 0)
|
||||
if (n > 0) {
|
||||
newdefault = PyTuple_GET_ITEM(args, 0);
|
||||
if (!PyCallable_Check(newdefault)) {
|
||||
PyErr_SetString(PyExc_TypeError,
|
||||
"first argument must be callable");
|
||||
return -1;
|
||||
}
|
||||
}
|
||||
newargs = PySequence_GetSlice(args, 1, n);
|
||||
}
|
||||
if (newargs == NULL)
|
||||
|
|
|
@ -3138,7 +3138,7 @@ time_str(PyDateTime_Time *self)
|
|||
}
|
||||
|
||||
static PyObject *
|
||||
time_isoformat(PyDateTime_Time *self)
|
||||
time_isoformat(PyDateTime_Time *self, PyObject *unused)
|
||||
{
|
||||
char buf[100];
|
||||
PyObject *result;
|
||||
|
@ -3374,7 +3374,7 @@ time_reduce(PyDateTime_Time *self, PyObject *arg)
|
|||
|
||||
static PyMethodDef time_methods[] = {
|
||||
|
||||
{"isoformat", (PyCFunction)time_isoformat, METH_KEYWORDS,
|
||||
{"isoformat", (PyCFunction)time_isoformat, METH_NOARGS,
|
||||
PyDoc_STR("Return string in ISO 8601 format, HH:MM:SS[.mmmmmm]"
|
||||
"[+HH:MM].")},
|
||||
|
||||
|
|
|
@ -2073,6 +2073,11 @@ count_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
|
|||
static PyObject *
|
||||
count_next(countobject *lz)
|
||||
{
|
||||
if (lz->cnt == LONG_MAX) {
|
||||
PyErr_SetString(PyExc_OverflowError,
|
||||
"cannot count beyond LONG_MAX");
|
||||
return NULL;
|
||||
}
|
||||
return PyInt_FromSsize_t(lz->cnt++);
|
||||
}
|
||||
|
||||
|
@ -2467,6 +2472,234 @@ static PyTypeObject repeat_type = {
|
|||
PyObject_GC_Del, /* tp_free */
|
||||
};
|
||||
|
||||
/* iziplongest object ************************************************************/
|
||||
|
||||
#include "Python.h"
|
||||
|
||||
typedef struct {
|
||||
PyObject_HEAD
|
||||
Py_ssize_t tuplesize;
|
||||
Py_ssize_t numactive;
|
||||
PyObject *ittuple; /* tuple of iterators */
|
||||
PyObject *result;
|
||||
PyObject *fillvalue;
|
||||
} iziplongestobject;
|
||||
|
||||
static PyTypeObject iziplongest_type;
|
||||
|
||||
static PyObject *
|
||||
izip_longest_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
|
||||
{
|
||||
iziplongestobject *lz;
|
||||
Py_ssize_t i;
|
||||
PyObject *ittuple; /* tuple of iterators */
|
||||
PyObject *result;
|
||||
PyObject *fillvalue = Py_None;
|
||||
Py_ssize_t tuplesize = PySequence_Length(args);
|
||||
|
||||
if (kwds != NULL && PyDict_CheckExact(kwds) && PyDict_Size(kwds) > 0) {
|
||||
fillvalue = PyDict_GetItemString(kwds, "fillvalue");
|
||||
if (fillvalue == NULL || PyDict_Size(kwds) > 1) {
|
||||
PyErr_SetString(PyExc_TypeError,
|
||||
"izip_longest() got an unexpected keyword argument");
|
||||
return NULL;
|
||||
}
|
||||
}
|
||||
|
||||
/* args must be a tuple */
|
||||
assert(PyTuple_Check(args));
|
||||
|
||||
/* obtain iterators */
|
||||
ittuple = PyTuple_New(tuplesize);
|
||||
if (ittuple == NULL)
|
||||
return NULL;
|
||||
for (i=0; i < tuplesize; ++i) {
|
||||
PyObject *item = PyTuple_GET_ITEM(args, i);
|
||||
PyObject *it = PyObject_GetIter(item);
|
||||
if (it == NULL) {
|
||||
if (PyErr_ExceptionMatches(PyExc_TypeError))
|
||||
PyErr_Format(PyExc_TypeError,
|
||||
"izip_longest argument #%zd must support iteration",
|
||||
i+1);
|
||||
Py_DECREF(ittuple);
|
||||
return NULL;
|
||||
}
|
||||
PyTuple_SET_ITEM(ittuple, i, it);
|
||||
}
|
||||
|
||||
/* create a result holder */
|
||||
result = PyTuple_New(tuplesize);
|
||||
if (result == NULL) {
|
||||
Py_DECREF(ittuple);
|
||||
return NULL;
|
||||
}
|
||||
for (i=0 ; i < tuplesize ; i++) {
|
||||
Py_INCREF(Py_None);
|
||||
PyTuple_SET_ITEM(result, i, Py_None);
|
||||
}
|
||||
|
||||
/* create iziplongestobject structure */
|
||||
lz = (iziplongestobject *)type->tp_alloc(type, 0);
|
||||
if (lz == NULL) {
|
||||
Py_DECREF(ittuple);
|
||||
Py_DECREF(result);
|
||||
return NULL;
|
||||
}
|
||||
lz->ittuple = ittuple;
|
||||
lz->tuplesize = tuplesize;
|
||||
lz->numactive = tuplesize;
|
||||
lz->result = result;
|
||||
Py_INCREF(fillvalue);
|
||||
lz->fillvalue = fillvalue;
|
||||
return (PyObject *)lz;
|
||||
}
|
||||
|
||||
static void
|
||||
izip_longest_dealloc(iziplongestobject *lz)
|
||||
{
|
||||
PyObject_GC_UnTrack(lz);
|
||||
Py_XDECREF(lz->ittuple);
|
||||
Py_XDECREF(lz->result);
|
||||
Py_XDECREF(lz->fillvalue);
|
||||
lz->ob_type->tp_free(lz);
|
||||
}
|
||||
|
||||
static int
|
||||
izip_longest_traverse(iziplongestobject *lz, visitproc visit, void *arg)
|
||||
{
|
||||
Py_VISIT(lz->ittuple);
|
||||
Py_VISIT(lz->result);
|
||||
Py_VISIT(lz->fillvalue);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
izip_longest_next(iziplongestobject *lz)
|
||||
{
|
||||
Py_ssize_t i;
|
||||
Py_ssize_t tuplesize = lz->tuplesize;
|
||||
PyObject *result = lz->result;
|
||||
PyObject *it;
|
||||
PyObject *item;
|
||||
PyObject *olditem;
|
||||
|
||||
if (tuplesize == 0)
|
||||
return NULL;
|
||||
if (lz->numactive == 0)
|
||||
return NULL;
|
||||
if (result->ob_refcnt == 1) {
|
||||
Py_INCREF(result);
|
||||
for (i=0 ; i < tuplesize ; i++) {
|
||||
it = PyTuple_GET_ITEM(lz->ittuple, i);
|
||||
if (it == NULL) {
|
||||
Py_INCREF(lz->fillvalue);
|
||||
item = lz->fillvalue;
|
||||
} else {
|
||||
assert(PyIter_Check(it));
|
||||
item = (*it->ob_type->tp_iternext)(it);
|
||||
if (item == NULL) {
|
||||
lz->numactive -= 1;
|
||||
if (lz->numactive == 0) {
|
||||
Py_DECREF(result);
|
||||
return NULL;
|
||||
} else {
|
||||
Py_INCREF(lz->fillvalue);
|
||||
item = lz->fillvalue;
|
||||
PyTuple_SET_ITEM(lz->ittuple, i, NULL);
|
||||
Py_DECREF(it);
|
||||
}
|
||||
}
|
||||
}
|
||||
olditem = PyTuple_GET_ITEM(result, i);
|
||||
PyTuple_SET_ITEM(result, i, item);
|
||||
Py_DECREF(olditem);
|
||||
}
|
||||
} else {
|
||||
result = PyTuple_New(tuplesize);
|
||||
if (result == NULL)
|
||||
return NULL;
|
||||
for (i=0 ; i < tuplesize ; i++) {
|
||||
it = PyTuple_GET_ITEM(lz->ittuple, i);
|
||||
if (it == NULL) {
|
||||
Py_INCREF(lz->fillvalue);
|
||||
item = lz->fillvalue;
|
||||
} else {
|
||||
assert(PyIter_Check(it));
|
||||
item = (*it->ob_type->tp_iternext)(it);
|
||||
if (item == NULL) {
|
||||
lz->numactive -= 1;
|
||||
if (lz->numactive == 0) {
|
||||
Py_DECREF(result);
|
||||
return NULL;
|
||||
} else {
|
||||
Py_INCREF(lz->fillvalue);
|
||||
item = lz->fillvalue;
|
||||
PyTuple_SET_ITEM(lz->ittuple, i, NULL);
|
||||
Py_DECREF(it);
|
||||
}
|
||||
}
|
||||
}
|
||||
PyTuple_SET_ITEM(result, i, item);
|
||||
}
|
||||
}
|
||||
return result;
|
||||
}
|
||||
|
||||
PyDoc_STRVAR(izip_longest_doc,
|
||||
"izip_longest(iter1 [,iter2 [...]], [fillvalue=None]) --> izip_longest object\n\
|
||||
\n\
|
||||
Return an izip_longest object whose .next() method returns a tuple where\n\
|
||||
the i-th element comes from the i-th iterable argument. The .next()\n\
|
||||
method continues until the longest iterable in the argument sequence\n\
|
||||
is exhausted and then it raises StopIteration. When the shorter iterables\n\
|
||||
are exhausted, the fillvalue is substituted in their place. The fillvalue\n\
|
||||
defaults to None or can be specified by a keyword argument.\n\
|
||||
");
|
||||
|
||||
static PyTypeObject iziplongest_type = {
|
||||
PyObject_HEAD_INIT(NULL)
|
||||
0, /* ob_size */
|
||||
"itertools.izip_longest", /* tp_name */
|
||||
sizeof(iziplongestobject), /* tp_basicsize */
|
||||
0, /* tp_itemsize */
|
||||
/* methods */
|
||||
(destructor)izip_longest_dealloc, /* tp_dealloc */
|
||||
0, /* tp_print */
|
||||
0, /* tp_getattr */
|
||||
0, /* tp_setattr */
|
||||
0, /* tp_compare */
|
||||
0, /* tp_repr */
|
||||
0, /* tp_as_number */
|
||||
0, /* tp_as_sequence */
|
||||
0, /* tp_as_mapping */
|
||||
0, /* tp_hash */
|
||||
0, /* tp_call */
|
||||
0, /* tp_str */
|
||||
PyObject_GenericGetAttr, /* tp_getattro */
|
||||
0, /* tp_setattro */
|
||||
0, /* tp_as_buffer */
|
||||
Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC |
|
||||
Py_TPFLAGS_BASETYPE, /* tp_flags */
|
||||
izip_longest_doc, /* tp_doc */
|
||||
(traverseproc)izip_longest_traverse, /* tp_traverse */
|
||||
0, /* tp_clear */
|
||||
0, /* tp_richcompare */
|
||||
0, /* tp_weaklistoffset */
|
||||
PyObject_SelfIter, /* tp_iter */
|
||||
(iternextfunc)izip_longest_next, /* tp_iternext */
|
||||
0, /* tp_methods */
|
||||
0, /* tp_members */
|
||||
0, /* tp_getset */
|
||||
0, /* tp_base */
|
||||
0, /* tp_dict */
|
||||
0, /* tp_descr_get */
|
||||
0, /* tp_descr_set */
|
||||
0, /* tp_dictoffset */
|
||||
0, /* tp_init */
|
||||
0, /* tp_alloc */
|
||||
izip_longest_new, /* tp_new */
|
||||
PyObject_GC_Del, /* tp_free */
|
||||
};
|
||||
|
||||
/* module level code ********************************************************/
|
||||
|
||||
|
@ -2480,6 +2713,7 @@ repeat(elem [,n]) --> elem, elem, elem, ... endlessly or up to n times\n\
|
|||
\n\
|
||||
Iterators terminating on the shortest input sequence:\n\
|
||||
izip(p, q, ...) --> (p[0], q[0]), (p[1], q[1]), ... \n\
|
||||
izip_longest(p, q, ...) --> (p[0], q[0]), (p[1], q[1]), ... \n\
|
||||
ifilter(pred, seq) --> elements of seq where pred(elem) is True\n\
|
||||
ifilterfalse(pred, seq) --> elements of seq where pred(elem) is False\n\
|
||||
islice(seq, [start,] stop [, step]) --> elements from\n\
|
||||
|
@ -2517,6 +2751,7 @@ inititertools(void)
|
|||
&ifilterfalse_type,
|
||||
&count_type,
|
||||
&izip_type,
|
||||
&iziplongest_type,
|
||||
&repeat_type,
|
||||
&groupby_type,
|
||||
NULL
|
||||
|
|
|
@ -1692,6 +1692,57 @@ posix_chmod(PyObject *self, PyObject *args)
|
|||
}
|
||||
|
||||
|
||||
#ifdef HAVE_CHFLAGS
|
||||
PyDoc_STRVAR(posix_chflags__doc__,
|
||||
"chflags(path, flags)\n\n\
|
||||
Set file flags.");
|
||||
|
||||
static PyObject *
|
||||
posix_chflags(PyObject *self, PyObject *args)
|
||||
{
|
||||
char *path;
|
||||
unsigned long flags;
|
||||
int res;
|
||||
if (!PyArg_ParseTuple(args, "etk:chflags",
|
||||
Py_FileSystemDefaultEncoding, &path, &flags))
|
||||
return NULL;
|
||||
Py_BEGIN_ALLOW_THREADS
|
||||
res = chflags(path, flags);
|
||||
Py_END_ALLOW_THREADS
|
||||
if (res < 0)
|
||||
return posix_error_with_allocated_filename(path);
|
||||
PyMem_Free(path);
|
||||
Py_INCREF(Py_None);
|
||||
return Py_None;
|
||||
}
|
||||
#endif /* HAVE_CHFLAGS */
|
||||
|
||||
#ifdef HAVE_LCHFLAGS
|
||||
PyDoc_STRVAR(posix_lchflags__doc__,
|
||||
"lchflags(path, flags)\n\n\
|
||||
Set file flags.\n\
|
||||
This function will not follow symbolic links.");
|
||||
|
||||
static PyObject *
|
||||
posix_lchflags(PyObject *self, PyObject *args)
|
||||
{
|
||||
char *path;
|
||||
unsigned long flags;
|
||||
int res;
|
||||
if (!PyArg_ParseTuple(args, "etk:lchflags",
|
||||
Py_FileSystemDefaultEncoding, &path, &flags))
|
||||
return NULL;
|
||||
Py_BEGIN_ALLOW_THREADS
|
||||
res = lchflags(path, flags);
|
||||
Py_END_ALLOW_THREADS
|
||||
if (res < 0)
|
||||
return posix_error_with_allocated_filename(path);
|
||||
PyMem_Free(path);
|
||||
Py_INCREF(Py_None);
|
||||
return Py_None;
|
||||
}
|
||||
#endif /* HAVE_LCHFLAGS */
|
||||
|
||||
#ifdef HAVE_CHROOT
|
||||
PyDoc_STRVAR(posix_chroot__doc__,
|
||||
"chroot(path)\n\n\
|
||||
|
@ -8070,10 +8121,16 @@ static PyMethodDef posix_methods[] = {
|
|||
{"ttyname", posix_ttyname, METH_VARARGS, posix_ttyname__doc__},
|
||||
#endif
|
||||
{"chdir", posix_chdir, METH_VARARGS, posix_chdir__doc__},
|
||||
#ifdef HAVE_CHFLAGS
|
||||
{"chflags", posix_chflags, METH_VARARGS, posix_chflags__doc__},
|
||||
#endif /* HAVE_CHFLAGS */
|
||||
{"chmod", posix_chmod, METH_VARARGS, posix_chmod__doc__},
|
||||
#ifdef HAVE_CHOWN
|
||||
{"chown", posix_chown, METH_VARARGS, posix_chown__doc__},
|
||||
#endif /* HAVE_CHOWN */
|
||||
#ifdef HAVE_LCHFLAGS
|
||||
{"lchflags", posix_lchflags, METH_VARARGS, posix_lchflags__doc__},
|
||||
#endif /* HAVE_LCHFLAGS */
|
||||
#ifdef HAVE_LCHOWN
|
||||
{"lchown", posix_lchown, METH_VARARGS, posix_lchown__doc__},
|
||||
#endif /* HAVE_LCHOWN */
|
||||
|
|
|
@ -362,20 +362,25 @@ const char *inet_ntop(int af, const void *src, char *dst, socklen_t size);
|
|||
#if defined(__FreeBSD__)
|
||||
#define BTPROTO_L2CAP BLUETOOTH_PROTO_L2CAP
|
||||
#define BTPROTO_RFCOMM BLUETOOTH_PROTO_RFCOMM
|
||||
#define BTPROTO_HCI BLUETOOTH_PROTO_HCI
|
||||
#define sockaddr_l2 sockaddr_l2cap
|
||||
#define sockaddr_rc sockaddr_rfcomm
|
||||
#define _BT_L2_MEMB(sa, memb) ((sa)->l2cap_##memb)
|
||||
#define _BT_RC_MEMB(sa, memb) ((sa)->rfcomm_##memb)
|
||||
#define _BT_HCI_MEMB(sa, memb) ((sa)->hci_##memb)
|
||||
#elif defined(__NetBSD__)
|
||||
#define sockaddr_l2 sockaddr_bt
|
||||
#define sockaddr_rc sockaddr_bt
|
||||
#define sockaddr_hci sockaddr_bt
|
||||
#define sockaddr_sco sockaddr_bt
|
||||
#define _BT_L2_MEMB(sa, memb) ((sa)->bt_##memb)
|
||||
#define _BT_RC_MEMB(sa, memb) ((sa)->bt_##memb)
|
||||
#define _BT_HCI_MEMB(sa, memb) ((sa)->bt_##memb)
|
||||
#define _BT_SCO_MEMB(sa, memb) ((sa)->bt_##memb)
|
||||
#else
|
||||
#define _BT_L2_MEMB(sa, memb) ((sa)->l2_##memb)
|
||||
#define _BT_RC_MEMB(sa, memb) ((sa)->rc_##memb)
|
||||
#define _BT_HCI_MEMB(sa, memb) ((sa)->hci_##memb)
|
||||
#define _BT_SCO_MEMB(sa, memb) ((sa)->sco_##memb)
|
||||
#endif
|
||||
#endif
|
||||
|
@ -1119,6 +1124,14 @@ makesockaddr(int sockfd, struct sockaddr *addr, int addrlen, int proto)
|
|||
return ret;
|
||||
}
|
||||
|
||||
case BTPROTO_HCI:
|
||||
{
|
||||
struct sockaddr_hci *a = (struct sockaddr_hci *) addr;
|
||||
PyObject *ret = NULL;
|
||||
ret = Py_BuildValue("i", _BT_HCI_MEMB(a, dev));
|
||||
return ret;
|
||||
}
|
||||
|
||||
#if !defined(__FreeBSD__)
|
||||
case BTPROTO_SCO:
|
||||
{
|
||||
|
@ -1347,6 +1360,18 @@ getsockaddrarg(PySocketSockObject *s, PyObject *args,
|
|||
*len_ret = sizeof *addr;
|
||||
return 1;
|
||||
}
|
||||
case BTPROTO_HCI:
|
||||
{
|
||||
struct sockaddr_hci *addr = (struct sockaddr_hci *)addr_ret;
|
||||
_BT_HCI_MEMB(addr, family) = AF_BLUETOOTH;
|
||||
if (!PyArg_ParseTuple(args, "i", &_BT_HCI_MEMB(addr, dev))) {
|
||||
PyErr_SetString(socket_error, "getsockaddrarg: "
|
||||
"wrong format");
|
||||
return 0;
|
||||
}
|
||||
*len_ret = sizeof *addr;
|
||||
return 1;
|
||||
}
|
||||
#if !defined(__FreeBSD__)
|
||||
case BTPROTO_SCO:
|
||||
{
|
||||
|
@ -1485,6 +1510,9 @@ getsockaddrlen(PySocketSockObject *s, socklen_t *len_ret)
|
|||
case BTPROTO_RFCOMM:
|
||||
*len_ret = sizeof (struct sockaddr_rc);
|
||||
return 1;
|
||||
case BTPROTO_HCI:
|
||||
*len_ret = sizeof (struct sockaddr_hci);
|
||||
return 1;
|
||||
#if !defined(__FreeBSD__)
|
||||
case BTPROTO_SCO:
|
||||
*len_ret = sizeof (struct sockaddr_sco);
|
||||
|
@ -4363,7 +4391,9 @@ init_socket(void)
|
|||
PyModule_AddIntConstant(m, "NETLINK_ROUTE6", NETLINK_ROUTE6);
|
||||
#endif
|
||||
PyModule_AddIntConstant(m, "NETLINK_IP6_FW", NETLINK_IP6_FW);
|
||||
#ifdef NETLINK_DNRTMSG
|
||||
PyModule_AddIntConstant(m, "NETLINK_DNRTMSG", NETLINK_DNRTMSG);
|
||||
#endif
|
||||
#ifdef NETLINK_TAPBASE
|
||||
PyModule_AddIntConstant(m, "NETLINK_TAPBASE", NETLINK_TAPBASE);
|
||||
#endif
|
||||
|
@ -4408,6 +4438,11 @@ init_socket(void)
|
|||
#ifdef USE_BLUETOOTH
|
||||
PyModule_AddIntConstant(m, "AF_BLUETOOTH", AF_BLUETOOTH);
|
||||
PyModule_AddIntConstant(m, "BTPROTO_L2CAP", BTPROTO_L2CAP);
|
||||
PyModule_AddIntConstant(m, "BTPROTO_HCI", BTPROTO_HCI);
|
||||
PyModule_AddIntConstant(m, "SOL_HCI", SOL_HCI);
|
||||
PyModule_AddIntConstant(m, "HCI_TIME_STAMP", HCI_TIME_STAMP);
|
||||
PyModule_AddIntConstant(m, "HCI_DATA_DIR", HCI_DATA_DIR);
|
||||
PyModule_AddIntConstant(m, "HCI_FILTER", HCI_FILTER);
|
||||
#if !defined(__FreeBSD__)
|
||||
PyModule_AddIntConstant(m, "BTPROTO_SCO", BTPROTO_SCO);
|
||||
#endif
|
||||
|
|
|
@ -46,6 +46,7 @@
|
|||
#include <bluetooth/rfcomm.h>
|
||||
#include <bluetooth/l2cap.h>
|
||||
#include <bluetooth/sco.h>
|
||||
#include <bluetooth/hci.h>
|
||||
#endif
|
||||
|
||||
#ifdef HAVE_BLUETOOTH_H
|
||||
|
@ -98,6 +99,7 @@ typedef union sock_addr {
|
|||
struct sockaddr_l2 bt_l2;
|
||||
struct sockaddr_rc bt_rc;
|
||||
struct sockaddr_sco bt_sco;
|
||||
struct sockaddr_hci bt_hci;
|
||||
#endif
|
||||
#ifdef HAVE_NETPACKET_PACKET_H
|
||||
struct sockaddr_ll ll;
|
||||
|
|
|
@ -980,6 +980,8 @@ PyNumber_Float(PyObject *o)
|
|||
int
|
||||
PySequence_Check(PyObject *s)
|
||||
{
|
||||
if (PyObject_IsInstance(s, (PyObject *)&PyDict_Type))
|
||||
return 0;
|
||||
return s != NULL && s->ob_type->tp_as_sequence &&
|
||||
s->ob_type->tp_as_sequence->sq_item != NULL;
|
||||
}
|
||||
|
|
|
@ -98,6 +98,17 @@ Tunable Dictionary Parameters
|
|||
depending on the size of the dictionary. Setting to *4
|
||||
eliminates every other resize step.
|
||||
|
||||
* Maximum sparseness (minimum dictionary load). What percentage
|
||||
of entries can be unused before the dictionary shrinks to
|
||||
free up memory and speed up iteration? (The current CPython
|
||||
code does not represent this parameter directly.)
|
||||
|
||||
* Shrinkage rate upon exceeding maximum sparseness. The current
|
||||
CPython code never even checks sparseness when deleting a
|
||||
key. When a new key is added, it resizes based on the number
|
||||
of active keys, so that the addition may trigger shrinkage
|
||||
rather than growth.
|
||||
|
||||
Tune-ups should be measured across a broad range of applications and
|
||||
use cases. A change to any parameter will help in some situations and
|
||||
hurt in others. The key is to find settings that help the most common
|
||||
|
@ -115,6 +126,15 @@ __iter__(), iterkeys(), iteritems(), itervalues(), and update().
|
|||
Also, every dictionary iterates at least twice, once for the memset()
|
||||
when it is created and once by dealloc().
|
||||
|
||||
Dictionary operations involving only a single key can be O(1) unless
|
||||
resizing is possible. By checking for a resize only when the
|
||||
dictionary can grow (and may *require* resizing), other operations
|
||||
remain O(1), and the odds of resize thrashing or memory fragmentation
|
||||
are reduced. In particular, an algorithm that empties a dictionary
|
||||
by repeatedly invoking .pop will see no resizing, which might
|
||||
not be necessary at all because the dictionary is eventually
|
||||
discarded entirely.
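[Illustrative aside, not part of the file] A minimal sketch of the emptying pattern the note refers to, using popitem() as the per-key removal:

    d = dict.fromkeys(range(10000))
    while d:
        d.popitem()      # deletions alone never trigger a resize
    # the now-empty dict keeps its large table until the object itself is discarded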
|
||||
|
||||
|
||||
Results of Cache Locality Experiments
|
||||
-------------------------------------
|
||||
|
|
|
@ -833,6 +833,34 @@ PyDict_Next(PyObject *op, Py_ssize_t *ppos, PyObject **pkey, PyObject **pvalue)
|
|||
return 1;
|
||||
}
|
||||
|
||||
/* Internal version of PyDict_Next that returns a hash value in addition to the key and value.*/
|
||||
int
|
||||
_PyDict_Next(PyObject *op, Py_ssize_t *ppos, PyObject **pkey, PyObject **pvalue, long *phash)
|
||||
{
|
||||
register Py_ssize_t i;
|
||||
register Py_ssize_t mask;
|
||||
register dictentry *ep;
|
||||
|
||||
if (!PyDict_Check(op))
|
||||
return 0;
|
||||
i = *ppos;
|
||||
if (i < 0)
|
||||
return 0;
|
||||
ep = ((dictobject *)op)->ma_table;
|
||||
mask = ((dictobject *)op)->ma_mask;
|
||||
while (i <= mask && ep[i].me_value == NULL)
|
||||
i++;
|
||||
*ppos = i+1;
|
||||
if (i > mask)
|
||||
return 0;
|
||||
*phash = (long)(ep[i].me_hash);
|
||||
if (pkey)
|
||||
*pkey = ep[i].me_key;
|
||||
if (pvalue)
|
||||
*pvalue = ep[i].me_value;
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* Methods */
|
||||
|
||||
static void
|
||||
|
@ -1336,7 +1364,7 @@ PyDict_Merge(PyObject *a, PyObject *b, int override)
|
|||
return -1;
|
||||
}
|
||||
mp = (dictobject*)a;
|
||||
if (PyDict_Check(b)) {
|
||||
if (PyDict_CheckExact(b)) {
|
||||
other = (dictobject*)b;
|
||||
if (other == mp || other->ma_used == 0)
|
||||
/* a.update(a) or a.update({}); nothing to do */
|
||||
|
@@ -1909,6 +1937,17 @@ PyDict_Contains(PyObject *op, PyObject *key)
     return ep == NULL ? -1 : (ep->me_value != NULL);
 }
 
+/* Internal version of PyDict_Contains used when the hash value is already known */
+int
+_PyDict_Contains(PyObject *op, PyObject *key, long hash)
+{
+    dictobject *mp = (dictobject *)op;
+    dictentry *ep;
+
+    ep = (mp->ma_lookup)(mp, key, hash);
+    return ep == NULL ? -1 : (ep->me_value != NULL);
+}
+
 /* Hack to implement "key in dict" */
 static PySequenceMethods dict_as_sequence = {
     0,                          /* sq_length */
@@ -62,6 +62,12 @@ enum_next(enumobject *en)
     PyObject *result = en->en_result;
     PyObject *it = en->en_sit;
 
+    if (en->en_index == LONG_MAX) {
+        PyErr_SetString(PyExc_OverflowError,
+            "enumerate() is limited to LONG_MAX items");
+        return NULL;
+    }
+
     next_item = (*it->ob_type->tp_iternext)(it);
     if (next_item == NULL)
         return NULL;
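
The new guard only fires after LONG_MAX items, so it is impractical to demonstrate directly; the logic it adds can be sketched in Python (illustrative only, with LONG_MAX standing in for the platform's C long limit):

    LONG_MAX = 2 ** 31 - 1      # stand-in; the real limit is platform-dependent

    def checked_enumerate(iterable):
        """Sketch of the new enum_next() behaviour: refuse to wrap the counter
        silently and raise OverflowError instead."""
        index = 0
        for item in iterable:
            if index == LONG_MAX:
                raise OverflowError("enumerate() is limited to LONG_MAX items")
            yield index, item
            index += 1

    print(list(checked_enumerate("abc")))   # [(0, 'a'), (1, 'b'), (2, 'c')]
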
@@ -932,14 +932,31 @@ set_update_internal(PySetObject *so, PyObject *other)
 {
     PyObject *key, *it;
 
-    if (PyAnySet_Check(other))
+    if (PyAnySet_CheckExact(other))
         return set_merge(so, other);
 
     if (PyDict_CheckExact(other)) {
         PyObject *value;
         Py_ssize_t pos = 0;
-        while (PyDict_Next(other, &pos, &key, &value)) {
-            if (set_add_key(so, key) == -1)
+        long hash;
+        Py_ssize_t dictsize = PyDict_Size(other);
+
+        /* Do one big resize at the start, rather than
+         * incrementally resizing as we insert new keys. Expect
+         * that there will be no (or few) overlapping keys.
+         */
+        if (dictsize == -1)
+            return -1;
+        if ((so->fill + dictsize)*3 >= (so->mask+1)*2) {
+            if (set_table_resize(so, (so->used + dictsize)*2) != 0)
+                return -1;
+        }
+        while (_PyDict_Next(other, &pos, &key, &value, &hash)) {
+            setentry an_entry;
+
+            an_entry.hash = hash;
+            an_entry.key = key;
+            if (set_add_entry(so, &an_entry) == -1)
                 return -1;
         }
         return 0;
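
The pre-resize test is the set's usual 2/3 load-factor rule applied to the projected fill. A small numeric sketch of the same arithmetic (illustrative; fill and mask mirror the C-struct fields):

    def needs_presize(fill, mask, dictsize):
        """True when adding dictsize keys could push the fill to 2/3 or more of
        the mask+1 slots, i.e. (so->fill + dictsize)*3 >= (so->mask+1)*2."""
        return (fill + dictsize) * 3 >= (mask + 1) * 2

    # Updating a small set from a 100-key dict triggers one up-front resize
    # instead of many incremental ones; a tiny update of a roomy set does not.
    print(needs_presize(fill=5, mask=7, dictsize=100))   # True
    print(needs_presize(fill=1, mask=31, dictsize=3))    # False
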
@@ -1210,7 +1227,7 @@ set_intersection(PySetObject *so, PyObject *other)
     if (result == NULL)
         return NULL;
 
-    if (PyAnySet_Check(other)) {
+    if (PyAnySet_CheckExact(other)) {
         Py_ssize_t pos = 0;
         setentry *entry;
 
@@ -1334,7 +1351,7 @@ set_difference_update_internal(PySetObject *so, PyObject *other)
     if ((PyObject *)so == other)
         return set_clear_internal(so);
 
-    if (PyAnySet_Check(other)) {
+    if (PyAnySet_CheckExact(other)) {
         setentry *entry;
         Py_ssize_t pos = 0;
 
@@ -1383,7 +1400,7 @@ set_difference(PySetObject *so, PyObject *other)
     setentry *entry;
     Py_ssize_t pos = 0;
 
-    if (!PyAnySet_Check(other) && !PyDict_CheckExact(other)) {
+    if (!PyAnySet_CheckExact(other) && !PyDict_CheckExact(other)) {
         result = set_copy(so);
         if (result == NULL)
             return NULL;
@@ -1402,7 +1419,7 @@ set_difference(PySetObject *so, PyObject *other)
             setentry entrycopy;
             entrycopy.hash = entry->hash;
             entrycopy.key = entry->key;
-            if (!PyDict_Contains(other, entry->key)) {
+            if (!_PyDict_Contains(other, entry->key, entry->hash)) {
                 if (set_add_entry((PySetObject *)result, &entrycopy) == -1) {
                     Py_DECREF(result);
                     return NULL;
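
At the Python level this is the path taken when a set is differenced against a dict (membership is tested against the dict's keys); the change just reuses the hash already stored in each set entry. For illustration only:

    s = set(["a", "b", "c"])
    d = {"a": 1, "z": 2}

    # set.difference() accepts any iterable and has a fast path for exact
    # dicts; keys present in the dict are dropped from the result.
    print(sorted(s.difference(d)))     # ['b', 'c']
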
@@ -1473,12 +1490,10 @@ set_symmetric_difference_update(PySetObject *so, PyObject *other)
     if (PyDict_CheckExact(other)) {
         PyObject *value;
         int rv;
-        while (PyDict_Next(other, &pos, &key, &value)) {
+        long hash;
+        while (_PyDict_Next(other, &pos, &key, &value, &hash)) {
             setentry an_entry;
-            long hash = PyObject_Hash(key);
 
-            if (hash == -1)
-                return NULL;
             an_entry.hash = hash;
             an_entry.key = key;
             rv = set_discard_entry(so, &an_entry);
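
Same idea as above: the dict fast path now takes the cached hash from _PyDict_Next() instead of calling PyObject_Hash() for every key. The Python-level behaviour it speeds up, for illustration:

    s = set(["a", "b"])
    d = {"b": 1, "c": 2}

    # Keys present on both sides are discarded, keys only in the dict are added.
    s.symmetric_difference_update(d)
    print(sorted(s))                   # ['a', 'c']
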
@@ -1492,7 +1507,7 @@ set_symmetric_difference_update(PySetObject *so, PyObject *other)
         Py_RETURN_NONE;
     }
 
-    if (PyAnySet_Check(other)) {
+    if (PyAnySet_CheckExact(other)) {
         Py_INCREF(other);
         otherset = (PySetObject *)other;
     } else {
@@ -1575,7 +1590,7 @@ set_issubset(PySetObject *so, PyObject *other)
     setentry *entry;
     Py_ssize_t pos = 0;
 
-    if (!PyAnySet_Check(other)) {
+    if (!PyAnySet_CheckExact(other)) {
         PyObject *tmp, *result;
         tmp = make_new_set(&PySet_Type, other);
         if (tmp == NULL)
@@ -1604,7 +1619,7 @@ set_issuperset(PySetObject *so, PyObject *other)
 {
     PyObject *tmp, *result;
 
-    if (!PyAnySet_Check(other)) {
+    if (!PyAnySet_CheckExact(other)) {
         tmp = make_new_set(&PySet_Type, other);
         if (tmp == NULL)
             return NULL;
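
The PyAnySet_Check -> PyAnySet_CheckExact changes above all serve the same goal: a set or frozenset subclass no longer takes the raw-table fast path, so methods it overrides actually participate. A hypothetical subclass, for illustration (behaviour shown is for an interpreter built from this branch; later CPython releases revisited which operations take the fast path):

    class LoggingSet(set):
        """Hypothetical subclass that reports when it is iterated."""
        def __iter__(self):
            print("LoggingSet.__iter__ called")
            return set.__iter__(self)

    a = set([1, 2, 3])
    b = LoggingSet([2, 3, 4])

    # With the exact-type checks, b is handled like any other iterable here,
    # so its overridden __iter__ runs instead of being bypassed.
    print(sorted(a.intersection(b)))   # log line, then [2, 3]
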
@@ -4279,7 +4279,13 @@ SLOT1(slot_nb_inplace_add, "__iadd__", PyObject *, "O")
 SLOT1(slot_nb_inplace_subtract, "__isub__", PyObject *, "O")
 SLOT1(slot_nb_inplace_multiply, "__imul__", PyObject *, "O")
 SLOT1(slot_nb_inplace_remainder, "__imod__", PyObject *, "O")
-SLOT1(slot_nb_inplace_power, "__ipow__", PyObject *, "O")
+/* Can't use SLOT1 here, because nb_inplace_power is ternary */
+static PyObject *
+slot_nb_inplace_power(PyObject *self, PyObject * arg1, PyObject *arg2)
+{
+    static PyObject *cache_str;
+    return call_method(self, "__ipow__", &cache_str, "(" "O" ")", arg1);
+}
 SLOT1(slot_nb_inplace_lshift, "__ilshift__", PyObject *, "O")
 SLOT1(slot_nb_inplace_rshift, "__irshift__", PyObject *, "O")
 SLOT1(slot_nb_inplace_and, "__iand__", PyObject *, "O")
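
For illustration (not part of the patch): the in-place power slot is ternary at the C level, but the __ipow__ hook keeps receiving a single argument; the hand-written wrapper above forwards only the exponent and discards the third slot argument (bug #1653736).

    class Acc(object):
        """Hypothetical class with an in-place power hook."""
        def __init__(self, value):
            self.value = value
        def __ipow__(self, exponent):
            # Called with exactly one argument: the exponent from "x **= y".
            self.value **= exponent
            return self

    x = Acc(3)
    x **= 2
    print(x.value)      # 9
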
@@ -68,7 +68,7 @@ module Python version "$Revision$"
      | Subscript(expr value, slice slice, expr_context ctx)
      | Name(identifier id, expr_context ctx)
      | List(expr* elts, expr_context ctx)
-     | Tuple(expr *elts, expr_context ctx)
+     | Tuple(expr* elts, expr_context ctx)
 
     -- col_offset is the byte offset in the utf8 string the parser uses
     attributes (int lineno, int col_offset)
@@ -525,6 +525,9 @@ static PyObject* ast2obj_int(bool b)
                       (cons.name, cons.name), 1)
             self.emit("if (!%s_singleton) return 0;" % cons.name, 1)
 
+def parse_version(mod):
+    return mod.version.value[12:-3]
+
 class ASTModuleVisitor(PickleVisitor):
 
     def visitModule(self, mod):
|
|||
self.emit('if (PyModule_AddIntConstant(m, "PyCF_ONLY_AST", PyCF_ONLY_AST) < 0)', 1)
|
||||
self.emit("return;", 2)
|
||||
# Value of version: "$Revision$"
|
||||
self.emit('if (PyModule_AddStringConstant(m, "__version__", "%s") < 0)' % mod.version.value[12:-3], 1)
|
||||
self.emit('if (PyModule_AddStringConstant(m, "__version__", "%s") < 0)'
|
||||
% parse_version(mod), 1)
|
||||
self.emit("return;", 2)
|
||||
for dfn in mod.dfns:
|
||||
self.visit(dfn)
|
||||
|
@@ -721,11 +725,23 @@ class ChainOfVisitors:
         v.visit(object)
         v.emit("", 0)
 
+common_msg = "/* File automatically generated by %s. */\n"
+
+c_file_msg = """
+/*
+   __version__ %s.
+
+   This module must be committed separately after each AST grammar change;
+   The __version__ number is set to the revision number of the commit
+   containing the grammar change.
+*/
+"""
+
 def main(srcfile):
     argv0 = sys.argv[0]
     components = argv0.split(os.sep)
     argv0 = os.sep.join(components[-2:])
-    auto_gen_msg = '/* File automatically generated by %s */\n' % argv0
+    auto_gen_msg = common_msg % argv0
     mod = asdl.parse(srcfile)
     if not asdl.check(mod):
         sys.exit(1)
@@ -746,6 +762,7 @@ def main(srcfile):
     p = os.path.join(SRC_DIR, str(mod.name) + "-ast.c")
     f = open(p, "wb")
     print >> f, auto_gen_msg
+    print >> f, c_file_msg % parse_version(mod)
     print >> f, '#include "Python.h"'
     print >> f, '#include "%s-ast.h"' % mod.name
    print >> f
@@ -1,4 +1,13 @@
-/* File automatically generated by Parser/asdl_c.py */
+/* File automatically generated by Parser/asdl_c.py. */
+
+
+/*
+   __version__ 53731.
+
+   This module must be committed separately after each AST grammar change;
+   The __version__ number is set to the revision number of the commit
+   containing the grammar change.
+*/
 
 #include "Python.h"
 #include "Python-ast.h"
@@ -3080,7 +3089,7 @@ init_ast(void)
     if (PyDict_SetItemString(d, "AST", (PyObject*)AST_type) < 0) return;
     if (PyModule_AddIntConstant(m, "PyCF_ONLY_AST", PyCF_ONLY_AST) < 0)
         return;
-    if (PyModule_AddStringConstant(m, "__version__", "53704") < 0)
+    if (PyModule_AddStringConstant(m, "__version__", "53731") < 0)
         return;
     if (PyDict_SetItemString(d, "mod", (PyObject*)mod_type) < 0) return;
     if (PyDict_SetItemString(d, "Module", (PyObject*)Module_type) < 0)
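
The constant is visible from Python as _ast.__version__; for illustration (the value shown assumes an interpreter built from this revision):

    import _ast

    # __version__ records the SVN revision of the last AST grammar change.
    print(_ast.__version__)     # '53731' here
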
@@ -1,5 +1,5 @@
 #! /bin/sh
-# From configure.in Revision: 53017 .
+# From configure.in Revision: 53610 .
 # Guess values for system-dependent variables and create Makefiles.
 # Generated by GNU Autoconf 2.61 for python 3.0.
 #
@@ -15068,11 +15068,13 @@ echo "${ECHO_T}MACHDEP_OBJS" >&6; }
 
 
 
-for ac_func in alarm bind_textdomain_codeset chown clock confstr ctermid \
- execv fork fpathconf ftime ftruncate \
+
+
+for ac_func in alarm bind_textdomain_codeset chflags chown clock confstr \
+ ctermid execv fork fpathconf ftime ftruncate \
  gai_strerror getgroups getlogin getloadavg getpeername getpgid getpid \
  getpriority getpwent getspnam getspent getsid getwd \
- kill killpg lchown lstat mkfifo mknod mktime \
+ kill killpg lchflags lchown lstat mkfifo mknod mktime \
  mremap nice pathconf pause plock poll pthread_init \
  putenv readlink realpath \
  select setegid seteuid setgid \
@@ -2282,11 +2282,11 @@
 AC_MSG_RESULT(MACHDEP_OBJS)
 
 # checks for library functions
-AC_CHECK_FUNCS(alarm bind_textdomain_codeset chown clock confstr ctermid \
- execv fork fpathconf ftime ftruncate \
+AC_CHECK_FUNCS(alarm bind_textdomain_codeset chflags chown clock confstr \
+ ctermid execv fork fpathconf ftime ftruncate \
  gai_strerror getgroups getlogin getloadavg getpeername getpgid getpid \
  getpriority getpwent getspnam getspent getsid getwd \
- kill killpg lchown lstat mkfifo mknod mktime \
+ kill killpg lchflags lchown lstat mkfifo mknod mktime \
  mremap nice pathconf pause plock poll pthread_init \
  putenv readlink realpath \
  select setegid seteuid setgid \
@@ -67,6 +67,9 @@
 /* Define this if you have the type _Bool. */
 #undef HAVE_C99_BOOL
 
+/* Define to 1 if you have the `chflags' function. */
+#undef HAVE_CHFLAGS
+
 /* Define to 1 if you have the `chown' function. */
 #undef HAVE_CHOWN
 
@@ -290,6 +293,9 @@
    Solaris and Linux, the necessary defines are already defined.) */
 #undef HAVE_LARGEFILE_SUPPORT
 
+/* Define to 1 if you have the `lchflags' function. */
+#undef HAVE_LCHFLAGS
+
 /* Define to 1 if you have the `lchown' function. */
 #undef HAVE_LCHOWN
 
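
These configure/pyconfig.h checks gate the flag-manipulation calls exposed by the posix module; a hedged usage sketch, assuming a build on a platform where the checks succeeded (e.g. BSD or macOS):

    import os
    import stat

    path = "example.txt"                    # hypothetical file
    open(path, "w").close()

    # os.chflags()/os.lchflags() only exist when configure found chflags()/
    # lchflags() (HAVE_CHFLAGS / HAVE_LCHFLAGS), so probe before using them.
    if hasattr(os, "chflags"):
        os.chflags(path, stat.UF_NODUMP)    # ask dump(8) to skip this file
    if hasattr(os, "lchflags"):
        os.lchflags(path, 0)                # clear flags, not following symlinks
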
setup.py
|
@ -1318,7 +1318,8 @@ class PyBuildExt(build_ext):
|
|||
from distutils.dep_util import newer_group
|
||||
|
||||
config_sources = [os.path.join(ffi_srcdir, fname)
|
||||
for fname in os.listdir(ffi_srcdir)]
|
||||
for fname in os.listdir(ffi_srcdir)
|
||||
if os.path.isfile(os.path.join(ffi_srcdir, fname))]
|
||||
if self.force or newer_group(config_sources,
|
||||
ffi_configfile):
|
||||
from distutils.dir_util import mkpath
|
||||
|
|