Merge with remote.
This commit is contained in: commit 2f234db3eb
@@ -321,7 +321,7 @@ parameters to be passed in as a tuple acceptable for parsing via
 
 The :const:`METH_KEYWORDS` bit may be set in the third field if keyword
 arguments should be passed to the function. In this case, the C function should
-accept a third ``PyObject \*`` parameter which will be a dictionary of keywords.
+accept a third ``PyObject *`` parameter which will be a dictionary of keywords.
 Use :c:func:`PyArg_ParseTupleAndKeywords` to parse the arguments to such a
 function.
 
@@ -157,7 +157,7 @@ How do I obtain a copy of the Python source?
 
 The latest Python source distribution is always available from python.org, at
 http://www.python.org/download/.  The latest development sources can be obtained
-via anonymous Subversion at http://svn.python.org/projects/python/trunk.
+via anonymous Mercurial access at http://hg.python.org/cpython.
 
 The source distribution is a gzipped tar file containing the complete C source,
 Sphinx-formatted documentation, Python library modules, example programs, and
@@ -44,9 +44,11 @@ The module defines the following items:
 
 The *mode* argument can be any of ``'r'``, ``'rb'``, ``'a'``, ``'ab'``, ``'w'``,
 or ``'wb'``, depending on whether the file will be read or written.  The default
-is the mode of *fileobj* if discernible; otherwise, the default is ``'rb'``. If
-not given, the 'b' flag will be added to the mode to ensure the file is opened
-in binary mode for cross-platform portability.
+is the mode of *fileobj* if discernible; otherwise, the default is ``'rb'``.
+
+Note that the file is always opened in binary mode; text mode is not
+supported. If you need to read a compressed file in text mode, wrap your
+:class:`GzipFile` with an :class:`io.TextIOWrapper`.
 
 The *compresslevel* argument is an integer from ``1`` to ``9`` controlling the
 level of compression; ``1`` is fastest and produces the least compression, and
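The documentation change above tells readers to wrap a :class:`GzipFile` in an :class:`io.TextIOWrapper` for text-mode access. A minimal sketch of that pattern, using an in-memory buffer instead of a real file:

```python
import gzip
import io

# Write some text through a gzip stream held entirely in memory.
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode='wb') as gz:
    gz.write('hello\nworld\n'.encode('utf-8'))

# GzipFile is binary-only; io.TextIOWrapper layers text decoding on top.
buf.seek(0)
with io.TextIOWrapper(gzip.GzipFile(fileobj=buf, mode='rb'),
                      encoding='utf-8') as text:
    lines = text.readlines()

print(lines)  # ['hello\n', 'world\n']
```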
@@ -2294,8 +2294,8 @@ Files and Directories
       single: directory; walking
       single: directory; traversal
 
    This behaves exactly like :func:`walk`, except that it yields a 4-tuple
    ``(dirpath, dirnames, filenames, dirfd)``.
 
    *dirpath*, *dirnames* and *filenames* are identical to :func:`walk` output,
    and *dirfd* is a file descriptor referring to the directory *dirpath*.
 
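The 4-tuple described in this hunk can be seen directly. A quick sketch (POSIX-only, since `os.fwalk` is not available on Windows):

```python
import os

# os.fwalk yields (dirpath, dirnames, filenames, dirfd); dirfd can be passed
# to the dir_fd parameter of functions such as os.stat.
for dirpath, dirnames, filenames, dirfd in os.fwalk('.'):
    st = os.stat('.', dir_fd=dirfd)  # stat the directory through its fd
    top = (dirpath, dirfd)
    break  # the first yielded tuple describes the top directory itself
```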
@@ -326,10 +326,32 @@ The :mod:`array` module supports the :c:type:`long long` type using ``q`` and
 (Contributed by Oren Tirosh and Hirokazu Yamamoto in :issue:`1172711`)
 
 
+bz2
+---
+
+The :mod:`bz2` module has been rewritten from scratch. In the process, several
+new features have been added:
+
+* :class:`bz2.BZ2File` can now read from and write to arbitrary file-like
+  objects, by means of its constructor's *fileobj* argument.
+
+  (Contributed by Nadeem Vawda in :issue:`5863`)
+
+* :class:`bz2.BZ2File` and :func:`bz2.decompress` can now decompress
+  multi-stream inputs (such as those produced by the :program:`pbzip2` tool).
+  :class:`bz2.BZ2File` can now also be used to create this type of file, using
+  the ``'a'`` (append) mode.
+
+  (Contributed by Nir Aides in :issue:`1625`)
+
+* :class:`bz2.BZ2File` now implements all of the :class:`io.BufferedIOBase` API,
+  except for the :meth:`detach` and :meth:`truncate` methods.
+
+
 codecs
 ------
 
-The :mod:`~encodings.mbcs` codec has be rewritten to handle correclty
+The :mod:`~encodings.mbcs` codec has been rewritten to handle correctly
 ``replace`` and ``ignore`` error handlers on all Windows versions. The
 :mod:`~encodings.mbcs` codec now supports all error handlers, instead of only
 ``replace`` to encode and ``ignore`` to decode.
 
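The multi-stream support announced above can be exercised directly: concatenating two independently compressed streams gives the layout produced by tools like pbzip2, and `bz2.decompress` now reads through all of them. A small sketch:

```python
import bz2

# Two independent bz2 streams, concatenated back to back.
data = bz2.compress(b'first stream\n') + bz2.compress(b'second stream\n')

# bz2.decompress now handles concatenated streams as one input.
result = bz2.decompress(data)
print(result)  # b'first stream\nsecond stream\n'
```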
@@ -138,7 +138,7 @@ class BZ2File(io.BufferedIOBase):
 
     def seekable(self):
         """Return whether the file supports seeking."""
-        return self.readable()
+        return self.readable() and self._fp.seekable()
 
     def readable(self):
         """Return whether the file was opened for reading."""
@@ -165,9 +165,12 @@ class BZ2File(io.BufferedIOBase):
             raise io.UnsupportedOperation("File not open for writing")
 
     def _check_can_seek(self):
-        if not self.seekable():
+        if not self.readable():
             raise io.UnsupportedOperation("Seeking is only supported "
                                           "on files open for reading")
+        if not self._fp.seekable():
+            raise io.UnsupportedOperation("The underlying file object "
+                                          "does not support seeking")
 
     # Fill the readahead buffer if it is empty. Returns False on EOF.
     def _fill_buffer(self):
@@ -313,10 +313,8 @@ def translate_pattern(pattern, anchor=1, prefix=None, is_regex=0):
             # ditch end of pattern character
             empty_pattern = glob_to_re('')
             prefix_re = (glob_to_re(prefix))[:-len(empty_pattern)]
-            # match both path separators, as in Postel's principle
-            sep_pat = "[" + re.escape(os.path.sep + os.path.altsep
-                                      if os.path.altsep else os.path.sep) + "]"
-            pattern_re = "^" + sep_pat.join([prefix_re, ".*" + pattern_re])
+            # paths should always use / in manifest templates
+            pattern_re = "^%s/.*%s" % (prefix_re, pattern_re)
         else:                           # no prefix -- respect anchor flag
             if anchor:
                 pattern_re = "^" + pattern_re
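The new prefix handling joins the prefix and pattern with a literal `/`, since manifest templates always use forward slashes. A toy illustration; the `prefix_re` and `pattern_re` values below are simplified stand-ins for real `glob_to_re` output:

```python
import re

prefix_re = "foo"          # hypothetical glob_to_re('foo') result, simplified
pattern_re = r".*\.py$"    # hypothetical glob_to_re('*.py') result, simplified

# paths should always use / in manifest templates
full_re = "^%s/.*%s" % (prefix_re, pattern_re)

# Matches any .py file anywhere under the foo/ prefix, and nothing outside it.
assert re.match(full_re, "foo/sub/mod.py")
assert not re.match(full_re, "bar/mod.py")
```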
@@ -146,6 +146,7 @@ def get_python_lib(plat_specific=0, standard_lib=0, prefix=None):
             "I don't know where Python installs its library "
             "on platform '%s'" % os.name)
 
+_USE_CLANG = None
 
 def customize_compiler(compiler):
     """Do any platform-specific customization of a CCompiler instance.
@@ -158,8 +159,38 @@ def customize_compiler(compiler):
             get_config_vars('CC', 'CXX', 'OPT', 'CFLAGS',
                             'CCSHARED', 'LDSHARED', 'SO', 'AR', 'ARFLAGS')
 
+        newcc = None
         if 'CC' in os.environ:
-            cc = os.environ['CC']
+            newcc = os.environ['CC']
+        elif sys.platform == 'darwin' and cc == 'gcc-4.2':
+            # Issue #13590:
+            #    Since Apple removed gcc-4.2 in Xcode 4.2, we can no
+            #    longer assume it is available for extension module builds.
+            #    If Python was built with gcc-4.2, check first to see if
+            #    it is available on this system; if not, try to use clang
+            #    instead unless the caller explicitly set CC.
+            global _USE_CLANG
+            if _USE_CLANG is None:
+                from distutils import log
+                from subprocess import Popen, PIPE
+                p = Popen("! type gcc-4.2 && type clang && exit 2",
+                          shell=True, stdout=PIPE, stderr=PIPE)
+                p.wait()
+                if p.returncode == 2:
+                    _USE_CLANG = True
+                    log.warn("gcc-4.2 not found, using clang instead")
+                else:
+                    _USE_CLANG = False
+            if _USE_CLANG:
+                newcc = 'clang'
+        if newcc:
+            # On OS X, if CC is overridden, use that as the default
+            # command for LDSHARED as well
+            if (sys.platform == 'darwin'
+                    and 'LDSHARED' not in os.environ
+                    and ldshared.startswith(cc)):
+                ldshared = newcc + ldshared[len(cc):]
+            cc = newcc
         if 'CXX' in os.environ:
            cxx = os.environ['CXX']
         if 'LDSHARED' in os.environ:
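The gcc-4.2/clang probe above leans on shell exit codes: the compound command exits with status 2 only when gcc-4.2 is absent and clang is present. The mechanics can be sketched with a trivial command (the status value here is arbitrary):

```python
from subprocess import PIPE, Popen

# A shell command's exit status is surfaced through Popen.returncode,
# which is how customize_compiler decides whether to fall back to clang.
p = Popen("exit 2", shell=True, stdout=PIPE, stderr=PIPE)
p.wait()
print(p.returncode)  # 2
```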
 Lib/gzip.py | 20
@@ -93,6 +93,9 @@ class GzipFile(io.BufferedIOBase):
     """The GzipFile class simulates most of the methods of a file object with
     the exception of the readinto() and truncate() methods.
 
+    This class only supports opening files in binary mode. If you need to open a
+    compressed file in text mode, wrap your GzipFile with an io.TextIOWrapper.
+
     """
 
     myfileobj = None
@@ -119,8 +122,8 @@ class GzipFile(io.BufferedIOBase):
         The mode argument can be any of 'r', 'rb', 'a', 'ab', 'w', or 'wb',
         depending on whether the file will be read or written.  The default
         is the mode of fileobj if discernible; otherwise, the default is 'rb'.
-        Be aware that only the 'rb', 'ab', and 'wb' values should be used
-        for cross-platform portability.
+        A mode of 'r' is equivalent to one of 'rb', and similarly for 'w' and
+        'wb', and 'a' and 'ab'.
 
         The compresslevel argument is an integer from 1 to 9 controlling the
         level of compression; 1 is fastest and produces the least compression,
@@ -137,8 +140,8 @@ class GzipFile(io.BufferedIOBase):
 
         """
 
-        # guarantee the file is opened in binary mode on platforms
-        # that care about that sort of thing
+        if mode and ('t' in mode or 'U' in mode):
+            raise ValueError("Invalid mode: {!r}".format(mode))
         if mode and 'b' not in mode:
             mode += 'b'
         if fileobj is None:
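After this change GzipFile rejects text-mode flags up front instead of silently appending `'b'`. The same check survives in current Python and can be observed directly:

```python
import gzip
import io

# 't' (or 'U') in the mode now raises ValueError immediately.
try:
    gzip.GzipFile(fileobj=io.BytesIO(), mode='rt')
except ValueError as exc:
    error = str(exc)

print(error)  # Invalid mode: 'rt'
```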
@@ -149,10 +152,9 @@ class GzipFile(io.BufferedIOBase):
         else:
             filename = ''
         if mode is None:
-            if hasattr(fileobj, 'mode'): mode = fileobj.mode
-            else: mode = 'rb'
+            mode = getattr(fileobj, 'mode', 'rb')
 
-        if mode[0:1] == 'r':
+        if mode.startswith('r'):
             self.mode = READ
             # Set flag indicating start of a new member
             self._new_member = True
@@ -167,7 +169,7 @@ class GzipFile(io.BufferedIOBase):
             self.min_readsize = 100
             fileobj = _PaddedFile(fileobj)
 
-        elif mode[0:1] == 'w' or mode[0:1] == 'a':
+        elif mode.startswith(('w', 'a')):
             self.mode = WRITE
             self._init_write(filename)
             self.compress = zlib.compressobj(compresslevel,
|
@ -176,7 +178,7 @@ class GzipFile(io.BufferedIOBase):
|
||||||
zlib.DEF_MEM_LEVEL,
|
zlib.DEF_MEM_LEVEL,
|
||||||
0)
|
0)
|
||||||
else:
|
else:
|
||||||
raise IOError("Mode " + mode + " not supported")
|
raise ValueError("Invalid mode: {!r}".format(mode))
|
||||||
|
|
||||||
self.fileobj = fileobj
|
self.fileobj = fileobj
|
||||||
self.offset = 0
|
self.offset = 0
|
||||||
|
|
|
@@ -184,7 +184,17 @@ class HTMLParser(_markupbase.ParserBase):
             elif startswith("<?", i):
                 k = self.parse_pi(i)
             elif startswith("<!", i):
-                k = self.parse_declaration(i)
+                # this might fail with things like <! not a comment > or
+                # <! -- space before '--' -->.  When strict is True an
+                # error is raised, when it's False they will be considered
+                # as bogus comments and parsed (see parse_bogus_comment).
+                if self.strict:
+                    k = self.parse_declaration(i)
+                else:
+                    try:
+                        k = self.parse_declaration(i)
+                    except HTMLParseError:
+                        k = self.parse_bogus_comment(i)
             elif (i + 1) < n:
                 self.handle_data("<")
                 k = i + 1
@@ -256,6 +266,19 @@ class HTMLParser(_markupbase.ParserBase):
         i = self.updatepos(i, n)
         self.rawdata = rawdata[i:]
 
+    # Internal -- parse bogus comment, return length or -1 if not terminated
+    # see http://www.w3.org/TR/html5/tokenization.html#bogus-comment-state
+    def parse_bogus_comment(self, i, report=1):
+        rawdata = self.rawdata
+        if rawdata[i:i+2] != '<!':
+            self.error('unexpected call to parse_comment()')
+        pos = rawdata.find('>', i+2)
+        if pos == -1:
+            return -1
+        if report:
+            self.handle_comment(rawdata[i+2:pos])
+        return pos + 1
+
     # Internal -- parse processing instr, return end or -1 if not terminated
     def parse_pi(self, i):
         rawdata = self.rawdata
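In today's html.parser the non-strict path added here is the only one left: constructs like `<!bogus>` are routed to `parse_bogus_comment` and reported through `handle_comment`, with everything between `<!` and `>` becoming the comment text. A quick check against the current API:

```python
from html.parser import HTMLParser

class CommentCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.comments = []

    def handle_comment(self, data):
        self.comments.append(data)

p = CommentCollector()
# "<!bogus>" is not a valid declaration, so it is treated as a bogus comment.
p.feed("<!bogus>")
print(p.comments)  # ['bogus']
```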
@@ -36,10 +36,7 @@ def _case_ok(directory, check):
             b'PYTHONCASEOK' not in _os.environ):
         if not directory:
             directory = '.'
-        if check in _os.listdir(directory):
-            return True
-        else:
-            return False
+        return check in _os.listdir(directory)
     else:
         return True
 
@@ -165,7 +165,7 @@ class LZMAFile(io.BufferedIOBase):
 
     def seekable(self):
         """Return whether the file supports seeking."""
-        return self.readable()
+        return self.readable() and self._fp.seekable()
 
     def readable(self):
         """Return whether the file was opened for reading."""
|
@ -192,9 +192,12 @@ class LZMAFile(io.BufferedIOBase):
|
||||||
raise io.UnsupportedOperation("File not open for writing")
|
raise io.UnsupportedOperation("File not open for writing")
|
||||||
|
|
||||||
def _check_can_seek(self):
|
def _check_can_seek(self):
|
||||||
if not self.seekable():
|
if not self.readable():
|
||||||
raise io.UnsupportedOperation("Seeking is only supported "
|
raise io.UnsupportedOperation("Seeking is only supported "
|
||||||
"on files open for reading")
|
"on files open for reading")
|
||||||
|
if not self._fp.seekable():
|
||||||
|
raise io.UnsupportedOperation("The underlying file object "
|
||||||
|
"does not support seeking")
|
||||||
|
|
||||||
# Fill the readahead buffer if it is empty. Returns False on EOF.
|
# Fill the readahead buffer if it is empty. Returns False on EOF.
|
||||||
def _fill_buffer(self):
|
def _fill_buffer(self):
|
||||||
|
|
|
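The stricter `_check_can_seek` shown here still governs today's bz2 and lzma file objects: a file opened for writing is not seekable, and trying to seek it raises `io.UnsupportedOperation`. A quick check:

```python
import bz2
import io

raw = io.BytesIO()
f = bz2.BZ2File(raw, 'wb')

# Write-mode compressed files are not readable, hence not seekable.
seekable = f.seekable()
try:
    f.seek(0)
    seek_failed = False
except io.UnsupportedOperation:
    seek_failed = True
f.close()

print(seekable, seek_failed)  # False True
```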
@@ -6,38 +6,28 @@ from packaging.util import resolve_name
 __all__ = ['get_command_names', 'set_command', 'get_command_class',
            'STANDARD_COMMANDS']
 
-_COMMANDS = {
-    'check': 'packaging.command.check.check',
-    'test': 'packaging.command.test.test',
-    'build': 'packaging.command.build.build',
-    'build_py': 'packaging.command.build_py.build_py',
-    'build_ext': 'packaging.command.build_ext.build_ext',
-    'build_clib': 'packaging.command.build_clib.build_clib',
-    'build_scripts': 'packaging.command.build_scripts.build_scripts',
-    'clean': 'packaging.command.clean.clean',
-    'install_dist': 'packaging.command.install_dist.install_dist',
-    'install_lib': 'packaging.command.install_lib.install_lib',
-    'install_headers': 'packaging.command.install_headers.install_headers',
-    'install_scripts': 'packaging.command.install_scripts.install_scripts',
-    'install_data': 'packaging.command.install_data.install_data',
-    'install_distinfo':
-        'packaging.command.install_distinfo.install_distinfo',
-    'sdist': 'packaging.command.sdist.sdist',
-    'bdist': 'packaging.command.bdist.bdist',
-    'bdist_dumb': 'packaging.command.bdist_dumb.bdist_dumb',
-    'bdist_wininst': 'packaging.command.bdist_wininst.bdist_wininst',
-    'register': 'packaging.command.register.register',
-    'upload': 'packaging.command.upload.upload',
-    'upload_docs': 'packaging.command.upload_docs.upload_docs',
-}
-
-# XXX this is crappy
+STANDARD_COMMANDS = [
+    # packaging
+    'check', 'test',
+    # building
+    'build', 'build_py', 'build_ext', 'build_clib', 'build_scripts', 'clean',
+    # installing
+    'install_dist', 'install_lib', 'install_headers', 'install_scripts',
+    'install_data', 'install_distinfo',
+    # distributing
+    'sdist', 'bdist', 'bdist_dumb', 'bdist_wininst',
+    'register', 'upload', 'upload_docs',
+    ]
+
 if os.name == 'nt':
-    _COMMANDS['bdist_msi'] = 'packaging.command.bdist_msi.bdist_msi'
+    STANDARD_COMMANDS.insert(STANDARD_COMMANDS.index('bdist_wininst'),
+                             'bdist_msi')
 
-# XXX use OrderedDict to preserve the grouping (build-related, install-related,
-# distribution-related)
-STANDARD_COMMANDS = set(_COMMANDS)
+# XXX maybe we need more than one registry, so that --list-commands can display
+# standard, custom and overridden standard commands differently
+_COMMANDS = dict((name, 'packaging.command.%s.%s' % (name, name))
+                 for name in STANDARD_COMMANDS)
 
 
 def get_command_names():
@@ -7,9 +7,8 @@ import sys
 import os
 import msilib
 
-
-from sysconfig import get_python_version
 from shutil import rmtree
+from sysconfig import get_python_version
 from packaging.command.cmd import Command
 from packaging.version import NormalizedVersion
 from packaging.errors import PackagingOptionError
@@ -204,7 +203,7 @@ class bdist_msi(Command):
         target_version = self.target_version
         if not target_version:
             assert self.skip_build, "Should have already checked this"
-            target_version = sys.version[0:3]
+            target_version = '%s.%s' % sys.version_info[:2]
         plat_specifier = ".%s-%s" % (self.plat_name, target_version)
         build = self.get_finalized_command('build')
         build.build_lib = os.path.join(build.build_base,
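This hunk and the several that follow all swap `sys.version[0:3]` string slicing for `sys.version_info` formatting. The slicing is fragile because it breaks as soon as a version component has two digits: `'3.10.0'` slices to `'3.1'`. A demonstration with a literal stand-in for `sys.version`:

```python
import sys

# Slicing the version string is fragile: fine for '2.7.2', wrong for '3.10.0'.
fake_version = '3.10.0 (main, ...)'
assert fake_version[0:3] == '3.1'          # silently truncated!

# sys.version_info is a tuple of ints, so formatting is always correct.
pyversion = '%s.%s' % sys.version_info[:2]
assert pyversion == '%d.%d' % (sys.version_info.major, sys.version_info.minor)
```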
@@ -136,7 +136,7 @@ class bdist_wininst(Command):
         target_version = self.target_version
         if not target_version:
             assert self.skip_build, "Should have already checked this"
-            target_version = sys.version[0:3]
+            target_version = '%s.%s' % sys.version_info[:2]
         plat_specifier = ".%s-%s" % (self.plat_name, target_version)
         build = self.get_finalized_command('build')
         build.build_lib = os.path.join(build.build_base,
@@ -82,8 +82,8 @@ class build(Command):
                 raise PackagingOptionError(
                     "--plat-name only supported on Windows (try "
                     "using './configure --help' on your platform)")
-
-        plat_specifier = ".%s-%s" % (self.plat_name, sys.version[0:3])
+        pyversion = '%s.%s' % sys.version_info[:2]
+        plat_specifier = ".%s-%s" % (self.plat_name, pyversion)
 
         # Make it so Python 2.x and Python 2.x with --with-pydebug don't
         # share the same build directories. Doing so confuses the build
@@ -116,7 +116,7 @@ class build(Command):
                                                'temp' + plat_specifier)
         if self.build_scripts is None:
             self.build_scripts = os.path.join(self.build_base,
-                                              'scripts-' + sys.version[0:3])
+                                              'scripts-' + pyversion)
 
         if self.executable is None:
             self.executable = os.path.normpath(sys.executable)
@@ -242,7 +242,7 @@ class install_dist(Command):
         #   $platbase in the other installation directories and not worry
         #   about needing recursive variable expansion (shudder).
 
-        py_version = sys.version.split()[0]
+        py_version = '%s.%s' % sys.version_info[:2]
         prefix, exec_prefix, srcdir, projectbase = get_config_vars(
             'prefix', 'exec_prefix', 'srcdir', 'projectbase')
 
@@ -1,4 +1,4 @@
-"""Compatibility helpers."""
+"""Support for build-time 2to3 conversion."""
 
 from packaging import logger
 
@@ -25,7 +25,7 @@ class Mixin2to3(_KLASS):
     """
     if _CONVERT:
 
-        def _run_2to3(self, files, doctests=[], fixers=[]):
+        def _run_2to3(self, files=[], doctests=[], fixers=[]):
             """ Takes a list of files and doctests, and performs conversion
             on those.
             - First, the files which contain the code(`files`) are converted.
@@ -35,17 +35,16 @@ class Mixin2to3(_KLASS):
             if fixers:
                 self.fixer_names = fixers
 
-            logger.info('converting Python code')
-            _KLASS.run_2to3(self, files)
+            if files:
+                logger.info('converting Python code and doctests')
+                _KLASS.run_2to3(self, files)
+                _KLASS.run_2to3(self, files, doctests_only=True)
 
-            logger.info('converting doctests in Python files')
-            _KLASS.run_2to3(self, files, doctests_only=True)
-
-            if doctests != []:
-                logger.info('converting doctest in text files')
+            if doctests:
+                logger.info('converting doctests in text files')
                 _KLASS.run_2to3(self, doctests, doctests_only=True)
     else:
         # If run on Python 2.x, there is nothing to do.
 
-        def _run_2to3(self, files, doctests=[], fixers=[]):
+        def _run_2to3(self, files=[], doctests=[], fixers=[]):
             pass
@@ -56,6 +56,10 @@ from packaging.errors import PackagingExecError, CompileError, UnknownFileError
 from packaging.util import get_compiler_versions
 import sysconfig
 
+# TODO use platform instead of sys.version
+# (platform does unholy sys.version parsing too, but at least it gives other
+# VMs a chance to override the returned values)
+
 
 def get_msvcr():
     """Include the appropriate MSVC runtime library if Python was built
@@ -366,10 +366,8 @@ def _translate_pattern(pattern, anchor=True, prefix=None, is_regex=False):
         # ditch end of pattern character
         empty_pattern = _glob_to_re('')
         prefix_re = _glob_to_re(prefix)[:-len(empty_pattern)]
-        # match both path separators, as in Postel's principle
-        sep_pat = "[" + re.escape(os.path.sep + os.path.altsep
-                                  if os.path.altsep else os.path.sep) + "]"
-        pattern_re = "^" + sep_pat.join([prefix_re, ".*" + pattern_re])
+        # paths should always use / in manifest templates
+        pattern_re = "^%s/.*%s" % (prefix_re, pattern_re)
     else:                           # no prefix -- respect anchor flag
         if anchor:
             pattern_re = "^" + pattern_re
@@ -1,11 +1,10 @@
 """Parser for the environment markers micro-language defined in PEP 345."""
 
+import os
 import sys
 import platform
-import os
-
-from tokenize import tokenize, NAME, OP, STRING, ENDMARKER, ENCODING
 from io import BytesIO
+from tokenize import tokenize, NAME, OP, STRING, ENDMARKER, ENCODING
 
 __all__ = ['interpret']
 
@@ -27,12 +26,15 @@ def _operate(operation, x, y):
 
 # restricted set of variables
 _VARS = {'sys.platform': sys.platform,
-         'python_version': sys.version[:3],
+         'python_version': '%s.%s' % sys.version_info[:2],
+         # FIXME parsing sys.platform is not reliable, but there is no other
+         # way to get e.g. 2.7.2+, and the PEP is defined with sys.version
          'python_full_version': sys.version.split(' ', 1)[0],
         'os.name': os.name,
         'platform.version': platform.version(),
        'platform.machine': platform.machine(),
-         'platform.python_implementation': platform.python_implementation()}
+         'platform.python_implementation': platform.python_implementation(),
+         }
 
 
 class _Operation:
@@ -35,8 +35,8 @@ __all__ = ['Crawler', 'DEFAULT_SIMPLE_INDEX_URL']
 DEFAULT_SIMPLE_INDEX_URL = "http://a.pypi.python.org/simple/"
 DEFAULT_HOSTS = ("*",)
 SOCKET_TIMEOUT = 15
-USER_AGENT = "Python-urllib/%s packaging/%s" % (
-    sys.version[:3], packaging_version)
+USER_AGENT = "Python-urllib/%s.%s packaging/%s" % (
+    sys.version_info[0], sys.version_info[1], packaging_version)
 
 # -- Regexps -------------------------------------------------
 EGG_FRAGMENT = re.compile(r'^egg=([-A-Za-z0-9_.]+)$')
@@ -254,16 +254,13 @@ def _run(dispatcher, args, **kw):
     parser = dispatcher.parser
     args = args[1:]
 
-    commands = STANDARD_COMMANDS  # + extra commands
+    commands = STANDARD_COMMANDS  # FIXME display extra commands
 
     if args == ['--list-commands']:
         print('List of available commands:')
-        cmds = sorted(commands)
-
-        for cmd in cmds:
+        for cmd in commands:
             cls = dispatcher.cmdclass.get(cmd) or get_command_class(cmd)
-            desc = getattr(cls, 'description',
-                           '(no description available)')
+            desc = getattr(cls, 'description', '(no description available)')
             print('  %s: %s' % (cmd, desc))
         return
 
@@ -0,0 +1,16 @@
+# Example custom fixer, derived from fix_raw_input by Andre Roberge
+
+from lib2to3 import fixer_base
+from lib2to3.fixer_util import Name
+
+
+class FixEcho(fixer_base.BaseFix):
+
+    BM_compatible = True
+    PATTERN = """
+              power< name='echo' trailer< '(' [any] ')' > any* >
+              """
+
+    def transform(self, node, results):
+        name = results['name']
+        name.replace(Name('print', prefix=name.prefix))
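The fixer added above matches `echo(...)` calls in the parse tree and renames them to `print(...)`. A much cruder, hypothetical stand-in for the same rename can be sketched with a regex (the real fixer operates on the lib2to3 grammar tree, which is what makes it safe against strings and comments; this sketch is not):

```python
import re

def fix_echo(source):
    # Rename calls to echo(...) into print(...); the \b and the
    # lookahead keep names like echo2(...) untouched.
    return re.sub(r'\becho(?=\()', 'print', source)

print(fix_echo("echo('42')"))    # → print('42')
print(fix_echo("echo2('oh no')"))  # → echo2('oh no') (unchanged)
```

The tree-based fixer is the right tool in practice; the regex only illustrates what transformation the PATTERN above selects.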
@@ -0,0 +1,16 @@
+# Example custom fixer, derived from fix_raw_input by Andre Roberge
+
+from lib2to3 import fixer_base
+from lib2to3.fixer_util import Name
+
+
+class FixEcho2(fixer_base.BaseFix):
+
+    BM_compatible = True
+    PATTERN = """
+              power< name='echo2' trailer< '(' [any] ')' > any* >
+              """
+
+    def transform(self, node, results):
+        name = results['name']
+        name.replace(Name('print', prefix=name.prefix))
@@ -1,134 +0,0 @@
-"""Adjust some old Python 2 idioms to their modern counterparts.
-
-* Change some type comparisons to isinstance() calls:
-    type(x) == T -> isinstance(x, T)
-    type(x) is T -> isinstance(x, T)
-    type(x) != T -> not isinstance(x, T)
-    type(x) is not T -> not isinstance(x, T)
-
-* Change "while 1:" into "while True:".
-
-* Change both
-
-    v = list(EXPR)
-    v.sort()
-    foo(v)
-
-and the more general
-
-    v = EXPR
-    v.sort()
-    foo(v)
-
-into
-
-    v = sorted(EXPR)
-    foo(v)
-"""
-# Author: Jacques Frechet, Collin Winter
-
-# Local imports
-from lib2to3 import fixer_base
-from lib2to3.fixer_util import Call, Comma, Name, Node, syms
-
-CMP = "(n='!=' | '==' | 'is' | n=comp_op< 'is' 'not' >)"
-TYPE = "power< 'type' trailer< '(' x=any ')' > >"
-
-
-class FixIdioms(fixer_base.BaseFix):
-
-    explicit = False  # The user must ask for this fixer
-
-    PATTERN = r"""
-        isinstance=comparison< %s %s T=any >
-        |
-        isinstance=comparison< T=any %s %s >
-        |
-        while_stmt< 'while' while='1' ':' any+ >
-        |
-        sorted=any<
-            any*
-            simple_stmt<
-              expr_stmt< id1=any '='
-                         power< list='list' trailer< '(' (not arglist<any+>) any ')' > >
-              >
-              '\n'
-            >
-            sort=
-            simple_stmt<
-              power< id2=any
-                     trailer< '.' 'sort' > trailer< '(' ')' >
-              >
-              '\n'
-            >
-            next=any*
-        >
-        |
-        sorted=any<
-            any*
-            simple_stmt< expr_stmt< id1=any '=' expr=any > '\n' >
-            sort=
-            simple_stmt<
-              power< id2=any
-                     trailer< '.' 'sort' > trailer< '(' ')' >
-              >
-              '\n'
-            >
-            next=any*
-        >
-    """ % (TYPE, CMP, CMP, TYPE)
-
-    def match(self, node):
-        r = super(FixIdioms, self).match(node)
-        # If we've matched one of the sort/sorted subpatterns above, we
-        # want to reject matches where the initial assignment and the
-        # subsequent .sort() call involve different identifiers.
-        if r and "sorted" in r:
-            if r["id1"] == r["id2"]:
-                return r
-            return None
-        return r
-
-    def transform(self, node, results):
-        if "isinstance" in results:
-            return self.transform_isinstance(node, results)
-        elif "while" in results:
-            return self.transform_while(node, results)
-        elif "sorted" in results:
-            return self.transform_sort(node, results)
-        else:
-            raise RuntimeError("Invalid match")
-
-    def transform_isinstance(self, node, results):
-        x = results["x"].clone()  # The thing inside of type()
-        T = results["T"].clone()  # The type being compared against
-        x.prefix = ""
-        T.prefix = " "
-        test = Call(Name("isinstance"), [x, Comma(), T])
-        if "n" in results:
-            test.prefix = " "
-            test = Node(syms.not_test, [Name("not"), test])
-        test.prefix = node.prefix
-        return test
-
-    def transform_while(self, node, results):
-        one = results["while"]
-        one.replace(Name("True", prefix=one.prefix))
-
-    def transform_sort(self, node, results):
-        sort_stmt = results["sort"]
-        next_stmt = results["next"]
-        list_call = results.get("list")
-        simple_expr = results.get("expr")
-
-        if list_call:
-            list_call.replace(Name("sorted", prefix=list_call.prefix))
-        elif simple_expr:
-            new = simple_expr.clone()
-            new.prefix = ""
-            simple_expr.replace(Call(Name("sorted"), [new],
-                                     prefix=simple_expr.prefix))
-        else:
-            raise RuntimeError("should not have reached here")
-        sort_stmt.remove()
-        if next_stmt:
-            next_stmt[0].prefix = sort_stmt._prefix
@@ -56,8 +56,9 @@ __all__ = [
     # misc. functions and decorators
     'fake_dec', 'create_distribution', 'use_command',
     'copy_xxmodule_c', 'fixup_build_ext',
+    'skip_2to3_optimize',
     # imported from this module for backport purposes
-    'unittest', 'requires_zlib', 'skip_2to3_optimize', 'skip_unless_symlink',
+    'unittest', 'requires_zlib', 'skip_unless_symlink',
 ]
@@ -332,22 +333,18 @@ def copy_xxmodule_c(directory):
     """
     filename = _get_xxmodule_path()
     if filename is None:
-        raise unittest.SkipTest('cannot find xxmodule.c (test must run in '
-                                'the python build dir)')
+        raise unittest.SkipTest('cannot find xxmodule.c')
     shutil.copy(filename, directory)


 def _get_xxmodule_path():
-    srcdir = sysconfig.get_config_var('srcdir')
-    candidates = [
-        # use installed copy if available
-        os.path.join(os.path.dirname(__file__), 'xxmodule.c'),
-        # otherwise try using copy from build directory
-        os.path.join(srcdir, 'Modules', 'xxmodule.c'),
-    ]
-    for path in candidates:
-        if os.path.exists(path):
-            return path
+    if sysconfig.is_python_build():
+        srcdir = sysconfig.get_config_var('projectbase')
+        path = os.path.join(os.getcwd(), srcdir, 'Modules', 'xxmodule.c')
+    else:
+        path = os.path.join(os.path.dirname(__file__), 'xxmodule.c')
+    if os.path.exists(path):
+        return path


 def fixup_build_ext(cmd):
@@ -355,20 +352,21 @@ def fixup_build_ext(cmd):

     When Python was built with --enable-shared on Unix, -L. is not enough to
     find libpython<blah>.so, because regrtest runs in a tempdir, not in the
-    source directory where the .so lives.
+    source directory where the .so lives.  (Mac OS X embeds absolute paths
+    to shared libraries into executables, so the fixup is a no-op on that
+    platform.)

     When Python was built in debug mode on Windows, build_ext commands
     need their debug attribute set, and it is not done automatically for
     some reason.

-    This function handles both of these things.  Example use:
+    This function handles both of these things, and also fixes
+    cmd.distribution.include_dirs if the running Python is an uninstalled
+    build.  Example use:

         cmd = build_ext(dist)
         support.fixup_build_ext(cmd)
         cmd.ensure_finalized()
-
-    Unlike most other Unix platforms, Mac OS X embeds absolute paths
-    to shared libraries into executables, so the fixup is not needed there.
     """
     if os.name == 'nt':
         cmd.debug = sys.executable.endswith('_d.exe')
@@ -386,12 +384,17 @@ def fixup_build_ext(cmd):
             name, equals, value = runshared.partition('=')
             cmd.library_dirs = value.split(os.pathsep)

+    # Allow tests to run with an uninstalled Python
+    if sysconfig.is_python_build():
+        pysrcdir = sysconfig.get_config_var('projectbase')
+        cmd.distribution.include_dirs.append(os.path.join(pysrcdir, 'Include'))
+

 try:
     from test.support import skip_unless_symlink
 except ImportError:
     skip_unless_symlink = unittest.skip(
         'requires test.support.skip_unless_symlink')


 skip_2to3_optimize = unittest.skipIf(sys.flags.optimize,
                                      "2to3 doesn't work under -O")
@@ -26,7 +26,8 @@ class BuildTestCase(support.TempdirManager,
         # build_platlib is 'build/lib.platform-x.x[-pydebug]'
         # examples:
         #   build/lib.macosx-10.3-i386-2.7
-        plat_spec = '.%s-%s' % (cmd.plat_name, sys.version[0:3])
+        pyversion = '%s.%s' % sys.version_info[:2]
+        plat_spec = '.%s-%s' % (cmd.plat_name, pyversion)
         if hasattr(sys, 'gettotalrefcount'):
             self.assertTrue(cmd.build_platlib.endswith('-pydebug'))
             plat_spec += '-pydebug'
@@ -41,7 +42,7 @@ class BuildTestCase(support.TempdirManager,
         self.assertEqual(cmd.build_temp, wanted)

         # build_scripts is build/scripts-x.x
-        wanted = os.path.join(cmd.build_base, 'scripts-' + sys.version[0:3])
+        wanted = os.path.join(cmd.build_base, 'scripts-' + pyversion)
         self.assertEqual(cmd.build_scripts, wanted)

         # executable is os.path.normpath(sys.executable)
@@ -20,8 +20,6 @@ class MarkersTestCase(LoggingCatcher,
         platform_python_implementation = platform.python_implementation()

         self.assertTrue(interpret("sys.platform == '%s'" % sys_platform))
-        self.assertTrue(interpret(
-            "sys.platform == '%s' or python_version == '2.4'" % sys_platform))
         self.assertTrue(interpret(
             "sys.platform == '%s' and python_full_version == '%s'" %
             (sys_platform, version)))
@@ -41,12 +39,18 @@ class MarkersTestCase(LoggingCatcher,

         # combined operations
         OP = 'os.name == "%s"' % os_name
+        FALSEOP = 'os.name == "buuuu"'
         AND = ' and '
         OR = ' or '
         self.assertTrue(interpret(OP + AND + OP))
         self.assertTrue(interpret(OP + AND + OP + AND + OP))
         self.assertTrue(interpret(OP + OR + OP))
-        self.assertTrue(interpret(OP + OR + OP + OR + OP))
+        self.assertTrue(interpret(OP + OR + FALSEOP))
+        self.assertTrue(interpret(OP + OR + OP + OR + FALSEOP))
+        self.assertTrue(interpret(OP + OR + FALSEOP + OR + FALSEOP))
+        self.assertTrue(interpret(FALSEOP + OR + OP))
+        self.assertFalse(interpret(FALSEOP + AND + FALSEOP))
+        self.assertFalse(interpret(FALSEOP + OR + FALSEOP))

         # other operators
         self.assertTrue(interpret("os.name != 'buuuu'"))
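The tests above exercise `interpret()` on boolean combinations of environment-marker comparisons. A hypothetical, minimal stand-in for that evaluator can be sketched with the `ast` module; it handles only `==`/`!=` on a couple of dotted names joined by `and`/`or` (the real packaging code supports many more names and operators):

```python
import ast
import os
import sys

# Hypothetical miniature of packaging.markers' environment.
_env = {'os.name': os.name, 'sys.platform': sys.platform}

def _dotted_name(node):
    # Rebuild a dotted name like "os.name" from the AST.
    if isinstance(node, ast.Attribute):
        return _dotted_name(node.value) + '.' + node.attr
    if isinstance(node, ast.Name):
        return node.id
    raise ValueError('unsupported expression')

def _eval(node):
    if isinstance(node, ast.BoolOp):
        results = [_eval(value) for value in node.values]
        return all(results) if isinstance(node.op, ast.And) else any(results)
    if isinstance(node, ast.Compare):
        left = _env[_dotted_name(node.left)]
        right = node.comparators[0].value  # the string literal
        if isinstance(node.ops[0], ast.Eq):
            return left == right
        if isinstance(node.ops[0], ast.NotEq):
            return left != right
    raise ValueError('unsupported marker')

def interpret(marker):
    return _eval(ast.parse(marker, mode='eval').body)

print(interpret('os.name == "%s"' % os.name))                         # True
print(interpret('os.name == "buuuu" or os.name == "%s"' % os.name))   # True
print(interpret('os.name == "buuuu" and os.name == "buuuu"'))         # False
```

Parsing to an AST instead of calling `eval()` keeps the marker language to a safe whitelist, which is the same design consideration the real evaluator has to make.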
@@ -8,70 +8,76 @@ class Mixin2to3TestCase(support.TempdirManager,
                         support.LoggingCatcher,
                         unittest.TestCase):

-    @support.skip_2to3_optimize
-    def test_convert_code_only(self):
-        # used to check if code gets converted properly.
-        code = "print 'test'"
-
-        with self.mktempfile() as fp:
-            fp.write(code)
-
-        mixin2to3 = Mixin2to3()
-        mixin2to3._run_2to3([fp.name])
-        expected = "print('test')"
-
-        with open(fp.name) as fp:
-            converted = fp.read()
-
-        self.assertEqual(expected, converted)
-
-    def test_doctests_only(self):
-        # used to check if doctests gets converted properly.
-        doctest = textwrap.dedent('''\
-            """Example docstring.
-
-            >>> print test
-            test
-
-            It works.
-            """''')
-
-        with self.mktempfile() as fp:
-            fp.write(doctest)
-
-        mixin2to3 = Mixin2to3()
-        mixin2to3._run_2to3([fp.name])
-        expected = textwrap.dedent('''\
-            """Example docstring.
-
-            >>> print(test)
-            test
-
-            It works.
-            """\n''')
-
-        with open(fp.name) as fp:
-            converted = fp.read()
-
-        self.assertEqual(expected, converted)
-
-    def test_additional_fixers(self):
-        # used to check if use_2to3_fixers works
-        code = 'type(x) is not T'
-
-        with self.mktempfile() as fp:
-            fp.write(code)
-
-        mixin2to3 = Mixin2to3()
-        mixin2to3._run_2to3(files=[fp.name], doctests=[fp.name],
-                            fixers=['packaging.tests.fixer'])
-
-        expected = 'not isinstance(x, T)'
-
-        with open(fp.name) as fp:
-            converted = fp.read()
-
-        self.assertEqual(expected, converted)
+    def setUp(self):
+        super(Mixin2to3TestCase, self).setUp()
+        self.filename = self.mktempfile().name
+
+    def check(self, source, wanted, **kwargs):
+        source = textwrap.dedent(source)
+        with open(self.filename, 'w') as fp:
+            fp.write(source)
+
+        Mixin2to3()._run_2to3(**kwargs)
+
+        wanted = textwrap.dedent(wanted)
+        with open(self.filename) as fp:
+            converted = fp.read()
+        self.assertMultiLineEqual(converted, wanted)
+
+    def test_conversion(self):
+        # check that code and doctests get converted
+        self.check('''\
+            """Example docstring.
+
+            >>> print test
+            test
+
+            It works.
+            """
+            print 'test'
+            ''',
+            '''\
+            """Example docstring.
+
+            >>> print(test)
+            test
+
+            It works.
+            """
+            print('test')
+
+            ''',  # 2to3 adds a newline here
+            files=[self.filename])
+
+    def test_doctests_conversion(self):
+        # check that doctest files are converted
+        self.check('''\
+            Welcome to the doc.
+
+            >>> print test
+            test
+            ''',
+            '''\
+            Welcome to the doc.
+
+            >>> print(test)
+            test
+
+            ''',
+            doctests=[self.filename])
+
+    def test_additional_fixers(self):
+        # make sure the fixers argument works
+        self.check("""\
+            echo('42')
+            echo2('oh no')
+            """,
+            """\
+            print('42')
+            print('oh no')
+            """,
+            files=[self.filename],
+            fixers=['packaging.tests.fixer'])


 def test_suite():
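The rewrite above consolidates each test's write-transform-read-compare cycle into a single `check()` helper. The pattern is worth a standalone sketch; here a trivial string replacement stands in for the 2to3 conversion (`transform` is a hypothetical placeholder, not the real `Mixin2to3` machinery):

```python
import os
import tempfile
import textwrap

def transform(path):
    # Stand-in for Mixin2to3()._run_2to3(): rewrite the file in place.
    with open(path) as fp:
        source = fp.read()
    with open(path, 'w') as fp:
        fp.write(source.replace("print 'test'", "print('test')"))

def check(source, wanted):
    # Write the (dedented) source, run the transform, compare the result.
    source = textwrap.dedent(source)
    wanted = textwrap.dedent(wanted)
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        with open(path, 'w') as fp:
            fp.write(source)
        transform(path)
        with open(path) as fp:
            converted = fp.read()
        assert converted == wanted, (converted, wanted)
    finally:
        os.remove(path)

check("print 'test'\n", "print('test')\n")
```

Keeping the fixture plumbing in one helper means each test states only its input and expected output, which is what makes the rewritten test class so much shorter than the original.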
@@ -67,6 +67,23 @@ class RunTestCase(support.TempdirManager,
         self.assertGreater(out, b'')
         self.assertEqual(err, b'')

+    def test_list_commands(self):
+        status, out, err = assert_python_ok('-m', 'packaging.run', 'run',
+                                            '--list-commands')
+        # check that something is displayed
+        self.assertEqual(status, 0)
+        self.assertGreater(out, b'')
+        self.assertEqual(err, b'')
+
+        # make sure the manual grouping of commands is respected
+        check_position = out.find(b' check: ')
+        build_position = out.find(b' build: ')
+        self.assertTrue(check_position, out)  # "out" printed as debugging aid
+        self.assertTrue(build_position, out)
+        self.assertLess(check_position, build_position, out)
+
+        # TODO test that custom commands don't break --list-commands
+

 def test_suite():
     return unittest.makeSuite(RunTestCase)
@@ -853,13 +853,11 @@ def run_2to3(files, doctests_only=False, fixer_names=None,

     # Make this class local, to delay import of 2to3
     from lib2to3.refactor import get_fixers_from_package, RefactoringTool
-    fixers = []
     fixers = get_fixers_from_package('lib2to3.fixes')

     if fixer_names:
         for fixername in fixer_names:
-            fixers.extend(fixer for fixer in
-                          get_fixers_from_package(fixername))
+            fixers.extend(get_fixers_from_package(fixername))
     r = RefactoringTool(fixers, options=options)
     r.refactor(files, write=True, doctests_only=doctests_only)

@@ -870,21 +868,23 @@ class Mixin2to3:
     the class variables, or inherit from this class
     to override how 2to3 is invoked.
     """
-    # provide list of fixers to run.
-    # defaults to all from lib2to3.fixers
+    # list of fixers to run; defaults to all implicit from lib2to3.fixers
     fixer_names = None
-    # options dictionary
+    # dict of options
     options = None
-    # list of fixers to invoke even though they are marked as explicit
+    # list of extra fixers to invoke
     explicit = None
+    # TODO need a better way to add just one fixer from a package
+    # TODO need a way to exclude individual fixers

     def run_2to3(self, files, doctests_only=False):
         """ Issues a call to util.run_2to3. """
         return run_2to3(files, doctests_only, self.fixer_names,
                         self.options, self.explicit)

+    # TODO provide initialize/finalize_options
+

 RICH_GLOB = re.compile(r'\{([^}]*)\}')
 _CHECK_RECURSIVE_GLOB = re.compile(r'[^/\\,{]\*\*|\*\*[^/\\,}]')
 _CHECK_MISMATCH_SET = re.compile(r'^[^{]*\}|\{[^}]*$')
@@ -1049,7 +1049,6 @@ def cfg_to_args(path='setup.cfg'):

 SETUP_TEMPLATE = """\
 # This script was automatically generated by packaging
-import os
 import codecs
 from distutils.core import setup
 try:
@@ -1057,6 +1056,7 @@ try:
 except ImportError:
     from configparser import RawConfigParser


+
 %(split_multiline)s

 %(cfg_to_args)s
@@ -372,6 +372,15 @@ class BZ2FileTest(BaseTest):
         bz2f.close()
         self.assertRaises(ValueError, bz2f.seekable)

+        src = BytesIO(self.DATA)
+        src.seekable = lambda: False
+        bz2f = BZ2File(fileobj=src)
+        try:
+            self.assertFalse(bz2f.seekable())
+        finally:
+            bz2f.close()
+        self.assertRaises(ValueError, bz2f.seekable)
+
     def testReadable(self):
         bz2f = BZ2File(fileobj=BytesIO(self.DATA))
         try:
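The new test asserts that `BZ2File.seekable()` reflects the seekability of the wrapped stream. In the current `bz2` API the development-era `fileobj` keyword shown above corresponds to the first positional argument, which may be a file object; a sketch of the same check (overriding `seekable` on a `BytesIO` instance, as the test does, is assumed to shadow the method):

```python
import bz2
import io

data = bz2.compress(b"hello world")

# A seekable underlying stream: BZ2File reports seekable.
f = bz2.BZ2File(io.BytesIO(data))
print(f.seekable())   # True
f.close()

# Force a non-seekable underlying stream, as the test does.
src = io.BytesIO(data)
src.seekable = lambda: False
f = bz2.BZ2File(src)
print(f.seekable())   # False
f.close()
```

Delegating `seekable()` to the wrapped object is what lets `BZ2File` work over pipes and sockets while still supporting `seek()` on regular files.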
@@ -323,6 +323,23 @@ DOCTYPE html [
                             ("endtag", element_lower)],
                 collector=Collector())

+    def test_comments(self):
+        html = ("<!-- I'm a valid comment -->"
+                '<!--me too!-->'
+                '<!------>'
+                '<!---->'
+                '<!----I have many hyphens---->'
+                '<!-- I have a > in the middle -->'
+                '<!-- and I have -- in the middle! -->')
+        expected = [('comment', " I'm a valid comment "),
+                    ('comment', 'me too!'),
+                    ('comment', '--'),
+                    ('comment', ''),
+                    ('comment', '--I have many hyphens--'),
+                    ('comment', ' I have a > in the middle '),
+                    ('comment', ' and I have -- in the middle! ')]
+        self._run_check(html, expected)
+
     def test_condcoms(self):
         html = ('<!--[if IE & !(lte IE 8)]>aren\'t<![endif]-->'
                 '<!--[if IE 8]>condcoms<![endif]-->'
@@ -426,6 +443,19 @@ class HTMLParserTolerantTestCase(HTMLParserStrictTestCase):
         # see #12888
         self.assertEqual(p.unescape('&#123; ' * 1050), '{ ' * 1050)

+    def test_broken_comments(self):
+        html = ('<! not really a comment >'
+                '<! not a comment either -->'
+                '<! -- close enough -->'
+                '<!!! another bogus comment !!!>')
+        expected = [
+            ('comment', ' not really a comment '),
+            ('comment', ' not a comment either --'),
+            ('comment', ' -- close enough --'),
+            ('comment', '!! another bogus comment !!!'),
+        ]
+        self._run_check(html, expected)
+
     def test_broken_condcoms(self):
         # these condcoms are missing the '--' after '<!' and before the '>'
         html = ('<![if !(IE)]>broken condcom<![endif]>'
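The comment tests above check the `('comment', data)` events the parser emits, with the `<!--`/`-->` delimiters stripped. The same behavior is easy to observe with today's `html.parser` (modern versions no longer have the strict mode these tests distinguish):

```python
from html.parser import HTMLParser

class CommentCollector(HTMLParser):
    """Collect the data of every comment the parser sees."""

    def __init__(self):
        super().__init__()
        self.comments = []

    def handle_comment(self, data):
        # data is the comment body without the <!-- --> delimiters
        self.comments.append(data)

p = CommentCollector()
p.feed("<!-- I'm a valid comment --><!--me too!-->")
print(p.comments)   # [" I'm a valid comment ", 'me too!']
```

Subclassing and overriding the `handle_*` callbacks is the intended way to consume `HTMLParser` output; the test suite's `Collector` helper works the same way.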
@@ -525,6 +525,15 @@ class FileTestCase(unittest.TestCase):
         f.close()
         self.assertRaises(ValueError, f.seekable)

+        src = BytesIO(COMPRESSED_XZ)
+        src.seekable = lambda: False
+        f = LZMAFile(fileobj=src)
+        try:
+            self.assertFalse(f.seekable())
+        finally:
+            f.close()
+        self.assertRaises(ValueError, f.seekable)
+
     def test_readable(self):
         f = LZMAFile(fileobj=BytesIO(COMPRESSED_XZ))
         try:
@@ -1,3 +1,153 @@
 # Wrapper module for _elementtree

+from xml.etree.ElementTree import (ElementTree, dump, iselement, QName,
+                                   fromstringlist,
+                                   tostring, tostringlist, VERSION)
+# These ones are not in ElementTree.__all__
+from xml.etree.ElementTree import ElementPath, register_namespace
+
+# Import the C accelerators:
+#   Element, SubElement, TreeBuilder, XMLParser, ParseError
 from _elementtree import *
+
+
+class ElementTree(ElementTree):
+
+    def parse(self, source, parser=None):
+        close_source = False
+        if not hasattr(source, 'read'):
+            source = open(source, 'rb')
+            close_source = True
+        try:
+            if parser is not None:
+                while True:
+                    data = source.read(65536)
+                    if not data:
+                        break
+                    parser.feed(data)
+                self._root = parser.close()
+            else:
+                parser = XMLParser()
+                self._root = parser._parse(source)
+            return self._root
+        finally:
+            if close_source:
+                source.close()
+
+
+class iterparse:
+    root = None
+
+    def __init__(self, file, events=None):
+        self._close_file = False
+        if not hasattr(file, 'read'):
+            file = open(file, 'rb')
+            self._close_file = True
+        self._file = file
+        self._events = []
+        self._index = 0
+        self._error = None
+        self.root = self._root = None
+        b = TreeBuilder()
+        self._parser = XMLParser(b)
+        self._parser._setevents(self._events, events)
+
+    def __next__(self):
+        while True:
+            try:
+                item = self._events[self._index]
+                self._index += 1
+                return item
+            except IndexError:
+                pass
+            if self._error:
+                e = self._error
+                self._error = None
+                raise e
+            if self._parser is None:
+                self.root = self._root
+                if self._close_file:
+                    self._file.close()
+                raise StopIteration
+            # load event buffer
+            del self._events[:]
+            self._index = 0
+            data = self._file.read(16384)
+            if data:
+                try:
+                    self._parser.feed(data)
+                except SyntaxError as exc:
+                    self._error = exc
+            else:
+                self._root = self._parser.close()
+                self._parser = None
+
+    def __iter__(self):
+        return self
+
+
+# =============================================================================
+#
+# Everything below this line can be removed
+# after cElementTree is folded behind ElementTree.
+#
+# =============================================================================
+
+from xml.etree.ElementTree import Comment as _Comment, PI as _PI
+
+
+def parse(source, parser=None):
+    tree = ElementTree()
+    tree.parse(source, parser)
+    return tree
+
+
+def XML(text, parser=None):
+    if not parser:
+        parser = XMLParser()
+    parser.feed(text)
+    return parser.close()
+
+
+def XMLID(text, parser=None):
+    tree = XML(text, parser=parser)
+    ids = {}
+    for elem in tree.iter():
+        id = elem.get('id')
+        if id:
+            ids[id] = elem
+    return tree, ids
+
+
+class CommentProxy:
+
+    def __call__(self, text=None):
+        element = Element(_Comment)
+        element.text = text
+        return element
+
+    def __eq__(self, other):
+        return _Comment == other
+
+
+class PIProxy:
+
+    def __call__(self, target, text=None):
+        element = Element(_PI)
+        element.text = target
+        if text:
+            element.text = element.text + ' ' + text
+        return element
+
+    def __eq__(self, other):
+        return _PI == other
+
+
+Comment = CommentProxy()
+PI = ProcessingInstruction = PIProxy()
+del CommentProxy, PIProxy
+
+# Aliases
+fromstring = XML
+XMLTreeBuilder = XMLParser
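The wrapper above rebuilds `iterparse` on top of the C accelerator: it feeds the file to the parser in chunks and yields `(event, element)` pairs as they become available. In today's standard library the accelerator is used automatically, so the equivalent incremental parse looks like this:

```python
import io
import xml.etree.ElementTree as ET

xml_data = b"<root><item>a</item><item>b</item></root>"

# iterparse reads the source incrementally and reports events as the
# tree is built, so large documents never need to be fully in memory.
events = [(event, elem.tag)
          for event, elem in ET.iterparse(io.BytesIO(xml_data),
                                          events=("start", "end"))]
print(events)
```

With the default `events=("end",)`, elements are only reported once complete, which is the common pattern for streaming processing (handle an element on its "end" event, then `elem.clear()` to free it).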
26
Misc/NEWS
26
Misc/NEWS
|
@ -466,6 +466,25 @@ Core and Builtins
|
||||||
Library
|
Library
|
||||||
-------
|
-------
|
||||||
|
|
||||||
|
- Issue #13989: Document that GzipFile does not support text mode, and give a
|
||||||
|
more helpful error message when opened with an invalid mode string.
|
||||||
|
|
||||||
|
- Issue #13590: On OS X 10.7 and 10.6 with Xcode 4.2, building
|
||||||
|
Distutils-based packages with C extension modules may fail because
|
||||||
|
Apple has removed gcc-4.2, the version used to build python.org
|
||||||
|
64-bit/32-bit Pythons. If the user does not explicitly override
|
||||||
|
the default C compiler by setting the CC environment variable,
|
||||||
|
Distutils will now attempt to compile extension modules with clang
|
||||||
|
if gcc-4.2 is required but not found. Also as a convenience, if
|
||||||
|
the user does explicitly set CC, substitute its value as the default
|
||||||
|
compiler in the Distutils LDSHARED configuration variable for OS X.
|
||||||
|
(Note, the python.org 32-bit-only Pythons use gcc-4.0 and the 10.4u
|
||||||
|
SDK, neither of which are available in Xcode 4. This change does not
|
||||||
|
attempt to override settings to support their use with Xcode 4.)
|
||||||
|
|
||||||
|
- Issue #13960: HTMLParser is now able to handle broken comments when
|
||||||
|
strict=False.
|
||||||
|
|
||||||
- Issue #13921: Undocument and clean up sqlite3.OptimizedUnicode,
|
- Issue #13921: Undocument and clean up sqlite3.OptimizedUnicode,
|
||||||
which is obsolete in Python 3.x. It's now aliased to str for
|
which is obsolete in Python 3.x. It's now aliased to str for
|
||||||
backwards compatibility.
|
backwards compatibility.
|
||||||
|
@ -498,7 +517,7 @@ Library
|
||||||
|
|
||||||
- Issue #10881: Fix test_site failure with OS X framework builds.
|
- Issue #10881: Fix test_site failure with OS X framework builds.
|
||||||
|
|
||||||
- Issue #964437 Make IDLE help window non-modal.
|
- Issue #964437: Make IDLE help window non-modal.
|
||||||
Patch by Guilherme Polo and Roger Serwy.
|
Patch by Guilherme Polo and Roger Serwy.
|
||||||
|
|
||||||
- Issue #13734: Add os.fwalk(), a directory walking function yielding file
|
- Issue #13734: Add os.fwalk(), a directory walking function yielding file
|
||||||
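The os.fwalk() entry above adds a file-descriptor-based variant of os.walk(): each tuple carries a fourth item, a directory file descriptor usable with the `dir_fd` arguments of other os functions. A minimal sketch (the directory layout here is invented for illustration; os.fwalk is Unix-only):

```python
import os
import tempfile

# Build a tiny tree: top/sub/f.txt
with tempfile.TemporaryDirectory() as top:
    os.mkdir(os.path.join(top, "sub"))
    with open(os.path.join(top, "sub", "f.txt"), "w") as f:
        f.write("hi")

    # Like os.walk(), but each tuple also yields dirfd, an open
    # file descriptor for dirpath (valid only during that iteration).
    seen = []
    for dirpath, dirnames, filenames, dirfd in os.fwalk(top):
        seen.append((sorted(dirnames), sorted(filenames)))

assert seen == [(["sub"], []), ([], ["f.txt"])]
```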
@@ -758,9 +777,8 @@ Library
 - Issues #1745761, #755670, #13357, #12629, #1200313: HTMLParser now correctly
   handles non-valid attributes, including adjacent and unquoted attributes.
 
-- Issue #13193: Fix distutils.filelist.FileList and
-  packaging.manifest.Manifest under Windows. The "recursive-include"
-  directive now recognizes both legal path separators.
+- Issue #13193: Fix distutils.filelist.FileList and packaging.manifest.Manifest
+  under Windows.
 
 - Issue #13384: Remove unnecessary __future__ import in Lib/random.py
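The GzipFile entry above (#13989) documents that GzipFile is binary-only; a text mode string is now rejected up front with a clear ValueError. A small sketch of the documented behavior:

```python
import gzip
import io

# GzipFile supports only binary modes; 'rt' and friends are rejected
# immediately rather than failing obscurely later.
try:
    gzip.GzipFile(fileobj=io.BytesIO(), mode="rt")
    raised = False
except ValueError:
    raised = True
assert raised

# The usual binary round-trip for comparison.
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as f:
    f.write(b"data")
with gzip.GzipFile(fileobj=io.BytesIO(buf.getvalue()), mode="rb") as f:
    assert f.read() == b"data"
```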
@@ -94,25 +94,6 @@ do { memory -= size; printf("%8d - %s\n", memory, comment); } while (0)
 #define LOCAL(type) static type
 #endif
 
-/* compatibility macros */
-#if (PY_VERSION_HEX < 0x02060000)
-#define Py_REFCNT(ob) (((PyObject*)(ob))->ob_refcnt)
-#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type)
-#endif
-
-#if (PY_VERSION_HEX < 0x02050000)
-typedef int Py_ssize_t;
-#define lenfunc inquiry
-#endif
-
-#if (PY_VERSION_HEX < 0x02040000)
-#define PyDict_CheckExact PyDict_Check
-
-#if !defined(Py_RETURN_NONE)
-#define Py_RETURN_NONE return Py_INCREF(Py_None), Py_None
-#endif
-#endif
-
 /* macros used to store 'join' flags in string object pointers.  note
    that all use of text and tail as object pointers must be wrapped in
    JOIN_OBJ.  see comments in the ElementObject definition for more
@@ -123,7 +104,6 @@ typedef int Py_ssize_t;
 
 /* glue functions (see the init function for details) */
 static PyObject* elementtree_parseerror_obj;
-static PyObject* elementtree_copyelement_obj;
 static PyObject* elementtree_deepcopy_obj;
 static PyObject* elementtree_iter_obj;
 static PyObject* elementtree_itertext_obj;
@@ -1127,31 +1107,6 @@ element_makeelement(PyObject* self, PyObject* args, PyObject* kw)
     return elem;
 }
 
-static PyObject*
-element_reduce(ElementObject* self, PyObject* args)
-{
-    if (!PyArg_ParseTuple(args, ":__reduce__"))
-        return NULL;
-
-    /* Hack alert: This method is used to work around a __copy__
-       problem on certain 2.3 and 2.4 versions.  To save time and
-       simplify the code, we create the copy in here, and use a dummy
-       copyelement helper to trick the copy module into doing the
-       right thing. */
-
-    if (!elementtree_copyelement_obj) {
-        PyErr_SetString(
-            PyExc_RuntimeError,
-            "copyelement helper not found"
-            );
-        return NULL;
-    }
-
-    return Py_BuildValue(
-        "O(N)", elementtree_copyelement_obj, element_copy(self, args)
-        );
-}
-
 static PyObject*
 element_remove(ElementObject* self, PyObject* args)
 {
@@ -1260,13 +1215,8 @@ element_subscr(PyObject* self_, PyObject* item)
 {
     ElementObject* self = (ElementObject*) self_;
 
-#if (PY_VERSION_HEX < 0x02050000)
-    if (PyInt_Check(item) || PyLong_Check(item)) {
-        long i = PyInt_AsLong(item);
-#else
     if (PyIndex_Check(item)) {
         Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError);
-#endif
 
         if (i == -1 && PyErr_Occurred()) {
             return NULL;
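The PyIndex_Check path kept above means Element subscripting accepts any object implementing `__index__`, not just exact ints. A small illustration from the Python side:

```python
import xml.etree.ElementTree as ET

class One:
    """Any object with __index__ works as an Element index (PyIndex_Check)."""
    def __index__(self):
        return 1

root = ET.fromstring("<r><a/><b/></r>")
assert root[0].tag == "a"
assert root[One()].tag == "b"   # __index__-based subscripting
```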
@@ -1317,13 +1267,8 @@ element_ass_subscr(PyObject* self_, PyObject* item, PyObject* value)
 {
     ElementObject* self = (ElementObject*) self_;
 
-#if (PY_VERSION_HEX < 0x02050000)
-    if (PyInt_Check(item) || PyLong_Check(item)) {
-        long i = PyInt_AsLong(item);
-#else
     if (PyIndex_Check(item)) {
         Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError);
-#endif
 
         if (i == -1 && PyErr_Occurred()) {
             return -1;
@@ -1364,13 +1309,8 @@ element_ass_subscr(PyObject* self_, PyObject* item, PyObject* value)
         if (step != 1 && newlen != slicelen)
         {
             PyErr_Format(PyExc_ValueError,
-#if (PY_VERSION_HEX < 0x02050000)
-                "attempt to assign sequence of size %d "
-                "to extended slice of size %d",
-#else
                 "attempt to assign sequence of size %zd "
                 "to extended slice of size %zd",
-#endif
                 newlen, slicelen
                 );
             return -1;
@@ -1470,18 +1410,6 @@ static PyMethodDef element_methods[] = {
     {"__copy__", (PyCFunction) element_copy, METH_VARARGS},
     {"__deepcopy__", (PyCFunction) element_deepcopy, METH_VARARGS},
 
-    /* Some 2.3 and 2.4 versions do not handle the __copy__ method on
-       C objects correctly, so we have to fake it using a __reduce__-
-       based hack (see the element_reduce implementation above for
-       details). */
-
-    /* The behaviour has been changed in 2.3.5 and 2.4.1, so we're
-       using a runtime test to figure out if we need to fake things
-       or now (see the init code below).  The following entry is
-       enabled only if the hack is needed. */
-
-    {"!__reduce__", (PyCFunction) element_reduce, METH_VARARGS},
-
     {NULL, NULL}
 };
 
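With the `__reduce__` hack removed above, copying an Element relies solely on the `__copy__`/`__deepcopy__` methods that remain registered, which behave the way the copy module expects:

```python
import copy
import xml.etree.ElementTree as ET

root = ET.Element("root")
ET.SubElement(root, "child").text = "x"

# copy.deepcopy dispatches to the Element's __deepcopy__ method;
# no __reduce__-based workaround is involved.
dup = copy.deepcopy(root)
dup[0].text = "y"
assert root[0].text == "x"   # the deep copy is fully independent
assert dup[0].text == "y"
```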
@@ -2878,7 +2806,6 @@ static PyMethodDef _functions[] = {
     {"TreeBuilder", (PyCFunction) treebuilder, METH_VARARGS},
 #if defined(USE_EXPAT)
     {"XMLParser", (PyCFunction) xmlparser, METH_VARARGS|METH_KEYWORDS},
-    {"XMLTreeBuilder", (PyCFunction) xmlparser, METH_VARARGS|METH_KEYWORDS},
 #endif
     {NULL, NULL}
 };
@@ -2933,54 +2860,8 @@ PyInit__elementtree(void)
 
     bootstrap = (
-        "from copy import copy, deepcopy\n"
+        "from copy import deepcopy\n"
+        "from xml.etree import ElementPath\n"
-        "try:\n"
-        " from xml.etree import ElementTree\n"
-        "except ImportError:\n"
-        " import ElementTree\n"
-        "ET = ElementTree\n"
-        "del ElementTree\n"
-
-        "import _elementtree as cElementTree\n"
-
-        "try:\n" /* check if copy works as is */
-        " copy(cElementTree.Element('x'))\n"
-        "except:\n"
-        " def copyelement(elem):\n"
-        "  return elem\n"
-
-        "class CommentProxy:\n"
-        " def __call__(self, text=None):\n"
-        "  element = cElementTree.Element(ET.Comment)\n"
-        "  element.text = text\n"
-        "  return element\n"
-        " def __eq__(self, other):\n"
-        "  return ET.Comment == other\n"
-        "cElementTree.Comment = CommentProxy()\n"
-
-        "class ElementTree(ET.ElementTree):\n" /* public */
-        " def parse(self, source, parser=None):\n"
-        "  close_source = False\n"
-        "  if not hasattr(source, 'read'):\n"
-        "   source = open(source, 'rb')\n"
-        "   close_source = True\n"
-        "  try:\n"
-        "   if parser is not None:\n"
-        "    while 1:\n"
-        "     data = source.read(65536)\n"
-        "     if not data:\n"
-        "      break\n"
-        "     parser.feed(data)\n"
-        "    self._root = parser.close()\n"
-        "   else:\n"
-        "    parser = cElementTree.XMLParser()\n"
-        "    self._root = parser._parse(source)\n"
-        "   return self._root\n"
-        "  finally:\n"
-        "   if close_source:\n"
-        "    source.close()\n"
-        "cElementTree.ElementTree = ElementTree\n"
-
         "def iter(node, tag=None):\n" /* helper */
         " if tag == '*':\n"
@@ -3000,123 +2881,12 @@ PyInit__elementtree(void)
         "  if e.tail:\n"
        "   yield e.tail\n"
 
-        "def parse(source, parser=None):\n" /* public */
-        " tree = ElementTree()\n"
-        " tree.parse(source, parser)\n"
-        " return tree\n"
-        "cElementTree.parse = parse\n"
-
-        "class iterparse:\n"
-        " root = None\n"
-        " def __init__(self, file, events=None):\n"
-        "  self._close_file = False\n"
-        "  if not hasattr(file, 'read'):\n"
-        "   file = open(file, 'rb')\n"
-        "   self._close_file = True\n"
-        "  self._file = file\n"
-        "  self._events = []\n"
-        "  self._index = 0\n"
-        "  self._error = None\n"
-        "  self.root = self._root = None\n"
-        "  b = cElementTree.TreeBuilder()\n"
-        "  self._parser = cElementTree.XMLParser(b)\n"
-        "  self._parser._setevents(self._events, events)\n"
-        " def __next__(self):\n"
-        "  while 1:\n"
-        "   try:\n"
-        "    item = self._events[self._index]\n"
-        "    self._index += 1\n"
-        "    return item\n"
-        "   except IndexError:\n"
-        "    pass\n"
-        "   if self._error:\n"
-        "    e = self._error\n"
-        "    self._error = None\n"
-        "    raise e\n"
-        "   if self._parser is None:\n"
-        "    self.root = self._root\n"
-        "    if self._close_file:\n"
-        "     self._file.close()\n"
-        "    raise StopIteration\n"
-        "   # load event buffer\n"
-        "   del self._events[:]\n"
-        "   self._index = 0\n"
-        "   data = self._file.read(16384)\n"
-        "   if data:\n"
-        "    try:\n"
-        "     self._parser.feed(data)\n"
-        "    except SyntaxError as exc:\n"
-        "     self._error = exc\n"
-        "   else:\n"
-        "    self._root = self._parser.close()\n"
-        "    self._parser = None\n"
-        " def __iter__(self):\n"
-        "  return self\n"
-        "cElementTree.iterparse = iterparse\n"
-
-        "class PIProxy:\n"
-        " def __call__(self, target, text=None):\n"
-        "  element = cElementTree.Element(ET.PI)\n"
-        "  element.text = target\n"
-        "  if text:\n"
-        "   element.text = element.text + ' ' + text\n"
-        "  return element\n"
-        " def __eq__(self, other):\n"
-        "  return ET.PI == other\n"
-        "cElementTree.PI = cElementTree.ProcessingInstruction = PIProxy()\n"
-
-        "def XML(text):\n" /* public */
-        " parser = cElementTree.XMLParser()\n"
-        " parser.feed(text)\n"
-        " return parser.close()\n"
-        "cElementTree.XML = cElementTree.fromstring = XML\n"
-
-        "def XMLID(text):\n" /* public */
-        " tree = XML(text)\n"
-        " ids = {}\n"
-        " for elem in tree.iter():\n"
-        "  id = elem.get('id')\n"
-        "  if id:\n"
-        "   ids[id] = elem\n"
-        " return tree, ids\n"
-        "cElementTree.XMLID = XMLID\n"
-
-        "try:\n"
-        " register_namespace = ET.register_namespace\n"
-        "except AttributeError:\n"
-        " def register_namespace(prefix, uri):\n"
-        "  ET._namespace_map[uri] = prefix\n"
-        "cElementTree.register_namespace = register_namespace\n"
-
-        "cElementTree.dump = ET.dump\n"
-        "cElementTree.ElementPath = ElementPath = ET.ElementPath\n"
-        "cElementTree.iselement = ET.iselement\n"
-        "cElementTree.QName = ET.QName\n"
-        "cElementTree.tostring = ET.tostring\n"
-        "cElementTree.fromstringlist = ET.fromstringlist\n"
-        "cElementTree.tostringlist = ET.tostringlist\n"
-        "cElementTree.VERSION = '" VERSION "'\n"
-        "cElementTree.__version__ = '" VERSION "'\n"
-
         );
 
     if (!PyRun_String(bootstrap, Py_file_input, g, NULL))
         return NULL;
 
     elementpath_obj = PyDict_GetItemString(g, "ElementPath");
 
-    elementtree_copyelement_obj = PyDict_GetItemString(g, "copyelement");
-    if (elementtree_copyelement_obj) {
-        /* reduce hack needed; enable reduce method */
-        PyMethodDef* mp;
-        for (mp = element_methods; mp->ml_name; mp++)
-            if (mp->ml_meth == (PyCFunction) element_reduce) {
-                mp->ml_name = "__reduce__";
-                break;
-            }
-    } else
-        PyErr_Clear();
-
     elementtree_deepcopy_obj = PyDict_GetItemString(g, "deepcopy");
     elementtree_iter_obj = PyDict_GetItemString(g, "iter");
     elementtree_itertext_obj = PyDict_GetItemString(g, "itertext");
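The parse/iterparse/XML/XMLID helpers removed from the C bootstrap string above correspond to functions the Python-level xml.etree.ElementTree module provides directly. For reference, a quick use of two of them:

```python
import io
import xml.etree.ElementTree as ET

# XML() / fromstring() parse a string into an Element.
root = ET.XML("<root><child id='a'/></root>")
assert root.find("child").get("id") == "a"

# iterparse() reports (event, element) pairs incrementally;
# by default only 'end' events are generated.
events = [(ev, el.tag) for ev, el in ET.iterparse(io.StringIO("<r><c/></r>"))]
assert events == [("end", "c"), ("end", "r")]
```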
@@ -2474,7 +2474,7 @@ _PyExc_Init(void)
             Py_DECREF(args_tuple);
         }
     }
-
+    Py_DECREF(bltinmod);
 }
 
 void