mirror of https://github.com/python/cpython
merge
commit f67e494ca8
.hgtags
@@ -95,3 +95,4 @@ ac1f7e5c05104d557d5acd922e95625ba5d1fe10 v3.2.1
c860feaa348d663e598986894ee4680480577e15 v3.2.2rc1
137e45f15c0bd262c9ad4c032d97425bc0589456 v3.2.2
7085403daf439adb3f9e70ef13f6bedb1c447376 v3.2.3rc1
f1a9a6505731714f0e157453ff850e3b71615c45 v3.3.0a1
@ -81,17 +81,23 @@ allows them to be created and copied very simply. When a generic wrapper
|
|||
around a buffer is needed, a :ref:`memoryview <memoryview-objects>` object
|
||||
can be created.
|
||||
|
||||
For short instructions how to write an exporting object, see
|
||||
:ref:`Buffer Object Structures <buffer-structs>`. For obtaining
|
||||
a buffer, see :c:func:`PyObject_GetBuffer`.
|
||||
|
||||
.. c:type:: Py_buffer
|
||||
|
||||
.. c:member:: void \*obj
|
||||
|
||||
A new reference to the exporting object or *NULL*. The reference is owned
|
||||
by the consumer and automatically decremented and set to *NULL* by
|
||||
:c:func:`PyBuffer_Release`.
|
||||
A new reference to the exporting object. The reference is owned by
|
||||
the consumer and automatically decremented and set to *NULL* by
|
||||
:c:func:`PyBuffer_Release`. The field is the equivalent of the return
|
||||
value of any standard C-API function.
|
||||
|
||||
For temporary buffers that are wrapped by :c:func:`PyMemoryView_FromBuffer`
|
||||
this field must be *NULL*.
|
||||
As a special case, for *temporary* buffers that are wrapped by
|
||||
:c:func:`PyMemoryView_FromBuffer` or :c:func:`PyBuffer_FillInfo`
|
||||
this field is *NULL*. In general, exporting objects MUST NOT
|
||||
use this scheme.
|
||||
|
||||
.. c:member:: void \*buf
|
||||
|
||||
|
@@ -423,7 +429,9 @@ Buffer-related functions
return -1.

On success, fill in *view*, set :c:member:`view->obj` to a new reference
to *exporter* and return 0.
to *exporter* and return 0. In the case of chained buffer providers
that redirect requests to a single object, :c:member:`view->obj` MAY
refer to this object instead of *exporter* (See :ref:`Buffer Object Structures <buffer-structs>`).

Successful calls to :c:func:`PyObject_GetBuffer` must be paired with calls
to :c:func:`PyBuffer_Release`, similar to :c:func:`malloc` and :c:func:`free`.
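The same acquire/release pairing can be illustrated from the Python level (a sketch only, not part of the C API text; :class:`memoryview` calls ``PyObject_GetBuffer`` and ``PyBuffer_Release`` internally):

```python
# A memoryview acquires a buffer from the exporter; releasing it
# (here via the with-block) is the Python-level counterpart of
# PyBuffer_Release.
data = bytearray(b"hello")
with memoryview(data) as view:   # buffer acquired
    assert view[0] == ord("h")
# Buffer released: the exporter may be resized again.
data.extend(b" world")
assert bytes(data) == b"hello world"
```

While the view is held, an attempt to resize the ``bytearray`` would raise ``BufferError``, which is exactly the exporter honouring its outstanding buffer.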
@@ -1213,18 +1213,29 @@ Buffer Object Structures
int (PyObject *exporter, Py_buffer *view, int flags);

Handle a request to *exporter* to fill in *view* as specified by *flags*.
A standard implementation of this function will take these steps:
Except for point (3), an implementation of this function MUST take these
steps:

- Check if the request can be met. If not, raise :c:data:`PyExc_BufferError`,
set :c:data:`view->obj` to *NULL* and return -1.
(1) Check if the request can be met. If not, raise :c:data:`PyExc_BufferError`,
set :c:data:`view->obj` to *NULL* and return -1.

- Fill in the requested fields.
(2) Fill in the requested fields.

- Increment an internal counter for the number of exports.
(3) Increment an internal counter for the number of exports.

- Set :c:data:`view->obj` to *exporter* and increment :c:data:`view->obj`.
(4) Set :c:data:`view->obj` to *exporter* and increment :c:data:`view->obj`.

- Return 0.
(5) Return 0.

If *exporter* is part of a chain or tree of buffer providers, two main
schemes can be used:

* Re-export: Each member of the tree acts as the exporting object and
sets :c:data:`view->obj` to a new reference to itself.

* Redirect: The buffer request is redirected to the root object of the
tree. Here, :c:data:`view->obj` will be a new reference to the root
object.

The individual fields of *view* are described in section
:ref:`Buffer structure <buffer-structure>`, the rules how an exporter

@@ -1233,8 +1244,9 @@ Buffer Object Structures

All memory pointed to in the :c:type:`Py_buffer` structure belongs to
the exporter and must remain valid until there are no consumers left.
:c:member:`~Py_buffer.shape`, :c:member:`~Py_buffer.strides`,
:c:member:`~Py_buffer.suboffsets` and :c:member:`~Py_buffer.internal`
:c:member:`~Py_buffer.format`, :c:member:`~Py_buffer.shape`,
:c:member:`~Py_buffer.strides`, :c:member:`~Py_buffer.suboffsets`
and :c:member:`~Py_buffer.internal`
are read-only for the consumer.

:c:func:`PyBuffer_FillInfo` provides an easy way of exposing a simple
@@ -1250,21 +1262,23 @@ Buffer Object Structures
void (PyObject *exporter, Py_buffer *view);

Handle a request to release the resources of the buffer. If no resources
need to be released, this field may be *NULL*. A standard implementation
of this function will take these steps:
need to be released, :c:member:`PyBufferProcs.bf_releasebuffer` may be
*NULL*. Otherwise, a standard implementation of this function will take
these optional steps:

- Decrement an internal counter for the number of exports.
(1) Decrement an internal counter for the number of exports.

- If the counter is 0, free all memory associated with *view*.
(2) If the counter is 0, free all memory associated with *view*.

The exporter MUST use the :c:member:`~Py_buffer.internal` field to keep
track of buffer-specific resources (if present). This field is guaranteed
to remain constant, while a consumer MAY pass a copy of the original buffer
as the *view* argument.
track of buffer-specific resources. This field is guaranteed to remain
constant, while a consumer MAY pass a copy of the original buffer as the
*view* argument.


This function MUST NOT decrement :c:data:`view->obj`, since that is
done automatically in :c:func:`PyBuffer_Release`.
done automatically in :c:func:`PyBuffer_Release` (this scheme is
useful for breaking reference cycles).


:c:func:`PyBuffer_Release` is the interface for the consumer that
@@ -264,8 +264,7 @@ the organizations that use Python.

**What are the restrictions on Python's use?**

They're practically nonexistent. Consult the :file:`Misc/COPYRIGHT` file in the
source distribution, or the section :ref:`history-and-license` for the full
They're practically nonexistent. Consult :ref:`history-and-license` for the full
language, but it boils down to three conditions:

* You have to leave the copyright notice on the software; if you don't include
@@ -261,8 +261,8 @@ behave slightly differently from real Capsules. Specifically:
copy as you see fit.)

You can find :file:`capsulethunk.h` in the Python source distribution
in the :file:`Doc/includes` directory. We also include it here for
your reference; here is :file:`capsulethunk.h`:
as :source:`Doc/includes/capsulethunk.h`. We also include it here for
your convenience:

.. literalinclude:: ../includes/capsulethunk.h

@@ -360,7 +360,7 @@ and more.

You can learn about this by interactively experimenting with the :mod:`re`
module. If you have :mod:`tkinter` available, you may also want to look at
:file:`Tools/demo/redemo.py`, a demonstration program included with the
:source:`Tools/demo/redemo.py`, a demonstration program included with the
Python distribution. It allows you to enter REs and strings, and displays
whether the RE matches or fails. :file:`redemo.py` can be quite useful when
trying to debug a complicated RE. Phil Schwartz's `Kodos
@@ -495,7 +495,7 @@ more convenient. If a program contains a lot of regular expressions, or re-uses
the same ones in several locations, then it might be worthwhile to collect all
the definitions in one place, in a section of code that compiles all the REs
ahead of time. To take an example from the standard library, here's an extract
from the now deprecated :file:`xmllib.py`::
from the now-defunct Python 2 standard :mod:`xmllib` module::

   ref = re.compile( ... )
   entityref = re.compile( ... )
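A runnable sketch of the same compile-once pattern (the pattern strings here are illustrative stand-ins, not the ones from ``xmllib``):

```python
import re

# Compile the REs once, in one place...
ref = re.compile(r"&#(\d+);")        # numeric character reference
entityref = re.compile(r"&(\w+);")   # named entity reference

# ...then reuse the compiled objects wherever they are needed.
assert entityref.search("fish &amp; chips").group(1) == "amp"
assert ref.search("&#65;").group(1) == "65"
```

Compiling up front keeps the pattern definitions in one reviewable place and avoids paying the compilation cost (or relying on :mod:`re`'s internal cache) at every call site.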
@@ -32,6 +32,8 @@ Such constructors may be factory functions or class instances.
returned by *function* at pickling time. :exc:`TypeError` will be raised if
*object* is a class or *constructor* is not callable.

See the :mod:`pickle` module for more details on the interface expected of
*function* and *constructor*.

See the :mod:`pickle` module for more details on the interface
expected of *function* and *constructor*. Note that the
:attr:`~pickle.Pickler.dispatch_table` attribute of a pickler
object or subclass of :class:`pickle.Pickler` can also be used for
declaring reduction functions.
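A minimal sketch of declaring a reduction function with :func:`copyreg.pickle` (the ``Point`` class and ``reduce_point`` helper are made up for illustration):

```python
import copyreg
import pickle

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def reduce_point(p):
    # A reduction function returns (callable, args), the same
    # contract as __reduce__().
    return (Point, (p.x, p.y))

# Register the reduction function in the global dispatch table.
copyreg.pickle(Point, reduce_point)

q = pickle.loads(pickle.dumps(Point(1, 2)))
assert (q.x, q.y) == (1, 2)
```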
@@ -23,7 +23,7 @@ definition of the Python bindings for the DOM and SAX interfaces.
html.rst
html.parser.rst
html.entities.rst
pyexpat.rst
xml.etree.elementtree.rst
xml.dom.rst
xml.dom.minidom.rst
xml.dom.pulldom.rst

@@ -31,4 +31,4 @@ definition of the Python bindings for the DOM and SAX interfaces.
xml.sax.handler.rst
xml.sax.utils.rst
xml.sax.reader.rst
xml.etree.elementtree.rst
pyexpat.rst
@@ -415,13 +415,14 @@ The :mod:`multiprocessing` package mostly replicates the API of the
A numeric handle of a system object which will become "ready" when
the process ends.

You can use this value if you want to wait on several events at
once using :func:`multiprocessing.connection.wait`. Otherwise
calling :meth:`join()` is simpler.

On Windows, this is an OS handle usable with the ``WaitForSingleObject``
and ``WaitForMultipleObjects`` family of API calls. On Unix, this is
a file descriptor usable with primitives from the :mod:`select` module.

You can use this value if you want to wait on several events at once.
Otherwise calling :meth:`join()` is simpler.

.. versionadded:: 3.3

.. method:: terminate()
@@ -785,6 +786,9 @@ Connection objects are usually created using :func:`Pipe` -- see also
*timeout* is a number then this specifies the maximum time in seconds to
block. If *timeout* is ``None`` then an infinite timeout is used.

Note that multiple connection objects may be polled at once by
using :func:`multiprocessing.connection.wait`.
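For a single connection, :meth:`poll` behaves as described above (a quick illustrative sketch):

```python
from multiprocessing import Pipe

a, b = Pipe()
b.send('ping')
# poll() reports whether data can be read within the timeout.
assert a.poll(1.0)
assert a.recv() == 'ping'
assert not a.poll(0)   # a zero timeout is a non-blocking poll
```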

.. method:: send_bytes(buffer[, offset[, size]])

Send byte data from an object supporting the buffer interface as a
@@ -1779,8 +1783,9 @@ Usually message passing between processes is done using queues or by using

However, the :mod:`multiprocessing.connection` module allows some extra
flexibility. It basically gives a high level message oriented API for dealing
with sockets or Windows named pipes, and also has support for *digest
authentication* using the :mod:`hmac` module.
with sockets or Windows named pipes. It also has support for *digest
authentication* using the :mod:`hmac` module, and for polling
multiple connections at the same time.


.. function:: deliver_challenge(connection, authkey)
@@ -1878,6 +1883,38 @@ authentication* using the :mod:`hmac` module.
The address from which the last accepted connection came. If this is
unavailable then it is ``None``.

.. function:: wait(object_list, timeout=None)

Wait till an object in *object_list* is ready. Returns the list of
those objects in *object_list* which are ready. If *timeout* is a
float then the call blocks for at most that many seconds. If
*timeout* is ``None`` then it will block for an unlimited period.

For both Unix and Windows, an object can appear in *object_list* if
it is

* a readable :class:`~multiprocessing.Connection` object;
* a connected and readable :class:`socket.socket` object; or
* the :attr:`~multiprocessing.Process.sentinel` attribute of a
:class:`~multiprocessing.Process` object.

A connection or socket object is ready when there is data available
to be read from it, or the other end has been closed.

**Unix**: ``wait(object_list, timeout)`` is almost equivalent to
``select.select(object_list, [], [], timeout)``. The difference is
that, if :func:`select.select` is interrupted by a signal, it can
raise :exc:`OSError` with an error number of ``EINTR``, whereas
:func:`wait` will not.

**Windows**: An item in *object_list* must either be an integer
handle which is waitable (according to the definition used by the
documentation of the Win32 function ``WaitForMultipleObjects()``)
or it can be an object with a :meth:`fileno` method which returns a
socket handle or pipe handle. (Note that pipe handles and socket
handles are **not** waitable handles.)

.. versionadded:: 3.3

The module defines two exceptions:
@@ -1929,6 +1966,41 @@ server::

conn.close()

The following code uses :func:`~multiprocessing.connection.wait` to
wait for messages from multiple processes at once::

   import time, random
   from multiprocessing import Process, Pipe, current_process
   from multiprocessing.connection import wait

   def foo(w):
       for i in range(10):
           w.send((i, current_process().name))
       w.close()

   if __name__ == '__main__':
       readers = []

       for i in range(4):
           r, w = Pipe(duplex=False)
           readers.append(r)
           p = Process(target=foo, args=(w,))
           p.start()
           # We close the writable end of the pipe now to be sure that
           # p is the only process which owns a handle for it. This
           # ensures that when p closes its handle for the writable end,
           # wait() will promptly report the readable end as being ready.
           w.close()

       while readers:
           for r in wait(readers):
               try:
                   msg = r.recv()
               except EOFError:
                   readers.remove(r)
               else:
                   print(msg)


.. _multiprocessing-address-formats:

@@ -15,6 +15,11 @@ Installed Python distributions are represented by instances of
Most functions also provide an extra argument ``use_egg_info`` to take legacy
distributions into account.

For the purpose of this module, "installed" means that the distribution's
:file:`.dist-info`, :file:`.egg-info` or :file:`egg` directory or file is found
on :data:`sys.path`. For example, if the parent directory of a
:file:`dist-info` directory is added to :envvar:`PYTHONPATH`, then it will be
available in the database.

Classes representing installed distributions
--------------------------------------------

@@ -128,7 +133,7 @@ Functions to work with the database
for the first installed distribution matching *name*. Egg distributions are
considered only if *use_egg_info* is true; if both a dist-info and an egg
file are found, the dist-info prevails. The directories to be searched are
given in *paths*, which defaults to :data:`sys.path`. Return ``None`` if no
given in *paths*, which defaults to :data:`sys.path`. Returns ``None`` if no
matching distribution is found.

.. FIXME param should be named use_egg

@@ -200,20 +205,23 @@ functions:
Examples
--------

Print all information about a distribution
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Printing all information about a distribution
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Given a path to a ``.dist-info`` distribution, we shall print out all
Given the name of an installed distribution, we shall print out all
information that can be obtained using functions provided in this module::

   import sys
   import packaging.database

   path = input()
   # first create the Distribution instance
   try:
       dist = packaging.database.Distribution(path)
   except FileNotFoundError:
       name = sys.argv[1]
   except ValueError:
       sys.exit('Not enough arguments')

   # first create the Distribution instance
   dist = packaging.database.Distribution(path)
   if dist is None:
       sys.exit('No such distribution')

   print('Information about %r' % dist.name)

@@ -244,7 +252,7 @@ information from a :file:`.dist-info` directory. By typing in the console:

.. code-block:: sh

   $ echo /tmp/choxie/choxie-2.0.0.9.dist-info | python3 print_info.py
   python print_info.py choxie

we get the following output:

@@ -299,10 +307,23 @@ we get the following output:
* It was installed as a dependency


Find out obsoleted distributions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Getting metadata about a distribution
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Now, we take tackle a different problem, we are interested in finding out
Sometimes you're not interested in the packaging information contained in a
full :class:`Distribution` object but just want to do something with its
:attr:`~Distribution.metadata`::

   >>> from packaging.database import get_distribution
   >>> info = get_distribution('chocolate').metadata
   >>> info['Keywords']
   ['cooking', 'happiness']


Finding out obsoleted distributions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Now, we tackle a different problem: we are interested in finding out
which distributions have been obsoleted. This can be easily done as follows::

   import packaging.database

@@ -285,6 +285,29 @@ The :mod:`pickle` module exports two classes, :class:`Pickler` and

See :ref:`pickle-persistent` for details and examples of uses.

.. attribute:: dispatch_table

A pickler object's dispatch table is a registry of *reduction
functions* of the kind which can be declared using
:func:`copyreg.pickle`. It is a mapping whose keys are classes
and whose values are reduction functions. A reduction function
takes a single argument of the associated class and should
conform to the same interface as a :meth:`~object.__reduce__`
method.

By default, a pickler object will not have a
:attr:`dispatch_table` attribute, and it will instead use the
global dispatch table managed by the :mod:`copyreg` module.
However, to customize the pickling for a specific pickler object
one can set the :attr:`dispatch_table` attribute to a dict-like
object. Alternatively, if a subclass of :class:`Pickler` has a
:attr:`dispatch_table` attribute then this will be used as the
default dispatch table for instances of that class.

See :ref:`pickle-dispatch` for usage examples.

.. versionadded:: 3.3

.. attribute:: fast

Deprecated. Enable fast mode if set to a true value. The fast mode
@@ -575,6 +598,44 @@ pickle external objects by reference.

.. literalinclude:: ../includes/dbpickle.py

.. _pickle-dispatch:

Dispatch Tables
^^^^^^^^^^^^^^^

If one wants to customize pickling of some classes without disturbing
any other code which depends on pickling, then one can create a
pickler with a private dispatch table.

The global dispatch table managed by the :mod:`copyreg` module is
available as :data:`copyreg.dispatch_table`. Therefore, one may
choose to use a modified copy of :data:`copyreg.dispatch_table` as a
private dispatch table.

For example ::

   f = io.BytesIO()
   p = pickle.Pickler(f)
   p.dispatch_table = copyreg.dispatch_table.copy()
   p.dispatch_table[SomeClass] = reduce_SomeClass

creates an instance of :class:`pickle.Pickler` with a private dispatch
table which handles the ``SomeClass`` class specially. Alternatively,
the code ::

   class MyPickler(pickle.Pickler):
       dispatch_table = copyreg.dispatch_table.copy()
       dispatch_table[SomeClass] = reduce_SomeClass
   f = io.BytesIO()
   p = MyPickler(f)

does the same, but all instances of ``MyPickler`` will by default
share the same dispatch table. The equivalent code using the
:mod:`copyreg` module is ::

   copyreg.pickle(SomeClass, reduce_SomeClass)
   f = io.BytesIO()
   p = pickle.Pickler(f)

.. _pickle-state:

@@ -369,12 +369,11 @@ The :mod:`signal` module defines the following functions:
.. versionadded:: 3.3


.. function:: sigtimedwait(sigset, (timeout_sec, timeout_nsec))
.. function:: sigtimedwait(sigset, timeout)

Like :func:`sigtimedwait`, but takes a tuple of ``(seconds, nanoseconds)``
as an additional argument specifying a timeout. If both *timeout_sec* and
*timeout_nsec* are specified as :const:`0`, a poll is performed. Returns
:const:`None` if a timeout occurs.
Like :func:`sigwaitinfo`, but takes an additional *timeout* argument
specifying a timeout. If *timeout* is specified as :const:`0`, a poll is
performed. Returns :const:`None` if a timeout occurs.

Availability: Unix (see the man page :manpage:`sigtimedwait(2)` for further
information).
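A Unix-only sketch of :func:`signal.sigtimedwait` in action (blocking the signal first so it is left pending rather than delivered to a handler):

```python
import os
import signal

# Block SIGUSR1 so that sending it leaves it pending.
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})
os.kill(os.getpid(), signal.SIGUSR1)

# The pending signal is returned immediately, well within the timeout.
info = signal.sigtimedwait({signal.SIGUSR1}, 5)
assert info is not None and info.si_signo == signal.SIGUSR1

# With nothing pending, a timeout of 0 performs a poll and returns None.
assert signal.sigtimedwait({signal.SIGUSR1}, 0) is None
```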
@@ -1311,7 +1311,7 @@ network. This example might require special privilege::
import struct


# CAN frame packing/unpacking (see `struct can_frame` in <linux/can.h>)
# CAN frame packing/unpacking (see 'struct can_frame' in <linux/can.h>)

can_frame_fmt = "=IB3x8s"
can_frame_size = struct.calcsize(can_frame_fmt)

@@ -1326,7 +1326,7 @@ network. This example might require special privilege::
return (can_id, can_dlc, data[:can_dlc])


# create a raw socket and bind it to the `vcan0` interface
# create a raw socket and bind it to the 'vcan0' interface
s = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
s.bind(('vcan0',))

@@ -770,7 +770,7 @@ always available.
independent Python files are installed; by default, this is the string
``'/usr/local'``. This can be set at build time with the ``--prefix``
argument to the :program:`configure` script. The main collection of Python
library modules is installed in the directory :file:`{prefix}/lib/python{X.Y}``
library modules is installed in the directory :file:`{prefix}/lib/python{X.Y}`
while the platform independent header files (all except :file:`pyconfig.h`) are
stored in :file:`{prefix}/include/python{X.Y}`, where *X.Y* is the version
number of Python, for example ``3.2``.
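The prefix in use can be inspected at run time (the printed values are installation-dependent, so none are asserted here):

```python
import sys

# Installation-dependent: varies with the --prefix used at build time.
print(sys.prefix)             # e.g. /usr/local
print(sys.version_info[:2])   # the X.Y in prefix/lib/pythonX.Y
```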
@@ -15,6 +15,14 @@
Model interface. It is intended to be simpler than the full DOM and also
significantly smaller.

.. note::

   The :mod:`xml.dom.minidom` module provides an implementation of the W3C-DOM,
   with an API similar to that in other programming languages. Users who are
   unfamiliar with the W3C-DOM interface or who would like to write less code
   for processing XML files should consider using the
   :mod:`xml.etree.ElementTree` module instead.

DOM applications typically start by parsing some XML into a DOM. With
:mod:`xml.dom.minidom`, this is done through the parse functions::

@@ -118,7 +118,7 @@ been GPL-compatible; the table below summarizes the various releases.
+----------------+--------------+------------+------------+-----------------+
| 3.2.2          | 3.2.1        | 2011       | PSF        | yes             |
+----------------+--------------+------------+------------+-----------------+
| 3.3            | 3.2          | 2012       | PSF        | yes             |
| 3.3.0          | 3.2          | 2012       | PSF        | yes             |
+----------------+--------------+------------+------------+-----------------+

.. note::

@@ -401,7 +401,7 @@ String literals are described by the following lexical definitions:

.. productionlist::
   stringliteral: [`stringprefix`](`shortstring` | `longstring`)
   stringprefix: "r" | "R"
   stringprefix: "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"
   shortstring: "'" `shortstringitem`* "'" | '"' `shortstringitem`* '"'
   longstring: "'''" `longstringitem`* "'''" | '"""' `longstringitem`* '"""'
   shortstringitem: `shortstringchar` | `stringescapeseq`

@@ -441,6 +441,9 @@ instance of the :class:`bytes` type instead of the :class:`str` type. They
may only contain ASCII characters; bytes with a numeric value of 128 or greater
must be expressed with escapes.

As of Python 3.3 it is possible again to prefix unicode strings with a
``u`` prefix to simplify maintenance of dual 2.x and 3.x codebases.

Both string and bytes literals may optionally be prefixed with a letter ``'r'``
or ``'R'``; such strings are called :dfn:`raw strings` and treat backslashes as
literal characters. As a result, in string literals, ``'\U'`` and ``'\u'``

@@ -450,6 +453,11 @@ escapes in raw strings are not treated specially.
The ``'rb'`` prefix of raw bytes literals has been added as a synonym
of ``'br'``.

.. versionadded:: 3.3
   Support for the unicode legacy literal (``u'value'``) and other
   versions were reintroduced to simplify the maintenance of dual
   Python 2.x and 3.x codebases. See :pep:`414` for more information.

In triple-quoted strings, unescaped newlines and quotes are allowed (and are
retained), except that three unescaped quotes in a row terminate the string. (A
"quote" is the character used to open the string, i.e. either ``'`` or ``"``.)
@@ -5,7 +5,7 @@

Sphinx extension with Python doc-specific markup.

:copyright: 2008, 2009, 2010 by Georg Brandl.
:copyright: 2008, 2009, 2010, 2011, 2012 by Georg Brandl.
:license: Python license.
"""

@@ -201,11 +201,12 @@ class PydocTopicsBuilder(Builder):
            document.append(doctree.ids[labelid])
            destination = StringOutput(encoding='utf-8')
            writer.write(document, destination)
            self.topics[label] = str(writer.output)
            self.topics[label] = writer.output.encode('utf-8')

    def finish(self):
        f = open(path.join(self.outdir, 'topics.py'), 'w')
        try:
            f.write('# -*- coding: utf-8 -*-\n')
            f.write('# Autogenerated by Sphinx on %s\n' % asctime())
            f.write('topics = ' + pformat(self.topics) + '\n')
        finally:
@@ -1,16 +1,24 @@
c-api/arg,,:ref,"PyArg_ParseTuple(args, ""O|O:ref"", &object, &callback)"
c-api/list,,:high,list[low:high]
c-api/list,,:high,list[low:high] = itemlist
c-api/sequence,,:i2,del o[i1:i2]
c-api/sequence,,:i2,o[i1:i2]
c-api/sequence,,:i2,o[i1:i2] = v
c-api/sequence,,:i2,del o[i1:i2]
c-api/unicode,,:end,str[start:end]
c-api/unicode,,:start,unicode[start:start+length]
distutils/examples,267,`,This is the description of the ``foobar`` package.
distutils/setupscript,,::,
extending/embedding,,:numargs,"if(!PyArg_ParseTuple(args, "":numargs""))"
extending/extending,,:set,"if (PyArg_ParseTuple(args, ""O:set_callback"", &temp)) {"
extending/extending,,:myfunction,"PyArg_ParseTuple(args, ""D:myfunction"", &c);"
extending/extending,,:set,"if (PyArg_ParseTuple(args, ""O:set_callback"", &temp)) {"
extending/newtypes,,:call,"if (!PyArg_ParseTuple(args, ""sss:call"", &arg1, &arg2, &arg3)) {"
extending/windows,,:initspam,/export:initspam
faq/programming,,:chr,">=4.0) or 1+f(xc,yc,x*x-y*y+xc,2.0*x*y+yc,k-1,f):f(xc,yc,x,y,k,f):chr("
faq/programming,,::,for x in sequence[::-1]:
faq/programming,,:reduce,"print((lambda Ru,Ro,Iu,Io,IM,Sx,Sy:reduce(lambda x,y:x+y,map(lambda y,"
faq/programming,,:reduce,"Sx=Sx,Sy=Sy:reduce(lambda x,y:x+y,map(lambda x,xc=Ru,yc=yc,Ru=Ru,Ro=Ro,"
faq/windows,229,:EOF,@setlocal enableextensions & python -x %~f0 %* & goto :EOF
faq/windows,393,:REG,.py :REG_SZ: c:\<path to python>\python.exe -u %s %s
howto/cporting,,:add,"if (!PyArg_ParseTuple(args, ""ii:add_ints"", &one, &two))"
howto/cporting,,:encode,"if (!PyArg_ParseTuple(args, ""O:encode_object"", &myobj))"
howto/cporting,,:say,"if (!PyArg_ParseTuple(args, ""U:say_hello"", &name))"
@@ -22,19 +30,53 @@ howto/curses,,:magenta,"They are: 0:black, 1:red, 2:green, 3:yellow, 4:blue, 5:m
howto/curses,,:red,"They are: 0:black, 1:red, 2:green, 3:yellow, 4:blue, 5:magenta, 6:cyan, and"
howto/curses,,:white,"7:white."
howto/curses,,:yellow,"They are: 0:black, 1:red, 2:green, 3:yellow, 4:blue, 5:magenta, 6:cyan, and"
howto/logging,,:And,"WARNING:And this, too"
howto/logging,,:And,"WARNING:root:And this, too"
howto/logging,,:Doing,INFO:root:Doing something
howto/logging,,:Finished,INFO:root:Finished
howto/logging,,:logger,severity:logger name:message
howto/logging,,:Look,WARNING:root:Look before you leap!
howto/logging,,:message,severity:logger name:message
howto/logging,,:root,DEBUG:root:This message should go to the log file
howto/logging,,:root,INFO:root:Doing something
howto/logging,,:root,INFO:root:Finished
howto/logging,,:root,INFO:root:So should this
howto/logging,,:root,INFO:root:Started
howto/logging,,:root,"WARNING:root:And this, too"
howto/logging,,:root,WARNING:root:Look before you leap!
howto/logging,,:root,WARNING:root:Watch out!
howto/logging,,:So,INFO:root:So should this
howto/logging,,:So,INFO:So should this
howto/logging,,:Started,INFO:root:Started
howto/logging,,:This,DEBUG:root:This message should go to the log file
howto/logging,,:This,DEBUG:This message should appear on the console
howto/logging,,:Watch,WARNING:root:Watch out!
howto/pyporting,75,::,# make sure to use :: Python *and* :: Python :: 3 so
howto/pyporting,75,::,"'Programming Language :: Python',"
howto/pyporting,75,::,'Programming Language :: Python :: 3'
howto/regex,,::,
howto/regex,,:foo,(?:foo)
howto/urllib2,,:example,"for example ""joe@password:example.com"""
howto/webservers,,.. image:,.. image:: http.png
library/audioop,,:ipos,"# factor = audioop.findfactor(in_test[ipos*2:ipos*2+len(out_test)],"
library/bisect,32,:hi,all(val >= x for val in a[i:hi])
library/bisect,42,:hi,all(val > x for val in a[i:hi])
library/configparser,,:home,my_dir: ${Common:home_dir}/twosheds
library/configparser,,:option,${section:option}
library/configparser,,:path,python_dir: ${Frameworks:path}/Python/Versions/${Frameworks:Python}
library/configparser,,:Python,python_dir: ${Frameworks:path}/Python/Versions/${Frameworks:Python}
library/configparser,,`,# Set the optional `raw` argument of get() to True if you wish to disable
library/configparser,,:system,path: ${Common:system_dir}/Library/Frameworks/
library/configparser,,`,# The optional `fallback` argument can be used to provide a fallback value
library/configparser,,`,# The optional `vars` argument is a dict with members that will take
library/datetime,,:MM,
library/datetime,,:SS,
library/decimal,,:optional,"trailneg:optional trailing minus indicator"
library/difflib,,:ahi,a[alo:ahi]
library/difflib,,:bhi,b[blo:bhi]
library/difflib,,:i1,
library/difflib,,:i2,
library/difflib,,:j2,
library/difflib,,:i1,
library/dis,,:TOS,
library/dis,,`,TOS = `TOS`
library/doctest,,`,``factorial`` from the ``example`` module:
@@ -44,96 +86,164 @@ library/functions,,:step,a[start:stop:step]
library/functions,,:stop,"a[start:stop, i]"
library/functions,,:stop,a[start:stop:step]
library/hotshot,,:lineno,"ncalls tottime percall cumtime percall filename:lineno(function)"
library/http.client,52,:port,host:port
library/httplib,,:port,host:port
library/imaplib,,:MM,"""DD-Mmm-YYYY HH:MM:SS"
library/imaplib,,:SS,"""DD-Mmm-YYYY HH:MM:SS"
library/itertools,,:stop,elements from seq[start:stop:step]
library/itertools,,:step,elements from seq[start:stop:step]
library/itertools,,:stop,elements from seq[start:stop:step]
library/linecache,,:sys,"sys:x:3:3:sys:/dev:/bin/sh"
library/logging,,:And,
library/logging,,:Doing,INFO:root:Doing something
library/logging,,:Finished,INFO:root:Finished
library/logging,,:logger,severity:logger name:message
library/logging,,:Look,WARNING:root:Look before you leap!
library/logging,,:message,severity:logger name:message
library/logging,,:package1,
library/logging,,:package2,
library/logging,,:root,
library/logging,,:This,
library/logging,,:port,host:port
library/logging,,:root,
library/logging,,:So,INFO:root:So should this
library/logging,,:So,INFO:So should this
library/logging,,:Started,INFO:root:Started
library/logging,,:This,
library/logging,,:Watch,WARNING:root:Watch out!
library/logging.handlers,,:port,host:port
library/mmap,,:i2,obj[i1:i2]
library/multiprocessing,,:queue,">>> QueueManager.register('get_queue', callable=lambda:queue)"
library/multiprocessing,,`,">>> l._callmethod('__getitem__', (20,))     # equiv to `l[20]`"
library/multiprocessing,,`,">>> l._callmethod('__getslice__', (2, 7))   # equiv to `l[2:7]`"
library/multiprocessing,,`,# `BaseManager`.
library/multiprocessing,,`,# `Pool.imap()` (which will save on the amount of code needed anyway).
library/multiprocessing,,`,# Add more tasks using `put()`
library/multiprocessing,,`,# A test file for the `multiprocessing` package
library/multiprocessing,,`,# A test of `multiprocessing.Pool` class
library/multiprocessing,,`,# Add more tasks using `put()`
library/multiprocessing,,`,# `BaseManager`.
library/multiprocessing,,`,`Cluster` is a subclass of `SyncManager` so it allows creation of
library/multiprocessing,,`,# create server for a `HostManager` object
library/multiprocessing,,`,# Depends on `multiprocessing` package -- tested with `processing-0.60`
library/multiprocessing,,`,`hostname` gives the name of the host.  If hostname is not
library/multiprocessing,,`,# in the original order then consider using `Pool.map()` or
library/multiprocessing,,`,">>> l._callmethod('__getitem__', (20,))     # equiv to `l[20]`"
library/multiprocessing,,`,">>> l._callmethod('__getslice__', (2, 7))   # equiv to `l[2:7]`"
library/multiprocessing,,`,# Not sure if we should synchronize access to `socket.accept()` method by
library/multiprocessing,,`,# object.  (We import `multiprocessing.reduction` to enable this pickling.)
library/multiprocessing,,`,# `Pool.imap()` (which will save on the amount of code needed anyway).
library/multiprocessing,,:queue,">>> QueueManager.register('get_queue', callable=lambda:queue)"
library/multiprocessing,,`,# register the Foo class; make `f()` and `g()` accessible via proxy
library/multiprocessing,,`,# register the Foo class; make `g()` and `_h()` accessible via proxy
library/multiprocessing,,`,# register the generator function baz; use `GeneratorProxy` to make proxies
library/multiprocessing,,`,`Cluster` is a subclass of `SyncManager` so it allows creation of
library/multiprocessing,,`,`hostname` gives the name of the host.  If hostname is not
library/multiprocessing,,`,`slots` is used to specify the number of slots for processes on
library/nntplib,,:bytes,:bytes
library/nntplib,,:bytes,"['xref', 'from', ':lines', ':bytes', 'references', 'date', 'message-id', 'subject']"
library/nntplib,,:lines,:lines
library/nntplib,,:lines,"['xref', 'from', ':lines', ':bytes', 'references', 'date', 'message-id', 'subject']"
library/optparse,,:len,"del parser.rargs[:len(value)]"
library/os.path,,:foo,c:foo
library/parser,,`,"""Make a function that raises an argument to the exponent `exp`."""
library/pdb,,:lineno,filename:lineno
library/pdb,,:lineno,[filename:lineno | bpnumber [bpnumber ...]]
library/pickle,,:memory,"conn = sqlite3.connect("":memory:"")"
library/posix,,`,"CFLAGS=""`getconf LFS_CFLAGS`"" OPT=""-g -O2 $CFLAGS"""
library/profile,,:lineno,ncalls  tottime  percall  cumtime  percall filename:lineno(function)
library/pprint,209,::,"'classifiers': ['Development Status :: 4 - Beta',"
library/pprint,209,::,"'Intended Audience :: Developers',"
library/pprint,209,::,"'License :: OSI Approved :: MIT License',"
library/pprint,209,::,"'Natural Language :: English',"
library/pprint,209,::,"'Operating System :: OS Independent',"
library/pprint,209,::,"'Programming Language :: Python',"
library/pprint,209,::,"'Programming Language :: Python :: 2',"
library/pprint,209,::,"'Programming Language :: Python :: 2.6',"
library/pprint,209,::,"'Programming Language :: Python :: 2.7',"
library/pprint,209,::,"'Topic :: Software Development :: Libraries',"
library/pprint,209,::,"'Topic :: Software Development :: Libraries :: Python Modules'],"
library/profile,,:lineno,filename:lineno(function)
library/profile,,:lineno,ncalls  tottime  percall  cumtime  percall filename:lineno(function)
library/profile,,:lineno,"(sort by filename:lineno),"
library/pyexpat,,:elem1,<py:elem1 />
library/pyexpat,,:py,"xmlns:py = ""http://www.python.org/ns/"">"
library/repr,,`,"return `obj`"
library/smtplib,,:port,"as well as a regular host:port server."
library/smtplib,,:port,method must support that as well as a regular host:port
library/socket,,::,"(10, 1, 6, '', ('2001:888:2000:d::a2', 80, 0, 0))]"
library/socket,,::,'5aef:2b::8'
library/sqlite3,,:memory,
library/socket,,:can,"return (can_id, can_dlc, data[:can_dlc])"
library/socket,,:len,fds.fromstring(cmsg_data[:len(cmsg_data) - (len(cmsg_data) % fds.itemsize)])
library/sqlite3,,:age,"cur.execute(""select * from people where name_last=:who and age=:age"", {""who"": who, ""age"": age})"
library/sqlite3,,:age,"select name_last, age from people where name_last=:who and age=:age"
library/sqlite3,,:who,"select name_last, age from people where name_last=:who and age=:age"
library/ssl,,:My,"Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Organization, Inc."
library/sqlite3,,:memory,
library/sqlite3,,:who,"cur.execute(""select * from people where name_last=:who and age=:age"", {""who"": who, ""age"": age})"
library/ssl,,:My,"Organizational Unit Name (eg, section) []:My Group"
library/ssl,,:My,"Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Organization, Inc."
library/ssl,,:myserver,"Common Name (eg, YOUR name) []:myserver.mygroup.myorganization.com"
library/ssl,,:MyState,State or Province Name (full name) [Some-State]:MyState
library/ssl,,:ops,Email Address []:ops@myserver.mygroup.myorganization.com
library/ssl,,:Some,"Locality Name (eg, city) []:Some City"
library/ssl,,:US,Country Name (2 letter code) [AU]:US
library/stdtypes,,::,>>> a[::-1].tolist()
library/stdtypes,,::,>>> a[::2].tolist()
library/stdtypes,,:end,s[start:end]
library/stdtypes,,::,>>> hash(v[::-2]) == hash(b'abcefg'[::-2])
library/stdtypes,,:len,s[len(s):len(s)]
library/stdtypes,,:len,s[len(s):len(s)]
library/stdtypes,,::,>>> y = m[::2]
library/string,,:end,s[start:end]
library/string,,:end,s[start:end]
library/subprocess,,`,"output=`mycmd myarg`"
library/subprocess,,`,"output=`dmesg | grep hda`"
library/subprocess,,`,"output=`mycmd myarg`"
library/tarfile,,:bz2,
library/tarfile,,:compression,filemode[:compression]
library/tarfile,,:gz,
library/tarfile,,:bz2,
library/tarfile,,:xz,'a:xz'
library/tarfile,,:xz,'r:xz'
library/tarfile,,:xz,'w:xz'
library/time,,:mm,
library/time,,:ss,
library/turtle,,::,Example::
library/urllib,,:port,:port
library/urllib2,,:password,"""joe:password@python.org"""
library/urllib,,:port,:port
library/urllib.request,,:close,Connection:close
library/urllib.request,,:lang,"xmlns=""http://www.w3.org/1999/xhtml"" xml:lang=""en"" lang=""en"">\n\n<head>\n"
library/urllib.request,,:password,"""joe:password@python.org"""
library/uuid,,:uuid,urn:uuid:12345678-1234-5678-1234-567812345678
library/xmlrpclib,,:pass,http://user:pass@host:port/path
library/xmlrpclib,,:pass,user:pass
library/xmlrpclib,,:port,http://user:pass@host:port/path
library/xmlrpc.client,,:pass,http://user:pass@host:port/path
library/xmlrpc.client,,:pass,user:pass
library/xmlrpc.client,,:port,http://user:pass@host:port/path
license,,`,"``Software''), to deal in the Software without restriction, including"
license,,`,"THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND,"
license,,`,* THIS SOFTWARE IS PROVIDED BY ERIC YOUNG ``AS IS'' AND
license,,`,THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
license,,`,* THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY
license,,`,THIS SOFTWARE IS PROVIDED BY THE PROJECT AND CONTRIBUTORS ``AS IS'' AND
license,,:zooko,mailto:zooko@zooko.com
license,,`,THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
reference/datamodel,,:step,a[i:j:step]
packaging/examples,,`,This is the description of the ``foobar`` project.
packaging/setupcfg,,::,Development Status :: 3 - Alpha
packaging/setupcfg,,::,License :: OSI Approved :: Mozilla Public License 1.1 (MPL 1.1)
packaging/setupscript,,::,"'Development Status :: 4 - Beta',"
packaging/setupscript,,::,"'Environment :: Console',"
packaging/setupscript,,::,"'Environment :: Web Environment',"
packaging/setupscript,,::,"'Intended Audience :: Developers',"
packaging/setupscript,,::,"'Intended Audience :: End Users/Desktop',"
packaging/setupscript,,::,"'Intended Audience :: System Administrators',"
packaging/setupscript,,::,"'License :: OSI Approved :: Python Software Foundation License',"
packaging/setupscript,,::,"'Operating System :: MacOS :: MacOS X',"
packaging/setupscript,,::,"'Operating System :: Microsoft :: Windows',"
packaging/setupscript,,::,"'Operating System :: POSIX',"
packaging/setupscript,,::,"'Programming Language :: Python',"
packaging/setupscript,,::,"'Topic :: Communications :: Email',"
packaging/setupscript,,::,"'Topic :: Office/Business',"
packaging/setupscript,,::,"'Topic :: Software Development :: Bug Tracking',"
packaging/tutorial,,::,1) License :: OSI Approved :: GNU General Public License (GPL)
packaging/tutorial,,::,2) License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)
packaging/tutorial,,::,classifier = Development Status :: 3 - Alpha
packaging/tutorial,,::,License :: OSI Approved :: GNU General Public License (GPL)
packaging/tutorial,,::,Type the number of the license you wish to use or ? to try again:: 1
reference/datamodel,,:max,
reference/expressions,,:index,x[index:index]
reference/datamodel,,:step,a[i:j:step]
reference/expressions,,:datum,{key:datum...}
reference/expressions,,`,`expressions...`
reference/expressions,,:index,x[index:index]
reference/grammar,,:output,#diagram:output
reference/grammar,,:rules,#diagram:rules
reference/grammar,,:token,#diagram:token
reference/grammar,,`,'`' testlist1 '`'
reference/lexical_analysis,,:fileencoding,# vim:fileencoding=<encoding-name>
reference/grammar,,:token,#diagram:token
reference/lexical_analysis,,`,", : . ` = ;"
tutorial/datastructures,,:value,key:value pairs within the braces adds initial key:value pairs
reference/lexical_analysis,,`,$       ?       `
reference/lexical_analysis,,:fileencoding,# vim:fileencoding=<encoding-name>
tutorial/datastructures,,:value,It is also possible to delete a key:value
tutorial/stdlib2,,:start,"fields = struct.unpack('<IIIHH', data[start:start+16])"
tutorial/stdlib2,,:start,extra = data[start:start+extra_size]
tutorial/stdlib2,,:start,filename = data[start:start+filenamesize]
tutorial/datastructures,,:value,key:value pairs within the braces adds initial key:value pairs
tutorial/stdlib2,,:config,"logging.warning('Warning:config file %s not found', 'server.conf')"
tutorial/stdlib2,,:config,WARNING:root:Warning:config file server.conf not found
tutorial/stdlib2,,:Critical,CRITICAL:root:Critical error -- shutting down
@@ -141,15 +251,16 @@ tutorial/stdlib2,,:Error,ERROR:root:Error occurred
tutorial/stdlib2,,:root,CRITICAL:root:Critical error -- shutting down
tutorial/stdlib2,,:root,ERROR:root:Error occurred
tutorial/stdlib2,,:root,WARNING:root:Warning:config file server.conf not found
tutorial/stdlib2,,:start,extra = data[start:start+extra_size]
tutorial/stdlib2,,:start,"fields = struct.unpack('<IIIHH', data[start:start+16])"
tutorial/stdlib2,,:start,filename = data[start:start+filenamesize]
tutorial/stdlib2,,:Warning,WARNING:root:Warning:config file server.conf not found
using/cmdline,,:line,file:line: category: message
using/cmdline,,:category,action:message:category:module:line
using/cmdline,,:errorhandler,:errorhandler
using/cmdline,,:line,action:message:category:module:line
using/cmdline,,:line,file:line: category: message
using/cmdline,,:message,action:message:category:module:line
using/cmdline,,:module,action:message:category:module:line
using/cmdline,,:errorhandler,:errorhandler
using/windows,162,`,`` this fixes syntax highlighting errors in some editors due to the \\\\ hackery
using/windows,170,`,``
whatsnew/2.0,418,:len,
whatsnew/2.3,,::,
whatsnew/2.3,,:config,
@@ -163,135 +274,26 @@ whatsnew/2.4,,:System,
whatsnew/2.5,,:memory,:memory:
whatsnew/2.5,,:step,[start:stop:step]
whatsnew/2.5,,:stop,[start:stop:step]
distutils/examples,267,`,This is the description of the ``foobar`` package.
faq/programming,,:reduce,"print((lambda Ru,Ro,Iu,Io,IM,Sx,Sy:reduce(lambda x,y:x+y,map(lambda y,"
faq/programming,,:reduce,"Sx=Sx,Sy=Sy:reduce(lambda x,y:x+y,map(lambda x,xc=Ru,yc=yc,Ru=Ru,Ro=Ro,"
faq/programming,,:chr,">=4.0) or 1+f(xc,yc,x*x-y*y+xc,2.0*x*y+yc,k-1,f):f(xc,yc,x,y,k,f):chr("
faq/programming,,::,for x in sequence[::-1]:
faq/windows,229,:EOF,@setlocal enableextensions & python -x %~f0 %* & goto :EOF
faq/windows,393,:REG,.py :REG_SZ: c:\<path to python>\python.exe -u %s %s
library/bisect,32,:hi,all(val >= x for val in a[i:hi])
library/bisect,42,:hi,all(val > x for val in a[i:hi])
library/http.client,52,:port,host:port
library/nntplib,,:bytes,:bytes
library/nntplib,,:lines,:lines
library/nntplib,,:lines,"['xref', 'from', ':lines', ':bytes', 'references', 'date', 'message-id', 'subject']"
library/nntplib,,:bytes,"['xref', 'from', ':lines', ':bytes', 'references', 'date', 'message-id', 'subject']"
library/pickle,,:memory,"conn = sqlite3.connect("":memory:"")"
library/profile,,:lineno,"(sort by filename:lineno),"
library/socket,,::,"(10, 1, 6, '', ('2001:888:2000:d::a2', 80, 0, 0))]"
library/stdtypes,,:end,s[start:end]
library/stdtypes,,:end,s[start:end]
library/urllib.request,,:close,Connection:close
library/urllib.request,,:password,"""joe:password@python.org"""
library/urllib.request,,:lang,"xmlns=""http://www.w3.org/1999/xhtml"" xml:lang=""en"" lang=""en"">\n\n<head>\n"
library/xmlrpc.client,103,:pass,http://user:pass@host:port/path
library/xmlrpc.client,103,:port,http://user:pass@host:port/path
library/xmlrpc.client,103,:pass,user:pass
license,,`,* THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY
license,,`,* THIS SOFTWARE IS PROVIDED BY ERIC YOUNG ``AS IS'' AND
license,,`,"``Software''), to deal in the Software without restriction, including"
license,,`,"THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND,"
reference/lexical_analysis,704,`,$       ?       `
whatsnew/2.7,735,:Sunday,'2009:4:Sunday'
whatsnew/2.7,862,::,"export PYTHONWARNINGS=all,error:::Cookie:0"
whatsnew/2.7,862,:Cookie,"export PYTHONWARNINGS=all,error:::Cookie:0"
whatsnew/2.7,1619,::,>>> urlparse.urlparse('http://[1080::8:800:200C:417A]/foo')
whatsnew/2.7,1619,::,"ParseResult(scheme='http', netloc='[1080::8:800:200C:417A]',"
library/configparser,,`,# Set the optional `raw` argument of get() to True if you wish to disable
library/configparser,,`,# The optional `vars` argument is a dict with members that will take
library/configparser,,`,# The optional `fallback` argument can be used to provide a fallback value
library/configparser,,:option,${section:option}
library/configparser,,:system,path: ${Common:system_dir}/Library/Frameworks/
library/configparser,,:home,my_dir: ${Common:home_dir}/twosheds
library/configparser,,:path,python_dir: ${Frameworks:path}/Python/Versions/${Frameworks:Python}
library/configparser,,:Python,python_dir: ${Frameworks:path}/Python/Versions/${Frameworks:Python}
library/pdb,,:lineno,[filename:lineno | bpnumber [bpnumber ...]]
library/pdb,,:lineno,filename:lineno
library/logging,,:Watch,WARNING:root:Watch out!
library/logging,,:So,INFO:root:So should this
library/logging,,:Started,INFO:root:Started
library/logging,,:Doing,INFO:root:Doing something
library/logging,,:Finished,INFO:root:Finished
library/logging,,:Look,WARNING:root:Look before you leap!
library/logging,,:So,INFO:So should this
library/logging,,:logger,severity:logger name:message
library/logging,,:message,severity:logger name:message
whatsnew/3.2,,:directory,...    ${buildout:directory}/downloads/dist
whatsnew/3.2,,:location,...    zope9-location = ${zope9:location}
whatsnew/3.2,,:prefix,...    zope-conf = ${custom:prefix}/etc/zope.conf
howto/logging,,:root,WARNING:root:Watch out!
howto/logging,,:Watch,WARNING:root:Watch out!
howto/logging,,:root,DEBUG:root:This message should go to the log file
howto/logging,,:This,DEBUG:root:This message should go to the log file
howto/logging,,:root,INFO:root:So should this
howto/logging,,:So,INFO:root:So should this
howto/logging,,:root,"WARNING:root:And this, too"
howto/logging,,:And,"WARNING:root:And this, too"
howto/logging,,:root,INFO:root:Started
howto/logging,,:Started,INFO:root:Started
howto/logging,,:root,INFO:root:Doing something
howto/logging,,:Doing,INFO:root:Doing something
howto/logging,,:root,INFO:root:Finished
howto/logging,,:Finished,INFO:root:Finished
howto/logging,,:root,WARNING:root:Look before you leap!
howto/logging,,:Look,WARNING:root:Look before you leap!
howto/logging,,:This,DEBUG:This message should appear on the console
howto/logging,,:So,INFO:So should this
howto/logging,,:And,"WARNING:And this, too"
howto/logging,,:logger,severity:logger name:message
howto/logging,,:message,severity:logger name:message
library/logging.handlers,,:port,host:port
library/imaplib,116,:MM,"""DD-Mmm-YYYY HH:MM:SS"
library/imaplib,116,:SS,"""DD-Mmm-YYYY HH:MM:SS"
whatsnew/3.2,,::,"$ export PYTHONWARNINGS='ignore::RuntimeWarning::,once::UnicodeWarning::'"
howto/pyporting,75,::,# make sure to use :: Python *and* :: Python :: 3 so
howto/pyporting,75,::,"'Programming Language :: Python',"
howto/pyporting,75,::,'Programming Language :: Python :: 3'
whatsnew/3.2,,:gz,">>> with tarfile.open(name='myarchive.tar.gz', mode='w:gz') as tf:"
whatsnew/3.2,,:directory,${buildout:directory}/downloads/dist
whatsnew/3.2,,:location,zope9-location = ${zope9:location}
whatsnew/3.2,,:prefix,zope-conf = ${custom:prefix}/etc/zope.conf
whatsnew/3.2,,:beef,>>> urllib.parse.urlparse('http://[dead:beef:cafe:5417:affe:8FA3:deaf:feed]/foo/')
whatsnew/3.2,,:cafe,>>> urllib.parse.urlparse('http://[dead:beef:cafe:5417:affe:8FA3:deaf:feed]/foo/')
whatsnew/3.2,,:affe,>>> urllib.parse.urlparse('http://[dead:beef:cafe:5417:affe:8FA3:deaf:feed]/foo/')
whatsnew/3.2,,:deaf,>>> urllib.parse.urlparse('http://[dead:beef:cafe:5417:affe:8FA3:deaf:feed]/foo/')
whatsnew/3.2,,:feed,>>> urllib.parse.urlparse('http://[dead:beef:cafe:5417:affe:8FA3:deaf:feed]/foo/')
whatsnew/3.2,,:beef,"netloc='[dead:beef:cafe:5417:affe:8FA3:deaf:feed]',"
whatsnew/3.2,,:cafe,"netloc='[dead:beef:cafe:5417:affe:8FA3:deaf:feed]',"
whatsnew/2.7,1619,::,>>> urlparse.urlparse('http://[1080::8:800:200C:417A]/foo')
whatsnew/2.7,735,:Sunday,'2009:4:Sunday'
whatsnew/2.7,862,:Cookie,"export PYTHONWARNINGS=all,error:::Cookie:0"
whatsnew/2.7,862,::,"export PYTHONWARNINGS=all,error:::Cookie:0"
whatsnew/3.2,,:affe,"netloc='[dead:beef:cafe:5417:affe:8FA3:deaf:feed]',"
whatsnew/3.2,,:affe,>>> urllib.parse.urlparse('http://[dead:beef:cafe:5417:affe:8FA3:deaf:feed]/foo/')
whatsnew/3.2,,:beef,"netloc='[dead:beef:cafe:5417:affe:8FA3:deaf:feed]',"
whatsnew/3.2,,:beef,>>> urllib.parse.urlparse('http://[dead:beef:cafe:5417:affe:8FA3:deaf:feed]/foo/')
whatsnew/3.2,,:cafe,"netloc='[dead:beef:cafe:5417:affe:8FA3:deaf:feed]',"
whatsnew/3.2,,:cafe,>>> urllib.parse.urlparse('http://[dead:beef:cafe:5417:affe:8FA3:deaf:feed]/foo/')
whatsnew/3.2,,:deaf,"netloc='[dead:beef:cafe:5417:affe:8FA3:deaf:feed]',"
whatsnew/3.2,,:deaf,>>> urllib.parse.urlparse('http://[dead:beef:cafe:5417:affe:8FA3:deaf:feed]/foo/')
whatsnew/3.2,,:directory,...    ${buildout:directory}/downloads/dist
whatsnew/3.2,,:directory,${buildout:directory}/downloads/dist
whatsnew/3.2,,::,"$ export PYTHONWARNINGS='ignore::RuntimeWarning::,once::UnicodeWarning::'"
whatsnew/3.2,,:feed,"netloc='[dead:beef:cafe:5417:affe:8FA3:deaf:feed]',"
library/pprint,209,::,"'classifiers': ['Development Status :: 4 - Beta',"
library/pprint,209,::,"'Intended Audience :: Developers',"
library/pprint,209,::,"'License :: OSI Approved :: MIT License',"
library/pprint,209,::,"'Natural Language :: English',"
library/pprint,209,::,"'Operating System :: OS Independent',"
library/pprint,209,::,"'Programming Language :: Python',"
library/pprint,209,::,"'Programming Language :: Python :: 2',"
library/pprint,209,::,"'Programming Language :: Python :: 2.6',"
library/pprint,209,::,"'Programming Language :: Python :: 2.7',"
library/pprint,209,::,"'Topic :: Software Development :: Libraries',"
library/pprint,209,::,"'Topic :: Software Development :: Libraries :: Python Modules'],"
packaging/examples,,`,This is the description of the ``foobar`` project.
packaging/setupcfg,,::,Development Status :: 3 - Alpha
packaging/setupcfg,,::,License :: OSI Approved :: Mozilla Public License 1.1 (MPL 1.1)
packaging/setupscript,,::,"'Development Status :: 4 - Beta',"
packaging/setupscript,,::,"'Environment :: Console',"
packaging/setupscript,,::,"'Environment :: Web Environment',"
packaging/setupscript,,::,"'Intended Audience :: End Users/Desktop',"
packaging/setupscript,,::,"'Intended Audience :: Developers',"
packaging/setupscript,,::,"'Intended Audience :: System Administrators',"
packaging/setupscript,,::,"'License :: OSI Approved :: Python Software Foundation License',"
packaging/setupscript,,::,"'Operating System :: MacOS :: MacOS X',"
packaging/setupscript,,::,"'Operating System :: Microsoft :: Windows',"
packaging/setupscript,,::,"'Operating System :: POSIX',"
packaging/setupscript,,::,"'Programming Language :: Python',"
packaging/setupscript,,::,"'Topic :: Communications :: Email',"
packaging/setupscript,,::,"'Topic :: Office/Business',"
packaging/setupscript,,::,"'Topic :: Software Development :: Bug Tracking',"
packaging/tutorial,,::,1) License :: OSI Approved :: GNU General Public License (GPL)
packaging/tutorial,,::,2) License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)
packaging/tutorial,,::,Type the number of the license you wish to use or ? to try again:: 1
packaging/tutorial,,::,classifier = Development Status :: 3 - Alpha
packaging/tutorial,,::,License :: OSI Approved :: GNU General Public License (GPL)
whatsnew/3.2,,:feed,>>> urllib.parse.urlparse('http://[dead:beef:cafe:5417:affe:8FA3:deaf:feed]/foo/')
whatsnew/3.2,,:gz,">>> with tarfile.open(name='myarchive.tar.gz', mode='w:gz') as tf:"
whatsnew/3.2,,:location,...    zope9-location = ${zope9:location}
whatsnew/3.2,,:location,zope9-location = ${zope9:location}
whatsnew/3.2,,:prefix,...    zope-conf = ${custom:prefix}/etc/zope.conf
whatsnew/3.2,,:prefix,zope-conf = ${custom:prefix}/etc/zope.conf
@@ -49,6 +49,8 @@
This article explains the new features in Python 3.3, compared to 3.2.


.. _pep-3118-update:

PEP 3118: New memoryview implementation and buffer protocol documentation
==========================================================================
@@ -85,7 +87,9 @@ Features
* Multi-dimensional comparisons are supported for any array type.

* All array types are hashable if the exporting object is hashable
  and the view is read-only. (Contributed by Antoine Pitrou in
  :issue:`13411`)

* Arbitrary slicing of any 1-D array type is supported. For example, it
  is now possible to reverse a memoryview in O(1) by using a negative step.
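The two bullet points above (hashable read-only views, and O(1) reversal via a negative step) can be exercised in a short sketch; this assumes nothing beyond a CPython 3.3+ interpreter:

```python
# A memoryview over bytes is read-only, so it is hashable in 3.3+,
# and it hashes equal to the bytes it exposes.
v = memoryview(b"abcefg")
assert hash(v) == hash(b"abcefg")

# Reversing is O(1): the slice only flips the stride; no bytes are copied.
data = bytearray(b"abcdef")
rev = memoryview(data)[::-1]
assert bytes(rev) == b"fedcba"
```

Note that a view over a *mutable* buffer (the ``bytearray`` case) still supports the negative-step slice; it is only hashing that requires a read-only view.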
@@ -167,19 +171,16 @@ The storage of Unicode strings now depends on the highest codepoint in the string

* non-BMP strings (``U+10000-U+10FFFF``) use 4 bytes per codepoint.
The net effect is that for most applications, memory usage of string
storage should decrease significantly - especially compared to former
wide unicode builds - as, in many cases, strings will be pure ASCII
even in international contexts (because many strings store non-human
language data, such as XML fragments, HTTP headers, JSON-encoded data,
etc.). We also hope that it will, for the same reasons, increase CPU
cache efficiency on non-trivial applications. The memory usage of
Python 3.3 is two to three times smaller than Python 3.2, and a little
bit better than Python 2.7, on a Django benchmark (see the PEP for
details).

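The storage difference described above is directly observable with ``sys.getsizeof``; the exact byte counts vary by build and version, so this sketch only checks the ordering, not specific sizes:

```python
import sys

ascii_s = "a" * 100           # 1 byte per code point under PEP 393
non_bmp = "\U0001F600" * 100  # 4 bytes per code point (outside the BMP)

# Same length in code points, very different storage footprint.
assert len(ascii_s) == len(non_bmp) == 100
assert sys.getsizeof(ascii_s) < sys.getsizeof(non_bmp)
```
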
PEP 3151: Reworking the OS and IO exception hierarchy
|
||||
|
@@ -261,9 +262,56 @@ part of its operations to another generator. This allows a section of code
containing 'yield' to be factored out and placed in another generator.
Additionally, the subgenerator is allowed to return with a value, and the
value is made available to the delegating generator.

While designed primarily for use in delegating to a subgenerator, the ``yield
from`` expression actually allows delegation to arbitrary subiterators.

For simple iterators, ``yield from iterable`` is essentially just a shortened
form of ``for item in iterable: yield item``::

    >>> def g(x):
    ...     yield from range(x, 0, -1)
    ...     yield from range(x)
    ...
    >>> list(g(5))
    [5, 4, 3, 2, 1, 0, 1, 2, 3, 4]

However, unlike an ordinary loop, ``yield from`` allows subgenerators to
receive sent and thrown values directly from the calling scope, and
return a final value to the outer generator::

    >>> def accumulate(start=0):
    ...     tally = start
    ...     while 1:
    ...         next = yield
    ...         if next is None:
    ...             return tally
    ...         tally += next
    ...
    >>> def gather_tallies(tallies, start=0):
    ...     while 1:
    ...         tally = yield from accumulate()
    ...         tallies.append(tally)
    ...
    >>> tallies = []
    >>> acc = gather_tallies(tallies)
    >>> next(acc)  # Ensure the accumulator is ready to accept values
    >>> for i in range(10):
    ...     acc.send(i)
    ...
    >>> acc.send(None)  # Finish the first tally
    >>> for i in range(5):
    ...     acc.send(i)
    ...
    >>> acc.send(None)  # Finish the second tally
    >>> tallies
    [45, 10]

The main principle driving this change is to allow even generators that are
designed to be used with the ``send`` and ``throw`` methods to be split into
multiple subgenerators as easily as a single large function can be split into
multiple subfunctions.

(Implementation by Greg Ewing, integrated into 3.3 by Renaud Blanch, Ryan
Kelly and Nick Coghlan, documentation by Zbigniew Jędrzejewski-Szmek and
Nick Coghlan)
@@ -330,6 +378,21 @@ suppressed valuable underlying details)::

    KeyError('x',)


PEP 414: Explicit Unicode literals
==================================

:pep:`414` - Explicit Unicode literals
   PEP written by Armin Ronacher.

To ease the transition from Python 2 for Unicode aware Python applications
that make heavy use of Unicode literals, Python 3.3 once again supports the
"``u``" prefix for string literals. This prefix has no semantic significance
in Python 3; it is provided solely to reduce the number of purely mechanical
changes in migrating to Python 3, making it easier for developers to focus on
the more significant semantic changes (such as the stricter default
separation of binary and text data).


PEP 3155: Qualified name for classes and functions
==================================================
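The restored prefix is purely mechanical sugar; a minimal check of the behaviour described above (any Python 3.3+ interpreter):

```python
# The "u" prefix is accepted again and has no semantic effect:
# the literal is an ordinary str, identical to the unprefixed one.
s = u"résumé"
assert s == "résumé"
assert type(s) is str
```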
@@ -411,10 +474,6 @@ Some smaller changes made to the core Python language are:

  (:issue:`12170`)

* Memoryview objects are now hashable when the underlying object is hashable.

  (Contributed by Antoine Pitrou in :issue:`13411`)


New and Improved Modules
========================
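The memoryview change noted above is easy to observe; a small sketch (CPython 3.3 or later assumed):

```python
# A memoryview over a hashable exporter (here, read-only bytes) is
# itself hashable, with the same hash as the underlying bytes.
data = b"abc"
view = memoryview(data)
same_hash = hash(view) == hash(data)
lookup = {view: "found"}[memoryview(b"abc")]  # usable as a dict key
```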
@@ -1026,7 +1026,7 @@ xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx*/

     PyAPI_FUNC(PyObject *) PySequence_Fast(PyObject *o, const char* m);
       /*
         Returns the sequence, o, as a tuple, unless it's already a
         Returns the sequence, o, as a list, unless it's already a
         tuple or list.  Use PySequence_Fast_GET_ITEM to access the
         members of this list, and PySequence_Fast_GET_SIZE to get its length.
@ -20,10 +20,10 @@
|
|||
#define PY_MINOR_VERSION 3
|
||||
#define PY_MICRO_VERSION 0
|
||||
#define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_ALPHA
|
||||
#define PY_RELEASE_SERIAL 0
|
||||
#define PY_RELEASE_SERIAL 1
|
||||
|
||||
/* Version as a string */
|
||||
#define PY_VERSION "3.3.0a0"
|
||||
#define PY_VERSION "3.3.0a1+"
|
||||
/*--end constants--*/
|
||||
|
||||
/* Version as a single 4-byte hex number, e.g. 0x010502B2 == 1.5.2b2.
|
||||
|
|
|
@@ -3,6 +3,7 @@
#define Py_PYTIME_H

#include "pyconfig.h" /* include for defines */
#include "object.h"

/**************************************************************************
Symbols and macros to supply platform-independent interfaces to time related

@@ -37,6 +38,16 @@ do { \
        ((tv_end.tv_sec - tv_start.tv_sec) + \
         (tv_end.tv_usec - tv_start.tv_usec) * 0.000001)

#ifndef Py_LIMITED_API
/* Convert a number of seconds, int or float, to a timespec structure.
   nsec is always in the range [0; 999999999]. For example, -1.2 is converted
   to (-2, 800000000). */
PyAPI_FUNC(int) _PyTime_ObjectToTimespec(
    PyObject *obj,
    time_t *sec,
    long *nsec);
#endif

/* Dummy to force linking. */
PyAPI_FUNC(void) _PyTime_Init(void);
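The rounding rule documented for `_PyTime_ObjectToTimespec` can be modelled in a few lines of Python (an illustrative sketch only; `to_timespec` is an invented name, not the C implementation):

```python
import math

def to_timespec(secs):
    # Split seconds into (sec, nsec) with nsec always in [0, 999999999],
    # matching the comment above: -1.2 maps to (-2, 800000000).
    sec = math.floor(secs)
    nsec = round((secs - sec) * 10**9)
    if nsec >= 10**9:  # guard against float rounding pushing nsec over
        sec += 1
        nsec -= 10**9
    return int(sec), int(nsec)
```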
@@ -499,17 +499,14 @@ enum PyUnicode_Kind {
    do { \
        switch ((kind)) { \
        case PyUnicode_1BYTE_KIND: { \
            assert(value <= 0xff); \
            ((Py_UCS1 *)(data))[(index)] = (Py_UCS1)(value); \
            break; \
        } \
        case PyUnicode_2BYTE_KIND: { \
            assert(value <= 0xffff); \
            ((Py_UCS2 *)(data))[(index)] = (Py_UCS2)(value); \
            break; \
        } \
        default: { \
            assert(value <= 0x10ffff); \
            assert((kind) == PyUnicode_4BYTE_KIND); \
            ((Py_UCS4 *)(data))[(index)] = (Py_UCS4)(value); \
        } \
LICENSE

@@ -73,7 +73,7 @@ the various releases.
    3.2             3.1         2011        PSF         yes
    3.2.1           3.2         2011        PSF         yes
    3.2.2           3.2.1       2011        PSF         yes
    3.3             3.2         2012        PSF         yes
    3.3.0           3.2         2012        PSF         yes

Footnotes:
@@ -114,36 +114,21 @@ class WeakSet:
    def update(self, other):
        if self._pending_removals:
            self._commit_removals()
        if isinstance(other, self.__class__):
            self.data.update(other.data)
        else:
            for element in other:
                self.add(element)
        for element in other:
            self.add(element)

    def __ior__(self, other):
        self.update(other)
        return self

    # Helper functions for simple delegating methods.
    def _apply(self, other, method):
        if not isinstance(other, self.__class__):
            other = self.__class__(other)
        newdata = method(other.data)
        newset = self.__class__()
        newset.data = newdata
        return newset

    def difference(self, other):
        return self._apply(other, self.data.difference)
        newset = self.copy()
        newset.difference_update(other)
        return newset
    __sub__ = difference

    def difference_update(self, other):
        if self._pending_removals:
            self._commit_removals()
        if self is other:
            self.data.clear()
        else:
            self.data.difference_update(ref(item) for item in other)
        self.__isub__(other)
    def __isub__(self, other):
        if self._pending_removals:
            self._commit_removals()
@@ -154,13 +139,11 @@ class WeakSet:
        return self

    def intersection(self, other):
        return self._apply(other, self.data.intersection)
        return self.__class__(item for item in other if item in self)
    __and__ = intersection

    def intersection_update(self, other):
        if self._pending_removals:
            self._commit_removals()
        self.data.intersection_update(ref(item) for item in other)
        self.__iand__(other)
    def __iand__(self, other):
        if self._pending_removals:
            self._commit_removals()
@@ -169,17 +152,17 @@ class WeakSet:

    def issubset(self, other):
        return self.data.issubset(ref(item) for item in other)
    __lt__ = issubset
    __le__ = issubset

    def __le__(self, other):
        return self.data <= set(ref(item) for item in other)
    def __lt__(self, other):
        return self.data < set(ref(item) for item in other)

    def issuperset(self, other):
        return self.data.issuperset(ref(item) for item in other)
    __gt__ = issuperset
    __ge__ = issuperset

    def __ge__(self, other):
        return self.data >= set(ref(item) for item in other)
    def __gt__(self, other):
        return self.data > set(ref(item) for item in other)

    def __eq__(self, other):
        if not isinstance(other, self.__class__):
@@ -187,27 +170,24 @@ class WeakSet:
        return self.data == set(ref(item) for item in other)

    def symmetric_difference(self, other):
        return self._apply(other, self.data.symmetric_difference)
        newset = self.copy()
        newset.symmetric_difference_update(other)
        return newset
    __xor__ = symmetric_difference

    def symmetric_difference_update(self, other):
        if self._pending_removals:
            self._commit_removals()
        if self is other:
            self.data.clear()
        else:
            self.data.symmetric_difference_update(ref(item) for item in other)
        self.__ixor__(other)
    def __ixor__(self, other):
        if self._pending_removals:
            self._commit_removals()
        if self is other:
            self.data.clear()
        else:
            self.data.symmetric_difference_update(ref(item) for item in other)
            self.data.symmetric_difference_update(ref(item, self._remove) for item in other)
        return self

    def union(self, other):
        return self._apply(other, self.data.union)
        return self.__class__(e for s in (self, other) for e in s)
    __or__ = union

    def isdisjoint(self, other):
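The rewritten `WeakSet` operations mirror the built-in `set` API while holding only weak references; a quick sketch (the `Node` class is illustrative):

```python
import weakref

class Node:
    """Plain class so instances support weak references."""

a, b = Node(), Node()
s1 = weakref.WeakSet([a, b])
s2 = weakref.WeakSet([b])

diff = s1 - s2    # difference() now copies, then difference_update()s
union = s1 | s2   # union() accepts any iterable of weak-referenceable items
```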
@@ -50,7 +50,8 @@ import os

from concurrent.futures import _base
import queue
import multiprocessing
from multiprocessing.queues import SimpleQueue, SentinelReady, Full
from multiprocessing.queues import SimpleQueue, Full
from multiprocessing.connection import wait
import threading
import weakref


@@ -212,6 +213,8 @@ def _queue_management_worker(executor_reference,
        for p in processes.values():
            p.join()

    reader = result_queue._reader

    while True:
        _add_call_item_to_queue(pending_work_items,
                                work_ids_queue,
@@ -219,9 +222,10 @@ def _queue_management_worker(executor_reference,

        sentinels = [p.sentinel for p in processes.values()]
        assert sentinels
        try:
            result_item = result_queue.get(sentinels=sentinels)
        except SentinelReady:
        ready = wait([reader] + sentinels)
        if reader in ready:
            result_item = reader.recv()
        else:
            # Mark the process pool broken so that submits fail right now.
            executor = executor_reference()
            if executor is not None:
@@ -13,5 +13,5 @@ used from a setup script as
# Updated automatically by the Python release process.
#
#--start constants--
__version__ = "3.3a0"
__version__ = "3.3.0a1"
#--end constants--
@@ -260,7 +260,7 @@ class bdist_msi(Command):
            self.db.Commit()

        if hasattr(self.distribution, 'dist_files'):
            tup = 'bdist_msi', self.target_version or 'any', fullname
            tup = 'bdist_msi', self.target_version or 'any', installer_name
            self.distribution.dist_files.append(tup)

        if not self.keep_temp:
@@ -1,12 +1,12 @@
"""Tests for distutils.command.bdist_msi."""
import unittest
import os
import sys

import unittest
from test.support import run_unittest

from distutils.tests import support

@unittest.skipUnless(sys.platform=="win32", "These tests are only for win32")

@unittest.skipUnless(sys.platform == 'win32', 'these tests require Windows')
class BDistMSITestCase(support.TempdirManager,
                       support.LoggingSilencer,
                       unittest.TestCase):

@@ -14,9 +14,18 @@ class BDistMSITestCase(support.TempdirManager,
    def test_minimal(self):
        # minimal test XXX need more tests
        from distutils.command.bdist_msi import bdist_msi
        pkg_pth, dist = self.create_dist()
        project_dir, dist = self.create_dist()
        cmd = bdist_msi(dist)
        cmd.ensure_finalized()
        cmd.run()

        bdists = os.listdir(os.path.join(project_dir, 'dist'))
        self.assertEqual(bdists, ['foo-0.1.msi'])

        # bug #13719: upload ignores bdist_msi files
        self.assertEqual(dist.dist_files,
                         [('bdist_msi', 'any', 'dist/foo-0.1.msi')])


def test_suite():
    return unittest.makeSuite(BDistMSITestCase)
@@ -1 +1 @@
IDLE_VERSION = "3.3a0"
IDLE_VERSION = "3.3.0a1"
@@ -32,7 +32,7 @@
# SUCH DAMAGE.
#

__all__ = [ 'Client', 'Listener', 'Pipe' ]
__all__ = [ 'Client', 'Listener', 'Pipe', 'wait' ]

import io
import os

@@ -58,8 +58,6 @@ except ImportError:
        raise
    win32 = None

_select = _eintr_retry(select.select)

#
#
#

@@ -122,15 +120,6 @@ def address_type(address):
    else:
        raise ValueError('address type of %r unrecognized' % address)


class SentinelReady(Exception):
    """
    Raised when a sentinel is ready when polling.
    """
    def __init__(self, *args):
        Exception.__init__(self, *args)
        self.sentinels = args[0]

#
# Connection classes
#
@@ -268,11 +257,11 @@ class _ConnectionBase:
                                (offset + size) // itemsize])
        return size

    def recv(self, sentinels=None):
    def recv(self):
        """Receive a (picklable) object"""
        self._check_closed()
        self._check_readable()
        buf = self._recv_bytes(sentinels=sentinels)
        buf = self._recv_bytes()
        return pickle.loads(buf.getbuffer())

    def poll(self, timeout=0.0):
@@ -290,85 +279,80 @@ if win32:
        Overlapped I/O is used, so the handles must have been created
        with FILE_FLAG_OVERLAPPED.
        """
        _buffered = b''
        _got_empty_message = False

        def _close(self, _CloseHandle=win32.CloseHandle):
            _CloseHandle(self._handle)

        def _send_bytes(self, buf):
            overlapped = win32.WriteFile(self._handle, buf, overlapped=True)
            nwritten, complete = overlapped.GetOverlappedResult(True)
            assert complete
            ov, err = win32.WriteFile(self._handle, buf, overlapped=True)
            try:
                if err == win32.ERROR_IO_PENDING:
                    waitres = win32.WaitForMultipleObjects(
                        [ov.event], False, INFINITE)
                    assert waitres == WAIT_OBJECT_0
            except:
                ov.cancel()
                raise
            finally:
                nwritten, err = ov.GetOverlappedResult(True)
            assert err == 0
            assert nwritten == len(buf)

        def _recv_bytes(self, maxsize=None, sentinels=()):
            if sentinels:
                self._poll(-1.0, sentinels)
            buf = io.BytesIO()
            firstchunk = self._buffered
            if firstchunk:
                lenfirstchunk = len(firstchunk)
                buf.write(firstchunk)
                self._buffered = b''
        def _recv_bytes(self, maxsize=None):
            if self._got_empty_message:
                self._got_empty_message = False
                return io.BytesIO()
            else:
                # A reasonable size for the first chunk transfer
                bufsize = 128
                if maxsize is not None and maxsize < bufsize:
                    bufsize = maxsize
                bsize = 128 if maxsize is None else min(maxsize, 128)
                try:
                    overlapped = win32.ReadFile(self._handle, bufsize, overlapped=True)
                    lenfirstchunk, complete = overlapped.GetOverlappedResult(True)
                    firstchunk = overlapped.getbuffer()
                    assert lenfirstchunk == len(firstchunk)
                    ov, err = win32.ReadFile(self._handle, bsize,
                                             overlapped=True)
                    try:
                        if err == win32.ERROR_IO_PENDING:
                            waitres = win32.WaitForMultipleObjects(
                                [ov.event], False, INFINITE)
                            assert waitres == WAIT_OBJECT_0
                    except:
                        ov.cancel()
                        raise
                    finally:
                        nread, err = ov.GetOverlappedResult(True)
                        if err == 0:
                            f = io.BytesIO()
                            f.write(ov.getbuffer())
                            return f
                        elif err == win32.ERROR_MORE_DATA:
                            return self._get_more_data(ov, maxsize)
                except IOError as e:
                    if e.winerror == win32.ERROR_BROKEN_PIPE:
                        raise EOFError
                    raise
            buf.write(firstchunk)
            if complete:
                return buf
            navail, nleft = win32.PeekNamedPipe(self._handle)
            if maxsize is not None and lenfirstchunk + nleft > maxsize:
                return None
            if nleft > 0:
                overlapped = win32.ReadFile(self._handle, nleft, overlapped=True)
                res, complete = overlapped.GetOverlappedResult(True)
                assert res == nleft
                assert complete
                buf.write(overlapped.getbuffer())
            return buf
            else:
                raise
            raise RuntimeError("shouldn't get here; expected KeyboardInterrupt")

        def _poll(self, timeout, sentinels=()):
            # Fast non-blocking path
            navail, nleft = win32.PeekNamedPipe(self._handle)
            if navail > 0:
        def _poll(self, timeout):
            if (self._got_empty_message or
                win32.PeekNamedPipe(self._handle)[0] != 0):
                return True
            elif timeout == 0.0:
                return False
            # Blocking: use overlapped I/O
            if timeout < 0.0:
                timeout = INFINITE
            else:
                timeout = int(timeout * 1000 + 0.5)
            overlapped = win32.ReadFile(self._handle, 1, overlapped=True)
            try:
                handles = [overlapped.event]
                handles += sentinels
                res = win32.WaitForMultipleObjects(handles, False, timeout)
            finally:
                # Always cancel overlapped I/O in the same thread
                # (because CancelIoEx() appears only in Vista)
                overlapped.cancel()
            if res == WAIT_TIMEOUT:
                return False
            idx = res - WAIT_OBJECT_0
            if idx == 0:
                # I/O was successful, store received data
                overlapped.GetOverlappedResult(True)
                self._buffered += overlapped.getbuffer()
                return True
            assert 0 < idx < len(handles)
            raise SentinelReady([handles[idx]])
            if timeout < 0:
                timeout = None
            return bool(wait([self], timeout))

        def _get_more_data(self, ov, maxsize):
            buf = ov.getbuffer()
            f = io.BytesIO()
            f.write(buf)
            left = win32.PeekNamedPipe(self._handle)[1]
            assert left > 0
            if maxsize is not None and len(buf) + left > maxsize:
                self._bad_message_length()
            ov, err = win32.ReadFile(self._handle, left, overlapped=True)
            rbytes, err = ov.GetOverlappedResult(True)
            assert err == 0
            assert rbytes == left
            f.write(ov.getbuffer())
            return f


class Connection(_ConnectionBase):
@@ -397,17 +381,11 @@ class Connection(_ConnectionBase):
                    break
                buf = buf[n:]

    def _recv(self, size, sentinels=(), read=_read):
    def _recv(self, size, read=_read):
        buf = io.BytesIO()
        handle = self._handle
        if sentinels:
            handles = [handle] + sentinels
        remaining = size
        while remaining > 0:
            if sentinels:
                r = _select(handles, [], [])[0]
                if handle not in r:
                    raise SentinelReady(r)
            chunk = read(handle, remaining)
            n = len(chunk)
            if n == 0:
@@ -428,17 +406,17 @@ class Connection(_ConnectionBase):
        if n > 0:
            self._send(buf)

    def _recv_bytes(self, maxsize=None, sentinels=()):
        buf = self._recv(4, sentinels)
    def _recv_bytes(self, maxsize=None):
        buf = self._recv(4)
        size, = struct.unpack("!i", buf.getvalue())
        if maxsize is not None and size > maxsize:
            return None
        return self._recv(size, sentinels)
        return self._recv(size)

    def _poll(self, timeout):
        if timeout < 0.0:
            timeout = None
        r = _select([self._handle], [], [], timeout)[0]
        r = wait([self._handle], timeout)
        return bool(r)
@@ -559,7 +537,8 @@ else:
            )

        overlapped = win32.ConnectNamedPipe(h1, overlapped=True)
        overlapped.GetOverlappedResult(True)
        _, err = overlapped.GetOverlappedResult(True)
        assert err == 0

        c1 = PipeConnection(h1, writable=duplex)
        c2 = PipeConnection(h2, readable=duplex)
@@ -633,39 +612,40 @@ if sys.platform == 'win32':
        '''
        def __init__(self, address, backlog=None):
            self._address = address
            handle = win32.CreateNamedPipe(
                address, win32.PIPE_ACCESS_DUPLEX |
                win32.FILE_FLAG_FIRST_PIPE_INSTANCE,
                win32.PIPE_TYPE_MESSAGE | win32.PIPE_READMODE_MESSAGE |
                win32.PIPE_WAIT,
                win32.PIPE_UNLIMITED_INSTANCES, BUFSIZE, BUFSIZE,
                win32.NMPWAIT_WAIT_FOREVER, win32.NULL
                )
            self._handle_queue = [handle]
            self._handle_queue = [self._new_handle(first=True)]

            self._last_accepted = None

            sub_debug('listener created with address=%r', self._address)

            self.close = Finalize(
                self, PipeListener._finalize_pipe_listener,
                args=(self._handle_queue, self._address), exitpriority=0
                )

        def accept(self):
            newhandle = win32.CreateNamedPipe(
                self._address, win32.PIPE_ACCESS_DUPLEX,
        def _new_handle(self, first=False):
            flags = win32.PIPE_ACCESS_DUPLEX | win32.FILE_FLAG_OVERLAPPED
            if first:
                flags |= win32.FILE_FLAG_FIRST_PIPE_INSTANCE
            return win32.CreateNamedPipe(
                self._address, flags,
                win32.PIPE_TYPE_MESSAGE | win32.PIPE_READMODE_MESSAGE |
                win32.PIPE_WAIT,
                win32.PIPE_UNLIMITED_INSTANCES, BUFSIZE, BUFSIZE,
                win32.NMPWAIT_WAIT_FOREVER, win32.NULL
                )
            self._handle_queue.append(newhandle)

        def accept(self):
            self._handle_queue.append(self._new_handle())
            handle = self._handle_queue.pop(0)
            ov = win32.ConnectNamedPipe(handle, overlapped=True)
            try:
                win32.ConnectNamedPipe(handle, win32.NULL)
            except WindowsError as e:
                if e.winerror != win32.ERROR_PIPE_CONNECTED:
                    raise
                res = win32.WaitForMultipleObjects([ov.event], False, INFINITE)
            except:
                ov.cancel()
                win32.CloseHandle(handle)
                raise
            finally:
                _, err = ov.GetOverlappedResult(True)
                assert err == 0
            return PipeConnection(handle)

        @staticmethod
@@ -684,7 +664,8 @@ if sys.platform == 'win32':
            win32.WaitNamedPipe(address, 1000)
            h = win32.CreateFile(
                address, win32.GENERIC_READ | win32.GENERIC_WRITE,
                0, win32.NULL, win32.OPEN_EXISTING, 0, win32.NULL
                0, win32.NULL, win32.OPEN_EXISTING,
                win32.FILE_FLAG_OVERLAPPED, win32.NULL
                )
        except WindowsError as e:
            if e.winerror not in (win32.ERROR_SEM_TIMEOUT,
@@ -773,6 +754,125 @@ def XmlClient(*args, **kwds):
    import xmlrpc.client as xmlrpclib
    return ConnectionWrapper(Client(*args, **kwds), _xml_dumps, _xml_loads)

#
# Wait
#

if sys.platform == 'win32':

    def _exhaustive_wait(handles, timeout):
        # Return ALL handles which are currently signalled.  (Only
        # returning the first signalled might create starvation issues.)
        L = list(handles)
        ready = []
        while L:
            res = win32.WaitForMultipleObjects(L, False, timeout)
            if res == WAIT_TIMEOUT:
                break
            elif WAIT_OBJECT_0 <= res < WAIT_OBJECT_0 + len(L):
                res -= WAIT_OBJECT_0
            elif WAIT_ABANDONED_0 <= res < WAIT_ABANDONED_0 + len(L):
                res -= WAIT_ABANDONED_0
            else:
                raise RuntimeError('Should not get here')
            ready.append(L[res])
            L = L[res+1:]
            timeout = 0
        return ready

    _ready_errors = {win32.ERROR_BROKEN_PIPE, win32.ERROR_NETNAME_DELETED}

    def wait(object_list, timeout=None):
        '''
        Wait till an object in object_list is ready/readable.

        Returns list of those objects in object_list which are ready/readable.
        '''
        if timeout is None:
            timeout = INFINITE
        elif timeout < 0:
            timeout = 0
        else:
            timeout = int(timeout * 1000 + 0.5)

        object_list = list(object_list)
        waithandle_to_obj = {}
        ov_list = []
        ready_objects = set()
        ready_handles = set()

        try:
            for o in object_list:
                try:
                    fileno = getattr(o, 'fileno')
                except AttributeError:
                    waithandle_to_obj[o.__index__()] = o
                else:
                    # start an overlapped read of length zero
                    try:
                        ov, err = win32.ReadFile(fileno(), 0, True)
                    except OSError as e:
                        err = e.winerror
                        if err not in _ready_errors:
                            raise
                    if err == win32.ERROR_IO_PENDING:
                        ov_list.append(ov)
                        waithandle_to_obj[ov.event] = o
                    else:
                        # If o.fileno() is an overlapped pipe handle and
                        # err == 0 then there is a zero length message
                        # in the pipe, but it HAS NOT been consumed.
                        ready_objects.add(o)
                        timeout = 0

            ready_handles = _exhaustive_wait(waithandle_to_obj.keys(), timeout)
        finally:
            # request that overlapped reads stop
            for ov in ov_list:
                ov.cancel()

            # wait for all overlapped reads to stop
            for ov in ov_list:
                try:
                    _, err = ov.GetOverlappedResult(True)
                except OSError as e:
                    err = e.winerror
                    if err not in _ready_errors:
                        raise
                if err != win32.ERROR_OPERATION_ABORTED:
                    o = waithandle_to_obj[ov.event]
                    ready_objects.add(o)
                    if err == 0:
                        # If o.fileno() is an overlapped pipe handle then
                        # a zero length message HAS been consumed.
                        if hasattr(o, '_got_empty_message'):
                            o._got_empty_message = True

        ready_objects.update(waithandle_to_obj[h] for h in ready_handles)
        return [o for o in object_list if o in ready_objects]

else:

    def wait(object_list, timeout=None):
        '''
        Wait till an object in object_list is ready/readable.

        Returns list of those objects in object_list which are ready/readable.
        '''
        if timeout is not None:
            if timeout <= 0:
                return select.select(object_list, [], [], 0)[0]
            else:
                deadline = time.time() + timeout
        while True:
            try:
                return select.select(object_list, [], [], timeout)[0]
            except OSError as e:
                if e.errno != errno.EINTR:
                    raise
            if timeout is not None:
                timeout = deadline - time.time()


# Late import because of circular import
from multiprocessing.forking import duplicate, close
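A minimal usage sketch of the `wait()` API introduced by this hunk (single process, just to show the call shape):

```python
from multiprocessing import Pipe
from multiprocessing.connection import wait

# wait() returns the subset of the given objects that are ready to read.
r, w = Pipe(duplex=False)
w.send("ping")
ready = wait([r], timeout=5)
msg = r.recv()
```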
@@ -44,7 +44,7 @@ import errno

from queue import Empty, Full
import _multiprocessing
from multiprocessing.connection import Pipe, SentinelReady
from multiprocessing.connection import Pipe
from multiprocessing.synchronize import Lock, BoundedSemaphore, Semaphore, Condition
from multiprocessing.util import debug, info, Finalize, register_after_fork
from multiprocessing.forking import assert_spawning
@@ -360,6 +360,7 @@ class SimpleQueue(object):
    def __init__(self):
        self._reader, self._writer = Pipe(duplex=False)
        self._rlock = Lock()
        self._poll = self._reader.poll
        if sys.platform == 'win32':
            self._wlock = None
        else:
@@ -367,7 +368,7 @@ class SimpleQueue(object):
        self._make_methods()

    def empty(self):
        return not self._reader.poll()
        return not self._poll()

    def __getstate__(self):
        assert_spawning(self)
@@ -380,10 +381,10 @@ class SimpleQueue(object):
    def _make_methods(self):
        recv = self._reader.recv
        racquire, rrelease = self._rlock.acquire, self._rlock.release
        def get(*, sentinels=None):
        def get():
            racquire()
            try:
                return recv(sentinels)
                return recv()
            finally:
                rrelease()
        self.get = get
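`SimpleQueue.empty()` now delegates to the cached `_poll` bound method; its observable behaviour is unchanged (same-process sketch):

```python
from multiprocessing import SimpleQueue

q = SimpleQueue()
was_empty = q.empty()  # True before anything is put
q.put(42)
item = q.get()         # blocks until the item is available
```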
@@ -261,7 +261,7 @@ class bdist_msi(Command):
            self.db.Commit()

        if hasattr(self.distribution, 'dist_files'):
            tup = 'bdist_msi', self.target_version or 'any', fullname
            tup = 'bdist_msi', self.target_version or 'any', installer_name
            self.distribution.dist_files.append(tup)

        if not self.keep_temp:
@@ -19,6 +19,7 @@ __all__ = [
    'get_distributions', 'get_distribution', 'get_file_users',
    'provides_distribution', 'obsoletes_distribution',
    'enable_cache', 'disable_cache', 'clear_cache',
    # XXX these functions' names look like get_file_users but are not related
    'get_file_path', 'get_file']
@@ -1,20 +1,29 @@
"""Tests for distutils.command.bdist_msi."""
import os
import sys

from packaging.tests import unittest, support


@unittest.skipUnless(sys.platform == 'win32', 'these tests require Windows')
class BDistMSITestCase(support.TempdirManager,
                       support.LoggingCatcher,
                       unittest.TestCase):

    @unittest.skipUnless(sys.platform == "win32", "runs only on win32")
    def test_minimal(self):
        # minimal test XXX need more tests
        from packaging.command.bdist_msi import bdist_msi
        pkg_pth, dist = self.create_dist()
        project_dir, dist = self.create_dist()
        cmd = bdist_msi(dist)
        cmd.ensure_finalized()
        cmd.run()

        bdists = os.listdir(os.path.join(project_dir, 'dist'))
        self.assertEqual(bdists, ['foo-0.1.msi'])

        # bug #13719: upload ignores bdist_msi files
        self.assertEqual(dist.dist_files,
                         [('bdist_msi', 'any', 'dist/foo-0.1.msi')])


def test_suite():
@@ -297,8 +297,8 @@ class _Pickler:
            f(self, obj) # Call unbound method with explicit self
            return

        # Check copyreg.dispatch_table
        reduce = dispatch_table.get(t)
        # Check private dispatch table if any, or else copyreg.dispatch_table
        reduce = getattr(self, 'dispatch_table', dispatch_table).get(t)
        if reduce:
            rv = reduce(obj)
        else:
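The private `dispatch_table` consulted in the hunk above lets a single pickler override reduction without touching the global `copyreg.dispatch_table`; a small sketch:

```python
import copyreg
import io
import pickle

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def reduce_point(p):
    # Standard reduce tuple: (reconstructor, args)
    return (Point, (p.x, p.y))

f = io.BytesIO()
p = pickle.Pickler(f)
# Instance-level table: checked before the global copyreg table.
p.dispatch_table = copyreg.dispatch_table.copy()
p.dispatch_table[Point] = reduce_point
p.dump(Point(1, 2))

restored = pickle.loads(f.getvalue())
```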
File diff suppressed because one or more lines are too long
@@ -1,47 +0,0 @@
# from http://mail.python.org/pipermail/python-dev/2001-June/015239.html

# if you keep changing a dictionary while looking up a key, you can
# provoke an infinite recursion in C

# At the time neither Tim nor Michael could be bothered to think of a
# way to fix it.

class Yuck:
    def __init__(self):
        self.i = 0

    def make_dangerous(self):
        self.i = 1

    def __hash__(self):
        # direct to slot 4 in table of size 8; slot 12 when size 16
        return 4 + 8

    def __eq__(self, other):
        if self.i == 0:
            # leave dict alone
            pass
        elif self.i == 1:
            # fiddle to 16 slots
            self.__fill_dict(6)
            self.i = 2
        else:
            # fiddle to 8 slots
            self.__fill_dict(4)
            self.i = 1

        return 1

    def __fill_dict(self, n):
        self.i = 0
        dict.clear()
        for i in range(n):
            dict[i] = i
        dict[self] = "OK!"

y = Yuck()
dict = {y: "OK!"}

z = Yuck()
y.make_dangerous()
print(dict[z])
@ -1605,6 +1605,105 @@ class AbstractPicklerUnpicklerObjectTests(unittest.TestCase):
|
|||
self.assertEqual(unpickler.load(), data)
|
||||
|
||||
|
||||
# Tests for dispatch_table attribute
|
||||
|
||||
REDUCE_A = 'reduce_A'
|
||||
|
||||
class AAA(object):
|
||||
def __reduce__(self):
|
||||
return str, (REDUCE_A,)
|
||||
|
||||
class BBB(object):
|
||||
pass
|
||||
|
||||
class AbstractDispatchTableTests(unittest.TestCase):
|
||||
|
||||
def test_default_dispatch_table(self):
|
||||
# No dispatch_table attribute by default
|
||||
f = io.BytesIO()
|
||||
p = self.pickler_class(f, 0)
|
||||
with self.assertRaises(AttributeError):
|
||||
p.dispatch_table
|
||||
self.assertFalse(hasattr(p, 'dispatch_table'))
|
||||
|
||||
def test_class_dispatch_table(self):
|
||||
# A dispatch_table attribute can be specified class-wide
|
||||
dt = self.get_dispatch_table()
|
||||
|
||||
class MyPickler(self.pickler_class):
|
||||
dispatch_table = dt
|
||||
|
||||
def dumps(obj, protocol=None):
|
||||
f = io.BytesIO()
|
||||
p = MyPickler(f, protocol)
|
||||
self.assertEqual(p.dispatch_table, dt)
|
||||
p.dump(obj)
|
||||
return f.getvalue()
|
||||
|
||||
self._test_dispatch_table(dumps, dt)
|
||||
|
||||
def test_instance_dispatch_table(self):
|
||||
# A dispatch_table attribute can also be specified instance-wide
|
||||
dt = self.get_dispatch_table()
|
||||
|
||||
def dumps(obj, protocol=None):
|
||||
f = io.BytesIO()
|
||||
p = self.pickler_class(f, protocol)
|
||||
p.dispatch_table = dt
|
||||
self.assertEqual(p.dispatch_table, dt)
|
||||
p.dump(obj)
|
||||
return f.getvalue()
|
||||
|
||||
self._test_dispatch_table(dumps, dt)
|
||||
|
||||
def _test_dispatch_table(self, dumps, dispatch_table):
|
||||
def custom_load_dump(obj):
|
||||
return pickle.loads(dumps(obj, 0))
|
||||
|
||||
def default_load_dump(obj):
|
||||
return pickle.loads(pickle.dumps(obj, 0))
|
||||
|
||||
# pickling complex numbers using protocol 0 relies on copyreg
|
||||
# so check pickling a complex number still works
|
||||
z = 1 + 2j
|
||||
self.assertEqual(custom_load_dump(z), z)
|
||||
self.assertEqual(default_load_dump(z), z)
|
||||
|
||||
# modify pickling of complex
|
||||
REDUCE_1 = 'reduce_1'
|
||||
def reduce_1(obj):
|
||||
return str, (REDUCE_1,)
|
||||
dispatch_table[complex] = reduce_1
|
||||
self.assertEqual(custom_load_dump(z), REDUCE_1)
|
||||
self.assertEqual(default_load_dump(z), z)
|
||||
|
||||
# check picklability of AAA and BBB
|
||||
a = AAA()
|
||||
b = BBB()
|
||||
self.assertEqual(custom_load_dump(a), REDUCE_A)
|
||||
self.assertIsInstance(custom_load_dump(b), BBB)
|
||||
self.assertEqual(default_load_dump(a), REDUCE_A)
|
||||
self.assertIsInstance(default_load_dump(b), BBB)
|
||||
|
||||
# modify pickling of BBB
|
||||
dispatch_table[BBB] = reduce_1
|
||||
self.assertEqual(custom_load_dump(a), REDUCE_A)
|
||||
self.assertEqual(custom_load_dump(b), REDUCE_1)
|
||||
self.assertEqual(default_load_dump(a), REDUCE_A)
|
||||
self.assertIsInstance(default_load_dump(b), BBB)
|
||||
|
||||
# revert pickling of BBB and modify pickling of AAA
|
||||
REDUCE_2 = 'reduce_2'
|
||||
def reduce_2(obj):
|
||||
return str, (REDUCE_2,)
|
||||
dispatch_table[AAA] = reduce_2
|
||||
del dispatch_table[BBB]
|
||||
self.assertEqual(custom_load_dump(a), REDUCE_2)
|
||||
self.assertIsInstance(custom_load_dump(b), BBB)
|
||||
self.assertEqual(default_load_dump(a), REDUCE_A)
|
||||
self.assertIsInstance(default_load_dump(b), BBB)
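The mechanism exercised above — a per-pickler ``dispatch_table`` mapping classes to reduce functions, consulted before ``copyreg.dispatch_table`` — can be sketched in a few lines. ``Point`` and ``reduce_point`` are illustrative names, not part of the test suite:

```python
import io
import pickle

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def reduce_point(obj):
    # Return (callable, args), exactly the shape __reduce__ would return.
    return Point, (obj.x, obj.y)

f = io.BytesIO()
p = pickle.Pickler(f)
# Instance-wide dispatch table: only this pickler uses reduce_point.
p.dispatch_table = {Point: reduce_point}
p.dump(Point(1, 2))

q = pickle.loads(f.getvalue())
print((q.x, q.y))  # → (1, 2)
```

Unlike registering with ``copyreg.pickle``, the table is local to one pickler, which is what the class-wide and instance-wide variants above check.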
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
# Print some stuff that can be used to rewrite DATA{0,1,2}
|
||||
from pickletools import dis
|
||||
|
|
|
@@ -749,10 +749,10 @@ def main(tests=None, testdir=None, verbose=0, quiet=False,
|
|||
if bad:
|
||||
print(count(len(bad), "test"), "failed:")
|
||||
printlist(bad)
|
||||
if environment_changed:
|
||||
print("{} altered the execution environment:".format(
|
||||
count(len(environment_changed), "test")))
|
||||
printlist(environment_changed)
|
||||
if environment_changed:
|
||||
print("{} altered the execution environment:".format(
|
||||
count(len(environment_changed), "test")))
|
||||
printlist(environment_changed)
|
||||
if skipped and not quiet:
|
||||
print(count(len(skipped), "test"), "skipped:")
|
||||
printlist(skipped)
|
||||
|
@@ -970,6 +970,7 @@ class saved_test_environment:
|
|||
'multiprocessing.process._dangling',
|
||||
'sysconfig._CONFIG_VARS', 'sysconfig._SCHEMES',
|
||||
'packaging.command._COMMANDS', 'packaging.database_caches',
|
||||
'support.TESTFN',
|
||||
)
|
||||
|
||||
def get_sys_argv(self):
|
||||
|
@@ -1163,6 +1164,20 @@ class saved_test_environment:
|
|||
sysconfig._SCHEMES._sections.clear()
|
||||
sysconfig._SCHEMES._sections.update(saved[2])
|
||||
|
||||
def get_support_TESTFN(self):
|
||||
if os.path.isfile(support.TESTFN):
|
||||
result = 'f'
|
||||
elif os.path.isdir(support.TESTFN):
|
||||
result = 'd'
|
||||
else:
|
||||
result = None
|
||||
return result
|
||||
def restore_support_TESTFN(self, saved_value):
|
||||
if saved_value is None:
|
||||
if os.path.isfile(support.TESTFN):
|
||||
os.unlink(support.TESTFN)
|
||||
elif os.path.isdir(support.TESTFN):
|
||||
shutil.rmtree(support.TESTFN)
|
||||
|
||||
def resource_info(self):
|
||||
for name in self.resources:
|
||||
|
|
|
@@ -2,6 +2,7 @@ import unittest
|
|||
from test import support
|
||||
import base64
|
||||
import binascii
|
||||
import os
|
||||
import sys
|
||||
import subprocess
|
||||
|
||||
|
@@ -274,6 +275,10 @@ class BaseXYTestCase(unittest.TestCase):
|
|||
|
||||
|
||||
class TestMain(unittest.TestCase):
|
||||
def tearDown(self):
|
||||
if os.path.exists(support.TESTFN):
|
||||
os.unlink(support.TESTFN)
|
||||
|
||||
def get_output(self, *args, **options):
|
||||
args = (sys.executable, '-m', 'base64') + args
|
||||
return subprocess.check_output(args, **options)
|
||||
|
|
|
@@ -3373,6 +3373,15 @@ class TestBufferProtocol(unittest.TestCase):
|
|||
del nd
|
||||
m.release()
|
||||
|
||||
a = bytearray([1,2,3])
|
||||
m = memoryview(a)
|
||||
nd1 = ndarray(m, getbuf=PyBUF_FULL_RO, flags=ND_REDIRECT)
|
||||
nd2 = ndarray(nd1, getbuf=PyBUF_FULL_RO, flags=ND_REDIRECT)
|
||||
self.assertIs(nd2.obj, m)
|
||||
self.assertRaises(BufferError, m.release)
|
||||
del nd1, nd2
|
||||
m.release()
|
||||
|
||||
# chained views
|
||||
a = bytearray([1,2,3])
|
||||
m1 = memoryview(a)
|
||||
|
@@ -3383,6 +3392,17 @@ class TestBufferProtocol(unittest.TestCase):
|
|||
del nd
|
||||
m2.release()
|
||||
|
||||
a = bytearray([1,2,3])
|
||||
m1 = memoryview(a)
|
||||
m2 = memoryview(m1)
|
||||
nd1 = ndarray(m2, getbuf=PyBUF_FULL_RO, flags=ND_REDIRECT)
|
||||
nd2 = ndarray(nd1, getbuf=PyBUF_FULL_RO, flags=ND_REDIRECT)
|
||||
self.assertIs(nd2.obj, m2)
|
||||
m1.release()
|
||||
self.assertRaises(BufferError, m2.release)
|
||||
del nd1, nd2
|
||||
m2.release()
|
||||
|
||||
# Allow changing layout while buffers are exported.
|
||||
nd = ndarray([1,2,3], shape=[3], flags=ND_VAREXPORT)
|
||||
m1 = memoryview(nd)
|
||||
|
@@ -3418,11 +3438,182 @@ class TestBufferProtocol(unittest.TestCase):
|
|||
catch22(m1)
|
||||
self.assertEqual(m1[0], ord(b'1'))
|
||||
|
||||
# XXX If m1 has exports, raise BufferError.
|
||||
# x = bytearray(b'123')
|
||||
# with memoryview(x) as m1:
|
||||
# ex = ndarray(m1)
|
||||
# m1[0] == ord(b'1')
|
||||
x = ndarray(list(range(12)), shape=[2,2,3], format='l')
|
||||
y = ndarray(x, getbuf=PyBUF_FULL_RO, flags=ND_REDIRECT)
|
||||
z = ndarray(y, getbuf=PyBUF_FULL_RO, flags=ND_REDIRECT)
|
||||
self.assertIs(z.obj, x)
|
||||
with memoryview(z) as m:
|
||||
catch22(m)
|
||||
self.assertEqual(m[0:1].tolist(), [[[0, 1, 2], [3, 4, 5]]])
|
||||
|
||||
# Test garbage collection.
|
||||
for flags in (0, ND_REDIRECT):
|
||||
x = bytearray(b'123')
|
||||
with memoryview(x) as m1:
|
||||
del x
|
||||
y = ndarray(m1, getbuf=PyBUF_FULL_RO, flags=flags)
|
||||
with memoryview(y) as m2:
|
||||
del y
|
||||
z = ndarray(m2, getbuf=PyBUF_FULL_RO, flags=flags)
|
||||
with memoryview(z) as m3:
|
||||
del z
|
||||
catch22(m3)
|
||||
catch22(m2)
|
||||
catch22(m1)
|
||||
self.assertEqual(m1[0], ord(b'1'))
|
||||
self.assertEqual(m2[1], ord(b'2'))
|
||||
self.assertEqual(m3[2], ord(b'3'))
|
||||
del m3
|
||||
del m2
|
||||
del m1
|
||||
|
||||
x = bytearray(b'123')
|
||||
with memoryview(x) as m1:
|
||||
del x
|
||||
y = ndarray(m1, getbuf=PyBUF_FULL_RO, flags=flags)
|
||||
with memoryview(y) as m2:
|
||||
del y
|
||||
z = ndarray(m2, getbuf=PyBUF_FULL_RO, flags=flags)
|
||||
with memoryview(z) as m3:
|
||||
del z
|
||||
catch22(m1)
|
||||
catch22(m2)
|
||||
catch22(m3)
|
||||
self.assertEqual(m1[0], ord(b'1'))
|
||||
self.assertEqual(m2[1], ord(b'2'))
|
||||
self.assertEqual(m3[2], ord(b'3'))
|
||||
del m1, m2, m3
|
||||
|
||||
# memoryview.release() fails if the view has exported buffers.
|
||||
x = bytearray(b'123')
|
||||
with self.assertRaises(BufferError):
|
||||
with memoryview(x) as m:
|
||||
ex = ndarray(m)
|
||||
m[0] == ord(b'1')
|
||||
|
||||
def test_memoryview_redirect(self):
|
||||
|
||||
nd = ndarray([1.0 * x for x in range(12)], shape=[12], format='d')
|
||||
a = array.array('d', [1.0 * x for x in range(12)])
|
||||
|
||||
for x in (nd, a):
|
||||
y = ndarray(x, getbuf=PyBUF_FULL_RO, flags=ND_REDIRECT)
|
||||
z = ndarray(y, getbuf=PyBUF_FULL_RO, flags=ND_REDIRECT)
|
||||
m = memoryview(z)
|
||||
|
||||
self.assertIs(y.obj, x)
|
||||
self.assertIs(z.obj, x)
|
||||
self.assertIs(m.obj, x)
|
||||
|
||||
self.assertEqual(m, x)
|
||||
self.assertEqual(m, y)
|
||||
self.assertEqual(m, z)
|
||||
|
||||
self.assertEqual(m[1:3], x[1:3])
|
||||
self.assertEqual(m[1:3], y[1:3])
|
||||
self.assertEqual(m[1:3], z[1:3])
|
||||
del y, z
|
||||
self.assertEqual(m[1:3], x[1:3])
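A minimal sketch of the export bookkeeping these tests rely on: while a ``memoryview`` holds a buffer, the exporter refuses operations that would invalidate it, and the view's ``.obj`` attribute names the exporter:

```python
a = bytearray(b'abc')
m = memoryview(a)
assert m.obj is a          # the view keeps a reference to its exporter

try:
    a.extend(b'def')       # resizing an exported bytearray is forbidden
    blocked = False
except BufferError:
    blocked = True
assert blocked

m.release()                # drop the buffer explicitly
a.extend(b'def')           # now the resize succeeds
print(bytes(a))            # → b'abcdef'
```

The ``ND_REDIRECT`` tests above additionally check where ``.obj`` points when exporters are chained; plain ``memoryview`` chains are the baseline behaviour.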
|
||||
|
||||
def test_memoryview_from_static_exporter(self):
|
||||
|
||||
fmt = 'B'
|
||||
lst = [0,1,2,3,4,5,6,7,8,9,10,11]
|
||||
|
||||
# exceptions
|
||||
self.assertRaises(TypeError, staticarray, 1, 2, 3)
|
||||
|
||||
# view.obj==x
|
||||
x = staticarray()
|
||||
y = memoryview(x)
|
||||
self.verify(y, obj=x,
|
||||
itemsize=1, fmt=fmt, readonly=1,
|
||||
ndim=1, shape=[12], strides=[1],
|
||||
lst=lst)
|
||||
for i in range(12):
|
||||
self.assertEqual(y[i], i)
|
||||
del x
|
||||
del y
|
||||
|
||||
x = staticarray()
|
||||
y = memoryview(x)
|
||||
del y
|
||||
del x
|
||||
|
||||
x = staticarray()
|
||||
y = ndarray(x, getbuf=PyBUF_FULL_RO)
|
||||
z = ndarray(y, getbuf=PyBUF_FULL_RO)
|
||||
m = memoryview(z)
|
||||
self.assertIs(y.obj, x)
|
||||
self.assertIs(m.obj, z)
|
||||
self.verify(m, obj=z,
|
||||
itemsize=1, fmt=fmt, readonly=1,
|
||||
ndim=1, shape=[12], strides=[1],
|
||||
lst=lst)
|
||||
del x, y, z, m
|
||||
|
||||
x = staticarray()
|
||||
y = ndarray(x, getbuf=PyBUF_FULL_RO, flags=ND_REDIRECT)
|
||||
z = ndarray(y, getbuf=PyBUF_FULL_RO, flags=ND_REDIRECT)
|
||||
m = memoryview(z)
|
||||
self.assertIs(y.obj, x)
|
||||
self.assertIs(z.obj, x)
|
||||
self.assertIs(m.obj, x)
|
||||
self.verify(m, obj=x,
|
||||
itemsize=1, fmt=fmt, readonly=1,
|
||||
ndim=1, shape=[12], strides=[1],
|
||||
lst=lst)
|
||||
del x, y, z, m
|
||||
|
||||
# view.obj==NULL
|
||||
x = staticarray(legacy_mode=True)
|
||||
y = memoryview(x)
|
||||
self.verify(y, obj=None,
|
||||
itemsize=1, fmt=fmt, readonly=1,
|
||||
ndim=1, shape=[12], strides=[1],
|
||||
lst=lst)
|
||||
for i in range(12):
|
||||
self.assertEqual(y[i], i)
|
||||
del x
|
||||
del y
|
||||
|
||||
x = staticarray(legacy_mode=True)
|
||||
y = memoryview(x)
|
||||
del y
|
||||
del x
|
||||
|
||||
x = staticarray(legacy_mode=True)
|
||||
y = ndarray(x, getbuf=PyBUF_FULL_RO)
|
||||
z = ndarray(y, getbuf=PyBUF_FULL_RO)
|
||||
m = memoryview(z)
|
||||
self.assertIs(y.obj, None)
|
||||
self.assertIs(m.obj, z)
|
||||
self.verify(m, obj=z,
|
||||
itemsize=1, fmt=fmt, readonly=1,
|
||||
ndim=1, shape=[12], strides=[1],
|
||||
lst=lst)
|
||||
del x, y, z, m
|
||||
|
||||
x = staticarray(legacy_mode=True)
|
||||
y = ndarray(x, getbuf=PyBUF_FULL_RO, flags=ND_REDIRECT)
|
||||
z = ndarray(y, getbuf=PyBUF_FULL_RO, flags=ND_REDIRECT)
|
||||
m = memoryview(z)
|
||||
# Clearly setting view.obj==NULL is inferior, since it
|
||||
# messes up the redirection chain:
|
||||
self.assertIs(y.obj, None)
|
||||
self.assertIs(z.obj, y)
|
||||
self.assertIs(m.obj, y)
|
||||
self.verify(m, obj=y,
|
||||
itemsize=1, fmt=fmt, readonly=1,
|
||||
ndim=1, shape=[12], strides=[1],
|
||||
lst=lst)
|
||||
del x, y, z, m
|
||||
|
||||
def test_memoryview_getbuffer_undefined(self):
|
||||
|
||||
# getbufferproc does not adhere to the new documentation
|
||||
nd = ndarray([1,2,3], [3], flags=ND_GETBUF_FAIL|ND_GETBUF_UNDEFINED)
|
||||
self.assertRaises(BufferError, memoryview, nd)
|
||||
|
||||
def test_issue_7385(self):
|
||||
x = ndarray([1,2,3], shape=[3], flags=ND_GETBUF_FAIL)
|
||||
|
|
|
@@ -379,7 +379,7 @@ class DictTest(unittest.TestCase):
|
|||
x.fail = True
|
||||
self.assertRaises(Exc, d.pop, x)
|
||||
|
||||
def test_mutatingiteration(self):
|
||||
def test_mutating_iteration(self):
|
||||
# changing dict size during iteration
|
||||
d = {}
|
||||
d[1] = 1
|
||||
|
@@ -387,6 +387,26 @@ class DictTest(unittest.TestCase):
|
|||
for i in d:
|
||||
d[i+1] = 1
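The guard this test relies on is easy to observe directly: growing a dict while iterating over it raises ``RuntimeError`` on the next iteration step:

```python
d = {1: 1}
try:
    for i in d:
        d[i + 10] = 1      # changes the dict's size mid-iteration
    raised = False
except RuntimeError:
    raised = True
assert raised
```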
|
||||
|
||||
def test_mutating_lookup(self):
|
||||
# changing dict during a lookup
|
||||
class NastyKey:
|
||||
mutate_dict = None
|
||||
|
||||
def __hash__(self):
|
||||
# hash collision!
|
||||
return 1
|
||||
|
||||
def __eq__(self, other):
|
||||
if self.mutate_dict:
|
||||
self.mutate_dict[self] = 1
|
||||
return id(self) == id(other)  # identity compare; plain == would recurse into __eq__
|
||||
|
||||
d = {}
|
||||
d[NastyKey()] = 0
|
||||
NastyKey.mutate_dict = d
|
||||
with self.assertRaises(RuntimeError):
|
||||
d[NastyKey()] = None
|
||||
|
||||
def test_repr(self):
|
||||
d = {}
|
||||
self.assertEqual(repr(d), '{}')
|
||||
|
|
|
@@ -38,7 +38,7 @@ class ExceptionTests(unittest.TestCase):
|
|||
try:
|
||||
try:
|
||||
import marshal
|
||||
marshal.loads('')
|
||||
marshal.loads(b'')
|
||||
except EOFError:
|
||||
pass
|
||||
finally:
|
||||
|
|
|
@@ -3651,11 +3651,14 @@ class TimedRotatingFileHandlerTest(BaseFileTest):
|
|||
def test_rollover(self):
|
||||
fh = logging.handlers.TimedRotatingFileHandler(self.fn, 'S',
|
||||
backupCount=1)
|
||||
r = logging.makeLogRecord({'msg': 'testing'})
|
||||
fh.emit(r)
|
||||
fmt = logging.Formatter('%(asctime)s %(message)s')
|
||||
fh.setFormatter(fmt)
|
||||
r1 = logging.makeLogRecord({'msg': 'testing - initial'})
|
||||
fh.emit(r1)
|
||||
self.assertLogFile(self.fn)
|
||||
time.sleep(1.01) # just a little over a second ...
|
||||
fh.emit(r)
|
||||
time.sleep(1.1) # a little over a second ...
|
||||
r2 = logging.makeLogRecord({'msg': 'testing - after delay'})
|
||||
fh.emit(r2)
|
||||
fh.close()
|
||||
# At this point, we should have a recent rotated file which we
|
||||
# can test for the existence of. However, in practice, on some
|
||||
|
@@ -3682,7 +3685,8 @@ class TimedRotatingFileHandlerTest(BaseFileTest):
|
|||
print('The only matching files are: %s' % files, file=sys.stderr)
|
||||
for f in files:
|
||||
print('Contents of %s:' % f)
|
||||
with open(f, 'r') as tf:
|
||||
path = os.path.join(dn, f)
|
||||
with open(path, 'r') as tf:
|
||||
print(tf.read())
|
||||
self.assertTrue(found, msg=msg)
|
||||
|
||||
|
|
|
@@ -7,6 +7,7 @@ import email
|
|||
import email.message
|
||||
import re
|
||||
import io
|
||||
import shutil
|
||||
import tempfile
|
||||
from test import support
|
||||
import unittest
|
||||
|
@@ -38,12 +39,7 @@ class TestBase(unittest.TestCase):
|
|||
def _delete_recursively(self, target):
|
||||
# Delete a file or delete a directory recursively
|
||||
if os.path.isdir(target):
|
||||
for path, dirs, files in os.walk(target, topdown=False):
|
||||
for name in files:
|
||||
os.remove(os.path.join(path, name))
|
||||
for name in dirs:
|
||||
os.rmdir(os.path.join(path, name))
|
||||
os.rmdir(target)
|
||||
shutil.rmtree(target)
|
||||
elif os.path.exists(target):
|
||||
os.remove(target)
|
||||
|
||||
|
@@ -2028,6 +2024,10 @@ class MaildirTestCase(unittest.TestCase):
|
|||
def setUp(self):
|
||||
# create a new maildir mailbox to work with:
|
||||
self._dir = support.TESTFN
|
||||
if os.path.isdir(self._dir):
|
||||
shutil.rmtree(self._dir)
|
||||
elif os.path.isfile(self._dir):
|
||||
os.unlink(self._dir)
|
||||
os.mkdir(self._dir)
|
||||
os.mkdir(os.path.join(self._dir, "cur"))
|
||||
os.mkdir(os.path.join(self._dir, "tmp"))
|
||||
|
|
|
@@ -1,6 +1,7 @@
|
|||
#!/usr/bin/env python3
|
||||
|
||||
from test import support
|
||||
import array
|
||||
import marshal
|
||||
import sys
|
||||
import unittest
|
||||
|
@@ -154,6 +155,27 @@ class ContainerTestCase(unittest.TestCase, HelperMixin):
|
|||
for constructor in (set, frozenset):
|
||||
self.helper(constructor(self.d.keys()))
|
||||
|
||||
|
||||
class BufferTestCase(unittest.TestCase, HelperMixin):
|
||||
|
||||
def test_bytearray(self):
|
||||
b = bytearray(b"abc")
|
||||
self.helper(b)
|
||||
new = marshal.loads(marshal.dumps(b))
|
||||
self.assertEqual(type(new), bytes)
|
||||
|
||||
def test_memoryview(self):
|
||||
b = memoryview(b"abc")
|
||||
self.helper(b)
|
||||
new = marshal.loads(marshal.dumps(b))
|
||||
self.assertEqual(type(new), bytes)
|
||||
|
||||
def test_array(self):
|
||||
a = array.array('B', b"abc")
|
||||
new = marshal.loads(marshal.dumps(a))
|
||||
self.assertEqual(new, b"abc")
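The round-trips asserted above can be reproduced with a short sketch: a bytes-like object such as ``bytearray`` is marshalled by its buffer contents and comes back as plain ``bytes``:

```python
import marshal

src = bytearray(b"abc")
data = marshal.dumps(src)
out = marshal.loads(data)

# The type is not preserved, only the byte contents.
assert type(out) is bytes
print(out)  # → b'abc'
```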
|
||||
|
||||
|
||||
class BugsTestCase(unittest.TestCase):
|
||||
def test_bug_5888452(self):
|
||||
# Simple-minded check for SF 588452: Debug build crashes
|
||||
|
@@ -179,7 +201,7 @@ class BugsTestCase(unittest.TestCase):
|
|||
pass
|
||||
|
||||
def test_loads_recursion(self):
|
||||
s = 'c' + ('X' * 4*4) + '{' * 2**20
|
||||
s = b'c' + (b'X' * 4*4) + b'{' * 2**20
|
||||
self.assertRaises(ValueError, marshal.loads, s)
|
||||
|
||||
def test_recursion_limit(self):
|
||||
|
@@ -252,6 +274,11 @@ class BugsTestCase(unittest.TestCase):
|
|||
finally:
|
||||
support.unlink(support.TESTFN)
|
||||
|
||||
def test_loads_reject_unicode_strings(self):
|
||||
# Issue #14177: marshal.loads() should not accept unicode strings
|
||||
unicode_string = 'T'
|
||||
self.assertRaises(TypeError, marshal.loads, unicode_string)
|
||||
|
||||
|
||||
def test_main():
|
||||
support.run_unittest(IntTestCase,
|
||||
|
@@ -260,6 +287,7 @@ def test_main():
|
|||
CodeTestCase,
|
||||
ContainerTestCase,
|
||||
ExceptionTestCase,
|
||||
BufferTestCase,
|
||||
BugsTestCase)
|
||||
|
||||
if __name__ == "__main__":
|
||||
|
|
|
@@ -362,11 +362,17 @@ class MinidomTest(unittest.TestCase):
|
|||
def testGetAttrList(self):
|
||||
pass
|
||||
|
||||
def testGetAttrValues(self): pass
|
||||
def testGetAttrValues(self):
|
||||
pass
|
||||
|
||||
def testGetAttrLength(self): pass
|
||||
def testGetAttrLength(self):
|
||||
pass
|
||||
|
||||
def testGetAttribute(self): pass
|
||||
def testGetAttribute(self):
|
||||
dom = Document()
|
||||
child = dom.appendChild(
|
||||
dom.createElementNS("http://www.python.org", "python:abc"))
|
||||
self.assertEqual(child.getAttribute('missing'), '')
|
||||
|
||||
def testGetAttributeNS(self):
|
||||
dom = Document()
|
||||
|
@@ -378,6 +384,9 @@ class MinidomTest(unittest.TestCase):
|
|||
'http://www.python.org')
|
||||
self.assertEqual(child.getAttributeNS("http://www.w3.org", "other"),
|
||||
'')
|
||||
child2 = child.appendChild(dom.createElement('abc'))
|
||||
self.assertEqual(child2.getAttributeNS("http://www.python.org", "missing"),
|
||||
'')
|
||||
|
||||
def testGetAttributeNode(self): pass
|
||||
|
||||
|
|
|
@@ -1811,6 +1811,84 @@ class _TestListenerClient(BaseTestCase):
|
|||
p.join()
|
||||
l.close()
|
||||
|
||||
class _TestPoll(unittest.TestCase):
|
||||
|
||||
ALLOWED_TYPES = ('processes', 'threads')
|
||||
|
||||
def test_empty_string(self):
|
||||
a, b = self.Pipe()
|
||||
self.assertEqual(a.poll(), False)
|
||||
b.send_bytes(b'')
|
||||
self.assertEqual(a.poll(), True)
|
||||
self.assertEqual(a.poll(), True)
|
||||
|
||||
@classmethod
|
||||
def _child_strings(cls, conn, strings):
|
||||
for s in strings:
|
||||
time.sleep(0.1)
|
||||
conn.send_bytes(s)
|
||||
conn.close()
|
||||
|
||||
def test_strings(self):
|
||||
strings = (b'hello', b'', b'a', b'b', b'', b'bye', b'', b'lop')
|
||||
a, b = self.Pipe()
|
||||
p = self.Process(target=self._child_strings, args=(b, strings))
|
||||
p.start()
|
||||
|
||||
for s in strings:
|
||||
for i in range(200):
|
||||
if a.poll(0.01):
|
||||
break
|
||||
x = a.recv_bytes()
|
||||
self.assertEqual(s, x)
|
||||
|
||||
p.join()
|
||||
|
||||
@classmethod
|
||||
def _child_boundaries(cls, r):
|
||||
# Polling may "pull" a message into the child process, but we
|
||||
# don't want it to pull only part of a message, as that would
|
||||
# corrupt the pipe for any other processes which might later
|
||||
# read from it.
|
||||
r.poll(5)
|
||||
|
||||
def test_boundaries(self):
|
||||
r, w = self.Pipe(False)
|
||||
p = self.Process(target=self._child_boundaries, args=(r,))
|
||||
p.start()
|
||||
time.sleep(2)
|
||||
L = [b"first", b"second"]
|
||||
for obj in L:
|
||||
w.send_bytes(obj)
|
||||
w.close()
|
||||
p.join()
|
||||
self.assertIn(r.recv_bytes(), L)
|
||||
|
||||
@classmethod
|
||||
def _child_dont_merge(cls, b):
|
||||
b.send_bytes(b'a')
|
||||
b.send_bytes(b'b')
|
||||
b.send_bytes(b'cd')
|
||||
|
||||
def test_dont_merge(self):
|
||||
a, b = self.Pipe()
|
||||
self.assertEqual(a.poll(0.0), False)
|
||||
self.assertEqual(a.poll(0.1), False)
|
||||
|
||||
p = self.Process(target=self._child_dont_merge, args=(b,))
|
||||
p.start()
|
||||
|
||||
self.assertEqual(a.recv_bytes(), b'a')
|
||||
self.assertEqual(a.poll(1.0), True)
|
||||
self.assertEqual(a.poll(1.0), True)
|
||||
self.assertEqual(a.recv_bytes(), b'b')
|
||||
self.assertEqual(a.poll(1.0), True)
|
||||
self.assertEqual(a.poll(1.0), True)
|
||||
self.assertEqual(a.poll(0.0), True)
|
||||
self.assertEqual(a.recv_bytes(), b'cd')
|
||||
|
||||
p.join()
|
||||
|
||||
#
|
||||
# Test of sending connection and socket objects between processes
|
||||
#
|
||||
|
@@ -2404,8 +2482,164 @@ class TestStdinBadfiledescriptor(unittest.TestCase):
|
|||
flike.flush()
|
||||
assert sio.getvalue() == 'foo'
|
||||
|
||||
|
||||
class TestWait(unittest.TestCase):
|
||||
|
||||
@classmethod
|
||||
def _child_test_wait(cls, w, slow):
|
||||
for i in range(10):
|
||||
if slow:
|
||||
time.sleep(random.random()*0.1)
|
||||
w.send((i, os.getpid()))
|
||||
w.close()
|
||||
|
||||
def test_wait(self, slow=False):
|
||||
from multiprocessing.connection import wait
|
||||
readers = []
|
||||
procs = []
|
||||
messages = []
|
||||
|
||||
for i in range(4):
|
||||
r, w = multiprocessing.Pipe(duplex=False)
|
||||
p = multiprocessing.Process(target=self._child_test_wait, args=(w, slow))
|
||||
p.daemon = True
|
||||
p.start()
|
||||
w.close()
|
||||
readers.append(r)
|
||||
procs.append(p)
|
||||
self.addCleanup(p.join)
|
||||
|
||||
while readers:
|
||||
for r in wait(readers):
|
||||
try:
|
||||
msg = r.recv()
|
||||
except EOFError:
|
||||
readers.remove(r)
|
||||
r.close()
|
||||
else:
|
||||
messages.append(msg)
|
||||
|
||||
messages.sort()
|
||||
expected = sorted((i, p.pid) for i in range(10) for p in procs)
|
||||
self.assertEqual(messages, expected)
|
||||
|
||||
@classmethod
|
||||
def _child_test_wait_socket(cls, address, slow):
|
||||
s = socket.socket()
|
||||
s.connect(address)
|
||||
for i in range(10):
|
||||
if slow:
|
||||
time.sleep(random.random()*0.1)
|
||||
s.sendall(('%s\n' % i).encode('ascii'))
|
||||
s.close()
|
||||
|
||||
def test_wait_socket(self, slow=False):
|
||||
from multiprocessing.connection import wait
|
||||
l = socket.socket()
|
||||
l.bind(('', 0))
|
||||
l.listen(4)
|
||||
addr = ('localhost', l.getsockname()[1])
|
||||
readers = []
|
||||
procs = []
|
||||
dic = {}
|
||||
|
||||
for i in range(4):
|
||||
p = multiprocessing.Process(target=self._child_test_wait_socket,
|
||||
args=(addr, slow))
|
||||
p.daemon = True
|
||||
p.start()
|
||||
procs.append(p)
|
||||
self.addCleanup(p.join)
|
||||
|
||||
for i in range(4):
|
||||
r, _ = l.accept()
|
||||
readers.append(r)
|
||||
dic[r] = []
|
||||
l.close()
|
||||
|
||||
while readers:
|
||||
for r in wait(readers):
|
||||
msg = r.recv(32)
|
||||
if not msg:
|
||||
readers.remove(r)
|
||||
r.close()
|
||||
else:
|
||||
dic[r].append(msg)
|
||||
|
||||
expected = ''.join('%s\n' % i for i in range(10)).encode('ascii')
|
||||
for v in dic.values():
|
||||
self.assertEqual(b''.join(v), expected)
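A self-contained sketch of ``multiprocessing.connection.wait``, which the tests above drive with child processes; here both ends of the pipe live in one process, which is enough to see the readiness semantics:

```python
import multiprocessing
from multiprocessing.connection import wait

a, b = multiprocessing.Pipe()

# Nothing has been sent, so the timeout expires and wait returns [].
assert wait([a], timeout=0.1) == []

b.send('ping')
ready = wait([a, b], timeout=1.0)  # a is now readable
assert a in ready

msg = a.recv()
print(msg)  # → ping
```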
|
||||
|
||||
def test_wait_slow(self):
|
||||
self.test_wait(True)
|
||||
|
||||
def test_wait_socket_slow(self):
|
||||
self.test_wait_socket(True)
|
||||
|
||||
def test_wait_timeout(self):
|
||||
from multiprocessing.connection import wait
|
||||
|
||||
expected = 1
|
||||
a, b = multiprocessing.Pipe()
|
||||
|
||||
start = time.time()
|
||||
res = wait([a, b], 1)
|
||||
delta = time.time() - start
|
||||
|
||||
self.assertEqual(res, [])
|
||||
self.assertLess(delta, expected + 0.2)
|
||||
self.assertGreater(delta, expected - 0.2)
|
||||
|
||||
b.send(None)
|
||||
|
||||
start = time.time()
|
||||
res = wait([a, b], 1)
|
||||
delta = time.time() - start
|
||||
|
||||
self.assertEqual(res, [a])
|
||||
self.assertLess(delta, 0.2)
|
||||
|
||||
def test_wait_integer(self):
|
||||
from multiprocessing.connection import wait
|
||||
|
||||
expected = 5
|
||||
a, b = multiprocessing.Pipe()
|
||||
p = multiprocessing.Process(target=time.sleep, args=(expected,))
|
||||
|
||||
p.start()
|
||||
self.assertIsInstance(p.sentinel, int)
|
||||
|
||||
start = time.time()
|
||||
res = wait([a, p.sentinel, b], expected + 20)
|
||||
delta = time.time() - start
|
||||
|
||||
self.assertEqual(res, [p.sentinel])
|
||||
self.assertLess(delta, expected + 1)
|
||||
self.assertGreater(delta, expected - 1)
|
||||
|
||||
a.send(None)
|
||||
|
||||
start = time.time()
|
||||
res = wait([a, p.sentinel, b], 20)
|
||||
delta = time.time() - start
|
||||
|
||||
self.assertEqual(res, [p.sentinel, b])
|
||||
self.assertLess(delta, 0.2)
|
||||
|
||||
b.send(None)
|
||||
|
||||
start = time.time()
|
||||
res = wait([a, p.sentinel, b], 20)
|
||||
delta = time.time() - start
|
||||
|
||||
self.assertEqual(res, [a, p.sentinel, b])
|
||||
self.assertLess(delta, 0.2)
|
||||
|
||||
p.join()
|
||||
|
||||
|
||||
testcases_other = [OtherTest, TestInvalidHandle, TestInitializers,
|
||||
TestStdinBadfiledescriptor]
|
||||
TestStdinBadfiledescriptor, TestWait]
|
||||
|
||||
#
|
||||
#
|
||||
|
|
|
@@ -1,291 +0,0 @@
|
|||
from test.support import verbose, TESTFN
|
||||
import random
|
||||
import os
|
||||
|
||||
# From SF bug #422121: Insecurities in dict comparison.
|
||||
|
||||
# Safety of code doing comparisons has been an historical Python weak spot.
|
||||
# The problem is that comparison of structures written in C *naturally*
|
||||
# wants to hold on to things like the size of the container, or "the
|
||||
# biggest" containee so far, across a traversal of the container; but
|
||||
# code to do containee comparisons can call back into Python and mutate
|
||||
# the container in arbitrary ways while the C loop is in midstream. If the
|
||||
# C code isn't extremely paranoid about digging things out of memory on
|
||||
# each trip, and artificially boosting refcounts for the duration, anything
|
||||
# from infinite loops to OS crashes can result (yes, I use Windows <wink>).
|
||||
#
|
||||
# The other problem is that code designed to provoke a weakness is usually
|
||||
# white-box code, and so catches only the particular vulnerabilities the
|
||||
# author knew to protect against. For example, Python's list.sort() code
|
||||
# went thru many iterations as one "new" vulnerability after another was
|
||||
# discovered.
|
||||
#
|
||||
# So the dict comparison test here uses a black-box approach instead,
|
||||
# generating dicts of various sizes at random, and performing random
|
||||
# mutations on them at random times. This proved very effective,
|
||||
# triggering at least six distinct failure modes the first 20 times I
|
||||
# ran it. Indeed, at the start, the driver never got beyond 6 iterations
|
||||
# before the test died.
|
||||
|
||||
# The dicts are global to make it easy to mutate them from within functions.
|
||||
dict1 = {}
|
||||
dict2 = {}
|
||||
|
||||
# The current set of keys in dict1 and dict2. These are materialized as
|
||||
# lists to make it easy to pick a dict key at random.
|
||||
dict1keys = []
|
||||
dict2keys = []
|
||||
|
||||
# Global flag telling maybe_mutate() whether to *consider* mutating.
|
||||
mutate = 0
|
||||
|
||||
# If global mutate is true, consider mutating a dict. May or may not
|
||||
# mutate a dict even if mutate is true. If it does decide to mutate a
|
||||
# dict, it picks one of {dict1, dict2} at random, and deletes a random
|
||||
# entry from it; or, more rarely, adds a random element.
|
||||
|
||||
def maybe_mutate():
|
||||
global mutate
|
||||
if not mutate:
|
||||
return
|
||||
if random.random() < 0.5:
|
||||
return
|
||||
|
||||
if random.random() < 0.5:
|
||||
target, keys = dict1, dict1keys
|
||||
else:
|
||||
target, keys = dict2, dict2keys
|
||||
|
||||
if random.random() < 0.2:
|
||||
# Insert a new key.
|
||||
mutate = 0 # disable mutation until key inserted
|
||||
while 1:
|
||||
newkey = Horrid(random.randrange(100))
|
||||
if newkey not in target:
|
||||
break
|
||||
target[newkey] = Horrid(random.randrange(100))
|
||||
keys.append(newkey)
|
||||
mutate = 1
|
||||
|
||||
elif keys:
|
||||
# Delete a key at random.
|
||||
mutate = 0 # disable mutation until key deleted
|
||||
i = random.randrange(len(keys))
|
||||
key = keys[i]
|
||||
del target[key]
|
||||
del keys[i]
|
||||
mutate = 1
|
||||
|
||||
# A horrid class that triggers random mutations of dict1 and dict2 when
|
||||
# instances are compared.
|
||||
|
||||
class Horrid:
|
||||
def __init__(self, i):
|
||||
# Comparison outcomes are determined by the value of i.
|
||||
self.i = i
|
||||
|
||||
# An artificial hashcode is selected at random so that we don't
|
||||
# have any systematic relationship between comparison outcomes
|
||||
# (based on self.i and other.i) and relative position within the
|
||||
# hash vector (based on hashcode).
|
||||
# XXX This is no longer effective.
|
||||
##self.hashcode = random.randrange(1000000000)
|
||||
|
||||
def __hash__(self):
|
||||
return 42
|
||||
return self.hashcode
|
||||
|
||||
def __eq__(self, other):
|
||||
maybe_mutate() # The point of the test.
|
||||
return self.i == other.i
|
||||
|
||||
def __ne__(self, other):
|
||||
raise RuntimeError("I didn't expect some kind of Spanish inquisition!")
|
||||
|
||||
__lt__ = __le__ = __gt__ = __ge__ = __ne__
|
||||
|
||||
def __repr__(self):
|
||||
return "Horrid(%d)" % self.i
|
||||
|
||||
# Fill dict d with numentries (Horrid(i), Horrid(j)) key-value pairs,
|
||||
# where i and j are selected at random from the candidates list.
|
||||
# Return d.keys() after filling.
|
||||
|
||||
def fill_dict(d, candidates, numentries):
|
||||
d.clear()
|
||||
for i in range(numentries):
|
||||
d[Horrid(random.choice(candidates))] = \
|
||||
Horrid(random.choice(candidates))
|
||||
return list(d.keys())
|
||||
|
||||
# Test one pair of randomly generated dicts, each with n entries.
|
||||
# Note that dict comparison is trivial if they don't have the same number
|
||||
# of entries (then the "shorter" dict is instantly considered to be the
|
||||
# smaller one, without even looking at the entries).
|
||||
|
||||
def test_one(n):
|
||||
global mutate, dict1, dict2, dict1keys, dict2keys
|
||||
|
||||
# Fill the dicts without mutating them.
|
||||
mutate = 0
|
||||
dict1keys = fill_dict(dict1, range(n), n)
|
||||
dict2keys = fill_dict(dict2, range(n), n)
|
||||
|
||||
# Enable mutation, then compare the dicts so long as they have the
|
||||
# same size.
|
||||
mutate = 1
|
||||
if verbose:
|
||||
print("trying w/ lengths", len(dict1), len(dict2), end=' ')
|
||||
    while dict1 and len(dict1) == len(dict2):
        if verbose:
            print(".", end=' ')
        c = dict1 == dict2
    if verbose:
        print()

# Run test_one n times.  At the start (before the bugs were fixed), 20
# consecutive runs of this test each blew up on or before the sixth time
# test_one was run.  So n doesn't have to be large to get an interesting
# test.
# OTOH, calling with large n is also interesting, to ensure that the fixed
# code doesn't hold on to refcounts *too* long (in which case memory would
# leak).

def test(n):
    for i in range(n):
        test_one(random.randrange(1, 100))

# See last comment block for clues about good values for n.
test(100)

##########################################################################
# Another segfault bug, distilled by Michael Hudson from a c.l.py post.

class Child:
    def __init__(self, parent):
        self.__dict__['parent'] = parent
    def __getattr__(self, attr):
        self.parent.a = 1
        self.parent.b = 1
        self.parent.c = 1
        self.parent.d = 1
        self.parent.e = 1
        self.parent.f = 1
        self.parent.g = 1
        self.parent.h = 1
        self.parent.i = 1
        return getattr(self.parent, attr)

class Parent:
    def __init__(self):
        self.a = Child(self)

# Hard to say what this will print!  May vary from time to time.  But
# we're specifically trying to test the tp_print slot here, and this is
# the clearest way to do it.  We print the result to a temp file so that
# the expected-output file doesn't need to change.

f = open(TESTFN, "w")
print(Parent().__dict__, file=f)
f.close()
os.unlink(TESTFN)

##########################################################################
# And another core-dumper from Michael Hudson.

dict = {}

# Force dict to malloc its table.
for i in range(1, 10):
    dict[i] = i

f = open(TESTFN, "w")

class Machiavelli:
    def __repr__(self):
        dict.clear()

        # Michael sez: "doesn't crash without this. don't know why."
        # Tim sez: "luck of the draw; crashes with or without for me."
        print(file=f)

        return repr("machiavelli")

    def __hash__(self):
        return 0

dict[Machiavelli()] = Machiavelli()

print(str(dict), file=f)
f.close()
os.unlink(TESTFN)
del f, dict


##########################################################################
# And another core-dumper from Michael Hudson.

dict = {}

# let's force dict to malloc its table
for i in range(1, 10):
    dict[i] = i

class Machiavelli2:
    def __eq__(self, other):
        dict.clear()
        return 1

    def __hash__(self):
        return 0

dict[Machiavelli2()] = Machiavelli2()

try:
    dict[Machiavelli2()]
except KeyError:
    pass

del dict

##########################################################################
# And another core-dumper from Michael Hudson.

dict = {}

# let's force dict to malloc its table
for i in range(1, 10):
    dict[i] = i

class Machiavelli3:
    def __init__(self, id):
        self.id = id

    def __eq__(self, other):
        if self.id == other.id:
            dict.clear()
            return 1
        else:
            return 0

    def __repr__(self):
        return "%s(%s)" % (self.__class__.__name__, self.id)

    def __hash__(self):
        return 0

dict[Machiavelli3(1)] = Machiavelli3(0)
dict[Machiavelli3(2)] = Machiavelli3(0)

f = open(TESTFN, "w")
try:
    try:
        print(dict[Machiavelli3(2)], file=f)
    except KeyError:
        pass
finally:
    f.close()
os.unlink(TESTFN)

del dict
del dict1, dict2, dict1keys, dict2keys
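The Machiavelli cases above all hinge on a key's `__eq__` or `__repr__` mutating the dict while the dict is in the middle of using it. A minimal sketch of the behaviour the fixed interpreter must survive (the class name here is illustrative, not part of the test suite):

```python
# A key whose __eq__ empties the dict it lives in, mid-lookup.
# Before the fixes this pattern could crash the interpreter; now the
# lookup is restarted safely against the mutated table.
class EvilKey:
    def __eq__(self, other):
        d.clear()          # mutate the dict during its own lookup
        return True
    def __hash__(self):
        return 0

d = {i: i for i in range(1, 10)}   # force the dict to malloc its table
d[EvilKey()] = EvilKey()
try:
    d[EvilKey()]
    outcome = 'found'
except KeyError:
    outcome = 'missing'
# Either way, the interpreter survives and the dict is simply empty.
assert outcome in ('found', 'missing')
assert len(d) == 0
```

This mirrors the Machiavelli2 test, which wraps the lookup in `try/except KeyError` precisely because the only guarantee is "no crash", not a particular lookup result.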
@@ -1,5 +1,6 @@
 import pickle
 import io
+import collections

 from test import support

@@ -7,6 +8,7 @@ from test.pickletester import AbstractPickleTests
 from test.pickletester import AbstractPickleModuleTests
 from test.pickletester import AbstractPersistentPicklerTests
 from test.pickletester import AbstractPicklerUnpicklerObjectTests
+from test.pickletester import AbstractDispatchTableTests
 from test.pickletester import BigmemPickleTests

 try:

@@ -80,6 +82,18 @@ class PyPicklerUnpicklerObjectTests(AbstractPicklerUnpicklerObjectTests):
     unpickler_class = pickle._Unpickler


+class PyDispatchTableTests(AbstractDispatchTableTests):
+    pickler_class = pickle._Pickler
+    def get_dispatch_table(self):
+        return pickle.dispatch_table.copy()
+
+
+class PyChainDispatchTableTests(AbstractDispatchTableTests):
+    pickler_class = pickle._Pickler
+    def get_dispatch_table(self):
+        return collections.ChainMap({}, pickle.dispatch_table)
+
+
 if has_c_implementation:
     class CPicklerTests(PyPicklerTests):
         pickler = _pickle.Pickler

@@ -101,14 +115,26 @@ if has_c_implementation:
         pickler_class = _pickle.Pickler
         unpickler_class = _pickle.Unpickler

+    class CDispatchTableTests(AbstractDispatchTableTests):
+        pickler_class = pickle.Pickler
+        def get_dispatch_table(self):
+            return pickle.dispatch_table.copy()
+
+    class CChainDispatchTableTests(AbstractDispatchTableTests):
+        pickler_class = pickle.Pickler
+        def get_dispatch_table(self):
+            return collections.ChainMap({}, pickle.dispatch_table)
+

 def test_main():
-    tests = [PickleTests, PyPicklerTests, PyPersPicklerTests]
+    tests = [PickleTests, PyPicklerTests, PyPersPicklerTests,
+             PyDispatchTableTests, PyChainDispatchTableTests]
     if has_c_implementation:
         tests.extend([CPicklerTests, CPersPicklerTests,
                       CDumpPickle_LoadPickle, DumpPickle_CLoadPickle,
                       PyPicklerUnpicklerObjectTests,
                       CPicklerUnpicklerObjectTests,
                       CDispatchTableTests, CChainDispatchTableTests,
                       InMemoryPickleTests])
     support.run_unittest(*tests)
     support.run_doctest(pickle)
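The new dispatch-table tests above exercise per-pickler reduction tables, including layering a private table over the global one with `collections.ChainMap`. A small usage sketch of the same feature (the `Point` class and `reduce_point` helper are illustrative, not from the commit):

```python
import collections
import copyreg
import io
import pickle

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def reduce_point(p):
    # A reduce-style callable: (reconstructor, args)
    return (Point, (p.x, p.y))

# Layer a private dispatch table over the global copyreg table, as the
# ChainMap-based tests do; the global table is left untouched.
dispatch = collections.ChainMap({Point: reduce_point}, copyreg.dispatch_table)

buf = io.BytesIO()
pickler = pickle.Pickler(buf)
pickler.dispatch_table = dispatch      # per-pickler, not global
pickler.dump(Point(1, 2))

q = pickle.loads(buf.getvalue())
assert (q.x, q.y) == (1, 2)
```

Setting `dispatch_table` on the pickler instance customizes reduction for that pickler only, which is the point of the feature being tested.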
@@ -205,6 +205,7 @@ class SmallPtyTests(unittest.TestCase):
         self.orig_stdout_fileno = pty.STDOUT_FILENO
         self.orig_pty_select = pty.select
         self.fds = []  # A list of file descriptors to close.
+        self.files = []
         self.select_rfds_lengths = []
         self.select_rfds_results = []

@@ -212,10 +213,15 @@ class SmallPtyTests(unittest.TestCase):
         pty.STDIN_FILENO = self.orig_stdin_fileno
         pty.STDOUT_FILENO = self.orig_stdout_fileno
         pty.select = self.orig_pty_select
+        for file in self.files:
+            try:
+                file.close()
+            except OSError:
+                pass
         for fd in self.fds:
             try:
                 os.close(fd)
-            except:
+            except OSError:
                 pass

     def _pipe(self):

@@ -223,6 +229,11 @@ class SmallPtyTests(unittest.TestCase):
         self.fds.extend(pipe_fds)
         return pipe_fds

+    def _socketpair(self):
+        socketpair = socket.socketpair()
+        self.files.extend(socketpair)
+        return socketpair
+
     def _mock_select(self, rfds, wfds, xfds):
         # This will raise IndexError when no more expected calls exist.
         self.assertEqual(self.select_rfds_lengths.pop(0), len(rfds))

@@ -234,9 +245,8 @@ class SmallPtyTests(unittest.TestCase):
         pty.STDOUT_FILENO = mock_stdout_fd
         mock_stdin_fd, write_to_stdin_fd = self._pipe()
         pty.STDIN_FILENO = mock_stdin_fd
-        socketpair = socket.socketpair()
+        socketpair = self._socketpair()
         masters = [s.fileno() for s in socketpair]
-        self.fds.extend(masters)

         # Feed data.  Smaller than PIPEBUF.  These writes will not block.
         os.write(masters[1], b'from master')

@@ -263,9 +273,8 @@ class SmallPtyTests(unittest.TestCase):
         pty.STDOUT_FILENO = mock_stdout_fd
         mock_stdin_fd, write_to_stdin_fd = self._pipe()
         pty.STDIN_FILENO = mock_stdin_fd
-        socketpair = socket.socketpair()
+        socketpair = self._socketpair()
         masters = [s.fileno() for s in socketpair]
-        self.fds.extend(masters)

         os.close(masters[1])
         socketpair[1].close()
@@ -662,7 +662,7 @@ class PendingSignalsTests(unittest.TestCase):
         self.wait_helper(signal.SIGALRM, '''
         def test(signum):
             signal.alarm(1)
-            info = signal.sigtimedwait([signum], (10, 1000))
+            info = signal.sigtimedwait([signum], 10.1000)
             if info.si_signo != signum:
                 raise Exception('info.si_signo != %s' % signum)
         ''')

@@ -675,7 +675,7 @@ class PendingSignalsTests(unittest.TestCase):
         def test(signum):
             import os
             os.kill(os.getpid(), signum)
-            info = signal.sigtimedwait([signum], (0, 0))
+            info = signal.sigtimedwait([signum], 0)
             if info.si_signo != signum:
                 raise Exception('info.si_signo != %s' % signum)
         ''')

@@ -685,7 +685,7 @@ class PendingSignalsTests(unittest.TestCase):
     def test_sigtimedwait_timeout(self):
         self.wait_helper(signal.SIGALRM, '''
         def test(signum):
-            received = signal.sigtimedwait([signum], (1, 0))
+            received = signal.sigtimedwait([signum], 1.0)
             if received is not None:
                 raise Exception("received=%r" % (received,))
         ''')

@@ -694,9 +694,7 @@ class PendingSignalsTests(unittest.TestCase):
                          'need signal.sigtimedwait()')
     def test_sigtimedwait_negative_timeout(self):
         signum = signal.SIGALRM
-        self.assertRaises(ValueError, signal.sigtimedwait, [signum], (-1, -1))
-        self.assertRaises(ValueError, signal.sigtimedwait, [signum], (0, -1))
-        self.assertRaises(ValueError, signal.sigtimedwait, [signum], (-1, 0))
+        self.assertRaises(ValueError, signal.sigtimedwait, [signum], -1.0)

     @unittest.skipUnless(hasattr(signal, 'sigwaitinfo'),
                          'need signal.sigwaitinfo()')
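The hunks above switch `signal.sigtimedwait()`'s timeout argument from a `(seconds, nanoseconds)` tuple to a single float of seconds, with negative values rejected as `ValueError`. A sketch of adapting old-style tuple timeouts to the new form (the helper name is illustrative, not part of the commit):

```python
# Convert an old-style (seconds, nanoseconds) timeout into the single
# float of seconds the new sigtimedwait() signature expects.
def to_seconds(sec, nsec):
    if not 0 <= nsec < 10**9:
        raise ValueError("nanoseconds out of range")
    if sec < 0:
        raise ValueError("timeout must be non-negative")
    return sec + nsec * 1e-9

assert to_seconds(1, 0) == 1.0
assert to_seconds(0, 0) == 0.0
```

This matches the test changes: `(1, 0)` becomes `1.0`, `(0, 0)` becomes `0`, and the three tuple-based negative cases collapse into the single `-1.0` case.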
@@ -497,12 +497,31 @@ class TestStrftime4dyear(_TestStrftimeYear, _Test4dYear):
     pass


+class TestPytime(unittest.TestCase):
+    def test_timespec(self):
+        from _testcapi import pytime_object_to_timespec
+        for obj, timespec in (
+            (0, (0, 0)),
+            (-1, (-1, 0)),
+            (-1.0, (-1, 0)),
+            (-1e-9, (-1, 999999999)),
+            (-1.2, (-2, 800000000)),
+            (1.123456789, (1, 123456789)),
+        ):
+            self.assertEqual(pytime_object_to_timespec(obj), timespec)
+
+        for invalid in (-(2 ** 100), -(2.0 ** 100.0), 2 ** 100, 2.0 ** 100.0):
+            self.assertRaises(OverflowError, pytime_object_to_timespec, invalid)
+
+
 def test_main():
     support.run_unittest(
         TimeTestCase,
         TestLocale,
         TestAsctime4dyear,
-        TestStrftime4dyear)
+        TestStrftime4dyear,
+        TestPytime)

 if __name__ == "__main__":
     test_main()
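The `TestPytime` cases above pin down how a seconds value is split into a C `timespec`: the integral part is rounded toward negative infinity, so `-1.2` maps to `(-2, 800000000)`. A pure-Python sketch of that mapping (the function name is illustrative; the real check lives in `_testcapi.pytime_object_to_timespec`):

```python
import math

def object_to_timespec(obj):
    # Split a seconds value into (sec, nsec) with 0 <= nsec < 10**9,
    # rounding the integral part toward negative infinity, as the test
    # values above require.
    if isinstance(obj, int):
        sec, nsec = obj, 0
    else:
        sec = math.floor(obj)
        nsec = round((obj - sec) * 10**9)
    if not -2**63 <= sec < 2**63:
        # mirrors the OverflowError cases for 2**100 etc.
        raise OverflowError("timestamp out of range for C timespec")
    return (int(sec), nsec)

assert object_to_timespec(0) == (0, 0)
assert object_to_timespec(-1.2) == (-2, 800000000)
assert object_to_timespec(1.123456789) == (1, 123456789)
```

Flooring (rather than truncating toward zero) is what makes the nanosecond field always non-negative, which is the invariant the test table encodes.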
@@ -563,6 +563,18 @@ Non-ascii identifiers
     NAME       'grün'        (2, 0) (2, 4)
     OP         '='           (2, 5) (2, 6)
     STRING     "'green'"     (2, 7) (2, 14)

+Legacy unicode literals:
+
+    >>> dump_tokens("Örter = u'places'\\ngrün = UR'green'")
+    ENCODING   'utf-8'       (0, 0) (0, 0)
+    NAME       'Örter'       (1, 0) (1, 5)
+    OP         '='           (1, 6) (1, 7)
+    STRING     "u'places'"   (1, 8) (1, 17)
+    NEWLINE    '\\n'          (1, 17) (1, 18)
+    NAME       'grün'        (2, 0) (2, 4)
+    OP         '='           (2, 5) (2, 6)
+    STRING     "UR'green'"   (2, 7) (2, 16)
 """

 from test import support
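The doctest added above checks that legacy `u'...'` string prefixes tokenize as a single STRING token again. The same behaviour can be observed directly with the `tokenize` module (the variable names here are just for illustration):

```python
import io
import tokenize

# A u'' prefix is part of the string literal, not a separate NAME token.
src = "x = u'places'\n"
toks = list(tokenize.generate_tokens(io.StringIO(src).readline))
strings = [t.string for t in toks if t.type == tokenize.STRING]
assert strings == ["u'places'"]
```

This is driven by the `StringPrefix` pattern change in `Lib/tokenize.py` at the end of this commit, which folds the legacy prefixes into the string regex.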
@@ -28,6 +28,12 @@ class TestWeakSet(unittest.TestCase):
         # need to keep references to them
         self.items = [ustr(c) for c in ('a', 'b', 'c')]
         self.items2 = [ustr(c) for c in ('x', 'y', 'z')]
+        self.ab_items = [ustr(c) for c in 'ab']
+        self.abcde_items = [ustr(c) for c in 'abcde']
+        self.def_items = [ustr(c) for c in 'def']
+        self.ab_weakset = WeakSet(self.ab_items)
+        self.abcde_weakset = WeakSet(self.abcde_items)
+        self.def_weakset = WeakSet(self.def_items)
         self.letters = [ustr(c) for c in string.ascii_letters]
         self.s = WeakSet(self.items)
         self.d = dict.fromkeys(self.items)

@@ -71,6 +77,11 @@ class TestWeakSet(unittest.TestCase):
             x = WeakSet(self.items + self.items2)
             c = C(self.items2)
             self.assertEqual(self.s.union(c), x)
+            del c
+        self.assertEqual(len(u), len(self.items) + len(self.items2))
+        self.items2.pop()
+        gc.collect()
+        self.assertEqual(len(u), len(self.items) + len(self.items2))

     def test_or(self):
         i = self.s.union(self.items2)

@@ -78,14 +89,19 @@ class TestWeakSet(unittest.TestCase):
         self.assertEqual(self.s | frozenset(self.items2), i)

     def test_intersection(self):
-        i = self.s.intersection(self.items2)
+        s = WeakSet(self.letters)
+        i = s.intersection(self.items2)
         for c in self.letters:
-            self.assertEqual(c in i, c in self.d and c in self.items2)
-        self.assertEqual(self.s, WeakSet(self.items))
+            self.assertEqual(c in i, c in self.items2 and c in self.letters)
+        self.assertEqual(s, WeakSet(self.letters))
         self.assertEqual(type(i), WeakSet)
         for C in set, frozenset, dict.fromkeys, list, tuple:
             x = WeakSet([])
-            self.assertEqual(self.s.intersection(C(self.items2)), x)
+            self.assertEqual(i.intersection(C(self.items)), x)
+        self.assertEqual(len(i), len(self.items2))
+        self.items2.pop()
+        gc.collect()
+        self.assertEqual(len(i), len(self.items2))

     def test_isdisjoint(self):
         self.assertTrue(self.s.isdisjoint(WeakSet(self.items2)))

@@ -116,6 +132,10 @@ class TestWeakSet(unittest.TestCase):
         self.assertEqual(self.s, WeakSet(self.items))
         self.assertEqual(type(i), WeakSet)
         self.assertRaises(TypeError, self.s.symmetric_difference, [[]])
+        self.assertEqual(len(i), len(self.items) + len(self.items2))
+        self.items2.pop()
+        gc.collect()
+        self.assertEqual(len(i), len(self.items) + len(self.items2))

     def test_xor(self):
         i = self.s.symmetric_difference(self.items2)

@@ -123,22 +143,28 @@ class TestWeakSet(unittest.TestCase):
         self.assertEqual(self.s ^ frozenset(self.items2), i)

     def test_sub_and_super(self):
-        pl, ql, rl = map(lambda s: [ustr(c) for c in s], ['ab', 'abcde', 'def'])
-        p, q, r = map(WeakSet, (pl, ql, rl))
-        self.assertTrue(p < q)
-        self.assertTrue(p <= q)
-        self.assertTrue(q <= q)
-        self.assertTrue(q > p)
-        self.assertTrue(q >= p)
-        self.assertFalse(q < r)
-        self.assertFalse(q <= r)
-        self.assertFalse(q > r)
-        self.assertFalse(q >= r)
+        self.assertTrue(self.ab_weakset <= self.abcde_weakset)
+        self.assertTrue(self.abcde_weakset <= self.abcde_weakset)
+        self.assertTrue(self.abcde_weakset >= self.ab_weakset)
+        self.assertFalse(self.abcde_weakset <= self.def_weakset)
+        self.assertFalse(self.abcde_weakset >= self.def_weakset)
         self.assertTrue(set('a').issubset('abc'))
         self.assertTrue(set('abc').issuperset('a'))
         self.assertFalse(set('a').issubset('cbs'))
         self.assertFalse(set('cbs').issuperset('a'))

+    def test_lt(self):
+        self.assertTrue(self.ab_weakset < self.abcde_weakset)
+        self.assertFalse(self.abcde_weakset < self.def_weakset)
+        self.assertFalse(self.ab_weakset < self.ab_weakset)
+        self.assertFalse(WeakSet() < WeakSet())
+
+    def test_gt(self):
+        self.assertTrue(self.abcde_weakset > self.ab_weakset)
+        self.assertFalse(self.abcde_weakset > self.def_weakset)
+        self.assertFalse(self.ab_weakset > self.ab_weakset)
+        self.assertFalse(WeakSet() > WeakSet())
+
     def test_gc(self):
         # Create a nest of cycles to exercise overall ref count check
         s = WeakSet(Foo() for i in range(1000))
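The new `test_lt`/`test_gt` cases above exercise strict subset and superset comparisons on `WeakSet`, whose members vanish as soon as their last strong reference goes away. A small self-contained sketch of both behaviours (the `Ref` class is illustrative; members just need to be weak-referenceable):

```python
import gc
from weakref import WeakSet

class Ref:
    # WeakSet members must support weak references, so a tiny wrapper class
    # stands in for the ustr objects used by the tests.
    def __init__(self, name):
        self.name = name

ab = [Ref(c) for c in 'ab']
abcde = ab + [Ref(c) for c in 'cde']
small, big = WeakSet(ab), WeakSet(abcde)

# Strict and non-strict ordering, as in test_lt / test_gt.
assert small < big and big > small
assert small <= big and not (small < small)

# Dropping the only strong reference to one member shrinks the set.
del abcde[-1]
gc.collect()
assert len(big) == 4
```

The trailing `len()` checks in the rewritten tests serve the same purpose: they confirm the result sets keep their elements alive only as long as the backing lists do.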
@@ -1855,6 +1855,102 @@ def check_issue10777():
 # --------------------------------------------------------------------


+class ElementTreeTest(unittest.TestCase):
+
+    def test_istype(self):
+        self.assertIsInstance(ET.ParseError, type)
+        self.assertIsInstance(ET.QName, type)
+        self.assertIsInstance(ET.ElementTree, type)
+        self.assertIsInstance(ET.Element, type)
+        # XXX issue 14128 with C ElementTree
+        # self.assertIsInstance(ET.TreeBuilder, type)
+        # self.assertIsInstance(ET.XMLParser, type)
+
+    def test_Element_subclass_trivial(self):
+        class MyElement(ET.Element):
+            pass
+
+        mye = MyElement('foo')
+        self.assertIsInstance(mye, ET.Element)
+        self.assertIsInstance(mye, MyElement)
+        self.assertEqual(mye.tag, 'foo')
+
+    def test_Element_subclass_constructor(self):
+        class MyElement(ET.Element):
+            def __init__(self, tag, attrib={}, **extra):
+                super(MyElement, self).__init__(tag + '__', attrib, **extra)
+
+        mye = MyElement('foo', {'a': 1, 'b': 2}, c=3, d=4)
+        self.assertEqual(mye.tag, 'foo__')
+        self.assertEqual(sorted(mye.items()),
+                         [('a', 1), ('b', 2), ('c', 3), ('d', 4)])
+
+    def test_Element_subclass_new_method(self):
+        class MyElement(ET.Element):
+            def newmethod(self):
+                return self.tag
+
+        mye = MyElement('joe')
+        self.assertEqual(mye.newmethod(), 'joe')
+
+
+class TreeBuilderTest(unittest.TestCase):
+
+    sample1 = ('<!DOCTYPE html PUBLIC'
+               ' "-//W3C//DTD XHTML 1.0 Transitional//EN"'
+               ' "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">'
+               '<html>text</html>')
+
+    def test_dummy_builder(self):
+        class BaseDummyBuilder:
+            def close(self):
+                return 42
+
+        class DummyBuilder(BaseDummyBuilder):
+            data = start = end = lambda *a: None
+
+        parser = ET.XMLParser(target=DummyBuilder())
+        parser.feed(self.sample1)
+        self.assertEqual(parser.close(), 42)
+
+        parser = ET.XMLParser(target=BaseDummyBuilder())
+        parser.feed(self.sample1)
+        self.assertEqual(parser.close(), 42)
+
+        parser = ET.XMLParser(target=object())
+        parser.feed(self.sample1)
+        self.assertIsNone(parser.close())
+
+    @unittest.expectedFailure  # XXX issue 14007 with C ElementTree
+    def test_doctype(self):
+        class DoctypeParser:
+            _doctype = None
+
+            def doctype(self, name, pubid, system):
+                self._doctype = (name, pubid, system)
+
+            def close(self):
+                return self._doctype
+
+        parser = ET.XMLParser(target=DoctypeParser())
+        parser.feed(self.sample1)
+
+        self.assertEqual(parser.close(),
+                         ('html', '-//W3C//DTD XHTML 1.0 Transitional//EN',
+                          'http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd'))
+
+
+class NoAcceleratorTest(unittest.TestCase):
+
+    # Test that the C accelerator was not imported for pyET
+    def test_correct_import_pyET(self):
+        self.assertEqual(pyET.Element.__module__, 'xml.etree.ElementTree')
+        self.assertEqual(pyET.SubElement.__module__, 'xml.etree.ElementTree')
+
+# --------------------------------------------------------------------
+
+
 class CleanContext(object):
     """Provide default namespace mapping and path cache."""
     checkwarnings = None

@@ -1873,10 +1969,7 @@ class CleanContext(object):
             ("This method will be removed in future versions.  "
              "Use .+ instead.", DeprecationWarning),
             ("This method will be removed in future versions.  "
-             "Use .+ instead.", PendingDeprecationWarning),
-            # XMLParser.doctype() is deprecated.
-            ("This method of XMLParser is deprecated.  Define doctype.. "
-             "method on the TreeBuilder target.", DeprecationWarning))
+             "Use .+ instead.", PendingDeprecationWarning))
         self.checkwarnings = support.check_warnings(*deprecations, quiet=quiet)

     def __enter__(self):

@@ -1898,19 +1991,18 @@ class CleanContext(object):
         self.checkwarnings.__exit__(*args)


-class TestAcceleratorNotImported(unittest.TestCase):
-    # Test that the C accelerator was not imported for pyET
-    def test_correct_import_pyET(self):
-        self.assertEqual(pyET.Element.__module__, 'xml.etree.ElementTree')
-
-
 def test_main(module=pyET):
     from test import test_xml_etree

     # The same doctests are used for both the Python and the C implementations
     test_xml_etree.ET = module

-    support.run_unittest(TestAcceleratorNotImported)
+    test_classes = [ElementTreeTest, TreeBuilderTest]
+    if module is pyET:
+        # Run the tests specific to the Python implementation
+        test_classes += [NoAcceleratorTest]
+
+    support.run_unittest(*test_classes)

     # XXX the C module should give the same warnings as the Python module
     with CleanContext(quiet=(module is not pyET)):
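The `ElementTreeTest` cases above verify that `ET.Element` is a real type that can be subclassed and extended, which both the Python and C implementations must support. The same pattern, outside the test harness:

```python
import xml.etree.ElementTree as ET

# Subclassing Element works whether the pure-Python or the C-accelerated
# implementation is in use.
class MyElement(ET.Element):
    def newmethod(self):
        return self.tag

e = MyElement('joe')
assert isinstance(e, ET.Element)
assert e.tag == 'joe'
assert e.newmethod() == 'joe'
```

Making `Element` a subclassable type (rather than a factory function) is what these tests pin down; the commented-out `TreeBuilder`/`XMLParser` checks track the cases the C accelerator did not yet handle (issue 14128).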
@@ -46,14 +46,21 @@ class MiscTests(unittest.TestCase):
         finally:
             data = None

+@unittest.skipUnless(cET, 'requires _elementtree')
+class TestAliasWorking(unittest.TestCase):
+    # Test that the cET alias module is alive
+    def test_alias_working(self):
+        e = cET_alias.Element('foo')
+        self.assertEqual(e.tag, 'foo')
+
 @unittest.skipUnless(cET, 'requires _elementtree')
 class TestAcceleratorImported(unittest.TestCase):
     # Test that the C accelerator was imported, as expected
     def test_correct_import_cET(self):
         self.assertEqual(cET.Element.__module__, '_elementtree')
         self.assertEqual(cET.SubElement.__module__, '_elementtree')

     def test_correct_import_cET_alias(self):
         self.assertEqual(cET_alias.Element.__module__, '_elementtree')
         self.assertEqual(cET_alias.SubElement.__module__, '_elementtree')


 def test_main():

@@ -61,13 +68,15 @@ def test_main():

     # Run the tests specific to the C implementation
     support.run_doctest(test_xml_etree_c, verbosity=True)

-    support.run_unittest(MiscTests, TestAcceleratorImported)
+    support.run_unittest(
+        MiscTests,
+        TestAliasWorking,
+        TestAcceleratorImported
+    )

     # Run the same test suite as the Python module
     test_xml_etree.test_main(module=cET)
+    # Exercise the deprecated alias
+    test_xml_etree.test_main(module=cET_alias)


 if __name__ == '__main__':
     test_main()
152 Lib/threading.py
@@ -34,40 +34,6 @@ TIMEOUT_MAX = _thread.TIMEOUT_MAX
 del _thread


-# Debug support (adapted from ihooks.py).
-
-_VERBOSE = False
-
-if __debug__:
-
-    class _Verbose(object):
-
-        def __init__(self, verbose=None):
-            if verbose is None:
-                verbose = _VERBOSE
-            self._verbose = verbose
-
-        def _note(self, format, *args):
-            if self._verbose:
-                format = format % args
-                # Issue #4188: calling current_thread() can incur an infinite
-                # recursion if it has to create a DummyThread on the fly.
-                ident = get_ident()
-                try:
-                    name = _active[ident].name
-                except KeyError:
-                    name = "<OS thread %d>" % ident
-                format = "%s: %s\n" % (name, format)
-                _sys.stderr.write(format)
-
-else:
-    # Disable this when using "python -O"
-    class _Verbose(object):
-        def __init__(self, verbose=None):
-            pass
-        def _note(self, *args):
-            pass
-
 # Support for profile and trace hooks

 _profile_hook = None

@@ -85,17 +51,14 @@ def settrace(func):

 Lock = _allocate_lock

-def RLock(verbose=None, *args, **kwargs):
-    if verbose is None:
-        verbose = _VERBOSE
-    if (__debug__ and verbose) or _CRLock is None:
-        return _PyRLock(verbose, *args, **kwargs)
+def RLock(*args, **kwargs):
+    if _CRLock is None:
+        return _PyRLock(*args, **kwargs)
     return _CRLock(*args, **kwargs)

-class _RLock(_Verbose):
+class _RLock:

-    def __init__(self, verbose=None):
-        _Verbose.__init__(self, verbose)
+    def __init__(self):
         self._block = _allocate_lock()
         self._owner = None
         self._count = 0

@@ -113,18 +76,11 @@ class _RLock(_Verbose):
         me = get_ident()
         if self._owner == me:
             self._count = self._count + 1
-            if __debug__:
-                self._note("%s.acquire(%s): recursive success", self, blocking)
             return 1
         rc = self._block.acquire(blocking, timeout)
         if rc:
             self._owner = me
             self._count = 1
-            if __debug__:
-                self._note("%s.acquire(%s): initial success", self, blocking)
-        else:
-            if __debug__:
-                self._note("%s.acquire(%s): failure", self, blocking)
         return rc

     __enter__ = acquire

@@ -136,11 +92,6 @@ class _RLock(_Verbose):
         if not count:
             self._owner = None
             self._block.release()
-            if __debug__:
-                self._note("%s.release(): final release", self)
-        else:
-            if __debug__:
-                self._note("%s.release(): non-final release", self)

     def __exit__(self, t, v, tb):
         self.release()

@@ -150,12 +101,8 @@ class _RLock(_Verbose):
     def _acquire_restore(self, state):
         self._block.acquire()
         self._count, self._owner = state
-        if __debug__:
-            self._note("%s._acquire_restore()", self)

     def _release_save(self):
-        if __debug__:
-            self._note("%s._release_save()", self)
         if self._count == 0:
             raise RuntimeError("cannot release un-acquired lock")
         count = self._count

@@ -171,10 +118,9 @@ class _RLock(_Verbose):
 _PyRLock = _RLock


-class Condition(_Verbose):
+class Condition:

-    def __init__(self, lock=None, verbose=None):
-        _Verbose.__init__(self, verbose)
+    def __init__(self, lock=None):
         if lock is None:
             lock = RLock()
         self._lock = lock

@@ -233,23 +179,16 @@ class Condition(_Verbose):
             if timeout is None:
                 waiter.acquire()
                 gotit = True
-                if __debug__:
-                    self._note("%s.wait(): got it", self)
             else:
                 if timeout > 0:
                     gotit = waiter.acquire(True, timeout)
                 else:
                     gotit = waiter.acquire(False)
                 if not gotit:
-                    if __debug__:
-                        self._note("%s.wait(%s): timed out", self, timeout)
                     try:
                         self._waiters.remove(waiter)
                     except ValueError:
                         pass
-                else:
-                    if __debug__:
-                        self._note("%s.wait(%s): got it", self, timeout)
             return gotit
         finally:
             self._acquire_restore(saved_state)

@@ -265,19 +204,9 @@ class Condition(_Verbose):
                 else:
                     waittime = endtime - _time()
                     if waittime <= 0:
-                        if __debug__:
-                            self._note("%s.wait_for(%r, %r): Timed out.",
-                                       self, predicate, timeout)
                         break
-                if __debug__:
-                    self._note("%s.wait_for(%r, %r): Waiting with timeout=%s.",
-                               self, predicate, timeout, waittime)
                 self.wait(waittime)
                 result = predicate()
-        else:
-            if __debug__:
-                self._note("%s.wait_for(%r, %r): Success.",
-                           self, predicate, timeout)
         return result

     def notify(self, n=1):

@@ -286,11 +215,7 @@ class Condition(_Verbose):
         __waiters = self._waiters
         waiters = __waiters[:n]
         if not waiters:
-            if __debug__:
-                self._note("%s.notify(): no waiters", self)
             return
-        self._note("%s.notify(): notifying %d waiter%s", self, n,
-                   n!=1 and "s" or "")
         for waiter in waiters:
             waiter.release()
             try:

@@ -304,14 +229,13 @@ class Condition(_Verbose):
     notifyAll = notify_all


-class Semaphore(_Verbose):
+class Semaphore:

     # After Tim Peters' semaphore class, but not quite the same (no maximum)

-    def __init__(self, value=1, verbose=None):
+    def __init__(self, value=1):
         if value < 0:
             raise ValueError("semaphore initial value must be >= 0")
-        _Verbose.__init__(self, verbose)
         self._cond = Condition(Lock())
         self._value = value

@@ -324,9 +248,6 @@ class Semaphore(_Verbose):
         while self._value == 0:
             if not blocking:
                 break
-            if __debug__:
-                self._note("%s.acquire(%s): blocked waiting, value=%s",
-                           self, blocking, self._value)
             if timeout is not None:
                 if endtime is None:
                     endtime = _time() + timeout

@@ -337,9 +258,6 @@ class Semaphore(_Verbose):
                 self._cond.wait(timeout)
         else:
             self._value = self._value - 1
-            if __debug__:
-                self._note("%s.acquire: success, value=%s",
-                           self, self._value)
             rc = True
         self._cond.release()
         return rc

@@ -349,9 +267,6 @@ class Semaphore(_Verbose):
     def release(self):
         self._cond.acquire()
         self._value = self._value + 1
-        if __debug__:
-            self._note("%s.release: success, value=%s",
-                       self, self._value)
         self._cond.notify()
         self._cond.release()

@@ -361,8 +276,8 @@ class Semaphore(_Verbose):

 class BoundedSemaphore(Semaphore):
     """Semaphore that checks that # releases is <= # acquires"""
-    def __init__(self, value=1, verbose=None):
-        Semaphore.__init__(self, value, verbose)
+    def __init__(self, value=1):
+        Semaphore.__init__(self, value)
         self._initial_value = value

     def release(self):

@@ -371,12 +286,11 @@ class BoundedSemaphore(Semaphore):
         return Semaphore.release(self)


-class Event(_Verbose):
+class Event:

     # After Tim Peters' event class (without is_posted())

-    def __init__(self, verbose=None):
-        _Verbose.__init__(self, verbose)
+    def __init__(self):
         self._cond = Condition(Lock())
         self._flag = False

@@ -426,13 +340,13 @@ class Event(_Verbose):
 # since the previous cycle.  In addition, a 'resetting' state exists which is
 # similar to 'draining' except that threads leave with a BrokenBarrierError,
 # and a 'broken' state in which all threads get the exception.
-class Barrier(_Verbose):
+class Barrier:
     """
     Barrier.  Useful for synchronizing a fixed number of threads
     at known synchronization points.  Threads block on 'wait()' and are
     released simultaneously once they have all made that call.
     """
-    def __init__(self, parties, action=None, timeout=None, verbose=None):
+    def __init__(self, parties, action=None, timeout=None):
         """
         Create a barrier, initialised to 'parties' threads.
         'action' is a callable which, when supplied, will be called

@@ -441,7 +355,6 @@ class Barrier(_Verbose):
         If a 'timeout' is provided, it is used as the default for
         all subsequent 'wait()' calls.
         """
-        _Verbose.__init__(self, verbose)
         self._cond = Condition(Lock())
         self._action = action
         self._timeout = timeout

@@ -602,7 +515,7 @@ _dangling = WeakSet()

 # Main class for threads

-class Thread(_Verbose):
+class Thread:

     __initialized = False
     # Need to store a reference to sys.exc_info for printing

@@ -615,9 +528,8 @@ class Thread(_Verbose):
     #XXX __exc_clear = _sys.exc_clear

     def __init__(self, group=None, target=None, name=None,
-                 args=(), kwargs=None, verbose=None, *, daemon=None):
+                 args=(), kwargs=None, *, daemon=None):
         assert group is None, "group argument must be None for now"
-        _Verbose.__init__(self, verbose)
         if kwargs is None:
             kwargs = {}
         self._target = target

@@ -664,8 +576,6 @@ class Thread(_Verbose):

         if self._started.is_set():
             raise RuntimeError("threads can only be started once")
-        if __debug__:
-            self._note("%s.start(): starting thread", self)
         with _active_limbo_lock:
             _limbo[self] = self
         try:

@@ -715,24 +625,17 @@ class Thread(_Verbose):
         with _active_limbo_lock:
             _active[self._ident] = self
             del _limbo[self]
-            if __debug__:
-                self._note("%s._bootstrap(): thread started", self)

         if _trace_hook:
-            self._note("%s._bootstrap(): registering trace hook", self)
             _sys.settrace(_trace_hook)
         if _profile_hook:
-            self._note("%s._bootstrap(): registering profile hook", self)
             _sys.setprofile(_profile_hook)

         try:
             self.run()
         except SystemExit:
-            if __debug__:
-                self._note("%s._bootstrap(): raised SystemExit", self)
             pass
         except:
-            if __debug__:
-                self._note("%s._bootstrap(): unhandled exception", self)
             # If sys.stderr is no more (most likely from interpreter
             # shutdown) use self._stderr. Otherwise still use sys (as in
             # _sys) in case sys.stderr was redefined since the creation of

@@ -763,9 +666,6 @@ class Thread(_Verbose):
             # hog; deleting everything else is just for thoroughness
             finally:
                 del exc_type, exc_value, exc_tb
-        else:
-            if __debug__:
-                self._note("%s._bootstrap(): normal return", self)
         finally:
             # Prevent a race in
             # test_threading.test_no_refcycle_through_target when

@@ -832,29 +732,18 @@ class Thread(_Verbose):
         if self is current_thread():
             raise RuntimeError("cannot join current thread")

-        if __debug__:
-            if not self._stopped:
-                self._note("%s.join(): waiting until thread stops", self)
-
         self._block.acquire()
         try:
             if timeout is None:
                 while not self._stopped:
                     self._block.wait()
-                if __debug__:
-                    self._note("%s.join(): thread stopped", self)
             else:
                 deadline = _time() + timeout
                 while not self._stopped:
                     delay = deadline - _time()
                     if delay <= 0:
-                        if __debug__:
-                            self._note("%s.join(): timed out", self)
                         break
                     self._block.wait(delay)
-                else:
-                    if __debug__:
-                        self._note("%s.join(): thread stopped", self)
         finally:
             self._block.release()

@@ -947,14 +836,9 @@ class _MainThread(Thread):
     def _exitfunc(self):
         self._stop()
         t = _pickSomeNonDaemonThread()
-        if t:
-            if __debug__:
-                self._note("%s: waiting for other threads", self)
-            while t:
-                t.join()
-                t = _pickSomeNonDaemonThread()
-        if __debug__:
-            self._note("%s: exiting", self)
+        while t:
+            t.join()
+            t = _pickSomeNonDaemonThread()
         self._delete()

 def _pickSomeNonDaemonThread():
@@ -127,6 +127,8 @@ Floatnumber = group(Pointfloat, Expfloat)
Imagnumber = group(r'[0-9]+[jJ]', Floatnumber + r'[jJ]')
Number = group(Imagnumber, Floatnumber, Intnumber)

StringPrefix = r'(?:[uU][rR]?|[bB][rR]|[rR][bB]|[rR]|[uU])?'

# Tail end of ' string.
Single = r"[^'\\]*(?:\\.[^'\\]*)*'"
# Tail end of " string.
@@ -135,10 +137,10 @@ Double = r'[^"\\]*(?:\\.[^"\\]*)*"'
Single3 = r"[^'\\]*(?:(?:\\.|'(?!''))[^'\\]*)*'''"
# Tail end of """ string.
Double3 = r'[^"\\]*(?:(?:\\.|"(?!""))[^"\\]*)*"""'
Triple = group("[bB]?[rR]?'''", '[bB]?[rR]?"""')
Triple = group(StringPrefix + "'''", StringPrefix + '"""')
# Single-line ' or " string.
String = group(r"[bB]?[rR]?'[^\n'\\]*(?:\\.[^\n'\\]*)*'",
               r'[bB]?[rR]?"[^\n"\\]*(?:\\.[^\n"\\]*)*"')
String = group(StringPrefix + r"'[^\n'\\]*(?:\\.[^\n'\\]*)*'",
               StringPrefix + r'"[^\n"\\]*(?:\\.[^\n"\\]*)*"')

# Because of leftmost-then-longest match semantics, be sure to put the
# longest operators first (e.g., if = came before ==, == would get
@@ -156,9 +158,9 @@ PlainToken = group(Number, Funny, String, Name)
Token = Ignore + PlainToken

# First (or only) line of ' or " string.
ContStr = group(r"[bB]?[rR]?'[^\n'\\]*(?:\\.[^\n'\\]*)*" +
ContStr = group(StringPrefix + r"'[^\n'\\]*(?:\\.[^\n'\\]*)*" +
                group("'", r'\\\r?\n'),
                r'[bB]?[rR]?"[^\n"\\]*(?:\\.[^\n"\\]*)*' +
                StringPrefix + r'"[^\n"\\]*(?:\\.[^\n"\\]*)*' +
                group('"', r'\\\r?\n'))
PseudoExtras = group(r'\\\r?\n', Comment, Triple)
PseudoToken = Whitespace + group(PseudoExtras, Number, Funny, ContStr, Name)
@@ -170,27 +172,49 @@ endpats = {"'": Single, '"': Double,
           "'''": Single3, '"""': Double3,
           "r'''": Single3, 'r"""': Double3,
           "b'''": Single3, 'b"""': Double3,
           "br'''": Single3, 'br"""': Double3,
           "R'''": Single3, 'R"""': Double3,
           "B'''": Single3, 'B"""': Double3,
           "br'''": Single3, 'br"""': Double3,
           "bR'''": Single3, 'bR"""': Double3,
           "Br'''": Single3, 'Br"""': Double3,
           "BR'''": Single3, 'BR"""': Double3,
           'r': None, 'R': None, 'b': None, 'B': None}
           "rb'''": Single3, 'rb"""': Double3,
           "Rb'''": Single3, 'Rb"""': Double3,
           "rB'''": Single3, 'rB"""': Double3,
           "RB'''": Single3, 'RB"""': Double3,
           "u'''": Single3, 'u"""': Double3,
           "ur'''": Single3, 'ur"""': Double3,
           "R'''": Single3, 'R"""': Double3,
           "U'''": Single3, 'U"""': Double3,
           "uR'''": Single3, 'uR"""': Double3,
           "Ur'''": Single3, 'Ur"""': Double3,
           "UR'''": Single3, 'UR"""': Double3,
           'r': None, 'R': None, 'b': None, 'B': None,
           'u': None, 'U': None}

triple_quoted = {}
for t in ("'''", '"""',
          "r'''", 'r"""', "R'''", 'R"""',
          "b'''", 'b"""', "B'''", 'B"""',
          "br'''", 'br"""', "Br'''", 'Br"""',
          "bR'''", 'bR"""', "BR'''", 'BR"""'):
          "bR'''", 'bR"""', "BR'''", 'BR"""',
          "rb'''", 'rb"""', "rB'''", 'rB"""',
          "Rb'''", 'Rb"""', "RB'''", 'RB"""',
          "u'''", 'u"""', "U'''", 'U"""',
          "ur'''", 'ur"""', "Ur'''", 'Ur"""',
          "uR'''", 'uR"""', "UR'''", 'UR"""'):
    triple_quoted[t] = t
single_quoted = {}
for t in ("'", '"',
          "r'", 'r"', "R'", 'R"',
          "b'", 'b"', "B'", 'B"',
          "br'", 'br"', "Br'", 'Br"',
          "bR'", 'bR"', "BR'", 'BR"' ):
          "bR'", 'bR"', "BR'", 'BR"' ,
          "rb'", 'rb"', "rB'", 'rB"',
          "Rb'", 'Rb"', "RB'", 'RB"' ,
          "u'", 'u"', "U'", 'U"',
          "ur'", 'ur"', "Ur'", 'Ur"',
          "uR'", 'uR"', "UR'", 'UR"' ):
    single_quoted[t] = t

tabsize = 8
@@ -17,6 +17,7 @@ pulldom -- DOM builder supporting on-demand tree-building for selected

class Node:
    """Class giving the NodeType constants."""
    __slots__ = ()

    # DOM implementations may use this as a base class for their own
    # Node implementations.  If they don't, the constants defined here
@@ -2,8 +2,6 @@
directly. Instead, the functions getDOMImplementation and
registerDOMImplementation should be imported from xml.dom."""

from xml.dom.minicompat import *  # isinstance, StringTypes

# This is a list of well-known implementations.  Well-known names
# should be published by posting to xml-sig@python.org, and are
# subsequently recorded in this file.
@@ -33,8 +33,6 @@ from xml.parsers import expat
from xml.dom.minidom import _append_child, _set_attribute_node
from xml.dom.NodeFilter import NodeFilter

from xml.dom.minicompat import *

TEXT_NODE = Node.TEXT_NODE
CDATA_SECTION_NODE = Node.CDATA_SECTION_NODE
DOCUMENT_NODE = Node.DOCUMENT_NODE
@@ -755,7 +753,7 @@ class Namespaces:
            a = minidom.Attr("xmlns", XMLNS_NAMESPACE,
                             "xmlns", EMPTY_PREFIX)
            a.value = uri
            a.ownerDocuemnt = self.document
            a.ownerDocument = self.document
            _set_attribute_node(node, a)
        del self._ns_ordered_prefixes[:]
@@ -62,10 +62,7 @@ class Node(xml.dom.Node):
        return writer.stream.getvalue()

    def hasChildNodes(self):
        if self.childNodes:
            return True
        else:
            return False
        return bool(self.childNodes)

    def _get_childNodes(self):
        return self.childNodes
@@ -723,12 +720,16 @@ class Element(Node):
        Node.unlink(self)

    def getAttribute(self, attname):
        if self._attrs is None:
            return ""
        try:
            return self._attrs[attname].value
        except KeyError:
            return ""

    def getAttributeNS(self, namespaceURI, localName):
        if self._attrsNS is None:
            return ""
        try:
            return self._attrsNS[(namespaceURI, localName)].value
        except KeyError:
@@ -926,6 +927,7 @@ class Childless:
    """Mixin that makes childless-ness easy to implement and avoids
    the complexity of the Node methods that deal with children.
    """
    __slots__ = ()

    attributes = None
    childNodes = EmptyNodeList()
@@ -1063,6 +1065,8 @@ defproperty(CharacterData, "length", doc="Length of the string data.")


class Text(CharacterData):
    __slots__ = ()

    nodeType = Node.TEXT_NODE
    nodeName = "#text"
    attributes = None
@@ -1184,6 +1188,8 @@ class Comment(CharacterData):


class CDATASection(Text):
    __slots__ = ()

    nodeType = Node.CDATA_SECTION_NODE
    nodeName = "#cdata-section"

@@ -1262,8 +1268,7 @@ defproperty(ReadOnlySequentialNamedNodeMap, "length",
class Identified:
    """Mix-in class that supports the publicId and systemId attributes."""

    # XXX this does not work, this is an old-style class
    # __slots__ = 'publicId', 'systemId'
    __slots__ = 'publicId', 'systemId'

    def _identified_mixin_init(self, publicId, systemId):
        self.publicId = publicId
@@ -101,7 +101,6 @@ import sys
import re
import warnings


class _SimpleElementPath:
    # emulate pre-1.2 find/findtext/findall behaviour
    def find(self, element, tag, namespaces=None):
@@ -1512,24 +1511,30 @@ class XMLParser:
        self.target = self._target = target
        self._error = expat.error
        self._names = {} # name memo cache
        # callbacks
        # main callbacks
        parser.DefaultHandlerExpand = self._default
        parser.StartElementHandler = self._start
        parser.EndElementHandler = self._end
        parser.CharacterDataHandler = self._data
        # optional callbacks
        parser.CommentHandler = self._comment
        parser.ProcessingInstructionHandler = self._pi
        if hasattr(target, 'start'):
            parser.StartElementHandler = self._start
        if hasattr(target, 'end'):
            parser.EndElementHandler = self._end
        if hasattr(target, 'data'):
            parser.CharacterDataHandler = target.data
        # miscellaneous callbacks
        if hasattr(target, 'comment'):
            parser.CommentHandler = target.comment
        if hasattr(target, 'pi'):
            parser.ProcessingInstructionHandler = target.pi
        # let expat do the buffering, if supported
        try:
            self._parser.buffer_text = 1
            parser.buffer_text = 1
        except AttributeError:
            pass
        # use new-style attribute handling, if supported
        try:
            self._parser.ordered_attributes = 1
            self._parser.specified_attributes = 1
            parser.StartElementHandler = self._start_list
            parser.ordered_attributes = 1
            parser.specified_attributes = 1
            if hasattr(target, 'start'):
                parser.StartElementHandler = self._start_list
        except AttributeError:
            pass
        self._doctype = None
@@ -1573,44 +1578,29 @@ class XMLParser:
            attrib[fixname(attrib_in[i])] = attrib_in[i+1]
        return self.target.start(tag, attrib)

    def _data(self, text):
        return self.target.data(text)

    def _end(self, tag):
        return self.target.end(self._fixname(tag))

    def _comment(self, data):
        try:
            comment = self.target.comment
        except AttributeError:
            pass
        else:
            return comment(data)

    def _pi(self, target, data):
        try:
            pi = self.target.pi
        except AttributeError:
            pass
        else:
            return pi(target, data)

    def _default(self, text):
        prefix = text[:1]
        if prefix == "&":
            # deal with undefined entities
            try:
                self.target.data(self.entity[text[1:-1]])
                data_handler = self.target.data
            except AttributeError:
                return
            try:
                data_handler(self.entity[text[1:-1]])
            except KeyError:
                from xml.parsers import expat
                err = expat.error(
                    "undefined entity %s: line %d, column %d" %
                    (text, self._parser.ErrorLineNumber,
                     self._parser.ErrorColumnNumber)
                    (text, self.parser.ErrorLineNumber,
                     self.parser.ErrorColumnNumber)
                    )
                err.code = 11 # XML_ERROR_UNDEFINED_ENTITY
                err.lineno = self._parser.ErrorLineNumber
                err.offset = self._parser.ErrorColumnNumber
                err.lineno = self.parser.ErrorLineNumber
                err.offset = self.parser.ErrorColumnNumber
                raise err
        elif prefix == "<" and text[:9] == "<!DOCTYPE":
            self._doctype = [] # inside a doctype declaration
@@ -1637,7 +1627,7 @@ class XMLParser:
                    pubid = pubid[1:-1]
                if hasattr(self.target, "doctype"):
                    self.target.doctype(name, pubid, system[1:-1])
                elif self.doctype is not self._XMLParser__doctype:
                elif self.doctype != self._XMLParser__doctype:
                    # warn about deprecated call
                    self._XMLParser__doctype(name, pubid, system[1:-1])
                    self.doctype(name, pubid, system[1:-1])
@@ -1668,7 +1658,7 @@ class XMLParser:

    def feed(self, data):
        try:
            self._parser.Parse(data, 0)
            self.parser.Parse(data, 0)
        except self._error as v:
            self._raiseerror(v)

@@ -1680,12 +1670,19 @@ class XMLParser:

    def close(self):
        try:
            self._parser.Parse("", 1) # end of data
            self.parser.Parse("", 1) # end of data
        except self._error as v:
            self._raiseerror(v)
        tree = self.target.close()
        del self.target, self._parser # get rid of circular references
        return tree
        try:
            close_handler = self.target.close
        except AttributeError:
            pass
        else:
            return close_handler()
        finally:
            # get rid of circular references
            del self.parser, self._parser
            del self.target, self._target


# Import the C accelerators
@@ -1,4 +1,4 @@
"""XML-RPC Servers.
r"""XML-RPC Servers.

This module can be used to create simple XML-RPC servers
by creating a server and either installing functions, a
@@ -371,6 +371,7 @@ Yannick Gingras
Michael Goderbauer
Christoph Gohlke
Tim Golden
Guilherme Gonçalves
Tiago Gonçalves
Chris Gonnerman
David Goodger
52 Misc/NEWS
@@ -2,14 +2,46 @@
Python News
+++++++++++

What's New in Python 3.3 Alpha 1?
=================================
What's New in Python 3.3.0 Alpha 2?
===================================

*Release date: XX-XXX-20XX*
*Release date: XXXX-XX-XX*

Core and Builtins
-----------------

- Issue #14205: dict lookup raises a RuntimeError if the dict is modified
  during a lookup.

Library
-------

- Issue #14168: Check for presence of Element._attrs in minidom before
  accessing it.

- Issue #12328: Fix multiprocessing's use of overlapped I/O on Windows.
  Also, add a multiprocessing.connection.wait(rlist, timeout=None) function
  for polling multiple objects at once.  Patch by sbt.
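As a hedged sketch of the `multiprocessing.connection.wait()` API mentioned in the Issue #12328 entry above (the pipe setup is illustrative, not from the patch):

```python
from multiprocessing.connection import Pipe, wait

# Two one-way pipes; only the second will have data ready.
r1, w1 = Pipe(duplex=False)
r2, w2 = Pipe(duplex=False)
w2.send("hello")

# wait() returns the subset of objects ready for reading within the timeout.
ready = wait([r1, r2], timeout=1.0)
print(r2 in ready, r1 in ready)  # True False
```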

- Issue #13719: Make the distutils and packaging upload commands aware of
  bdist_msi products.

- Issue #14007: Accept incomplete TreeBuilder objects (missing start, end,
  data or close method) for the Python implementation as well.
  Drop the no-op TreeBuilder().xml() method from the C implementation.
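A minimal sketch of what the Issue #14007 entry above permits: a parser target may omit any of the handler methods (the `TagCollector` class here is a hypothetical illustration):

```python
import xml.etree.ElementTree as ET

class TagCollector:
    # Deliberately incomplete target: only ``start`` is defined;
    # end, data and close handlers are simply never registered.
    def __init__(self):
        self.tags = []

    def start(self, tag, attrib):
        self.tags.append(tag)

target = TagCollector()
parser = ET.XMLParser(target=target)
parser.feed("<root><a/><b/></root>")
parser.close()
print(target.tags)  # ['root', 'a', 'b']
```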

What's New in Python 3.3.0 Alpha 1?
===================================

*Release date: 05-Mar-2012*

Core and Builtins
-----------------

- Issue #14172: Fix reference leak when marshalling a buffer-like object
  (other than a bytes object).

- Issue #13521: dict.setdefault() now does only one lookup for the given key,
  making it "atomic" for many purposes.  Patch by Filip Gruszczyński.

@@ -508,6 +540,20 @@ Core and Builtins
Library
-------

- Issue #14195: An issue that caused weakref.WeakSet instances to incorrectly
  return True for a WeakSet instance 'a' in both 'a < a' and 'a > a' has been
  fixed.
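The fixed behaviour from the Issue #14195 entry above can be checked with a short sketch (the `Anchor` class is a placeholder so the instances can be weakly referenced):

```python
import weakref

class Anchor:
    """Placeholder class whose instances support weak references."""

objs = [Anchor(), Anchor()]   # keep strong references alive
a = weakref.WeakSet(objs)

# A set is never a *proper* subset or superset of itself.
print(a < a, a > a)    # False False
print(a <= a, a >= a)  # True True
```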

- Issue #14166: Pickler objects now have an optional ``dispatch_table``
  attribute which allows setting custom per-pickler reduction functions.
  Patch by sbt.
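A hedged sketch of the per-pickler dispatch table described in the entry above (the `Point` class and `reduce_point` helper are illustrative, not part of the patch):

```python
import copyreg
import io
import pickle

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def reduce_point(p):
    # Custom reduction: rebuild the Point from its coordinates on unpickling.
    return (Point, (p.x, p.y))

buf = io.BytesIO()
pickler = pickle.Pickler(buf)
# Start from the global copyreg table, then override only for this pickler.
pickler.dispatch_table = copyreg.dispatch_table.copy()
pickler.dispatch_table[Point] = reduce_point
pickler.dump(Point(1, 2))

restored = pickle.loads(buf.getvalue())
print(restored.x, restored.y)  # 1 2
```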

- Issue #14177: marshal.loads() now raises TypeError when given a unicode
  string.  Patch by Guilherme Gonçalves.

- Issue #13550: Remove the debug machinery from the threading module: remove
  verbose arguments from all threading classes and functions.

- Issue #14159: Fix the len() of weak containers (WeakSet, WeakKeyDictionary,
  WeakValueDictionary) to return a better approximation when some objects
  are dead or dying.  Moreover, the implementation is now O(1) rather than
@@ -39,7 +39,7 @@

%define name python
#--start constants--
%define version 3.3a0
%define version 3.3.0a1
%define libvers 3.3
#--end constants--
%define release 1pydotorg
@@ -310,6 +310,38 @@
###    fun:MD5_Update
###}

# Fedora's package "openssl-1.0.1-0.1.beta2.fc17.x86_64" on x86_64
# See http://bugs.python.org/issue14171
{
   openssl 1.0.1 prng 1
   Memcheck:Cond
   fun:bcmp
   fun:fips_get_entropy
   fun:FIPS_drbg_instantiate
   fun:RAND_init_fips
   fun:OPENSSL_init_library
   fun:SSL_library_init
   fun:init_hashlib
}

{
   openssl 1.0.1 prng 2
   Memcheck:Cond
   fun:fips_get_entropy
   fun:FIPS_drbg_instantiate
   fun:RAND_init_fips
   fun:OPENSSL_init_library
   fun:SSL_library_init
   fun:init_hashlib
}

{
   openssl 1.0.1 prng 3
   Memcheck:Value8
   fun:_x86_64_AES_encrypt_compact
   fun:AES_encrypt
}

#
# All of these problems come from using test_socket_ssl
#
|
|
@ -191,7 +191,7 @@ list_join(PyObject* list)
|
|||
}
|
||||
|
||||
/* -------------------------------------------------------------------- */
|
||||
/* the element type */
|
||||
/* the Element type */
|
||||
|
||||
typedef struct {
|
||||
|
||||
|
@ -236,10 +236,10 @@ static PyTypeObject Element_Type;
|
|||
#define Element_CheckExact(op) (Py_TYPE(op) == &Element_Type)
|
||||
|
||||
/* -------------------------------------------------------------------- */
|
||||
/* element constructor and destructor */
|
||||
/* Element constructors and destructor */
|
||||
|
||||
LOCAL(int)
|
||||
element_new_extra(ElementObject* self, PyObject* attrib)
|
||||
create_extra(ElementObject* self, PyObject* attrib)
|
||||
{
|
||||
self->extra = PyObject_Malloc(sizeof(ElementObjectExtra));
|
||||
if (!self->extra)
|
||||
|
@ -259,7 +259,7 @@ element_new_extra(ElementObject* self, PyObject* attrib)
|
|||
}
|
||||
|
||||
LOCAL(void)
|
||||
element_dealloc_extra(ElementObject* self)
|
||||
dealloc_extra(ElementObject* self)
|
||||
{
|
||||
int i;
|
||||
|
||||
|
@ -274,8 +274,11 @@ element_dealloc_extra(ElementObject* self)
|
|||
PyObject_Free(self->extra);
|
||||
}
|
||||
|
||||
/* Convenience internal function to create new Element objects with the given
|
||||
* tag and attributes.
|
||||
*/
|
||||
LOCAL(PyObject*)
|
||||
element_new(PyObject* tag, PyObject* attrib)
|
||||
create_new_element(PyObject* tag, PyObject* attrib)
|
||||
{
|
||||
ElementObject* self;
|
||||
|
||||
|
@ -290,16 +293,10 @@ element_new(PyObject* tag, PyObject* attrib)
|
|||
self->extra = NULL;
|
||||
|
||||
if (attrib != Py_None) {
|
||||
|
||||
if (element_new_extra(self, attrib) < 0) {
|
||||
if (create_extra(self, attrib) < 0) {
|
||||
PyObject_Del(self);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
self->extra->length = 0;
|
||||
self->extra->allocated = STATIC_CHILDREN;
|
||||
self->extra->children = self->extra->_children;
|
||||
|
||||
}
|
||||
|
||||
Py_INCREF(tag);
|
||||
|
@ -316,6 +313,86 @@ element_new(PyObject* tag, PyObject* attrib)
|
|||
return (PyObject*) self;
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
element_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
|
||||
{
|
||||
ElementObject *e = (ElementObject *)type->tp_alloc(type, 0);
|
||||
if (e != NULL) {
|
||||
Py_INCREF(Py_None);
|
||||
e->tag = Py_None;
|
||||
|
||||
Py_INCREF(Py_None);
|
||||
e->text = Py_None;
|
||||
|
||||
Py_INCREF(Py_None);
|
||||
e->tail = Py_None;
|
||||
|
||||
e->extra = NULL;
|
||||
}
|
||||
return (PyObject *)e;
|
||||
}
|
||||
|
||||
static int
|
||||
element_init(PyObject *self, PyObject *args, PyObject *kwds)
|
||||
{
|
||||
PyObject *tag;
|
||||
PyObject *tmp;
|
||||
PyObject *attrib = NULL;
|
||||
ElementObject *self_elem;
|
||||
|
||||
if (!PyArg_ParseTuple(args, "O|O!:Element", &tag, &PyDict_Type, &attrib))
|
||||
return -1;
|
||||
|
||||
if (attrib || kwds) {
|
||||
attrib = (attrib) ? PyDict_Copy(attrib) : PyDict_New();
|
||||
if (!attrib)
|
||||
return -1;
|
||||
if (kwds)
|
||||
PyDict_Update(attrib, kwds);
|
||||
} else {
|
||||
Py_INCREF(Py_None);
|
||||
attrib = Py_None;
|
||||
}
|
||||
|
||||
self_elem = (ElementObject *)self;
|
||||
|
||||
/* Use None for empty dictionaries */
|
||||
if (PyDict_CheckExact(attrib) && PyDict_Size(attrib) == 0) {
|
||||
Py_INCREF(Py_None);
|
||||
attrib = Py_None;
|
||||
}
|
||||
|
||||
if (attrib != Py_None) {
|
||||
if (create_extra(self_elem, attrib) < 0) {
|
||||
PyObject_Del(self_elem);
|
||||
return -1;
|
||||
}
|
||||
}
|
||||
|
||||
/* If create_extra needed attrib, it took a reference to it, so we can
|
||||
* release ours anyway.
|
||||
*/
|
||||
Py_DECREF(attrib);
|
||||
|
||||
/* Replace the objects already pointed to by tag, text and tail. */
|
||||
tmp = self_elem->tag;
|
||||
self_elem->tag = tag;
|
||||
Py_INCREF(tag);
|
||||
Py_DECREF(tmp);
|
||||
|
||||
tmp = self_elem->text;
|
||||
self_elem->text = Py_None;
|
||||
Py_INCREF(Py_None);
|
||||
Py_DECREF(JOIN_OBJ(tmp));
|
||||
|
||||
tmp = self_elem->tail;
|
||||
self_elem->tail = Py_None;
|
||||
Py_INCREF(Py_None);
|
||||
Py_DECREF(JOIN_OBJ(tmp));
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
LOCAL(int)
|
||||
element_resize(ElementObject* self, int extra)
|
||||
{
|
||||
|
@ -326,7 +403,7 @@ element_resize(ElementObject* self, int extra)
|
|||
elements. set an exception and return -1 if allocation failed */
|
||||
|
||||
if (!self->extra)
|
||||
element_new_extra(self, NULL);
|
||||
create_extra(self, NULL);
|
||||
|
||||
size = self->extra->length + extra;
|
||||
|
||||
|
@ -443,35 +520,6 @@ element_get_tail(ElementObject* self)
|
|||
return res;
|
||||
}
|
||||
|
||||
static PyObject*
|
||||
element(PyObject* self, PyObject* args, PyObject* kw)
|
||||
{
|
||||
PyObject* elem;
|
||||
|
||||
PyObject* tag;
|
||||
PyObject* attrib = NULL;
|
||||
if (!PyArg_ParseTuple(args, "O|O!:Element", &tag,
|
||||
&PyDict_Type, &attrib))
|
||||
return NULL;
|
||||
|
||||
if (attrib || kw) {
|
||||
attrib = (attrib) ? PyDict_Copy(attrib) : PyDict_New();
|
||||
if (!attrib)
|
||||
return NULL;
|
||||
if (kw)
|
||||
PyDict_Update(attrib, kw);
|
||||
} else {
|
||||
Py_INCREF(Py_None);
|
||||
attrib = Py_None;
|
||||
}
|
||||
|
||||
elem = element_new(tag, attrib);
|
||||
|
||||
Py_DECREF(attrib);
|
||||
|
||||
return elem;
|
||||
}
|
||||
|
||||
static PyObject*
|
||||
subelement(PyObject* self, PyObject* args, PyObject* kw)
|
||||
{
|
||||
|
@ -496,7 +544,7 @@ subelement(PyObject* self, PyObject* args, PyObject* kw)
|
|||
attrib = Py_None;
|
||||
}
|
||||
|
||||
elem = element_new(tag, attrib);
|
||||
elem = create_new_element(tag, attrib);
|
||||
|
||||
Py_DECREF(attrib);
|
||||
|
||||
|
@ -512,7 +560,7 @@ static void
|
|||
element_dealloc(ElementObject* self)
|
||||
{
|
||||
if (self->extra)
|
||||
element_dealloc_extra(self);
|
||||
dealloc_extra(self);
|
||||
|
||||
/* discard attributes */
|
||||
Py_DECREF(self->tag);
|
||||
|
@ -521,7 +569,7 @@ element_dealloc(ElementObject* self)
|
|||
|
||||
RELEASE(sizeof(ElementObject), "destroy element");
|
||||
|
||||
PyObject_Del(self);
|
||||
Py_TYPE(self)->tp_free((PyObject *)self);
|
||||
}
|
||||
|
||||
/* -------------------------------------------------------------------- */
|
||||
|
@ -547,7 +595,7 @@ element_clear(ElementObject* self, PyObject* args)
|
|||
return NULL;
|
||||
|
||||
if (self->extra) {
|
||||
element_dealloc_extra(self);
|
||||
dealloc_extra(self);
|
||||
self->extra = NULL;
|
||||
}
|
||||
|
||||
|
@ -571,7 +619,7 @@ element_copy(ElementObject* self, PyObject* args)
|
|||
if (!PyArg_ParseTuple(args, ":__copy__"))
|
||||
return NULL;
|
||||
|
||||
element = (ElementObject*) element_new(
|
||||
element = (ElementObject*) create_new_element(
|
||||
self->tag, (self->extra) ? self->extra->attrib : Py_None
|
||||
);
|
||||
if (!element)
|
||||
|
@ -634,7 +682,7 @@ element_deepcopy(ElementObject* self, PyObject* args)
|
|||
attrib = Py_None;
|
||||
}
|
||||
|
||||
element = (ElementObject*) element_new(tag, attrib);
|
||||
element = (ElementObject*) create_new_element(tag, attrib);
|
||||
|
||||
Py_DECREF(tag);
|
||||
Py_DECREF(attrib);
|
||||
|
@ -1029,7 +1077,7 @@ element_insert(ElementObject* self, PyObject* args)
|
|||
return NULL;
|
||||
|
||||
if (!self->extra)
|
||||
element_new_extra(self, NULL);
|
||||
create_extra(self, NULL);
|
||||
|
||||
if (index < 0) {
|
||||
index += self->extra->length;
|
||||
|
@ -1100,7 +1148,7 @@ element_makeelement(PyObject* self, PyObject* args, PyObject* kw)
|
|||
if (!attrib)
|
||||
return NULL;
|
||||
|
||||
elem = element_new(tag, attrib);
|
||||
elem = create_new_element(tag, attrib);
|
||||
|
||||
Py_DECREF(attrib);
|
||||
|
||||
|
@ -1154,7 +1202,10 @@ element_remove(ElementObject* self, PyObject* args)
|
|||
static PyObject*
|
||||
element_repr(ElementObject* self)
|
||||
{
|
||||
return PyUnicode_FromFormat("<Element %R at %p>", self->tag, self);
|
||||
if (self->tag)
|
||||
return PyUnicode_FromFormat("<Element %R at %p>", self->tag, self);
|
||||
else
|
||||
return PyUnicode_FromFormat("<Element at %p>", self);
|
||||
}
|
||||
|
||||
static PyObject*
|
||||
|
@ -1168,7 +1219,7 @@ element_set(ElementObject* self, PyObject* args)
|
|||
return NULL;
|
||||
|
||||
if (!self->extra)
|
||||
element_new_extra(self, NULL);
|
||||
create_extra(self, NULL);
|
||||
|
||||
attrib = element_get_attrib(self);
|
||||
if (!attrib)
|
||||
|
@ -1284,7 +1335,7 @@ element_ass_subscr(PyObject* self_, PyObject* item, PyObject* value)
|
|||
PyObject* seq = NULL;
|
||||
|
||||
if (!self->extra)
|
||||
element_new_extra(self, NULL);
|
||||
create_extra(self, NULL);
|
||||
|
||||
if (PySlice_GetIndicesEx(item,
|
||||
self->extra->length,
|
||||
|
@ -1448,7 +1499,7 @@ element_getattro(ElementObject* self, PyObject* nameobj)
|
|||
} else if (strcmp(name, "attrib") == 0) {
|
||||
PyErr_Clear();
|
||||
if (!self->extra)
|
||||
element_new_extra(self, NULL);
|
||||
create_extra(self, NULL);
|
||||
res = element_get_attrib(self);
|
||||
}
|
||||
|
||||
|
@ -1484,7 +1535,7 @@ element_setattr(ElementObject* self, const char* name, PyObject* value)
|
|||
Py_INCREF(self->tail);
|
||||
} else if (strcmp(name, "attrib") == 0) {
|
||||
if (!self->extra)
|
||||
element_new_extra(self, NULL);
|
||||
create_extra(self, NULL);
|
||||
Py_DECREF(self->extra->attrib);
|
||||
self->extra->attrib = value;
|
||||
Py_INCREF(self->extra->attrib);
|
||||
|
@ -1516,31 +1567,41 @@ static PyTypeObject Element_Type = {
|
|||
PyVarObject_HEAD_INIT(NULL, 0)
|
||||
"Element", sizeof(ElementObject), 0,
|
||||
/* methods */
|
||||
(destructor)element_dealloc, /* tp_dealloc */
|
||||
0, /* tp_print */
|
||||
0, /* tp_getattr */
|
||||
(setattrfunc)element_setattr, /* tp_setattr */
|
||||
0, /* tp_reserved */
|
||||
(reprfunc)element_repr, /* tp_repr */
|
||||
0, /* tp_as_number */
|
||||
&element_as_sequence, /* tp_as_sequence */
|
||||
&element_as_mapping, /* tp_as_mapping */
|
||||
0, /* tp_hash */
|
||||
0, /* tp_call */
|
||||
0, /* tp_str */
|
||||
(getattrofunc)element_getattro, /* tp_getattro */
|
||||
0, /* tp_setattro */
|
||||
0, /* tp_as_buffer */
|
||||
Py_TPFLAGS_DEFAULT, /* tp_flags */
|
||||
0, /* tp_doc */
|
||||
0, /* tp_traverse */
|
||||
0, /* tp_clear */
|
||||
0, /* tp_richcompare */
|
||||
0, /* tp_weaklistoffset */
|
||||
0, /* tp_iter */
|
||||
0, /* tp_iternext */
|
||||
element_methods, /* tp_methods */
|
||||
0, /* tp_members */
|
||||
(destructor)element_dealloc, /* tp_dealloc */
|
||||
0, /* tp_print */
|
||||
0, /* tp_getattr */
|
||||
(setattrfunc)element_setattr, /* tp_setattr */
|
||||
0, /* tp_reserved */
|
||||
(reprfunc)element_repr, /* tp_repr */
|
||||
0, /* tp_as_number */
|
||||
&element_as_sequence, /* tp_as_sequence */
|
||||
&element_as_mapping, /* tp_as_mapping */
|
||||
0, /* tp_hash */
|
||||
0, /* tp_call */
|
||||
0, /* tp_str */
|
||||
(getattrofunc)element_getattro, /* tp_getattro */
|
||||
0, /* tp_setattro */
|
||||
0, /* tp_as_buffer */
|
||||
Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
|
||||
0, /* tp_doc */
|
||||
0, /* tp_traverse */
|
||||
0, /* tp_clear */
|
||||
0, /* tp_richcompare */
|
||||
0, /* tp_weaklistoffset */
|
||||
0, /* tp_iter */
|
||||
0, /* tp_iternext */
|
||||
element_methods, /* tp_methods */
|
||||
0, /* tp_members */
|
||||
0, /* tp_getset */
|
||||
0, /* tp_base */
|
||||
0, /* tp_dict */
|
||||
0, /* tp_descr_get */
|
||||
0, /* tp_descr_set */
|
||||
0, /* tp_dictoffset */
|
||||
(initproc)element_init, /* tp_init */
|
||||
PyType_GenericAlloc, /* tp_alloc */
|
||||
element_new, /* tp_new */
|
||||
0, /* tp_free */
|
||||
};
|
||||
|
||||
/* ==================================================================== */
|
||||
|
@ -1637,13 +1698,6 @@ treebuilder_dealloc(TreeBuilderObject* self)
|
|||
/* -------------------------------------------------------------------- */
|
||||
/* handlers */
|
||||
|
||||
LOCAL(PyObject*)
|
||||
treebuilder_handle_xml(TreeBuilderObject* self, PyObject* encoding,
|
||||
PyObject* standalone)
|
||||
{
|
||||
Py_RETURN_NONE;
|
||||
}
|
||||
|
||||
LOCAL(PyObject*)
|
||||
treebuilder_handle_start(TreeBuilderObject* self, PyObject* tag,
|
||||
PyObject* attrib)
|
||||
|
@@ -1666,7 +1720,7 @@ treebuilder_handle_start(TreeBuilderObject* self, PyObject* tag,
         self->data = NULL;
     }

-    node = element_new(tag, attrib);
+    node = create_new_element(tag, attrib);
     if (!node)
         return NULL;

@@ -1915,22 +1969,10 @@ treebuilder_start(TreeBuilderObject* self, PyObject* args)
     return treebuilder_handle_start(self, tag, attrib);
 }

-static PyObject*
-treebuilder_xml(TreeBuilderObject* self, PyObject* args)
-{
-    PyObject* encoding;
-    PyObject* standalone;
-    if (!PyArg_ParseTuple(args, "OO:xml", &encoding, &standalone))
-        return NULL;
-
-    return treebuilder_handle_xml(self, encoding, standalone);
-}
-
 static PyMethodDef treebuilder_methods[] = {
     {"data", (PyCFunction) treebuilder_data, METH_VARARGS},
     {"start", (PyCFunction) treebuilder_start, METH_VARARGS},
     {"end", (PyCFunction) treebuilder_end, METH_VARARGS},
-    {"xml", (PyCFunction) treebuilder_xml, METH_VARARGS},
     {"close", (PyCFunction) treebuilder_close, METH_VARARGS},
     {NULL, NULL}
 };

@@ -1991,8 +2033,6 @@ typedef struct {

     PyObject* names;

-    PyObject* handle_xml;
-
     PyObject* handle_start;
     PyObject* handle_data;
     PyObject* handle_end;

@@ -2445,7 +2485,6 @@ xmlparser(PyObject* self_, PyObject* args, PyObject* kw)
     Py_INCREF(target);
     self->target = target;

-    self->handle_xml = PyObject_GetAttrString(target, "xml");
     self->handle_start = PyObject_GetAttrString(target, "start");
     self->handle_data = PyObject_GetAttrString(target, "data");
     self->handle_end = PyObject_GetAttrString(target, "end");

@@ -2501,7 +2540,6 @@ xmlparser_dealloc(XMLParserObject* self)
     Py_XDECREF(self->handle_end);
     Py_XDECREF(self->handle_data);
     Py_XDECREF(self->handle_start);
-    Py_XDECREF(self->handle_xml);

     Py_DECREF(self->target);
     Py_DECREF(self->entity);

@@ -2801,7 +2839,6 @@ static PyTypeObject XMLParser_Type = {
 /* python module interface */

 static PyMethodDef _functions[] = {
     {"Element", (PyCFunction) element, METH_VARARGS|METH_KEYWORDS},
     {"SubElement", (PyCFunction) subelement, METH_VARARGS|METH_KEYWORDS},
-    {"TreeBuilder", (PyCFunction) treebuilder, METH_VARARGS},
 #if defined(USE_EXPAT)

@@ -2911,5 +2948,8 @@ PyInit__elementtree(void)
     Py_INCREF(elementtree_parseerror_obj);
     PyModule_AddObject(m, "ParseError", elementtree_parseerror_obj);

+    Py_INCREF((PyObject *)&Element_Type);
+    PyModule_AddObject(m, "Element", (PyObject *)&Element_Type);
+
     return m;
 }
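The hunks above drop the C TreeBuilder's internal "xml" handler and expose Element as a module-level type; the Python-level TreeBuilder interface (start/data/end/close) is unchanged. A minimal sketch of that surviving API:

```python
from xml.etree.ElementTree import TreeBuilder

tb = TreeBuilder()
tb.start("root", {})    # open an element with no attributes
tb.data("hello")        # add character data
tb.end("root")          # close the element
root = tb.close()       # finish parsing and get the root Element

assert root.tag == "root"
assert root.text == "hello"
```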
@@ -60,16 +60,18 @@ typedef struct {
 static void
 overlapped_dealloc(OverlappedObject *self)
 {
+    DWORD bytes;
     int err = GetLastError();
     if (self->pending) {
-        if (check_CancelIoEx())
-            Py_CancelIoEx(self->handle, &self->overlapped);
-        else {
-            PyErr_SetString(PyExc_RuntimeError,
-                "I/O operations still in flight while destroying "
-                "Overlapped object, the process may crash");
-            PyErr_WriteUnraisable(NULL);
-        }
+        /* make it a programming error to deallocate while operation
+           is pending, even if we can safely cancel it */
+        if (check_CancelIoEx() &&
+            Py_CancelIoEx(self->handle, &self->overlapped))
+            GetOverlappedResult(self->handle, &self->overlapped, &bytes, TRUE);
+        PyErr_SetString(PyExc_RuntimeError,
+            "I/O operations still in flight while destroying "
+            "Overlapped object, the process may crash");
+        PyErr_WriteUnraisable(NULL);
     }
     CloseHandle(self->overlapped.hEvent);
     SetLastError(err);

@@ -85,6 +87,7 @@ overlapped_GetOverlappedResult(OverlappedObject *self, PyObject *waitobj)
     int wait;
     BOOL res;
     DWORD transferred = 0;
+    DWORD err;

     wait = PyObject_IsTrue(waitobj);
     if (wait < 0)

@@ -94,23 +97,27 @@ overlapped_GetOverlappedResult(OverlappedObject *self, PyObject *waitobj)
                               wait != 0);
     Py_END_ALLOW_THREADS

-    if (!res) {
-        int err = GetLastError();
-        if (err == ERROR_IO_INCOMPLETE)
-            Py_RETURN_NONE;
-        if (err != ERROR_MORE_DATA) {
+    err = res ? ERROR_SUCCESS : GetLastError();
+    switch (err) {
+        case ERROR_SUCCESS:
+        case ERROR_MORE_DATA:
+        case ERROR_OPERATION_ABORTED:
+            self->completed = 1;
+            self->pending = 0;
+            break;
+        case ERROR_IO_INCOMPLETE:
+            break;
+        default:
             self->pending = 0;
             return PyErr_SetExcFromWindowsErr(PyExc_IOError, err);
-        }
     }
-    self->pending = 0;
-    self->completed = 1;
-    if (self->read_buffer) {
+    if (self->completed && self->read_buffer != NULL) {
         assert(PyBytes_CheckExact(self->read_buffer));
-        if (_PyBytes_Resize(&self->read_buffer, transferred))
+        if (transferred != PyBytes_GET_SIZE(self->read_buffer) &&
+            _PyBytes_Resize(&self->read_buffer, transferred))
             return NULL;
     }
-    return Py_BuildValue("lN", (long) transferred, PyBool_FromLong(res));
+    return Py_BuildValue("II", (unsigned) transferred, (unsigned) err);
 }

 static PyObject *

@@ -522,9 +529,10 @@ win32_WriteFile(PyObject *self, PyObject *args, PyObject *kwds)
     HANDLE handle;
     Py_buffer _buf, *buf;
     PyObject *bufobj;
-    int written;
+    DWORD written;
     BOOL ret;
     int use_overlapped = 0;
+    DWORD err;
     OverlappedObject *overlapped = NULL;
     static char *kwlist[] = {"handle", "buffer", "overlapped", NULL};

@@ -553,8 +561,9 @@ win32_WriteFile(PyObject *self, PyObject *args, PyObject *kwds)
                         overlapped ? &overlapped->overlapped : NULL);
     Py_END_ALLOW_THREADS

+    err = ret ? 0 : GetLastError();
+
     if (overlapped) {
-        int err = GetLastError();
         if (!ret) {
             if (err == ERROR_IO_PENDING)
                 overlapped->pending = 1;

@@ -563,13 +572,13 @@ win32_WriteFile(PyObject *self, PyObject *args, PyObject *kwds)
                 return PyErr_SetExcFromWindowsErr(PyExc_IOError, 0);
             }
         }
-        return (PyObject *) overlapped;
+        return Py_BuildValue("NI", (PyObject *) overlapped, err);
     }

     PyBuffer_Release(buf);
     if (!ret)
         return PyErr_SetExcFromWindowsErr(PyExc_IOError, 0);
-    return PyLong_FromLong(written);
+    return Py_BuildValue("II", written, err);
 }

 static PyObject *

@@ -581,6 +590,7 @@ win32_ReadFile(PyObject *self, PyObject *args, PyObject *kwds)
     PyObject *buf;
     BOOL ret;
     int use_overlapped = 0;
+    DWORD err;
     OverlappedObject *overlapped = NULL;
     static char *kwlist[] = {"handle", "size", "overlapped", NULL};

@@ -607,8 +617,9 @@ win32_ReadFile(PyObject *self, PyObject *args, PyObject *kwds)
                        overlapped ? &overlapped->overlapped : NULL);
     Py_END_ALLOW_THREADS

+    err = ret ? 0 : GetLastError();
+
     if (overlapped) {
-        int err = GetLastError();
         if (!ret) {
             if (err == ERROR_IO_PENDING)
                 overlapped->pending = 1;

@@ -617,16 +628,16 @@ win32_ReadFile(PyObject *self, PyObject *args, PyObject *kwds)
                 return PyErr_SetExcFromWindowsErr(PyExc_IOError, 0);
             }
         }
-        return (PyObject *) overlapped;
+        return Py_BuildValue("NI", (PyObject *) overlapped, err);
     }

-    if (!ret && GetLastError() != ERROR_MORE_DATA) {
+    if (!ret && err != ERROR_MORE_DATA) {
         Py_DECREF(buf);
         return PyErr_SetExcFromWindowsErr(PyExc_IOError, 0);
     }
     if (_PyBytes_Resize(&buf, nread))
         return NULL;
-    return Py_BuildValue("NN", buf, PyBool_FromLong(ret));
+    return Py_BuildValue("NI", buf, err);
 }

 static PyObject *

@@ -783,7 +794,11 @@ create_win32_namespace(void)

     WIN32_CONSTANT(F_DWORD, ERROR_ALREADY_EXISTS);
     WIN32_CONSTANT(F_DWORD, ERROR_BROKEN_PIPE);
+    WIN32_CONSTANT(F_DWORD, ERROR_IO_PENDING);
+    WIN32_CONSTANT(F_DWORD, ERROR_MORE_DATA);
     WIN32_CONSTANT(F_DWORD, ERROR_NETNAME_DELETED);
+    WIN32_CONSTANT(F_DWORD, ERROR_NO_SYSTEM_RESOURCES);
+    WIN32_CONSTANT(F_DWORD, ERROR_OPERATION_ABORTED);
     WIN32_CONSTANT(F_DWORD, ERROR_PIPE_BUSY);
     WIN32_CONSTANT(F_DWORD, ERROR_PIPE_CONNECTED);
     WIN32_CONSTANT(F_DWORD, ERROR_SEM_TIMEOUT);
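The rewritten GetOverlappedResult wrapper above folds the error handling into one switch: success-like codes mark the operation completed, ERROR_IO_INCOMPLETE leaves it pending, and anything else raises. A Python sketch of that state logic (the numeric values are the standard Windows error codes, stated here as an assumption; `overlapped_state` is a hypothetical name):

```python
# Standard Windows error codes (assumed values)
ERROR_SUCCESS = 0
ERROR_MORE_DATA = 234
ERROR_OPERATION_ABORTED = 995
ERROR_IO_INCOMPLETE = 996

def overlapped_state(err):
    """Mirror the C switch: return (completed, pending) for a
    GetOverlappedResult call that reported `err`, or raise."""
    if err in (ERROR_SUCCESS, ERROR_MORE_DATA, ERROR_OPERATION_ABORTED):
        return (True, False)      # finished (possibly truncated or aborted)
    if err == ERROR_IO_INCOMPLETE:
        return (False, True)      # still in flight, try again later
    raise OSError(err, "overlapped I/O failed")

assert overlapped_state(ERROR_SUCCESS) == (True, False)
assert overlapped_state(ERROR_IO_INCOMPLETE) == (False, True)
```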
@@ -319,6 +319,7 @@ typedef struct PicklerObject {
                                    objects to support self-referential objects
                                    pickling. */
     PyObject *pers_func;        /* persistent_id() method, can be NULL */
+    PyObject *dispatch_table;   /* private dispatch_table, can be NULL */
     PyObject *arg;

     PyObject *write;            /* write() method of the output stream. */

@@ -764,6 +765,7 @@ _Pickler_New(void)
         return NULL;

     self->pers_func = NULL;
+    self->dispatch_table = NULL;
     self->arg = NULL;
     self->write = NULL;
     self->proto = 0;

@@ -3176,17 +3178,24 @@ save(PicklerObject *self, PyObject *obj, int pers_save)
     /* XXX: This part needs some unit tests. */

     /* Get a reduction callable, and call it.  This may come from
-     * copyreg.dispatch_table, the object's __reduce_ex__ method,
-     * or the object's __reduce__ method.
+     * self.dispatch_table, copyreg.dispatch_table, the object's
+     * __reduce_ex__ method, or the object's __reduce__ method.
      */
-    reduce_func = PyDict_GetItem(dispatch_table, (PyObject *)type);
+    if (self->dispatch_table == NULL) {
+        reduce_func = PyDict_GetItem(dispatch_table, (PyObject *)type);
+        /* PyDict_GetItem() unlike PyObject_GetItem() and
+           PyObject_GetAttr() returns a borrowed ref */
+        Py_XINCREF(reduce_func);
+    } else {
+        reduce_func = PyObject_GetItem(self->dispatch_table, (PyObject *)type);
+        if (reduce_func == NULL) {
+            if (PyErr_ExceptionMatches(PyExc_KeyError))
+                PyErr_Clear();
+            else
+                goto error;
+        }
+    }
     if (reduce_func != NULL) {
-        /* Here, the reference count of the reduce_func object returned by
-           PyDict_GetItem needs to be increased to be consistent with the one
-           returned by PyObject_GetAttr. This is allow us to blindly DECREF
-           reduce_func at the end of the save() routine.
-        */
-        Py_INCREF(reduce_func);
         Py_INCREF(obj);
         reduce_value = _Pickler_FastCall(self, reduce_func, obj);
     }

@@ -3359,6 +3368,7 @@ Pickler_dealloc(PicklerObject *self)
     Py_XDECREF(self->output_buffer);
     Py_XDECREF(self->write);
     Py_XDECREF(self->pers_func);
+    Py_XDECREF(self->dispatch_table);
     Py_XDECREF(self->arg);
     Py_XDECREF(self->fast_memo);

@@ -3372,6 +3382,7 @@ Pickler_traverse(PicklerObject *self, visitproc visit, void *arg)
 {
     Py_VISIT(self->write);
     Py_VISIT(self->pers_func);
+    Py_VISIT(self->dispatch_table);
     Py_VISIT(self->arg);
     Py_VISIT(self->fast_memo);
     return 0;

@@ -3383,6 +3394,7 @@ Pickler_clear(PicklerObject *self)
     Py_CLEAR(self->output_buffer);
     Py_CLEAR(self->write);
     Py_CLEAR(self->pers_func);
+    Py_CLEAR(self->dispatch_table);
     Py_CLEAR(self->arg);
     Py_CLEAR(self->fast_memo);

@@ -3427,6 +3439,7 @@ Pickler_init(PicklerObject *self, PyObject *args, PyObject *kwds)
     PyObject *proto_obj = NULL;
     PyObject *fix_imports = Py_True;
     _Py_IDENTIFIER(persistent_id);
+    _Py_IDENTIFIER(dispatch_table);

     if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|OO:Pickler",
                                      kwlist, &file, &proto_obj, &fix_imports))

@@ -3468,6 +3481,13 @@ Pickler_init(PicklerObject *self, PyObject *args, PyObject *kwds)
         if (self->pers_func == NULL)
             return -1;
     }
+    self->dispatch_table = NULL;
+    if (_PyObject_HasAttrId((PyObject *)self, &PyId_dispatch_table)) {
+        self->dispatch_table = _PyObject_GetAttrId((PyObject *)self,
+                                                   &PyId_dispatch_table);
+        if (self->dispatch_table == NULL)
+            return -1;
+    }
     return 0;
 }

@@ -3749,6 +3769,7 @@ Pickler_set_persid(PicklerObject *self, PyObject *value)
 static PyMemberDef Pickler_members[] = {
     {"bin", T_INT, offsetof(PicklerObject, bin)},
     {"fast", T_INT, offsetof(PicklerObject, fast)},
+    {"dispatch_table", T_OBJECT_EX, offsetof(PicklerObject, dispatch_table)},
     {NULL}
 };
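The dispatch_table hunks give each Pickler instance an optional private reduction table, consulted before the global copyreg.dispatch_table. A short usage sketch (the reducer and subclass names are hypothetical; `complex` is used so the example stays self-contained):

```python
import io
import pickle

def reduce_complex(z):
    # same (callable, args) contract as copyreg reduction functions
    return (complex, (z.real, z.imag))

class MyPickler(pickle.Pickler):
    # per-pickler table, checked before copyreg.dispatch_table
    dispatch_table = {complex: reduce_complex}

buf = io.BytesIO()
MyPickler(buf).dump(3 + 4j)
assert pickle.loads(buf.getvalue()) == 3 + 4j
```

Plain `pickle.Pickler` instances without the attribute keep the old behavior, so the table is purely opt-in.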
@@ -44,23 +44,21 @@ static PyTypeObject NDArray_Type;
 #define ADJUST_PTR(ptr, suboffsets) \
     (HAVE_PTR(suboffsets) ? *((char**)ptr) + suboffsets[0] : ptr)

-/* User configurable flags for the ndarray */
-#define ND_VAREXPORT   0x001   /* change layout while buffers are exported */
-
-/* User configurable flags for each base buffer */
-#define ND_WRITABLE    0x002   /* mark base buffer as writable */
-#define ND_FORTRAN     0x004   /* Fortran contiguous layout */
-#define ND_SCALAR      0x008   /* scalar: ndim = 0 */
-#define ND_PIL         0x010   /* convert to PIL-style array (suboffsets) */
-#define ND_GETBUF_FAIL 0x020   /* test issue 7385 */
-
 /* Default: NumPy style (strides), read-only, no var-export, C-style layout */
-#define ND_DEFAULT     0x0
-
+#define ND_DEFAULT          0x000
+/* User configurable flags for the ndarray */
+#define ND_VAREXPORT        0x001  /* change layout while buffers are exported */
+/* User configurable flags for each base buffer */
+#define ND_WRITABLE         0x002  /* mark base buffer as writable */
+#define ND_FORTRAN          0x004  /* Fortran contiguous layout */
+#define ND_SCALAR           0x008  /* scalar: ndim = 0 */
+#define ND_PIL              0x010  /* convert to PIL-style array (suboffsets) */
+#define ND_REDIRECT         0x020  /* redirect buffer requests */
+#define ND_GETBUF_FAIL      0x040  /* trigger getbuffer failure */
+#define ND_GETBUF_UNDEFINED 0x080  /* undefined view.obj */
 /* Internal flags for the base buffer */
-#define ND_C           0x040   /* C contiguous layout (default) */
-#define ND_OWN_ARRAYS  0x080   /* consumer owns arrays */
-#define ND_UNUSED      0x100   /* initializer */
+#define ND_C                0x100  /* C contiguous layout (default) */
+#define ND_OWN_ARRAYS       0x200  /* consumer owns arrays */

 /* ndarray properties */
 #define ND_IS_CONSUMER(nd) \

@@ -1290,7 +1288,7 @@ ndarray_init(PyObject *self, PyObject *args, PyObject *kwds)
     PyObject *strides = NULL;         /* number of bytes to the next elt in each dim */
     Py_ssize_t offset = 0;            /* buffer offset */
     PyObject *format = simple_format; /* struct module specifier: "B" */
-    int flags = ND_UNUSED;            /* base buffer and ndarray flags */
+    int flags = ND_DEFAULT;           /* base buffer and ndarray flags */

     int getbuf = PyBUF_UNUSED;        /* re-exporter: getbuffer request flags */

@@ -1302,10 +1300,10 @@ ndarray_init(PyObject *self, PyObject *args, PyObject *kwds)
     /* NDArrayObject is re-exporter */
     if (PyObject_CheckBuffer(v) && shape == NULL) {
         if (strides || offset || format != simple_format ||
-            flags != ND_UNUSED) {
+            !(flags == ND_DEFAULT || flags == ND_REDIRECT)) {
             PyErr_SetString(PyExc_TypeError,
-                "construction from exporter object only takes a single "
-                "additional getbuf argument");
+                "construction from exporter object only takes 'obj', 'getbuf' "
+                "and 'flags' arguments");
             return -1;
         }

@@ -1315,6 +1313,7 @@ ndarray_init(PyObject *self, PyObject *args, PyObject *kwds)
             return -1;

         init_flags(nd->head);
+        nd->head->flags |= flags;

         return 0;
     }

@@ -1333,8 +1332,6 @@ ndarray_init(PyObject *self, PyObject *args, PyObject *kwds)
         return -1;
     }

-    if (flags == ND_UNUSED)
-        flags = ND_DEFAULT;
     if (flags & ND_VAREXPORT) {
         nd->flags |= ND_VAREXPORT;
         flags &= ~ND_VAREXPORT;

@@ -1357,7 +1354,7 @@ ndarray_push(PyObject *self, PyObject *args, PyObject *kwds)
     PyObject *strides = NULL;         /* number of bytes to the next elt in each dim */
     PyObject *format = simple_format; /* struct module specifier: "B" */
     Py_ssize_t offset = 0;            /* buffer offset */
-    int flags = ND_UNUSED;            /* base buffer flags */
+    int flags = ND_DEFAULT;           /* base buffer flags */

     if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO|OnOi", kwlist,
             &items, &shape, &strides, &offset, &format, &flags))

@@ -1423,6 +1420,11 @@ ndarray_getbuf(NDArrayObject *self, Py_buffer *view, int flags)
     Py_buffer *base = &ndbuf->base;
     int baseflags = ndbuf->flags;

+    /* redirect mode */
+    if (base->obj != NULL && (baseflags&ND_REDIRECT)) {
+        return PyObject_GetBuffer(base->obj, view, flags);
+    }
+
     /* start with complete information */
     *view = *base;
     view->obj = NULL;

@@ -1445,6 +1447,8 @@ ndarray_getbuf(NDArrayObject *self, Py_buffer *view, int flags)
     if (baseflags & ND_GETBUF_FAIL) {
         PyErr_SetString(PyExc_BufferError,
             "ND_GETBUF_FAIL: forced test exception");
+        if (baseflags & ND_GETBUF_UNDEFINED)
+            view->obj = (PyObject *)0x1; /* wrong but permitted in <= 3.2 */
         return -1;
     }

@@ -2598,6 +2602,126 @@ static PyTypeObject NDArray_Type = {
     ndarray_new,                 /* tp_new */
 };

+/**************************************************************************/
+/*                          StaticArray Object                            */
+/**************************************************************************/
+
+static PyTypeObject StaticArray_Type;
+
+typedef struct {
+    PyObject_HEAD
+    int legacy_mode; /* if true, use the view.obj==NULL hack */
+} StaticArrayObject;
+
+static char static_mem[12] = {0,1,2,3,4,5,6,7,8,9,10,11};
+static Py_ssize_t static_shape[1] = {12};
+static Py_ssize_t static_strides[1] = {1};
+static Py_buffer static_buffer = {
+    static_mem,     /* buf */
+    NULL,           /* obj */
+    12,             /* len */
+    1,              /* itemsize */
+    1,              /* readonly */
+    1,              /* ndim */
+    "B",            /* format */
+    static_shape,   /* shape */
+    static_strides, /* strides */
+    NULL,           /* suboffsets */
+    NULL            /* internal */
+};
+
+static PyObject *
+staticarray_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+    return (PyObject *)PyObject_New(StaticArrayObject, &StaticArray_Type);
+}
+
+static int
+staticarray_init(PyObject *self, PyObject *args, PyObject *kwds)
+{
+    StaticArrayObject *a = (StaticArrayObject *)self;
+    static char *kwlist[] = {
+        "legacy_mode", NULL
+    };
+    PyObject *legacy_mode = Py_False;
+
+    if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O", kwlist, &legacy_mode))
+        return -1;
+
+    a->legacy_mode = (legacy_mode != Py_False);
+    return 0;
+}
+
+static void
+staticarray_dealloc(StaticArrayObject *self)
+{
+    PyObject_Del(self);
+}
+
+/* Return a buffer for a PyBUF_FULL_RO request. Flags are not checked,
+   which makes this object a non-compliant exporter! */
+static int
+staticarray_getbuf(StaticArrayObject *self, Py_buffer *view, int flags)
+{
+    *view = static_buffer;
+
+    if (self->legacy_mode) {
+        view->obj = NULL; /* Don't use this in new code. */
+    }
+    else {
+        view->obj = (PyObject *)self;
+        Py_INCREF(view->obj);
+    }
+
+    return 0;
+}
+
+static PyBufferProcs staticarray_as_buffer = {
+    (getbufferproc)staticarray_getbuf, /* bf_getbuffer */
+    NULL,                              /* bf_releasebuffer */
+};
+
+static PyTypeObject StaticArray_Type = {
+    PyVarObject_HEAD_INIT(NULL, 0)
+    "staticarray",                   /* Name of this type */
+    sizeof(StaticArrayObject),       /* Basic object size */
+    0,                               /* Item size for varobject */
+    (destructor)staticarray_dealloc, /* tp_dealloc */
+    0,                               /* tp_print */
+    0,                               /* tp_getattr */
+    0,                               /* tp_setattr */
+    0,                               /* tp_compare */
+    0,                               /* tp_repr */
+    0,                               /* tp_as_number */
+    0,                               /* tp_as_sequence */
+    0,                               /* tp_as_mapping */
+    0,                               /* tp_hash */
+    0,                               /* tp_call */
+    0,                               /* tp_str */
+    0,                               /* tp_getattro */
+    0,                               /* tp_setattro */
+    &staticarray_as_buffer,          /* tp_as_buffer */
+    Py_TPFLAGS_DEFAULT,              /* tp_flags */
+    0,                               /* tp_doc */
+    0,                               /* tp_traverse */
+    0,                               /* tp_clear */
+    0,                               /* tp_richcompare */
+    0,                               /* tp_weaklistoffset */
+    0,                               /* tp_iter */
+    0,                               /* tp_iternext */
+    0,                               /* tp_methods */
+    0,                               /* tp_members */
+    0,                               /* tp_getset */
+    0,                               /* tp_base */
+    0,                               /* tp_dict */
+    0,                               /* tp_descr_get */
+    0,                               /* tp_descr_set */
+    0,                               /* tp_dictoffset */
+    staticarray_init,                /* tp_init */
+    0,                               /* tp_alloc */
+    staticarray_new,                 /* tp_new */
+};
+

 static struct PyMethodDef _testbuffer_functions[] = {
     {"slice_indices", slice_indices, METH_VARARGS, NULL},

@@ -2630,10 +2754,14 @@ PyInit__testbuffer(void)
     if (m == NULL)
         return NULL;

-    Py_TYPE(&NDArray_Type)=&PyType_Type;
+    Py_TYPE(&NDArray_Type) = &PyType_Type;
     Py_INCREF(&NDArray_Type);
     PyModule_AddObject(m, "ndarray", (PyObject *)&NDArray_Type);

+    Py_TYPE(&StaticArray_Type) = &PyType_Type;
+    Py_INCREF(&StaticArray_Type);
+    PyModule_AddObject(m, "staticarray", (PyObject *)&StaticArray_Type);
+
     structmodule = PyImport_ImportModule("struct");
     if (structmodule == NULL)
         return NULL;

@@ -2654,6 +2782,8 @@ PyInit__testbuffer(void)
     PyModule_AddIntConstant(m, "ND_SCALAR", ND_SCALAR);
     PyModule_AddIntConstant(m, "ND_PIL", ND_PIL);
     PyModule_AddIntConstant(m, "ND_GETBUF_FAIL", ND_GETBUF_FAIL);
+    PyModule_AddIntConstant(m, "ND_GETBUF_UNDEFINED", ND_GETBUF_UNDEFINED);
+    PyModule_AddIntConstant(m, "ND_REDIRECT", ND_REDIRECT);

     PyModule_AddIntConstant(m, "PyBUF_SIMPLE", PyBUF_SIMPLE);
     PyModule_AddIntConstant(m, "PyBUF_WRITABLE", PyBUF_WRITABLE);
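The staticarray test object above exercises both Py_buffer.obj conventions: a real reference to the exporter versus the legacy NULL. From Python, the reference a conforming exporter stores is visible as memoryview.obj (an attribute available in 3.3+, which this commit era introduces):

```python
b = bytearray(b"abc")
m = memoryview(b)
assert m.obj is b          # the view owns a reference to its exporter
m[0] = ord("A")            # writes go through to the underlying buffer
assert b == bytearray(b"Abc")
m.release()                # drop the buffer (and the reference) explicitly
```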
@@ -2323,6 +2323,24 @@ run_in_subinterp(PyObject *self, PyObject *args)
     return PyLong_FromLong(r);
 }

+static PyObject *
+test_pytime_object_to_timespec(PyObject *self, PyObject *args)
+{
+    PyObject *obj;
+    time_t sec;
+    long nsec;
+    if (!PyArg_ParseTuple(args, "O:pytime_object_to_timespec", &obj))
+        return NULL;
+    if (_PyTime_ObjectToTimespec(obj, &sec, &nsec) == -1)
+        return NULL;
+#if defined(HAVE_LONG_LONG) && SIZEOF_TIME_T == SIZEOF_LONG_LONG
+    return Py_BuildValue("Ll", (PY_LONG_LONG)sec, nsec);
+#else
+    assert(sizeof(time_t) <= sizeof(long));
+    return Py_BuildValue("ll", (long)sec, nsec);
+#endif
+}
+

 static PyMethodDef TestMethods[] = {
     {"raise_exception",         raise_exception,                 METH_VARARGS},

@@ -2412,6 +2430,7 @@ static PyMethodDef TestMethods[] = {
         METH_NOARGS},
     {"crash_no_current_thread", (PyCFunction)crash_no_current_thread, METH_NOARGS},
     {"run_in_subinterp",        run_in_subinterp,                METH_VARARGS},
+    {"pytime_object_to_timespec", test_pytime_object_to_timespec, METH_VARARGS},
     {NULL, NULL} /* sentinel */
 };
|
|||
}
|
||||
|
||||
static PySequenceMethods mmap_as_sequence = {
|
||||
(lenfunc)mmap_length, /*sq_length*/
|
||||
(binaryfunc)mmap_concat, /*sq_concat*/
|
||||
(ssizeargfunc)mmap_repeat, /*sq_repeat*/
|
||||
(ssizeargfunc)mmap_item, /*sq_item*/
|
||||
0, /*sq_slice*/
|
||||
(ssizeobjargproc)mmap_ass_item, /*sq_ass_item*/
|
||||
0, /*sq_ass_slice*/
|
||||
(lenfunc)mmap_length, /*sq_length*/
|
||||
(binaryfunc)mmap_concat, /*sq_concat*/
|
||||
(ssizeargfunc)mmap_repeat, /*sq_repeat*/
|
||||
(ssizeargfunc)mmap_item, /*sq_item*/
|
||||
0, /*sq_slice*/
|
||||
(ssizeobjargproc)mmap_ass_item, /*sq_ass_item*/
|
||||
0, /*sq_ass_slice*/
|
||||
};
|
||||
|
||||
static PyMappingMethods mmap_as_mapping = {
|
||||
|
@ -1027,7 +1027,7 @@ static PyTypeObject mmap_object_type = {
|
|||
PyObject_GenericGetAttr, /*tp_getattro*/
|
||||
0, /*tp_setattro*/
|
||||
&mmap_as_buffer, /*tp_as_buffer*/
|
||||
Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /*tp_flags*/
|
||||
Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /*tp_flags*/
|
||||
mmap_doc, /*tp_doc*/
|
||||
0, /* tp_traverse */
|
||||
0, /* tp_clear */
|
||||
|
@ -1043,10 +1043,10 @@ static PyTypeObject mmap_object_type = {
|
|||
0, /* tp_descr_get */
|
||||
0, /* tp_descr_set */
|
||||
0, /* tp_dictoffset */
|
||||
0, /* tp_init */
|
||||
0, /* tp_init */
|
||||
PyType_GenericAlloc, /* tp_alloc */
|
||||
new_mmap_object, /* tp_new */
|
||||
PyObject_Del, /* tp_free */
|
||||
PyObject_Del, /* tp_free */
|
||||
};
|
||||
|
||||
|
||||
|
@ -1097,8 +1097,8 @@ new_mmap_object(PyTypeObject *type, PyObject *args, PyObject *kwdict)
|
|||
int devzero = -1;
|
||||
int access = (int)ACCESS_DEFAULT;
|
||||
static char *keywords[] = {"fileno", "length",
|
||||
"flags", "prot",
|
||||
"access", "offset", NULL};
|
||||
"flags", "prot",
|
||||
"access", "offset", NULL};
|
||||
|
||||
if (!PyArg_ParseTupleAndKeywords(args, kwdict, "iO|iii" _Py_PARSE_OFF_T, keywords,
|
||||
&fd, &map_size_obj, &flags, &prot,
|
||||
|
@ -1260,8 +1260,8 @@ new_mmap_object(PyTypeObject *type, PyObject *args, PyObject *kwdict)
|
|||
int access = (access_mode)ACCESS_DEFAULT;
|
||||
DWORD flProtect, dwDesiredAccess;
|
||||
static char *keywords[] = { "fileno", "length",
|
||||
"tagname",
|
||||
"access", "offset", NULL };
|
||||
"tagname",
|
||||
"access", "offset", NULL };
|
||||
|
||||
if (!PyArg_ParseTupleAndKeywords(args, kwdict, "iO|ziL", keywords,
|
||||
&fileno, &map_size_obj,
|
||||
@@ -783,16 +783,11 @@ signal_sigtimedwait(PyObject *self, PyObject *args)
     siginfo_t si;
     int err;

-    if (!PyArg_ParseTuple(args, "OO:sigtimedwait", &signals, &timeout))
+    if (!PyArg_ParseTuple(args, "OO:sigtimedwait",
+                          &signals, &timeout))
         return NULL;

-    if (!PyTuple_Check(timeout) || PyTuple_Size(timeout) != 2) {
-        PyErr_SetString(PyExc_TypeError,
-            "sigtimedwait() arg 2 must be a tuple "
-            "(timeout_sec, timeout_nsec)");
-        return NULL;
-    } else if (!PyArg_ParseTuple(timeout, "ll:sigtimedwait",
-                                 &(buf.tv_sec), &(buf.tv_nsec)))
+    if (_PyTime_ObjectToTimespec(timeout, &buf.tv_sec, &buf.tv_nsec) == -1)
         return NULL;

     if (buf.tv_sec < 0 || buf.tv_nsec < 0) {
@@ -649,7 +649,7 @@ PyBuffer_FillContiguousStrides(int nd, Py_ssize_t *shape,

 int
 PyBuffer_FillInfo(Py_buffer *view, PyObject *obj, void *buf, Py_ssize_t len,
-                 int readonly, int flags)
+                  int readonly, int flags)
 {
     if (view == NULL) return 0; /* XXX why not -1? */
     if (((flags & PyBUF_WRITABLE) == PyBUF_WRITABLE) &&
@@ -347,12 +347,9 @@ lookdict(PyDictObject *mp, PyObject *key, register Py_hash_t hash)
                 return ep;
             }
             else {
-                /* The compare did major nasty stuff to the
-                 * dict:  start over.
-                 * XXX A clever adversary could prevent this
-                 * XXX from terminating.
-                 */
-                return lookdict(mp, key, hash);
+                PyErr_SetString(PyExc_RuntimeError,
+                                "dictionary changed size during lookup");
+                return NULL;
             }
         }
         freeslot = NULL;

@@ -379,12 +376,9 @@ lookdict(PyDictObject *mp, PyObject *key, register Py_hash_t hash)
                 return ep;
             }
             else {
-                /* The compare did major nasty stuff to the
-                 * dict:  start over.
-                 * XXX A clever adversary could prevent this
-                 * XXX from terminating.
-                 */
-                return lookdict(mp, key, hash);
+                PyErr_SetString(PyExc_RuntimeError,
+                                "dictionary changed size during lookup");
+                return NULL;
             }
         }
         else if (ep->me_key == dummy && freeslot == NULL)
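The lookdict change above replaces the "start over" retry with a hard RuntimeError when a key comparison mutates the dict mid-lookup. The same fail-loudly policy is what makes size changes during iteration visible at the Python level:

```python
d = {i: i for i in range(4)}
raised = False
try:
    for k in d:
        d[k + 100] = 0   # grow the dict while iterating over it
except RuntimeError:
    raised = True        # "dictionary changed size during iteration"
assert raised
```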
@@ -20,7 +20,6 @@ static PyMemberDef frame_memberlist[] = {
     {"f_builtins",  T_OBJECT,       OFF(f_builtins),        READONLY},
     {"f_globals",   T_OBJECT,       OFF(f_globals),         READONLY},
     {"f_lasti",     T_INT,          OFF(f_lasti),           READONLY},
-    {"f_yieldfrom", T_OBJECT,       OFF(f_yieldfrom),       READONLY},
     {NULL}      /* Sentinel */
 };
@@ -86,14 +86,11 @@ _PyManagedBuffer_FromObject(PyObject *base)
         return NULL;

     if (PyObject_GetBuffer(base, &mbuf->master, PyBUF_FULL_RO) < 0) {
-        /* mbuf->master.obj must be NULL. */
-        mbuf->master.obj = NULL;
         Py_DECREF(mbuf);
         return NULL;
     }

-    /* Assume that master.obj is a new reference to base. */
     assert(mbuf->master.obj == base);

     return (PyObject *)mbuf;
 }
@@ -998,7 +998,11 @@ PyUnicode_New(Py_ssize_t size, Py_UCS4 maxchar)
         is_sharing = 1;
     }
     else {
-        assert(maxchar <= MAX_UNICODE);
+        if (maxchar > MAX_UNICODE) {
+            PyErr_SetString(PyExc_SystemError,
+                            "invalid maximum character passed to PyUnicode_New");
+            return NULL;
+        }
         kind_state = PyUnicode_4BYTE_KIND;
         char_size = 4;
         if (sizeof(wchar_t) == 4)

@@ -3942,6 +3946,10 @@ PyUnicode_WriteChar(PyObject *unicode, Py_ssize_t index, Py_UCS4 ch)
     }
     if (unicode_check_modifiable(unicode))
         return -1;
+    if (ch > PyUnicode_MAX_CHAR_VALUE(unicode)) {
+        PyErr_SetString(PyExc_ValueError, "character out of range");
+        return -1;
+    }
     PyUnicode_WRITE(PyUnicode_KIND(unicode), PyUnicode_DATA(unicode),
                     index, ch);
     return 0;
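PyUnicode_New and PyUnicode_WriteChar now reject characters above the representable maximum with a real exception instead of a debug-only assert. The corresponding hard limit is visible from Python:

```python
assert ord(chr(0x10FFFF)) == 0x10FFFF   # highest valid code point (MAX_UNICODE)

raised = False
try:
    chr(0x110000)                       # one past the limit
except ValueError:
    raised = True
assert raised
```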
@@ -12,7 +12,7 @@
 #   include "pythonnt_rc.h"
 #endif

-/* e.g., 2.1a2
+/* e.g., 3.3.0a1
  * PY_VERSION comes from patchevel.h
  */
 #define PYTHON_VERSION PY_VERSION "\0"
@@ -584,16 +584,16 @@ Global
		{6DE10744-E396-40A5-B4E2-1B69AA7C8D31}.Release|x64.ActiveCfg = Release|x64
		{6DE10744-E396-40A5-B4E2-1B69AA7C8D31}.Release|x64.Build.0 = Release|x64
		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.Debug|Win32.ActiveCfg = PGInstrument|Win32
-		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.Debug|x64.ActiveCfg = Debug|x64
-		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.Debug|x64.Build.0 = Debug|x64
+		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.Debug|x64.ActiveCfg = PGUpdate|x64
+		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.Debug|x64.Build.0 = PGUpdate|x64
		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.PGInstrument|Win32.ActiveCfg = PGInstrument|Win32
		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.PGInstrument|Win32.Build.0 = PGInstrument|Win32
-		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.PGInstrument|x64.ActiveCfg = Release|x64
-		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.PGInstrument|x64.Build.0 = Release|x64
+		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.PGInstrument|x64.ActiveCfg = PGInstrument|x64
+		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.PGInstrument|x64.Build.0 = PGInstrument|x64
		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.PGUpdate|Win32.ActiveCfg = PGUpdate|Win32
		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.PGUpdate|Win32.Build.0 = PGUpdate|Win32
-		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.PGUpdate|x64.ActiveCfg = Release|x64
-		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.PGUpdate|x64.Build.0 = Release|x64
+		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.PGUpdate|x64.ActiveCfg = PGUpdate|x64
+		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.PGUpdate|x64.Build.0 = PGUpdate|x64
		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.Release|Win32.ActiveCfg = Release|Win32
		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.Release|Win32.Build.0 = Release|Win32
		{885D4898-D08D-4091-9C40-C700CFE3FC5A}.Release|x64.ActiveCfg = Release|x64
@@ -1412,11 +1412,15 @@ tok_get(register struct tok_state *tok, char **p_start, char **p_end)
         /* Identifier (most frequent token!) */
         nonascii = 0;
         if (is_potential_identifier_start(c)) {
-            /* Process b"", r"", br"" and rb"" */
-            int saw_b = 0, saw_r = 0;
+            /* Process b"", r"", u"", br"", rb"" and ur"" */
+            int saw_b = 0, saw_r = 0, saw_u = 0;
             while (1) {
-                if (!saw_b && (c == 'b' || c == 'B'))
+                if (!(saw_b || saw_u) && (c == 'b' || c == 'B'))
                     saw_b = 1;
+                /* Since this is a backwards compatibility support literal we don't
+                   want to support it in arbitrary order like byte literals. */
+                else if (!(saw_b || saw_u || saw_r) && (c == 'u' || c == 'U'))
+                    saw_u = 1;
                 else if (!saw_r && (c == 'r' || c == 'R'))
                     saw_r = 1;
                 else
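The tokenizer hunk restores the u/U string prefix (PEP 414) alongside the existing b, r, br and rb forms, so 2.x-style unicode literals parse again. In 3.3+:

```python
assert eval('u"abc"') == "abc"    # u'' is accepted again, identical to ''
assert eval('b"abc"') == b"abc"
assert eval('rb"x"') == eval('br"x"') == b"x"
```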
@@ -3796,6 +3796,9 @@ parsestr(struct compiling *c, const node *n, int *bytesmode)
         quote = *++s;
         *bytesmode = 1;
     }
+    else if (quote == 'u' || quote == 'U') {
+        quote = *++s;
+    }
     else if (quote == 'r' || quote == 'R') {
         quote = *++s;
         rawmode = 1;
@@ -19,5 +19,5 @@ All Rights Reserved.";
 const char *
 Py_GetCopyright(void)
 {
-	return cprt;
+    return cprt;
 }
@@ -409,11 +409,12 @@ w_object(PyObject *v, WFILE *p)
     else if (PyObject_CheckBuffer(v)) {
         /* Write unknown buffer-style objects as a string */
         char *s;
-        PyBufferProcs *pb = v->ob_type->tp_as_buffer;
         Py_buffer view;
-        if ((*pb->bf_getbuffer)(v, &view, PyBUF_SIMPLE) != 0) {
+        if (PyObject_GetBuffer(v, &view, PyBUF_SIMPLE) != 0) {
             w_byte(TYPE_UNKNOWN, p);
             p->depth--;
             p->error = WFERR_UNMARSHALLABLE;
             return;
         }
         w_byte(TYPE_STRING, p);
         n = view.len;
@@ -425,8 +426,7 @@ w_object(PyObject *v, WFILE *p)
         }
         w_long((long)n, p);
         w_string(s, (int)n, p);
-        if (pb->bf_releasebuffer != NULL)
-            (*pb->bf_releasebuffer)(v, &view);
+        PyBuffer_Release(&view);
     }
     else {
         w_byte(TYPE_UNKNOWN, p);
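The `PyObject_CheckBuffer()` branch above means any object exporting the buffer protocol is written through the `TYPE_STRING` path, so such objects round-trip as plain bytes. An illustrative check with the stdlib `marshal` module:

```python
import marshal

# A bytearray is not marshalled as a bytearray: the buffer branch
# serializes its contents, and loads() gives back a bytes object.
blob = marshal.dumps(bytearray(b"abc"))
assert marshal.loads(blob) == b"abc"
assert type(marshal.loads(blob)) is bytes
```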
@@ -1239,7 +1239,6 @@ PyObject *
 PyMarshal_WriteObjectToString(PyObject *x, int version)
 {
     WFILE wf;
-    PyObject *res = NULL;

     wf.fp = NULL;
     wf.readable = NULL;
@@ -1273,12 +1272,7 @@ PyMarshal_WriteObjectToString(PyObject *x, int version)
                         :"object too deeply nested to marshal");
         return NULL;
     }
-    if (wf.str != NULL) {
-        /* XXX Quick hack -- need to do this differently */
-        res = PyBytes_FromObject(wf.str);
-        Py_DECREF(wf.str);
-    }
-    return res;
+    return wf.str;
 }

 /* And an interface for Python programs... */
@@ -1390,7 +1384,7 @@ marshal_loads(PyObject *self, PyObject *args)
     char *s;
     Py_ssize_t n;
     PyObject* result;
-    if (!PyArg_ParseTuple(args, "s*:loads", &p))
+    if (!PyArg_ParseTuple(args, "y*:loads", &p))
         return NULL;
     s = p.buf;
     n = p.len;
@ -1406,10 +1400,10 @@ marshal_loads(PyObject *self, PyObject *args)
|
|||
}
|
||||
|
||||
PyDoc_STRVAR(loads_doc,
|
||||
"loads(string)\n\
|
||||
"loads(bytes)\n\
|
||||
\n\
|
||||
Convert the string to a value. If no valid value is found, raise\n\
|
||||
EOFError, ValueError or TypeError. Extra characters in the string are\n\
|
||||
Convert the bytes object to a value. If no valid value is found, raise\n\
|
||||
EOFError, ValueError or TypeError. Extra characters in the input are\n\
|
||||
ignored.");
|
||||
|
||||
static PyMethodDef marshal_methods[] = {
|
||||
|
|
|
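The `"s*"` to `"y*"` format switch means `loads()` now insists on a bytes-like argument and rejects `str`, matching the updated docstring:

```python
import marshal

# dumps() produces bytes, and loads() round-trips it.
data = marshal.dumps({'answer': 42})
assert isinstance(data, bytes)
assert marshal.loads(data) == {'answer': 42}

# A str no longer satisfies the "y*" converter:
try:
    marshal.loads("not bytes")
except TypeError:
    pass
else:
    raise AssertionError("str should be rejected")
```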
@@ -70,6 +70,51 @@ _PyTime_gettimeofday(_PyTime_timeval *tp)
 #endif /* MS_WINDOWS */
 }

+int
+_PyTime_ObjectToTimespec(PyObject *obj, time_t *sec, long *nsec)
+{
+    if (PyFloat_Check(obj)) {
+        double d, intpart, floatpart, err;
+
+        d = PyFloat_AsDouble(obj);
+        floatpart = modf(d, &intpart);
+        if (floatpart < 0) {
+            floatpart = 1.0 + floatpart;
+            intpart -= 1.0;
+        }
+
+        *sec = (time_t)intpart;
+        err = intpart - (double)*sec;
+        if (err <= -1.0 || err >= 1.0)
+            goto overflow;
+
+        floatpart *= 1e9;
+        *nsec = (long)floatpart;
+        return 0;
+    }
+    else {
+#if defined(HAVE_LONG_LONG) && SIZEOF_TIME_T == SIZEOF_LONG_LONG
+        *sec = PyLong_AsLongLong(obj);
+#else
+        assert(sizeof(time_t) <= sizeof(long));
+        *sec = PyLong_AsLong(obj);
+#endif
+        if (*sec == -1 && PyErr_Occurred()) {
+            if (PyErr_ExceptionMatches(PyExc_OverflowError))
+                goto overflow;
+            else
+                return -1;
+        }
+        *nsec = 0;
+        return 0;
+    }
+
+overflow:
+    PyErr_SetString(PyExc_OverflowError,
+                    "timestamp out of range for platform time_t");
+    return -1;
+}
+
 void
 _PyTime_Init()
 {
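A hedged Python sketch of what `_PyTime_ObjectToTimespec` computes: split a timestamp into whole seconds plus a non-negative nanosecond remainder. The real C function additionally guards against `time_t` overflow, which this model omits:

```python
import math

def object_to_timespec(obj):
    # Mirrors the C logic: floats are split with modf(); a negative
    # fractional part is normalized into [0, 1) so nsec is never negative.
    if isinstance(obj, float):
        floatpart, intpart = math.modf(obj)
        if floatpart < 0:
            floatpart += 1.0
            intpart -= 1.0
        return int(intpart), int(floatpart * 1e9)
    # Integers carry no sub-second part.
    return int(obj), 0

print(object_to_timespec(-1.5))  # → (-2, 500000000)
```

The normalization step is why `-1.5` becomes `(-2, 500000000)` rather than `(-1, -500000000)`: seconds round toward negative infinity and the remainder stays positive.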
README
@@ -1,5 +1,5 @@
-This is Python version 3.3 alpha 0
-==================================
+This is Python version 3.3.0 alpha 1
+====================================

 Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011,
 2012 Python Software Foundation. All rights reserved.
Tools/msi/msi.py
@@ -2,12 +2,11 @@
 # (C) 2003 Martin v. Loewis
 # See "FOO" in comments refers to MSDN sections with the title FOO.
 import msilib, schema, sequence, os, glob, time, re, shutil, zipfile
+import subprocess, tempfile
 from msilib import Feature, CAB, Directory, Dialog, Binary, add_data
 import uisample
 from win32com.client import constants
 from distutils.spawn import find_executable
-from uuids import product_codes
-import tempfile

 # Settings can be overridden in config.py below
 # 0 for official python.org releases
@@ -77,9 +76,6 @@ upgrade_code_64='{6A965A0C-6EE6-4E3A-9983-3263F56311EC}'

 if snapshot:
     current_version = "%s.%s.%s" % (major, minor, int(time.time()/3600/24))
-    product_code = msilib.gen_uuid()
-else:
-    product_code = product_codes[current_version]

 if full_current_version is None:
     full_current_version = current_version
@@ -187,12 +183,19 @@ dll_path = os.path.join(srcdir, PCBUILD, dll_file)
 msilib.set_arch_from_file(dll_path)
 if msilib.pe_type(dll_path) != msilib.pe_type("msisupport.dll"):
     raise SystemError("msisupport.dll for incorrect architecture")

 if msilib.Win64:
     upgrade_code = upgrade_code_64
-    # Bump the last digit of the code by one, so that 32-bit and 64-bit
-    # releases get separate product codes
-    digit = hex((int(product_code[-2],16)+1)%16)[-1]
-    product_code = product_code[:-2] + digit + '}'
+
+if snapshot:
+    product_code = msilib.gen_uuid()
+else:
+    # official release: generate UUID from the download link that the file will have
+    import uuid
+    product_code = uuid.uuid3(uuid.NAMESPACE_URL,
+                    'http://www.python.org/ftp/python/%s.%s.%s/python-%s%s.msi' %
+                    (major, minor, micro, full_current_version, msilib.arch_ext))
+    product_code = '{%s}' % product_code

 if testpackage:
     ext = 'px'
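The `uuid3` scheme above makes official product codes reproducible: the same download URL always hashes to the same GUID, so the hard-coded table of recorded codes (deleted at the end of this commit) is no longer needed. A standalone illustration, with a made-up URL in the same shape:

```python
import uuid

# Hypothetical download URL, mirroring the format string used above.
url = 'http://www.python.org/ftp/python/3.3.0/python-3.3.10101.msi'
code = '{%s}' % uuid.uuid3(uuid.NAMESPACE_URL, url)

# Deterministic: recomputing from the same URL yields the same code.
assert code == '{%s}' % uuid.uuid3(uuid.NAMESPACE_URL, url)
assert len(code) == 38  # '{' + 36-character UUID + '}'
```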
@@ -906,31 +909,27 @@ class PyDirectory(Directory):
         kw['componentflags'] = 2 #msidbComponentAttributesOptional
         Directory.__init__(self, *args, **kw)

-    def check_unpackaged(self):
-        self.unpackaged_files.discard('__pycache__')
-        self.unpackaged_files.discard('.svn')
-        if self.unpackaged_files:
-            print "Warning: Unpackaged files in %s" % self.absolute
-            print self.unpackaged_files
+def hgmanifest():
+    # Fetch file list from Mercurial
+    process = subprocess.Popen(['hg', 'manifest'], stdout=subprocess.PIPE)
+    stdout, stderr = process.communicate()
+    # Create nested directories for file tree
+    result = {}
+    for line in stdout.splitlines():
+        components = line.split('/')
+        d = result
+        while len(components) > 1:
+            d1 = d.setdefault(components[0], {})
+            d = d1
+            del components[0]
+        d[components[0]] = None
+    return result

-def inside_test(dir):
-    if dir.physical in ('test', 'tests'):
-        return True
-    if dir.basedir:
-        return inside_test(dir.basedir)
-    return False

 def in_packaging_tests(dir):
     if dir.physical == 'tests' and dir.basedir.physical == 'packaging':
         return True
     if dir.basedir:
         return in_packaging_tests(dir.basedir)
     return False

 # See "File Table", "Component Table", "Directory Table",
 # "FeatureComponents Table"
 def add_files(db):
+    hgfiles = hgmanifest()
     cab = CAB("python")
     tmpfiles = []
     # Add all executables, icons, text files into the TARGETDIR component
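`hgmanifest()` turns the flat `hg manifest` listing into a nested dict in which inner dicts are directories and `None` marks files. The same transformation, sketched self-contained on literal paths instead of the subprocess output:

```python
def manifest_tree(paths):
    # Same nesting logic as hgmanifest() above, minus the subprocess call.
    result = {}
    for line in paths:
        components = line.split('/')
        d = result
        while len(components) > 1:
            # Descend into (or create) the directory for the leading component.
            d = d.setdefault(components[0], {})
            del components[0]
        d[components[0]] = None  # leaf: a file
    return result

tree = manifest_tree(['Lib/os.py', 'Lib/test/test_os.py', 'README'])
assert tree == {'Lib': {'os.py': None, 'test': {'test_os.py': None}},
                'README': None}
```

This tree is what `add_files()` walks below: each `pydirs` entry carries the subtree dict, so file membership comes from version control rather than directory globbing.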
@@ -992,123 +991,40 @@ def add_files(db):

     # Add all .py files in Lib, except tkinter, test
-    dirs = []
-    pydirs = [(root,"Lib")]
+    pydirs = [(root, "Lib", hgfiles["Lib"], default_feature)]
     while pydirs:
         # Commit every now and then, or else installer will complain
         db.Commit()
-        parent, dir = pydirs.pop()
-        if dir == ".svn" or dir == '__pycache__' or dir.startswith("plat-"):
+        parent, dir, files, feature = pydirs.pop()
+        if dir.startswith("plat-"):
             continue
-        elif dir in ["tkinter", "idlelib", "Icons"]:
+        if dir in ["tkinter", "idlelib", "turtledemo"]:
             if not have_tcl:
                 continue
-            tcltk.set_current()
-        elif dir in ('test', 'tests') or inside_test(parent):
-            testsuite.set_current()
+            feature = tcltk
+        elif dir in ('test', 'tests'):
+            feature = testsuite
         elif not have_ctypes and dir == "ctypes":
             continue
-        else:
-            default_feature.set_current()
+        feature.set_current()
         lib = PyDirectory(db, cab, parent, dir, dir, "%s|%s" % (parent.make_short(dir), dir))
-        # Add additional files
-        dirs.append(lib)
-        lib.glob("*.txt")
-        if dir=='site-packages':
-            lib.add_file("README.txt", src="README")
-            continue
-        files = lib.glob("*.py")
-        files += lib.glob("*.pyw")
-        if files:
-            # Add an entry to the RemoveFile table to remove bytecode files.
-            lib.remove_pyc()
-        # package READMEs if present
-        lib.glob("README")
-        if dir=='Lib':
-            lib.add_file("sysconfig.cfg")
-        if dir=='test' and parent.physical=='Lib':
-            lib.add_file("185test.db")
-            lib.add_file("audiotest.au")
-            lib.add_file("sgml_input.html")
-            lib.add_file("testtar.tar")
-            lib.add_file("test_difflib_expect.html")
-            lib.add_file("check_soundcard.vbs")
-            lib.add_file("empty.vbs")
-            lib.add_file("Sine-1000Hz-300ms.aif")
-            lib.glob("*.uue")
-            lib.glob("*.pem")
-            lib.glob("*.pck")
-            lib.glob("cfgparser.*")
-            lib.add_file("zip_cp437_header.zip")
-            lib.add_file("zipdir.zip")
-            lib.add_file("mime.types")
-        if dir=='capath':
-            lib.glob("*.0")
-        if dir=='tests' and parent.physical=='distutils':
-            lib.add_file("Setup.sample")
-        if dir=='decimaltestdata':
-            lib.glob("*.decTest")
-        if dir=='xmltestdata':
-            lib.glob("*.xml")
-            lib.add_file("test.xml.out")
-        if dir=='output':
-            lib.glob("test_*")
-        if dir=='sndhdrdata':
-            lib.glob("sndhdr.*")
-        if dir=='idlelib':
-            lib.glob("*.def")
-            lib.add_file("idle.bat")
-            lib.add_file("ChangeLog")
-        if dir=="Icons":
-            lib.glob("*.gif")
-            lib.add_file("idle.icns")
-        if dir=="command" and parent.physical in ("distutils", "packaging"):
-            lib.glob("wininst*.exe")
-            lib.add_file("command_template")
-        if dir=="lib2to3":
-            lib.removefile("pickle", "*.pickle")
-        if dir=="macholib":
-            lib.add_file("README.ctypes")
-            lib.glob("fetch_macholib*")
-        if dir=='turtledemo':
-            lib.add_file("turtle.cfg")
-        if dir=="pydoc_data":
-            lib.add_file("_pydoc.css")
-        if dir.endswith('.dist-info'):
-            lib.add_file('INSTALLER')
-            lib.add_file('REQUESTED')
-            lib.add_file('RECORD')
-            lib.add_file('METADATA')
-            lib.glob('RESOURCES')
-        if dir.endswith('.egg-info') or dir == 'EGG-INFO':
-            lib.add_file('PKG-INFO')
-        if in_packaging_tests(parent):
-            lib.glob('*.html')
-            lib.glob('*.tar.gz')
-        if dir=='fake_dists':
-            # cannot use glob since there are also egg-info directories here
-            lib.add_file('cheese-2.0.2.egg-info')
-            lib.add_file('nut-funkyversion.egg-info')
-            lib.add_file('strawberry-0.6.egg')
-            lib.add_file('truffles-5.0.egg-info')
-            lib.add_file('babar.cfg')
-            lib.add_file('babar.png')
-        if dir=="data" and parent.physical=="test_email":
-            # This should contain all non-.svn files listed in subversion
-            for f in os.listdir(lib.absolute):
-                if f.endswith(".txt") or f==".svn":continue
-                if f.endswith(".au") or f.endswith(".gif"):
-                    lib.add_file(f)
-                else:
-                    print("WARNING: New file %s in test/test_email/data" % f)
-        if dir=='tests' and parent.physical == 'packaging':
-            lib.add_file('SETUPTOOLS-PKG-INFO2')
-            lib.add_file('SETUPTOOLS-PKG-INFO')
-            lib.add_file('PKG-INFO')
-        for f in os.listdir(lib.absolute):
-            if os.path.isdir(os.path.join(lib.absolute, f)):
-                pydirs.append((lib, f))
-    for d in dirs:
-        d.check_unpackaged()
+        has_py = False
+        for name, subdir in files.items():
+            if subdir is None:
+                assert os.path.isfile(os.path.join(lib.absolute, name))
+                if name == 'README':
+                    lib.add_file("README.txt", src="README")
+                else:
+                    lib.add_file(name)
+                    has_py = has_py or name.endswith(".py") or name.endswith(".pyw")
+            else:
+                assert os.path.isdir(os.path.join(lib.absolute, name))
+                pydirs.append((lib, name, subdir, feature))
+
+        if has_py:
+            lib.remove_pyc()
     # Add DLLs
     default_feature.set_current()
     lib = DLLs
@@ -1,38 +0,0 @@
-# This should be extended for each Python release.
-# The product code must change whenever the name of the MSI file
-# changes, and when new component codes are issued for existing
-# components. See "Changing the Product Code". As we change the
-# component codes with every build, we need a new product code
-# each time. For intermediate (snapshot) releases, they are automatically
-# generated. For official releases, we record the product codes,
-# so people can refer to them.
-product_codes = {
-    '3.1.101': '{c423eada-c498-4d51-9eb4-bfeae647e0a0}', # 3.1a1
-    '3.1.102': '{f6e199bf-dc64-42f3-87d4-1525991a013e}', # 3.1a2
-    '3.1.111': '{c3c82893-69b2-4676-8554-1b6ee6c191e9}', # 3.1b1
-    '3.1.121': '{da2b5170-12f3-4d99-8a1f-54926cca7acd}', # 3.1c1
-    '3.1.122': '{bceb5133-e2ee-4109-951f-ac7e941a1692}', # 3.1c2
-    '3.1.150': '{3ad61ee5-81d2-4d7e-adef-da1dd37277d1}', # 3.1.0
-    '3.1.1121':'{5782f957-6d49-41d4-bad0-668715dfd638}', # 3.1.1c1
-    '3.1.1150':'{7ff90460-89b7-435b-b583-b37b2815ccc7}', # 3.1.1
-    '3.1.2121':'{ec45624a-378c-43be-91f3-3f7a59b0d90c}', # 3.1.2c1
-    '3.1.2150':'{d40af016-506c-43fb-a738-bd54fa8c1e85}', # 3.1.2
-    '3.2.101' :'{b411f168-7a36-4fff-902c-a554d1c78a4f}', # 3.2a1
-    '3.2.102' :'{79ff73b7-8359-410f-b9c5-152d2026f8c8}', # 3.2a2
-    '3.2.103' :'{e7635c65-c221-4b9b-b70a-5611b8369d77}', # 3.2a3
-    '3.2.104' :'{748cd139-75b8-4ca8-98a7-58262298181e}', # 3.2a4
-    '3.2.111' :'{20bfc16f-c7cd-4fc0-8f96-9914614a3c50}', # 3.2b1
-    '3.2.112' :'{0e350c98-8d73-4993-b686-cfe87160046e}', # 3.2b2
-    '3.2.121' :'{2094968d-7583-47f6-a7fd-22304532e09f}', # 3.2rc1
-    '3.2.122' :'{4f3edfa6-cf70-469a-825f-e1206aa7f412}', # 3.2rc2
-    '3.2.123' :'{90c673d7-8cfd-4969-9816-f7d70bad87f3}', # 3.2rc3
-    '3.2.150' :'{b2042d5e-986d-44ec-aee3-afe4108ccc93}', # 3.2.0
-    '3.2.1121':'{4f90de4a-83dd-4443-b625-ca130ff361dd}', # 3.2.1rc1
-    '3.2.1122':'{dc5eb04d-ff8a-4bed-8f96-23942fd59e5f}', # 3.2.1rc2
-    '3.2.1150':'{34b2530c-6349-4292-9dc3-60bda4aed93c}', # 3.2.1
-    '3.2.2121':'{DFB29A53-ACC4-44e6-85A6-D0DA26FE8E4E}', # 3.2.2rc1
-    '3.2.2150':'{4CDE3168-D060-4b7c-BC74-4D8F9BB01AFD}', # 3.2.2
-    '3.2.3121':'{B8E8CFF7-E4C6-4a7c-9F06-BB3A8B75DDA8}', # 3.2.3rc1
-    '3.2.3150':'{789C9644-9F82-44d3-B4CA-AC31F46F5882}', # 3.2.3
-
-}