Some formatting & grammar fixes for the multiprocessing doc

Eli Bendersky 2011-12-31 07:05:12 +02:00
parent c016f46df5
commit 4b76f8a76c
1 changed file with 17 additions and 17 deletions

@@ -282,7 +282,7 @@ For example::
     if __name__ == '__main__':
         pool = Pool(processes=4)              # start 4 worker processes
         result = pool.apply_async(f, [10])    # evaluate "f(10)" asynchronously
         print result.get(timeout=1)           # prints "100" unless your computer is *very* slow
         print pool.map(f, range(10))          # prints "[0, 1, 4,..., 81]"
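As a sanity check of the snippet in this hunk, here is a self-contained Python 3 rendering (the patched file uses Python 2 `print` statements; the timeout is loosened from 1 to 5 seconds here only to make the sketch robust on slow machines):

```python
from multiprocessing import Pool

def f(x):
    return x * x

if __name__ == '__main__':
    with Pool(processes=4) as pool:            # start 4 worker processes
        result = pool.apply_async(f, [10])     # evaluate "f(10)" asynchronously
        print(result.get(timeout=5))           # 100
        print(pool.map(f, range(10)))          # [0, 1, 4, ..., 81]
```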
@ -472,7 +472,7 @@ into Python 2.5's :class:`Queue.Queue` class.
If you use :class:`JoinableQueue` then you **must** call If you use :class:`JoinableQueue` then you **must** call
:meth:`JoinableQueue.task_done` for each task removed from the queue or else the :meth:`JoinableQueue.task_done` for each task removed from the queue or else the
semaphore used to count the number of unfinished tasks may eventually overflow semaphore used to count the number of unfinished tasks may eventually overflow,
raising an exception. raising an exception.
Note that one can also create a shared queue by using a manager object -- see Note that one can also create a shared queue by using a manager object -- see
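The rule this hunk tightens — exactly one :meth:`task_done` call per item removed, or :meth:`join` never returns — can be sketched as follows; the worker/sentinel arrangement is illustrative, not from the patched file:

```python
from multiprocessing import JoinableQueue, Process, Queue

def worker(tasks, results):
    while True:
        item = tasks.get()
        if item is None:              # sentinel: stop consuming
            tasks.task_done()
            break
        results.put(item * item)
        tasks.task_done()             # one task_done() per get(), or join() hangs

def demo():
    tasks, results = JoinableQueue(), Queue()
    p = Process(target=worker, args=(tasks, results))
    p.start()
    for i in range(5):
        tasks.put(i)
    tasks.put(None)
    tasks.join()                      # returns once every item was task_done()'d
    p.join()
    return sorted(results.get() for _ in range(5))

if __name__ == '__main__':
    print(demo())                     # [0, 1, 4, 9, 16]
```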
@@ -490,7 +490,7 @@ Note that one can also create a shared queue by using a manager object -- see
    If a process is killed using :meth:`Process.terminate` or :func:`os.kill`
    while it is trying to use a :class:`Queue`, then the data in the queue is
-   likely to become corrupted.  This may cause any other processes to get an
+   likely to become corrupted.  This may cause any other process to get an
    exception when it tries to use the queue later on.
 
 .. warning::
@@ -692,7 +692,7 @@ Miscellaneous
    (By default :data:`sys.executable` is used).  Embedders will probably need to
    do some thing like ::
 
-      setExecutable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
+      set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
 
    before they can create child processes.  (Windows only)
@@ -711,7 +711,7 @@ Connection Objects
 Connection objects allow the sending and receiving of picklable objects or
 strings.  They can be thought of as message oriented connected sockets.
 
-Connection objects usually created using :func:`Pipe` -- see also
+Connection objects are usually created using :func:`Pipe` -- see also
 :ref:`multiprocessing-listeners-clients`.
 
 .. class:: Connection
@@ -722,7 +722,7 @@ Connection objects usually created using :func:`Pipe` -- see also
       using :meth:`recv`.
 
       The object must be picklable.  Very large pickles (approximately 32 MB+,
-      though it depends on the OS) may raise a ValueError exception.
+      though it depends on the OS) may raise a :exc:`ValueError` exception.
 
    .. method:: recv()
@@ -732,7 +732,7 @@ Connection objects usually created using :func:`Pipe` -- see also
    .. method:: fileno()
 
-      Returns the file descriptor or handle used by the connection.
+      Return the file descriptor or handle used by the connection.
 
    .. method:: close()
@@ -756,7 +756,7 @@ Connection objects usually created using :func:`Pipe` -- see also
       If *offset* is given then data is read from that position in *buffer*.  If
       *size* is given then that many bytes will be read from buffer.  Very large
       buffers (approximately 32 MB+, though it depends on the OS) may raise a
-      ValueError exception
+      :exc:`ValueError` exception
 
    .. method:: recv_bytes([maxlength])
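A minimal single-process sketch of the :class:`Connection` methods these hunks touch (`send`, `recv`, `fileno`); the dictionary payload is just an arbitrary picklable object:

```python
from multiprocessing import Pipe

# Pipe() returns a pair of Connection objects; both ends can send() and recv().
parent_conn, child_conn = Pipe()

parent_conn.send({'op': 'square', 'arg': 7})   # any picklable object may be sent
request = child_conn.recv()
child_conn.send(request['arg'] ** 2)

print(parent_conn.recv())        # 49
print(parent_conn.fileno())      # the descriptor/handle backing the connection
parent_conn.close()
child_conn.close()
```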
@@ -1329,7 +1329,7 @@ Customized managers
 >>>>>>>>>>>>>>>>>>>
 
 To create one's own manager, one creates a subclass of :class:`BaseManager` and
-use the :meth:`~BaseManager.register` classmethod to register new types or
+uses the :meth:`~BaseManager.register` classmethod to register new types or
 callables with the manager class.  For example::
 
    from multiprocessing.managers import BaseManager
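The corrected sentence describes the pattern below — a runnable sketch along the lines of the docs' own customized-manager example; `MathsClass` and the `'Maths'` typeid are illustrative names:

```python
from multiprocessing.managers import BaseManager

class MathsClass:
    def add(self, x, y):
        return x + y

class MyManager(BaseManager):
    pass

# register() is a classmethod on the BaseManager subclass, not on an instance
MyManager.register('Maths', MathsClass)

if __name__ == '__main__':
    with MyManager() as manager:      # starts the manager's server process
        maths = manager.Maths()       # a proxy; the real object lives in the server
        print(maths.add(4, 3))        # 7
```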
@@ -1579,10 +1579,10 @@ with the :class:`Pool` class.
    .. method:: apply(func[, args[, kwds]])
 
-      Equivalent of the :func:`apply` built-in function.  It blocks till the
-      result is ready.  Given this blocks, :meth:`apply_async` is better suited
-      for performing work in parallel.  Additionally, the passed
-      in function is only executed in one of the workers of the pool.
+      Equivalent of the :func:`apply` built-in function.  It blocks until the
+      result is ready, so :meth:`apply_async` is better suited for performing
+      work in parallel.  Additionally, *func* is only executed in one of the
+      workers of the pool.
 
    .. method:: apply_async(func[, args[, kwds[, callback]]])
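The reworded paragraph contrasts the blocking `apply` with `apply_async`; a small illustrative sketch (process count and timeout are arbitrary):

```python
from multiprocessing import Pool

def f(x):
    return x * x

if __name__ == '__main__':
    with Pool(processes=2) as pool:
        # apply() blocks the caller until one worker has computed f(3)
        print(pool.apply(f, (3,)))            # 9
        # apply_async() returns an AsyncResult immediately; the caller can
        # keep working and collect the value later with get()
        res = pool.apply_async(f, (4,))
        print(res.get(timeout=5))             # 16
```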
@@ -1596,7 +1596,7 @@ with the :class:`Pool` class.
    .. method:: map(func, iterable[, chunksize])
 
       A parallel equivalent of the :func:`map` built-in function (it supports only
-      one *iterable* argument though).  It blocks till the result is ready.
+      one *iterable* argument though).  It blocks until the result is ready.
 
       This method chops the iterable into a number of chunks which it submits to
       the process pool as separate tasks.  The (approximate) size of these
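The chunking behaviour described here can be made explicit by passing *chunksize*; the numbers below are arbitrary:

```python
from multiprocessing import Pool

def double(x):
    return 2 * x

if __name__ == '__main__':
    with Pool(processes=2) as pool:
        # With chunksize=4, the 8 inputs are shipped to the workers as two
        # chunks of four, reducing IPC overhead for a cheap function.
        print(pool.map(double, range(8), chunksize=4))
        # -> [0, 2, 4, 6, 8, 10, 12, 14]
```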
@@ -2046,7 +2046,7 @@ Better to inherit than pickle/unpickle
    On Windows many types from :mod:`multiprocessing` need to be picklable so
    that child processes can use them.  However, one should generally avoid
    sending shared objects to other processes using pipes or queues.  Instead
-   you should arrange the program so that a process which need access to a
+   you should arrange the program so that a process which needs access to a
    shared resource created elsewhere can inherit it from an ancestor process.
 
 Avoid terminating processes
@@ -2125,7 +2125,7 @@ Explicitly pass resources to child processes
        for i in range(10):
            Process(target=f, args=(lock,)).start()
 
-Beware replacing sys.stdin with a "file like object"
+Beware of replacing :data:`sys.stdin` with a "file like object"
 
 :mod:`multiprocessing` originally unconditionally called::
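For context, the resource-passing pattern shown in this hunk, completed into a runnable sketch (three processes chosen arbitrarily):

```python
from multiprocessing import Lock, Process

def f(lock, i):
    # The lock arrives as an explicit argument rather than as a global, so
    # it is passed correctly to children created on Windows (and under the
    # "spawn" start method in modern Python).
    with lock:
        print('hello world', i)

if __name__ == '__main__':
    lock = Lock()
    workers = [Process(target=f, args=(lock, i)) for i in range(3)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()
```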
@@ -2243,7 +2243,7 @@ Synchronization types like locks, conditions and queues:
 An example showing how to use queues to feed tasks to a collection of worker
-process and collect the results:
+processes and collect the results:
 
 .. literalinclude:: ../includes/mp_workers.py
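The corrected sentence refers to `mp_workers.py`; a condensed sketch of that queue-fed worker pattern (two workers and ten toy tasks — the details are illustrative):

```python
from multiprocessing import Process, Queue

def square(x):
    return x * x

def worker(tasks, results):
    # iter(tasks.get, 'STOP') keeps pulling tasks until the sentinel arrives
    for func, args in iter(tasks.get, 'STOP'):
        results.put(func(*args))

if __name__ == '__main__':
    tasks, results = Queue(), Queue()
    for i in range(10):
        tasks.put((square, (i,)))

    workers = [Process(target=worker, args=(tasks, results)) for _ in range(2)]
    for w in workers:
        w.start()
    for _ in workers:
        tasks.put('STOP')             # one sentinel per worker process

    print(sorted(results.get() for _ in range(10)))
    # -> [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
    for w in workers:
        w.join()
```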