Issue #27895: Spelling fixes (Contributed by Ville Skyttä).

Raymond Hettinger 2016-08-30 10:47:49 -07:00
parent 613debcf0a
commit 15f44ab043
72 changed files with 121 additions and 121 deletions

View File

@@ -1842,7 +1842,7 @@ Note that the :class:`datetime` instances that differ only by the value of the
 :attr:`~datetime.fold` attribute are considered equal in comparisons.
 Applications that can't bear wall-time ambiguities should explicitly check the
-value of the :attr:`~datetime.fold` atribute or avoid using hybrid
+value of the :attr:`~datetime.fold` attribute or avoid using hybrid
 :class:`tzinfo` subclasses; there are no ambiguities when using :class:`timezone`,
 or any other fixed-offset :class:`tzinfo` subclass (such as a class representing
 only EST (fixed offset -5 hours), or only EDT (fixed offset -4 hours)).
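
The hunk above is part of the PEP 495 discussion of the fold attribute; a minimal sketch of the comparison behaviour it describes, assuming naive datetimes and an arbitrarily chosen repeated wall time:

    import datetime

    # Two naive datetimes that differ only in fold compare equal; code that
    # cares about the wall-time ambiguity must inspect fold explicitly.
    d0 = datetime.datetime(2016, 11, 6, 1, 30, fold=0)
    d1 = d0.replace(fold=1)
    print(d0 == d1)          # True
    print(d0.fold, d1.fold)  # 0 1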

View File

@@ -433,5 +433,5 @@ Currently the email package provides only one concrete content manager,
 If *headers* is specified and is a list of strings of the form
 ``headername: headervalue`` or a list of ``header`` objects
-(distinguised from strings by having a ``name`` attribute), add the
+(distinguished from strings by having a ``name`` attribute), add the
 headers to *msg*.
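
The corrected paragraph documents the *headers* argument of the raw data content manager; a small sketch of that usage through EmailMessage.set_content() (the X-Custom header name is made up for the example):

    from email.message import EmailMessage

    msg = EmailMessage()
    # String headers must have the form "headername: headervalue"; they are
    # added to msg alongside the generated Content-* headers.
    msg.set_content("Hello, world", headers=["X-Custom: 1"])
    print(msg["X-Custom"])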

View File

@@ -531,7 +531,7 @@ Command line usage
 -c command run command in the shell window
 -d enable debugger and open shell window
 -e open editor window
--h print help message with legal combinatios and exit
+-h print help message with legal combinations and exit
 -i open shell window
 -r file run file in shell window
 -s run $IDLESTARTUP or $PYTHONSTARTUP first, in shell window

View File

@@ -44,7 +44,7 @@ SMTPServer Objects
 dictionary is a suitable value). If not specified the :mod:`asyncore`
 global socket map is used.
-*enable_SMTPUTF8* determins whether the ``SMTPUTF8`` extension (as defined
+*enable_SMTPUTF8* determines whether the ``SMTPUTF8`` extension (as defined
 in :RFC:`6531`) should be enabled. The default is ``False``.
 When ``True``, ``SMTPUTF8`` is accepted as a parameter to the ``MAIL``
 command and when present is passed to :meth:`process_message` in the
@@ -162,7 +162,7 @@ SMTPChannel Objects
 accepted in a ``DATA`` command. A value of ``None`` or ``0`` means no
 limit.
-*enable_SMTPUTF8* determins whether the ``SMTPUTF8`` extension (as defined
+*enable_SMTPUTF8* determines whether the ``SMTPUTF8`` extension (as defined
 in :RFC:`6531`) should be enabled. The default is ``False``.
 *decode_data* and *enable_SMTPUTF8* cannot be set to ``True`` at the same
 time.
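
Both hunks document the *enable_SMTPUTF8* flag of smtpd; a rough usage sketch against that documented API (the subclass name and port are made up, and the exact kwargs passed to process_message depend on the other constructor defaults):

    import asyncore
    import smtpd

    class PrintingServer(smtpd.SMTPServer):
        # With enable_SMTPUTF8=True the server advertises SMTPUTF8; a client's
        # SMTPUTF8 MAIL option is passed through in kwargs['mail_options'].
        def process_message(self, peer, mailfrom, rcpttos, data, **kwargs):
            print(peer, mailfrom, rcpttos, kwargs.get('mail_options'))

    server = PrintingServer(('localhost', 8025), None, enable_SMTPUTF8=True)
    asyncore.loop()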

View File

@@ -1954,7 +1954,7 @@ ssl
 :attr:`~ssl.OP_NO_COMPRESSION` can be used to disable compression.
 (Contributed by Antoine Pitrou in :issue:`13634`.)
-* Support has been added for the Next Procotol Negotiation extension using
+* Support has been added for the Next Protocol Negotiation extension using
 the :meth:`ssl.SSLContext.set_npn_protocols` method.
 (Contributed by Colin Marc in :issue:`14204`.)

View File

@@ -487,7 +487,7 @@ xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx*/
 /* old buffer API
 FIXME: usage of these should all be replaced in Python itself
 but for backwards compatibility we will implement them.
-Their usage without a corresponding "unlock" mechansim
+Their usage without a corresponding "unlock" mechanism
 may create issues (but they would already be there). */
 PyAPI_FUNC(int) PyObject_AsCharBuffer(PyObject *obj,

View File

@@ -131,7 +131,7 @@ PyAPI_FUNC(Py_ssize_t) _PyBytes_InsertThousandsGrouping(char *buffer,
 #define F_ZERO (1<<4)
 #ifndef Py_LIMITED_API
-/* The _PyBytesWriter structure is big: it contains an embeded "stack buffer".
+/* The _PyBytesWriter structure is big: it contains an embedded "stack buffer".
 A _PyBytesWriter variable must be declared at the end of variables in a
 function to optimize the memory allocation on the stack. */
 typedef struct {

View File

@@ -37,7 +37,7 @@ extern double pow(double, double);
 #endif /* __STDC__ */
 #endif /* _MSC_VER */
-/* High precision defintion of pi and e (Euler)
+/* High precision definition of pi and e (Euler)
 * The values are taken from libc6's math.h.
 */
 #ifndef Py_MATH_PIl

View File

@@ -590,7 +590,7 @@ class StreamReader:
 bytes. If the EOF was received and the internal buffer is empty, return
 an empty bytes object.
-If n is zero, return empty bytes object immediatelly.
+If n is zero, return empty bytes object immediately.
 If n is positive, this function try to read `n` bytes, and may return
 less or equal bytes than requested, but at least one byte. If EOF was
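
The docstring being fixed describes StreamReader.read(n); a brief sketch of the documented n semantics (host, port and request bytes are placeholders):

    import asyncio

    async def main():
        reader, writer = await asyncio.open_connection('example.org', 80)
        writer.write(b'HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n')
        await writer.drain()
        empty = await reader.read(0)    # b'' returned immediately
        chunk = await reader.read(100)  # at most 100 bytes; b'' only at EOF
        rest = await reader.read()      # read until EOF
        print(len(empty), len(chunk), len(rest))
        writer.close()

    asyncio.get_event_loop().run_until_complete(main())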

View File

@@ -63,7 +63,7 @@ import traceback
 # interpreter to exit when there are still idle processes in a
 # ProcessPoolExecutor's process pool (i.e. shutdown() was not called). However,
 # allowing workers to die with the interpreter has two undesirable properties:
-# - The workers would still be running during interpretor shutdown,
+# - The workers would still be running during interpreter shutdown,
 # meaning that they would fail in unpredictable ways.
 # - The workers could be killed while evaluating a work item, which could
 # be bad if the callable being evaluated has external side-effects e.g.

View File

@@ -16,7 +16,7 @@ import os
 # to exit when there are still idle threads in a ThreadPoolExecutor's thread
 # pool (i.e. shutdown() was not called). However, allowing workers to die with
 # the interpreter has two undesirable properties:
-# - The workers would still be running during interpretor shutdown,
+# - The workers would still be running during interpreter shutdown,
 # meaning that they would fail in unpredictable ways.
 # - The workers could be killed while evaluating a work item, which could
 # be bad if the callable being evaluated has external side-effects e.g.

View File

@@ -125,7 +125,7 @@ class msvc9compilerTestCase(support.TempdirManager,
 self.assertRaises(KeyError, Reg.get_value, 'xxx', 'xxx')
 # looking for values that should exist on all
-# windows registeries versions.
+# windows registry versions.
 path = r'Control Panel\Desktop'
 v = Reg.get_value(path, 'dragfullwindows')
 self.assertIn(v, ('0', '1', '2'))

View File

@@ -141,7 +141,7 @@ def _encode_base64(data, max_line_length):
 def _encode_text(string, charset, cte, policy):
 lines = string.encode(charset).splitlines()
 linesep = policy.linesep.encode('ascii')
-def embeded_body(lines): return linesep.join(lines) + linesep
+def embedded_body(lines): return linesep.join(lines) + linesep
 def normal_body(lines): return b'\n'.join(lines) + b'\n'
 if cte==None:
 # Use heuristics to decide on the "best" encoding.
@@ -152,7 +152,7 @@ def _encode_text(string, charset, cte, policy):
 if (policy.cte_type == '8bit' and
 max(len(x) for x in lines) <= policy.max_line_length):
 return '8bit', normal_body(lines).decode('ascii', 'surrogateescape')
-sniff = embeded_body(lines[:10])
+sniff = embedded_body(lines[:10])
 sniff_qp = quoprimime.body_encode(sniff.decode('latin-1'),
 policy.max_line_length)
 sniff_base64 = binascii.b2a_base64(sniff)
@@ -171,7 +171,7 @@ def _encode_text(string, charset, cte, policy):
 data = quoprimime.body_encode(normal_body(lines).decode('latin-1'),
 policy.max_line_length)
 elif cte == 'base64':
-data = _encode_base64(embeded_body(lines), policy.max_line_length)
+data = _encode_base64(embedded_body(lines), policy.max_line_length)
 else:
 raise ValueError("Unknown content transfer encoding {}".format(cte))
 return cte, data

View File

@@ -97,7 +97,7 @@ class Generator:
 self._NL = policy.linesep
 self._encoded_NL = self._encode(self._NL)
 self._EMPTY = ''
-self._encoded_EMTPY = self._encode('')
+self._encoded_EMPTY = self._encode('')
 # Because we use clone (below) when we recursively process message
 # subparts, and because clone uses the computed policy (not None),
 # submessages will automatically get set to the computed policy when

View File

@@ -49,7 +49,7 @@ fcre = re.compile(r'[\041-\176]+:$')
 # Find a header embedded in a putative header value. Used to check for
 # header injection attack.
-_embeded_header = re.compile(r'\n[^ \t]+:')
+_embedded_header = re.compile(r'\n[^ \t]+:')
@@ -385,7 +385,7 @@ class Header:
 if self._chunks:
 formatter.add_transition()
 value = formatter._str(linesep)
-if _embeded_header.search(value):
+if _embedded_header.search(value):
 raise HeaderParseError("header value appears to contain "
 "an embedded header: {!r}".format(value))
 return value

View File

@@ -1043,7 +1043,7 @@ class MIMEPart(Message):
 yield from parts
 return
 # Otherwise we more or less invert the remaining logic in get_body.
-# This only really works in edge cases (ex: non-text relateds or
+# This only really works in edge cases (ex: non-text related or
 # alternatives) if the sending agent sets content-disposition.
 seen = [] # Only skip the first example of each candidate type.
 for part in parts:

View File

@@ -136,7 +136,7 @@ _MAXHEADERS = 100
 #
 # VCHAR defined in http://tools.ietf.org/html/rfc5234#appendix-B.1
-# the patterns for both name and value are more leniant than RFC
+# the patterns for both name and value are more lenient than RFC
 # definitions to allow for backwards compatibility
 _is_legal_header_name = re.compile(rb'[^:\s][^:\r\n]*').fullmatch
 _is_illegal_header_value = re.compile(rb'\n(?![ \t])|\r(?![ \t\n])').search

View File

@@ -65,7 +65,7 @@ pathbrowser.py # Create path browser window.
 percolator.py # Manage delegator stack (nim).
 pyparse.py # Give information on code indentation
 pyshell.py # Start IDLE, manage shell, complete editor window
-query.py # Query user for informtion
+query.py # Query user for information
 redirector.py # Intercept widget subcommands (for percolator) (nim).
 replace.py # Search and replace pattern in text.
 rpc.py # Commuicate between idle and user processes (nim).

View File

@@ -497,7 +497,7 @@ functions to be used from IDLE&#8217;s Python shell.</p>
 -c command run command in the shell window
 -d enable debugger and open shell window
 -e open editor window
--h print help message with legal combinatios and exit
+-h print help message with legal combinations and exit
 -i open shell window
 -r file run file in shell window
 -s run $IDLESTARTUP or $PYTHONSTARTUP first, in shell window

View File

@@ -159,7 +159,7 @@ class FindTest(unittest.TestCase):
 class ReformatFunctionTest(unittest.TestCase):
 """Test the reformat_paragraph function without the editor window."""
-def test_reformat_paragrah(self):
+def test_reformat_paragraph(self):
 Equal = self.assertEqual
 reform = fp.reformat_paragraph
 hw = "O hello world"

View File

@@ -64,7 +64,7 @@ class ReadError(OSError):
 class RegistryError(Exception):
 """Raised when a registry operation with the archiving
-and unpacking registeries fails"""
+and unpacking registries fails"""
 def copyfileobj(fsrc, fdst, length=16*1024):

View File

@@ -454,7 +454,7 @@ class _nroot_NS:
 """Return the nth root of a positive huge number."""
 assert x > 0
 # I state without proof that ⁿ√x ≈ ⁿ√2·ⁿ√(x//2)
-# and that for sufficiently big x the error is acceptible.
+# and that for sufficiently big x the error is acceptable.
 # We now halve x until it is small enough to get the root.
 m = 0
 while True:

View File

@@ -26,7 +26,7 @@ import test.support.script_helper
 _multiprocessing = test.support.import_module('_multiprocessing')
 # Skip tests if sem_open implementation is broken.
 test.support.import_module('multiprocessing.synchronize')
-# import threading after _multiprocessing to raise a more revelant error
+# import threading after _multiprocessing to raise a more relevant error
 # message: "No module named _multiprocessing". _multiprocessing is not compiled
 # without thread support.
 import threading

View File

@@ -3958,7 +3958,7 @@ class Oddballs(unittest.TestCase):
 self.assertRaises(TypeError, lambda: as_date >= as_datetime)
 self.assertRaises(TypeError, lambda: as_datetime >= as_date)
-# Neverthelss, comparison should work with the base-class (date)
+# Nevertheless, comparison should work with the base-class (date)
 # projection if use of a date method is forced.
 self.assertEqual(as_date.__eq__(as_datetime), True)
 different_day = (as_date.day + 1) % 20 + 1

View File

@@ -130,8 +130,8 @@ class LockTests(test_utils.TestCase):
 def test_cancel_race(self):
 # Several tasks:
 # - A acquires the lock
-# - B is blocked in aqcuire()
-# - C is blocked in aqcuire()
+# - B is blocked in acquire()
+# - C is blocked in acquire()
 #
 # Now, concurrently:
 # - B is cancelled

View File

@@ -4,7 +4,7 @@ import test.support
 test.support.import_module('_multiprocessing')
 # Skip tests if sem_open implementation is broken.
 test.support.import_module('multiprocessing.synchronize')
-# import threading after _multiprocessing to raise a more revelant error
+# import threading after _multiprocessing to raise a more relevant error
 # message: "No module named _multiprocessing". _multiprocessing is not compiled
 # without thread support.
 test.support.import_module('threading')

View File

@@ -876,7 +876,7 @@ class ClassPropertiesAndMethods(unittest.TestCase):
 self.assertEqual(Frag().__int__(), 42)
 self.assertEqual(int(Frag()), 42)
-def test_diamond_inheritence(self):
+def test_diamond_inheritance(self):
 # Testing multiple inheritance special cases...
 class A(object):
 def spam(self): return "A"

View File

@@ -122,17 +122,17 @@ patch914575_nonascii_to1 = """
 """
 patch914575_from2 = """
-\t\tLine 1: preceeded by from:[tt] to:[ssss]
-\t\tLine 2: preceeded by from:[sstt] to:[sssst]
-\t \tLine 3: preceeded by from:[sstst] to:[ssssss]
+\t\tLine 1: preceded by from:[tt] to:[ssss]
+\t\tLine 2: preceded by from:[sstt] to:[sssst]
+\t \tLine 3: preceded by from:[sstst] to:[ssssss]
 Line 4: \thas from:[sst] to:[sss] after :
 Line 5: has from:[t] to:[ss] at end\t
 """
 patch914575_to2 = """
-Line 1: preceeded by from:[tt] to:[ssss]
-\tLine 2: preceeded by from:[sstt] to:[sssst]
-Line 3: preceeded by from:[sstst] to:[ssssss]
+Line 1: preceded by from:[tt] to:[ssss]
+\tLine 2: preceded by from:[sstt] to:[sssst]
+Line 3: preceded by from:[sstst] to:[ssssss]
 Line 4: has from:[sst] to:[sss] after :
 Line 5: has from:[t] to:[ss] at end
 """

View File

@@ -387,9 +387,9 @@
 <tbody>
 <tr><td class="diff_next" id="difflib_chg_to9__0"><a href="#difflib_chg_to9__0">f</a></td><td class="diff_header" id="from9_1">1</td><td nowrap="nowrap"></td><td class="diff_next"><a href="#difflib_chg_to9__0">f</a></td><td class="diff_header" id="to9_1">1</td><td nowrap="nowrap"></td></tr>
-<tr><td class="diff_next"><a href="#difflib_chg_to9__top">t</a></td><td class="diff_header" id="from9_2">2</td><td nowrap="nowrap"><span class="diff_chg">&nbsp;&nbsp;&nbsp;&nbsp;</span>Line&nbsp;1:&nbsp;preceeded&nbsp;by&nbsp;from:[tt]&nbsp;to:[ssss]</td><td class="diff_next"><a href="#difflib_chg_to9__top">t</a></td><td class="diff_header" id="to9_2">2</td><td nowrap="nowrap"><span class="diff_chg">&nbsp;&nbsp;&nbsp;&nbsp;</span>Line&nbsp;1:&nbsp;preceeded&nbsp;by&nbsp;from:[tt]&nbsp;to:[ssss]</td></tr>
-<tr><td class="diff_next"></td><td class="diff_header" id="from9_3">3</td><td nowrap="nowrap">&nbsp;&nbsp;<span class="diff_chg">&nbsp;&nbsp;</span>&nbsp;&nbsp;Line&nbsp;2:&nbsp;preceeded&nbsp;by&nbsp;from:[sstt]&nbsp;to:[sssst]</td><td class="diff_next"></td><td class="diff_header" id="to9_3">3</td><td nowrap="nowrap">&nbsp;&nbsp;<span class="diff_chg">&nbsp;&nbsp;</span>&nbsp;&nbsp;Line&nbsp;2:&nbsp;preceeded&nbsp;by&nbsp;from:[sstt]&nbsp;to:[sssst]</td></tr>
-<tr><td class="diff_next"></td><td class="diff_header" id="from9_4">4</td><td nowrap="nowrap">&nbsp;&nbsp;<span class="diff_chg">&nbsp;&nbsp;&nbsp;&nbsp;</span>Line&nbsp;3:&nbsp;preceeded&nbsp;by&nbsp;from:[sstst]&nbsp;to:[ssssss]</td><td class="diff_next"></td><td class="diff_header" id="to9_4">4</td><td nowrap="nowrap">&nbsp;&nbsp;<span class="diff_chg">&nbsp;&nbsp;&nbsp;&nbsp;</span>Line&nbsp;3:&nbsp;preceeded&nbsp;by&nbsp;from:[sstst]&nbsp;to:[ssssss]</td></tr>
+<tr><td class="diff_next"><a href="#difflib_chg_to9__top">t</a></td><td class="diff_header" id="from9_2">2</td><td nowrap="nowrap"><span class="diff_chg">&nbsp;&nbsp;&nbsp;&nbsp;</span>Line&nbsp;1:&nbsp;preceded&nbsp;by&nbsp;from:[tt]&nbsp;to:[ssss]</td><td class="diff_next"><a href="#difflib_chg_to9__top">t</a></td><td class="diff_header" id="to9_2">2</td><td nowrap="nowrap"><span class="diff_chg">&nbsp;&nbsp;&nbsp;&nbsp;</span>Line&nbsp;1:&nbsp;preceded&nbsp;by&nbsp;from:[tt]&nbsp;to:[ssss]</td></tr>
+<tr><td class="diff_next"></td><td class="diff_header" id="from9_3">3</td><td nowrap="nowrap">&nbsp;&nbsp;<span class="diff_chg">&nbsp;&nbsp;</span>&nbsp;&nbsp;Line&nbsp;2:&nbsp;preceded&nbsp;by&nbsp;from:[sstt]&nbsp;to:[sssst]</td><td class="diff_next"></td><td class="diff_header" id="to9_3">3</td><td nowrap="nowrap">&nbsp;&nbsp;<span class="diff_chg">&nbsp;&nbsp;</span>&nbsp;&nbsp;Line&nbsp;2:&nbsp;preceded&nbsp;by&nbsp;from:[sstt]&nbsp;to:[sssst]</td></tr>
+<tr><td class="diff_next"></td><td class="diff_header" id="from9_4">4</td><td nowrap="nowrap">&nbsp;&nbsp;<span class="diff_chg">&nbsp;&nbsp;&nbsp;&nbsp;</span>Line&nbsp;3:&nbsp;preceded&nbsp;by&nbsp;from:[sstst]&nbsp;to:[ssssss]</td><td class="diff_next"></td><td class="diff_header" id="to9_4">4</td><td nowrap="nowrap">&nbsp;&nbsp;<span class="diff_chg">&nbsp;&nbsp;&nbsp;&nbsp;</span>Line&nbsp;3:&nbsp;preceded&nbsp;by&nbsp;from:[sstst]&nbsp;to:[ssssss]</td></tr>
 <tr><td class="diff_next"></td><td class="diff_header" id="from9_5">5</td><td nowrap="nowrap">Line&nbsp;4:&nbsp;&nbsp;<span class="diff_chg">&nbsp;</span>has&nbsp;from:[sst]&nbsp;to:[sss]&nbsp;after&nbsp;:</td><td class="diff_next"></td><td class="diff_header" id="to9_5">5</td><td nowrap="nowrap">Line&nbsp;4:&nbsp;&nbsp;<span class="diff_chg">&nbsp;</span>has&nbsp;from:[sst]&nbsp;to:[sss]&nbsp;after&nbsp;:</td></tr>
 <tr><td class="diff_next"></td><td class="diff_header" id="from9_6">6</td><td nowrap="nowrap">Line&nbsp;5:&nbsp;has&nbsp;from:[t]&nbsp;to:[ss]&nbsp;at&nbsp;end<span class="diff_sub">&nbsp;</span></td><td class="diff_next"></td><td class="diff_header" id="to9_6">6</td><td nowrap="nowrap">Line&nbsp;5:&nbsp;has&nbsp;from:[t]&nbsp;to:[ss]&nbsp;at&nbsp;end</td></tr>
 </tbody>
@@ -403,9 +403,9 @@
 <tbody>
 <tr><td class="diff_next" id="difflib_chg_to10__0"><a href="#difflib_chg_to10__0">f</a></td><td class="diff_header" id="from10_1">1</td><td nowrap="nowrap"></td><td class="diff_next"><a href="#difflib_chg_to10__0">f</a></td><td class="diff_header" id="to10_1">1</td><td nowrap="nowrap"></td></tr>
-<tr><td class="diff_next"><a href="#difflib_chg_to10__top">t</a></td><td class="diff_header" id="from10_2">2</td><td nowrap="nowrap"><span class="diff_chg">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>Line&nbsp;1:&nbsp;preceeded&nbsp;by&nbsp;from:[tt]&nbsp;to:[ssss]</td><td class="diff_next"><a href="#difflib_chg_to10__top">t</a></td><td class="diff_header" id="to10_2">2</td><td nowrap="nowrap"><span class="diff_chg">&nbsp;&nbsp;&nbsp;&nbsp;</span>Line&nbsp;1:&nbsp;preceeded&nbsp;by&nbsp;from:[tt]&nbsp;to:[ssss]</td></tr>
-<tr><td class="diff_next"></td><td class="diff_header" id="from10_3">3</td><td nowrap="nowrap">&nbsp;&nbsp;<span class="diff_chg">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>&nbsp;&nbsp;&nbsp;&nbsp;Line&nbsp;2:&nbsp;preceeded&nbsp;by&nbsp;from:[sstt]&nbsp;to:[sssst]</td><td class="diff_next"></td><td class="diff_header" id="to10_3">3</td><td nowrap="nowrap">&nbsp;&nbsp;<span class="diff_chg">&nbsp;&nbsp;</span>&nbsp;&nbsp;&nbsp;&nbsp;Line&nbsp;2:&nbsp;preceeded&nbsp;by&nbsp;from:[sstt]&nbsp;to:[sssst]</td></tr>
-<tr><td class="diff_next"></td><td class="diff_header" id="from10_4">4</td><td nowrap="nowrap">&nbsp;&nbsp;<span class="diff_chg">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>Line&nbsp;3:&nbsp;preceeded&nbsp;by&nbsp;from:[sstst]&nbsp;to:[ssssss]</td><td class="diff_next"></td><td class="diff_header" id="to10_4">4</td><td nowrap="nowrap">&nbsp;&nbsp;<span class="diff_chg">&nbsp;&nbsp;&nbsp;&nbsp;</span>Line&nbsp;3:&nbsp;preceeded&nbsp;by&nbsp;from:[sstst]&nbsp;to:[ssssss]</td></tr>
+<tr><td class="diff_next"><a href="#difflib_chg_to10__top">t</a></td><td class="diff_header" id="from10_2">2</td><td nowrap="nowrap"><span class="diff_chg">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>Line&nbsp;1:&nbsp;preceded&nbsp;by&nbsp;from:[tt]&nbsp;to:[ssss]</td><td class="diff_next"><a href="#difflib_chg_to10__top">t</a></td><td class="diff_header" id="to10_2">2</td><td nowrap="nowrap"><span class="diff_chg">&nbsp;&nbsp;&nbsp;&nbsp;</span>Line&nbsp;1:&nbsp;preceded&nbsp;by&nbsp;from:[tt]&nbsp;to:[ssss]</td></tr>
+<tr><td class="diff_next"></td><td class="diff_header" id="from10_3">3</td><td nowrap="nowrap">&nbsp;&nbsp;<span class="diff_chg">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>&nbsp;&nbsp;&nbsp;&nbsp;Line&nbsp;2:&nbsp;preceded&nbsp;by&nbsp;from:[sstt]&nbsp;to:[sssst]</td><td class="diff_next"></td><td class="diff_header" id="to10_3">3</td><td nowrap="nowrap">&nbsp;&nbsp;<span class="diff_chg">&nbsp;&nbsp;</span>&nbsp;&nbsp;&nbsp;&nbsp;Line&nbsp;2:&nbsp;preceded&nbsp;by&nbsp;from:[sstt]&nbsp;to:[sssst]</td></tr>
+<tr><td class="diff_next"></td><td class="diff_header" id="from10_4">4</td><td nowrap="nowrap">&nbsp;&nbsp;<span class="diff_chg">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>Line&nbsp;3:&nbsp;preceded&nbsp;by&nbsp;from:[sstst]&nbsp;to:[ssssss]</td><td class="diff_next"></td><td class="diff_header" id="to10_4">4</td><td nowrap="nowrap">&nbsp;&nbsp;<span class="diff_chg">&nbsp;&nbsp;&nbsp;&nbsp;</span>Line&nbsp;3:&nbsp;preceded&nbsp;by&nbsp;from:[sstst]&nbsp;to:[ssssss]</td></tr>
 <tr><td class="diff_next"></td><td class="diff_header" id="from10_5">5</td><td nowrap="nowrap">Line&nbsp;4:&nbsp;&nbsp;<span class="diff_chg">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>has&nbsp;from:[sst]&nbsp;to:[sss]&nbsp;after&nbsp;:</td><td class="diff_next"></td><td class="diff_header" id="to10_5">5</td><td nowrap="nowrap">Line&nbsp;4:&nbsp;&nbsp;<span class="diff_chg">&nbsp;</span>has&nbsp;from:[sst]&nbsp;to:[sss]&nbsp;after&nbsp;:</td></tr>
 <tr><td class="diff_next"></td><td class="diff_header" id="from10_6">6</td><td nowrap="nowrap">Line&nbsp;5:&nbsp;has&nbsp;from:[t]&nbsp;to:[ss]&nbsp;at&nbsp;end<span class="diff_sub">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td><td class="diff_next"></td><td class="diff_header" id="to10_6">6</td><td nowrap="nowrap">Line&nbsp;5:&nbsp;has&nbsp;from:[t]&nbsp;to:[ss]&nbsp;at&nbsp;end</td></tr>
 </tbody>

View File

@@ -723,12 +723,12 @@ class TestMessageAPI(TestEmailBase):
 # Issue 5871: reject an attempt to embed a header inside a header value
 # (header injection attack).
-def test_embeded_header_via_Header_rejected(self):
+def test_embedded_header_via_Header_rejected(self):
 msg = Message()
 msg['Dummy'] = Header('dummy\nX-Injected-Header: test')
 self.assertRaises(errors.HeaderParseError, msg.as_string)
-def test_embeded_header_via_string_rejected(self):
+def test_embedded_header_via_string_rejected(self):
 msg = Message()
 msg['Dummy'] = 'dummy\nX-Injected-Header: test'
 self.assertRaises(errors.HeaderParseError, msg.as_string)

View File

@@ -143,7 +143,7 @@ class TestGeneratorBase:
 def test_set_mangle_from_via_policy(self):
 source = textwrap.dedent("""\
 Subject: test that
-from is mangeld in the body!
+from is mangled in the body!
 From time to time I write a rhyme.
 """)

View File

@@ -372,7 +372,7 @@ class ResolveNameTests:
 # bacon
 self.assertEqual('bacon', self.util.resolve_name('bacon', None))
-def test_aboslute_within_package(self):
+def test_absolute_within_package(self):
 # bacon in spam
 self.assertEqual('bacon', self.util.resolve_name('bacon', 'spam'))

View File

@@ -1263,7 +1263,7 @@ class IpaddrUnitTest(unittest.TestCase):
 ip4 = ipaddress.IPv4Address('1.1.1.3')
 ip5 = ipaddress.IPv4Address('1.1.1.4')
 ip6 = ipaddress.IPv4Address('1.1.1.0')
-# check that addreses are subsumed properly.
+# check that addresses are subsumed properly.
 collapsed = ipaddress.collapse_addresses(
 [ip1, ip2, ip3, ip4, ip5, ip6])
 self.assertEqual(list(collapsed),
@@ -1277,7 +1277,7 @@ class IpaddrUnitTest(unittest.TestCase):
 ip4 = ipaddress.IPv4Address('1.1.1.3')
 #ip5 = ipaddress.IPv4Interface('1.1.1.4/30')
 #ip6 = ipaddress.IPv4Interface('1.1.1.4/30')
-# check that addreses are subsumed properly.
+# check that addresses are subsumed properly.
 collapsed = ipaddress.collapse_addresses([ip1, ip2, ip3, ip4])
 self.assertEqual(list(collapsed),
 [ipaddress.IPv4Network('1.1.1.0/30')])
@@ -1291,7 +1291,7 @@ class IpaddrUnitTest(unittest.TestCase):
 # stored in no particular order b/c we want CollapseAddr to call
 # [].sort
 ip6 = ipaddress.IPv4Network('1.1.0.0/22')
-# check that addreses are subsumed properly.
+# check that addresses are subsumed properly.
 collapsed = ipaddress.collapse_addresses([ip1, ip2, ip3, ip4, ip5,
 ip6])
 self.assertEqual(list(collapsed),

View File

@@ -1,5 +1,5 @@
 """
-Test suite to check compilance with PEP 247, the standard API
+Test suite to check compliance with PEP 247, the standard API
 for hashing algorithms
 """

View File

@@ -1306,10 +1306,10 @@ class TestShutil(unittest.TestCase):
 shutil.chown(filename)
 with self.assertRaises(LookupError):
-shutil.chown(filename, user='non-exising username')
+shutil.chown(filename, user='non-existing username')
 with self.assertRaises(LookupError):
-shutil.chown(filename, group='non-exising groupname')
+shutil.chown(filename, group='non-existing groupname')
 with self.assertRaises(TypeError):
 shutil.chown(filename, b'spam')
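
The test above exercises the error handling of shutil.chown(); a small POSIX-only usage sketch (the file name is a placeholder):

    import getpass
    import shutil

    # Either a name or a numeric id is accepted for user and group.
    shutil.chown('somefile.txt', user=getpass.getuser())

    try:
        shutil.chown('somefile.txt', user='no-such-user')
    except LookupError as exc:
        # Unknown user or group names raise LookupError.
        print('lookup failed:', exc)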

View File

@@ -682,7 +682,7 @@ class ProcessTestCase(BaseTestCase):
 self.assertEqual(stdout, "banana")
 self.assertStderrEqual(stderr.encode(), b"pineapple\npear\n")
-def test_communicate_timeout_large_ouput(self):
+def test_communicate_timeout_large_output(self):
 # Test an expiring timeout while the child is outputting lots of data.
 p = subprocess.Popen([sys.executable, "-c",
 'import sys,os,time;'

View File

@@ -1,4 +1,4 @@
-"""Regresssion tests for what was in Python 2's "urllib" module"""
+"""Regression tests for what was in Python 2's "urllib" module"""
 import urllib.parse
 import urllib.request

View File

@@ -169,7 +169,7 @@ class BaseWinregTests(unittest.TestCase):
 DeleteKey(key, subkeystr)
 try:
-# Shouldnt be able to delete it twice!
+# Shouldn't be able to delete it twice!
 DeleteKey(key, subkeystr)
 self.fail("Deleting the key twice succeeded")
 except OSError:

View File

@@ -245,7 +245,7 @@ class Event:
 if self.delta == 0:
 del attrs['delta']
 # widget usually is known
-# serial and time are not very interesing
+# serial and time are not very interesting
 # keysym_num duplicates keysym
 # x_root and y_root mostly duplicate x and y
 keys = ('send_event',

View File

@@ -349,7 +349,7 @@ class TestDiscovery(unittest.TestCase):
 suite = list(loader._find_tests(abspath('/foo'), 'test*.py'))
 # We should have loaded tests from both my_package and
-# my_pacakge.test_module, and also run the load_tests hook in both.
+# my_package.test_module, and also run the load_tests hook in both.
 # (normally this would be nested TestSuites.)
 self.assertEqual(suite,
 [['my_package load_tests', [],

View File

@@ -27,7 +27,7 @@ class TestCallable(unittest.TestCase):
 self.assertIn(mock.__class__.__name__, repr(mock))
-def test_heirarchy(self):
+def test_hierarchy(self):
 self.assertTrue(issubclass(MagicMock, Mock))
 self.assertTrue(issubclass(NonCallableMagicMock, NonCallableMock))

View File

@@ -34,7 +34,7 @@ deactivate () {
 fi
 }
-# unset irrelavent variables
+# unset irrelevant variables
 deactivate nondestructive
 VIRTUAL_ENV="__VENV_DIR__"

View File

@@ -5,7 +5,7 @@
 alias deactivate 'test $?_OLD_VIRTUAL_PATH != 0 && setenv PATH "$_OLD_VIRTUAL_PATH" && unset _OLD_VIRTUAL_PATH; rehash; test $?_OLD_VIRTUAL_PROMPT != 0 && set prompt="$_OLD_VIRTUAL_PROMPT" && unset _OLD_VIRTUAL_PROMPT; unsetenv VIRTUAL_ENV; test "\!:*" != "nondestructive" && unalias deactivate'
-# Unset irrelavent variables.
+# Unset irrelevant variables.
 deactivate nondestructive
 setenv VIRTUAL_ENV "__VENV_DIR__"

View File

@@ -29,7 +29,7 @@ function deactivate -d "Exit virtualenv and return to normal shell environment"
 end
 end
-# unset irrelavent variables
+# unset irrelevant variables
 deactivate nondestructive
 set -gx VIRTUAL_ENV "__VENV_DIR__"

View File

@@ -34,7 +34,7 @@
 - (BOOL)shouldShowUI
 {
 // if this call comes before applicationDidFinishLaunching: we
-// should terminate immedeately after starting the script.
+// should terminate immediately after starting the script.
 if (!initial_action_done)
 should_terminate = YES;
 initial_action_done = YES;

View File

@@ -1131,7 +1131,7 @@ Library
 and http.client. Patch by EungJun Yi.
 - Issue #14777: tkinter may return undecoded UTF-8 bytes as a string when
-accessing the Tk clipboard. Modify clipboad_get() to first request type
+accessing the Tk clipboard. Modify clipboard_get() to first request type
 UTF8_STRING when no specific type is requested in an X11 windowing
 environment, falling back to the current default type STRING if that fails.
 Original patch by Thomas Kluyver.
@@ -5693,7 +5693,7 @@ Library
 for reading).
 - hashlib has two new constant attributes: algorithms_guaranteed and
-algorithms_avaiable that respectively list the names of hash algorithms
+algorithms_available that respectively list the names of hash algorithms
 guaranteed to exist in all Python implementations and the names of hash
 algorithms available in the current process.
@@ -7344,7 +7344,7 @@ Library
 - Issue #2846: Add support for gzip.GzipFile reading zero-padded files. Patch
 by Brian Curtin.
-- Issue #7681: Use floor division in appropiate places in the wave module.
+- Issue #7681: Use floor division in appropriate places in the wave module.
 - Issue #5372: Drop the reuse of .o files in Distutils' ccompiler (since
 Extension extra options may change the output without changing the .c
@@ -10921,7 +10921,7 @@ Platforms
 - Support for BeOS and AtheOS was removed (according to PEP 11).
-- Support for RiscOS, Irix, Tru64 was removed (alledgedly).
+- Support for RiscOS, Irix, Tru64 was removed (allegedly).
 Tools/Demos
 -----------
@@ -12912,7 +12912,7 @@ Library
 - Bug #947906: An object oriented interface has been added to the calendar
 module. It's possible to generate HTML calendar now and the module can be
 called as a script (e.g. via ``python -mcalendar``). Localized month and
-weekday names can be ouput (even if an exotic encoding is used) using
+weekday names can be output (even if an exotic encoding is used) using
 special classes that use unicode.
 Build
@@ -13295,7 +13295,7 @@ Library
 ``True`` for ``!=``, and raises ``TypeError`` for other comparison
 operators. Because datetime is a subclass of date, comparing only the
 base class (date) members can still be done, if that's desired, by
-forcing using of the approprate date method; e.g.,
+forcing using of the appropriate date method; e.g.,
 ``a_date.__eq__(a_datetime)`` is true if and only if the year, month
 and day members of ``a_date`` and ``a_datetime`` are equal.
@@ -23770,7 +23770,7 @@ Netscape on Windows/Mac).
 - copy.py: Make sure the objects returned by __getinitargs__() are
 kept alive (in the memo) to avoid a certain kind of nasty crash. (Not
-easily reproducable because it requires a later call to
+easily reproducible because it requires a later call to
 __getinitargs__() to return a tuple that happens to be allocated at
 the same address.)
@@ -27402,7 +27402,7 @@ bullet-proof, after reports of (minor) trouble on certain platforms.
 There is now a script to patch Makefile and config.c to add a new
 optional built-in module: Addmodule.sh. Read the script before using!
-Useing Addmodule.sh, all optional modules can now be configured at
+Using Addmodule.sh, all optional modules can now be configured at
 compile time using Configure.py, so there are no modules left that
 require dynamic loading.
@@ -27833,9 +27833,9 @@ SOCKET: symbolic constant definitions for socket options
 SUNAUDIODEV: symbolic constant definitions for sunaudiodef (sun only)
-SV: symbolic constat definitions for sv (sgi only)
-CD: symbolic constat definitions for cd (sgi only)
+SV: symbolic constant definitions for sv (sgi only)
+CD: symbolic constant definitions for cd (sgi only)
 New demos

View File

@@ -425,7 +425,7 @@ Library
 - Issue #27079: Fixed curses.ascii functions isblank(), iscntrl() and ispunct().
 - Issue #27294: Numerical state in the repr for Tkinter event objects is now
-represented as a compination of known flags.
+represented as a combination of known flags.
 - Issue #27177: Match objects in the re module now support index-like objects
 as group indices. Based on patches by Jeroen Demeyer and Xiang Zhang.
@@ -5662,7 +5662,7 @@ Tools/Demos
 - Issue #22120: For functions using an unsigned integer return converter,
 Argument Clinic now generates a cast to that type for the comparison
-to -1 in the generated code. (This supresses a compilation warning.)
+to -1 in the generated code. (This suppresses a compilation warning.)
 - Issue #18974: Tools/scripts/diff.py now uses argparse instead of optparse.
@@ -6762,7 +6762,7 @@ Core and Builtins
 - Issue #19466: Clear the frames of daemon threads earlier during the
 Python shutdown to call objects destructors. So "unclosed file" resource
-warnings are now corretly emitted for daemon threads.
+warnings are now correctly emitted for daemon threads.
 - Issue #19514: Deduplicate some _Py_IDENTIFIER declarations.
 Patch by Andrei Dorian Duma.
@@ -7692,7 +7692,7 @@ Library
 - Issue #18709: Fix CVE-2013-4238. The SSL module now handles NULL bytes
 inside subjectAltName correctly. Formerly the module has used OpenSSL's
-GENERAL_NAME_print() function to get the string represention of ASN.1
+GENERAL_NAME_print() function to get the string representation of ASN.1
 strings for ``rfc822Name`` (email), ``dNSName`` (DNS) and
 ``uniformResourceIdentifier`` (URI).
@@ -7785,7 +7785,7 @@ IDLE
 Documentation
 -------------
-- Issue #18743: Fix references to non-existant "StringIO" module.
+- Issue #18743: Fix references to non-existent "StringIO" module.
 - Issue #18783: Removed existing mentions of Python long type in docstrings,
 error messages and comments.
@@ -8724,7 +8724,7 @@ Library
 specifically addresses a stack misalignment issue on x86 and issues on
 some more recent platforms.
-- Issue #8862: Fixed curses cleanup when getkey is interrputed by a signal.
+- Issue #8862: Fixed curses cleanup when getkey is interrupted by a signal.
 - Issue #17443: imaplib.IMAP4_stream was using the default unbuffered IO
 in subprocess, but the imap code assumes buffered IO. In Python2 this

View File

@@ -238,7 +238,7 @@ typedef struct {
 StgDictObject function to a generic one.
 Currently, PyCFuncPtr types have 'converters' and 'checker' entries in their
-type dict. They are only used to cache attributes from other entries, whihc
+type dict. They are only used to cache attributes from other entries, which
 is wrong.
 One use case is the .value attribute that all simple types have. But some

View File

@@ -724,7 +724,7 @@ generate_hash_name_list(void)
 /*
 * This macro generates constructor function definitions for specific
 * hash algorithms. These constructors are much faster than calling
-* the generic one passing it a python string and are noticably
+* the generic one passing it a python string and are noticeably
 * faster than calling a python new() wrapper. Thats important for
 * code that wants to make hashes of a bunch of small strings.
 */

View File

@@ -90,7 +90,7 @@ iobase_unsupported(const char *message)
 return NULL;
 }
-/* Positionning */
+/* Positioning */
 PyDoc_STRVAR(iobase_seek_doc,
 "Change stream position.\n"

View File

@@ -2131,7 +2131,7 @@ raw_unicode_escape(PyObject *obj)
 Py_UCS4 ch = PyUnicode_READ(kind, data, i);
 /* Map 32-bit characters to '\Uxxxxxxxx' */
 if (ch >= 0x10000) {
-/* -1: substract 1 preallocated byte */
+/* -1: subtract 1 preallocated byte */
 p = _PyBytesWriter_Prepare(&writer, p, 10-1);
 if (p == NULL)
 goto error;
@@ -2149,7 +2149,7 @@ raw_unicode_escape(PyObject *obj)
 }
 /* Map 16-bit characters, '\\' and '\n' to '\uxxxx' */
 else if (ch >= 256 || ch == '\\' || ch == '\n') {
-/* -1: substract 1 preallocated byte */
+/* -1: subtract 1 preallocated byte */
 p = _PyBytesWriter_Prepare(&writer, p, 6-1);
 if (p == NULL)
 goto error;

View File

@@ -3798,7 +3798,7 @@ get_recursion_depth(PyObject *self, PyObject *args)
 {
 PyThreadState *tstate = PyThreadState_GET();
-/* substract one to ignore the frame of the get_recursion_depth() call */
+/* subtract one to ignore the frame of the get_recursion_depth() call */
 return PyLong_FromLong(tstate->recursion_depth - 1);
 }

View File

@@ -45,7 +45,7 @@ lock_dealloc(lockobject *self)
 /* Helper to acquire an interruptible lock with a timeout. If the lock acquire
 * is interrupted, signal handlers are run, and if they raise an exception,
 * PY_LOCK_INTR is returned. Otherwise, PY_LOCK_ACQUIRED or PY_LOCK_FAILURE
-* are returned, depending on whether the lock can be acquired withing the
+* are returned, depending on whether the lock can be acquired within the
 * timeout.
 */
 static PyLockStatus

View File

@@ -716,7 +716,7 @@ tracemalloc_realloc(void *ctx, void *ptr, size_t new_size)
 if (ADD_TRACE(ptr2, new_size) < 0) {
 /* Memory allocation failed. The error cannot be reported to
-the caller, because realloc() may already have shrinked the
+the caller, because realloc() may already have shrunk the
 memory block and so removed bytes.
 This case is very unlikely: a hash entry has just been

View File

@@ -837,7 +837,7 @@ binascii_rledecode_hqx_impl(PyObject *module, Py_buffer *data)
 if (in_byte == RUNCHAR) {
 INBYTE(in_repeat);
 /* only 1 byte will be written, but 2 bytes were preallocated:
-substract 1 byte to prevent overallocation */
+subtract 1 byte to prevent overallocation */
 writer.min_size--;
 if (in_repeat != 0) {
@@ -858,7 +858,7 @@ binascii_rledecode_hqx_impl(PyObject *module, Py_buffer *data)
 if (in_byte == RUNCHAR) {
 INBYTE(in_repeat);
 /* only 1 byte will be written, but 2 bytes were preallocated:
-substract 1 byte to prevent overallocation */
+subtract 1 byte to prevent overallocation */
 writer.min_size--;
 if ( in_repeat == 0 ) {

View File

@@ -1274,7 +1274,7 @@ count_set_bits(unsigned long n)
 /* Divide-and-conquer factorial algorithm
 *
-* Based on the formula and psuedo-code provided at:
+* Based on the formula and pseudo-code provided at:
 * http://www.luschny.de/math/factorial/binarysplitfact.html
 *
 * Faster algorithms exist, but they're more complicated and depend on

View File

@@ -6611,7 +6611,7 @@ PyInit__socket(void)
 PyModule_AddIntConstant(m, "SOMAXCONN", 5); /* Common value */
 #endif
-/* Ancilliary message types */
+/* Ancillary message types */
 #ifdef SCM_RIGHTS
 PyModule_AddIntMacro(m, SCM_RIGHTS);
 #endif

View File

@@ -1315,7 +1315,7 @@ unmarshal_code(PyObject *pathname, PyObject *data, time_t mtime)
     return code;
 }
-/* Replace any occurances of "\r\n?" in the input string with "\n".
+/* Replace any occurrences of "\r\n?" in the input string with "\n".
    This converts DOS and Mac line endings to Unix line endings.
    Also append a trailing "\n" to be compatible with
    PyParser_SimpleParseFile(). Returns a new reference. */
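For illustration, a self-contained version of that normalization might look like the sketch below; normalize_line_endings() is an invented name, and unlike the real helper it works on plain C strings and appends the trailing newline unconditionally.

#include <stdlib.h>
#include <string.h>
#include <stdio.h>

/* Turn every "\r\n" or lone "\r" into "\n" and append a trailing "\n".
   Returns a malloc'ed string the caller must free, or NULL on failure. */
static char *
normalize_line_endings(const char *src)
{
    size_t len = strlen(src);
    char *out = malloc(len + 2);          /* never grows, plus '\n' and NUL */
    if (out == NULL)
        return NULL;
    char *p = out;
    for (size_t i = 0; i < len; i++) {
        if (src[i] == '\r') {
            *p++ = '\n';
            if (i + 1 < len && src[i + 1] == '\n')
                i++;                      /* swallow the '\n' of a "\r\n" pair */
        }
        else {
            *p++ = src[i];
        }
    }
    *p++ = '\n';                          /* ensure a trailing newline */
    *p = '\0';
    return out;
}

int main(void)
{
    char *s = normalize_line_endings("dos\r\nmac\runix\n");
    printf("%s", s);                      /* three lines, then a blank line */
    free(s);
    return 0;
}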


@@ -481,7 +481,7 @@ bytearray_setslice_linear(PyByteArrayObject *self,
            If growth < 0 and lo != 0, the operation is completed, but a
            MemoryError is still raised and the memory block is not
-           shrinked. Otherwise, the bytearray is restored in its previous
+           shrunk. Otherwise, the bytearray is restored in its previous
            state and a MemoryError is raised. */
         if (lo == 0) {
             self->ob_start += growth;


@@ -247,7 +247,7 @@ PyBytes_FromFormatV(const char *format, va_list vargs)
             ++f;
         }
-        /* substract bytes preallocated for the format string
+        /* subtract bytes preallocated for the format string
            (ex: 2 for "%s") */
         writer.min_size -= (f - p + 1);
@@ -1093,7 +1093,7 @@ _PyBytes_DecodeEscapeRecode(const char **s, const char *end,
             assert(PyBytes_Check(w));
             /* Append bytes to output buffer. */
-            writer->min_size--; /* substract 1 preallocated byte */
+            writer->min_size--; /* subtract 1 preallocated byte */
             p = _PyBytesWriter_WriteBytes(writer, p,
                                           PyBytes_AS_STRING(w),
                                           PyBytes_GET_SIZE(w));


@@ -719,7 +719,7 @@ _PyCode_CheckLineNumber(PyCodeObject* co, int lasti, PyAddrPair *bounds)
     /* possible optimization: if f->f_lasti == instr_ub
        (likely to be a common case) then we already know
        instr_lb -- if we stored the matching value of p
-       somwhere we could skip the first while loop. */
+       somewhere we could skip the first while loop. */
     /* See lnotab_notes.txt for the description of
        co_lnotab. A point to remember: increments to p


@@ -694,7 +694,7 @@ search doesn't reduce the quadratic data movement costs.
 But in CPython's case, comparisons are extraordinarily expensive compared to
 moving data, and the details matter. Moving objects is just copying
-pointers. Comparisons can be arbitrarily expensive (can invoke arbitary
+pointers. Comparisons can be arbitrarily expensive (can invoke arbitrary
 user-supplied Python code), but even in simple cases (like 3 < 4) _all_
 decisions are made at runtime: what's the type of the left comparand? the
 type of the right? do they need to be coerced to a common type? where's the
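To make that run-time dispatch concrete, here is a small embedded-interpreter snippet (illustrative only, not part of this file): even 3 < 4 goes through the generic rich-comparison entry point, where the type checks and method lookup happen while the program runs.

#include <Python.h>
#include <stdio.h>

int main(void)
{
    Py_Initialize();
    PyObject *three = PyLong_FromLong(3);
    PyObject *four = PyLong_FromLong(4);
    /* type checks, coercion rules and method lookup all happen here,
       at run time, even for this trivial comparison */
    PyObject *res = PyObject_RichCompare(three, four, Py_LT);
    printf("3 < 4 -> %s\n", (res == Py_True) ? "True" : "False");
    Py_XDECREF(res);
    Py_DECREF(three);
    Py_DECREF(four);
    Py_Finalize();
    return 0;
}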


@@ -368,7 +368,7 @@ PyLong_FromDouble(double dval)
 /* Checking for overflow in PyLong_AsLong is a PITA since C doesn't define
  * anything about what happens when a signed integer operation overflows,
  * and some compilers think they're doing you a favor by being "clever"
- * then. The bit pattern for the largest postive signed long is
+ * then. The bit pattern for the largest positive signed long is
  * (unsigned long)LONG_MAX, and for the smallest negative signed long
  * it is abs(LONG_MIN), which we could write -(unsigned long)LONG_MIN.
  * However, some other compilers warn about applying unary minus to an
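A minimal sketch of the kind of range check that comment is describing, assuming the value arrives as an unsigned magnitude plus a sign; the function name and that split are inventions of this example, not the CPython code.

#include <limits.h>
#include <stdio.h>

/* Convert an unsigned magnitude plus a sign into a long, reporting overflow
   instead of relying on undefined signed-overflow behaviour. */
static int
magnitude_to_long(unsigned long mag, int negative, long *out)
{
    if (!negative) {
        if (mag > (unsigned long)LONG_MAX)
            return -1;                    /* too big for a positive long */
        *out = (long)mag;
        return 0;
    }
    if (mag <= (unsigned long)LONG_MAX) {
        *out = -(long)mag;                /* safely negatable */
        return 0;
    }
    if (mag == 0UL - (unsigned long)LONG_MIN) {
        *out = LONG_MIN;                  /* the one extra negative value */
        return 0;
    }
    return -1;                            /* magnitude exceeds |LONG_MIN| */
}

int main(void)
{
    long v;
    printf("%d\n", magnitude_to_long(0UL - (unsigned long)LONG_MIN, 1, &v)); /* 0 */
    printf("%ld\n", v);                                                      /* LONG_MIN */
    return 0;
}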


@@ -347,7 +347,7 @@ STRINGLIB(utf8_encoder)(PyObject *unicode,
             break;
         case _Py_ERROR_BACKSLASHREPLACE:
-            /* substract preallocated bytes */
+            /* subtract preallocated bytes */
             writer.min_size -= max_char_size * (endpos - startpos);
             p = backslashreplace(&writer, p,
                                  unicode, startpos, endpos);
@@ -357,7 +357,7 @@ STRINGLIB(utf8_encoder)(PyObject *unicode,
             break;
         case _Py_ERROR_XMLCHARREFREPLACE:
-            /* substract preallocated bytes */
+            /* subtract preallocated bytes */
             writer.min_size -= max_char_size * (endpos - startpos);
             p = xmlcharrefreplace(&writer, p,
                                   unicode, startpos, endpos);
@@ -387,7 +387,7 @@ STRINGLIB(utf8_encoder)(PyObject *unicode,
             if (!rep)
                 goto error;
-            /* substract preallocated bytes */
+            /* subtract preallocated bytes */
             writer.min_size -= max_char_size;
             if (PyBytes_Check(rep)) {


@@ -3792,7 +3792,7 @@ import_copyreg(void)
     /* Try to fetch cached copy of copyreg from sys.modules first in an
        attempt to avoid the import overhead. Previously this was implemented
        by storing a reference to the cached module in a static variable, but
-       this broke when multiple embeded interpreters were in use (see issue
+       this broke when multiple embedded interpreters were in use (see issue
        #17408 and #19088). */
     copyreg_module = PyDict_GetItemWithError(interp->modules, copyreg_str);
     if (copyreg_module != NULL) {


@@ -6110,7 +6110,7 @@ PyUnicode_AsUnicodeEscapeString(PyObject *unicode)
         /* Escape backslashes */
         if (ch == '\\') {
-            /* -1: substract 1 preallocated byte */
+            /* -1: subtract 1 preallocated byte */
             p = _PyBytesWriter_Prepare(&writer, p, 2-1);
             if (p == NULL)
                 goto error;
@@ -6183,7 +6183,7 @@ PyUnicode_AsUnicodeEscapeString(PyObject *unicode)
         /* Map non-printable US ASCII to '\xhh' */
         else if (ch < ' ' || ch >= 0x7F) {
-            /* -1: substract 1 preallocated byte */
+            /* -1: subtract 1 preallocated byte */
             p = _PyBytesWriter_Prepare(&writer, p, 4-1);
             if (p == NULL)
                 goto error;
@@ -6363,7 +6363,7 @@ PyUnicode_AsRawUnicodeEscapeString(PyObject *unicode)
         if (ch >= 0x10000) {
             assert(ch <= MAX_UNICODE);
-            /* -1: substract 1 preallocated byte */
+            /* -1: subtract 1 preallocated byte */
             p = _PyBytesWriter_Prepare(&writer, p, 10-1);
             if (p == NULL)
                 goto error;
@@ -6381,7 +6381,7 @@ PyUnicode_AsRawUnicodeEscapeString(PyObject *unicode)
         }
         /* Map 16-bit characters to '\uxxxx' */
         else if (ch >= 256) {
-            /* -1: substract 1 preallocated byte */
+            /* -1: subtract 1 preallocated byte */
             p = _PyBytesWriter_Prepare(&writer, p, 6-1);
             if (p == NULL)
                 goto error;
@@ -6705,7 +6705,7 @@ unicode_encode_ucs1(PyObject *unicode,
             break;
         case _Py_ERROR_BACKSLASHREPLACE:
-            /* substract preallocated bytes */
+            /* subtract preallocated bytes */
             writer.min_size -= (collend - collstart);
             str = backslashreplace(&writer, str,
                                    unicode, collstart, collend);
@@ -6715,7 +6715,7 @@ unicode_encode_ucs1(PyObject *unicode,
             break;
         case _Py_ERROR_XMLCHARREFREPLACE:
-            /* substract preallocated bytes */
+            /* subtract preallocated bytes */
             writer.min_size -= (collend - collstart);
             str = xmlcharrefreplace(&writer, str,
                                     unicode, collstart, collend);
@@ -6747,7 +6747,7 @@ unicode_encode_ucs1(PyObject *unicode,
             if (rep == NULL)
                 goto onError;
-            /* substract preallocated bytes */
+            /* subtract preallocated bytes */
             writer.min_size -= 1;
             if (PyBytes_Check(rep)) {


@@ -2090,16 +2090,16 @@ PyEval_EvalFrameEx(PyFrameObject *f, int throwflag)
         TARGET(YIELD_FROM) {
             PyObject *v = POP();
-            PyObject *reciever = TOP();
+            PyObject *receiver = TOP();
             int err;
-            if (PyGen_CheckExact(reciever) || PyCoro_CheckExact(reciever)) {
-                retval = _PyGen_Send((PyGenObject *)reciever, v);
+            if (PyGen_CheckExact(receiver) || PyCoro_CheckExact(receiver)) {
+                retval = _PyGen_Send((PyGenObject *)receiver, v);
             } else {
                 _Py_IDENTIFIER(send);
                 if (v == Py_None)
-                    retval = Py_TYPE(reciever)->tp_iternext(reciever);
+                    retval = Py_TYPE(receiver)->tp_iternext(receiver);
                 else
-                    retval = _PyObject_CallMethodIdObjArgs(reciever, &PyId_send, v, NULL);
+                    retval = _PyObject_CallMethodIdObjArgs(receiver, &PyId_send, v, NULL);
             }
             Py_DECREF(v);
             if (retval == NULL) {
@@ -2110,7 +2110,7 @@ PyEval_EvalFrameEx(PyFrameObject *f, int throwflag)
                 err = _PyGen_FetchStopIterationValue(&val);
                 if (err < 0)
                     goto error;
-                Py_DECREF(reciever);
+                Py_DECREF(receiver);
                 SET_TOP(val);
                 DISPATCH();
             }


@@ -238,7 +238,7 @@ _PyCOND_WAIT_MS(PyCOND_T *cv, PyMUTEX_T *cs, DWORD ms)
     cv->waiting++;
     PyMUTEX_UNLOCK(cs);
     /* "lost wakeup bug" would occur if the caller were interrupted here,
-     * but we are safe because we are using a semaphore wich has an internal
+     * but we are safe because we are using a semaphore which has an internal
      * count.
      */
     wait = WaitForSingleObjectEx(cv->sem, ms, FALSE);


@@ -121,7 +121,7 @@ typedef struct {
 } InternalFormatSpec;
 #if 0
-/* Occassionally useful for debugging. Should normally be commented out. */
+/* Occasionally useful for debugging. Should normally be commented out. */
 static void
 DEBUG_PRINT_FORMAT_SPEC(InternalFormatSpec *format)
 {

README

@@ -68,7 +68,7 @@ workloads, as it has profiling instructions embedded inside.
 After this instrumented version of the interpreter is built, the Makefile
 will automatically run a training workload. This is necessary in order to
 profile the interpreter execution. Note also that any output, both stdout
-and stderr, that may appear at this step is supressed.
+and stderr, that may appear at this step is suppressed.
 Finally, the last step is to rebuild the interpreter, using the information
 collected in the previous one. The end result will be a Python binary

configure

@@ -7112,7 +7112,7 @@ $as_echo "$CC" >&6; }
     # Calculate an appropriate deployment target for this build:
     # The deployment target value is used explicitly to enable certain
     # features are enabled (such as builtin libedit support for readline)
-    # through the use of Apple's Availabiliy Macros and is used as a
+    # through the use of Apple's Availability Macros and is used as a
    # component of the string returned by distutils.get_platform().
    #
    # Use the value from:


@@ -1639,7 +1639,7 @@ yes)
    # Calculate an appropriate deployment target for this build:
    # The deployment target value is used explicitly to enable certain
    # features are enabled (such as builtin libedit support for readline)
-    # through the use of Apple's Availabiliy Macros and is used as a
+    # through the use of Apple's Availability Macros and is used as a
    # component of the string returned by distutils.get_platform().
    #
    # Use the value from: