commit 873cab2890

    merge heads.

@@ -43,10 +43,10 @@ web server it's talking to uses both "server" sockets and "client" sockets.
 History
 -------
 
-Of the various forms of IPC (*Inter Process Communication*), sockets are by far
-the most popular. On any given platform, there are likely to be other forms of
-IPC that are faster, but for cross-platform communication, sockets are about the
-only game in town.
+Of the various forms of :abbr:`IPC (Inter Process Communication)`,
+sockets are by far the most popular. On any given platform, there are
+likely to be other forms of IPC that are faster, but for
+cross-platform communication, sockets are about the only game in town.
 
 They were invented in Berkeley as part of the BSD flavor of Unix. They spread
 like wildfire with the Internet. With good reason --- the combination of sockets
@@ -66,13 +66,14 @@ your browser did something like the following::
     # - the normal http port
     s.connect(("www.mcmillan-inc.com", 80))
 
-When the ``connect`` completes, the socket ``s`` can now be used to send in a
-request for the text of this page. The same socket will read the reply, and then
-be destroyed. That's right - destroyed. Client sockets are normally only used
-for one exchange (or a small set of sequential exchanges).
+When the ``connect`` completes, the socket ``s`` can be used to send
+in a request for the text of the page. The same socket will read the
+reply, and then be destroyed. That's right, destroyed. Client sockets
+are normally only used for one exchange (or a small set of sequential
+exchanges).
 
 What happens in the web server is a bit more complex. First, the web server
-creates a "server socket". ::
+creates a "server socket"::
 
     #create an INET, STREAMing socket
     serversocket = socket.socket(
@@ -96,7 +97,7 @@ Finally, the argument to ``listen`` tells the socket library that we want it to
 queue up as many as 5 connect requests (the normal max) before refusing outside
 connections. If the rest of the code is written properly, that should be plenty.
 
-OK, now we have a "server" socket, listening on port 80. Now we enter the
+Now that we have a "server" socket, listening on port 80, we can enter the
 mainloop of the web server::
 
     while True:
@@ -145,7 +146,7 @@ perhaps a signon. But that's a design decision - it's not a rule of sockets.
 
 Now there are two sets of verbs to use for communication. You can use ``send``
 and ``recv``, or you can transform your client socket into a file-like beast and
-use ``read`` and ``write``. The latter is the way Java presents their sockets.
+use ``read`` and ``write``. The latter is the way Java presents its sockets.
 I'm not going to talk about it here, except to warn you that you need to use
 ``flush`` on sockets. These are buffered "files", and a common mistake is to
 ``write`` something, and then ``read`` for a reply. Without a ``flush`` in
@@ -166,11 +167,11 @@ this connection. Ever. You may be able to send data successfully; I'll talk
 about that some on the next page.
 
 A protocol like HTTP uses a socket for only one transfer. The client sends a
-request, the reads a reply. That's it. The socket is discarded. This means that
+request, then reads a reply. That's it. The socket is discarded. This means that
 a client can detect the end of the reply by receiving 0 bytes.
 
 But if you plan to reuse your socket for further transfers, you need to realize
-that *there is no "EOT" (End of Transfer) on a socket.* I repeat: if a socket
+that *there is no* :abbr:`EOT (End of Transfer)` *on a socket.* I repeat: if a socket
 ``send`` or ``recv`` returns after handling 0 bytes, the connection has been
 broken. If the connection has *not* been broken, you may wait on a ``recv``
 forever, because the socket will *not* tell you that there's nothing more to
@@ -336,7 +337,7 @@ Use ``select``.
 
 In C, coding ``select`` is fairly complex. In Python, it's a piece of cake, but
 it's close enough to the C version that if you understand ``select`` in Python,
-you'll have little trouble with it in C. ::
+you'll have little trouble with it in C::
 
     ready_to_read, ready_to_write, in_error = \
                    select.select(
@@ -353,9 +354,9 @@ call is blocking, but you can give it a timeout. This is generally a sensible
 thing to do - give it a nice long timeout (say a minute) unless you have good
 reason to do otherwise.
 
-In return, you will get three lists. They have the sockets that are actually
+In return, you will get three lists. They contain the sockets that are actually
 readable, writable and in error. Each of these lists is a subset (possibly
-empty) of the corresponding list you passed in. And if you put a socket in more
+empty) of the corresponding list you passed in. If you put a socket in more
 than one input list, it will only be (at most) in one output list.
 
 If a socket is in the output readable list, you can be
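The three-list ``select`` pattern described in the hunk above can be exercised end to end with a pair of already-connected sockets. This is an illustrative sketch, not part of the patch; ``socketpair`` stands in for a real server's client socket and its peer.

```python
import select
import socket

# socketpair() returns two connected sockets, standing in for a server-side
# client socket and the remote end it talks to.
a, b = socket.socketpair()
b.send(b"ping")

# Pass the same socket in all three input lists; select() returns the
# subsets that are actually readable / writable / in error.
ready_to_read, ready_to_write, in_error = select.select([a], [a], [a], 60)

if a in ready_to_read:
    print(a.recv(4))  # b has already sent, so a comes back readable

a.close()
b.close()
```

The 60-second timeout follows the HOWTO's own advice: a long timeout rather than blocking forever.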
@@ -189,37 +189,105 @@ are converted to strings. The default implementation uses the internals of the
 
 .. _pprint-example:
 
-pprint Example
---------------
+Example
+-------
 
-This example demonstrates several uses of the :func:`pprint` function and its
-parameters.
+To demonstrate several uses of the :func:`pprint` function and its parameters,
+let's fetch information about a package from PyPI::
 
    >>> import json
    >>> import pprint
-   >>> tup = ('spam', ('eggs', ('lumberjack', ('knights', ('ni', ('dead',
-   ... ('parrot', ('fresh fruit',))))))))
-   >>> stuff = ['a' * 10, tup, ['a' * 30, 'b' * 30], ['c' * 20, 'd' * 20]]
-   >>> pprint.pprint(stuff)
-   ['aaaaaaaaaa',
-    ('spam',
-     ('eggs',
-      ('lumberjack',
-       ('knights', ('ni', ('dead', ('parrot', ('fresh fruit',)))))))),
-    ['aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa', 'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbb'],
-    ['cccccccccccccccccccc', 'dddddddddddddddddddd']]
-   >>> pprint.pprint(stuff, depth=3)
-   ['aaaaaaaaaa',
-    ('spam', ('eggs', (...))),
-    ['aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa', 'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbb'],
-    ['cccccccccccccccccccc', 'dddddddddddddddddddd']]
-   >>> pprint.pprint(stuff, width=60)
-   ['aaaaaaaaaa',
-    ('spam',
-     ('eggs',
-      ('lumberjack',
-       ('knights',
-        ('ni', ('dead', ('parrot', ('fresh fruit',)))))))),
-    ['aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa',
-     'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbb'],
-    ['cccccccccccccccccccc', 'dddddddddddddddddddd']]
+   >>> from urllib.request import urlopen
+   >>> with urlopen('http://pypi.python.org/pypi/configparser/json') as url:
+   ...     http_info = url.info()
+   ...     raw_data = url.read().decode(http_info.get_content_charset())
+   >>> package_data = json.loads(raw_data)
+   >>> result = {'headers': http_info.items(), 'body': package_data}
+
+In its basic form, :func:`pprint` shows the whole object::
+
+   >>> pprint.pprint(result)
+   {'body': {'info': {'_pypi_hidden': False,
+                      '_pypi_ordering': 12,
+                      'classifiers': ['Development Status :: 4 - Beta',
+                                      'Intended Audience :: Developers',
+                                      'License :: OSI Approved :: MIT License',
+                                      'Natural Language :: English',
+                                      'Operating System :: OS Independent',
+                                      'Programming Language :: Python',
+                                      'Programming Language :: Python :: 2',
+                                      'Programming Language :: Python :: 2.6',
+                                      'Programming Language :: Python :: 2.7',
+                                      'Topic :: Software Development :: Libraries',
+                                      'Topic :: Software Development :: Libraries :: Python Modules'],
+                      'download_url': 'UNKNOWN',
+                      'home_page': 'http://docs.python.org/py3k/library/configparser.html',
+                      'keywords': 'configparser ini parsing conf cfg configuration file',
+                      'license': 'MIT',
+                      'name': 'configparser',
+                      'package_url': 'http://pypi.python.org/pypi/configparser',
+                      'platform': 'any',
+                      'release_url': 'http://pypi.python.org/pypi/configparser/3.2.0r3',
+                      'requires_python': None,
+                      'stable_version': None,
+                      'summary': 'This library brings the updated configparser from Python 3.2+ to Python 2.6-2.7.',
+                      'version': '3.2.0r3'},
+             'urls': [{'comment_text': '',
+                       'downloads': 47,
+                       'filename': 'configparser-3.2.0r3.tar.gz',
+                       'has_sig': False,
+                       'md5_digest': '8500fd87c61ac0de328fc996fce69b96',
+                       'packagetype': 'sdist',
+                       'python_version': 'source',
+                       'size': 32281,
+                       'upload_time': '2011-05-10T16:28:50',
+                       'url': 'http://pypi.python.org/packages/source/c/configparser/configparser-3.2.0r3.tar.gz'}]},
+    'headers': [('Date', 'Sat, 14 May 2011 12:48:52 GMT'),
+                ('Server', 'Apache/2.2.16 (Debian)'),
+                ('Content-Disposition', 'inline'),
+                ('Connection', 'close'),
+                ('Transfer-Encoding', 'chunked'),
+                ('Content-Type', 'application/json; charset="UTF-8"')]}
+
+The result can be limited to a certain *depth* (ellipsis is used for deeper
+contents)::
+
+   >>> pprint.pprint(result, depth=3)
+   {'body': {'info': {'_pypi_hidden': False,
+                      '_pypi_ordering': 12,
+                      'classifiers': [...],
+                      'download_url': 'UNKNOWN',
+                      'home_page': 'http://docs.python.org/py3k/library/configparser.html',
+                      'keywords': 'configparser ini parsing conf cfg configuration file',
+                      'license': 'MIT',
+                      'name': 'configparser',
+                      'package_url': 'http://pypi.python.org/pypi/configparser',
+                      'platform': 'any',
+                      'release_url': 'http://pypi.python.org/pypi/configparser/3.2.0r3',
+                      'requires_python': None,
+                      'stable_version': None,
+                      'summary': 'This library brings the updated configparser from Python 3.2+ to Python 2.6-2.7.',
+                      'version': '3.2.0r3'},
+             'urls': [{...}]},
+    'headers': [('Date', 'Sat, 14 May 2011 12:48:52 GMT'),
+                ('Server', 'Apache/2.2.16 (Debian)'),
+                ('Content-Disposition', 'inline'),
+                ('Connection', 'close'),
+                ('Transfer-Encoding', 'chunked'),
+                ('Content-Type', 'application/json; charset="UTF-8"')]}
+
+Additionally, maximum *width* can be suggested. If a long object cannot be
+split, the specified width will be exceeded::
+
+   >>> pprint.pprint(result['headers'], width=30)
+   [('Date',
+     'Sat, 14 May 2011 12:48:52 GMT'),
+    ('Server',
+     'Apache/2.2.16 (Debian)'),
+    ('Content-Disposition',
+     'inline'),
+    ('Connection', 'close'),
+    ('Transfer-Encoding',
+     'chunked'),
+    ('Content-Type',
+     'application/json; charset="UTF-8"')]
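The new doc example in this hunk needs network access to PyPI. The ``depth`` and ``width`` parameters it demonstrates can also be seen offline on any nested structure; the data below is made up for illustration, shaped loosely like the PyPI record.

```python
import pprint

# Hypothetical nested data (invented for this sketch).
data = {'info': {'name': 'configparser',
                 'classifiers': ['Development Status :: 4 - Beta',
                                 'Intended Audience :: Developers'],
                 'urls': [{'filename': 'configparser-3.2.0r3.tar.gz'}]}}

# depth: containers nested deeper than this many levels are elided with '...'
pprint.pprint(data, depth=2)

# width: a suggested maximum line width; unsplittable tokens may exceed it
pprint.pprint(data, width=40)
```

With ``depth=2`` the ``classifiers`` list and ``urls`` list collapse to ``[...]`` while scalar values at the same level are kept.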
@@ -3,6 +3,8 @@ What's New in IDLE 3.2.1?
 
 *Release date: 15-May-11*
 
+- Issue #6378: Further adjust idle.bat to start associated Python
+
 - Issue #11896: Save on Close failed despite selecting "Yes" in dialog.
 
 - Issue #1028: Ctrl-space binding to show completions was causing IDLE to exit.
@@ -63,7 +65,7 @@ What's New in IDLE 2.7? (UNRELEASED, but merged into 3.1 releases above.)
 extract port from command line when warnings are present.
 
 - Tk 8.5 Text widget requires 'wordprocessor' tabstyle attr to handle
-  mixed space/tab properly. Issue 5120, patch by Guilherme Polo.
+  mixed space/tab properly. Issue 5129, patch by Guilherme Polo.
 
 - Issue #3549: On MacOS the preferences menu was not present
@@ -1,4 +1,4 @@
 @echo off
 rem Start IDLE using the appropriate Python interpreter
 set CURRDIR=%~dp0
-start "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9
+start "IDLE" "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9
@@ -5,7 +5,7 @@ import re
 import sys
 import struct
 
-from json.scanner import make_scanner
+from json import scanner
 try:
     from _json import scanstring as c_scanstring
 except ImportError:
@@ -340,7 +340,7 @@ class JSONDecoder(object):
         self.parse_array = JSONArray
         self.parse_string = scanstring
         self.memo = {}
-        self.scan_once = make_scanner(self)
+        self.scan_once = scanner.make_scanner(self)
 
 
     def decode(self, s, _w=WHITESPACE.match):
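Why the switch from ``from json.scanner import make_scanner`` to ``from json import scanner`` plus ``scanner.make_scanner(self)`` matters: an attribute looked up through the module at call time observes later replacement of that attribute (for example by tests swapping the C accelerator in or out), while a name bound once by ``from ... import`` keeps pointing at the original object. A minimal sketch with a throwaway module:

```python
import types

# A throwaway module with one function, standing in for json.scanner.
mod = types.ModuleType('mod')
mod.make_scanner = lambda: 'python version'

bound_early = mod.make_scanner      # like: from mod import make_scanner

# Later, something replaces the module attribute (as a test swapping
# implementations might).
mod.make_scanner = lambda: 'C version'

print(bound_early())        # still the originally bound function
print(mod.make_scanner())   # late lookup sees the replacement
```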
@@ -1,7 +1,46 @@
 import os
 import sys
-import unittest
 import json
 import doctest
+import unittest
+
+from test import support
+
+# import json with and without accelerations
+cjson = support.import_fresh_module('json', fresh=['_json'])
+pyjson = support.import_fresh_module('json', blocked=['_json'])
+
+# create two base classes that will be used by the other tests
+class PyTest(unittest.TestCase):
+    json = pyjson
+    loads = staticmethod(pyjson.loads)
+    dumps = staticmethod(pyjson.dumps)
+
+@unittest.skipUnless(cjson, 'requires _json')
+class CTest(unittest.TestCase):
+    if cjson is not None:
+        json = cjson
+        loads = staticmethod(cjson.loads)
+        dumps = staticmethod(cjson.dumps)
+
+# test PyTest and CTest checking if the functions come from the right module
+class TestPyTest(PyTest):
+    def test_pyjson(self):
+        self.assertEqual(self.json.scanner.make_scanner.__module__,
+                         'json.scanner')
+        self.assertEqual(self.json.decoder.scanstring.__module__,
+                         'json.decoder')
+        self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__,
+                         'json.encoder')
+
+class TestCTest(CTest):
+    def test_cjson(self):
+        self.assertEqual(self.json.scanner.make_scanner.__module__, '_json')
+        self.assertEqual(self.json.decoder.scanstring.__module__, '_json')
+        self.assertEqual(self.json.encoder.c_make_encoder.__module__, '_json')
+        self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__,
+                         '_json')
+
 
 here = os.path.dirname(__file__)
@@ -17,12 +56,11 @@ def test_suite():
     return suite
 
 def additional_tests():
-    import json
-    import json.encoder
-    import json.decoder
     suite = unittest.TestSuite()
     for mod in (json, json.encoder, json.decoder):
        suite.addTest(doctest.DocTestSuite(mod))
+    suite.addTest(TestPyTest('test_pyjson'))
+    suite.addTest(TestCTest('test_cjson'))
     return suite
 
 def main():
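The ``PyTest``/``CTest`` split introduced above is a general unittest pattern: a plain mixin class holds the test methods and never runs on its own (it does not inherit ``TestCase``), while thin subclasses bind it to each implementation. A self-contained sketch, with two toy implementations whose names are invented for illustration:

```python
import unittest

class ImplA:
    @staticmethod
    def double(x):
        return x * 2

class ImplB:
    @staticmethod
    def double(x):
        return x + x

class DoubleTests:
    # No TestCase base here, so unittest discovery never runs this alone.
    def test_double(self):
        self.assertEqual(self.impl.double(21), 42)

# Each concrete case mixes the shared tests with one implementation.
class TestImplA(DoubleTests, unittest.TestCase):
    impl = ImplA

class TestImplB(DoubleTests, unittest.TestCase):
    impl = ImplB

if __name__ == '__main__':
    unittest.main()
```

The diff's ``@unittest.skipUnless(cjson, ...)`` decorator adds one more twist: the C-bound subclass is skipped wholesale when the accelerator module is unavailable.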
@@ -1,55 +1,38 @@
 import decimal
 from unittest import TestCase
 from io import StringIO
-from contextlib import contextmanager
-
-import json
-import json.decoder
-import json.scanner
 from collections import OrderedDict
-
-
-@contextmanager
-def use_python_scanner():
-    py_scanner = json.scanner.py_make_scanner
-    old_scanner = json.decoder.make_scanner
-    json.decoder.make_scanner = py_scanner
-    try:
-        yield
-    finally:
-        json.decoder.make_scanner = old_scanner
-
-
-class TestDecode(TestCase):
+from test.json_tests import PyTest, CTest
+
+
+class TestDecode:
     def test_decimal(self):
-        rval = json.loads('1.1', parse_float=decimal.Decimal)
+        rval = self.loads('1.1', parse_float=decimal.Decimal)
         self.assertTrue(isinstance(rval, decimal.Decimal))
         self.assertEqual(rval, decimal.Decimal('1.1'))
 
     def test_float(self):
-        rval = json.loads('1', parse_int=float)
+        rval = self.loads('1', parse_int=float)
         self.assertTrue(isinstance(rval, float))
         self.assertEqual(rval, 1.0)
 
     def test_empty_objects(self):
-        self.assertEqual(json.loads('{}'), {})
-        self.assertEqual(json.loads('[]'), [])
-        self.assertEqual(json.loads('""'), "")
+        self.assertEqual(self.loads('{}'), {})
+        self.assertEqual(self.loads('[]'), [])
+        self.assertEqual(self.loads('""'), "")
 
     def test_object_pairs_hook(self):
         s = '{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}'
         p = [("xkd", 1), ("kcw", 2), ("art", 3), ("hxm", 4),
              ("qrt", 5), ("pad", 6), ("hoy", 7)]
-        self.assertEqual(json.loads(s), eval(s))
-        self.assertEqual(json.loads(s, object_pairs_hook = lambda x: x), p)
-        self.assertEqual(json.load(StringIO(s),
-                                   object_pairs_hook=lambda x: x), p)
-        od = json.loads(s, object_pairs_hook = OrderedDict)
+        self.assertEqual(self.loads(s), eval(s))
+        self.assertEqual(self.loads(s, object_pairs_hook = lambda x: x), p)
+        self.assertEqual(self.json.load(StringIO(s),
+                                        object_pairs_hook=lambda x: x), p)
+        od = self.loads(s, object_pairs_hook = OrderedDict)
         self.assertEqual(od, OrderedDict(p))
         self.assertEqual(type(od), OrderedDict)
         # the object_pairs_hook takes priority over the object_hook
-        self.assertEqual(json.loads(s,
-                                    object_pairs_hook = OrderedDict,
+        self.assertEqual(self.loads(s, object_pairs_hook = OrderedDict,
                                     object_hook = lambda x: None),
                          OrderedDict(p))
@@ -57,7 +40,7 @@ class TestDecode(TestCase):
         # Several optimizations were made that skip over calls to
         # the whitespace regex, so this test is designed to try and
         # exercise the uncommon cases. The array cases are already covered.
-        rval = json.loads('{ "key" : "value" , "k":"v" }')
+        rval = self.loads('{ "key" : "value" , "k":"v" }')
         self.assertEqual(rval, {"key":"value", "k":"v"})
 
     def check_keys_reuse(self, source, loads):
@@ -68,7 +51,9 @@ class TestDecode(TestCase):
 
     def test_keys_reuse(self):
         s = '[{"a_key": 1, "b_\xe9": 2}, {"a_key": 3, "b_\xe9": 4}]'
-        self.check_keys_reuse(s, json.loads)
-        # Disabled: the pure Python version of json simply doesn't work
-        with use_python_scanner():
-            self.check_keys_reuse(s, json.decoder.JSONDecoder().decode)
+        self.check_keys_reuse(s, self.loads)
+        self.check_keys_reuse(s, self.json.decoder.JSONDecoder().decode)
+
+
+class TestPyDecode(TestDecode, PyTest): pass
+class TestCDecode(TestDecode, CTest): pass
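For reference, the hook behaviour ``test_object_pairs_hook`` exercises: ``object_pairs_hook`` is called with the decoded ``(key, value)`` pairs in document order, and whatever it returns replaces the ``dict`` that would otherwise have been built. A quick illustration outside the test harness:

```python
import json
from collections import OrderedDict

s = '{"b": 1, "a": 2}'

# The hook receives the pairs in the order they appear in the document ...
pairs = json.loads(s, object_pairs_hook=list)
print(pairs)  # [('b', 1), ('a', 2)]

# ... so OrderedDict preserves the source ordering.
od = json.loads(s, object_pairs_hook=OrderedDict)
print(list(od))
```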
@@ -1,9 +1,12 @@
-from unittest import TestCase
+from test.json_tests import PyTest, CTest
 
-import json
 
-class TestDefault(TestCase):
+class TestDefault:
     def test_default(self):
         self.assertEqual(
-            json.dumps(type, default=repr),
-            json.dumps(repr(type)))
+            self.dumps(type, default=repr),
+            self.dumps(repr(type)))
+
+
+class TestPyDefault(TestDefault, PyTest): pass
+class TestCDefault(TestDefault, CTest): pass
@@ -1,21 +1,24 @@
-from unittest import TestCase
 from io import StringIO
+from test.json_tests import PyTest, CTest
 
-import json
 
-class TestDump(TestCase):
+class TestDump:
     def test_dump(self):
         sio = StringIO()
-        json.dump({}, sio)
+        self.json.dump({}, sio)
         self.assertEqual(sio.getvalue(), '{}')
 
     def test_dumps(self):
-        self.assertEqual(json.dumps({}), '{}')
+        self.assertEqual(self.dumps({}), '{}')
 
     def test_encode_truefalse(self):
-        self.assertEqual(json.dumps(
+        self.assertEqual(self.dumps(
              {True: False, False: True}, sort_keys=True),
              '{"false": true, "true": false}')
-        self.assertEqual(json.dumps(
+        self.assertEqual(self.dumps(
              {2: 3.0, 4.0: 5, False: 1, 6: True}, sort_keys=True),
              '{"false": 1, "2": 3.0, "4.0": 5, "6": true}')
+
+
+class TestPyDump(TestDump, PyTest): pass
+class TestCDump(TestDump, CTest): pass
@@ -1,8 +1,6 @@
-from unittest import TestCase
-
-import json.encoder
 from json import dumps
 from collections import OrderedDict
+from test.json_tests import PyTest, CTest
 
 
 CASES = [
     ('/\\"\ucafe\ubabe\uab98\ufcde\ubcda\uef4a\x08\x0c\n\r\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?', '"/\\\\\\"\\ucafe\\ubabe\\uab98\\ufcde\\ubcda\\uef4a\\b\\f\\n\\r\\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?"'),
@@ -21,19 +19,11 @@ CASES = [
     ('\u0123\u4567\u89ab\ucdef\uabcd\uef4a', '"\\u0123\\u4567\\u89ab\\ucdef\\uabcd\\uef4a"'),
 ]
 
-class TestEncodeBaseStringAscii(TestCase):
-    def test_py_encode_basestring_ascii(self):
-        self._test_encode_basestring_ascii(json.encoder.py_encode_basestring_ascii)
-
-    def test_c_encode_basestring_ascii(self):
-        if not json.encoder.c_encode_basestring_ascii:
-            return
-        self._test_encode_basestring_ascii(json.encoder.c_encode_basestring_ascii)
-
-    def _test_encode_basestring_ascii(self, encode_basestring_ascii):
-        fname = encode_basestring_ascii.__name__
+class TestEncodeBasestringAscii:
+    def test_encode_basestring_ascii(self):
+        fname = self.json.encoder.encode_basestring_ascii.__name__
         for input_string, expect in CASES:
-            result = encode_basestring_ascii(input_string)
+            result = self.json.encoder.encode_basestring_ascii(input_string)
             self.assertEqual(result, expect,
                 '{0!r} != {1!r} for {2}({3!r})'.format(
                     result, expect, fname, input_string))
@@ -41,10 +31,14 @@ class TestEncodeBaseStringAscii(TestCase):
     def test_ordered_dict(self):
         # See issue 6105
         items = [('one', 1), ('two', 2), ('three', 3), ('four', 4), ('five', 5)]
-        s = json.dumps(OrderedDict(items))
+        s = self.dumps(OrderedDict(items))
         self.assertEqual(s, '{"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}')
 
     def test_sorted_dict(self):
         items = [('one', 1), ('two', 2), ('three', 3), ('four', 4), ('five', 5)]
-        s = json.dumps(dict(items), sort_keys=True)
+        s = self.dumps(dict(items), sort_keys=True)
         self.assertEqual(s, '{"five": 5, "four": 4, "one": 1, "three": 3, "two": 2}')
+
+
+class TestPyEncodeBasestringAscii(TestEncodeBasestringAscii, PyTest): pass
+class TestCEncodeBasestringAscii(TestEncodeBasestringAscii, CTest): pass
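The ``CASES`` table above pins down the escaping done by ``encode_basestring_ascii``; at the public API level the same behaviour is reachable through ``dumps`` and its ``ensure_ascii`` flag, which defaults to true:

```python
import json

# With the default ensure_ascii=True, non-ASCII characters are \uXXXX-escaped
# and control characters get the short escapes, as in the CASES table.
print(json.dumps('caf\u00e9\n'))

# With ensure_ascii=False, non-ASCII characters pass through unescaped
# (control characters are still escaped).
print(json.dumps('caf\u00e9\n', ensure_ascii=False))
```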
@@ -1,6 +1,4 @@
-from unittest import TestCase
-
-import json
+from test.json_tests import PyTest, CTest
 
 # Fri Dec 30 18:57:26 2005
 JSONDOCS = [
@@ -61,15 +59,15 @@ SKIPS = {
     18: "spec doesn't specify any nesting limitations",
 }
 
-class TestFail(TestCase):
+class TestFail:
     def test_failures(self):
         for idx, doc in enumerate(JSONDOCS):
             idx = idx + 1
             if idx in SKIPS:
-                json.loads(doc)
+                self.loads(doc)
                 continue
             try:
-                json.loads(doc)
+                self.loads(doc)
             except ValueError:
                 pass
             else:
@@ -79,7 +77,11 @@ class TestFail(TestCase):
         data = {'a' : 1, (1, 2) : 2}
 
         #This is for c encoder
-        self.assertRaises(TypeError, json.dumps, data)
+        self.assertRaises(TypeError, self.dumps, data)
 
         #This is for python encoder
-        self.assertRaises(TypeError, json.dumps, data, indent=True)
+        self.assertRaises(TypeError, self.dumps, data, indent=True)
+
+
+class TestPyFail(TestFail, PyTest): pass
+class TestCFail(TestFail, CTest): pass
@@ -1,15 +1,18 @@
 import math
-from unittest import TestCase
+from test.json_tests import PyTest, CTest
 
-import json
 
-class TestFloat(TestCase):
+class TestFloat:
     def test_floats(self):
         for num in [1617161771.7650001, math.pi, math.pi**100, math.pi**-100, 3.1]:
-            self.assertEqual(float(json.dumps(num)), num)
-            self.assertEqual(json.loads(json.dumps(num)), num)
+            self.assertEqual(float(self.dumps(num)), num)
+            self.assertEqual(self.loads(self.dumps(num)), num)
 
     def test_ints(self):
         for num in [1, 1<<32, 1<<64]:
-            self.assertEqual(json.dumps(num), str(num))
-            self.assertEqual(int(json.dumps(num)), num)
+            self.assertEqual(self.dumps(num), str(num))
+            self.assertEqual(int(self.dumps(num)), num)
+
+
+class TestPyFloat(TestFloat, PyTest): pass
+class TestCFloat(TestFloat, CTest): pass
@@ -1,10 +1,9 @@
-from unittest import TestCase
-
-import json
 import textwrap
 from io import StringIO
+from test.json_tests import PyTest, CTest
 
-class TestIndent(TestCase):
+
+class TestIndent:
     def test_indent(self):
         h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth',
              {'nifty': 87}, {'field': 'yes', 'morefield': False} ]
@@ -30,14 +29,13 @@ class TestIndent(TestCase):
         \t}
         ]""")
 
+        d1 = self.dumps(h)
+        d2 = self.dumps(h, indent=2, sort_keys=True, separators=(',', ': '))
+        d3 = self.dumps(h, indent='\t', sort_keys=True, separators=(',', ': '))
 
-        d1 = json.dumps(h)
-        d2 = json.dumps(h, indent=2, sort_keys=True, separators=(',', ': '))
-        d3 = json.dumps(h, indent='\t', sort_keys=True, separators=(',', ': '))
-
-        h1 = json.loads(d1)
-        h2 = json.loads(d2)
-        h3 = json.loads(d3)
+        h1 = self.loads(d1)
+        h2 = self.loads(d2)
+        h3 = self.loads(d3)
 
         self.assertEqual(h1, h)
         self.assertEqual(h2, h)
@@ -48,14 +46,18 @@ class TestIndent(TestCase):
     def test_indent0(self):
         h = {3: 1}
         def check(indent, expected):
-            d1 = json.dumps(h, indent=indent)
+            d1 = self.dumps(h, indent=indent)
             self.assertEqual(d1, expected)
 
             sio = StringIO()
-            json.dump(h, sio, indent=indent)
+            self.json.dump(h, sio, indent=indent)
             self.assertEqual(sio.getvalue(), expected)
 
         # indent=0 should emit newlines
         check(0, '{\n"3": 1\n}')
         # indent=None is more compact
         check(None, '{"3": 1}')
+
+
+class TestPyIndent(TestIndent, PyTest): pass
+class TestCIndent(TestIndent, CTest): pass
@@ -1,6 +1,5 @@
-from unittest import TestCase
+from test.json_tests import PyTest, CTest
 
-import json
 
 # from http://json.org/JSON_checker/test/pass1.json
 JSON = r'''
@@ -62,15 +61,19 @@ JSON = r'''
 ,"rosebud"]
 '''
 
-class TestPass1(TestCase):
+class TestPass1:
     def test_parse(self):
         # test in/out equivalence and parsing
-        res = json.loads(JSON)
-        out = json.dumps(res)
-        self.assertEqual(res, json.loads(out))
+        res = self.loads(JSON)
+        out = self.dumps(res)
+        self.assertEqual(res, self.loads(out))
         try:
-            json.dumps(res, allow_nan=False)
+            self.dumps(res, allow_nan=False)
         except ValueError:
             pass
         else:
             self.fail("23456789012E666 should be out of range")
+
+
+class TestPyPass1(TestPass1, PyTest): pass
+class TestCPass1(TestPass1, CTest): pass
@@ -1,14 +1,18 @@
-from unittest import TestCase
-import json
+from test.json_tests import PyTest, CTest
 
+
 # from http://json.org/JSON_checker/test/pass2.json
 JSON = r'''
 [[[[[[[[[[[[[[[[[[["Not too deep"]]]]]]]]]]]]]]]]]]]
 '''
 
-class TestPass2(TestCase):
+class TestPass2:
     def test_parse(self):
         # test in/out equivalence and parsing
-        res = json.loads(JSON)
-        out = json.dumps(res)
-        self.assertEqual(res, json.loads(out))
+        res = self.loads(JSON)
+        out = self.dumps(res)
+        self.assertEqual(res, self.loads(out))
+
+
+class TestPyPass2(TestPass2, PyTest): pass
+class TestCPass2(TestPass2, CTest): pass
@@ -1,6 +1,5 @@
-from unittest import TestCase
+from test.json_tests import PyTest, CTest
 
-import json
 
 # from http://json.org/JSON_checker/test/pass3.json
 JSON = r'''
@@ -12,9 +11,14 @@ JSON = r'''
 }
 '''
 
-class TestPass3(TestCase):
+
+class TestPass3:
     def test_parse(self):
         # test in/out equivalence and parsing
-        res = json.loads(JSON)
-        out = json.dumps(res)
-        self.assertEqual(res, json.loads(out))
+        res = self.loads(JSON)
+        out = self.dumps(res)
+        self.assertEqual(res, self.loads(out))
+
+
+class TestPyPass3(TestPass3, PyTest): pass
+class TestCPass3(TestPass3, CTest): pass
@@ -1,33 +1,16 @@
-from unittest import TestCase
+from test.json_tests import PyTest, CTest
 
-import json
 
 class JSONTestObject:
     pass
 
 
-class RecursiveJSONEncoder(json.JSONEncoder):
-    recurse = False
-    def default(self, o):
-        if o is JSONTestObject:
-            if self.recurse:
-                return [JSONTestObject]
-            else:
-                return 'JSONTestObject'
-        return json.JSONEncoder.default(o)
-
-class EndlessJSONEncoder(json.JSONEncoder):
-    def default(self, o):
-        """If check_circular is False, this will keep adding another list."""
-        return [o]
-
-
-class TestRecursion(TestCase):
+class TestRecursion:
     def test_listrecursion(self):
         x = []
         x.append(x)
         try:
-            json.dumps(x)
+            self.dumps(x)
         except ValueError:
             pass
         else:
@@ -36,7 +19,7 @@ class TestRecursion(TestCase):
         y = [x]
         x.append(y)
         try:
-            json.dumps(x)
+            self.dumps(x)
         except ValueError:
             pass
         else:
@@ -44,13 +27,13 @@ class TestRecursion(TestCase):
         y = []
         x = [y, y]
         # ensure that the marker is cleared
-        json.dumps(x)
+        self.dumps(x)
 
     def test_dictrecursion(self):
         x = {}
         x["test"] = x
         try:
-            json.dumps(x)
+            self.dumps(x)
         except ValueError:
             pass
         else:
@@ -58,9 +41,19 @@ class TestRecursion(TestCase):
         x = {}
         y = {"a": x, "b": x}
         # ensure that the marker is cleared
-        json.dumps(x)
+        self.dumps(x)
 
     def test_defaultrecursion(self):
+        class RecursiveJSONEncoder(self.json.JSONEncoder):
+            recurse = False
+            def default(self, o):
+                if o is JSONTestObject:
+                    if self.recurse:
+                        return [JSONTestObject]
+                    else:
+                        return 'JSONTestObject'
+                return pyjson.JSONEncoder.default(o)
+
         enc = RecursiveJSONEncoder()
         self.assertEqual(enc.encode(JSONTestObject), '"JSONTestObject"')
         enc.recurse = True
@@ -76,11 +69,11 @@ class TestRecursion(TestCase):
         # test that loading highly-nested objects doesn't segfault when C
         # accelerations are used. See #12017
         with self.assertRaises(RuntimeError):
-            json.loads('{"a":' * 100000 + '1' + '}' * 100000)
+            self.loads('{"a":' * 100000 + '1' + '}' * 100000)
         with self.assertRaises(RuntimeError):
-            json.loads('{"a":' * 100000 + '[1]' + '}' * 100000)
+            self.loads('{"a":' * 100000 + '[1]' + '}' * 100000)
         with self.assertRaises(RuntimeError):
-            json.loads('[' * 100000 + '1' + ']' * 100000)
+            self.loads('[' * 100000 + '1' + ']' * 100000)
 
     def test_highly_nested_objects_encoding(self):
         # See #12051
@@ -88,11 +81,20 @@ class TestRecursion(TestCase):
         for x in range(100000):
             l, d = [l], {'k':d}
         with self.assertRaises(RuntimeError):
-            json.dumps(l)
+            self.dumps(l)
         with self.assertRaises(RuntimeError):
-            json.dumps(d)
+            self.dumps(d)
 
     def test_endless_recursion(self):
         # See #12051
+        class EndlessJSONEncoder(self.json.JSONEncoder):
+            def default(self, o):
+                """If check_circular is False, this will keep adding another list."""
+                return [o]
+
         with self.assertRaises(RuntimeError):
             EndlessJSONEncoder(check_circular=False).encode(5j)
+
+
+class TestPyRecursion(TestRecursion, PyTest): pass
+class TestCRecursion(TestRecursion, CTest): pass
@@ -1,24 +1,10 @@
 import sys
-from unittest import TestCase, skipUnless
-
-import json
-import json.decoder
-
-try:
-    import _json
-except ImportError:
-    _json = None
-
-class TestScanString(TestCase):
-    def test_py_scanstring(self):
-        self._test_scanstring(json.decoder.py_scanstring)
-
-    @skipUnless(_json, 'test requires the _json module')
-    def test_c_scanstring(self):
-        if json.decoder.c_scanstring is not None:
-            self._test_scanstring(json.decoder.c_scanstring)
-
-    def _test_scanstring(self, scanstring):
+from test.json_tests import PyTest, CTest
+
+
+class TestScanstring:
+    def test_scanstring(self):
+        scanstring = self.json.decoder.scanstring
         self.assertEqual(
             scanstring('"z\\ud834\\udd20x"', 1, True),
             ('z\U0001d120x', 16))

@@ -109,4 +95,9 @@ class TestScanString(TestCase):
             ('Bad value', 12))

     def test_overflow(self):
-        self.assertRaises(OverflowError, json.decoder.scanstring, b"xxx", sys.maxsize+1)
+        with self.assertRaises(OverflowError):
+            self.json.decoder.scanstring(b"xxx", sys.maxsize+1)
+
+
+class TestPyScanstring(TestScanstring, PyTest): pass
+class TestCScanstring(TestScanstring, CTest): pass

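The `scanstring(s, end, strict)` helper exercised by these tests takes the index just past an opening double quote and returns the decoded string together with the index one past the closing quote; the escaped surrogate pair `\ud834\udd20` is joined into a single non-BMP code point. A quick standalone illustration against the stdlib decoder:

```python
import json.decoder

# Scanning starts at index 1, i.e. just after the opening double quote.
s = '"z\\ud834\\udd20x"'
value, end = json.decoder.scanstring(s, 1, True)
# value is the decoded string; end is the index one past the closing quote.
```

The expected results here are the same pair the test asserts: `'z\U0001d120x'` and `16`.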
@@ -1,10 +1,8 @@
 import textwrap
-from unittest import TestCase
-
-import json
+from test.json_tests import PyTest, CTest


-class TestSeparators(TestCase):
+class TestSeparators:
     def test_separators(self):
         h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth',
              {'nifty': 87}, {'field': 'yes', 'morefield': False} ]

@@ -31,12 +29,16 @@ class TestSeparators(TestCase):
         ]""")


-        d1 = json.dumps(h)
-        d2 = json.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : '))
+        d1 = self.dumps(h)
+        d2 = self.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : '))

-        h1 = json.loads(d1)
-        h2 = json.loads(d2)
+        h1 = self.loads(d1)
+        h2 = self.loads(d2)

         self.assertEqual(h1, h)
         self.assertEqual(h2, h)
         self.assertEqual(d2, expect)
+
+
+class TestPySeparators(TestSeparators, PyTest): pass
+class TestCSeparators(TestSeparators, CTest): pass

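The separators test relies on `dumps(..., separators=(item_separator, key_separator))` producing output that still round-trips through `loads()`, since the extra whitespace around the punctuation is legal JSON whitespace. A minimal sketch of that behavior with the stdlib module:

```python
import json

data = {'nifty': 87, 'field': 'yes'}

# separators=(item_separator, key_separator); the unusual whitespace placement
# (' ,' and ' : ') is still valid JSON, so the text parses back unchanged.
text = json.dumps(data, indent=2, sort_keys=True, separators=(' ,', ' : '))
roundtripped = json.loads(text)
```

The custom key separator shows up verbatim in the output, and the parsed value equals the original dict.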
@@ -1,29 +1,24 @@
-from unittest import TestCase, skipUnless
-
-from json import decoder, encoder, scanner
-
-try:
-    import _json
-except ImportError:
-    _json = None
-
-@skipUnless(_json, 'test requires the _json module')
-class TestSpeedups(TestCase):
+from test.json_tests import CTest
+
+
+class TestSpeedups(CTest):
     def test_scanstring(self):
-        self.assertEqual(decoder.scanstring.__module__, "_json")
-        self.assertIs(decoder.scanstring, decoder.c_scanstring)
+        self.assertEqual(self.json.decoder.scanstring.__module__, "_json")
+        self.assertIs(self.json.decoder.scanstring, self.json.decoder.c_scanstring)

     def test_encode_basestring_ascii(self):
-        self.assertEqual(encoder.encode_basestring_ascii.__module__, "_json")
-        self.assertIs(encoder.encode_basestring_ascii,
-                      encoder.c_encode_basestring_ascii)
+        self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__,
+                         "_json")
+        self.assertIs(self.json.encoder.encode_basestring_ascii,
+                      self.json.encoder.c_encode_basestring_ascii)

-class TestDecode(TestCase):
+
+class TestDecode(CTest):
     def test_make_scanner(self):
-        self.assertRaises(AttributeError, scanner.c_make_scanner, 1)
+        self.assertRaises(AttributeError, self.json.scanner.c_make_scanner, 1)

     def test_make_encoder(self):
-        self.assertRaises(TypeError, encoder.c_make_encoder,
+        self.assertRaises(TypeError, self.json.encoder.c_make_encoder,
             (True, False),
             b"\xCD\x7D\x3D\x4E\x12\x4C\xF9\x79\xD7\x52\xBA\x82\xF2\x27\x4A\x7D\xA0\xCA\x75",
             None)

@@ -1,73 +1,75 @@
-from unittest import TestCase
-
-import json
 from collections import OrderedDict
+from test.json_tests import PyTest, CTest

-class TestUnicode(TestCase):
+
+class TestUnicode:
     # test_encoding1 and test_encoding2 from 2.x are irrelevant (only str
     # is supported as input, not bytes).

     def test_encoding3(self):
         u = '\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}'
-        j = json.dumps(u)
+        j = self.dumps(u)
         self.assertEqual(j, '"\\u03b1\\u03a9"')

     def test_encoding4(self):
         u = '\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}'
-        j = json.dumps([u])
+        j = self.dumps([u])
         self.assertEqual(j, '["\\u03b1\\u03a9"]')

     def test_encoding5(self):
         u = '\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}'
-        j = json.dumps(u, ensure_ascii=False)
+        j = self.dumps(u, ensure_ascii=False)
         self.assertEqual(j, '"{0}"'.format(u))

     def test_encoding6(self):
         u = '\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}'
-        j = json.dumps([u], ensure_ascii=False)
+        j = self.dumps([u], ensure_ascii=False)
         self.assertEqual(j, '["{0}"]'.format(u))

     def test_big_unicode_encode(self):
         u = '\U0001d120'
-        self.assertEqual(json.dumps(u), '"\\ud834\\udd20"')
-        self.assertEqual(json.dumps(u, ensure_ascii=False), '"\U0001d120"')
+        self.assertEqual(self.dumps(u), '"\\ud834\\udd20"')
+        self.assertEqual(self.dumps(u, ensure_ascii=False), '"\U0001d120"')

     def test_big_unicode_decode(self):
         u = 'z\U0001d120x'
-        self.assertEqual(json.loads('"' + u + '"'), u)
-        self.assertEqual(json.loads('"z\\ud834\\udd20x"'), u)
+        self.assertEqual(self.loads('"' + u + '"'), u)
+        self.assertEqual(self.loads('"z\\ud834\\udd20x"'), u)

     def test_unicode_decode(self):
         for i in range(0, 0xd7ff):
             u = chr(i)
             s = '"\\u{0:04x}"'.format(i)
-            self.assertEqual(json.loads(s), u)
+            self.assertEqual(self.loads(s), u)

     def test_unicode_preservation(self):
-        self.assertEqual(type(json.loads('""')), str)
-        self.assertEqual(type(json.loads('"a"')), str)
-        self.assertEqual(type(json.loads('["a"]')[0]), str)
+        self.assertEqual(type(self.loads('""')), str)
+        self.assertEqual(type(self.loads('"a"')), str)
+        self.assertEqual(type(self.loads('["a"]')[0]), str)

     def test_bytes_encode(self):
-        self.assertRaises(TypeError, json.dumps, b"hi")
-        self.assertRaises(TypeError, json.dumps, [b"hi"])
+        self.assertRaises(TypeError, self.dumps, b"hi")
+        self.assertRaises(TypeError, self.dumps, [b"hi"])

     def test_bytes_decode(self):
-        self.assertRaises(TypeError, json.loads, b'"hi"')
-        self.assertRaises(TypeError, json.loads, b'["hi"]')
-
+        self.assertRaises(TypeError, self.loads, b'"hi"')
+        self.assertRaises(TypeError, self.loads, b'["hi"]')

     def test_object_pairs_hook_with_unicode(self):
         s = '{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}'
         p = [("xkd", 1), ("kcw", 2), ("art", 3), ("hxm", 4),
              ("qrt", 5), ("pad", 6), ("hoy", 7)]
-        self.assertEqual(json.loads(s), eval(s))
-        self.assertEqual(json.loads(s, object_pairs_hook = lambda x: x), p)
-        od = json.loads(s, object_pairs_hook = OrderedDict)
+        self.assertEqual(self.loads(s), eval(s))
+        self.assertEqual(self.loads(s, object_pairs_hook = lambda x: x), p)
+        od = self.loads(s, object_pairs_hook = OrderedDict)
         self.assertEqual(od, OrderedDict(p))
         self.assertEqual(type(od), OrderedDict)
         # the object_pairs_hook takes priority over the object_hook
-        self.assertEqual(json.loads(s,
-                                    object_pairs_hook = OrderedDict,
+        self.assertEqual(self.loads(s, object_pairs_hook = OrderedDict,
                                     object_hook = lambda x: None),
                          OrderedDict(p))
+
+
+class TestPyUnicode(TestUnicode, PyTest): pass
+class TestCUnicode(TestUnicode, CTest): pass

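The encoding tests above hinge on the `ensure_ascii` flag: by default every non-ASCII character is emitted as a `\uXXXX` escape, while `ensure_ascii=False` passes the characters through verbatim, and both forms decode to the same string. A minimal standalone check:

```python
import json

u = '\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}'

escaped = json.dumps(u)                       # default ensure_ascii=True
verbatim = json.dumps(u, ensure_ascii=False)  # characters kept as-is
```

Here `escaped` is the pure-ASCII `'"\\u03b1\\u03a9"'` form the tests assert, and `verbatim` is just the string wrapped in quotes.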
@@ -37,6 +37,7 @@ __all__ = [
     "findfile", "sortdict", "check_syntax_error", "open_urlresource",
     "check_warnings", "CleanImport", "EnvironmentVarGuard",
     "TransientResource", "captured_output", "captured_stdout",
+    "captured_stdin", "captured_stderr",
     "time_out", "socket_peer_reset", "ioerror_peer_reset",
     "run_with_locale", 'temp_umask', "transient_internet",
     "set_memlimit", "bigmemtest", "bigaddrspacetest", "BasicTestRunner",

@@ -92,19 +93,15 @@ def import_module(name, deprecated=False):
 def _save_and_remove_module(name, orig_modules):
     """Helper function to save and remove a module from sys.modules

-    Return True if the module was in sys.modules, False otherwise.
     Raise ImportError if the module can't be imported."""
-    saved = True
-    try:
-        orig_modules[name] = sys.modules[name]
-    except KeyError:
-        # try to import the module and raise an error if it can't be imported
-        if name not in sys.modules:
-            __import__(name)
-            saved = False
-    else:
-        del sys.modules[name]
-    return saved
+    # try to import the module and raise an error if it can't be imported
+    if name not in sys.modules:
+        __import__(name)
+    for modname in list(sys.modules):
+        if modname == name or modname.startswith(name + '.'):
+            orig_modules[modname] = sys.modules[modname]
+            del sys.modules[modname]

 def _save_and_block_module(name, orig_modules):
     """Helper function to save and block a module in sys.modules

@@ -132,8 +129,8 @@ def import_fresh_module(name, fresh=(), blocked=(), deprecated=False):

     If deprecated is True, any module or package deprecation messages
     will be suppressed."""
-    # NOTE: test_heapq and test_warnings include extra sanity checks to make
-    # sure that this utility function is working as expected
+    # NOTE: test_heapq, test_json and test_warnings include extra sanity checks
+    # to make sure that this utility function is working as expected
     with _ignore_deprecated_imports(deprecated):
         # Keep track of modules saved for later restoration as well
         # as those which just need a blocking entry removed

@@ -895,14 +892,8 @@ def transient_internet(resource_name, *, timeout=30.0, errnos=()):

 @contextlib.contextmanager
 def captured_output(stream_name):
-    """Run the 'with' statement body using a StringIO object in place of a
-    specific attribute on the sys module.
-    Example use (with 'stream_name=stdout')::
-
-       with captured_stdout() as s:
-           print("hello")
-       assert s.getvalue() == "hello"
-    """
+    """Return a context manager used by captured_stdout/stdin/stderr
+    that temporarily replaces the sys stream *stream_name* with a StringIO."""
     import io
     orig_stdout = getattr(sys, stream_name)
     setattr(sys, stream_name, io.StringIO())

@@ -912,6 +903,12 @@ def captured_output(stream_name):
         setattr(sys, stream_name, orig_stdout)

 def captured_stdout():
+    """Capture the output of sys.stdout:
+
+       with captured_stdout() as s:
+           print("hello")
+       self.assertEqual(s.getvalue(), "hello")
+    """
     return captured_output("stdout")

 def captured_stderr():

@@ -920,6 +917,7 @@ def captured_stderr():
 def captured_stdin():
     return captured_output("stdin")

+
 def gc_collect():
     """Force as many objects as possible to be collected.

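The key change to `_save_and_remove_module` above is that it now stashes a package's submodules out of `sys.modules` as well, so a later `import_fresh_module` really starts from scratch. A self-contained sketch of that behavior (using `importlib.import_module` in place of the raw `__import__` the real helper uses):

```python
import importlib
import sys

def save_and_remove_module(name, orig_modules):
    """Sketch of the refactored support helper: stash *name* and every
    submodule of it from sys.modules so the next import starts fresh."""
    # import the module first, letting ImportError propagate if it fails
    if name not in sys.modules:
        importlib.import_module(name)
    for modname in list(sys.modules):
        if modname == name or modname.startswith(name + '.'):
            orig_modules[modname] = sys.modules[modname]
            del sys.modules[modname]

saved = {}
save_and_remove_module('json', saved)
# 'json' and submodules like 'json.decoder' are now absent from sys.modules.
leftovers = [m for m in sys.modules
             if m == 'json' or m.startswith('json.')]
sys.modules.update(saved)  # restore the originals
```

The `startswith(name + '.')` check is what the old single-key version missed: without it, `json.decoder` and friends survived and could shadow a freshly imported package.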
@@ -193,6 +193,7 @@ class CompressTestCase(BaseCompressTestCase, unittest.TestCase):
         data = b'x' * size
         try:
             self.assertRaises(OverflowError, zlib.compress, data, 1)
+            self.assertRaises(OverflowError, zlib.decompress, data)
         finally:
             data = None

@@ -360,6 +361,15 @@ class CompressObjectTestCase(BaseCompressTestCase, unittest.TestCase):
         self.assertRaises(ValueError, dco.decompress, b"", -1)
         self.assertEqual(b'', dco.unconsumed_tail)

+    def test_clear_unconsumed_tail(self):
+        # Issue #12050: calling decompress() without providing max_length
+        # should clear the unconsumed_tail attribute.
+        cdata = b"x\x9cKLJ\x06\x00\x02M\x01'"    # "abc"
+        dco = zlib.decompressobj()
+        ddata = dco.decompress(cdata, 1)
+        ddata += dco.decompress(dco.unconsumed_tail)
+        self.assertEqual(dco.unconsumed_tail, b"")
+
     def test_flushes(self):
         # Test flush() with the various options, using all the
         # different levels in order to provide more variations.

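The new `test_clear_unconsumed_tail` pins down the Issue #12050 behavior: a length-limited `decompress()` parks the leftover compressed input in `unconsumed_tail`, and a later unlimited call both consumes it and clears the attribute rather than leaving stale bytes behind. A standalone sketch (it compresses the sample data instead of hardcoding the byte string used in the test):

```python
import zlib

cdata = zlib.compress(b"abc")
dco = zlib.decompressobj()

# With max_length=1 the output is capped, so the remaining compressed
# input is kept in dco.unconsumed_tail.
first = dco.decompress(cdata, 1)

# Calling decompress() again without max_length consumes the tail and,
# since the Issue #12050 fix, clears the attribute.
rest = dco.decompress(dco.unconsumed_tail)
```

The two pieces of output concatenate back to the original data, and the tail is empty afterwards.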
@@ -90,6 +90,9 @@ Core and Builtins
 Library
 -------

+- Issue #12050: zlib.decompressobj().decompress() now clears the unconsumed_tail
+  attribute when called without a max_length argument.
+
 - Issue #12062: Fix a flushing bug when doing a certain type of I/O sequence
   on a file opened in read+write mode (namely: reading, seeking a bit forward,
   writing, then seeking before the previous write but still within buffered

@@ -379,6 +382,8 @@ Extension Modules
 Tests
 -----

+- Issue #5723: Improve json tests to be executed with and without accelerations.
+
 - Issue #11873: Change regex in test_compileall to fix occasional failures when
   when the randomly generated temporary path happened to match the regex.

@@ -116,7 +116,7 @@ PyZlib_compress(PyObject *self, PyObject *args)
 {
     PyObject *ReturnVal = NULL;
     Py_buffer pinput;
-    Byte *input, *output;
+    Byte *input, *output = NULL;
     unsigned int length;
     int level=Z_DEFAULT_COMPRESSION, err;
     z_stream zst;

@@ -127,20 +127,19 @@ PyZlib_compress(PyObject *self, PyObject *args)

     if (pinput.len > UINT_MAX) {
         PyErr_SetString(PyExc_OverflowError,
-                        "size does not fit in an unsigned int");
-        return NULL;
+                        "Size does not fit in an unsigned int");
+        goto error;
     }
-    length = pinput.len;
     input = pinput.buf;
+    length = pinput.len;

     zst.avail_out = length + length/1000 + 12 + 1;

     output = (Byte*)malloc(zst.avail_out);
     if (output == NULL) {
-        PyBuffer_Release(&pinput);
         PyErr_SetString(PyExc_MemoryError,
                         "Can't allocate memory to compress data");
-        return NULL;
+        goto error;
     }

     /* Past the point of no return. From here on out, we need to make sure

@@ -203,7 +202,7 @@ PyDoc_STRVAR(decompress__doc__,
 static PyObject *
 PyZlib_decompress(PyObject *self, PyObject *args)
 {
-    PyObject *result_str;
+    PyObject *result_str = NULL;
     Py_buffer pinput;
     Byte *input;
     unsigned int length;

@@ -218,11 +217,11 @@ PyZlib_decompress(PyObject *self, PyObject *args)

     if (pinput.len > UINT_MAX) {
         PyErr_SetString(PyExc_OverflowError,
-                        "size does not fit in an unsigned int");
-        return NULL;
+                        "Size does not fit in an unsigned int");
+        goto error;
     }
-    length = pinput.len;
     input = pinput.buf;
+    length = pinput.len;

     if (r_strlen <= 0)
         r_strlen = 1;

@@ -230,10 +229,8 @@ PyZlib_decompress(PyObject *self, PyObject *args)
     zst.avail_in = length;
     zst.avail_out = r_strlen;

-    if (!(result_str = PyBytes_FromStringAndSize(NULL, r_strlen))) {
-        PyBuffer_Release(&pinput);
-        return NULL;
-    }
+    if (!(result_str = PyBytes_FromStringAndSize(NULL, r_strlen)))
+        goto error;

     zst.zalloc = (alloc_func)NULL;
     zst.zfree = (free_func)Z_NULL;

@@ -574,17 +571,22 @@ PyZlib_objdecompress(compobject *self, PyObject *args)
         Py_END_ALLOW_THREADS
     }

-    /* Not all of the compressed data could be accommodated in the output buffer
-       of specified size. Return the unconsumed tail in an attribute.*/
     if(max_length) {
+        /* Not all of the compressed data could be accommodated in a buffer of
+           the specified size. Return the unconsumed tail in an attribute. */
         Py_DECREF(self->unconsumed_tail);
         self->unconsumed_tail = PyBytes_FromStringAndSize((char *)self->zst.next_in,
                                                           self->zst.avail_in);
-        if(!self->unconsumed_tail) {
-            Py_DECREF(RetVal);
-            RetVal = NULL;
-            goto error;
-        }
+    }
+    else if (PyBytes_GET_SIZE(self->unconsumed_tail) > 0) {
+        /* All of the compressed data was consumed. Clear unconsumed_tail. */
+        Py_DECREF(self->unconsumed_tail);
+        self->unconsumed_tail = PyBytes_FromStringAndSize("", 0);
+    }
+    if (self->unconsumed_tail == NULL) {
+        Py_DECREF(RetVal);
+        RetVal = NULL;
+        goto error;
     }

     /* The end of the compressed data has been reached, so set the