mirror of https://github.com/python/cpython
Merged revisions 58221-58741 via svnmerge from
svn+ssh://pythondev@svn.python.org/python/trunk ........ r58221 | georg.brandl | 2007-09-20 10:57:59 -0700 (Thu, 20 Sep 2007) | 2 lines Patch #1181: add os.environ.clear() method. ........ r58225 | sean.reifschneider | 2007-09-20 23:33:28 -0700 (Thu, 20 Sep 2007) | 3 lines Issue1704287: "make install" fails unless you do "make" first. Make oldsharedmods and sharedmods in "libinstall". ........ r58232 | guido.van.rossum | 2007-09-22 13:18:03 -0700 (Sat, 22 Sep 2007) | 4 lines Patch # 188 by Philip Jenvey. Make tell() mark CRLF as a newline. With unit test. ........ r58242 | georg.brandl | 2007-09-24 10:55:47 -0700 (Mon, 24 Sep 2007) | 2 lines Fix typo and double word. ........ r58245 | georg.brandl | 2007-09-24 10:59:28 -0700 (Mon, 24 Sep 2007) | 2 lines #1196: document default radix for int(). ........ r58247 | georg.brandl | 2007-09-24 11:08:24 -0700 (Mon, 24 Sep 2007) | 2 lines #1177: accept 2xx responses for https too, not only http. ........ r58249 | andrew.kuchling | 2007-09-24 16:45:51 -0700 (Mon, 24 Sep 2007) | 1 line Remove stray odd character; grammar fix ........ r58250 | andrew.kuchling | 2007-09-24 16:46:28 -0700 (Mon, 24 Sep 2007) | 1 line Typo fix ........ r58251 | andrew.kuchling | 2007-09-24 17:09:42 -0700 (Mon, 24 Sep 2007) | 1 line Add various items ........ r58268 | vinay.sajip | 2007-09-26 22:34:45 -0700 (Wed, 26 Sep 2007) | 1 line Change to flush and close logic to fix #1760556. ........ r58269 | vinay.sajip | 2007-09-26 22:38:51 -0700 (Wed, 26 Sep 2007) | 1 line Change to basicConfig() to fix #1021. ........ r58270 | georg.brandl | 2007-09-26 23:26:58 -0700 (Wed, 26 Sep 2007) | 2 lines #1208: document match object's boolean value. ........ r58271 | vinay.sajip | 2007-09-26 23:56:13 -0700 (Wed, 26 Sep 2007) | 1 line Minor date change. ........ r58272 | vinay.sajip | 2007-09-27 00:35:10 -0700 (Thu, 27 Sep 2007) | 1 line Change to LogRecord.__init__() to fix #1206. Note that archaic use of type(x) == types.DictType is because of keeping 1.5.2 compatibility. While this is much less relevant these days, there probably needs to be a separate commit for removing all archaic constructs at the same time. ........ r58288 | brett.cannon | 2007-09-30 12:45:10 -0700 (Sun, 30 Sep 2007) | 9 lines tuple.__repr__ did not consider a reference loop as it is not possible from Python code; but it is possible from C. object.__str__ had the issue of not expecting a type to doing something within it's tp_str implementation that could trigger an infinite recursion, but it could in C code.. Both found thanks to BaseException and how it handles its repr. Closes issue #1686386. Thanks to Thomas Herve for taking an initial stab at coming up with a solution. ........ r58289 | brett.cannon | 2007-09-30 13:37:19 -0700 (Sun, 30 Sep 2007) | 3 lines Fix error introduced by r58288; if a tuple is length 0 return its repr and don't worry about any self-referring tuples. ........ r58294 | facundo.batista | 2007-10-02 10:01:24 -0700 (Tue, 02 Oct 2007) | 11 lines Made the various is_* operations return booleans. This was discussed with Cawlishaw by mail, and he basically confirmed that to these is_* operations, there's no need to return Decimal(0) and Decimal(1) if the language supports the False and True booleans. Also added a few tests for the these functions in extra.decTest, since they are mostly untested (apart from the doctests). Thanks Mark Dickinson ........ 
r58295 | facundo.batista | 2007-10-02 11:21:18 -0700 (Tue, 02 Oct 2007) | 4 lines Added a class to store the digits of log(10), so that they can be made available when necessary without recomputing. Thanks Mark Dickinson ........ r58299 | mark.summerfield | 2007-10-03 01:53:21 -0700 (Wed, 03 Oct 2007) | 4 lines Added note in footnote about string comparisons about unicodedata.normalize(). ........ r58304 | raymond.hettinger | 2007-10-03 14:18:11 -0700 (Wed, 03 Oct 2007) | 1 line enumerate() is no longer bounded to using sequences shorter than LONG_MAX. The possibility of overflow was sending some newsgroup posters into a tizzy. ........ r58305 | raymond.hettinger | 2007-10-03 17:20:27 -0700 (Wed, 03 Oct 2007) | 1 line itertools.count() no longer limited to sys.maxint. ........ r58306 | kurt.kaiser | 2007-10-03 18:49:54 -0700 (Wed, 03 Oct 2007) | 3 lines Assume that the user knows when he wants to end the line; don't insert something he didn't select or complete. ........ r58307 | kurt.kaiser | 2007-10-03 19:07:50 -0700 (Wed, 03 Oct 2007) | 2 lines Remove unused theme that was causing a fault in p3k. ........ r58308 | kurt.kaiser | 2007-10-03 19:09:17 -0700 (Wed, 03 Oct 2007) | 2 lines Clean up EditorWindow close. ........ r58309 | kurt.kaiser | 2007-10-03 19:53:07 -0700 (Wed, 03 Oct 2007) | 7 lines textView cleanup. Patch 1718043 Tal Einat. M idlelib/EditorWindow.py M idlelib/aboutDialog.py M idlelib/textView.py M idlelib/NEWS.txt ........ r58310 | kurt.kaiser | 2007-10-03 20:11:12 -0700 (Wed, 03 Oct 2007) | 3 lines configDialog cleanup. Patch 1730217 Tal Einat. ........ r58311 | neal.norwitz | 2007-10-03 23:00:48 -0700 (Wed, 03 Oct 2007) | 4 lines Coverity #151: Remove deadcode. All this code already exists above starting at line 653. ........ r58325 | fred.drake | 2007-10-04 19:46:12 -0700 (Thu, 04 Oct 2007) | 1 line wrap lines to <80 characters before fixing errors ........ r58326 | raymond.hettinger | 2007-10-04 19:47:07 -0700 (Thu, 04 Oct 2007) | 6 lines Add __asdict__() to NamedTuple and refine the docs. Add maxlen support to deque() and fixup docs. Partially fix __reduce__(). The None as a third arg was no longer supported. Still needs work on __reduce__() to handle recursive inputs. ........ r58327 | fred.drake | 2007-10-04 19:48:32 -0700 (Thu, 04 Oct 2007) | 3 lines move descriptions of ac_(in|out)_buffer_size to the right place http://bugs.python.org/issue1053 ........ r58329 | neal.norwitz | 2007-10-04 20:39:17 -0700 (Thu, 04 Oct 2007) | 3 lines dict could be NULL, so we need to XDECREF. Fix a compiler warning about passing a PyTypeObject* instead of PyObject*. ........ r58330 | neal.norwitz | 2007-10-04 20:41:19 -0700 (Thu, 04 Oct 2007) | 2 lines Fix Coverity #158: Check the correct variable. ........ r58332 | neal.norwitz | 2007-10-04 22:01:38 -0700 (Thu, 04 Oct 2007) | 7 lines Fix Coverity #159. This code was broken if save() returned a negative number since i contained a boolean value and then we compared i < 0 which should never be true. Will backport (assuming it's necessary) ........ r58334 | neal.norwitz | 2007-10-04 22:29:17 -0700 (Thu, 04 Oct 2007) | 1 line Add a note about fixing some more warnings found by Coverity. ........ r58338 | raymond.hettinger | 2007-10-05 12:07:31 -0700 (Fri, 05 Oct 2007) | 1 line Restore BEGIN/END THREADS macros which were squashed in the previous checkin ........ r58343 | gregory.p.smith | 2007-10-06 00:48:10 -0700 (Sat, 06 Oct 2007) | 3 lines Stab in the dark attempt to fix the test_bsddb3 failure on sparc and S-390 ubuntu buildbots. 
........ r58344 | gregory.p.smith | 2007-10-06 00:51:59 -0700 (Sat, 06 Oct 2007) | 2 lines Allows BerkeleyDB 4.6.x >= 4.6.21 for the bsddb module. ........ r58348 | gregory.p.smith | 2007-10-06 08:47:37 -0700 (Sat, 06 Oct 2007) | 3 lines Use the host the author likely meant in the first place. pop.gmail.com is reliable. gmail.org is someones personal domain. ........ r58351 | neal.norwitz | 2007-10-06 12:16:28 -0700 (Sat, 06 Oct 2007) | 3 lines Ensure that this test will pass even if another test left an unwritable TESTFN. Also use the safe unlink in test_support instead of rolling our own here. ........ r58368 | georg.brandl | 2007-10-08 00:50:24 -0700 (Mon, 08 Oct 2007) | 3 lines #1123: fix the docs for the str.split(None, sep) case. Also expand a few other methods' docs, which had more info in the deprecated string module docs. ........ r58369 | georg.brandl | 2007-10-08 01:06:05 -0700 (Mon, 08 Oct 2007) | 2 lines Update docstring of sched, also remove an unused assignment. ........ r58370 | raymond.hettinger | 2007-10-08 02:14:28 -0700 (Mon, 08 Oct 2007) | 5 lines Add comments to NamedTuple code. Let the field spec be either a string or a non-string sequence (suggested by Martin Blais with use cases). Improve the error message in the case of a SyntaxError (caused by a duplicate field name). ........ r58371 | raymond.hettinger | 2007-10-08 02:56:29 -0700 (Mon, 08 Oct 2007) | 1 line Missed a line in the docs ........ r58372 | raymond.hettinger | 2007-10-08 03:11:51 -0700 (Mon, 08 Oct 2007) | 1 line Better variable names ........ r58376 | georg.brandl | 2007-10-08 07:12:47 -0700 (Mon, 08 Oct 2007) | 3 lines #1199: docs for tp_as_{number,sequence,mapping}, by Amaury Forgeot d'Arc. No need to merge this to py3k! ........ r58380 | raymond.hettinger | 2007-10-08 14:26:58 -0700 (Mon, 08 Oct 2007) | 1 line Eliminate camelcase function name ........ r58381 | andrew.kuchling | 2007-10-08 16:23:03 -0700 (Mon, 08 Oct 2007) | 1 line Eliminate camelcase function name ........ r58382 | raymond.hettinger | 2007-10-08 18:36:23 -0700 (Mon, 08 Oct 2007) | 1 line Make the error messages more specific ........ r58384 | gregory.p.smith | 2007-10-08 23:02:21 -0700 (Mon, 08 Oct 2007) | 10 lines Splits Modules/_bsddb.c up into bsddb.h and _bsddb.c and adds a C API object available as bsddb.db.api. This is based on the patch submitted by Duncan Grisby here: http://sourceforge.net/tracker/index.php?func=detail&aid=1551895&group_id=13900&atid=313900 See this thread for additional info: http://sourceforge.net/mailarchive/forum.php?thread_name=E1GAVDK-0002rk-Iw%40apasphere.com&forum_name=pybsddb-users It also cleans up the code a little by removing some ifdef/endifs for python prior to 2.1 and for unsupported Berkeley DB <= 3.2. ........ r58385 | gregory.p.smith | 2007-10-08 23:50:43 -0700 (Mon, 08 Oct 2007) | 5 lines Fix a double free when positioning a database cursor to a non-existant string key (and probably a few other situations with string keys). This was reported with a patch as pybsddb sourceforge bug 1708868 by jjjhhhlll at gmail. ........ r58386 | gregory.p.smith | 2007-10-09 00:19:11 -0700 (Tue, 09 Oct 2007) | 3 lines Use the highest cPickle protocol in bsddb.dbshelve. This comes from sourceforge pybsddb patch 1551443 by w_barnes. ........ r58394 | gregory.p.smith | 2007-10-09 11:26:02 -0700 (Tue, 09 Oct 2007) | 2 lines remove another sleepycat reference ........ 
r58396 | kurt.kaiser | 2007-10-09 12:31:30 -0700 (Tue, 09 Oct 2007) | 3 lines Allow interrupt only when executing user code in subprocess Patch 1225 Tal Einat modified from IDLE-Spoon. ........ r58399 | brett.cannon | 2007-10-09 17:07:50 -0700 (Tue, 09 Oct 2007) | 5 lines Remove file-level typedefs that were inconsistently used throughout the file. Just move over to the public API names. Closes issue1238. ........ r58401 | raymond.hettinger | 2007-10-09 17:26:46 -0700 (Tue, 09 Oct 2007) | 1 line Accept Jim Jewett's api suggestion to use None instead of -1 to indicate unbounded deques. ........ r58403 | kurt.kaiser | 2007-10-09 17:55:40 -0700 (Tue, 09 Oct 2007) | 2 lines Allow cursor color change w/o restart. Patch 1725576 Tal Einat. ........ r58404 | kurt.kaiser | 2007-10-09 18:06:47 -0700 (Tue, 09 Oct 2007) | 2 lines show paste if > 80 columns. Patch 1659326 Tal Einat. ........ r58415 | thomas.heller | 2007-10-11 12:51:32 -0700 (Thu, 11 Oct 2007) | 5 lines On OS X, use os.uname() instead of gestalt.sysv(...) to get the operating system version. This allows to use ctypes when Python was configured with --disable-toolbox-glue. ........ r58419 | neal.norwitz | 2007-10-11 20:01:01 -0700 (Thu, 11 Oct 2007) | 1 line Get rid of warning about not being able to create an existing directory. ........ r58420 | neal.norwitz | 2007-10-11 20:01:30 -0700 (Thu, 11 Oct 2007) | 1 line Get rid of warnings on a bunch of platforms by using a proper prototype. ........ r58421 | neal.norwitz | 2007-10-11 20:01:54 -0700 (Thu, 11 Oct 2007) | 4 lines Get rid of compiler warning about retval being used (returned) without being initialized. (gcc warning and Coverity 202) ........ r58422 | neal.norwitz | 2007-10-11 20:03:23 -0700 (Thu, 11 Oct 2007) | 1 line Fix Coverity 168: Close the file before returning (exiting). ........ r58423 | neal.norwitz | 2007-10-11 20:04:18 -0700 (Thu, 11 Oct 2007) | 4 lines Fix Coverity 180: Don't overallocate. We don't need structs, but pointers. Also fix a memory leak. ........ r58424 | neal.norwitz | 2007-10-11 20:05:19 -0700 (Thu, 11 Oct 2007) | 5 lines Fix Coverity 185-186: If the passed in FILE is NULL, uninitialized memory would be accessed. Will backport. ........ r58425 | neal.norwitz | 2007-10-11 20:52:34 -0700 (Thu, 11 Oct 2007) | 1 line Get this module to compile with bsddb versions prior to 4.3 ........ r58430 | martin.v.loewis | 2007-10-12 01:56:52 -0700 (Fri, 12 Oct 2007) | 3 lines Bug #1216: Restore support for Visual Studio 2002. Will backport to 2.5. ........ r58433 | raymond.hettinger | 2007-10-12 10:53:11 -0700 (Fri, 12 Oct 2007) | 1 line Fix test of count.__repr__() to ignore the 'L' if the count is a long ........ r58434 | gregory.p.smith | 2007-10-12 11:44:06 -0700 (Fri, 12 Oct 2007) | 4 lines Fixes http://bugs.python.org/issue1233 - bsddb.dbshelve.DBShelf.append was useless due to inverted logic. Also adds a test case for RECNO dbs to test_dbshelve. ........ r58445 | georg.brandl | 2007-10-13 06:20:03 -0700 (Sat, 13 Oct 2007) | 2 lines Fix email example. ........ r58450 | gregory.p.smith | 2007-10-13 16:02:05 -0700 (Sat, 13 Oct 2007) | 2 lines Fix an uncollectable reference leak in bsddb.db.DBShelf.append ........ r58453 | neal.norwitz | 2007-10-13 17:18:40 -0700 (Sat, 13 Oct 2007) | 8 lines Let the O/S supply a port if none of the default ports can be used. This should make the tests more robust at the expense of allowing tests to be sloppier by not requiring them to cleanup after themselves. 
(It will legitamitely help when running two test suites simultaneously or if another process is already using one of the predefined ports.) Also simplifies (slightLy) the exception handling elsewhere. ........ r58459 | neal.norwitz | 2007-10-14 11:30:21 -0700 (Sun, 14 Oct 2007) | 2 lines Don't raise a string exception, they don't work anymore. ........ r58460 | neal.norwitz | 2007-10-14 11:40:37 -0700 (Sun, 14 Oct 2007) | 1 line Use unittest for assertions ........ r58468 | armin.rigo | 2007-10-15 00:48:35 -0700 (Mon, 15 Oct 2007) | 2 lines test_bigbits was not testing what it seemed to. ........ r58471 | guido.van.rossum | 2007-10-15 08:54:11 -0700 (Mon, 15 Oct 2007) | 3 lines Change a PyErr_Print() into a PyErr_Clear(), per discussion in issue 1031213. ........ r58500 | raymond.hettinger | 2007-10-16 12:18:30 -0700 (Tue, 16 Oct 2007) | 1 line Improve error messages ........ r58506 | raymond.hettinger | 2007-10-16 14:28:32 -0700 (Tue, 16 Oct 2007) | 1 line More docs, error messages, and tests ........ r58507 | andrew.kuchling | 2007-10-16 15:58:03 -0700 (Tue, 16 Oct 2007) | 1 line Add items ........ r58508 | brett.cannon | 2007-10-16 16:24:06 -0700 (Tue, 16 Oct 2007) | 3 lines Remove ``:const:`` notation on None in parameter list. Since the markup is not rendered for parameters it just showed up as ``:const:`None` `` in the output. ........ r58509 | brett.cannon | 2007-10-16 16:26:45 -0700 (Tue, 16 Oct 2007) | 3 lines Re-order some functions whose parameters differ between PyObject and const char * so that they are next to each other. ........ r58522 | armin.rigo | 2007-10-17 11:46:37 -0700 (Wed, 17 Oct 2007) | 5 lines Fix the overflow checking of list_repeat. Introduce overflow checking into list_inplace_repeat. Backport candidate, possibly. ........ r58530 | facundo.batista | 2007-10-17 20:16:03 -0700 (Wed, 17 Oct 2007) | 7 lines Issue #1580738. When HTTPConnection reads the whole stream with read(), it closes itself. When the stream is read in several calls to read(n), it should behave in the same way if HTTPConnection knows where the end of the stream is (through self.length). Added a test case for this behaviour. ........ r58531 | facundo.batista | 2007-10-17 20:44:48 -0700 (Wed, 17 Oct 2007) | 3 lines Issue 1289, just a typo. ........ r58532 | gregory.p.smith | 2007-10-18 00:56:54 -0700 (Thu, 18 Oct 2007) | 4 lines cleanup test_dbtables to use mkdtemp. cleanup dbtables to pass txn as a keyword argument whenever possible to avoid bugs and confusion. (dbtables.py line 447 self.db.get using txn as a non-keyword was an actual bug due to this) ........ r58533 | gregory.p.smith | 2007-10-18 01:34:20 -0700 (Thu, 18 Oct 2007) | 4 lines Fix a weird bug in dbtables: if it chose a random rowid string that contained NULL bytes it would cause the database all sorts of problems in the future leading to very strange random failures and corrupt dbtables.bsdTableDb dbs. ........ r58534 | gregory.p.smith | 2007-10-18 09:32:02 -0700 (Thu, 18 Oct 2007) | 3 lines A cleaner fix than the one committed last night. Generate random rowids that do not contain null bytes. ........ r58537 | gregory.p.smith | 2007-10-18 10:17:57 -0700 (Thu, 18 Oct 2007) | 2 lines mention bsddb fixes. ........ r58538 | raymond.hettinger | 2007-10-18 14:13:06 -0700 (Thu, 18 Oct 2007) | 1 line Remove useless warning ........ r58539 | gregory.p.smith | 2007-10-19 00:31:20 -0700 (Fri, 19 Oct 2007) | 2 lines squelch the warning that this test is supposed to trigger. ........ 
r58542 | georg.brandl | 2007-10-19 05:32:39 -0700 (Fri, 19 Oct 2007) | 2 lines Clarify wording for apply(). ........ r58544 | mark.summerfield | 2007-10-19 05:48:17 -0700 (Fri, 19 Oct 2007) | 3 lines Added a cross-ref to each other. ........ r58545 | georg.brandl | 2007-10-19 10:38:49 -0700 (Fri, 19 Oct 2007) | 2 lines #1284: "S" means "seen", not unread. ........ r58548 | thomas.heller | 2007-10-19 11:11:41 -0700 (Fri, 19 Oct 2007) | 4 lines Fix ctypes on 32-bit systems when Python is configured --with-system-ffi. See also https://bugs.launchpad.net/bugs/72505. Ported from release25-maint branch. ........ r58550 | facundo.batista | 2007-10-19 12:25:57 -0700 (Fri, 19 Oct 2007) | 8 lines The constructor from tuple was way too permissive: it allowed bad coefficient numbers, floats in the sign, and other details that generated directly the wrong number in the best case, or triggered misfunctionality in the alorithms. Test cases added for these issues. Thanks Mark Dickinson. ........ r58559 | georg.brandl | 2007-10-20 06:22:53 -0700 (Sat, 20 Oct 2007) | 2 lines Fix code being interpreted as a target. ........ r58561 | georg.brandl | 2007-10-20 06:36:24 -0700 (Sat, 20 Oct 2007) | 2 lines Document new "cmdoption" directive. ........ r58562 | georg.brandl | 2007-10-20 08:21:22 -0700 (Sat, 20 Oct 2007) | 2 lines Make a path more Unix-standardy. ........ r58564 | georg.brandl | 2007-10-20 10:51:39 -0700 (Sat, 20 Oct 2007) | 2 lines Document new directive "envvar". ........ r58567 | georg.brandl | 2007-10-20 11:08:14 -0700 (Sat, 20 Oct 2007) | 6 lines * Add new toplevel chapter, "Using Python." (how to install, configure and setup python on different platforms -- at least in theory.) * Move the Python on Mac docs in that chapter. * Add a new chapter about the command line invocation, by stargaming. ........ r58568 | georg.brandl | 2007-10-20 11:33:20 -0700 (Sat, 20 Oct 2007) | 2 lines Change title, for now. ........ r58569 | georg.brandl | 2007-10-20 11:39:25 -0700 (Sat, 20 Oct 2007) | 2 lines Add entry to ACKS. ........ r58570 | georg.brandl | 2007-10-20 12:05:45 -0700 (Sat, 20 Oct 2007) | 2 lines Clarify -E docs. ........ r58571 | georg.brandl | 2007-10-20 12:08:36 -0700 (Sat, 20 Oct 2007) | 2 lines Even more clarification. ........ r58572 | andrew.kuchling | 2007-10-20 12:25:37 -0700 (Sat, 20 Oct 2007) | 1 line Fix protocol name ........ r58573 | andrew.kuchling | 2007-10-20 12:35:18 -0700 (Sat, 20 Oct 2007) | 1 line Various items ........ r58574 | andrew.kuchling | 2007-10-20 12:39:35 -0700 (Sat, 20 Oct 2007) | 1 line Use correct header line ........ r58576 | armin.rigo | 2007-10-21 02:14:15 -0700 (Sun, 21 Oct 2007) | 3 lines Add a crasher for the long-standing issue with closing a file while another thread uses it. ........ r58577 | georg.brandl | 2007-10-21 03:01:56 -0700 (Sun, 21 Oct 2007) | 2 lines Remove duplicate crasher. ........ r58578 | georg.brandl | 2007-10-21 03:24:20 -0700 (Sun, 21 Oct 2007) | 2 lines Unify "byte code" to "bytecode". Also sprinkle :term: markup for it. ........ r58579 | georg.brandl | 2007-10-21 03:32:54 -0700 (Sun, 21 Oct 2007) | 2 lines Add markup to new function descriptions. ........ r58580 | georg.brandl | 2007-10-21 03:45:46 -0700 (Sun, 21 Oct 2007) | 2 lines Add :term:s for descriptors. ........ r58581 | georg.brandl | 2007-10-21 03:46:24 -0700 (Sun, 21 Oct 2007) | 2 lines Unify "file-descriptor" to "file descriptor". ........ r58582 | georg.brandl | 2007-10-21 03:52:38 -0700 (Sun, 21 Oct 2007) | 2 lines Add :term: for generators. ........ 
r58583 | georg.brandl | 2007-10-21 05:10:28 -0700 (Sun, 21 Oct 2007) | 2 lines Add :term:s for iterator. ........ r58584 | georg.brandl | 2007-10-21 05:15:05 -0700 (Sun, 21 Oct 2007) | 2 lines Add :term:s for "new-style class". ........ r58588 | neal.norwitz | 2007-10-21 21:47:54 -0700 (Sun, 21 Oct 2007) | 1 line Add Chris Monson so he can edit PEPs. ........ r58594 | guido.van.rossum | 2007-10-22 09:27:19 -0700 (Mon, 22 Oct 2007) | 4 lines Issue #1307, patch by Derek Shockey. When "MAIL" is received without args, an exception happens instead of sending a 501 syntax error response. ........ r58598 | travis.oliphant | 2007-10-22 19:40:56 -0700 (Mon, 22 Oct 2007) | 1 line Add phuang patch from Issue 708374 which adds offset parameter to mmap module. ........ r58601 | neal.norwitz | 2007-10-22 22:44:27 -0700 (Mon, 22 Oct 2007) | 2 lines Bug #1313, fix typo (wrong variable name) in example. ........ r58609 | georg.brandl | 2007-10-23 11:21:35 -0700 (Tue, 23 Oct 2007) | 2 lines Update Pygments version from externals. ........ r58618 | guido.van.rossum | 2007-10-23 12:25:41 -0700 (Tue, 23 Oct 2007) | 3 lines Issue 1307 by Derek Shockey, fox the same bug for RCPT. Neal: please backport! ........ r58620 | raymond.hettinger | 2007-10-23 13:37:41 -0700 (Tue, 23 Oct 2007) | 1 line Shorter name for namedtuple() ........ r58621 | andrew.kuchling | 2007-10-23 13:55:47 -0700 (Tue, 23 Oct 2007) | 1 line Update name ........ r58622 | raymond.hettinger | 2007-10-23 14:23:07 -0700 (Tue, 23 Oct 2007) | 1 line Fixup news entry ........ r58623 | raymond.hettinger | 2007-10-23 18:28:33 -0700 (Tue, 23 Oct 2007) | 1 line Optimize sum() for integer and float inputs. ........ r58624 | raymond.hettinger | 2007-10-23 19:05:51 -0700 (Tue, 23 Oct 2007) | 1 line Fixup error return and add support for intermixed ints and floats/ ........ r58628 | vinay.sajip | 2007-10-24 03:47:06 -0700 (Wed, 24 Oct 2007) | 1 line Bug #1321: Fixed logic error in TimedRotatingFileHandler.__init__() ........ r58641 | facundo.batista | 2007-10-24 12:11:08 -0700 (Wed, 24 Oct 2007) | 4 lines Issue 1290. CharacterData.__repr__ was constructing a string in response that keeped having a non-ascii character. ........ r58643 | thomas.heller | 2007-10-24 12:50:45 -0700 (Wed, 24 Oct 2007) | 1 line Added unittest for calling a function with paramflags (backport from py3k branch). ........ r58645 | matthias.klose | 2007-10-24 13:00:44 -0700 (Wed, 24 Oct 2007) | 2 lines - Build using system ffi library on arm*-linux*. ........ r58651 | georg.brandl | 2007-10-24 14:40:38 -0700 (Wed, 24 Oct 2007) | 2 lines Bug #1287: make os.environ.pop() work as expected. ........ r58652 | raymond.hettinger | 2007-10-24 19:26:58 -0700 (Wed, 24 Oct 2007) | 1 line Missing DECREFs ........ r58653 | matthias.klose | 2007-10-24 23:37:24 -0700 (Wed, 24 Oct 2007) | 2 lines - Build using system ffi library on arm*-linux*, pass --with-system-ffi to CONFIG_ARGS ........ r58655 | thomas.heller | 2007-10-25 12:47:32 -0700 (Thu, 25 Oct 2007) | 2 lines ffi_type_longdouble may be already #defined. See issue 1324. ........ r58656 | kurt.kaiser | 2007-10-25 15:43:45 -0700 (Thu, 25 Oct 2007) | 3 lines Correct an ancient bug in an unused path by removing that path: register() is now idempotent. ........ r58660 | kurt.kaiser | 2007-10-25 17:10:09 -0700 (Thu, 25 Oct 2007) | 4 lines 1. Add comments to provide top-level documentation. 2. Refactor to use more descriptive names. 3. Enhance tests in main(). ........ 
r58675 | georg.brandl | 2007-10-26 11:30:41 -0700 (Fri, 26 Oct 2007) | 2 lines Fix new pop() method on os.environ on ignorecase-platforms. ........ r58696 | neal.norwitz | 2007-10-27 15:32:21 -0700 (Sat, 27 Oct 2007) | 1 line Update URL for Pygments. 0.8.1 is no longer available ........ r58697 | hyeshik.chang | 2007-10-28 04:19:02 -0700 (Sun, 28 Oct 2007) | 3 lines - Add support for FreeBSD 8 which is recently forked from FreeBSD 7. - Regenerate IN module for most recent maintenance tree of FreeBSD 6 and 7. ........ r58698 | hyeshik.chang | 2007-10-28 05:38:09 -0700 (Sun, 28 Oct 2007) | 2 lines Enable platform-specific tweaks for FreeBSD 8 (exactly same to FreeBSD 7's yet) ........ r58700 | kurt.kaiser | 2007-10-28 12:03:59 -0700 (Sun, 28 Oct 2007) | 2 lines Add confirmation dialog before printing. Patch 1717170 Tal Einat. ........ r58706 | guido.van.rossum | 2007-10-29 13:52:45 -0700 (Mon, 29 Oct 2007) | 3 lines Patch 1353 by Jacob Winther. Add mp4 mapping to mimetypes.py. ........ r58709 | guido.van.rossum | 2007-10-29 15:15:05 -0700 (Mon, 29 Oct 2007) | 6 lines Backport fixes for the code that decodes octal escapes (and for PyString also hex escapes) -- this was reaching beyond the end of the input string buffer, even though it is not supposed to be \0-terminated. This has no visible effect but is clearly the correct thing to do. (In 3.0 it had a visible effect after removing ob_sstate from PyString.) ........ r58710 | kurt.kaiser | 2007-10-29 19:38:54 -0700 (Mon, 29 Oct 2007) | 7 lines check in Tal Einat's update to tabpage.py Patch 1612746 M configDialog.py M NEWS.txt AM tabbedpages.py ........ r58715 | georg.brandl | 2007-10-30 10:51:18 -0700 (Tue, 30 Oct 2007) | 2 lines Use correct markup. ........ r58716 | georg.brandl | 2007-10-30 10:57:12 -0700 (Tue, 30 Oct 2007) | 2 lines Make example about hiding None return values at the prompt clearer. ........ r58728 | neal.norwitz | 2007-10-30 23:33:20 -0700 (Tue, 30 Oct 2007) | 1 line Fix some compiler warnings for signed comparisons on Unix and Windows. ........ r58731 | martin.v.loewis | 2007-10-31 10:19:33 -0700 (Wed, 31 Oct 2007) | 2 lines Adding Christian Heimes. ........ r58737 | raymond.hettinger | 2007-10-31 14:57:58 -0700 (Wed, 31 Oct 2007) | 1 line Clarify the reasons why pickle is almost always better than marshal ........ r58739 | raymond.hettinger | 2007-10-31 15:15:49 -0700 (Wed, 31 Oct 2007) | 1 line Sets are marshalable. ........
parent 1d1a400164
commit 8ce8a784bd
@@ -30,11 +30,21 @@ storage.
 #------------------------------------------------------------------------

 import pickle
-try:
+import sys
+
+#At version 2.3 cPickle switched to using protocol instead of bin and
+#DictMixin was added
+if sys.version_info[:3] >= (2, 3, 0):
+    HIGHEST_PROTOCOL = pickle.HIGHEST_PROTOCOL
+    def _dumps(object, protocol):
+        return pickle.dumps(object, protocol=protocol)
     from UserDict import DictMixin
-except ImportError:
-    # DictMixin is new in Python 2.3
+else:
+    HIGHEST_PROTOCOL = None
+    def _dumps(object, protocol):
+        return pickle.dumps(object, bin=protocol)
     class DictMixin: pass

 from . import db

+_unspecified = object()
@@ -87,7 +97,10 @@ class DBShelf(DictMixin):
     def __init__(self, dbenv=None):
         self.db = db.DB(dbenv)
         self._closed = True
-        self.binary = 1
+        if HIGHEST_PROTOCOL:
+            self.protocol = HIGHEST_PROTOCOL
+        else:
+            self.protocol = 1


     def __del__(self):
@@ -114,7 +127,7 @@ class DBShelf(DictMixin):


     def __setitem__(self, key, value):
-        data = pickle.dumps(value, self.binary)
+        data = _dumps(value, self.protocol)
         self.db[key] = data


@@ -169,7 +182,7 @@ class DBShelf(DictMixin):
     # Other methods

     def __append(self, value, txn=None):
-        data = pickle.dumps(value, self.binary)
+        data = _dumps(value, self.protocol)
         return self.db.append(data, txn)

     def append(self, value, txn=None):
@@ -200,19 +213,19 @@ class DBShelf(DictMixin):
         return pickle.loads(data)

     def get_both(self, key, value, txn=None, flags=0):
-        data = pickle.dumps(value, self.binary)
+        data = _dumps(value, self.protocol)
         data = self.db.get(key, data, txn, flags)
         return pickle.loads(data)


     def cursor(self, txn=None, flags=0):
         c = DBShelfCursor(self.db.cursor(txn, flags))
-        c.binary = self.binary
+        c.protocol = self.protocol
         return c


     def put(self, key, value, txn=None, flags=0):
-        data = pickle.dumps(value, self.binary)
+        data = _dumps(value, self.protocol)
         return self.db.put(key, data, txn, flags)


@@ -252,11 +265,13 @@ class DBShelfCursor:
     #----------------------------------------------

     def dup(self, flags=0):
-        return DBShelfCursor(self.dbc.dup(flags))
+        c = DBShelfCursor(self.dbc.dup(flags))
+        c.protocol = self.protocol
+        return c


     def put(self, key, value, flags=0):
-        data = pickle.dumps(value, self.binary)
+        data = _dumps(value, self.protocol)
         return self.dbc.put(key, data, flags)


@@ -274,7 +289,7 @@ class DBShelfCursor:
         return self._extract(rec)

     def get_3(self, key, value, flags):
-        data = pickle.dumps(value, self.binary)
+        data = _dumps(value, self.protocol)
         rec = self.dbc.get(key, flags)
         return self._extract(rec)

@@ -291,7 +306,7 @@ class DBShelfCursor:


     def get_both(self, key, value, flags=0):
-        data = pickle.dumps(value, self.binary)
+        data = _dumps(value, self.protocol)
         rec = self.dbc.get_both(key, flags)
         return self._extract(rec)

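The hunks above (most likely Lib/bsddb/dbshelve.py, going by the r58386 log entry) replace the old per-shelf `binary` flag with a pickle protocol chosen once at import time through the small `_dumps()` shim. A minimal standalone sketch of that version-dispatch pattern — illustrative only, not the bsddb module itself:

import pickle
import sys

# Pick the best pickle protocol the running interpreter supports, the same
# way the _dumps() shim in the diff does: Python < 2.3 only knew the 'bin'
# keyword, 2.3 and later take 'protocol'.
if sys.version_info[:3] >= (2, 3, 0):
    HIGHEST_PROTOCOL = pickle.HIGHEST_PROTOCOL
    def _dumps(obj, protocol):
        return pickle.dumps(obj, protocol=protocol)
else:
    HIGHEST_PROTOCOL = None
    def _dumps(obj, protocol):
        return pickle.dumps(obj, bin=protocol)

protocol = HIGHEST_PROTOCOL if HIGHEST_PROTOCOL else 1
data = _dumps({'key': 'value'}, protocol)
print(len(data), pickle.loads(data))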
@@ -28,10 +28,10 @@ class MiscTestCase(unittest.TestCase):
             pass
         shutil.rmtree(self.homeDir)

-    def test01_badpointer(self):
-        dbs = dbshelve.open(self.filename)
-        dbs.close()
-        self.assertRaises(db.DBError, dbs.get, "foo")
+##     def test01_badpointer(self):
+##         dbs = dbshelve.open(self.filename)
+##         dbs.close()
+##         self.assertRaises(db.DBError, dbs.get, "foo")

     def test02_db_home(self):
         env = db.DBEnv()
@@ -46,6 +46,26 @@ class MiscTestCase(unittest.TestCase):
         rp = repr(db)
         self.assertEquals(rp, "{}")

+    # http://sourceforge.net/tracker/index.php?func=detail&aid=1708868&group_id=13900&atid=313900
+    #
+    # See the bug report for details.
+    #
+    # The problem was that make_key_dbt() was not allocating a copy of
+    # string keys but FREE_DBT() was always being told to free it when the
+    # database was opened with DB_THREAD.
+##     def test04_double_free_make_key_dbt(self):
+##         try:
+##             db1 = db.DB()
+##             db1.open(self.filename, None, db.DB_BTREE,
+##                      db.DB_CREATE | db.DB_THREAD)
+
+##             curs = db1.cursor()
+##             t = curs.get(b"/foo", db.DB_SET)
+##             # double free happened during exit from DBC_get
+##         finally:
+##             db1.close()
+##             os.unlink(self.filename)
+

 #----------------------------------------------------------------------

@@ -1,7 +1,8 @@
-__all__ = ['deque', 'defaultdict', 'NamedTuple']
+__all__ = ['deque', 'defaultdict', 'namedtuple']

 from _collections import deque, defaultdict
 from operator import itemgetter as _itemgetter
+from keyword import iskeyword as _iskeyword
 import sys as _sys

 # For bootstrapping reasons, the collection ABCs are defined in _abcoll.py.
@@ -10,11 +11,10 @@ from _abcoll import *
 import _abcoll
 __all__ += _abcoll.__all__

-
-def NamedTuple(typename, s, verbose=False):
+def namedtuple(typename, field_names, verbose=False):
     """Returns a new subclass of tuple with named fields.

-    >>> Point = NamedTuple('Point', 'x y')
+    >>> Point = namedtuple('Point', 'x y')
     >>> Point.__doc__ # docstring for the new class
     'Point(x, y)'
     >>> p = Point(11, y=22) # instantiate with positional args or keywords
@@ -25,19 +25,36 @@ def NamedTuple(typename, s, verbose=False):
     (11, 22)
     >>> p.x + p.y # fields also accessable by name
     33
-    >>> p # readable __repr__ with name=value style
+    >>> d = p.__asdict__() # convert to a dictionary
+    >>> d['x']
+    11
+    >>> Point(**d) # convert from a dictionary
     Point(x=11, y=22)
     >>> p.__replace__('x', 100) # __replace__() is like str.replace() but targets a named field
     Point(x=100, y=22)
-    >>> d = dict(zip(p.__fields__, p)) # use __fields__ to make a dictionary
-    >>> d['x']
-    11

     """

-    field_names = tuple(s.replace(',', ' ').split()) # names separated by spaces and/or commas
-    if not ''.join((typename,) + field_names).replace('_', '').isalnum():
-        raise ValueError('Type names and field names can only contain alphanumeric characters and underscores')
+    # Parse and validate the field names
+    if isinstance(field_names, str):
+        field_names = field_names.replace(',', ' ').split() # names separated by whitespace and/or commas
+    field_names = tuple(field_names)
+    for name in (typename,) + field_names:
+        if not name.replace('_', '').isalnum():
+            raise ValueError('Type names and field names can only contain alphanumeric characters and underscores: %r' % name)
+        if _iskeyword(name):
+            raise ValueError('Type names and field names cannot be a keyword: %r' % name)
+        if name[0].isdigit():
+            raise ValueError('Type names and field names cannot start with a number: %r' % name)
+    seen_names = set()
+    for name in field_names:
+        if name.startswith('__') and name.endswith('__'):
+            raise ValueError('Field names cannot start and end with double underscores: %r' % name)
+        if name in seen_names:
+            raise ValueError('Encountered duplicate field name: %r' % name)
+        seen_names.add(name)
+
+    # Create and fill-in the class template
     argtxt = repr(field_names).replace("'", "")[1:-1] # tuple repr without parens or quotes
     reprtxt = ', '.join('%s=%%r' % name for name in field_names)
     template = '''class %(typename)s(tuple):
@@ -48,18 +65,31 @@ def NamedTuple(typename, s, verbose=False):
            return tuple.__new__(cls, (%(argtxt)s))
        def __repr__(self):
            return '%(typename)s(%(reprtxt)s)' %% self
-        def __replace__(self, field, value):
+        def __asdict__(self, dict=dict, zip=zip):
+            'Return a new dict mapping field names to their values'
+            return dict(zip(%(field_names)r, self))
+        def __replace__(self, field, value, dict=dict, zip=zip):
            'Return a new %(typename)s object replacing one field with a new value'
            return %(typename)s(**dict(list(zip(%(field_names)r, self)) + [(field, value)])) \n''' % locals()
    for i, name in enumerate(field_names):
        template += ' %s = property(itemgetter(%d))\n' % (name, i)
    if verbose:
        print(template)
-    m = dict(itemgetter=_itemgetter)
-    exec(template, m)
-    result = m[typename]
+
+    # Execute the template string in a temporary namespace
+    namespace = dict(itemgetter=_itemgetter)
+    try:
+        exec(template, namespace)
+    except SyntaxError as e:
+        raise SyntaxError(e.message + ':\n' + template)
+    result = namespace[typename]
+
+    # For pickling to work, the __module__ variable needs to be set to the frame
+    # where the named tuple is created. Bypass this step in enviroments where
+    # sys._getframe is not defined (Jython for example).
    if hasattr(_sys, '_getframe'):
        result.__module__ = _sys._getframe(1).f_globals['__name__']
+
    return result


@@ -69,10 +99,10 @@ def NamedTuple(typename, s, verbose=False):
 if __name__ == '__main__':
     # verify that instances can be pickled
     from pickle import loads, dumps
-    Point = NamedTuple('Point', 'x, y', True)
+    Point = namedtuple('Point', 'x, y', True)
     p = Point(x=10, y=20)
     assert p == loads(dumps(p))

     import doctest
-    TestResults = NamedTuple('TestResults', 'failed attempted')
+    TestResults = namedtuple('TestResults', 'failed attempted')
     print(TestResults(*doctest.testmod()))
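The collections.py hunks above rename NamedTuple() to namedtuple() and add field-name validation. A short usage sketch against this revision's API — note that __fields__, __asdict__() and __replace__() are the spellings used here (later releases renamed them to _fields, _asdict() and _replace()):

from collections import namedtuple

# Field names may be one comma/space-separated string or a sequence of strings.
Point = namedtuple('Point', 'x y')      # same as namedtuple('Point', ['x', 'y'])
p = Point(11, y=22)

print(p)                                # Point(x=11, y=22)
print(p.x + p.y)                        # 33 -- fields readable as attributes
print(p.__asdict__())                   # {'x': 11, 'y': 22} (order may vary)
print(p.__replace__('x', 100))          # Point(x=100, y=22)

# The new validation code rejects keywords, duplicates, leading digits and
# dunder names.
try:
    namedtuple('Bad', 'x class')
except ValueError as err:
    print(err)                          # ... cannot be a keyword: 'class'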
@ -21,19 +21,12 @@ if _os.name in ("nt", "ce"):
|
|||
|
||||
DEFAULT_MODE = RTLD_LOCAL
|
||||
if _os.name == "posix" and _sys.platform == "darwin":
|
||||
import gestalt
|
||||
|
||||
# gestalt.gestalt("sysv") returns the version number of the
|
||||
# currently active system file as BCD.
|
||||
# On OS X 10.4.6 -> 0x1046
|
||||
# On OS X 10.2.8 -> 0x1028
|
||||
# See also http://www.rgaros.nl/gestalt/
|
||||
#
|
||||
# On OS X 10.3, we use RTLD_GLOBAL as default mode
|
||||
# because RTLD_LOCAL does not work at least on some
|
||||
# libraries.
|
||||
# libraries. OS X 10.3 is Darwin 7, so we check for
|
||||
# that.
|
||||
|
||||
if gestalt.gestalt("sysv") < 0x1040:
|
||||
if int(_os.uname()[2].split('.')[0]) < 8:
|
||||
DEFAULT_MODE = RTLD_GLOBAL
|
||||
|
||||
from _ctypes import FUNCFLAG_CDECL as _FUNCFLAG_CDECL, \
|
||||
|
|
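The ctypes hunk above swaps the Carbon gestalt() call for a check of the Darwin kernel release reported by os.uname(), so ctypes works even when Python is built with --disable-toolbox-glue (r58415). A small sketch of the same check; the release string shown is only an example:

import os

# os.uname() -> (sysname, nodename, release, version, machine); on Mac OS X
# the release field is the Darwin version, e.g. "8.11.0" on OS X 10.4.
# OS X 10.3 is Darwin 7, which is why the diff tests for "< 8".
release = os.uname()[2]
darwin_major = int(release.split('.')[0])
use_rtld_global = darwin_major < 8
print(release, darwin_major, use_rtld_global)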
Lib/decimal.py (318 changed lines)
@ -562,20 +562,46 @@ class Decimal(object):
|
|||
# tuple/list conversion (possibly from as_tuple())
|
||||
if isinstance(value, (list,tuple)):
|
||||
if len(value) != 3:
|
||||
raise ValueError('Invalid arguments')
|
||||
if value[0] not in (0,1):
|
||||
raise ValueError('Invalid sign')
|
||||
for digit in value[1]:
|
||||
if not isinstance(digit, int) or digit < 0:
|
||||
raise ValueError("The second value in the tuple must be "
|
||||
"composed of non negative integer elements.")
|
||||
raise ValueError('Invalid tuple size in creation of Decimal '
|
||||
'from list or tuple. The list or tuple '
|
||||
'should have exactly three elements.')
|
||||
# process sign. The isinstance test rejects floats
|
||||
if not (isinstance(value[0], int) and value[0] in (0,1)):
|
||||
raise ValueError("Invalid sign. The first value in the tuple "
|
||||
"should be an integer; either 0 for a "
|
||||
"positive number or 1 for a negative number.")
|
||||
self._sign = value[0]
|
||||
self._int = tuple(value[1])
|
||||
if value[2] in ('F','n','N'):
|
||||
if value[2] == 'F':
|
||||
# infinity: value[1] is ignored
|
||||
self._int = (0,)
|
||||
self._exp = value[2]
|
||||
self._is_special = True
|
||||
else:
|
||||
self._exp = int(value[2])
|
||||
# process and validate the digits in value[1]
|
||||
digits = []
|
||||
for digit in value[1]:
|
||||
if isinstance(digit, int) and 0 <= digit <= 9:
|
||||
# skip leading zeros
|
||||
if digits or digit != 0:
|
||||
digits.append(digit)
|
||||
else:
|
||||
raise ValueError("The second value in the tuple must "
|
||||
"be composed of integers in the range "
|
||||
"0 through 9.")
|
||||
if value[2] in ('n', 'N'):
|
||||
# NaN: digits form the diagnostic
|
||||
self._int = tuple(digits)
|
||||
self._exp = value[2]
|
||||
self._is_special = True
|
||||
elif isinstance(value[2], int):
|
||||
# finite number: digits give the coefficient
|
||||
self._int = tuple(digits or [0])
|
||||
self._exp = value[2]
|
||||
self._is_special = False
|
||||
else:
|
||||
raise ValueError("The third value in the tuple must "
|
||||
"be an integer, or one of the "
|
||||
"strings 'F', 'n', 'N'.")
|
||||
return self
|
||||
|
||||
if isinstance(value, float):
|
||||
|
@@ -679,14 +705,11 @@ class Decimal(object):
         return 0

     def __bool__(self):
-        """return True if the number is non-zero.
+        """Return True if self is nonzero; otherwise return False.

-        False if self == 0
-        True if self != 0
+        NaNs and infinities are considered nonzero.
         """
-        if self._is_special:
-            return True
-        return sum(self._int) != 0
+        return self._is_special or self._int[0] != 0

     def __cmp__(self, other):
         other = _convert_other(other)
@ -2252,15 +2275,18 @@ class Decimal(object):
|
|||
return ans
|
||||
|
||||
def same_quantum(self, other):
|
||||
"""Test whether self and other have the same exponent.
|
||||
"""Return True if self and other have the same exponent; otherwise
|
||||
return False.
|
||||
|
||||
same as self._exp == other._exp, except NaN == sNaN
|
||||
If either operand is a special value, the following rules are used:
|
||||
* return True if both operands are infinities
|
||||
* return True if both operands are NaNs
|
||||
* otherwise, return False.
|
||||
"""
|
||||
other = _convert_other(other, raiseit=True)
|
||||
if self._is_special or other._is_special:
|
||||
if self._isnan() or other._isnan():
|
||||
return self._isnan() and other._isnan() and True
|
||||
if self._isinfinity() or other._isinfinity():
|
||||
return self._isinfinity() and other._isinfinity() and True
|
||||
return (self.is_nan() and other.is_nan() or
|
||||
self.is_infinite() and other.is_infinite())
|
||||
return self._exp == other._exp
|
||||
|
||||
def _rescale(self, exp, rounding):
|
||||
|
@ -2743,84 +2769,60 @@ class Decimal(object):
|
|||
return ans
|
||||
|
||||
def is_canonical(self):
|
||||
"""Returns 1 if self is canonical; otherwise returns 0."""
|
||||
return Dec_p1
|
||||
"""Return True if self is canonical; otherwise return False.
|
||||
|
||||
Currently, the encoding of a Decimal instance is always
|
||||
canonical, so this method returns True for any Decimal.
|
||||
"""
|
||||
return True
|
||||
|
||||
def is_finite(self):
|
||||
"""Returns 1 if self is finite, otherwise returns 0.
|
||||
"""Return True if self is finite; otherwise return False.
|
||||
|
||||
For it to be finite, it must be neither infinite nor a NaN.
|
||||
A Decimal instance is considered finite if it is neither
|
||||
infinite nor a NaN.
|
||||
"""
|
||||
if self._is_special:
|
||||
return Dec_0
|
||||
else:
|
||||
return Dec_p1
|
||||
return not self._is_special
|
||||
|
||||
def is_infinite(self):
|
||||
"""Returns 1 if self is an Infinite, otherwise returns 0."""
|
||||
if self._isinfinity():
|
||||
return Dec_p1
|
||||
else:
|
||||
return Dec_0
|
||||
"""Return True if self is infinite; otherwise return False."""
|
||||
return self._exp == 'F'
|
||||
|
||||
def is_nan(self):
|
||||
"""Returns 1 if self is qNaN or sNaN, otherwise returns 0."""
|
||||
if self._isnan():
|
||||
return Dec_p1
|
||||
else:
|
||||
return Dec_0
|
||||
"""Return True if self is a qNaN or sNaN; otherwise return False."""
|
||||
return self._exp in ('n', 'N')
|
||||
|
||||
def is_normal(self, context=None):
|
||||
"""Returns 1 if self is a normal number, otherwise returns 0."""
|
||||
if self._is_special:
|
||||
return Dec_0
|
||||
if not self:
|
||||
return Dec_0
|
||||
"""Return True if self is a normal number; otherwise return False."""
|
||||
if self._is_special or not self:
|
||||
return False
|
||||
if context is None:
|
||||
context = getcontext()
|
||||
if context.Emin <= self.adjusted() <= context.Emax:
|
||||
return Dec_p1
|
||||
else:
|
||||
return Dec_0
|
||||
return context.Emin <= self.adjusted() <= context.Emax
|
||||
|
||||
def is_qnan(self):
|
||||
"""Returns 1 if self is a quiet NaN, otherwise returns 0."""
|
||||
if self._isnan() == 1:
|
||||
return Dec_p1
|
||||
else:
|
||||
return Dec_0
|
||||
"""Return True if self is a quiet NaN; otherwise return False."""
|
||||
return self._exp == 'n'
|
||||
|
||||
def is_signed(self):
|
||||
"""Returns 1 if self is negative, otherwise returns 0."""
|
||||
return Decimal(self._sign)
|
||||
"""Return True if self is negative; otherwise return False."""
|
||||
return self._sign == 1
|
||||
|
||||
def is_snan(self):
|
||||
"""Returns 1 if self is a signaling NaN, otherwise returns 0."""
|
||||
if self._isnan() == 2:
|
||||
return Dec_p1
|
||||
else:
|
||||
return Dec_0
|
||||
"""Return True if self is a signaling NaN; otherwise return False."""
|
||||
return self._exp == 'N'
|
||||
|
||||
def is_subnormal(self, context=None):
|
||||
"""Returns 1 if self is subnormal, otherwise returns 0."""
|
||||
if self._is_special:
|
||||
return Dec_0
|
||||
if not self:
|
||||
return Dec_0
|
||||
"""Return True if self is subnormal; otherwise return False."""
|
||||
if self._is_special or not self:
|
||||
return False
|
||||
if context is None:
|
||||
context = getcontext()
|
||||
|
||||
r = self._exp + len(self._int)
|
||||
if r <= context.Emin:
|
||||
return Dec_p1
|
||||
return Dec_0
|
||||
return self.adjusted() < context.Emin
|
||||
|
||||
def is_zero(self):
|
||||
"""Returns 1 if self is a zero, otherwise returns 0."""
|
||||
if self:
|
||||
return Dec_0
|
||||
else:
|
||||
return Dec_p1
|
||||
"""Return True if self is a zero; otherwise return False."""
|
||||
return not self._is_special and self._int[0] == 0
|
||||
|
||||
def _ln_exp_bound(self):
|
||||
"""Compute a lower bound for the adjusted exponent of self.ln().
|
||||
|
@ -3883,138 +3885,145 @@ class Context(object):
|
|||
return a.fma(b, c, context=self)
|
||||
|
||||
def is_canonical(self, a):
|
||||
"""Returns 1 if the operand is canonical; otherwise returns 0.
|
||||
"""Return True if the operand is canonical; otherwise return False.
|
||||
|
||||
Currently, the encoding of a Decimal instance is always
|
||||
canonical, so this method returns True for any Decimal.
|
||||
|
||||
>>> ExtendedContext.is_canonical(Decimal('2.50'))
|
||||
Decimal("1")
|
||||
True
|
||||
"""
|
||||
return Dec_p1
|
||||
return a.is_canonical()
|
||||
|
||||
def is_finite(self, a):
|
||||
"""Returns 1 if the operand is finite, otherwise returns 0.
|
||||
"""Return True if the operand is finite; otherwise return False.
|
||||
|
||||
For it to be finite, it must be neither infinite nor a NaN.
|
||||
A Decimal instance is considered finite if it is neither
|
||||
infinite nor a NaN.
|
||||
|
||||
>>> ExtendedContext.is_finite(Decimal('2.50'))
|
||||
Decimal("1")
|
||||
True
|
||||
>>> ExtendedContext.is_finite(Decimal('-0.3'))
|
||||
Decimal("1")
|
||||
True
|
||||
>>> ExtendedContext.is_finite(Decimal('0'))
|
||||
Decimal("1")
|
||||
True
|
||||
>>> ExtendedContext.is_finite(Decimal('Inf'))
|
||||
Decimal("0")
|
||||
False
|
||||
>>> ExtendedContext.is_finite(Decimal('NaN'))
|
||||
Decimal("0")
|
||||
False
|
||||
"""
|
||||
return a.is_finite()
|
||||
|
||||
def is_infinite(self, a):
|
||||
"""Returns 1 if the operand is an Infinite, otherwise returns 0.
|
||||
"""Return True if the operand is infinite; otherwise return False.
|
||||
|
||||
>>> ExtendedContext.is_infinite(Decimal('2.50'))
|
||||
Decimal("0")
|
||||
False
|
||||
>>> ExtendedContext.is_infinite(Decimal('-Inf'))
|
||||
Decimal("1")
|
||||
True
|
||||
>>> ExtendedContext.is_infinite(Decimal('NaN'))
|
||||
Decimal("0")
|
||||
False
|
||||
"""
|
||||
return a.is_infinite()
|
||||
|
||||
def is_nan(self, a):
|
||||
"""Returns 1 if the operand is qNaN or sNaN, otherwise returns 0.
|
||||
"""Return True if the operand is a qNaN or sNaN;
|
||||
otherwise return False.
|
||||
|
||||
>>> ExtendedContext.is_nan(Decimal('2.50'))
|
||||
Decimal("0")
|
||||
False
|
||||
>>> ExtendedContext.is_nan(Decimal('NaN'))
|
||||
Decimal("1")
|
||||
True
|
||||
>>> ExtendedContext.is_nan(Decimal('-sNaN'))
|
||||
Decimal("1")
|
||||
True
|
||||
"""
|
||||
return a.is_nan()
|
||||
|
||||
def is_normal(self, a):
|
||||
"""Returns 1 if the operand is a normal number, otherwise returns 0.
|
||||
"""Return True if the operand is a normal number;
|
||||
otherwise return False.
|
||||
|
||||
>>> c = ExtendedContext.copy()
|
||||
>>> c.Emin = -999
|
||||
>>> c.Emax = 999
|
||||
>>> c.is_normal(Decimal('2.50'))
|
||||
Decimal("1")
|
||||
True
|
||||
>>> c.is_normal(Decimal('0.1E-999'))
|
||||
Decimal("0")
|
||||
False
|
||||
>>> c.is_normal(Decimal('0.00'))
|
||||
Decimal("0")
|
||||
False
|
||||
>>> c.is_normal(Decimal('-Inf'))
|
||||
Decimal("0")
|
||||
False
|
||||
>>> c.is_normal(Decimal('NaN'))
|
||||
Decimal("0")
|
||||
False
|
||||
"""
|
||||
return a.is_normal(context=self)
|
||||
|
||||
def is_qnan(self, a):
|
||||
"""Returns 1 if the operand is a quiet NaN, otherwise returns 0.
|
||||
"""Return True if the operand is a quiet NaN; otherwise return False.
|
||||
|
||||
>>> ExtendedContext.is_qnan(Decimal('2.50'))
|
||||
Decimal("0")
|
||||
False
|
||||
>>> ExtendedContext.is_qnan(Decimal('NaN'))
|
||||
Decimal("1")
|
||||
True
|
||||
>>> ExtendedContext.is_qnan(Decimal('sNaN'))
|
||||
Decimal("0")
|
||||
False
|
||||
"""
|
||||
return a.is_qnan()
|
||||
|
||||
def is_signed(self, a):
|
||||
"""Returns 1 if the operand is negative, otherwise returns 0.
|
||||
"""Return True if the operand is negative; otherwise return False.
|
||||
|
||||
>>> ExtendedContext.is_signed(Decimal('2.50'))
|
||||
Decimal("0")
|
||||
False
|
||||
>>> ExtendedContext.is_signed(Decimal('-12'))
|
||||
Decimal("1")
|
||||
True
|
||||
>>> ExtendedContext.is_signed(Decimal('-0'))
|
||||
Decimal("1")
|
||||
True
|
||||
"""
|
||||
return a.is_signed()
|
||||
|
||||
def is_snan(self, a):
|
||||
"""Returns 1 if the operand is a signaling NaN, otherwise returns 0.
|
||||
"""Return True if the operand is a signaling NaN;
|
||||
otherwise return False.
|
||||
|
||||
>>> ExtendedContext.is_snan(Decimal('2.50'))
|
||||
Decimal("0")
|
||||
False
|
||||
>>> ExtendedContext.is_snan(Decimal('NaN'))
|
||||
Decimal("0")
|
||||
False
|
||||
>>> ExtendedContext.is_snan(Decimal('sNaN'))
|
||||
Decimal("1")
|
||||
True
|
||||
"""
|
||||
return a.is_snan()
|
||||
|
||||
def is_subnormal(self, a):
|
||||
"""Returns 1 if the operand is subnormal, otherwise returns 0.
|
||||
"""Return True if the operand is subnormal; otherwise return False.
|
||||
|
||||
>>> c = ExtendedContext.copy()
|
||||
>>> c.Emin = -999
|
||||
>>> c.Emax = 999
|
||||
>>> c.is_subnormal(Decimal('2.50'))
|
||||
Decimal("0")
|
||||
False
|
||||
>>> c.is_subnormal(Decimal('0.1E-999'))
|
||||
Decimal("1")
|
||||
True
|
||||
>>> c.is_subnormal(Decimal('0.00'))
|
||||
Decimal("0")
|
||||
False
|
||||
>>> c.is_subnormal(Decimal('-Inf'))
|
||||
Decimal("0")
|
||||
False
|
||||
>>> c.is_subnormal(Decimal('NaN'))
|
||||
Decimal("0")
|
||||
False
|
||||
"""
|
||||
return a.is_subnormal(context=self)
|
||||
|
||||
def is_zero(self, a):
|
||||
"""Returns 1 if the operand is a zero, otherwise returns 0.
|
||||
"""Return True if the operand is a zero; otherwise return False.
|
||||
|
||||
>>> ExtendedContext.is_zero(Decimal('0'))
|
||||
Decimal("1")
|
||||
True
|
||||
>>> ExtendedContext.is_zero(Decimal('2.50'))
|
||||
Decimal("0")
|
||||
False
|
||||
>>> ExtendedContext.is_zero(Decimal('-0E+2'))
|
||||
Decimal("1")
|
||||
True
|
||||
"""
|
||||
return a.is_zero()
|
||||
|
||||
|
@ -4937,7 +4946,7 @@ def _dlog10(c, e, p):
|
|||
c = _div_nearest(c, 10**-k)
|
||||
|
||||
log_d = _ilog(c, M) # error < 5 + 22 = 27
|
||||
log_10 = _ilog(10*M, M) # error < 15
|
||||
log_10 = _log10_digits(p) # error < 1
|
||||
log_d = _div_nearest(log_d*M, log_10)
|
||||
log_tenpower = f*M # exact
|
||||
else:
|
||||
|
@ -4975,24 +4984,58 @@ def _dlog(c, e, p):
|
|||
# p <= 0: just approximate the whole thing by 0; error < 2.31
|
||||
log_d = 0
|
||||
|
||||
# compute approximation to 10**p*f*log(10), with error < 17
|
||||
# compute approximation to f*10**p*log(10), with error < 11.
|
||||
if f:
|
||||
sign_f = [-1, 1][f > 0]
|
||||
if p >= 0:
|
||||
M = 10**p * abs(f)
|
||||
else:
|
||||
M = _div_nearest(abs(f), 10**-p) # M = 10**p*|f|, error <= 0.5
|
||||
|
||||
if M:
|
||||
f_log_ten = sign_f*_ilog(10*M, M) # M*log(10), error <= 1.2 + 15 < 17
|
||||
extra = len(str(abs(f)))-1
|
||||
if p + extra >= 0:
|
||||
# error in f * _log10_digits(p+extra) < |f| * 1 = |f|
|
||||
# after division, error < |f|/10**extra + 0.5 < 10 + 0.5 < 11
|
||||
f_log_ten = _div_nearest(f*_log10_digits(p+extra), 10**extra)
|
||||
else:
|
||||
f_log_ten = 0
|
||||
else:
|
||||
f_log_ten = 0
|
||||
|
||||
# error in sum < 17+27 = 44; error after division < 0.44 + 0.5 < 1
|
||||
# error in sum < 11+27 = 38; error after division < 0.38 + 0.5 < 1
|
||||
return _div_nearest(f_log_ten + log_d, 100)
|
||||
|
||||
class _Log10Memoize(object):
|
||||
"""Class to compute, store, and allow retrieval of, digits of the
|
||||
constant log(10) = 2.302585.... This constant is needed by
|
||||
Decimal.ln, Decimal.log10, Decimal.exp and Decimal.__pow__."""
|
||||
def __init__(self):
|
||||
self.digits = "23025850929940456840179914546843642076011014886"
|
||||
|
||||
def getdigits(self, p):
|
||||
"""Given an integer p >= 0, return floor(10**p)*log(10).
|
||||
|
||||
For example, self.getdigits(3) returns 2302.
|
||||
"""
|
||||
# digits are stored as a string, for quick conversion to
|
||||
# integer in the case that we've already computed enough
|
||||
# digits; the stored digits should always be correct
|
||||
# (truncated, not rounded to nearest).
|
||||
if p < 0:
|
||||
raise ValueError("p should be nonnegative")
|
||||
|
||||
if p >= len(self.digits):
|
||||
# compute p+3, p+6, p+9, ... digits; continue until at
|
||||
# least one of the extra digits is nonzero
|
||||
extra = 3
|
||||
while True:
|
||||
# compute p+extra digits, correct to within 1ulp
|
||||
M = 10**(p+extra+2)
|
||||
digits = str(_div_nearest(_ilog(10*M, M), 100))
|
||||
if digits[-extra:] != '0'*extra:
|
||||
break
|
||||
extra += 3
|
||||
# keep all reliable digits so far; remove trailing zeros
|
||||
# and next nonzero digit
|
||||
self.digits = digits.rstrip('0')[:-1]
|
||||
return int(self.digits[:p+1])
|
||||
|
||||
_log10_digits = _Log10Memoize().getdigits
|
||||
|
||||
def _iexp(x, M, L=8):
|
||||
"""Given integers x and M, M > 0, such that x/M is small in absolute
|
||||
value, compute an integer approximation to M*exp(x/M). For 0 <=
|
||||
|
@ -5034,7 +5077,7 @@ def _dexp(c, e, p):
|
|||
"""Compute an approximation to exp(c*10**e), with p decimal places of
|
||||
precision.
|
||||
|
||||
Returns d, f such that:
|
||||
Returns integers d, f such that:
|
||||
|
||||
10**(p-1) <= d <= 10**p, and
|
||||
(d-1)*10**f < exp(c*10**e) < (d+1)*10**f
|
||||
|
@ -5047,19 +5090,18 @@ def _dexp(c, e, p):
|
|||
# we'll call iexp with M = 10**(p+2), giving p+3 digits of precision
|
||||
p += 2
|
||||
|
||||
# compute log10 with extra precision = adjusted exponent of c*10**e
|
||||
# compute log(10) with extra precision = adjusted exponent of c*10**e
|
||||
extra = max(0, e + len(str(c)) - 1)
|
||||
q = p + extra
|
||||
log10 = _dlog(10, 0, q) # error <= 1
|
||||
|
||||
# compute quotient c*10**e/(log10/10**q) = c*10**(e+q)/log10,
|
||||
# compute quotient c*10**e/(log(10)) = c*10**(e+q)/(log(10)*10**q),
|
||||
# rounding down
|
||||
shift = e+q
|
||||
if shift >= 0:
|
||||
cshift = c*10**shift
|
||||
else:
|
||||
cshift = c//10**-shift
|
||||
quot, rem = divmod(cshift, log10)
|
||||
quot, rem = divmod(cshift, _log10_digits(q))
|
||||
|
||||
# reduce remainder back to original precision
|
||||
rem = _div_nearest(rem, 10**extra)
|
||||
|
|
|
@ -527,7 +527,7 @@ class HTTPResponse:
|
|||
|
||||
def read(self, amt=None):
|
||||
if self.fp is None:
|
||||
return ""
|
||||
return b""
|
||||
|
||||
if self.chunked:
|
||||
return self._read_chunked(amt)
|
||||
|
@ -553,7 +553,8 @@ class HTTPResponse:
|
|||
s = self.fp.read(amt)
|
||||
if self.length is not None:
|
||||
self.length -= len(s)
|
||||
|
||||
if not self.length:
|
||||
self.close()
|
||||
return s
|
||||
|
||||
def _read_chunked(self, amt):
|
||||
|
@ -595,7 +596,7 @@ class HTTPResponse:
|
|||
### note: we shouldn't have any trailers!
|
||||
while True:
|
||||
line = self.fp.readline()
|
||||
if line == "\r\n":
|
||||
if line == b"\r\n":
|
||||
break
|
||||
|
||||
# we read everything; close the "file"
|
||||
|
|
|
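Two behaviours from the HTTPResponse hunks above: read() on an already-closed response now returns b'' instead of '', and a body read in several read(n) calls closes the response once the declared length is consumed (Issue #1580738). A rough usage sketch, assuming this era's py3k httplib API; www.example.com is only a placeholder host:

import httplib                      # this branch's name for the module

conn = httplib.HTTPConnection('www.example.com')
conn.request('GET', '/')
resp = conn.getresponse()

chunks = []
while True:
    chunk = resp.read(4096)         # read the body in pieces
    if not chunk:                   # b"" once self.length is exhausted
        break
    chunks.append(chunk)
body = b''.join(chunks)

# The response closed itself when the last byte of the declared
# Content-Length was read, so the connection can be reused.
print(resp.isclosed(), len(body))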
@@ -27,7 +27,7 @@ class AutoComplete:

    menudefs = [
        ('edit', [
-            ("Show completions", "<<force-open-completions>>"),
+            ("Show Completions", "<<force-open-completions>>"),
        ])
    ]

@ -283,20 +283,9 @@ class AutoCompleteWindow:
|
|||
self._selection_changed()
|
||||
return "break"
|
||||
|
||||
elif keysym == "Return" and not state:
|
||||
# If start is a prefix of the selection, or there was an indication
|
||||
# that the user used the completion window, put the selected
|
||||
# completion in the text, and close the list.
|
||||
# Otherwise, close the window and let the event through.
|
||||
cursel = int(self.listbox.curselection()[0])
|
||||
if self.completions[cursel][:len(self.start)] == self.start or \
|
||||
self.userwantswindow:
|
||||
self._change_start(self.completions[cursel])
|
||||
self.hide_window()
|
||||
return "break"
|
||||
else:
|
||||
self.hide_window()
|
||||
return
|
||||
elif keysym == "Return":
|
||||
self.hide_window()
|
||||
return
|
||||
|
||||
elif (self.mode == COMPLETE_ATTRIBUTES and keysym in
|
||||
("period", "space", "parenleft", "parenright", "bracketleft",
|
||||
|
|
|
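The removed branch implemented the rule spelled out in its comments: accept the highlighted completion only if the typed prefix really matches it or the user interacted with the window, otherwise dismiss the window and let Return pass through; the replacement code now always just hides the window. A tiny sketch of that old decision rule, with illustrative names:

def should_accept_completion(start, selected, user_used_window):
    """Sketch of the removed Return-key rule above (names are illustrative).

    Accept the selected completion only when the typed prefix matches it,
    or when the user explicitly interacted with the completion window;
    otherwise the window is closed and the key event passes through.
    """
    return selected.startswith(start) or user_used_window

print(should_accept_completion("for", "format", False))   # True: prefix match
print(should_accept_completion("xyz", "format", False))   # False: let Return through
print(should_accept_completion("xyz", "format", True))    # True: user picked it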
@@ -386,7 +386,7 @@ class EditorWindow(object):

    def help_dialog(self, event=None):
        fn=os.path.join(os.path.abspath(os.path.dirname(__file__)),'help.txt')
        textView.TextViewer(self.top,'Help',fn)
        textView.view_file(self.top,'Help',fn)

    def python_docs(self, event=None):
        if sys.platform[:3] == 'win':

@@ -408,6 +408,7 @@ class EditorWindow(object):

    def paste(self,event):
        self.text.event_generate("<<Paste>>")
        self.text.see("insert")
        return "break"

    def select_all(self, event=None):

@@ -549,7 +550,8 @@ class EditorWindow(object):

    def close_hook(self):
        if self.flist:
            self.flist.close_edit(self)
            self.flist.unregister_maybe_terminate(self)
            self.flist = None

    def set_close_hook(self, close_hook):
        self.close_hook = close_hook

@@ -828,22 +830,21 @@ class EditorWindow(object):
        if self.io.filename:
            self.update_recent_files_list(new_file=self.io.filename)
        WindowList.unregister_callback(self.postwindowsmenu)
        if self.close_hook:
            self.close_hook()
        self.flist = None
        colorizing = 0
        self.unload_extensions()
        self.io.close(); self.io = None
        self.undo = None # XXX
        self.io.close()
        self.io = None
        self.undo = None
        if self.color:
            colorizing = self.color.colorizing
            doh = colorizing and self.top
            self.color.close(doh) # Cancel colorization
            self.color.close(False)
        self.color = None
        self.text = None
        self.tkinter_vars = None
        self.per.close(); self.per = None
        if not colorizing:
            self.top.destroy()
        self.per.close()
        self.per = None
        self.top.destroy()
        if self.close_hook:
            # unless override: unregister from flist, terminate if last window
            self.close_hook()

    def load_extensions(self):
        self.extensions = {}

@@ -1501,6 +1502,7 @@ def test():
    filename = None
    edit = EditorWindow(root=root, filename=filename)
    edit.set_close_hook(root.quit)
    edit.text.bind("<<close-all-windows>>", edit.close_event)
    root.mainloop()
    root.destroy()
@@ -55,7 +55,7 @@ class FileList:
            break
        return "break"

    def close_edit(self, edit):
    def unregister_maybe_terminate(self, edit):
        try:
            key = self.inversedict[edit]
        except KeyError:
@@ -485,13 +485,23 @@ class IOBinding:
        self.text.insert("end-1c", "\n")

    def print_window(self, event):
        m = tkMessageBox.Message(
            title="Print",
            message="Print to Default Printer",
            icon=tkMessageBox.QUESTION,
            type=tkMessageBox.OKCANCEL,
            default=tkMessageBox.OK,
            master=self.text)
        reply = m.show()
        if reply != tkMessageBox.OK:
            self.text.focus_set()
            return "break"
        tempfilename = None
        saved = self.get_saved()
        if saved:
            filename = self.filename
        # shell undo is reset after every prompt, looks saved, probably isn't
        if not saved or filename is None:
            # XXX KBK 08Jun03 Wouldn't it be better to ask the user to save?
            (tfd, tempfilename) = tempfile.mkstemp(prefix='IDLE_tmp_')
            filename = tempfilename
            os.close(tfd)
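The flow above prints either the saved file or, for an unsaved buffer, a throwaway copy written via tempfile.mkstemp. A hedged sketch of that temp-file step (the helper name is made up for illustration):

import os, tempfile

def text_to_printable_file(text, saved, filename):
    """Sketch of the 'print possibly-unsaved buffer' flow above (illustrative).

    If the buffer is saved, print the real file; otherwise dump the current
    text to a private temporary file and print that instead.
    """
    tempfilename = None
    if not saved or filename is None:
        tfd, tempfilename = tempfile.mkstemp(prefix='IDLE_tmp_')
        os.close(tfd)                      # only the name is needed; reopen in text mode
        with open(tempfilename, 'w') as f:
            f.write(text)
        filename = tempfilename
    return filename, tempfilename          # caller prints filename, removes tempfilename

name, tmp = text_to_printable_file("print('hi')\n", saved=False, filename=None)
print(name == tmp)      # True: an unsaved buffer is printed from the temp copy
os.remove(tmp)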
@@ -30,6 +30,24 @@ What's New in IDLE 2.6a1?

*Release date: XX-XXX-200X* UNRELEASED, but merged into 3.0a1

- tabpage.py updated: tabbedPages.py now supports multiple dynamic rows
  of tabs. Patch 1612746 Tal Einat.

- Add confirmation dialog before printing. Patch 1717170 Tal Einat.

- Show paste position if > 80 col. Patch 1659326 Tal Einat.

- Update cursor color without restarting. Patch 1725576 Tal Einat.

- Allow keyboard interrupt only when user code is executing in subprocess.
  Patch 1225 Tal Einat (reworked from IDLE-Spoon).

- configDialog cleanup. Patch 1730217 Tal Einat.

- textView cleanup. Patch 1718043 Tal Einat.

- Clean up EditorWindow close.

- Corrected some bugs in AutoComplete. Also, Page Up/Down in ACW implemented;
  mouse and cursor selection in ACWindow implemented; double Tab inserts
  current selection and closes ACW (similar to double-click and Return); scroll

@@ -51,6 +69,8 @@ What's New in IDLE 2.6a1?
- Bug #813342: Start the IDLE subprocess with -Qnew if the parent
  is started with that option.

- Honor the "Cancel" action in the save dialog (Debian bug #299092)

- Some syntax errors were being caught by tokenize during the tabnanny
  check, resulting in obscure error messages. Do the syntax check
  first. Bug 1562716, 1562719
@@ -296,9 +296,6 @@ class ModifiedColorDelegator(ColorDelegator):
            "stdout": idleConf.GetHighlight(theme, "stdout"),
            "stderr": idleConf.GetHighlight(theme, "stderr"),
            "console": idleConf.GetHighlight(theme, "console"),
            ### KBK 10Aug07: None tag doesn't seem to serve a purpose and
            ### breaks in py3k. Comment out for now.
            #None: idleConf.GetHighlight(theme, "normal"),
        })

class ModifiedUndoDelegator(UndoDelegator):
@@ -1,17 +1,38 @@
from Tkinter import *


class WidgetRedirector:

    """Support for redirecting arbitrary widget subcommands."""
    """Support for redirecting arbitrary widget subcommands.

    Some Tk operations don't normally pass through Tkinter. For example, if a
    character is inserted into a Text widget by pressing a key, a default Tk
    binding to the widget's 'insert' operation is activated, and the Tk library
    processes the insert without calling back into Tkinter.

    Although a binding to <Key> could be made via Tkinter, what we really want
    to do is to hook the Tk 'insert' operation itself.

    When a widget is instantiated, a Tcl command is created whose name is the
    same as the pathname widget._w. This command is used to invoke the various
    widget operations, e.g. insert (for a Text widget). We are going to hook
    this command and provide a facility ('register') to intercept the widget
    operation.

    In IDLE, the function being registered provides access to the top of a
    Percolator chain. At the bottom of the chain is a call to the original
    Tk widget operation.

    """
    def __init__(self, widget):
        self.dict = {}
        self.widget = widget
        self.tk = tk = widget.tk
        w = widget._w
        self._operations = {}
        self.widget = widget # widget instance
        self.tk = tk = widget.tk # widget's root
        w = widget._w # widget's (full) Tk pathname
        self.orig = w + "_orig"
        # Rename the Tcl command within Tcl:
        tk.call("rename", w, self.orig)
        # Create a new Tcl command whose name is the widget's pathname, and
        # whose action is to dispatch on the operation passed to the widget:
        tk.createcommand(w, self.dispatch)

    def __repr__(self):

@@ -19,74 +40,87 @@ class WidgetRedirector:
                                             self.widget._w)

    def close(self):
        for name in list(self.dict.keys()):
            self.unregister(name)
        for operation in list(self._operations):
            self.unregister(operation)
        widget = self.widget; del self.widget
        orig = self.orig; del self.orig
        tk = widget.tk
        w = widget._w
        tk.deletecommand(w)
        # restore the original widget Tcl command:
        tk.call("rename", orig, w)

    def register(self, name, function):
        if name in self.dict:
            previous = dict[name]
        else:
            previous = OriginalCommand(self, name)
        self.dict[name] = function
        setattr(self.widget, name, function)
        return previous
    def register(self, operation, function):
        self._operations[operation] = function
        setattr(self.widget, operation, function)
        return OriginalCommand(self, operation)

    def unregister(self, name):
        if name in self.dict:
            function = self.dict[name]
            del self.dict[name]
            if hasattr(self.widget, name):
                delattr(self.widget, name)
    def unregister(self, operation):
        if operation in self._operations:
            function = self._operations[operation]
            del self._operations[operation]
            if hasattr(self.widget, operation):
                delattr(self.widget, operation)
            return function
        else:
            return None

    def dispatch(self, cmd, *args):
        m = self.dict.get(cmd)
    def dispatch(self, operation, *args):
        '''Callback from Tcl which runs when the widget is referenced.

        If an operation has been registered in self._operations, apply the
        associated function to the args passed into Tcl. Otherwise, pass the
        operation through to Tk via the original Tcl function.

        Note that if a registered function is called, the operation is not
        passed through to Tk. Apply the function returned by self.register()
        to *args to accomplish that. For an example, see ColorDelegator.py.

        '''
        m = self._operations.get(operation)
        try:
            if m:
                return m(*args)
            else:
                return self.tk.call((self.orig, cmd) + args)
                return self.tk.call((self.orig, operation) + args)
        except TclError:
            return ""


class OriginalCommand:

    def __init__(self, redir, name):
    def __init__(self, redir, operation):
        self.redir = redir
        self.name = name
        self.operation = operation
        self.tk = redir.tk
        self.orig = redir.orig
        self.tk_call = self.tk.call
        self.orig_and_name = (self.orig, self.name)
        self.orig_and_operation = (self.orig, self.operation)

    def __repr__(self):
        return "OriginalCommand(%r, %r)" % (self.redir, self.name)
        return "OriginalCommand(%r, %r)" % (self.redir, self.operation)

    def __call__(self, *args):
        return self.tk_call(self.orig_and_name + args)
        return self.tk_call(self.orig_and_operation + args)


def main():
    root = Tk()
    root.wm_protocol("WM_DELETE_WINDOW", root.quit)
    text = Text()
    text.pack()
    text.focus_set()
    redir = WidgetRedirector(text)
    global orig_insert
    global previous_tcl_fcn
    def my_insert(*args):
        print("insert", args)
        orig_insert(*args)
    orig_insert = redir.register("insert", my_insert)
        previous_tcl_fcn(*args)
    previous_tcl_fcn = redir.register("insert", my_insert)
    root.mainloop()
    redir.unregister("insert") # runs after first 'close window'
    redir.close()
    root.mainloop()
    root.destroy()

if __name__ == "__main__":
    main()
@@ -111,45 +111,31 @@ class AboutDialog(Toplevel):
        idle_credits_b.pack(side=LEFT, padx=10, pady=10)

    def ShowLicense(self):
        self.display_printer_text(license, 'About - License')
        self.display_printer_text('About - License', license)

    def ShowCopyright(self):
        self.display_printer_text(copyright, 'About - Copyright')
        self.display_printer_text('About - Copyright', copyright)

    def ShowPythonCredits(self):
        self.display_printer_text(credits, 'About - Python Credits')
        self.display_printer_text('About - Python Credits', credits)

    def ShowIDLECredits(self):
        self.ViewFile('About - Credits','CREDITS.txt')
        self.display_file_text('About - Credits', 'CREDITS.txt', 'iso-8859-1')

    def ShowIDLEAbout(self):
        self.ViewFile('About - Readme', 'README.txt')
        self.display_file_text('About - Readme', 'README.txt')

    def ShowIDLENEWS(self):
        self.ViewFile('About - NEWS', 'NEWS.txt')
        self.display_file_text('About - NEWS', 'NEWS.txt')

    def display_printer_text(self, printer, title):
    def display_printer_text(self, title, printer):
        printer._Printer__setup()
        data = '\n'.join(printer._Printer__lines)
        textView.TextViewer(self, title, None, data)
        text = '\n'.join(printer._Printer__lines)
        textView.view_text(self, title, text)

    def ViewFile(self, viewTitle, viewFile, encoding=None):
        fn = os.path.join(os.path.abspath(os.path.dirname(__file__)), viewFile)
        if encoding:
            import codecs
            try:
                textFile = codecs.open(fn, 'r')
            except IOError:
                import tkMessageBox
                tkMessageBox.showerror(title='File Load Error',
                                       message='Unable to load file %r .' % (fn,),
                                       parent=self)
                return
            else:
                data = textFile.read()
        else:
            data = None
        textView.TextViewer(self, viewTitle, fn, data=data)
    def display_file_text(self, title, filename, encoding=None):
        fn = os.path.join(os.path.abspath(os.path.dirname(__file__)), filename)
        textView.view_file(self, title, fn, encoding)

    def Ok(self, event=None):
        self.destroy()
@ -15,7 +15,7 @@ import copy
|
|||
|
||||
from idlelib.configHandler import idleConf
|
||||
from idlelib.dynOptionMenuWidget import DynOptionMenu
|
||||
from idlelib.tabpage import TabPageSet
|
||||
from idlelib.tabbedpages import TabbedPageSet
|
||||
from idlelib.keybindingDialog import GetKeysDialog
|
||||
from idlelib.configSectionNameDialog import GetCfgSectionNameDialog
|
||||
from idlelib.configHelpSourceEdit import GetHelpSourceDialog
|
||||
|
@ -24,6 +24,8 @@ class ConfigDialog(Toplevel):
|
|||
|
||||
def __init__(self,parent,title):
|
||||
Toplevel.__init__(self, parent)
|
||||
self.wm_withdraw()
|
||||
|
||||
self.configure(borderwidth=5)
|
||||
self.geometry("+%d+%d" % (parent.winfo_rootx()+20,
|
||||
parent.winfo_rooty()+30))
|
||||
|
@ -58,31 +60,37 @@ class ConfigDialog(Toplevel):
|
|||
#self.bind('<F1>',self.Help) #context help
|
||||
self.LoadConfigs()
|
||||
self.AttachVarCallbacks() #avoid callbacks during LoadConfigs
|
||||
|
||||
self.wm_deiconify()
|
||||
self.wait_window()
|
||||
|
||||
def CreateWidgets(self):
|
||||
self.tabPages = TabPageSet(self,
|
||||
pageNames=['Fonts/Tabs','Highlighting','Keys','General'])
|
||||
self.tabPages.ChangePage()#activates default (first) page
|
||||
frameActionButtons = Frame(self)
|
||||
self.tabPages = TabbedPageSet(self,
|
||||
page_names=['Fonts/Tabs','Highlighting','Keys','General'])
|
||||
frameActionButtons = Frame(self,pady=2)
|
||||
#action buttons
|
||||
self.buttonHelp = Button(frameActionButtons,text='Help',
|
||||
command=self.Help,takefocus=FALSE)
|
||||
command=self.Help,takefocus=FALSE,
|
||||
padx=6,pady=3)
|
||||
self.buttonOk = Button(frameActionButtons,text='Ok',
|
||||
command=self.Ok,takefocus=FALSE)
|
||||
command=self.Ok,takefocus=FALSE,
|
||||
padx=6,pady=3)
|
||||
self.buttonApply = Button(frameActionButtons,text='Apply',
|
||||
command=self.Apply,takefocus=FALSE)
|
||||
command=self.Apply,takefocus=FALSE,
|
||||
padx=6,pady=3)
|
||||
self.buttonCancel = Button(frameActionButtons,text='Cancel',
|
||||
command=self.Cancel,takefocus=FALSE)
|
||||
command=self.Cancel,takefocus=FALSE,
|
||||
padx=6,pady=3)
|
||||
self.CreatePageFontTab()
|
||||
self.CreatePageHighlight()
|
||||
self.CreatePageKeys()
|
||||
self.CreatePageGeneral()
|
||||
self.buttonHelp.pack(side=RIGHT,padx=5,pady=5)
|
||||
self.buttonOk.pack(side=LEFT,padx=5,pady=5)
|
||||
self.buttonApply.pack(side=LEFT,padx=5,pady=5)
|
||||
self.buttonCancel.pack(side=LEFT,padx=5,pady=5)
|
||||
self.buttonHelp.pack(side=RIGHT,padx=5)
|
||||
self.buttonOk.pack(side=LEFT,padx=5)
|
||||
self.buttonApply.pack(side=LEFT,padx=5)
|
||||
self.buttonCancel.pack(side=LEFT,padx=5)
|
||||
frameActionButtons.pack(side=BOTTOM)
|
||||
Frame(self, border=0).pack(side=BOTTOM,pady=2)
|
||||
self.tabPages.pack(side=TOP,expand=TRUE,fill=BOTH)
|
||||
|
||||
def CreatePageFontTab(self):
|
||||
|
@ -94,16 +102,17 @@ class ConfigDialog(Toplevel):
|
|||
self.editFont=tkFont.Font(self,('courier',10,'normal'))
|
||||
##widget creation
|
||||
#body frame
|
||||
frame=self.tabPages.pages['Fonts/Tabs']['page']
|
||||
frame=self.tabPages.pages['Fonts/Tabs'].frame
|
||||
#body section frames
|
||||
frameFont=Frame(frame,borderwidth=2,relief=GROOVE)
|
||||
frameIndent=Frame(frame,borderwidth=2,relief=GROOVE)
|
||||
frameFont=LabelFrame(frame,borderwidth=2,relief=GROOVE,
|
||||
text=' Base Editor Font ')
|
||||
frameIndent=LabelFrame(frame,borderwidth=2,relief=GROOVE,
|
||||
text=' Indentation Width ')
|
||||
#frameFont
|
||||
labelFontTitle=Label(frameFont,text='Set Base Editor Font')
|
||||
frameFontName=Frame(frameFont)
|
||||
frameFontParam=Frame(frameFont)
|
||||
labelFontNameTitle=Label(frameFontName,justify=LEFT,
|
||||
text='Font :')
|
||||
text='Font Face :')
|
||||
self.listFontName=Listbox(frameFontName,height=5,takefocus=FALSE,
|
||||
exportselection=FALSE)
|
||||
self.listFontName.bind('<ButtonRelease-1>',self.OnListFontButtonRelease)
|
||||
|
@ -124,14 +133,13 @@ class ConfigDialog(Toplevel):
|
|||
labelSpaceNumTitle=Label(frameIndentSize, justify=LEFT,
|
||||
text='Python Standard: 4 Spaces!')
|
||||
self.scaleSpaceNum=Scale(frameIndentSize, variable=self.spaceNum,
|
||||
label='Indentation Width', orient='horizontal',
|
||||
orient='horizontal',
|
||||
tickinterval=2, from_=2, to=16)
|
||||
#widget packing
|
||||
#body
|
||||
frameFont.pack(side=LEFT,padx=5,pady=10,expand=TRUE,fill=BOTH)
|
||||
frameIndent.pack(side=LEFT,padx=5,pady=10,fill=Y)
|
||||
frameFont.pack(side=LEFT,padx=5,pady=5,expand=TRUE,fill=BOTH)
|
||||
frameIndent.pack(side=LEFT,padx=5,pady=5,fill=Y)
|
||||
#frameFont
|
||||
labelFontTitle.pack(side=TOP,anchor=W,padx=5,pady=5)
|
||||
frameFontName.pack(side=TOP,padx=5,pady=5,fill=X)
|
||||
frameFontParam.pack(side=TOP,padx=5,pady=5,fill=X)
|
||||
labelFontNameTitle.pack(side=TOP,anchor=W)
|
||||
|
@ -143,7 +151,7 @@ class ConfigDialog(Toplevel):
|
|||
frameFontSample.pack(side=TOP,padx=5,pady=5,expand=TRUE,fill=BOTH)
|
||||
self.labelFontSample.pack(expand=TRUE,fill=BOTH)
|
||||
#frameIndent
|
||||
frameIndentSize.pack(side=TOP,padx=5,pady=5,fill=BOTH)
|
||||
frameIndentSize.pack(side=TOP,fill=X)
|
||||
labelSpaceNumTitle.pack(side=TOP,anchor=W,padx=5)
|
||||
self.scaleSpaceNum.pack(side=TOP,padx=5,fill=X)
|
||||
return frame
|
||||
|
@ -158,10 +166,12 @@ class ConfigDialog(Toplevel):
|
|||
self.highlightTarget=StringVar(self)
|
||||
##widget creation
|
||||
#body frame
|
||||
frame=self.tabPages.pages['Highlighting']['page']
|
||||
frame=self.tabPages.pages['Highlighting'].frame
|
||||
#body section frames
|
||||
frameCustom=Frame(frame,borderwidth=2,relief=GROOVE)
|
||||
frameTheme=Frame(frame,borderwidth=2,relief=GROOVE)
|
||||
frameCustom=LabelFrame(frame,borderwidth=2,relief=GROOVE,
|
||||
text=' Custom Highlighting ')
|
||||
frameTheme=LabelFrame(frame,borderwidth=2,relief=GROOVE,
|
||||
text=' Highlighting Theme ')
|
||||
#frameCustom
|
||||
self.textHighlightSample=Text(frameCustom,relief=SOLID,borderwidth=1,
|
||||
font=('courier',12,''),cursor='hand2',width=21,height=10,
|
||||
|
@ -189,7 +199,6 @@ class ConfigDialog(Toplevel):
|
|||
text.config(state=DISABLED)
|
||||
self.frameColourSet=Frame(frameCustom,relief=SOLID,borderwidth=1)
|
||||
frameFgBg=Frame(frameCustom)
|
||||
labelCustomTitle=Label(frameCustom,text='Set Custom Highlighting')
|
||||
buttonSetColour=Button(self.frameColourSet,text='Choose Colour for :',
|
||||
command=self.GetColour,highlightthickness=0)
|
||||
self.optMenuHighlightTarget=DynOptionMenu(self.frameColourSet,
|
||||
|
@ -202,7 +211,6 @@ class ConfigDialog(Toplevel):
|
|||
buttonSaveCustomTheme=Button(frameCustom,
|
||||
text='Save as New Custom Theme',command=self.SaveAsNewTheme)
|
||||
#frameTheme
|
||||
labelThemeTitle=Label(frameTheme,text='Select a Highlighting Theme')
|
||||
labelTypeTitle=Label(frameTheme,text='Select : ')
|
||||
self.radioThemeBuiltin=Radiobutton(frameTheme,variable=self.themeIsBuiltin,
|
||||
value=1,command=self.SetThemeType,text='a Built-in Theme')
|
||||
|
@ -216,10 +224,9 @@ class ConfigDialog(Toplevel):
|
|||
command=self.DeleteCustomTheme)
|
||||
##widget packing
|
||||
#body
|
||||
frameCustom.pack(side=LEFT,padx=5,pady=10,expand=TRUE,fill=BOTH)
|
||||
frameTheme.pack(side=LEFT,padx=5,pady=10,fill=Y)
|
||||
frameCustom.pack(side=LEFT,padx=5,pady=5,expand=TRUE,fill=BOTH)
|
||||
frameTheme.pack(side=LEFT,padx=5,pady=5,fill=Y)
|
||||
#frameCustom
|
||||
labelCustomTitle.pack(side=TOP,anchor=W,padx=5,pady=5)
|
||||
self.frameColourSet.pack(side=TOP,padx=5,pady=5,expand=TRUE,fill=X)
|
||||
frameFgBg.pack(side=TOP,padx=5,pady=0)
|
||||
self.textHighlightSample.pack(side=TOP,padx=5,pady=5,expand=TRUE,
|
||||
|
@ -230,7 +237,6 @@ class ConfigDialog(Toplevel):
|
|||
self.radioBg.pack(side=RIGHT,anchor=W)
|
||||
buttonSaveCustomTheme.pack(side=BOTTOM,fill=X,padx=5,pady=5)
|
||||
#frameTheme
|
||||
labelThemeTitle.pack(side=TOP,anchor=W,padx=5,pady=5)
|
||||
labelTypeTitle.pack(side=TOP,anchor=W,padx=5,pady=5)
|
||||
self.radioThemeBuiltin.pack(side=TOP,anchor=W,padx=5)
|
||||
self.radioThemeCustom.pack(side=TOP,anchor=W,padx=5,pady=2)
|
||||
|
@ -248,13 +254,14 @@ class ConfigDialog(Toplevel):
|
|||
self.keyBinding=StringVar(self)
|
||||
##widget creation
|
||||
#body frame
|
||||
frame=self.tabPages.pages['Keys']['page']
|
||||
frame=self.tabPages.pages['Keys'].frame
|
||||
#body section frames
|
||||
frameCustom=Frame(frame,borderwidth=2,relief=GROOVE)
|
||||
frameKeySets=Frame(frame,borderwidth=2,relief=GROOVE)
|
||||
frameCustom=LabelFrame(frame,borderwidth=2,relief=GROOVE,
|
||||
text=' Custom Key Bindings ')
|
||||
frameKeySets=LabelFrame(frame,borderwidth=2,relief=GROOVE,
|
||||
text=' Key Set ')
|
||||
#frameCustom
|
||||
frameTarget=Frame(frameCustom)
|
||||
labelCustomTitle=Label(frameCustom,text='Set Custom Key Bindings')
|
||||
labelTargetTitle=Label(frameTarget,text='Action - Key(s)')
|
||||
scrollTargetY=Scrollbar(frameTarget)
|
||||
scrollTargetX=Scrollbar(frameTarget,orient=HORIZONTAL)
|
||||
|
@ -270,7 +277,6 @@ class ConfigDialog(Toplevel):
|
|||
buttonSaveCustomKeys=Button(frameCustom,
|
||||
text='Save as New Custom Key Set',command=self.SaveAsNewKeySet)
|
||||
#frameKeySets
|
||||
labelKeysTitle=Label(frameKeySets,text='Select a Key Set')
|
||||
labelTypeTitle=Label(frameKeySets,text='Select : ')
|
||||
self.radioKeysBuiltin=Radiobutton(frameKeySets,variable=self.keysAreBuiltin,
|
||||
value=1,command=self.SetKeysType,text='a Built-in Key Set')
|
||||
|
@ -287,7 +293,6 @@ class ConfigDialog(Toplevel):
|
|||
frameCustom.pack(side=LEFT,padx=5,pady=5,expand=TRUE,fill=BOTH)
|
||||
frameKeySets.pack(side=LEFT,padx=5,pady=5,fill=Y)
|
||||
#frameCustom
|
||||
labelCustomTitle.pack(side=TOP,anchor=W,padx=5,pady=5)
|
||||
buttonSaveCustomKeys.pack(side=BOTTOM,fill=X,padx=5,pady=5)
|
||||
self.buttonNewKeys.pack(side=BOTTOM,fill=X,padx=5,pady=5)
|
||||
frameTarget.pack(side=LEFT,padx=5,pady=5,expand=TRUE,fill=BOTH)
|
||||
|
@ -299,7 +304,6 @@ class ConfigDialog(Toplevel):
|
|||
scrollTargetY.grid(row=1,column=1,sticky=NS)
|
||||
scrollTargetX.grid(row=2,column=0,sticky=EW)
|
||||
#frameKeySets
|
||||
labelKeysTitle.pack(side=TOP,anchor=W,padx=5,pady=5)
|
||||
labelTypeTitle.pack(side=TOP,anchor=W,padx=5,pady=5)
|
||||
self.radioKeysBuiltin.pack(side=TOP,anchor=W,padx=5)
|
||||
self.radioKeysCustom.pack(side=TOP,anchor=W,padx=5,pady=2)
|
||||
|
@ -320,23 +324,24 @@ class ConfigDialog(Toplevel):
|
|||
self.helpBrowser=StringVar(self)
|
||||
#widget creation
|
||||
#body
|
||||
frame=self.tabPages.pages['General']['page']
|
||||
frame=self.tabPages.pages['General'].frame
|
||||
#body section frames
|
||||
frameRun=Frame(frame,borderwidth=2,relief=GROOVE)
|
||||
frameSave=Frame(frame,borderwidth=2,relief=GROOVE)
|
||||
frameRun=LabelFrame(frame,borderwidth=2,relief=GROOVE,
|
||||
text=' Startup Preferences ')
|
||||
frameSave=LabelFrame(frame,borderwidth=2,relief=GROOVE,
|
||||
text=' Autosave Preferences ')
|
||||
frameWinSize=Frame(frame,borderwidth=2,relief=GROOVE)
|
||||
frameParaSize=Frame(frame,borderwidth=2,relief=GROOVE)
|
||||
frameEncoding=Frame(frame,borderwidth=2,relief=GROOVE)
|
||||
frameHelp=Frame(frame,borderwidth=2,relief=GROOVE)
|
||||
frameHelp=LabelFrame(frame,borderwidth=2,relief=GROOVE,
|
||||
text=' Additional Help Sources ')
|
||||
#frameRun
|
||||
labelRunTitle=Label(frameRun,text='Startup Preferences')
|
||||
labelRunChoiceTitle=Label(frameRun,text='At Startup')
|
||||
radioStartupEdit=Radiobutton(frameRun,variable=self.startupEdit,
|
||||
value=1,command=self.SetKeysType,text="Open Edit Window")
|
||||
radioStartupShell=Radiobutton(frameRun,variable=self.startupEdit,
|
||||
value=0,command=self.SetKeysType,text='Open Shell Window')
|
||||
#frameSave
|
||||
labelSaveTitle=Label(frameSave,text='Autosave Preference')
|
||||
labelRunSaveTitle=Label(frameSave,text='At Start of Run (F5) ')
|
||||
radioSaveAsk=Radiobutton(frameSave,variable=self.autoSave,
|
||||
value=0,command=self.SetKeysType,text="Prompt to Save")
|
||||
|
@ -367,7 +372,6 @@ class ConfigDialog(Toplevel):
|
|||
#frameHelp
|
||||
frameHelpList=Frame(frameHelp)
|
||||
frameHelpListButtons=Frame(frameHelpList)
|
||||
labelHelpListTitle=Label(frameHelpList,text='Additional Help Sources:')
|
||||
scrollHelpList=Scrollbar(frameHelpList)
|
||||
self.listHelp=Listbox(frameHelpList,height=5,takefocus=FALSE,
|
||||
exportselection=FALSE)
|
||||
|
@ -389,12 +393,10 @@ class ConfigDialog(Toplevel):
|
|||
frameEncoding.pack(side=TOP,padx=5,pady=5,fill=X)
|
||||
frameHelp.pack(side=TOP,padx=5,pady=5,expand=TRUE,fill=BOTH)
|
||||
#frameRun
|
||||
labelRunTitle.pack(side=TOP,anchor=W,padx=5,pady=5)
|
||||
labelRunChoiceTitle.pack(side=LEFT,anchor=W,padx=5,pady=5)
|
||||
radioStartupShell.pack(side=RIGHT,anchor=W,padx=5,pady=5)
|
||||
radioStartupEdit.pack(side=RIGHT,anchor=W,padx=5,pady=5)
|
||||
#frameSave
|
||||
labelSaveTitle.pack(side=TOP,anchor=W,padx=5,pady=5)
|
||||
labelRunSaveTitle.pack(side=LEFT,anchor=W,padx=5,pady=5)
|
||||
radioSaveAuto.pack(side=RIGHT,anchor=W,padx=5,pady=5)
|
||||
radioSaveAsk.pack(side=RIGHT,anchor=W,padx=5,pady=5)
|
||||
|
@ -415,7 +417,6 @@ class ConfigDialog(Toplevel):
|
|||
#frameHelp
|
||||
frameHelpListButtons.pack(side=RIGHT,padx=5,pady=5,fill=Y)
|
||||
frameHelpList.pack(side=TOP,padx=5,pady=5,expand=TRUE,fill=BOTH)
|
||||
labelHelpListTitle.pack(side=TOP,anchor=W)
|
||||
scrollHelpList.pack(side=RIGHT,anchor=W,fill=Y)
|
||||
self.listHelp.pack(side=LEFT,anchor=E,expand=TRUE,fill=BOTH)
|
||||
self.buttonHelpListEdit.pack(side=TOP,anchor=W,pady=5)
|
||||
|
@ -1116,12 +1117,15 @@ class ConfigDialog(Toplevel):
|
|||
def ActivateConfigChanges(self):
|
||||
"Dynamically apply configuration changes"
|
||||
winInstances = self.parent.instance_dict.keys()
|
||||
theme = idleConf.CurrentTheme()
|
||||
cursor_color = idleConf.GetHighlight(theme, 'cursor', fgBg='fg')
|
||||
for instance in winInstances:
|
||||
instance.ResetColorizer()
|
||||
instance.ResetFont()
|
||||
instance.set_notabs_indentwidth()
|
||||
instance.ApplyKeybindings()
|
||||
instance.reset_help_menu_entries()
|
||||
instance.text.configure(insertbackground=cursor_color)
|
||||
|
||||
def Cancel(self):
|
||||
self.destroy()
|
||||
|
|
|
@@ -38,10 +38,11 @@ else:

# Thread shared globals: Establish a queue between a subthread (which handles
# the socket) and the main thread (which runs user code), plus global
# completion and exit flags:
# completion, exit and interruptable (the main thread) flags:

exit_now = False
quitting = False
interruptable = False

def main(del_exitfunc=False):
    """Start the Python execution server in a subprocess

@@ -278,9 +279,14 @@ class Executive(object):
        self.autocomplete = AutoComplete.AutoComplete()

    def runcode(self, code):
        global interruptable
        try:
            self.usr_exc_info = None
            exec(code, self.locals)
            interruptable = True
            try:
                exec(code, self.locals)
            finally:
                interruptable = False
        except:
            self.usr_exc_info = sys.exc_info()
            if quitting:

@@ -294,7 +300,8 @@ class Executive(object):
        flush_stdout()

    def interrupt_the_server(self):
        thread.interrupt_main()
        if interruptable:
            thread.interrupt_main()

    def start_the_debugger(self, gui_adap_oid):
        return RemoteDebugger.start_debugger(self.rpchandler, gui_adap_oid)
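A compact sketch of the guard pattern introduced here: the flag is raised only around the exec of user code, and an interrupt request is simply dropped unless that flag is set. This sketch uses the Python 3 spelling _thread.interrupt_main(); the timer, names and timing are illustrative only, not the IDLE RPC machinery.

import threading, _thread

interruptable = False    # module-level flag, mirroring the pattern above

def run_user_code(code, namespace):
    """Only user code may be interrupted; server bookkeeping never is."""
    global interruptable
    try:
        interruptable = True
        try:
            exec(code, namespace)
        finally:
            interruptable = False
    except KeyboardInterrupt:
        print("user code interrupted")

def interrupt_if_running():
    """The interrupt request is ignored unless user code is executing."""
    if interruptable:
        _thread.interrupt_main()

threading.Timer(0.2, interrupt_if_running).start()
run_user_code("import time\nwhile True: time.sleep(0.01)", {})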
@ -0,0 +1,478 @@
|
|||
"""An implementation of tabbed pages using only standard Tkinter.
|
||||
|
||||
Originally developed for use in IDLE. Based on tabpage.py.
|
||||
|
||||
Classes exported:
|
||||
TabbedPageSet -- A Tkinter implementation of a tabbed-page widget.
|
||||
TabBarSet -- A widget containing tabs (buttons) in one or more rows.
|
||||
|
||||
"""
|
||||
from Tkinter import *
|
||||
|
||||
class InvalidNameError(Exception): pass
|
||||
class AlreadyExistsError(Exception): pass
|
||||
|
||||
|
||||
class TabBarSet(Frame):
|
||||
"""A widget containing tabs (buttons) in one or more rows.
|
||||
|
||||
Only one tab may be selected at a time.
|
||||
|
||||
"""
|
||||
def __init__(self, page_set, select_command,
|
||||
tabs=None, n_rows=1, max_tabs_per_row=5,
|
||||
expand_tabs=False, **kw):
|
||||
"""Constructor arguments:
|
||||
|
||||
select_command -- A callable which will be called when a tab is
|
||||
selected. It is called with the name of the selected tab as an
|
||||
argument.
|
||||
|
||||
tabs -- A list of strings, the names of the tabs. Should be specified in
|
||||
the desired tab order. The first tab will be the default and first
|
||||
active tab. If tabs is None or empty, the TabBarSet will be initialized
|
||||
empty.
|
||||
|
||||
n_rows -- Number of rows of tabs to be shown. If n_rows <= 0 or is
|
||||
None, then the number of rows will be decided by TabBarSet. See
|
||||
_arrange_tabs() for details.
|
||||
|
||||
max_tabs_per_row -- Used for deciding how many rows of tabs are needed,
|
||||
when the number of rows is not constant. See _arrange_tabs() for
|
||||
details.
|
||||
|
||||
"""
|
||||
Frame.__init__(self, page_set, **kw)
|
||||
self.select_command = select_command
|
||||
self.n_rows = n_rows
|
||||
self.max_tabs_per_row = max_tabs_per_row
|
||||
self.expand_tabs = expand_tabs
|
||||
self.page_set = page_set
|
||||
|
||||
self._tabs = {}
|
||||
self._tab2row = {}
|
||||
if tabs:
|
||||
self._tab_names = list(tabs)
|
||||
else:
|
||||
self._tab_names = []
|
||||
self._selected_tab = None
|
||||
self._tab_rows = []
|
||||
|
||||
self.padding_frame = Frame(self, height=2,
|
||||
borderwidth=0, relief=FLAT,
|
||||
background=self.cget('background'))
|
||||
self.padding_frame.pack(side=TOP, fill=X, expand=False)
|
||||
|
||||
self._arrange_tabs()
|
||||
|
||||
def add_tab(self, tab_name):
|
||||
"""Add a new tab with the name given in tab_name."""
|
||||
if not tab_name:
|
||||
raise InvalidNameError("Invalid Tab name: '%s'" % tab_name)
|
||||
if tab_name in self._tab_names:
|
||||
raise AlreadyExistsError("Tab named '%s' already exists" %tab_name)
|
||||
|
||||
self._tab_names.append(tab_name)
|
||||
self._arrange_tabs()
|
||||
|
||||
def remove_tab(self, tab_name):
|
||||
"""Remove the tab with the name given in tab_name."""
|
||||
if not tab_name in self._tab_names:
|
||||
raise KeyError("No such Tab: '%s" % page_name)
|
||||
|
||||
self._tab_names.remove(tab_name)
|
||||
self._arrange_tabs()
|
||||
|
||||
def select_tab(self, tab_name):
|
||||
"""Select the tab with the name given in tab_name."""
|
||||
if tab_name == self._selected_tab:
|
||||
return
|
||||
if tab_name is not None and tab_name not in self._tabs:
|
||||
raise KeyError("No such Tab: '%s" % page_name)
|
||||
|
||||
# deselect the current selected tab
|
||||
if self._selected_tab is not None:
|
||||
self._tabs[self._selected_tab].set_normal()
|
||||
self._selected_tab = None
|
||||
|
||||
if tab_name is not None:
|
||||
# activate the tab named tab_name
|
||||
self._selected_tab = tab_name
|
||||
tab = self._tabs[tab_name]
|
||||
tab.set_selected()
|
||||
# move the tab row with the selected tab to the bottom
|
||||
tab_row = self._tab2row[tab]
|
||||
tab_row.pack_forget()
|
||||
tab_row.pack(side=TOP, fill=X, expand=0)
|
||||
|
||||
def _add_tab_row(self, tab_names, expand_tabs):
|
||||
if not tab_names:
|
||||
return
|
||||
|
||||
tab_row = Frame(self)
|
||||
tab_row.pack(side=TOP, fill=X, expand=0)
|
||||
tab_row.tab_set = self
|
||||
self._tab_rows.append(tab_row)
|
||||
|
||||
for tab_name in tab_names:
|
||||
def tab_command(select_command=self.select_command,
|
||||
tab_name=tab_name):
|
||||
return select_command(tab_name)
|
||||
tab = TabBarSet.TabButton(tab_row, tab_name, tab_command)
|
||||
if expand_tabs:
|
||||
tab.pack(side=LEFT, fill=X, expand=True)
|
||||
else:
|
||||
tab.pack(side=LEFT)
|
||||
self._tabs[tab_name] = tab
|
||||
self._tab2row[tab] = tab_row
|
||||
|
||||
tab.is_last_in_row = True
|
||||
|
||||
def _reset_tab_rows(self):
|
||||
while self._tab_rows:
|
||||
tab_row = self._tab_rows.pop()
|
||||
tab_row.destroy()
|
||||
self._tab2row = {}
|
||||
|
||||
def _arrange_tabs(self):
|
||||
"""
|
||||
Arrange the tabs in rows, in the order in which they were added.
|
||||
|
||||
If n_rows >= 1, this will be the number of rows used. Otherwise the
|
||||
number of rows will be calculated according to the number of tabs and
|
||||
max_tabs_per_row. In this case, the number of rows may change when
|
||||
adding/removing tabs.
|
||||
|
||||
"""
|
||||
# remove all tabs and rows
|
||||
for tab_name in self._tabs.keys():
|
||||
self._tabs.pop(tab_name).destroy()
|
||||
self._reset_tab_rows()
|
||||
|
||||
if not self._tab_names:
|
||||
return
|
||||
|
||||
if self.n_rows is not None and self.n_rows > 0:
|
||||
n_rows = self.n_rows
|
||||
else:
|
||||
# calculate the required number of rows
|
||||
n_rows = (len(self._tab_names) - 1) // self.max_tabs_per_row + 1
|
||||
|
||||
i = 0
|
||||
expand_tabs = self.expand_tabs or n_rows > 1
|
||||
for row_index in xrange(n_rows):
|
||||
# calculate required number of tabs in this row
|
||||
n_tabs = (len(self._tab_names) - i - 1) // (n_rows - row_index) + 1
|
||||
tab_names = self._tab_names[i:i + n_tabs]
|
||||
i += n_tabs
|
||||
self._add_tab_row(tab_names, expand_tabs)
|
||||
|
||||
# re-select selected tab so it is properly displayed
|
||||
selected = self._selected_tab
|
||||
self.select_tab(None)
|
||||
if selected in self._tab_names:
|
||||
self.select_tab(selected)
|
||||
|
||||
class TabButton(Frame):
|
||||
"""A simple tab-like widget."""
|
||||
|
||||
bw = 2 # borderwidth
|
||||
|
||||
def __init__(self, tab_row, name, command):
|
||||
"""Constructor arguments:
|
||||
|
||||
name -- The tab's name, which will appear in its button.
|
||||
|
||||
command -- The command to be called upon selection of the tab. It
|
||||
is called with the tab's name as an argument.
|
||||
|
||||
"""
|
||||
Frame.__init__(self, tab_row, borderwidth=self.bw)
|
||||
self.button = Radiobutton(self, text=name, command=command,
|
||||
padx=5, pady=1, takefocus=FALSE, indicatoron=FALSE,
|
||||
highlightthickness=0, selectcolor='', borderwidth=0)
|
||||
self.button.pack(side=LEFT, fill=X, expand=True)
|
||||
|
||||
self.tab_set = tab_row.tab_set
|
||||
|
||||
self.is_last_in_row = False
|
||||
|
||||
self._init_masks()
|
||||
self.set_normal()
|
||||
|
||||
def set_selected(self):
|
||||
"""Assume selected look"""
|
||||
for widget in self, self.mskl.ml, self.mskr.mr:
|
||||
widget.config(relief=RAISED)
|
||||
self._place_masks(selected=True)
|
||||
|
||||
def set_normal(self):
|
||||
"""Assume normal look"""
|
||||
for widget in self, self.mskl.ml, self.mskr.mr:
|
||||
widget.config(relief=RAISED)
|
||||
self._place_masks(selected=False)
|
||||
|
||||
def _init_masks(self):
|
||||
page_set = self.tab_set.page_set
|
||||
background = page_set.pages_frame.cget('background')
|
||||
# mask replaces the middle of the border with the background color
|
||||
self.mask = Frame(page_set, borderwidth=0, relief=FLAT,
|
||||
background=background)
|
||||
# mskl replaces the bottom-left corner of the border with a normal
|
||||
# left border
|
||||
self.mskl = Frame(page_set, borderwidth=0, relief=FLAT,
|
||||
background=background)
|
||||
self.mskl.ml = Frame(self.mskl, borderwidth=self.bw,
|
||||
relief=RAISED)
|
||||
self.mskl.ml.place(x=0, y=-self.bw,
|
||||
width=2*self.bw, height=self.bw*4)
|
||||
# mskr replaces the bottom-right corner of the border with a normal
|
||||
# right border
|
||||
self.mskr = Frame(page_set, borderwidth=0, relief=FLAT,
|
||||
background=background)
|
||||
self.mskr.mr = Frame(self.mskr, borderwidth=self.bw,
|
||||
relief=RAISED)
|
||||
|
||||
def _place_masks(self, selected=False):
|
||||
height = self.bw
|
||||
if selected:
|
||||
height += self.bw
|
||||
|
||||
self.mask.place(in_=self,
|
||||
relx=0.0, x=0,
|
||||
rely=1.0, y=0,
|
||||
relwidth=1.0, width=0,
|
||||
relheight=0.0, height=height)
|
||||
|
||||
self.mskl.place(in_=self,
|
||||
relx=0.0, x=-self.bw,
|
||||
rely=1.0, y=0,
|
||||
relwidth=0.0, width=self.bw,
|
||||
relheight=0.0, height=height)
|
||||
|
||||
page_set = self.tab_set.page_set
|
||||
if selected and ((not self.is_last_in_row) or
|
||||
(self.winfo_rootx() + self.winfo_width() <
|
||||
page_set.winfo_rootx() + page_set.winfo_width())
|
||||
):
|
||||
# for a selected tab, if its rightmost edge isn't on the
|
||||
# rightmost edge of the page set, the right mask should be one
|
||||
# borderwidth shorter (vertically)
|
||||
height -= self.bw
|
||||
|
||||
self.mskr.place(in_=self,
|
||||
relx=1.0, x=0,
|
||||
rely=1.0, y=0,
|
||||
relwidth=0.0, width=self.bw,
|
||||
relheight=0.0, height=height)
|
||||
|
||||
self.mskr.mr.place(x=-self.bw, y=-self.bw,
|
||||
width=2*self.bw, height=height + self.bw*2)
|
||||
|
||||
# finally, lower the tab set so that all of the frames we just
|
||||
# placed hide it
|
||||
self.tab_set.lower()
|
||||
|
||||
class TabbedPageSet(Frame):
|
||||
"""A Tkinter tabbed-pane widget.
|
||||
|
||||
Constains set of 'pages' (or 'panes') with tabs above for selecting which
|
||||
page is displayed. Only one page will be displayed at a time.
|
||||
|
||||
Pages may be accessed through the 'pages' attribute, which is a dictionary
|
||||
of pages, using the name given as the key. A page is an instance of a
|
||||
subclass of Tk's Frame widget.
|
||||
|
||||
The page widgets will be created (and destroyed when required) by the
|
||||
TabbedPageSet. Do not call the page's pack/place/grid/destroy methods.
|
||||
|
||||
Pages may be added or removed at any time using the add_page() and
|
||||
remove_page() methods.
|
||||
|
||||
"""
|
||||
class Page(object):
|
||||
"""Abstract base class for TabbedPageSet's pages.
|
||||
|
||||
Subclasses must override the _show() and _hide() methods.
|
||||
|
||||
"""
|
||||
uses_grid = False
|
||||
|
||||
def __init__(self, page_set):
|
||||
self.frame = Frame(page_set, borderwidth=2, relief=RAISED)
|
||||
|
||||
def _show(self):
|
||||
raise NotImplementedError
|
||||
|
||||
def _hide(self):
|
||||
raise NotImplementedError
|
||||
|
||||
class PageRemove(Page):
|
||||
"""Page class using the grid placement manager's "remove" mechanism."""
|
||||
uses_grid = True
|
||||
|
||||
def _show(self):
|
||||
self.frame.grid(row=0, column=0, sticky=NSEW)
|
||||
|
||||
def _hide(self):
|
||||
self.frame.grid_remove()
|
||||
|
||||
class PageLift(Page):
|
||||
"""Page class using the grid placement manager's "lift" mechanism."""
|
||||
uses_grid = True
|
||||
|
||||
def __init__(self, page_set):
|
||||
super(TabbedPageSet.PageLift, self).__init__(page_set)
|
||||
self.frame.grid(row=0, column=0, sticky=NSEW)
|
||||
self.frame.lower()
|
||||
|
||||
def _show(self):
|
||||
self.frame.lift()
|
||||
|
||||
def _hide(self):
|
||||
self.frame.lower()
|
||||
|
||||
class PagePackForget(Page):
|
||||
"""Page class using the pack placement manager's "forget" mechanism."""
|
||||
def _show(self):
|
||||
self.frame.pack(fill=BOTH, expand=True)
|
||||
|
||||
def _hide(self):
|
||||
self.frame.pack_forget()
|
||||
|
||||
def __init__(self, parent, page_names=None, page_class=PageLift,
|
||||
n_rows=1, max_tabs_per_row=5, expand_tabs=False,
|
||||
**kw):
|
||||
"""Constructor arguments:
|
||||
|
||||
page_names -- A list of strings, each will be the dictionary key to a
|
||||
page's widget, and the name displayed on the page's tab. Should be
|
||||
specified in the desired page order. The first page will be the default
|
||||
and first active page. If page_names is None or empty, the
|
||||
TabbedPageSet will be initialized empty.
|
||||
|
||||
n_rows, max_tabs_per_row -- Parameters for the TabBarSet which will
|
||||
manage the tabs. See TabBarSet's docs for details.
|
||||
|
||||
page_class -- Pages can be shown/hidden using three mechanisms:
|
||||
|
||||
* PageLift - All pages will be rendered one on top of the other. When
|
||||
a page is selected, it will be brought to the top, thus hiding all
|
||||
other pages. Using this method, the TabbedPageSet will not be resized
|
||||
when pages are switched. (It may still be resized when pages are
|
||||
added/removed.)
|
||||
|
||||
* PageRemove - When a page is selected, the currently showing page is
|
||||
hidden, and the new page shown in its place. Using this method, the
|
||||
TabbedPageSet may resize when pages are changed.
|
||||
|
||||
* PagePackForget - This mechanism uses the pack placement manager.
|
||||
When a page is shown it is packed, and when it is hidden it is
|
||||
unpacked (i.e. pack_forget). This mechanism may also cause the
|
||||
TabbedPageSet to resize when the page is changed.
|
||||
|
||||
"""
|
||||
Frame.__init__(self, parent, kw)
|
||||
|
||||
self.page_class = page_class
|
||||
self.pages = {}
|
||||
self._pages_order = []
|
||||
self._current_page = None
|
||||
self._default_page = None
|
||||
|
||||
self.columnconfigure(0, weight=1)
|
||||
self.rowconfigure(1, weight=1)
|
||||
|
||||
self.pages_frame = Frame(self)
|
||||
self.pages_frame.grid(row=1, column=0, sticky=NSEW)
|
||||
if self.page_class.uses_grid:
|
||||
self.pages_frame.columnconfigure(0, weight=1)
|
||||
self.pages_frame.rowconfigure(0, weight=1)
|
||||
|
||||
# the order of the following commands is important
|
||||
self._tab_set = TabBarSet(self, self.change_page, n_rows=n_rows,
|
||||
max_tabs_per_row=max_tabs_per_row,
|
||||
expand_tabs=expand_tabs)
|
||||
if page_names:
|
||||
for name in page_names:
|
||||
self.add_page(name)
|
||||
self._tab_set.grid(row=0, column=0, sticky=NSEW)
|
||||
|
||||
self.change_page(self._default_page)
|
||||
|
||||
def add_page(self, page_name):
|
||||
"""Add a new page with the name given in page_name."""
|
||||
if not page_name:
|
||||
raise InvalidNameError("Invalid TabPage name: '%s'" % page_name)
|
||||
if page_name in self.pages:
|
||||
raise AlreadyExistsError(
|
||||
"TabPage named '%s' already exists" % page_name)
|
||||
|
||||
self.pages[page_name] = self.page_class(self.pages_frame)
|
||||
self._pages_order.append(page_name)
|
||||
self._tab_set.add_tab(page_name)
|
||||
|
||||
if len(self.pages) == 1: # adding first page
|
||||
self._default_page = page_name
|
||||
self.change_page(page_name)
|
||||
|
||||
def remove_page(self, page_name):
|
||||
"""Destroy the page whose name is given in page_name."""
|
||||
if not page_name in self.pages:
|
||||
raise KeyError("No such TabPage: '%s" % page_name)
|
||||
|
||||
self._pages_order.remove(page_name)
|
||||
|
||||
# handle removing last remaining, default, or currently shown page
|
||||
if len(self._pages_order) > 0:
|
||||
if page_name == self._default_page:
|
||||
# set a new default page
|
||||
self._default_page = self._pages_order[0]
|
||||
else:
|
||||
self._default_page = None
|
||||
|
||||
if page_name == self._current_page:
|
||||
self.change_page(self._default_page)
|
||||
|
||||
self._tab_set.remove_tab(page_name)
|
||||
page = self.pages.pop(page_name)
|
||||
page.frame.destroy()
|
||||
|
||||
def change_page(self, page_name):
|
||||
"""Show the page whose name is given in page_name."""
|
||||
if self._current_page == page_name:
|
||||
return
|
||||
if page_name is not None and page_name not in self.pages:
|
||||
raise KeyError("No such TabPage: '%s'" % page_name)
|
||||
|
||||
if self._current_page is not None:
|
||||
self.pages[self._current_page]._hide()
|
||||
self._current_page = None
|
||||
|
||||
if page_name is not None:
|
||||
self._current_page = page_name
|
||||
self.pages[page_name]._show()
|
||||
|
||||
self._tab_set.select_tab(page_name)
|
||||
|
||||
if __name__ == '__main__':
|
||||
# test dialog
|
||||
root=Tk()
|
||||
tabPage=TabbedPageSet(root, page_names=['Foobar','Baz'], n_rows=0,
|
||||
expand_tabs=False,
|
||||
)
|
||||
tabPage.pack(side=TOP, expand=TRUE, fill=BOTH)
|
||||
Label(tabPage.pages['Foobar'].frame, text='Foo', pady=20).pack()
|
||||
Label(tabPage.pages['Foobar'].frame, text='Bar', pady=20).pack()
|
||||
Label(tabPage.pages['Baz'].frame, text='Baz').pack()
|
||||
entryPgName=Entry(root)
|
||||
buttonAdd=Button(root, text='Add Page',
|
||||
command=lambda:tabPage.add_page(entryPgName.get()))
|
||||
buttonRemove=Button(root, text='Remove Page',
|
||||
command=lambda:tabPage.remove_page(entryPgName.get()))
|
||||
labelPgName=Label(root, text='name of page to add/remove:')
|
||||
buttonAdd.pack(padx=5, pady=5)
|
||||
buttonRemove.pack(padx=5, pady=5)
|
||||
labelPgName.pack(padx=5)
|
||||
entryPgName.pack(padx=5)
|
||||
root.mainloop()
|
|
@@ -6,13 +6,12 @@ from Tkinter import *
import tkMessageBox

class TextViewer(Toplevel):
    """
    simple text viewer dialog for idle
    """
    def __init__(self, parent, title, fileName, data=None):
        """If data exists, load it into viewer, otherwise try to load file.
    """A simple text viewer dialog for IDLE

    """
    def __init__(self, parent, title, text):
        """Show the given text in a scrollable window with a 'close' button

        fileName - string, should be an absoulute filename
        """
        Toplevel.__init__(self, parent)
        self.configure(borderwidth=5)

@@ -33,23 +32,10 @@ class TextViewer(Toplevel):
        #key bindings for this dialog
        self.bind('<Return>',self.Ok) #dismiss dialog
        self.bind('<Escape>',self.Ok) #dismiss dialog
        if data:
            self.textView.insert(0.0, data)
        else:
            self.LoadTextFile(fileName)
        self.textView.insert(0.0, text)
        self.textView.config(state=DISABLED)
        self.wait_window()

    def LoadTextFile(self, fileName):
        textFile = None
        try:
            textFile = open(fileName, 'r')
        except IOError:
            tkMessageBox.showerror(title='File Load Error',
                                   message='Unable to load file %r .' % (fileName,))
        else:
            self.textView.insert(0.0,textFile.read())

    def CreateWidgets(self):
        frameText = Frame(self, relief=SUNKEN, height=700)
        frameButtons = Frame(self)

@@ -70,9 +56,38 @@ class TextViewer(Toplevel):
    def Ok(self, event=None):
        self.destroy()


def view_text(parent, title, text):
    TextViewer(parent, title, text)

def view_file(parent, title, filename, encoding=None):
    try:
        if encoding:
            import codecs
            textFile = codecs.open(filename, 'r')
        else:
            textFile = open(filename, 'r')
    except IOError:
        import tkMessageBox
        tkMessageBox.showerror(title='File Load Error',
                               message='Unable to load file %r .' % filename,
                               parent=parent)
    else:
        return view_text(parent, title, textFile.read())


if __name__ == '__main__':
    #test the dialog
    root=Tk()
    Button(root,text='View',
           command=lambda:TextViewer(root,'Text','./textView.py')).pack()
    root.title('textView test')
    filename = './textView.py'
    text = file(filename, 'r').read()
    btn1 = Button(root, text='view_text',
                  command=lambda:view_text(root, 'view_text', text))
    btn1.pack(side=LEFT)
    btn2 = Button(root, text='view_file',
                  command=lambda:view_file(root, 'view_file', filename))
    btn2.pack(side=LEFT)
    close = Button(root, text='Close', command=root.destroy)
    close.pack(side=RIGHT)
    root.mainloop()
@@ -41,8 +41,8 @@ except ImportError:

__author__ = "Vinay Sajip <vinay_sajip@red-dove.com>"
__status__ = "production"
__version__ = "0.5.0.2"
__date__ = "16 February 2007"
__version__ = "0.5.0.3"
__date__ = "26 September 2007"

#---------------------------------------------------------------------------
# Miscellaneous module data

@@ -236,7 +236,7 @@ class LogRecord:
        # 'Value is %d' instead of 'Value is 0'.
        # For the use case of passing a dictionary, this should not be a
        # problem.
        if args and (len(args) == 1) and args[0] and isinstance(args[0], dict):
        if args and len(args) == 1 and isinstance(args[0], dict) and args[0]:
            args = args[0]
        self.args = args
        self.levelname = getLevelName(level)

@@ -730,7 +730,8 @@ class StreamHandler(Handler):
        """
        Flushes the stream.
        """
        self.stream.flush()
        if self.stream:
            self.stream.flush()

    def emit(self, record):
        """

@@ -780,9 +781,11 @@ class FileHandler(StreamHandler):
        """
        Closes the stream.
        """
        self.flush()
        self.stream.close()
        StreamHandler.close(self)
        if self.stream:
            self.flush()
            self.stream.close()
            StreamHandler.close(self)
            self.stream = None

    def _open(self):
        """

@@ -1245,7 +1248,7 @@ def basicConfig(**kwargs):
        hdlr.setFormatter(fmt)
        root.addHandler(hdlr)
        level = kwargs.get("level")
        if level:
        if level is not None:
            root.setLevel(level)

#---------------------------------------------------------------------------
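Two of the hunks above are defensive reorderings: basicConfig() now honours an explicit level of 0 (logging.NOTSET) instead of silently skipping it, and LogRecord checks isinstance() before truth-testing a single argument, so an argument with an expensive or undefined truth value no longer breaks logging. A small hedged illustration of the latter (the Weird class is invented for the example):

import logging
logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")

class Weird(object):
    """An argument whose truth value raises; with the reordered check,
    isinstance() short-circuits before the argument is ever truth-tested."""
    def __bool__(self):
        raise TypeError("truth value is ambiguous")
    __nonzero__ = __bool__   # Python 2 spelling of the same hook

logging.debug("payload=%r", Weird())   # works: the arg is never truth-tested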
@@ -231,11 +231,11 @@ class TimedRotatingFileHandler(BaseRotatingHandler):
            # of days in the next week until the rollover day (3).
            if when.startswith('W'):
                day = t[6] # 0 is Monday
                if day > self.dayOfWeek:
                    daysToWait = (day - self.dayOfWeek) - 1
                    self.rolloverAt = self.rolloverAt + (daysToWait * (60 * 60 * 24))
                if day < self.dayOfWeek:
                    daysToWait = (6 - self.dayOfWeek) + day
                if day != self.dayOfWeek:
                    if day < self.dayOfWeek:
                        daysToWait = self.dayOfWeek - day - 1
                    else:
                        daysToWait = 6 - day + self.dayOfWeek
                    self.rolloverAt = self.rolloverAt + (daysToWait * (60 * 60 * 24))

        #print "Will rollover at %d, %d seconds from now" % (self.rolloverAt, self.rolloverAt - currentTime)
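The corrected branch computes how many extra whole days to add to a rollover time that is already scheduled for the next midnight, with Monday = 0. A small check of that arithmetic, mirroring the patched formula (the off-by-one interpretation is mine, hedged):

def days_to_wait(day, rollover_day):
    """Days to add beyond the already-scheduled next midnight (0 = Monday),
    following the corrected branch above."""
    if day == rollover_day:
        return 0
    if day < rollover_day:
        return rollover_day - day - 1
    return 6 - day + rollover_day

# e.g. logs roll over on Thursday (3):
print(days_to_wait(0, 3))   # Monday   -> 2 more days after tomorrow's midnight
print(days_to_wait(5, 3))   # Saturday -> 4 more days
print(days_to_wait(3, 3))   # Thursday -> roll over at the next scheduled time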
@@ -393,6 +393,7 @@ def _default_mime_types():
        '.movie' : 'video/x-sgi-movie',
        '.mp2' : 'audio/mpeg',
        '.mp3' : 'audio/mpeg',
        '.mp4' : 'video/mp4',
        '.mpa' : 'video/mpeg',
        '.mpe' : 'video/mpeg',
        '.mpeg' : 'video/mpeg',
@ -1,6 +1,28 @@
|
|||
# Generated by h2py from /usr/include/netinet/in.h
|
||||
|
||||
# Included from sys/cdefs.h
|
||||
__GNUCLIKE_ASM = 3
|
||||
__GNUCLIKE_ASM = 2
|
||||
__GNUCLIKE___TYPEOF = 1
|
||||
__GNUCLIKE___OFFSETOF = 1
|
||||
__GNUCLIKE___SECTION = 1
|
||||
__GNUCLIKE_ATTRIBUTE_MODE_DI = 1
|
||||
__GNUCLIKE_CTOR_SECTION_HANDLING = 1
|
||||
__GNUCLIKE_BUILTIN_CONSTANT_P = 1
|
||||
__GNUCLIKE_BUILTIN_VARARGS = 1
|
||||
__GNUCLIKE_BUILTIN_STDARG = 1
|
||||
__GNUCLIKE_BUILTIN_VAALIST = 1
|
||||
__GNUC_VA_LIST_COMPATIBILITY = 1
|
||||
__GNUCLIKE_BUILTIN_NEXT_ARG = 1
|
||||
__GNUCLIKE_BUILTIN_MEMCPY = 1
|
||||
__CC_SUPPORTS_INLINE = 1
|
||||
__CC_SUPPORTS___INLINE = 1
|
||||
__CC_SUPPORTS___INLINE__ = 1
|
||||
__CC_SUPPORTS___FUNC__ = 1
|
||||
__CC_SUPPORTS_WARNING = 1
|
||||
__CC_SUPPORTS_VARADIC_XXX = 1
|
||||
__CC_SUPPORTS_DYNAMIC_ARRAY_INIT = 1
|
||||
__CC_INT_IS_32BIT = 1
|
||||
def __P(protos): return protos
|
||||
|
||||
def __STRING(x): return #x
|
||||
|
@ -29,6 +51,8 @@ def __predict_true(exp): return (exp)
|
|||
|
||||
def __predict_false(exp): return (exp)
|
||||
|
||||
def __format_arg(fmtarg): return __attribute__((__format_arg__ (fmtarg)))
|
||||
|
||||
def __FBSDID(s): return __IDSTRING(__CONCAT(__rcsid_,__LINE__),s)
|
||||
|
||||
def __RCSID(s): return __IDSTRING(__CONCAT(__rcsid_,__LINE__),s)
|
||||
|
@ -86,8 +110,6 @@ LITTLE_ENDIAN = _LITTLE_ENDIAN
|
|||
BIG_ENDIAN = _BIG_ENDIAN
|
||||
PDP_ENDIAN = _PDP_ENDIAN
|
||||
BYTE_ORDER = _BYTE_ORDER
|
||||
__INTEL_COMPILER_with_FreeBSD_endian = 1
|
||||
__INTEL_COMPILER_with_FreeBSD_endian = 1
|
||||
def __word_swap_int_var(x): return \
|
||||
|
||||
def __word_swap_int_const(x): return \
|
||||
|
@ -96,12 +118,16 @@ def __word_swap_int(x): return __word_swap_int_var(x)
|
|||
|
||||
def __byte_swap_int_var(x): return \
|
||||
|
||||
def __byte_swap_int_var(x): return \
|
||||
|
||||
def __byte_swap_int_const(x): return \
|
||||
|
||||
def __byte_swap_int(x): return __byte_swap_int_var(x)
|
||||
|
||||
def __byte_swap_long_var(x): return \
|
||||
|
||||
def __byte_swap_long_const(x): return \
|
||||
|
||||
def __byte_swap_long(x): return __byte_swap_long_var(x)
|
||||
|
||||
def __byte_swap_word_var(x): return \
|
||||
|
||||
def __byte_swap_word_const(x): return \
|
||||
|
@ -229,47 +255,50 @@ IPPROTO_ENCAP = 98
|
|||
IPPROTO_APES = 99
|
||||
IPPROTO_GMTP = 100
|
||||
IPPROTO_IPCOMP = 108
|
||||
IPPROTO_SCTP = 132
|
||||
IPPROTO_PIM = 103
|
||||
IPPROTO_CARP = 112
|
||||
IPPROTO_PGM = 113
|
||||
IPPROTO_PFSYNC = 240
|
||||
IPPROTO_OLD_DIVERT = 254
|
||||
IPPROTO_MAX = 256
|
||||
IPPROTO_DONE = 257
|
||||
IPPROTO_DIVERT = 258
|
||||
IPPROTO_SPACER = 32767
|
||||
IPPORT_RESERVED = 1024
|
||||
IPPORT_HIFIRSTAUTO = 49152
|
||||
IPPORT_HILASTAUTO = 65535
|
||||
IPPORT_RESERVEDSTART = 600
|
||||
IPPORT_MAX = 65535
|
||||
def IN_CLASSA(i): return (((u_int32_t)(i) & (-2147483648)) == 0)
|
||||
def IN_CLASSA(i): return (((u_int32_t)(i) & 0x80000000) == 0)
|
||||
|
||||
IN_CLASSA_NET = (-16777216)
|
||||
IN_CLASSA_NET = 0xff000000
|
||||
IN_CLASSA_NSHIFT = 24
|
||||
IN_CLASSA_HOST = 0x00ffffff
|
||||
IN_CLASSA_MAX = 128
|
||||
def IN_CLASSB(i): return (((u_int32_t)(i) & (-1073741824)) == (-2147483648))
|
||||
def IN_CLASSB(i): return (((u_int32_t)(i) & 0xc0000000) == 0x80000000)
|
||||
|
||||
IN_CLASSB_NET = (-65536)
|
||||
IN_CLASSB_NET = 0xffff0000
|
||||
IN_CLASSB_NSHIFT = 16
|
||||
IN_CLASSB_HOST = 0x0000ffff
|
||||
IN_CLASSB_MAX = 65536
|
||||
def IN_CLASSC(i): return (((u_int32_t)(i) & (-536870912)) == (-1073741824))
|
||||
def IN_CLASSC(i): return (((u_int32_t)(i) & 0xe0000000) == 0xc0000000)
|
||||
|
||||
IN_CLASSC_NET = (-256)
|
||||
IN_CLASSC_NET = 0xffffff00
|
||||
IN_CLASSC_NSHIFT = 8
|
||||
IN_CLASSC_HOST = 0x000000ff
|
||||
def IN_CLASSD(i): return (((u_int32_t)(i) & (-268435456)) == (-536870912))
|
||||
def IN_CLASSD(i): return (((u_int32_t)(i) & 0xf0000000) == 0xe0000000)
|
||||
|
||||
IN_CLASSD_NET = (-268435456)
|
||||
IN_CLASSD_NET = 0xf0000000
|
||||
IN_CLASSD_NSHIFT = 28
|
||||
IN_CLASSD_HOST = 0x0fffffff
|
||||
def IN_MULTICAST(i): return IN_CLASSD(i)
|
||||
|
||||
def IN_EXPERIMENTAL(i): return (((u_int32_t)(i) & (-268435456)) == (-268435456))
|
||||
def IN_EXPERIMENTAL(i): return (((u_int32_t)(i) & 0xf0000000) == 0xf0000000)
|
||||
|
||||
def IN_BADCLASS(i): return (((u_int32_t)(i) & (-268435456)) == (-268435456))
|
||||
def IN_BADCLASS(i): return (((u_int32_t)(i) & 0xf0000000) == 0xf0000000)
|
||||
|
||||
INADDR_NONE = (-1)
|
||||
INADDR_NONE = 0xffffffff
|
||||
IN_LOOPBACKNET = 127
|
||||
IP_OPTIONS = 1
|
||||
IP_HDRINCL = 2
|
||||
|
@ -311,6 +340,8 @@ IP_DUMMYNET_DEL = 61
|
|||
IP_DUMMYNET_FLUSH = 62
|
||||
IP_DUMMYNET_GET = 64
|
||||
IP_RECVTTL = 65
|
||||
IP_MINTTL = 66
|
||||
IP_DONTFRAG = 67
|
||||
IP_DEFAULT_MULTICAST_TTL = 1
|
||||
IP_DEFAULT_MULTICAST_LOOP = 1
|
||||
IP_MAX_MEMBERSHIPS = 20
|
||||
|
@ -339,7 +370,7 @@ def in_nullhost(x): return ((x).s_addr == INADDR_ANY)
|
|||
|
||||
|
||||
# Included from netinet6/in6.h
|
||||
__KAME_VERSION = "20010528/FreeBSD"
|
||||
__KAME_VERSION = "FreeBSD"
|
||||
IPV6PORT_RESERVED = 1024
|
||||
IPV6PORT_ANONMIN = 49152
|
||||
IPV6PORT_ANONMAX = 65535
|
||||
|
@ -348,8 +379,8 @@ IPV6PORT_RESERVEDMAX = (IPV6PORT_RESERVED-1)
|
|||
INET6_ADDRSTRLEN = 46
|
||||
IPV6_ADDR_INT32_ONE = 1
|
||||
IPV6_ADDR_INT32_TWO = 2
|
||||
IPV6_ADDR_INT32_MNL = (-16711680)
|
||||
IPV6_ADDR_INT32_MLL = (-16646144)
|
||||
IPV6_ADDR_INT32_MNL = 0xff010000
|
||||
IPV6_ADDR_INT32_MLL = 0xff020000
|
||||
IPV6_ADDR_INT32_SMP = 0x0000ffff
|
||||
IPV6_ADDR_INT16_ULL = 0xfe80
|
||||
IPV6_ADDR_INT16_USL = 0xfec0
|
||||
|
@ -358,7 +389,7 @@ IPV6_ADDR_INT32_ONE = 0x01000000
|
|||
IPV6_ADDR_INT32_TWO = 0x02000000
|
||||
IPV6_ADDR_INT32_MNL = 0x000001ff
|
||||
IPV6_ADDR_INT32_MLL = 0x000002ff
|
||||
IPV6_ADDR_INT32_SMP = (-65536)
|
||||
IPV6_ADDR_INT32_SMP = 0xffff0000
|
||||
IPV6_ADDR_INT16_ULL = 0x80fe
|
||||
IPV6_ADDR_INT16_USL = 0xc0fe
|
||||
IPV6_ADDR_INT16_MLL = 0x02ff
|
||||
|
@ -511,5 +542,10 @@ IPV6CTL_AUTO_LINKLOCAL = 35
|
|||
IPV6CTL_RIP6STATS = 36
|
||||
IPV6CTL_PREFER_TEMPADDR = 37
|
||||
IPV6CTL_ADDRCTLPOLICY = 38
|
||||
IPV6CTL_USE_DEFAULTZONE = 39
|
||||
IPV6CTL_MAXFRAGS = 41
|
||||
IPV6CTL_MAXID = 42
|
||||
IPV6CTL_IFQ = 42
|
||||
IPV6CTL_ISATAPRTR = 43
|
||||
IPV6CTL_MCAST_PMTU = 44
|
||||
IPV6CTL_STEALTH = 45
|
||||
IPV6CTL_MAXID = 46
|
||||
|
|
|
@ -10,9 +10,9 @@ __GNUCLIKE_ATTRIBUTE_MODE_DI = 1
|
|||
__GNUCLIKE_CTOR_SECTION_HANDLING = 1
|
||||
__GNUCLIKE_BUILTIN_CONSTANT_P = 1
|
||||
__GNUCLIKE_BUILTIN_VARARGS = 1
|
||||
__GNUCLIKE_BUILTIN_STDARG = 1
|
||||
__GNUCLIKE_BUILTIN_VAALIST = 1
|
||||
__GNUC_VA_LIST_COMPATIBILITY = 1
|
||||
__GNUCLIKE_BUILTIN_STDARG = 1
|
||||
__GNUCLIKE_BUILTIN_NEXT_ARG = 1
|
||||
__GNUCLIKE_BUILTIN_MEMCPY = 1
|
||||
__CC_SUPPORTS_INLINE = 1
|
||||
|
@ -51,6 +51,8 @@ def __predict_true(exp): return (exp)
|
|||
|
||||
def __predict_false(exp): return (exp)
|
||||
|
||||
def __format_arg(fmtarg): return __attribute__((__format_arg__ (fmtarg)))
|
||||
|
||||
def __FBSDID(s): return __IDSTRING(__CONCAT(__rcsid_,__LINE__),s)
|
||||
|
||||
def __RCSID(s): return __IDSTRING(__CONCAT(__rcsid_,__LINE__),s)
|
||||
|
@ -247,6 +249,7 @@ IPPROTO_ENCAP = 98
|
|||
IPPROTO_APES = 99
|
||||
IPPROTO_GMTP = 100
|
||||
IPPROTO_IPCOMP = 108
|
||||
IPPROTO_SCTP = 132
|
||||
IPPROTO_PIM = 103
|
||||
IPPROTO_CARP = 112
|
||||
IPPROTO_PGM = 113
|
||||
|
@ -289,6 +292,10 @@ def IN_EXPERIMENTAL(i): return (((u_int32_t)(i) & (-268435456)) == (-268435456))
|
|||
|
||||
def IN_BADCLASS(i): return (((u_int32_t)(i) & (-268435456)) == (-268435456))
|
||||
|
||||
def IN_LINKLOCAL(i): return (((u_int32_t)(i) & (-65536)) == (-1442971648))
|
||||
|
||||
def IN_LOCAL_GROUP(i): return (((u_int32_t)(i) & (-256)) == (-536870912))
|
||||
|
||||
INADDR_NONE = (-1)
|
||||
IN_LOOPBACKNET = 127
|
||||
IP_OPTIONS = 1
|
||||
|
@ -326,14 +333,35 @@ IP_FW_FLUSH = 52
|
|||
IP_FW_ZERO = 53
|
||||
IP_FW_GET = 54
|
||||
IP_FW_RESETLOG = 55
|
||||
IP_FW_NAT_CFG = 56
|
||||
IP_FW_NAT_DEL = 57
|
||||
IP_FW_NAT_GET_CONFIG = 58
|
||||
IP_FW_NAT_GET_LOG = 59
|
||||
IP_DUMMYNET_CONFIGURE = 60
|
||||
IP_DUMMYNET_DEL = 61
|
||||
IP_DUMMYNET_FLUSH = 62
|
||||
IP_DUMMYNET_GET = 64
|
||||
IP_RECVTTL = 65
|
||||
IP_MINTTL = 66
|
||||
IP_DONTFRAG = 67
|
||||
IP_ADD_SOURCE_MEMBERSHIP = 70
|
||||
IP_DROP_SOURCE_MEMBERSHIP = 71
|
||||
IP_BLOCK_SOURCE = 72
|
||||
IP_UNBLOCK_SOURCE = 73
|
||||
IP_MSFILTER = 74
|
||||
MCAST_JOIN_GROUP = 80
|
||||
MCAST_LEAVE_GROUP = 81
|
||||
MCAST_JOIN_SOURCE_GROUP = 82
|
||||
MCAST_LEAVE_SOURCE_GROUP = 83
|
||||
MCAST_BLOCK_SOURCE = 84
|
||||
MCAST_UNBLOCK_SOURCE = 85
|
||||
IP_DEFAULT_MULTICAST_TTL = 1
|
||||
IP_DEFAULT_MULTICAST_LOOP = 1
|
||||
IP_MAX_MEMBERSHIPS = 20
|
||||
IP_MIN_MEMBERSHIPS = 31
|
||||
IP_MAX_MEMBERSHIPS = 4095
|
||||
IP_MAX_SOURCE_FILTER = 1024
|
||||
MCAST_INCLUDE = 1
|
||||
MCAST_EXCLUDE = 2
|
||||
IP_PORTRANGE_DEFAULT = 0
|
||||
IP_PORTRANGE_HIGH = 1
|
||||
IP_PORTRANGE_LOW = 2
|
||||
|
@ -359,7 +387,7 @@ def in_nullhost(x): return ((x).s_addr == INADDR_ANY)
|
|||
|
||||
|
||||
# Included from netinet6/in6.h
|
||||
__KAME_VERSION = "20010528/FreeBSD"
|
||||
__KAME_VERSION = "FreeBSD"
|
||||
IPV6PORT_RESERVED = 1024
|
||||
IPV6PORT_ANONMIN = 49152
|
||||
IPV6PORT_ANONMAX = 65535
|
||||
|
@ -430,6 +458,8 @@ def IN6_IS_ADDR_MC_GLOBAL(a): return \
|
|||
|
||||
def IN6_IS_SCOPE_LINKLOCAL(a): return \
|
||||
|
||||
def IN6_IS_SCOPE_EMBED(a): return \
|
||||
|
||||
def IFA6_IS_DEPRECATED(a): return \
|
||||
|
||||
def IFA6_IS_INVALID(a): return \
|
||||
|
@ -488,6 +518,7 @@ IPV6_AUTOFLOWLABEL = 59
|
|||
IPV6_TCLASS = 61
|
||||
IPV6_DONTFRAG = 62
|
||||
IPV6_PREFER_TEMPADDR = 63
|
||||
IPV6_MSFILTER = 74
|
||||
IPV6_RTHDR_LOOSE = 0
|
||||
IPV6_RTHDR_STRICT = 1
|
||||
IPV6_RTHDR_TYPE_0 = 0
|
||||
|
@ -531,5 +562,10 @@ IPV6CTL_AUTO_LINKLOCAL = 35
|
|||
IPV6CTL_RIP6STATS = 36
|
||||
IPV6CTL_PREFER_TEMPADDR = 37
|
||||
IPV6CTL_ADDRCTLPOLICY = 38
|
||||
IPV6CTL_USE_DEFAULTZONE = 39
|
||||
IPV6CTL_MAXFRAGS = 41
|
||||
IPV6CTL_MAXID = 42
|
||||
IPV6CTL_IFQ = 42
|
||||
IPV6CTL_ISATAPRTR = 43
|
||||
IPV6CTL_MCAST_PMTU = 44
|
||||
IPV6CTL_STEALTH = 45
|
||||
IPV6CTL_MAXID = 46
|
||||
|
|
|
@ -0,0 +1,571 @@
|
|||
# Generated by h2py from /usr/include/netinet/in.h
|
||||
|
||||
# Included from sys/cdefs.h
|
||||
__GNUCLIKE_ASM = 3
|
||||
__GNUCLIKE_ASM = 2
|
||||
__GNUCLIKE___TYPEOF = 1
|
||||
__GNUCLIKE___OFFSETOF = 1
|
||||
__GNUCLIKE___SECTION = 1
|
||||
__GNUCLIKE_ATTRIBUTE_MODE_DI = 1
|
||||
__GNUCLIKE_CTOR_SECTION_HANDLING = 1
|
||||
__GNUCLIKE_BUILTIN_CONSTANT_P = 1
|
||||
__GNUCLIKE_BUILTIN_VARARGS = 1
|
||||
__GNUCLIKE_BUILTIN_STDARG = 1
|
||||
__GNUCLIKE_BUILTIN_VAALIST = 1
|
||||
__GNUC_VA_LIST_COMPATIBILITY = 1
|
||||
__GNUCLIKE_BUILTIN_NEXT_ARG = 1
|
||||
__GNUCLIKE_BUILTIN_MEMCPY = 1
|
||||
__CC_SUPPORTS_INLINE = 1
|
||||
__CC_SUPPORTS___INLINE = 1
|
||||
__CC_SUPPORTS___INLINE__ = 1
|
||||
__CC_SUPPORTS___FUNC__ = 1
|
||||
__CC_SUPPORTS_WARNING = 1
|
||||
__CC_SUPPORTS_VARADIC_XXX = 1
|
||||
__CC_SUPPORTS_DYNAMIC_ARRAY_INIT = 1
|
||||
__CC_INT_IS_32BIT = 1
|
||||
def __P(protos): return protos
|
||||
|
||||
def __STRING(x): return #x
|
||||
|
||||
def __XSTRING(x): return __STRING(x)
|
||||
|
||||
def __P(protos): return ()
|
||||
|
||||
def __STRING(x): return "x"
|
||||
|
||||
def __aligned(x): return __attribute__((__aligned__(x)))
|
||||
|
||||
def __section(x): return __attribute__((__section__(x)))
|
||||
|
||||
def __aligned(x): return __attribute__((__aligned__(x)))
|
||||
|
||||
def __section(x): return __attribute__((__section__(x)))
|
||||
|
||||
def __nonnull(x): return __attribute__((__nonnull__(x)))
|
||||
|
||||
def __predict_true(exp): return __builtin_expect((exp), 1)
|
||||
|
||||
def __predict_false(exp): return __builtin_expect((exp), 0)
|
||||
|
||||
def __predict_true(exp): return (exp)
|
||||
|
||||
def __predict_false(exp): return (exp)
|
||||
|
||||
def __format_arg(fmtarg): return __attribute__((__format_arg__ (fmtarg)))
|
||||
|
||||
def __FBSDID(s): return __IDSTRING(__CONCAT(__rcsid_,__LINE__),s)
|
||||
|
||||
def __RCSID(s): return __IDSTRING(__CONCAT(__rcsid_,__LINE__),s)
|
||||
|
||||
def __RCSID_SOURCE(s): return __IDSTRING(__CONCAT(__rcsid_source_,__LINE__),s)
|
||||
|
||||
def __SCCSID(s): return __IDSTRING(__CONCAT(__sccsid_,__LINE__),s)
|
||||
|
||||
def __COPYRIGHT(s): return __IDSTRING(__CONCAT(__copyright_,__LINE__),s)
|
||||
|
||||
_POSIX_C_SOURCE = 199009
|
||||
_POSIX_C_SOURCE = 199209
|
||||
__XSI_VISIBLE = 600
|
||||
_POSIX_C_SOURCE = 200112
|
||||
__XSI_VISIBLE = 500
|
||||
_POSIX_C_SOURCE = 199506
|
||||
_POSIX_C_SOURCE = 198808
|
||||
__POSIX_VISIBLE = 200112
|
||||
__ISO_C_VISIBLE = 1999
|
||||
__POSIX_VISIBLE = 199506
|
||||
__ISO_C_VISIBLE = 1990
|
||||
__POSIX_VISIBLE = 199309
|
||||
__ISO_C_VISIBLE = 1990
|
||||
__POSIX_VISIBLE = 199209
|
||||
__ISO_C_VISIBLE = 1990
|
||||
__POSIX_VISIBLE = 199009
|
||||
__ISO_C_VISIBLE = 1990
|
||||
__POSIX_VISIBLE = 198808
|
||||
__ISO_C_VISIBLE = 0
|
||||
__POSIX_VISIBLE = 0
|
||||
__XSI_VISIBLE = 0
|
||||
__BSD_VISIBLE = 0
|
||||
__ISO_C_VISIBLE = 1990
|
||||
__POSIX_VISIBLE = 0
|
||||
__XSI_VISIBLE = 0
|
||||
__BSD_VISIBLE = 0
|
||||
__ISO_C_VISIBLE = 1999
|
||||
__POSIX_VISIBLE = 200112
|
||||
__XSI_VISIBLE = 600
|
||||
__BSD_VISIBLE = 1
|
||||
__ISO_C_VISIBLE = 1999
|
||||
|
||||
# Included from sys/_types.h
|
||||
|
||||
# Included from machine/_types.h
|
||||
|
||||
# Included from machine/endian.h
|
||||
_QUAD_HIGHWORD = 1
|
||||
_QUAD_LOWWORD = 0
|
||||
_LITTLE_ENDIAN = 1234
|
||||
_BIG_ENDIAN = 4321
|
||||
_PDP_ENDIAN = 3412
|
||||
_BYTE_ORDER = _LITTLE_ENDIAN
|
||||
LITTLE_ENDIAN = _LITTLE_ENDIAN
|
||||
BIG_ENDIAN = _BIG_ENDIAN
|
||||
PDP_ENDIAN = _PDP_ENDIAN
|
||||
BYTE_ORDER = _BYTE_ORDER
|
||||
def __word_swap_int_var(x): return \
|
||||
|
||||
def __word_swap_int_const(x): return \
|
||||
|
||||
def __word_swap_int(x): return __word_swap_int_var(x)
|
||||
|
||||
def __byte_swap_int_var(x): return \
|
||||
|
||||
def __byte_swap_int_const(x): return \
|
||||
|
||||
def __byte_swap_int(x): return __byte_swap_int_var(x)
|
||||
|
||||
def __byte_swap_word_var(x): return \
|
||||
|
||||
def __byte_swap_word_const(x): return \
|
||||
|
||||
def __byte_swap_word(x): return __byte_swap_word_var(x)
|
||||
|
||||
def __htonl(x): return __bswap32(x)
|
||||
|
||||
def __htons(x): return __bswap16(x)
|
||||
|
||||
def __ntohl(x): return __bswap32(x)
|
||||
|
||||
def __ntohs(x): return __bswap16(x)
|
||||
|
||||
IPPROTO_IP = 0
|
||||
IPPROTO_ICMP = 1
|
||||
IPPROTO_TCP = 6
|
||||
IPPROTO_UDP = 17
|
||||
def htonl(x): return __htonl(x)
|
||||
|
||||
def htons(x): return __htons(x)
|
||||
|
||||
def ntohl(x): return __ntohl(x)
|
||||
|
||||
def ntohs(x): return __ntohs(x)
|
||||
|
||||
IPPROTO_RAW = 255
|
||||
INET_ADDRSTRLEN = 16
|
||||
IPPROTO_HOPOPTS = 0
|
||||
IPPROTO_IGMP = 2
|
||||
IPPROTO_GGP = 3
|
||||
IPPROTO_IPV4 = 4
|
||||
IPPROTO_IPIP = IPPROTO_IPV4
|
||||
IPPROTO_ST = 7
|
||||
IPPROTO_EGP = 8
|
||||
IPPROTO_PIGP = 9
|
||||
IPPROTO_RCCMON = 10
|
||||
IPPROTO_NVPII = 11
|
||||
IPPROTO_PUP = 12
|
||||
IPPROTO_ARGUS = 13
|
||||
IPPROTO_EMCON = 14
|
||||
IPPROTO_XNET = 15
|
||||
IPPROTO_CHAOS = 16
|
||||
IPPROTO_MUX = 18
|
||||
IPPROTO_MEAS = 19
|
||||
IPPROTO_HMP = 20
|
||||
IPPROTO_PRM = 21
|
||||
IPPROTO_IDP = 22
|
||||
IPPROTO_TRUNK1 = 23
|
||||
IPPROTO_TRUNK2 = 24
|
||||
IPPROTO_LEAF1 = 25
|
||||
IPPROTO_LEAF2 = 26
|
||||
IPPROTO_RDP = 27
|
||||
IPPROTO_IRTP = 28
|
||||
IPPROTO_TP = 29
|
||||
IPPROTO_BLT = 30
|
||||
IPPROTO_NSP = 31
|
||||
IPPROTO_INP = 32
|
||||
IPPROTO_SEP = 33
|
||||
IPPROTO_3PC = 34
|
||||
IPPROTO_IDPR = 35
|
||||
IPPROTO_XTP = 36
|
||||
IPPROTO_DDP = 37
|
||||
IPPROTO_CMTP = 38
|
||||
IPPROTO_TPXX = 39
|
||||
IPPROTO_IL = 40
|
||||
IPPROTO_IPV6 = 41
|
||||
IPPROTO_SDRP = 42
|
||||
IPPROTO_ROUTING = 43
|
||||
IPPROTO_FRAGMENT = 44
|
||||
IPPROTO_IDRP = 45
|
||||
IPPROTO_RSVP = 46
|
||||
IPPROTO_GRE = 47
|
||||
IPPROTO_MHRP = 48
|
||||
IPPROTO_BHA = 49
|
||||
IPPROTO_ESP = 50
|
||||
IPPROTO_AH = 51
|
||||
IPPROTO_INLSP = 52
|
||||
IPPROTO_SWIPE = 53
|
||||
IPPROTO_NHRP = 54
|
||||
IPPROTO_MOBILE = 55
|
||||
IPPROTO_TLSP = 56
|
||||
IPPROTO_SKIP = 57
|
||||
IPPROTO_ICMPV6 = 58
|
||||
IPPROTO_NONE = 59
|
||||
IPPROTO_DSTOPTS = 60
|
||||
IPPROTO_AHIP = 61
|
||||
IPPROTO_CFTP = 62
|
||||
IPPROTO_HELLO = 63
|
||||
IPPROTO_SATEXPAK = 64
|
||||
IPPROTO_KRYPTOLAN = 65
|
||||
IPPROTO_RVD = 66
|
||||
IPPROTO_IPPC = 67
|
||||
IPPROTO_ADFS = 68
|
||||
IPPROTO_SATMON = 69
|
||||
IPPROTO_VISA = 70
|
||||
IPPROTO_IPCV = 71
|
||||
IPPROTO_CPNX = 72
|
||||
IPPROTO_CPHB = 73
|
||||
IPPROTO_WSN = 74
|
||||
IPPROTO_PVP = 75
|
||||
IPPROTO_BRSATMON = 76
|
||||
IPPROTO_ND = 77
|
||||
IPPROTO_WBMON = 78
|
||||
IPPROTO_WBEXPAK = 79
|
||||
IPPROTO_EON = 80
|
||||
IPPROTO_VMTP = 81
|
||||
IPPROTO_SVMTP = 82
|
||||
IPPROTO_VINES = 83
|
||||
IPPROTO_TTP = 84
|
||||
IPPROTO_IGP = 85
|
||||
IPPROTO_DGP = 86
|
||||
IPPROTO_TCF = 87
|
||||
IPPROTO_IGRP = 88
|
||||
IPPROTO_OSPFIGP = 89
|
||||
IPPROTO_SRPC = 90
|
||||
IPPROTO_LARP = 91
|
||||
IPPROTO_MTP = 92
|
||||
IPPROTO_AX25 = 93
|
||||
IPPROTO_IPEIP = 94
|
||||
IPPROTO_MICP = 95
|
||||
IPPROTO_SCCSP = 96
|
||||
IPPROTO_ETHERIP = 97
|
||||
IPPROTO_ENCAP = 98
|
||||
IPPROTO_APES = 99
|
||||
IPPROTO_GMTP = 100
|
||||
IPPROTO_IPCOMP = 108
|
||||
IPPROTO_SCTP = 132
|
||||
IPPROTO_PIM = 103
|
||||
IPPROTO_CARP = 112
|
||||
IPPROTO_PGM = 113
|
||||
IPPROTO_PFSYNC = 240
|
||||
IPPROTO_OLD_DIVERT = 254
|
||||
IPPROTO_MAX = 256
|
||||
IPPROTO_DONE = 257
|
||||
IPPROTO_DIVERT = 258
|
||||
IPPROTO_SPACER = 32767
|
||||
IPPORT_RESERVED = 1024
|
||||
IPPORT_HIFIRSTAUTO = 49152
|
||||
IPPORT_HILASTAUTO = 65535
|
||||
IPPORT_RESERVEDSTART = 600
|
||||
IPPORT_MAX = 65535
|
||||
def IN_CLASSA(i): return (((u_int32_t)(i) & (-2147483648)) == 0)
|
||||
|
||||
IN_CLASSA_NET = (-16777216)
|
||||
IN_CLASSA_NSHIFT = 24
|
||||
IN_CLASSA_HOST = 0x00ffffff
|
||||
IN_CLASSA_MAX = 128
|
||||
def IN_CLASSB(i): return (((u_int32_t)(i) & (-1073741824)) == (-2147483648))
|
||||
|
||||
IN_CLASSB_NET = (-65536)
|
||||
IN_CLASSB_NSHIFT = 16
|
||||
IN_CLASSB_HOST = 0x0000ffff
|
||||
IN_CLASSB_MAX = 65536
|
||||
def IN_CLASSC(i): return (((u_int32_t)(i) & (-536870912)) == (-1073741824))
|
||||
|
||||
IN_CLASSC_NET = (-256)
|
||||
IN_CLASSC_NSHIFT = 8
|
||||
IN_CLASSC_HOST = 0x000000ff
|
||||
def IN_CLASSD(i): return (((u_int32_t)(i) & (-268435456)) == (-536870912))
|
||||
|
||||
IN_CLASSD_NET = (-268435456)
|
||||
IN_CLASSD_NSHIFT = 28
|
||||
IN_CLASSD_HOST = 0x0fffffff
|
||||
def IN_MULTICAST(i): return IN_CLASSD(i)
|
||||
|
||||
def IN_EXPERIMENTAL(i): return (((u_int32_t)(i) & (-268435456)) == (-268435456))
|
||||
|
||||
def IN_BADCLASS(i): return (((u_int32_t)(i) & (-268435456)) == (-268435456))
|
||||
|
||||
def IN_LINKLOCAL(i): return (((u_int32_t)(i) & (-65536)) == (-1442971648))
|
||||
|
||||
def IN_LOCAL_GROUP(i): return (((u_int32_t)(i) & (-256)) == (-536870912))
|
||||
|
||||
INADDR_NONE = (-1)
|
||||
IN_LOOPBACKNET = 127
|
||||
IP_OPTIONS = 1
|
||||
IP_HDRINCL = 2
|
||||
IP_TOS = 3
|
||||
IP_TTL = 4
|
||||
IP_RECVOPTS = 5
|
||||
IP_RECVRETOPTS = 6
|
||||
IP_RECVDSTADDR = 7
|
||||
IP_SENDSRCADDR = IP_RECVDSTADDR
|
||||
IP_RETOPTS = 8
|
||||
IP_MULTICAST_IF = 9
|
||||
IP_MULTICAST_TTL = 10
|
||||
IP_MULTICAST_LOOP = 11
|
||||
IP_ADD_MEMBERSHIP = 12
|
||||
IP_DROP_MEMBERSHIP = 13
|
||||
IP_MULTICAST_VIF = 14
|
||||
IP_RSVP_ON = 15
|
||||
IP_RSVP_OFF = 16
|
||||
IP_RSVP_VIF_ON = 17
|
||||
IP_RSVP_VIF_OFF = 18
|
||||
IP_PORTRANGE = 19
|
||||
IP_RECVIF = 20
|
||||
IP_IPSEC_POLICY = 21
|
||||
IP_FAITH = 22
|
||||
IP_ONESBCAST = 23
|
||||
IP_FW_TABLE_ADD = 40
|
||||
IP_FW_TABLE_DEL = 41
|
||||
IP_FW_TABLE_FLUSH = 42
|
||||
IP_FW_TABLE_GETSIZE = 43
|
||||
IP_FW_TABLE_LIST = 44
|
||||
IP_FW_ADD = 50
|
||||
IP_FW_DEL = 51
|
||||
IP_FW_FLUSH = 52
|
||||
IP_FW_ZERO = 53
|
||||
IP_FW_GET = 54
|
||||
IP_FW_RESETLOG = 55
|
||||
IP_FW_NAT_CFG = 56
|
||||
IP_FW_NAT_DEL = 57
|
||||
IP_FW_NAT_GET_CONFIG = 58
|
||||
IP_FW_NAT_GET_LOG = 59
|
||||
IP_DUMMYNET_CONFIGURE = 60
|
||||
IP_DUMMYNET_DEL = 61
|
||||
IP_DUMMYNET_FLUSH = 62
|
||||
IP_DUMMYNET_GET = 64
|
||||
IP_RECVTTL = 65
|
||||
IP_MINTTL = 66
|
||||
IP_DONTFRAG = 67
|
||||
IP_ADD_SOURCE_MEMBERSHIP = 70
|
||||
IP_DROP_SOURCE_MEMBERSHIP = 71
|
||||
IP_BLOCK_SOURCE = 72
|
||||
IP_UNBLOCK_SOURCE = 73
|
||||
IP_MSFILTER = 74
|
||||
MCAST_JOIN_GROUP = 80
|
||||
MCAST_LEAVE_GROUP = 81
|
||||
MCAST_JOIN_SOURCE_GROUP = 82
|
||||
MCAST_LEAVE_SOURCE_GROUP = 83
|
||||
MCAST_BLOCK_SOURCE = 84
|
||||
MCAST_UNBLOCK_SOURCE = 85
|
||||
IP_DEFAULT_MULTICAST_TTL = 1
|
||||
IP_DEFAULT_MULTICAST_LOOP = 1
|
||||
IP_MIN_MEMBERSHIPS = 31
|
||||
IP_MAX_MEMBERSHIPS = 4095
|
||||
IP_MAX_SOURCE_FILTER = 1024
|
||||
MCAST_INCLUDE = 1
|
||||
MCAST_EXCLUDE = 2
|
||||
IP_PORTRANGE_DEFAULT = 0
|
||||
IP_PORTRANGE_HIGH = 1
|
||||
IP_PORTRANGE_LOW = 2
|
||||
IPPROTO_MAXID = (IPPROTO_AH + 1)
|
||||
IPCTL_FORWARDING = 1
|
||||
IPCTL_SENDREDIRECTS = 2
|
||||
IPCTL_DEFTTL = 3
|
||||
IPCTL_DEFMTU = 4
|
||||
IPCTL_RTEXPIRE = 5
|
||||
IPCTL_RTMINEXPIRE = 6
|
||||
IPCTL_RTMAXCACHE = 7
|
||||
IPCTL_SOURCEROUTE = 8
|
||||
IPCTL_DIRECTEDBROADCAST = 9
|
||||
IPCTL_INTRQMAXLEN = 10
|
||||
IPCTL_INTRQDROPS = 11
|
||||
IPCTL_STATS = 12
|
||||
IPCTL_ACCEPTSOURCEROUTE = 13
|
||||
IPCTL_FASTFORWARDING = 14
|
||||
IPCTL_KEEPFAITH = 15
|
||||
IPCTL_GIF_TTL = 16
|
||||
IPCTL_MAXID = 17
|
||||
def in_nullhost(x): return ((x).s_addr == INADDR_ANY)
|
||||
|
||||
|
||||
# Included from netinet6/in6.h
|
||||
__KAME_VERSION = "FreeBSD"
|
||||
IPV6PORT_RESERVED = 1024
|
||||
IPV6PORT_ANONMIN = 49152
|
||||
IPV6PORT_ANONMAX = 65535
|
||||
IPV6PORT_RESERVEDMIN = 600
|
||||
IPV6PORT_RESERVEDMAX = (IPV6PORT_RESERVED-1)
|
||||
INET6_ADDRSTRLEN = 46
|
||||
IPV6_ADDR_INT32_ONE = 1
|
||||
IPV6_ADDR_INT32_TWO = 2
|
||||
IPV6_ADDR_INT32_MNL = (-16711680)
|
||||
IPV6_ADDR_INT32_MLL = (-16646144)
|
||||
IPV6_ADDR_INT32_SMP = 0x0000ffff
|
||||
IPV6_ADDR_INT16_ULL = 0xfe80
|
||||
IPV6_ADDR_INT16_USL = 0xfec0
|
||||
IPV6_ADDR_INT16_MLL = 0xff02
|
||||
IPV6_ADDR_INT32_ONE = 0x01000000
|
||||
IPV6_ADDR_INT32_TWO = 0x02000000
|
||||
IPV6_ADDR_INT32_MNL = 0x000001ff
|
||||
IPV6_ADDR_INT32_MLL = 0x000002ff
|
||||
IPV6_ADDR_INT32_SMP = (-65536)
|
||||
IPV6_ADDR_INT16_ULL = 0x80fe
|
||||
IPV6_ADDR_INT16_USL = 0xc0fe
|
||||
IPV6_ADDR_INT16_MLL = 0x02ff
|
||||
def IN6_IS_ADDR_UNSPECIFIED(a): return \
|
||||
|
||||
def IN6_IS_ADDR_LOOPBACK(a): return \
|
||||
|
||||
def IN6_IS_ADDR_V4COMPAT(a): return \
|
||||
|
||||
def IN6_IS_ADDR_V4MAPPED(a): return \
|
||||
|
||||
IPV6_ADDR_SCOPE_NODELOCAL = 0x01
|
||||
IPV6_ADDR_SCOPE_INTFACELOCAL = 0x01
|
||||
IPV6_ADDR_SCOPE_LINKLOCAL = 0x02
|
||||
IPV6_ADDR_SCOPE_SITELOCAL = 0x05
|
||||
IPV6_ADDR_SCOPE_ORGLOCAL = 0x08
|
||||
IPV6_ADDR_SCOPE_GLOBAL = 0x0e
|
||||
__IPV6_ADDR_SCOPE_NODELOCAL = 0x01
|
||||
__IPV6_ADDR_SCOPE_INTFACELOCAL = 0x01
|
||||
__IPV6_ADDR_SCOPE_LINKLOCAL = 0x02
|
||||
__IPV6_ADDR_SCOPE_SITELOCAL = 0x05
|
||||
__IPV6_ADDR_SCOPE_ORGLOCAL = 0x08
|
||||
__IPV6_ADDR_SCOPE_GLOBAL = 0x0e
|
||||
def IN6_IS_ADDR_LINKLOCAL(a): return \
|
||||
|
||||
def IN6_IS_ADDR_SITELOCAL(a): return \
|
||||
|
||||
def IN6_IS_ADDR_MC_NODELOCAL(a): return \
|
||||
|
||||
def IN6_IS_ADDR_MC_INTFACELOCAL(a): return \
|
||||
|
||||
def IN6_IS_ADDR_MC_LINKLOCAL(a): return \
|
||||
|
||||
def IN6_IS_ADDR_MC_SITELOCAL(a): return \
|
||||
|
||||
def IN6_IS_ADDR_MC_ORGLOCAL(a): return \
|
||||
|
||||
def IN6_IS_ADDR_MC_GLOBAL(a): return \
|
||||
|
||||
def IN6_IS_ADDR_MC_NODELOCAL(a): return \
|
||||
|
||||
def IN6_IS_ADDR_MC_LINKLOCAL(a): return \
|
||||
|
||||
def IN6_IS_ADDR_MC_SITELOCAL(a): return \
|
||||
|
||||
def IN6_IS_ADDR_MC_ORGLOCAL(a): return \
|
||||
|
||||
def IN6_IS_ADDR_MC_GLOBAL(a): return \
|
||||
|
||||
def IN6_IS_SCOPE_LINKLOCAL(a): return \
|
||||
|
||||
def IN6_IS_SCOPE_EMBED(a): return \
|
||||
|
||||
def IFA6_IS_DEPRECATED(a): return \
|
||||
|
||||
def IFA6_IS_INVALID(a): return \
|
||||
|
||||
IPV6_OPTIONS = 1
|
||||
IPV6_RECVOPTS = 5
|
||||
IPV6_RECVRETOPTS = 6
|
||||
IPV6_RECVDSTADDR = 7
|
||||
IPV6_RETOPTS = 8
|
||||
IPV6_SOCKOPT_RESERVED1 = 3
|
||||
IPV6_UNICAST_HOPS = 4
|
||||
IPV6_MULTICAST_IF = 9
|
||||
IPV6_MULTICAST_HOPS = 10
|
||||
IPV6_MULTICAST_LOOP = 11
|
||||
IPV6_JOIN_GROUP = 12
|
||||
IPV6_LEAVE_GROUP = 13
|
||||
IPV6_PORTRANGE = 14
|
||||
ICMP6_FILTER = 18
|
||||
IPV6_2292PKTINFO = 19
|
||||
IPV6_2292HOPLIMIT = 20
|
||||
IPV6_2292NEXTHOP = 21
|
||||
IPV6_2292HOPOPTS = 22
|
||||
IPV6_2292DSTOPTS = 23
|
||||
IPV6_2292RTHDR = 24
|
||||
IPV6_2292PKTOPTIONS = 25
|
||||
IPV6_CHECKSUM = 26
|
||||
IPV6_V6ONLY = 27
|
||||
IPV6_BINDV6ONLY = IPV6_V6ONLY
|
||||
IPV6_IPSEC_POLICY = 28
|
||||
IPV6_FAITH = 29
|
||||
IPV6_FW_ADD = 30
|
||||
IPV6_FW_DEL = 31
|
||||
IPV6_FW_FLUSH = 32
|
||||
IPV6_FW_ZERO = 33
|
||||
IPV6_FW_GET = 34
|
||||
IPV6_RTHDRDSTOPTS = 35
|
||||
IPV6_RECVPKTINFO = 36
|
||||
IPV6_RECVHOPLIMIT = 37
|
||||
IPV6_RECVRTHDR = 38
|
||||
IPV6_RECVHOPOPTS = 39
|
||||
IPV6_RECVDSTOPTS = 40
|
||||
IPV6_RECVRTHDRDSTOPTS = 41
|
||||
IPV6_USE_MIN_MTU = 42
|
||||
IPV6_RECVPATHMTU = 43
|
||||
IPV6_PATHMTU = 44
|
||||
IPV6_REACHCONF = 45
|
||||
IPV6_PKTINFO = 46
|
||||
IPV6_HOPLIMIT = 47
|
||||
IPV6_NEXTHOP = 48
|
||||
IPV6_HOPOPTS = 49
|
||||
IPV6_DSTOPTS = 50
|
||||
IPV6_RTHDR = 51
|
||||
IPV6_PKTOPTIONS = 52
|
||||
IPV6_RECVTCLASS = 57
|
||||
IPV6_AUTOFLOWLABEL = 59
|
||||
IPV6_TCLASS = 61
|
||||
IPV6_DONTFRAG = 62
|
||||
IPV6_PREFER_TEMPADDR = 63
|
||||
IPV6_MSFILTER = 74
|
||||
IPV6_RTHDR_LOOSE = 0
|
||||
IPV6_RTHDR_STRICT = 1
|
||||
IPV6_RTHDR_TYPE_0 = 0
|
||||
IPV6_DEFAULT_MULTICAST_HOPS = 1
|
||||
IPV6_DEFAULT_MULTICAST_LOOP = 1
|
||||
IPV6_PORTRANGE_DEFAULT = 0
|
||||
IPV6_PORTRANGE_HIGH = 1
|
||||
IPV6_PORTRANGE_LOW = 2
|
||||
IPV6PROTO_MAXID = (IPPROTO_PIM + 1)
|
||||
IPV6CTL_FORWARDING = 1
|
||||
IPV6CTL_SENDREDIRECTS = 2
|
||||
IPV6CTL_DEFHLIM = 3
|
||||
IPV6CTL_DEFMTU = 4
|
||||
IPV6CTL_FORWSRCRT = 5
|
||||
IPV6CTL_STATS = 6
|
||||
IPV6CTL_MRTSTATS = 7
|
||||
IPV6CTL_MRTPROTO = 8
|
||||
IPV6CTL_MAXFRAGPACKETS = 9
|
||||
IPV6CTL_SOURCECHECK = 10
|
||||
IPV6CTL_SOURCECHECK_LOGINT = 11
|
||||
IPV6CTL_ACCEPT_RTADV = 12
|
||||
IPV6CTL_KEEPFAITH = 13
|
||||
IPV6CTL_LOG_INTERVAL = 14
|
||||
IPV6CTL_HDRNESTLIMIT = 15
|
||||
IPV6CTL_DAD_COUNT = 16
|
||||
IPV6CTL_AUTO_FLOWLABEL = 17
|
||||
IPV6CTL_DEFMCASTHLIM = 18
|
||||
IPV6CTL_GIF_HLIM = 19
|
||||
IPV6CTL_KAME_VERSION = 20
|
||||
IPV6CTL_USE_DEPRECATED = 21
|
||||
IPV6CTL_RR_PRUNE = 22
|
||||
IPV6CTL_MAPPED_ADDR = 23
|
||||
IPV6CTL_V6ONLY = 24
|
||||
IPV6CTL_RTEXPIRE = 25
|
||||
IPV6CTL_RTMINEXPIRE = 26
|
||||
IPV6CTL_RTMAXCACHE = 27
|
||||
IPV6CTL_USETEMPADDR = 32
|
||||
IPV6CTL_TEMPPLTIME = 33
|
||||
IPV6CTL_TEMPVLTIME = 34
|
||||
IPV6CTL_AUTO_LINKLOCAL = 35
|
||||
IPV6CTL_RIP6STATS = 36
|
||||
IPV6CTL_PREFER_TEMPADDR = 37
|
||||
IPV6CTL_ADDRCTLPOLICY = 38
|
||||
IPV6CTL_USE_DEFAULTZONE = 39
|
||||
IPV6CTL_MAXFRAGS = 41
|
||||
IPV6CTL_IFQ = 42
|
||||
IPV6CTL_ISATAPRTR = 43
|
||||
IPV6CTL_MCAST_PMTU = 44
|
||||
IPV6CTL_STEALTH = 45
|
||||
IPV6CTL_MAXID = 46
|
|
@ -0,0 +1,3 @@
|
|||
#! /bin/sh
|
||||
set -v
|
||||
python ../../Tools/scripts/h2py.py -i '(u_long)' /usr/include/netinet/in.h
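The three-line script above regenerates the plat-freebsd8 IN module with h2py. A minimal sketch of how the generated constants are typically consumed (assuming a FreeBSD build where the plat directory is on sys.path; only the plain integer constants are usable, since the function-like defs above are not valid Python when actually called):

import socket
from IN import IPPROTO_TCP, IPPORT_RESERVED   # constants generated by h2py

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(IPPROTO_TCP, socket.TCP_NODELAY, 1)   # same value as socket.IPPROTO_TCP
print(IPPORT_RESERVED)                             # 1024, per the generated module
s.close()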
|
Lib/sched.py
|
@ -16,11 +16,11 @@ integers or floating point numbers, as long as it is consistent.
|
|||
Events are specified by tuples (time, priority, action, argument).
|
||||
As in UNIX, lower priority numbers mean higher priority; in this
|
||||
way the queue can be maintained as a priority queue. Execution of the
|
||||
event means calling the action function, passing it the argument.
|
||||
Remember that in Python, multiple function arguments can be packed
|
||||
in a tuple. The action function may be an instance method so it
|
||||
event means calling the action function, passing it the argument
|
||||
sequence in "argument" (remember that in Python, multiple function
|
||||
arguments are packed in a sequence).
|
||||
The action function may be an instance method so it
|
||||
has another way to reference private data (besides global variables).
|
||||
Parameterless functions or methods cannot be used, however.
|
||||
"""
|
||||
|
||||
# XXX The timefunc and delayfunc should have been defined as methods
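The reworked docstring above describes events as (time, priority, action, argument) tuples kept in a priority queue; a minimal sketch of that usage, assuming only the stdlib sched and time modules:

import sched, time

s = sched.scheduler(time.time, time.sleep)

def action(label):
    print(label, "fired at", time.time())

# For the same scheduled time, the lower priority number runs first.
s.enter(1, 2, action, ('second',))
s.enter(1, 1, action, ('first',))
s.run()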
|
||||
|
@ -89,7 +89,7 @@ class scheduler:
|
|||
exceptions are not caught but the scheduler's state remains
|
||||
well-defined so run() may be called again.
|
||||
|
||||
A questionably hack is added to allow other threads to run:
|
||||
A questionable hack is added to allow other threads to run:
|
||||
just after an event is executed, a delay of 0 is executed, to
|
||||
avoid monopolizing the CPU when other threads are also
|
||||
runnable.
|
||||
|
@ -111,7 +111,7 @@ class scheduler:
|
|||
# Verify that the event was not removed or altered
|
||||
# by another thread after we last looked at q[0].
|
||||
if event is checked_event:
|
||||
void = action(*argument)
|
||||
action(*argument)
|
||||
delayfunc(0) # Let other threads run
|
||||
else:
|
||||
heapq.heappush(q, event)
|
||||
|
|
|
@ -221,7 +221,7 @@ class SMTPChannel(asynchat.async_chat):
|
|||
|
||||
def smtp_MAIL(self, arg):
|
||||
print('===> MAIL', arg, file=DEBUGSTREAM)
|
||||
address = self.__getaddr('FROM:', arg)
|
||||
address = self.__getaddr('FROM:', arg) if arg else None
|
||||
if not address:
|
||||
self.push('501 Syntax: MAIL FROM:<address>')
|
||||
return
|
||||
|
@ -237,7 +237,7 @@ class SMTPChannel(asynchat.async_chat):
|
|||
if not self.__mailfrom:
|
||||
self.push('503 Error: need MAIL command')
|
||||
return
|
||||
address = self.__getaddr('TO:', arg)
|
||||
address = self.__getaddr('TO:', arg) if arg else None
|
||||
if not address:
|
||||
self.push('501 Syntax: RCPT TO: <address>')
|
||||
return
|
||||
|
|
|
@ -1,8 +0,0 @@
|
|||
# An example for http://bugs.python.org/issue815646
|
||||
|
||||
import thread
|
||||
|
||||
while 1:
|
||||
f = open("/tmp/dupa", "w")
|
||||
thread.start_new_thread(f.close, ())
|
||||
f.close()
|
|
@ -0,0 +1,14 @@
|
|||
# f.close() is not thread-safe: calling it at the same time as another
|
||||
# operation (or another close) on the same file, but done from another
|
||||
# thread, causes crashes. The issue is more complicated than it seems,
|
||||
# witness the discussions in:
|
||||
#
|
||||
# http://bugs.python.org/issue595601
|
||||
# http://bugs.python.org/issue815646
|
||||
|
||||
import thread
|
||||
|
||||
while 1:
|
||||
f = open("multithreaded_close.tmp", "w")
|
||||
thread.start_new_thread(f.close, ())
|
||||
f.close()
|
|
@ -1105,6 +1105,7 @@ _expectations = {
|
|||
_expectations['freebsd5'] = _expectations['freebsd4']
|
||||
_expectations['freebsd6'] = _expectations['freebsd4']
|
||||
_expectations['freebsd7'] = _expectations['freebsd4']
|
||||
_expectations['freebsd8'] = _expectations['freebsd4']
|
||||
|
||||
class _ExpectedSkips:
|
||||
def __init__(self):
|
||||
|
|
|
@ -13,6 +13,9 @@ class BufferSizeTest(unittest.TestCase):
|
|||
# Write s + "\n" + s to file, then open it and ensure that successive
|
||||
# .readline()s deliver what we wrote.
|
||||
|
||||
# Ensure we can open TESTFN for writing.
|
||||
test_support.unlink(test_support.TESTFN)
|
||||
|
||||
# Since C doesn't guarantee we can write/read arbitrary bytes in text
|
||||
# files, use binary mode.
|
||||
f = open(test_support.TESTFN, "wb")
|
||||
|
@ -31,11 +34,7 @@ class BufferSizeTest(unittest.TestCase):
|
|||
self.assert_(not line) # Must be at EOF
|
||||
f.close()
|
||||
finally:
|
||||
try:
|
||||
import os
|
||||
os.unlink(test_support.TESTFN)
|
||||
except:
|
||||
pass
|
||||
test_support.unlink(test_support.TESTFN)
|
||||
|
||||
def drive_one(self, pattern):
|
||||
for length in lengths:
|
||||
|
|
|
@ -2,7 +2,7 @@
|
|||
|
||||
import unittest
|
||||
from test import test_support
|
||||
from collections import NamedTuple
|
||||
from collections import namedtuple
|
||||
from collections import Hashable, Iterable, Iterator
|
||||
from collections import Sized, Container, Callable
|
||||
from collections import Set, MutableSet
|
||||
|
@ -13,18 +13,27 @@ from collections import Sequence, MutableSequence
|
|||
class TestNamedTuple(unittest.TestCase):
|
||||
|
||||
def test_factory(self):
|
||||
Point = NamedTuple('Point', 'x y')
|
||||
Point = namedtuple('Point', 'x y')
|
||||
self.assertEqual(Point.__name__, 'Point')
|
||||
self.assertEqual(Point.__doc__, 'Point(x, y)')
|
||||
self.assertEqual(Point.__slots__, ())
|
||||
self.assertEqual(Point.__module__, __name__)
|
||||
self.assertEqual(Point.__getitem__, tuple.__getitem__)
|
||||
self.assertRaises(ValueError, NamedTuple, 'abc%', 'def ghi')
|
||||
self.assertRaises(ValueError, NamedTuple, 'abc', 'def g%hi')
|
||||
NamedTuple('Point0', 'x1 y2') # Verify that numbers are allowed in names
|
||||
|
||||
self.assertRaises(ValueError, namedtuple, 'abc%', 'efg ghi') # type has non-alpha char
|
||||
self.assertRaises(ValueError, namedtuple, 'class', 'efg ghi') # type has keyword
|
||||
self.assertRaises(ValueError, namedtuple, '9abc', 'efg ghi') # type starts with digit
|
||||
|
||||
self.assertRaises(ValueError, namedtuple, 'abc', 'efg g%hi') # field with non-alpha char
|
||||
self.assertRaises(ValueError, namedtuple, 'abc', 'abc class') # field has keyword
|
||||
self.assertRaises(ValueError, namedtuple, 'abc', '8efg 9ghi') # field starts with digit
|
||||
self.assertRaises(ValueError, namedtuple, 'abc', '__efg__ ghi') # field with double underscores
|
||||
self.assertRaises(ValueError, namedtuple, 'abc', 'efg efg ghi') # duplicate field
|
||||
|
||||
namedtuple('Point0', 'x1 y2') # Verify that numbers are allowed in names
|
||||
|
||||
def test_instance(self):
|
||||
Point = NamedTuple('Point', 'x y')
|
||||
Point = namedtuple('Point', 'x y')
|
||||
p = Point(11, 22)
|
||||
self.assertEqual(p, Point(x=11, y=22))
|
||||
self.assertEqual(p, Point(11, y=22))
|
||||
|
@ -40,14 +49,20 @@ class TestNamedTuple(unittest.TestCase):
|
|||
self.assert_('__weakref__' not in dir(p))
|
||||
self.assertEqual(p.__fields__, ('x', 'y')) # test __fields__ attribute
|
||||
self.assertEqual(p.__replace__('x', 1), (1, 22)) # test __replace__ method
|
||||
self.assertEqual(p.__asdict__(), dict(x=11, y=22)) # test __asdict__ method
|
||||
|
||||
# verify that field string can have commas
|
||||
Point = NamedTuple('Point', 'x, y')
|
||||
Point = namedtuple('Point', 'x, y')
|
||||
p = Point(x=11, y=22)
|
||||
self.assertEqual(repr(p), 'Point(x=11, y=22)')
|
||||
|
||||
# verify that fieldspec can be a non-string sequence
|
||||
Point = namedtuple('Point', ('x', 'y'))
|
||||
p = Point(x=11, y=22)
|
||||
self.assertEqual(repr(p), 'Point(x=11, y=22)')
|
||||
|
||||
def test_tupleness(self):
|
||||
Point = NamedTuple('Point', 'x y')
|
||||
Point = namedtuple('Point', 'x y')
|
||||
p = Point(11, 22)
|
||||
|
||||
self.assert_(isinstance(p, tuple))
|
||||
|
@ -66,9 +81,9 @@ class TestNamedTuple(unittest.TestCase):
|
|||
self.assertRaises(AttributeError, eval, 'p.z', locals())
|
||||
|
||||
def test_odd_sizes(self):
|
||||
Zero = NamedTuple('Zero', '')
|
||||
Zero = namedtuple('Zero', '')
|
||||
self.assertEqual(Zero(), ())
|
||||
Dot = NamedTuple('Dot', 'd')
|
||||
Dot = namedtuple('Dot', 'd')
|
||||
self.assertEqual(Dot(1), (1,))
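The rewritten tests above pin down the renamed namedtuple() factory: field names may be a comma- or space-separated string (or any sequence), and keyword, duplicate, or otherwise invalid names raise ValueError. A quick sketch of the behaviour being checked:

from collections import namedtuple

Point = namedtuple('Point', 'x, y')   # commas and whitespace both work
p = Point(11, y=22)
print(p)                              # Point(x=11, y=22)
print(p.x + p[1])                     # attribute and index access: 11 + 22 -> 33
try:
    namedtuple('T', 'def ghi')        # 'def' is a keyword, so this is rejected
except ValueError as e:
    print(e)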
|
||||
|
||||
|
||||
|
|
|
@ -95,35 +95,61 @@ RoundingDict = {'ceiling' : ROUND_CEILING, #Maps test-case names to roundings.
|
|||
|
||||
# Name adapter to be able to change the Decimal and Context
|
||||
# interface without changing the test files from Cowlishaw
|
||||
nameAdapter = {'toeng':'to_eng_string',
|
||||
'tosci':'to_sci_string',
|
||||
'samequantum':'same_quantum',
|
||||
'tointegral':'to_integral_value',
|
||||
'tointegralx':'to_integral_exact',
|
||||
'remaindernear':'remainder_near',
|
||||
'divideint':'divide_int',
|
||||
'squareroot':'sqrt',
|
||||
nameAdapter = {'and':'logical_and',
|
||||
'apply':'_apply',
|
||||
'class':'number_class',
|
||||
'comparesig':'compare_signal',
|
||||
'comparetotal':'compare_total',
|
||||
'comparetotmag':'compare_total_mag',
|
||||
'copyabs':'copy_abs',
|
||||
'copy':'copy_decimal',
|
||||
'copyabs':'copy_abs',
|
||||
'copynegate':'copy_negate',
|
||||
'copysign':'copy_sign',
|
||||
'and':'logical_and',
|
||||
'or':'logical_or',
|
||||
'xor':'logical_xor',
|
||||
'divideint':'divide_int',
|
||||
'invert':'logical_invert',
|
||||
'iscanonical':'is_canonical',
|
||||
'isfinite':'is_finite',
|
||||
'isinfinite':'is_infinite',
|
||||
'isnan':'is_nan',
|
||||
'isnormal':'is_normal',
|
||||
'isqnan':'is_qnan',
|
||||
'issigned':'is_signed',
|
||||
'issnan':'is_snan',
|
||||
'issubnormal':'is_subnormal',
|
||||
'iszero':'is_zero',
|
||||
'maxmag':'max_mag',
|
||||
'minmag':'min_mag',
|
||||
'nextminus':'next_minus',
|
||||
'nextplus':'next_plus',
|
||||
'nexttoward':'next_toward',
|
||||
'or':'logical_or',
|
||||
'reduce':'normalize',
|
||||
'remaindernear':'remainder_near',
|
||||
'samequantum':'same_quantum',
|
||||
'squareroot':'sqrt',
|
||||
'toeng':'to_eng_string',
|
||||
'tointegral':'to_integral_value',
|
||||
'tointegralx':'to_integral_exact',
|
||||
'tosci':'to_sci_string',
|
||||
'xor':'logical_xor',
|
||||
}
|
||||
|
||||
# The following functions return True/False rather than a Decimal instance
|
||||
|
||||
LOGICAL_FUNCTIONS = (
|
||||
'is_canonical',
|
||||
'is_finite',
|
||||
'is_infinite',
|
||||
'is_nan',
|
||||
'is_normal',
|
||||
'is_qnan',
|
||||
'is_signed',
|
||||
'is_snan',
|
||||
'is_subnormal',
|
||||
'is_zero',
|
||||
'same_quantum',
|
||||
)
|
||||
|
||||
# For some operations (currently exp, ln, log10, power), the decNumber
|
||||
# reference implementation imposes additional restrictions on the
|
||||
# context and operands. These restrictions are not part of the
|
||||
|
@ -321,7 +347,7 @@ class DecimalTest(unittest.TestCase):
|
|||
print("--", self.context)
|
||||
try:
|
||||
result = str(funct(*vals))
|
||||
if fname == 'same_quantum':
|
||||
if fname in LOGICAL_FUNCTIONS:
|
||||
result = str(int(eval(result))) # 'True', 'False' -> '1', '0'
|
||||
except Signals as error:
|
||||
self.fail("Raised %s in %s" % (error, s))
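The driver change above follows from same_quantum and the is_* predicates now returning plain Python booleans rather than Decimal(0)/Decimal(1), so 'True'/'False' results are normalised to '1'/'0' before comparison with the test files. A quick sketch of the new return values:

from decimal import Decimal

print(Decimal('NaN').is_nan())                   # True
print(Decimal('2.50').is_zero())                 # False
print(Decimal('1').same_quantum(Decimal('2')))   # True: same exponent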
|
||||
|
@ -426,13 +452,18 @@ class DecimalExplicitConstructionTest(unittest.TestCase):
|
|||
|
||||
#bad sign
|
||||
self.assertRaises(ValueError, Decimal, (8, (4, 3, 4, 9, 1), 2) )
|
||||
self.assertRaises(ValueError, Decimal, (0., (4, 3, 4, 9, 1), 2) )
|
||||
self.assertRaises(ValueError, Decimal, (Decimal(1), (4, 3, 4, 9, 1), 2))
|
||||
|
||||
#bad exp
|
||||
self.assertRaises(ValueError, Decimal, (1, (4, 3, 4, 9, 1), 'wrong!') )
|
||||
self.assertRaises(ValueError, Decimal, (1, (4, 3, 4, 9, 1), 0.) )
|
||||
self.assertRaises(ValueError, Decimal, (1, (4, 3, 4, 9, 1), '1') )
|
||||
|
||||
#bad coefficients
|
||||
self.assertRaises(ValueError, Decimal, (1, (4, 3, 4, None, 1), 2) )
|
||||
self.assertRaises(ValueError, Decimal, (1, (4, -3, 4, 9, 1), 2) )
|
||||
self.assertRaises(ValueError, Decimal, (1, (4, 10, 4, 9, 1), 2) )
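The assertions above spell out which (sign, digits, exponent) triples Decimal() rejects: the sign must be 0 or 1, every digit an int in 0..9, and the exponent an int. For contrast, a valid triple:

from decimal import Decimal

print(Decimal((1, (4, 1, 4), -2)))   # -4.14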
|
||||
|
||||
def test_explicit_from_Decimal(self):
|
||||
|
||||
|
@ -1025,6 +1056,28 @@ class DecimalUsabilityTest(unittest.TestCase):
|
|||
d = Decimal("Infinity")
|
||||
self.assertEqual(d.as_tuple(), (0, (0,), 'F') )
|
||||
|
||||
#leading zeros in coefficient should be stripped
|
||||
d = Decimal( (0, (0, 0, 4, 0, 5, 3, 4), -2) )
|
||||
self.assertEqual(d.as_tuple(), (0, (4, 0, 5, 3, 4), -2) )
|
||||
d = Decimal( (1, (0, 0, 0), 37) )
|
||||
self.assertEqual(d.as_tuple(), (1, (0,), 37))
|
||||
d = Decimal( (1, (), 37) )
|
||||
self.assertEqual(d.as_tuple(), (1, (0,), 37))
|
||||
|
||||
#leading zeros in NaN diagnostic info should be stripped
|
||||
d = Decimal( (0, (0, 0, 4, 0, 5, 3, 4), 'n') )
|
||||
self.assertEqual(d.as_tuple(), (0, (4, 0, 5, 3, 4), 'n') )
|
||||
d = Decimal( (1, (0, 0, 0), 'N') )
|
||||
self.assertEqual(d.as_tuple(), (1, (), 'N') )
|
||||
d = Decimal( (1, (), 'n') )
|
||||
self.assertEqual(d.as_tuple(), (1, (), 'n') )
|
||||
|
||||
#coefficient in infinity should be ignored
|
||||
d = Decimal( (0, (4, 5, 3, 4), 'F') )
|
||||
self.assertEqual(d.as_tuple(), (0, (0,), 'F'))
|
||||
d = Decimal( (1, (0, 2, 7, 1), 'F') )
|
||||
self.assertEqual(d.as_tuple(), (1, (0,), 'F'))
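The new as_tuple() checks above assert that leading zeros are stripped from coefficients and from NaN diagnostic payloads, and that an infinity's coefficient is ignored. Per those assertions:

from decimal import Decimal

print(Decimal((0, (0, 0, 4, 0, 5, 3, 4), -2)).as_tuple())   # (0, (4, 0, 5, 3, 4), -2)
print(Decimal((1, (0, 2, 7, 1), 'F')).as_tuple())           # (1, (0,), 'F')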
|
||||
|
||||
def test_immutability_operations(self):
|
||||
# Do operations and check that it didn't change internal objects.
|
||||
|
||||
|
|
|
@ -47,6 +47,26 @@ class TestBasic(unittest.TestCase):
|
|||
self.assertEqual(right, list(range(150, 400)))
|
||||
self.assertEqual(list(d), list(range(50, 150)))
|
||||
|
||||
def test_maxlen(self):
|
||||
self.assertRaises(ValueError, deque, 'abc', -1)
|
||||
self.assertRaises(ValueError, deque, 'abc', -2)
|
||||
d = deque(range(10), maxlen=3)
|
||||
self.assertEqual(repr(d), 'deque([7, 8, 9], maxlen=3)')
|
||||
self.assertEqual(list(d), [7, 8, 9])
|
||||
self.assertEqual(d, deque(range(10), 3))
|
||||
d.append(10)
|
||||
self.assertEqual(list(d), [8, 9, 10])
|
||||
d.appendleft(7)
|
||||
self.assertEqual(list(d), [7, 8, 9])
|
||||
d.extend([10, 11])
|
||||
self.assertEqual(list(d), [9, 10, 11])
|
||||
d.extendleft([8, 7])
|
||||
self.assertEqual(list(d), [7, 8, 9])
|
||||
d = deque(range(200), maxlen=10)
|
||||
d.append(d)
|
||||
d = deque(range(10), maxlen=None)
|
||||
self.assertEqual(repr(d), 'deque([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])')
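The test_maxlen() additions above describe a bounded deque that silently drops items from the opposite end once full, i.e. a ring buffer:

from collections import deque

last_three = deque(maxlen=3)
for i in range(10):
    last_three.append(i)   # once full, the oldest item is discarded
print(last_three)          # deque([7, 8, 9], maxlen=3)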
|
||||
|
||||
def test_comparisons(self):
|
||||
d = deque('xabc'); d.popleft()
|
||||
for e in [d, deque('abc'), deque('ab'), deque(), list(d)]:
|
||||
|
@ -254,7 +274,7 @@ class TestBasic(unittest.TestCase):
|
|||
os.remove(test_support.TESTFN)
|
||||
|
||||
def test_init(self):
|
||||
self.assertRaises(TypeError, deque, 'abc', 2);
|
||||
self.assertRaises(TypeError, deque, 'abc', 2, 3);
|
||||
self.assertRaises(TypeError, deque, 1);
|
||||
|
||||
def test_hash(self):
|
||||
|
@ -340,13 +360,13 @@ class TestBasic(unittest.TestCase):
|
|||
self.assertNotEqual(id(d), id(e))
|
||||
self.assertEqual(list(d), list(e))
|
||||
|
||||
def test_pickle_recursive(self):
|
||||
d = deque('abc')
|
||||
d.append(d)
|
||||
for i in (0, 1, 2):
|
||||
e = pickle.loads(pickle.dumps(d, i))
|
||||
self.assertNotEqual(id(d), id(e))
|
||||
self.assertEqual(id(e), id(e[-1]))
|
||||
## def test_pickle_recursive(self):
|
||||
## d = deque('abc')
|
||||
## d.append(d)
|
||||
## for i in (0, 1, 2):
|
||||
## e = pickle.loads(pickle.dumps(d, i))
|
||||
## self.assertNotEqual(id(d), id(e))
|
||||
## self.assertEqual(id(e), id(e[-1]))
|
||||
|
||||
def test_deepcopy(self):
|
||||
mut = [10]
|
||||
|
@ -452,24 +472,40 @@ class TestSubclass(unittest.TestCase):
|
|||
self.assertEqual(type(d), type(e))
|
||||
self.assertEqual(list(d), list(e))
|
||||
|
||||
def test_pickle(self):
|
||||
d = Deque('abc')
|
||||
d.append(d)
|
||||
d = Deque('abcde', maxlen=4)
|
||||
|
||||
e = pickle.loads(pickle.dumps(d))
|
||||
e = d.__copy__()
|
||||
self.assertEqual(type(d), type(e))
|
||||
self.assertEqual(list(d), list(e))
|
||||
|
||||
e = Deque(d)
|
||||
self.assertEqual(type(d), type(e))
|
||||
self.assertEqual(list(d), list(e))
|
||||
|
||||
s = pickle.dumps(d)
|
||||
e = pickle.loads(s)
|
||||
self.assertNotEqual(id(d), id(e))
|
||||
self.assertEqual(type(d), type(e))
|
||||
dd = d.pop()
|
||||
ee = e.pop()
|
||||
self.assertEqual(id(e), id(ee))
|
||||
self.assertEqual(d, e)
|
||||
self.assertEqual(list(d), list(e))
|
||||
|
||||
d.x = d
|
||||
e = pickle.loads(pickle.dumps(d))
|
||||
self.assertEqual(id(e), id(e.x))
|
||||
|
||||
d = DequeWithBadIter('abc')
|
||||
self.assertRaises(TypeError, pickle.dumps, d)
|
||||
## def test_pickle(self):
|
||||
## d = Deque('abc')
|
||||
## d.append(d)
|
||||
##
|
||||
## e = pickle.loads(pickle.dumps(d))
|
||||
## self.assertNotEqual(id(d), id(e))
|
||||
## self.assertEqual(type(d), type(e))
|
||||
## dd = d.pop()
|
||||
## ee = e.pop()
|
||||
## self.assertEqual(id(e), id(ee))
|
||||
## self.assertEqual(d, e)
|
||||
##
|
||||
## d.x = d
|
||||
## e = pickle.loads(pickle.dumps(d))
|
||||
## self.assertEqual(id(e), id(e.x))
|
||||
##
|
||||
## d = DequeWithBadIter('abc')
|
||||
## self.assertRaises(TypeError, pickle.dumps, d)
|
||||
|
||||
def test_weakref(self):
|
||||
d = deque('gallahad')
|
||||
|
|
|
@ -23,7 +23,7 @@ if sys.platform.startswith('atheos'):
|
|||
if sys.platform in ('netbsd1', 'netbsd2', 'netbsd3',
|
||||
'Darwin1.2', 'darwin',
|
||||
'freebsd2', 'freebsd3', 'freebsd4', 'freebsd5',
|
||||
'freebsd6', 'freebsd7',
|
||||
'freebsd6', 'freebsd7', 'freebsd8',
|
||||
'bsdos2', 'bsdos3', 'bsdos4',
|
||||
'openbsd', 'openbsd2', 'openbsd3', 'openbsd4'):
|
||||
if struct.calcsize('l') == 8:
|
||||
|
|
|
@ -83,13 +83,25 @@ class BasicTest(TestCase):
|
|||
resp = httplib.HTTPResponse(sock)
|
||||
resp.begin()
|
||||
self.assertEqual(resp.read(), b"Text")
|
||||
resp.close()
|
||||
self.assertTrue(resp.isclosed())
|
||||
|
||||
body = "HTTP/1.1 400.100 Not Ok\r\n\r\nText"
|
||||
sock = FakeSocket(body)
|
||||
resp = httplib.HTTPResponse(sock)
|
||||
self.assertRaises(httplib.BadStatusLine, resp.begin)
|
||||
|
||||
def test_partial_reads(self):
|
||||
# if we have a length, the system knows when to close itself
|
||||
# same behaviour as when we read the whole thing with read()
|
||||
body = "HTTP/1.1 200 Ok\r\nContent-Length: 4\r\n\r\nText"
|
||||
sock = FakeSocket(body)
|
||||
resp = httplib.HTTPResponse(sock)
|
||||
resp.begin()
|
||||
self.assertEqual(resp.read(2), b'Te')
|
||||
self.assertFalse(resp.isclosed())
|
||||
self.assertEqual(resp.read(2), b'xt')
|
||||
self.assertTrue(resp.isclosed())
|
||||
|
||||
def test_host_port(self):
|
||||
# Check invalid host_port
|
||||
|
||||
|
@ -135,7 +147,6 @@ class BasicTest(TestCase):
|
|||
resp.begin()
|
||||
if resp.read():
|
||||
self.fail("Did not expect response from HEAD request")
|
||||
resp.close()
|
||||
|
||||
def test_send_file(self):
|
||||
expected = (b'GET /foo HTTP/1.1\r\nHost: example.com\r\n'
|
||||
|
|
|
@ -55,9 +55,14 @@ class TestBasicOps(unittest.TestCase):
|
|||
self.assertEqual(lzip('abc',count()), [('a', 0), ('b', 1), ('c', 2)])
|
||||
self.assertEqual(lzip('abc',count(3)), [('a', 3), ('b', 4), ('c', 5)])
|
||||
self.assertEqual(take(2, lzip('abc',count(3))), [('a', 3), ('b', 4)])
|
||||
self.assertEqual(take(2, zip('abc',count(-1))), [('a', -1), ('b', 0)])
|
||||
self.assertEqual(take(2, zip('abc',count(-3))), [('a', -3), ('b', -2)])
|
||||
self.assertRaises(TypeError, count, 2, 3)
|
||||
self.assertRaises(TypeError, count, 'a')
|
||||
self.assertRaises(OverflowError, list, islice(count(maxsize-5), 10))
|
||||
self.assertEqual(list(islice(count(maxsize-5), 10)),
|
||||
list(range(maxsize-5, maxsize+5)))
|
||||
self.assertEqual(list(islice(count(-maxsize-5), 10)),
|
||||
list(range(-maxsize-5, -maxsize+5)))
|
||||
c = count(3)
|
||||
self.assertEqual(repr(c), 'count(3)')
|
||||
next(c)
|
||||
|
@ -66,6 +71,11 @@ class TestBasicOps(unittest.TestCase):
|
|||
self.assertEqual(repr(c), 'count(-9)')
|
||||
next(c)
|
||||
self.assertEqual(next(c), -8)
|
||||
for i in (-sys.maxint-5, -sys.maxint+5 ,-10, -1, 0, 10, sys.maxint-5, sys.maxint+5):
|
||||
# Test repr (ignoring the L in longs)
|
||||
r1 = repr(count(i)).replace('L', '')
|
||||
r2 = 'count(%r)'.__mod__(i).replace('L', '')
|
||||
self.assertEqual(r1, r2)
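The extra assertions above cover count() starting beyond the platform word size and its repr; a short sketch of the now-unbounded behaviour, using sys.maxsize (the Python 3 spelling of the old sys.maxint):

import sys
from itertools import count, islice

c = count(sys.maxsize - 2)      # no longer capped at the platform integer
print(list(islice(c, 5)))       # five values straddling sys.maxsize
print(repr(count(3)))           # count(3)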
|
||||
|
||||
def test_cycle(self):
|
||||
self.assertEqual(take(10, cycle('abc')), list('abcabcabca'))
|
||||
|
|
|
@ -1,4 +1,5 @@
|
|||
import unittest
|
||||
import sys
|
||||
from test import test_support, list_tests
|
||||
|
||||
class ListTest(list_tests.CommonTest):
|
||||
|
@ -18,6 +19,14 @@ class ListTest(list_tests.CommonTest):
|
|||
self.assertEqual(len([0]), 1)
|
||||
self.assertEqual(len([0, 1, 2]), 3)
|
||||
|
||||
def test_overflow(self):
|
||||
lst = [4, 5, 6, 7]
|
||||
n = int((sys.maxint*2+2) // len(lst))
|
||||
def mul(a, b): return a * b
|
||||
def imul(a, b): a *= b
|
||||
self.assertRaises((MemoryError, OverflowError), mul, lst, n)
|
||||
self.assertRaises((MemoryError, OverflowError), imul, lst, n)
|
||||
|
||||
def test_main(verbose=None):
|
||||
test_support.run_unittest(ListTest)
|
||||
|
||||
|
|
|
@ -339,6 +339,50 @@ class MmapTests(unittest.TestCase):
|
|||
m[start:stop:step] = data
|
||||
self.assertEquals(m[:], bytes(L))
|
||||
|
||||
def make_mmap_file (self, f, halfsize):
|
||||
# Write 2 pages worth of data to the file
|
||||
f.write (b'\0' * halfsize)
|
||||
f.write (b'foo')
|
||||
f.write (b'\0' * (halfsize - 3))
|
||||
f.flush ()
|
||||
return mmap.mmap (f.fileno(), 0)
|
||||
|
||||
def test_offset (self):
|
||||
f = open (TESTFN, 'w+b')
|
||||
|
||||
try: # unlink TESTFN no matter what
|
||||
halfsize = mmap.ALLOCATIONGRANULARITY
|
||||
m = self.make_mmap_file (f, halfsize)
|
||||
m.close ()
|
||||
f.close ()
|
||||
|
||||
mapsize = halfsize * 2
|
||||
# Try invalid offset
|
||||
f = open(TESTFN, "r+b")
|
||||
for offset in [-2, -1, None]:
|
||||
try:
|
||||
m = mmap.mmap(f.fileno(), mapsize, offset=offset)
|
||||
self.assertEqual(0, 1)
|
||||
except (ValueError, TypeError, OverflowError):
|
||||
pass
|
||||
else:
|
||||
self.assertEqual(0, 0)
|
||||
f.close()
|
||||
|
||||
# Try valid offset, hopefully 8192 works on all OSes
|
||||
f = open(TESTFN, "r+b")
|
||||
m = mmap.mmap(f.fileno(), mapsize - halfsize, offset=halfsize)
|
||||
self.assertEqual(m[0:3], b'foo')
|
||||
f.close()
|
||||
m.close()
|
||||
|
||||
finally:
|
||||
f.close()
|
||||
try:
|
||||
os.unlink(TESTFN)
|
||||
except OSError:
|
||||
pass
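make_mmap_file() and test_offset() above exercise the new offset argument: only part of the file is mapped, and the offset must be a multiple of mmap.ALLOCATIONGRANULARITY. A self-contained sketch (the file name is just for illustration):

import mmap, os

step = mmap.ALLOCATIONGRANULARITY
with open('offset_demo.bin', 'wb') as f:
    f.write(b'\0' * step + b'foo' + b'\0' * (step - 3))

with open('offset_demo.bin', 'r+b') as f:
    m = mmap.mmap(f.fileno(), step, offset=step)   # map only the second chunk
    print(m[0:3])                                  # b'foo'
    m.close()
os.remove('offset_demo.bin')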
|
||||
|
||||
def test_main():
|
||||
run_unittest(MmapTests)
|
||||
|
||||
|
|
|
@ -338,7 +338,7 @@ class GeneralModuleTests(unittest.TestCase):
|
|||
# I've ordered this by protocols that have both a tcp and udp
|
||||
# protocol, at least for modern Linuxes.
|
||||
if sys.platform in ('linux2', 'freebsd4', 'freebsd5', 'freebsd6',
|
||||
'freebsd7', 'darwin'):
|
||||
'freebsd7', 'freebsd8', 'darwin'):
|
||||
# avoid the 'echo' service on this platform, as there is an
|
||||
# assumption breaking non-standard port/protocol entry
|
||||
services = ('daytime', 'qotd', 'domain')
|
||||
|
|
|
@ -100,10 +100,19 @@ def bind_port(sock, host='', preferred_port=54321):
|
|||
tests and we don't try multiple ports, the test can fail. This
|
||||
makes the test more robust."""
|
||||
|
||||
# some random ports that hopefully no one is listening on.
|
||||
for port in [preferred_port, 9907, 10243, 32999]:
|
||||
# Find some random ports that hopefully no one is listening on.
|
||||
# Ideally each test would clean up after itself and not continue listening
|
||||
# on any ports. However, this isn't the case. The last port (0) is
|
||||
# a stop-gap that asks the O/S to assign a port. Whenever the warning
|
||||
# message below is printed, the test that is listening on the port should
|
||||
# be fixed to close the socket at the end of the test.
|
||||
# Another reason why we can't use a port is that another process (possibly
|
||||
# another instance of the test suite) may be using the same port.
|
||||
for port in [preferred_port, 9907, 10243, 32999, 0]:
|
||||
try:
|
||||
sock.bind((host, port))
|
||||
if port == 0:
|
||||
port = sock.getsockname()[1]
|
||||
return port
|
||||
except socket.error as e:
|
||||
(err, msg) = e.args
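The expanded comment and the new 0 entry in the port list rely on the operating system assigning an ephemeral port when a socket is bound to port 0; the core of that trick is:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('127.0.0.1', 0))         # port 0: let the OS pick a free port
print(s.getsockname()[1])        # the port that was actually assigned
s.close()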
|
||||
|
@ -519,8 +528,7 @@ def _run_suite(suite):
|
|||
elif len(result.failures) == 1 and not result.errors:
|
||||
err = result.failures[0][1]
|
||||
else:
|
||||
msg = "errors occurred; run in verbose mode for details"
|
||||
raise TestFailed(msg)
|
||||
err = "errors occurred; run in verbose mode for details"
|
||||
raise TestFailed(err)
|
||||
|
||||
|
||||
|
|
|
@ -93,6 +93,13 @@ class TestCRLFNewlines(TestGenericUnivNewlines):
|
|||
NEWLINE = '\r\n'
|
||||
DATA = DATA_CRLF
|
||||
|
||||
def test_tell(self):
|
||||
fp = open(test_support.TESTFN, self.READMODE)
|
||||
self.assertEqual(repr(fp.newlines), repr(None))
|
||||
data = fp.readline()
|
||||
pos = fp.tell()
|
||||
self.assertEqual(repr(fp.newlines), repr(self.NEWLINE))
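The new test_tell() above checks that reading a CRLF-terminated line records which newline was seen and leaves tell() usable; a sketch of the same behaviour with a throwaway file, in Python 3 text mode (which enables universal newlines by default):

import os

with open('crlf_demo.txt', 'wb') as f:
    f.write(b'one\r\ntwo\r\n')

with open('crlf_demo.txt') as f:
    f.readline()
    print(f.newlines)     # '\r\n'
    print(f.tell() > 0)   # True: the position is still consistent after the CRLF line

os.remove('crlf_demo.txt')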
|
||||
|
||||
class TestMixedNewlines(TestGenericUnivNewlines):
|
||||
NEWLINE = ('\r', '\n')
|
||||
DATA = DATA_MIXED
|
||||
|
|
|
@ -42,14 +42,18 @@ class ChecksumTestCase(unittest.TestCase):
|
|||
|
||||
class ExceptionTestCase(unittest.TestCase):
|
||||
# make sure we generate some expected errors
|
||||
def test_bigbits(self):
|
||||
# specifying total bits too large causes an error
|
||||
self.assertRaises(zlib.error,
|
||||
zlib.compress, 'ERROR', zlib.MAX_WBITS + 1)
|
||||
def test_badlevel(self):
|
||||
# specifying compression level out of range causes an error
|
||||
# (but -1 is Z_DEFAULT_COMPRESSION and apparently the zlib
|
||||
# accepts 0 too)
|
||||
self.assertRaises(zlib.error, zlib.compress, 'ERROR', 10)
|
||||
|
||||
def test_badcompressobj(self):
|
||||
# verify failure on building compress object with bad params
|
||||
self.assertRaises(ValueError, zlib.compressobj, 1, zlib.DEFLATED, 0)
|
||||
# specifying total bits too large causes an error
|
||||
self.assertRaises(ValueError,
|
||||
zlib.compressobj, 1, zlib.DEFLATED, zlib.MAX_WBITS + 1)
|
||||
|
||||
def test_baddecompressobj(self):
|
||||
# verify failure on building decompress object with bad params
|
||||
|
|
|
@ -960,7 +960,7 @@ class CharacterData(Childless, Node):
|
|||
dotdotdot = "..."
|
||||
else:
|
||||
dotdotdot = ""
|
||||
return "<DOM %s node \"%s%s\">" % (
|
||||
return '<DOM %s node "%r%s">' % (
|
||||
self.__class__.__name__, data[0:10], dotdotdot)
|
||||
|
||||
def substringData(self, offset, count):
|
||||
|
|
|
@ -336,7 +336,8 @@ LIBRARY_OBJS= \
|
|||
# Rules
|
||||
|
||||
# Default target
|
||||
all: $(BUILDPYTHON) oldsharedmods sharedmods
|
||||
all: build_all
|
||||
build_all: $(BUILDPYTHON) oldsharedmods sharedmods
|
||||
|
||||
# Build the interpreter
|
||||
$(BUILDPYTHON): Modules/python.o $(LIBRARY) $(LDLIBRARY)
|
||||
|
@ -476,7 +477,7 @@ Modules/python.o: $(srcdir)/Modules/python.c
|
|||
|
||||
|
||||
$(GRAMMAR_H) $(GRAMMAR_C): $(PGEN) $(GRAMMAR_INPUT)
|
||||
-@ mkdir Include
|
||||
-@$(INSTALL) -d Include
|
||||
-$(PGEN) $(GRAMMAR_INPUT) $(GRAMMAR_H) $(GRAMMAR_C)
|
||||
|
||||
$(PGEN): $(PGENOBJS)
|
||||
|
@ -758,7 +759,7 @@ LIBSUBDIRS= lib-tk site-packages test test/output test/data \
|
|||
distutils distutils/command distutils/tests $(XMLLIBSUBDIRS) \
|
||||
setuptools setuptools/command setuptools/tests setuptools.egg-info \
|
||||
curses $(MACHDEPS)
|
||||
libinstall: $(BUILDPYTHON) $(srcdir)/Lib/$(PLATDIR)
|
||||
libinstall: build_all $(srcdir)/Lib/$(PLATDIR)
|
||||
@for i in $(SCRIPTDIR) $(LIBDEST); \
|
||||
do \
|
||||
if test ! -d $(DESTDIR)$$i; then \
|
||||
|
@ -1126,7 +1127,7 @@ funny:
|
|||
Python/thread.o: @THREADHEADERS@
|
||||
|
||||
# Declare targets that aren't real files
|
||||
.PHONY: all sharedmods oldsharedmods test quicktest memtest
|
||||
.PHONY: all build_all sharedmods oldsharedmods test quicktest memtest
|
||||
.PHONY: install altinstall oldsharedinstall bininstall altbininstall
|
||||
.PHONY: maninstall libinstall inclinstall libainstall sharedinstall
|
||||
.PHONY: frameworkinstall frameworkinstallframework frameworkinstallstructure
|
||||
|
|
|
@ -17,6 +17,12 @@ the format to accommodate documentation needs as they arise.
|
|||
Permissions History
|
||||
-------------------
|
||||
|
||||
- Christian Heimes was given SVN access on 31 October 2007 by MvL,
|
||||
for general contributions to Python.
|
||||
|
||||
- Chris Monson was given SVN access on 20 October 2007 by NCN,
|
||||
for his work on editing PEPs.
|
||||
|
||||
- Bill Janssen was given SVN access on 28 August 2007 by NCN,
|
||||
for his work on the SSL module and other things related to (SSL) sockets.
|
||||
|
||||
|
|
Modules/_bsddb.c
|
@ -87,21 +87,16 @@
|
|||
|
||||
#include <stddef.h> /* for offsetof() */
|
||||
#include <Python.h>
|
||||
#include <db.h>
|
||||
|
||||
#define COMPILING_BSDDB_C
|
||||
#include "bsddb.h"
|
||||
#undef COMPILING_BSDDB_C
|
||||
|
||||
static char *svn_id = "$Id$";
|
||||
|
||||
/* --------------------------------------------------------------------- */
|
||||
/* Various macro definitions */
|
||||
|
||||
/* 40 = 4.0, 33 = 3.3; this will break if the second number is > 9 */
|
||||
#define DBVER (DB_VERSION_MAJOR * 10 + DB_VERSION_MINOR)
|
||||
#if DB_VERSION_MINOR > 9
|
||||
#error "eek! DBVER can't handle minor versions > 9"
|
||||
#endif
|
||||
|
||||
#define PY_BSDDB_VERSION "4.5.0"
|
||||
static char *svn_id = "$Id$";
|
||||
|
||||
|
||||
#if (PY_VERSION_HEX < 0x02050000)
|
||||
typedef int Py_ssize_t;
|
||||
#endif
|
||||
|
@ -196,107 +191,15 @@ static PyObject* DBPermissionsError; /* EPERM */
|
|||
/* --------------------------------------------------------------------- */
|
||||
/* Structure definitions */
|
||||
|
||||
#if PYTHON_API_VERSION >= 1010 /* python >= 2.1 support weak references */
|
||||
#define HAVE_WEAKREF
|
||||
#else
|
||||
#undef HAVE_WEAKREF
|
||||
#if PYTHON_API_VERSION < 1010
|
||||
#error "Python 2.1 or later required"
|
||||
#endif
|
||||
|
||||
/* if Python >= 2.1 better support warnings */
|
||||
#if PYTHON_API_VERSION >= 1010
|
||||
#define HAVE_WARNINGS
|
||||
#else
|
||||
#undef HAVE_WARNINGS
|
||||
#endif
|
||||
|
||||
#if PYTHON_API_VERSION <= 1007
|
||||
/* 1.5 compatibility */
|
||||
#define PyObject_New PyObject_NEW
|
||||
#define PyObject_Del PyMem_DEL
|
||||
#endif
|
||||
|
||||
struct behaviourFlags {
|
||||
/* What is the default behaviour when DB->get or DBCursor->get returns a
|
||||
DB_NOTFOUND || DB_KEYEMPTY error? Return None or raise an exception? */
|
||||
unsigned int getReturnsNone : 1;
|
||||
/* What is the default behaviour for DBCursor.set* methods when DBCursor->get
|
||||
* returns a DB_NOTFOUND || DB_KEYEMPTY error? Return None or raise? */
|
||||
unsigned int cursorSetReturnsNone : 1;
|
||||
};
|
||||
|
||||
/* Defaults for moduleFlags in DBEnvObject and DBObject. */
|
||||
#define DEFAULT_GET_RETURNS_NONE 1
|
||||
#define DEFAULT_CURSOR_SET_RETURNS_NONE 1 /* 0 in pybsddb < 4.2, python < 2.4 */
|
||||
|
||||
typedef struct {
|
||||
PyObject_HEAD
|
||||
DB_ENV* db_env;
|
||||
u_int32_t flags; /* saved flags from open() */
|
||||
int closed;
|
||||
struct behaviourFlags moduleFlags;
|
||||
#ifdef HAVE_WEAKREF
|
||||
PyObject *in_weakreflist; /* List of weak references */
|
||||
#endif
|
||||
} DBEnvObject;
|
||||
|
||||
|
||||
typedef struct {
|
||||
PyObject_HEAD
|
||||
DB* db;
|
||||
DBEnvObject* myenvobj; /* PyObject containing the DB_ENV */
|
||||
u_int32_t flags; /* saved flags from open() */
|
||||
u_int32_t setflags; /* saved flags from set_flags() */
|
||||
int haveStat;
|
||||
struct behaviourFlags moduleFlags;
|
||||
#if (DBVER >= 33)
|
||||
PyObject* associateCallback;
|
||||
PyObject* btCompareCallback;
|
||||
int primaryDBType;
|
||||
#endif
|
||||
#ifdef HAVE_WEAKREF
|
||||
PyObject *in_weakreflist; /* List of weak references */
|
||||
#endif
|
||||
} DBObject;
|
||||
|
||||
|
||||
typedef struct {
|
||||
PyObject_HEAD
|
||||
DBC* dbc;
|
||||
DBObject* mydb;
|
||||
#ifdef HAVE_WEAKREF
|
||||
PyObject *in_weakreflist; /* List of weak references */
|
||||
#endif
|
||||
} DBCursorObject;
|
||||
|
||||
|
||||
typedef struct {
|
||||
PyObject_HEAD
|
||||
DB_TXN* txn;
|
||||
PyObject *env;
|
||||
#ifdef HAVE_WEAKREF
|
||||
PyObject *in_weakreflist; /* List of weak references */
|
||||
#endif
|
||||
} DBTxnObject;
|
||||
|
||||
|
||||
typedef struct {
|
||||
PyObject_HEAD
|
||||
DB_LOCK lock;
|
||||
#ifdef HAVE_WEAKREF
|
||||
PyObject *in_weakreflist; /* List of weak references */
|
||||
#endif
|
||||
} DBLockObject;
|
||||
|
||||
#if (DBVER >= 43)
|
||||
typedef struct {
|
||||
PyObject_HEAD
|
||||
DB_SEQUENCE* sequence;
|
||||
DBObject* mydb;
|
||||
#ifdef HAVE_WEAKREF
|
||||
PyObject *in_weakreflist; /* List of weak references */
|
||||
#endif
|
||||
} DBSequenceObject;
|
||||
static PyTypeObject DBSequence_Type;
|
||||
#endif
|
||||
|
||||
static PyTypeObject DB_Type, DBCursor_Type, DBEnv_Type, DBTxn_Type, DBLock_Type;
|
||||
|
||||
|
@ -628,12 +531,7 @@ static int makeDBError(int err)
|
|||
strncat(errTxt, _db_errmsg, bytes_left);
|
||||
}
|
||||
_db_errmsg[0] = 0;
|
||||
#ifdef HAVE_WARNINGS
|
||||
exceptionRaised = PyErr_WarnEx(PyExc_RuntimeWarning, errTxt, 1);
|
||||
#else
|
||||
fprintf(stderr, errTxt);
|
||||
fprintf(stderr, "\n");
|
||||
#endif
|
||||
|
||||
#else /* do an exception instead */
|
||||
errObj = DBIncompleteError;
|
||||
|
@ -887,9 +785,7 @@ newDBObject(DBEnvObject* arg, int flags)
|
|||
self->btCompareCallback = NULL;
|
||||
self->primaryDBType = 0;
|
||||
#endif
|
||||
#ifdef HAVE_WEAKREF
|
||||
self->in_weakreflist = NULL;
|
||||
#endif
|
||||
|
||||
/* keep a reference to our python DBEnv object */
|
||||
if (arg) {
|
||||
|
@ -940,21 +836,17 @@ DB_dealloc(DBObject* self)
|
|||
MYDB_BEGIN_ALLOW_THREADS;
|
||||
self->db->close(self->db, 0);
|
||||
MYDB_END_ALLOW_THREADS;
|
||||
#ifdef HAVE_WARNINGS
|
||||
} else {
|
||||
PyErr_WarnEx(PyExc_RuntimeWarning,
|
||||
"DB could not be closed in destructor:"
|
||||
" DBEnv already closed",
|
||||
1);
|
||||
#endif
|
||||
}
|
||||
self->db = NULL;
|
||||
}
|
||||
#ifdef HAVE_WEAKREF
|
||||
if (self->in_weakreflist != NULL) {
|
||||
PyObject_ClearWeakRefs((PyObject *) self);
|
||||
}
|
||||
#endif
|
||||
if (self->myenvobj) {
|
||||
Py_DECREF(self->myenvobj);
|
||||
self->myenvobj = NULL;
|
||||
|
@ -982,9 +874,7 @@ newDBCursorObject(DBC* dbc, DBObject* db)
|
|||
|
||||
self->dbc = dbc;
|
||||
self->mydb = db;
|
||||
#ifdef HAVE_WEAKREF
|
||||
self->in_weakreflist = NULL;
|
||||
#endif
|
||||
Py_INCREF(self->mydb);
|
||||
return self;
|
||||
}
|
||||
|
@ -995,11 +885,9 @@ DBCursor_dealloc(DBCursorObject* self)
|
|||
{
|
||||
int err;
|
||||
|
||||
#ifdef HAVE_WEAKREF
|
||||
if (self->in_weakreflist != NULL) {
|
||||
PyObject_ClearWeakRefs((PyObject *) self);
|
||||
}
|
||||
#endif
|
||||
|
||||
if (self->dbc != NULL) {
|
||||
MYDB_BEGIN_ALLOW_THREADS;
|
||||
|
@ -1032,9 +920,7 @@ newDBEnvObject(int flags)
|
|||
self->flags = flags;
|
||||
self->moduleFlags.getReturnsNone = DEFAULT_GET_RETURNS_NONE;
|
||||
self->moduleFlags.cursorSetReturnsNone = DEFAULT_CURSOR_SET_RETURNS_NONE;
|
||||
#ifdef HAVE_WEAKREF
|
||||
self->in_weakreflist = NULL;
|
||||
#endif
|
||||
|
||||
MYDB_BEGIN_ALLOW_THREADS;
|
||||
err = db_env_create(&self->db_env, flags);
|
||||
|
@ -1053,11 +939,9 @@ newDBEnvObject(int flags)
|
|||
static void
|
||||
DBEnv_dealloc(DBEnvObject* self)
|
||||
{
|
||||
#ifdef HAVE_WEAKREF
|
||||
if (self->in_weakreflist != NULL) {
|
||||
PyObject_ClearWeakRefs((PyObject *) self);
|
||||
}
|
||||
#endif
|
||||
|
||||
if (self->db_env && !self->closed) {
|
||||
MYDB_BEGIN_ALLOW_THREADS;
|
||||
|
@ -1077,9 +961,7 @@ newDBTxnObject(DBEnvObject* myenv, DB_TXN *parent, int flags)
|
|||
return NULL;
|
||||
Py_INCREF(myenv);
|
||||
self->env = (PyObject*)myenv;
|
||||
#ifdef HAVE_WEAKREF
|
||||
self->in_weakreflist = NULL;
|
||||
#endif
|
||||
|
||||
MYDB_BEGIN_ALLOW_THREADS;
|
||||
#if (DBVER >= 40)
|
||||
|
@ -1100,13 +982,10 @@ newDBTxnObject(DBEnvObject* myenv, DB_TXN *parent, int flags)
|
|||
static void
|
||||
DBTxn_dealloc(DBTxnObject* self)
|
||||
{
|
||||
#ifdef HAVE_WEAKREF
|
||||
if (self->in_weakreflist != NULL) {
|
||||
PyObject_ClearWeakRefs((PyObject *) self);
|
||||
}
|
||||
#endif
|
||||
|
||||
#ifdef HAVE_WARNINGS
|
||||
if (self->txn) {
|
||||
/* it hasn't been finalized, abort it! */
|
||||
MYDB_BEGIN_ALLOW_THREADS;
|
||||
|
@ -1121,7 +1000,6 @@ DBTxn_dealloc(DBTxnObject* self)
|
|||
" No prior commit() or abort().",
|
||||
1);
|
||||
}
|
||||
#endif
|
||||
|
||||
Py_DECREF(self->env);
|
||||
PyObject_Del(self);
|
||||
|
@ -1136,9 +1014,7 @@ newDBLockObject(DBEnvObject* myenv, u_int32_t locker, DBT* obj,
|
|||
DBLockObject* self = PyObject_New(DBLockObject, &DBLock_Type);
|
||||
if (self == NULL)
|
||||
return NULL;
|
||||
#ifdef HAVE_WEAKREF
|
||||
self->in_weakreflist = NULL;
|
||||
#endif
|
||||
|
||||
MYDB_BEGIN_ALLOW_THREADS;
|
||||
#if (DBVER >= 40)
|
||||
|
@ -1160,11 +1036,9 @@ newDBLockObject(DBEnvObject* myenv, u_int32_t locker, DBT* obj,
|
|||
static void
|
||||
DBLock_dealloc(DBLockObject* self)
|
||||
{
|
||||
#ifdef HAVE_WEAKREF
|
||||
if (self->in_weakreflist != NULL) {
|
||||
PyObject_ClearWeakRefs((PyObject *) self);
|
||||
}
|
||||
#endif
|
||||
/* TODO: is this lock held? should we release it? */
|
||||
|
||||
PyObject_Del(self);
|
||||
|
@ -1181,9 +1055,7 @@ newDBSequenceObject(DBObject* mydb, int flags)
|
|||
return NULL;
|
||||
Py_INCREF(mydb);
|
||||
self->mydb = mydb;
|
||||
#ifdef HAVE_WEAKREF
|
||||
self->in_weakreflist = NULL;
|
||||
#endif
|
||||
|
||||
|
||||
MYDB_BEGIN_ALLOW_THREADS;
|
||||
|
@ -1202,11 +1074,9 @@ newDBSequenceObject(DBObject* mydb, int flags)
|
|||
static void
|
||||
DBSequence_dealloc(DBSequenceObject* self)
|
||||
{
|
||||
#ifdef HAVE_WEAKREF
|
||||
if (self->in_weakreflist != NULL) {
|
||||
PyObject_ClearWeakRefs((PyObject *) self);
|
||||
}
|
||||
#endif
|
||||
|
||||
Py_DECREF(self->mydb);
|
||||
PyObject_Del(self);
|
||||
|
@ -1432,7 +1302,6 @@ DB_close(DBObject* self, PyObject* args)
|
|||
}
|
||||
|
||||
|
||||
#if (DBVER >= 32)
|
||||
static PyObject*
|
||||
_DB_consume(DBObject* self, PyObject* args, PyObject* kwargs, int consume_flag)
|
||||
{
|
||||
|
@ -1500,8 +1369,6 @@ DB_consume_wait(DBObject* self, PyObject* args, PyObject* kwargs,
|
|||
{
|
||||
return _DB_consume(self, args, kwargs, DB_CONSUME_WAIT);
|
||||
}
|
||||
#endif
|
||||
|
||||
|
||||
|
||||
static PyObject*
|
||||
|
@ -2526,7 +2393,6 @@ DB_set_re_source(DBObject* self, PyObject* args)
|
|||
}
|
||||
|
||||
|
||||
#if (DBVER >= 32)
|
||||
static PyObject*
|
||||
DB_set_q_extentsize(DBObject* self, PyObject* args)
|
||||
{
|
||||
|
@ -2543,7 +2409,6 @@ DB_set_q_extentsize(DBObject* self, PyObject* args)
|
|||
RETURN_IF_ERR();
|
||||
RETURN_NONE();
|
||||
}
|
||||
#endif
|
||||
|
||||
static PyObject*
|
||||
DB_stat(DBObject* self, PyObject* args, PyObject* kwargs)
|
||||
|
@ -4144,7 +4009,6 @@ DBEnv_set_cachesize(DBEnvObject* self, PyObject* args)
|
|||
}
|
||||
|
||||
|
||||
#if (DBVER >= 32)
|
||||
static PyObject*
|
||||
DBEnv_set_flags(DBEnvObject* self, PyObject* args)
|
||||
{
|
||||
|
@ -4161,7 +4025,6 @@ DBEnv_set_flags(DBEnvObject* self, PyObject* args)
|
|||
RETURN_IF_ERR();
|
||||
RETURN_NONE();
|
||||
}
|
||||
#endif
|
||||
|
||||
|
||||
static PyObject*
|
||||
|
@ -4288,7 +4151,6 @@ DBEnv_set_lk_max(DBEnvObject* self, PyObject* args)
|
|||
#endif
|
||||
|
||||
|
||||
#if (DBVER >= 32)
|
||||
|
||||
static PyObject*
|
||||
DBEnv_set_lk_max_locks(DBEnvObject* self, PyObject* args)
|
||||
|
@ -4340,8 +4202,6 @@ DBEnv_set_lk_max_objects(DBEnvObject* self, PyObject* args)
|
|||
RETURN_NONE();
|
||||
}
|
||||
|
||||
#endif
|
||||
|
||||
|
||||
static PyObject*
|
||||
DBEnv_set_mp_mmapsize(DBEnvObject* self, PyObject* args)
|
||||
|
@ -4664,19 +4524,15 @@ DBEnv_lock_stat(DBEnvObject* self, PyObject* args)
|
|||
MAKE_ENTRY(lastid);
|
||||
#endif
|
||||
MAKE_ENTRY(nmodes);
|
||||
#if (DBVER >= 32)
|
||||
MAKE_ENTRY(maxlocks);
|
||||
MAKE_ENTRY(maxlockers);
|
||||
MAKE_ENTRY(maxobjects);
|
||||
MAKE_ENTRY(nlocks);
|
||||
MAKE_ENTRY(maxnlocks);
|
||||
#endif
|
||||
MAKE_ENTRY(nlockers);
|
||||
MAKE_ENTRY(maxnlockers);
|
||||
#if (DBVER >= 32)
|
||||
MAKE_ENTRY(nobjects);
|
||||
MAKE_ENTRY(maxnobjects);
|
||||
#endif
|
||||
MAKE_ENTRY(nrequests);
|
||||
MAKE_ENTRY(nreleases);
|
||||
#if (DBVER < 44)
|
||||
|
@ -5024,7 +4880,7 @@ DBSequence_get_key(DBSequenceObject* self, PyObject* args)
|
|||
{
|
||||
int err;
|
||||
DBT key;
|
||||
PyObject *retval;
|
||||
PyObject *retval = NULL;
|
||||
key.flags = DB_DBT_MALLOC;
|
||||
CHECK_SEQUENCE_NOT_CLOSED(self)
|
||||
MYDB_BEGIN_ALLOW_THREADS
|
||||
|
@ -5265,10 +5121,8 @@ static PyMethodDef DB_methods[] = {
|
|||
{"associate", (PyCFunction)DB_associate, METH_VARARGS|METH_KEYWORDS},
|
||||
#endif
|
||||
{"close", (PyCFunction)DB_close, METH_VARARGS},
|
||||
#if (DBVER >= 32)
|
||||
{"consume", (PyCFunction)DB_consume, METH_VARARGS|METH_KEYWORDS},
|
||||
{"consume_wait", (PyCFunction)DB_consume_wait, METH_VARARGS|METH_KEYWORDS},
|
||||
#endif
|
||||
{"cursor", (PyCFunction)DB_cursor, METH_VARARGS|METH_KEYWORDS},
|
||||
{"delete", (PyCFunction)DB_delete, METH_VARARGS|METH_KEYWORDS},
|
||||
{"fd", (PyCFunction)DB_fd, METH_VARARGS},
|
||||
|
@ -5306,9 +5160,7 @@ static PyMethodDef DB_methods[] = {
|
|||
{"set_re_len", (PyCFunction)DB_set_re_len, METH_VARARGS},
|
||||
{"set_re_pad", (PyCFunction)DB_set_re_pad, METH_VARARGS},
|
||||
{"set_re_source", (PyCFunction)DB_set_re_source, METH_VARARGS},
|
||||
#if (DBVER >= 32)
|
||||
{"set_q_extentsize",(PyCFunction)DB_set_q_extentsize,METH_VARARGS},
|
||||
#endif
|
||||
{"stat", (PyCFunction)DB_stat, METH_VARARGS|METH_KEYWORDS},
|
||||
{"sync", (PyCFunction)DB_sync, METH_VARARGS},
|
||||
#if (DBVER >= 33)
|
||||
|
@ -5376,9 +5228,7 @@ static PyMethodDef DBEnv_methods[] = {
|
|||
{"set_shm_key", (PyCFunction)DBEnv_set_shm_key, METH_VARARGS},
|
||||
{"set_cachesize", (PyCFunction)DBEnv_set_cachesize, METH_VARARGS},
|
||||
{"set_data_dir", (PyCFunction)DBEnv_set_data_dir, METH_VARARGS},
|
||||
#if (DBVER >= 32)
|
||||
{"set_flags", (PyCFunction)DBEnv_set_flags, METH_VARARGS},
|
||||
#endif
|
||||
{"set_lg_bsize", (PyCFunction)DBEnv_set_lg_bsize, METH_VARARGS},
|
||||
{"set_lg_dir", (PyCFunction)DBEnv_set_lg_dir, METH_VARARGS},
|
||||
{"set_lg_max", (PyCFunction)DBEnv_set_lg_max, METH_VARARGS},
|
||||
|
@ -5389,11 +5239,9 @@ static PyMethodDef DBEnv_methods[] = {
|
|||
#if (DBVER < 45)
|
||||
{"set_lk_max", (PyCFunction)DBEnv_set_lk_max, METH_VARARGS},
|
||||
#endif
|
||||
#if (DBVER >= 32)
|
||||
{"set_lk_max_locks", (PyCFunction)DBEnv_set_lk_max_locks, METH_VARARGS},
|
||||
{"set_lk_max_lockers", (PyCFunction)DBEnv_set_lk_max_lockers, METH_VARARGS},
|
||||
{"set_lk_max_objects", (PyCFunction)DBEnv_set_lk_max_objects, METH_VARARGS},
|
||||
#endif
|
||||
{"set_mp_mmapsize", (PyCFunction)DBEnv_set_mp_mmapsize, METH_VARARGS},
|
||||
{"set_tmp_dir", (PyCFunction)DBEnv_set_tmp_dir, METH_VARARGS},
|
||||
{"txn_begin", (PyCFunction)DBEnv_txn_begin, METH_VARARGS|METH_KEYWORDS},
|
||||
|
@ -5512,7 +5360,6 @@ static PyTypeObject DB_Type = {
|
|||
0, /*tp_as_sequence*/
|
||||
&DB_mapping,/*tp_as_mapping*/
|
||||
0, /*tp_hash*/
|
||||
#ifdef HAVE_WEAKREF
|
||||
0, /* tp_call */
|
||||
0, /* tp_str */
|
||||
0, /* tp_getattro */
|
||||
|
@ -5524,7 +5371,6 @@ static PyTypeObject DB_Type = {
|
|||
0, /* tp_clear */
|
||||
0, /* tp_richcompare */
|
||||
offsetof(DBObject, in_weakreflist), /* tp_weaklistoffset */
|
||||
#endif
|
||||
};
|
||||
|
||||
|
||||
|
@ -5544,7 +5390,6 @@ static PyTypeObject DBCursor_Type = {
|
|||
0, /*tp_as_sequence*/
|
||||
0, /*tp_as_mapping*/
|
||||
0, /*tp_hash*/
|
||||
#ifdef HAVE_WEAKREF
|
||||
0, /* tp_call */
|
||||
0, /* tp_str */
|
||||
0, /* tp_getattro */
|
||||
|
@ -5556,7 +5401,6 @@ static PyTypeObject DBCursor_Type = {
|
|||
0, /* tp_clear */
|
||||
0, /* tp_richcompare */
|
||||
offsetof(DBCursorObject, in_weakreflist), /* tp_weaklistoffset */
|
||||
#endif
|
||||
};
|
||||
|
||||
|
||||
|
@ -5576,7 +5420,6 @@ static PyTypeObject DBEnv_Type = {
|
|||
0, /*tp_as_sequence*/
|
||||
0, /*tp_as_mapping*/
|
||||
0, /*tp_hash*/
|
||||
#ifdef HAVE_WEAKREF
|
||||
0, /* tp_call */
|
||||
0, /* tp_str */
|
||||
0, /* tp_getattro */
|
||||
|
@ -5588,7 +5431,6 @@ static PyTypeObject DBEnv_Type = {
|
|||
0, /* tp_clear */
|
||||
0, /* tp_richcompare */
|
||||
offsetof(DBEnvObject, in_weakreflist), /* tp_weaklistoffset */
|
||||
#endif
|
||||
};
|
||||
|
||||
static PyTypeObject DBTxn_Type = {
|
||||
|
@ -5607,7 +5449,6 @@ static PyTypeObject DBTxn_Type = {
|
|||
0, /*tp_as_sequence*/
|
||||
0, /*tp_as_mapping*/
|
||||
0, /*tp_hash*/
|
||||
#ifdef HAVE_WEAKREF
|
||||
0, /* tp_call */
|
||||
0, /* tp_str */
|
||||
0, /* tp_getattro */
|
||||
|
@ -5619,7 +5460,6 @@ static PyTypeObject DBTxn_Type = {
|
|||
0, /* tp_clear */
|
||||
0, /* tp_richcompare */
|
||||
offsetof(DBTxnObject, in_weakreflist), /* tp_weaklistoffset */
|
||||
#endif
|
||||
};
|
||||
|
||||
|
||||
|
@ -5639,7 +5479,6 @@ static PyTypeObject DBLock_Type = {
|
|||
0, /*tp_as_sequence*/
|
||||
0, /*tp_as_mapping*/
|
||||
0, /*tp_hash*/
|
||||
#ifdef HAVE_WEAKREF
|
||||
0, /* tp_call */
|
||||
0, /* tp_str */
|
||||
0, /* tp_getattro */
|
||||
|
@ -5651,7 +5490,6 @@ static PyTypeObject DBLock_Type = {
|
|||
0, /* tp_clear */
|
||||
0, /* tp_richcompare */
|
||||
offsetof(DBLockObject, in_weakreflist), /* tp_weaklistoffset */
|
||||
#endif
|
||||
};
|
||||
|
||||
#if (DBVER >= 43)
|
||||
|
@ -5671,7 +5509,6 @@ static PyTypeObject DBSequence_Type = {
|
|||
0, /*tp_as_sequence*/
|
||||
0, /*tp_as_mapping*/
|
||||
0, /*tp_hash*/
|
||||
#ifdef HAVE_WEAKREF
|
||||
0, /* tp_call */
|
||||
0, /* tp_str */
|
||||
0, /* tp_getattro */
|
||||
|
@ -5683,7 +5520,6 @@ static PyTypeObject DBSequence_Type = {
|
|||
0, /* tp_clear */
|
||||
0, /* tp_richcompare */
|
||||
offsetof(DBSequenceObject, in_weakreflist), /* tp_weaklistoffset */
|
||||
#endif
|
||||
};
|
||||
#endif
|
||||
|
||||
|
@ -5765,6 +5601,9 @@ static PyMethodDef bsddb_methods[] = {
|
|||
{NULL, NULL} /* sentinel */
|
||||
};
|
||||
|
||||
/* API structure */
|
||||
static BSDDB_api bsddb_api;
|
||||
|
||||
|
||||
/* --------------------------------------------------------------------- */
|
||||
/* Module initialization */
|
||||
|
@ -5785,6 +5624,7 @@ PyMODINIT_FUNC init_bsddb(void)
|
|||
PyObject* pybsddb_version_s = PyUnicode_FromString(PY_BSDDB_VERSION);
|
||||
PyObject* db_version_s = PyUnicode_FromString(DB_VERSION_STRING);
|
||||
PyObject* svnid_s = PyUnicode_FromString(svn_id);
|
||||
PyObject* py_api;
|
||||
|
||||
/* Initialize the type of the new type objects here; doing it here
|
||||
is required for portability to Windows without requiring C++. */
|
||||
|
@ -5846,9 +5686,7 @@ PyMODINIT_FUNC init_bsddb(void)
|
|||
ADD_INT(d, DB_INIT_LOG);
|
||||
ADD_INT(d, DB_INIT_MPOOL);
|
||||
ADD_INT(d, DB_INIT_TXN);
|
||||
#if (DBVER >= 32)
|
||||
ADD_INT(d, DB_JOINENV);
|
||||
#endif
|
||||
|
||||
ADD_INT(d, DB_RECOVER);
|
||||
ADD_INT(d, DB_RECOVER_FATAL);
|
||||
|
@ -5869,11 +5707,9 @@ PyMODINIT_FUNC init_bsddb(void)
|
|||
ADD_INT(d, DB_RDWRMASTER);
|
||||
ADD_INT(d, DB_RDONLY);
|
||||
ADD_INT(d, DB_TRUNCATE);
|
||||
#if (DBVER >= 32)
|
||||
ADD_INT(d, DB_EXTENT);
|
||||
ADD_INT(d, DB_CDB_ALLDB);
|
||||
ADD_INT(d, DB_VERIFY);
|
||||
#endif
|
||||
ADD_INT(d, DB_UPGRADE);
|
||||
|
||||
ADD_INT(d, DB_AGGRESSIVE);
|
||||
|
@ -5917,9 +5753,7 @@ PyMODINIT_FUNC init_bsddb(void)
|
|||
ADD_INT(d, DB_LOCK_READ);
|
||||
ADD_INT(d, DB_LOCK_WRITE);
|
||||
ADD_INT(d, DB_LOCK_NOWAIT);
|
||||
#if (DBVER >= 32)
|
||||
ADD_INT(d, DB_LOCK_WAIT);
|
||||
#endif
|
||||
ADD_INT(d, DB_LOCK_IWRITE);
|
||||
ADD_INT(d, DB_LOCK_IREAD);
|
||||
ADD_INT(d, DB_LOCK_IWR);
|
||||
|
@ -5934,9 +5768,7 @@ PyMODINIT_FUNC init_bsddb(void)
|
|||
|
||||
ADD_INT(d, DB_LOCK_RECORD);
|
||||
ADD_INT(d, DB_LOCK_UPGRADE);
|
||||
#if (DBVER >= 32)
|
||||
ADD_INT(d, DB_LOCK_SWITCH);
|
||||
#endif
|
||||
#if (DBVER >= 33)
|
||||
ADD_INT(d, DB_LOCK_UPGRADE_WRITE);
|
||||
#endif
|
||||
|
@ -5997,9 +5829,7 @@ PyMODINIT_FUNC init_bsddb(void)
|
|||
ADD_INT(d, DB_COMMIT);
|
||||
#endif
|
||||
ADD_INT(d, DB_CONSUME);
|
||||
#if (DBVER >= 32)
|
||||
ADD_INT(d, DB_CONSUME_WAIT);
|
||||
#endif
|
||||
ADD_INT(d, DB_CURRENT);
|
||||
#if (DBVER >= 33)
|
||||
ADD_INT(d, DB_FAST_STAT);
|
||||
|
@ -6182,6 +6012,21 @@ PyMODINIT_FUNC init_bsddb(void)
|
|||
|
||||
#undef MAKE_EX
|
||||
|
||||
/* Initialise the C API structure and add it to the module */
|
||||
bsddb_api.db_type = &DB_Type;
|
||||
bsddb_api.dbcursor_type = &DBCursor_Type;
|
||||
bsddb_api.dbenv_type = &DBEnv_Type;
|
||||
bsddb_api.dbtxn_type = &DBTxn_Type;
|
||||
bsddb_api.dblock_type = &DBLock_Type;
|
||||
#if (DBVER >= 43)
|
||||
bsddb_api.dbsequence_type = &DBSequence_Type;
|
||||
#endif
|
||||
bsddb_api.makeDBError = makeDBError;
|
||||
|
||||
py_api = PyCObject_FromVoidPtr((void*)&bsddb_api, NULL);
|
||||
PyDict_SetItemString(d, "api", py_api);
|
||||
Py_DECREF(py_api);
|
||||
|
||||
/* Check for errors */
|
||||
if (PyErr_Occurred()) {
|
||||
PyErr_Print();
|
||||
|
|
|
@ -83,10 +83,27 @@ typedef struct {
|
|||
int leftindex; /* in range(BLOCKLEN) */
|
||||
int rightindex; /* in range(BLOCKLEN) */
|
||||
int len;
|
||||
int maxlen;
|
||||
long state; /* incremented whenever the indices move */
|
||||
PyObject *weakreflist; /* List of weak references */
|
||||
} dequeobject;
|
||||
|
||||
/* The deque's size limit is d.maxlen. The limit can be zero or positive.
|
||||
* If there is no limit, then d.maxlen == -1.
|
||||
*
|
||||
* After an item is added to a deque, we check to see if the size has grown past
|
||||
* the limit. If it has, we get the size back down to the limit by popping an
|
||||
* item off of the opposite end. The methods that can trigger this are append(),
|
||||
* appendleft(), extend(), and extendleft().
|
||||
*/
|
||||
|
||||
#define TRIM(d, popfunction) \
|
||||
if (d->maxlen != -1 && d->len > d->maxlen) { \
|
||||
PyObject *rv = popfunction(d, NULL); \
|
||||
assert(rv != NULL && d->len <= d->maxlen); \
|
||||
Py_DECREF(rv); \
|
||||
}
|
||||
|
||||
static PyTypeObject deque_type;
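
The effect of the size-limit comment and TRIM macro above, seen from Python, is roughly the following; an illustrative sketch only, not part of the patch:

    from collections import deque

    d = deque(maxlen=3)          # bounded deque; internally, maxlen == -1 means "no limit"
    for i in range(5):
        d.append(i)              # once full, each append() pops one item from the left
    print(d)                     # deque([2, 3, 4], maxlen=3)

    d.appendleft(99)             # appendleft() trims from the opposite (right) end
    print(d)                     # deque([99, 2, 3], maxlen=3)

extend() and extendleft() trim the same way, once per item added.
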
|
||||
|
||||
static PyObject *
|
||||
|
@ -95,9 +112,6 @@ deque_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
|
|||
dequeobject *deque;
|
||||
block *b;
|
||||
|
||||
if (type == &deque_type && !_PyArg_NoKeywords("deque()", kwds))
|
||||
return NULL;
|
||||
|
||||
/* create dequeobject structure */
|
||||
deque = (dequeobject *)type->tp_alloc(type, 0);
|
||||
if (deque == NULL)
|
||||
|
@ -117,54 +131,11 @@ deque_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
|
|||
deque->len = 0;
|
||||
deque->state = 0;
|
||||
deque->weakreflist = NULL;
|
||||
deque->maxlen = -1;
|
||||
|
||||
return (PyObject *)deque;
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
deque_append(dequeobject *deque, PyObject *item)
|
||||
{
|
||||
deque->state++;
|
||||
if (deque->rightindex == BLOCKLEN-1) {
|
||||
block *b = newblock(deque->rightblock, NULL, deque->len);
|
||||
if (b == NULL)
|
||||
return NULL;
|
||||
assert(deque->rightblock->rightlink == NULL);
|
||||
deque->rightblock->rightlink = b;
|
||||
deque->rightblock = b;
|
||||
deque->rightindex = -1;
|
||||
}
|
||||
Py_INCREF(item);
|
||||
deque->len++;
|
||||
deque->rightindex++;
|
||||
deque->rightblock->data[deque->rightindex] = item;
|
||||
Py_RETURN_NONE;
|
||||
}
|
||||
|
||||
PyDoc_STRVAR(append_doc, "Add an element to the right side of the deque.");
|
||||
|
||||
static PyObject *
|
||||
deque_appendleft(dequeobject *deque, PyObject *item)
|
||||
{
|
||||
deque->state++;
|
||||
if (deque->leftindex == 0) {
|
||||
block *b = newblock(NULL, deque->leftblock, deque->len);
|
||||
if (b == NULL)
|
||||
return NULL;
|
||||
assert(deque->leftblock->leftlink == NULL);
|
||||
deque->leftblock->leftlink = b;
|
||||
deque->leftblock = b;
|
||||
deque->leftindex = BLOCKLEN;
|
||||
}
|
||||
Py_INCREF(item);
|
||||
deque->len++;
|
||||
deque->leftindex--;
|
||||
deque->leftblock->data[deque->leftindex] = item;
|
||||
Py_RETURN_NONE;
|
||||
}
|
||||
|
||||
PyDoc_STRVAR(appendleft_doc, "Add an element to the left side of the deque.");
|
||||
|
||||
static PyObject *
|
||||
deque_pop(dequeobject *deque, PyObject *unused)
|
||||
{
|
||||
|
@ -239,6 +210,52 @@ deque_popleft(dequeobject *deque, PyObject *unused)
|
|||
|
||||
PyDoc_STRVAR(popleft_doc, "Remove and return the leftmost element.");
|
||||
|
||||
static PyObject *
|
||||
deque_append(dequeobject *deque, PyObject *item)
|
||||
{
|
||||
deque->state++;
|
||||
if (deque->rightindex == BLOCKLEN-1) {
|
||||
block *b = newblock(deque->rightblock, NULL, deque->len);
|
||||
if (b == NULL)
|
||||
return NULL;
|
||||
assert(deque->rightblock->rightlink == NULL);
|
||||
deque->rightblock->rightlink = b;
|
||||
deque->rightblock = b;
|
||||
deque->rightindex = -1;
|
||||
}
|
||||
Py_INCREF(item);
|
||||
deque->len++;
|
||||
deque->rightindex++;
|
||||
deque->rightblock->data[deque->rightindex] = item;
|
||||
TRIM(deque, deque_popleft);
|
||||
Py_RETURN_NONE;
|
||||
}
|
||||
|
||||
PyDoc_STRVAR(append_doc, "Add an element to the right side of the deque.");
|
||||
|
||||
static PyObject *
|
||||
deque_appendleft(dequeobject *deque, PyObject *item)
|
||||
{
|
||||
deque->state++;
|
||||
if (deque->leftindex == 0) {
|
||||
block *b = newblock(NULL, deque->leftblock, deque->len);
|
||||
if (b == NULL)
|
||||
return NULL;
|
||||
assert(deque->leftblock->leftlink == NULL);
|
||||
deque->leftblock->leftlink = b;
|
||||
deque->leftblock = b;
|
||||
deque->leftindex = BLOCKLEN;
|
||||
}
|
||||
Py_INCREF(item);
|
||||
deque->len++;
|
||||
deque->leftindex--;
|
||||
deque->leftblock->data[deque->leftindex] = item;
|
||||
TRIM(deque, deque_pop);
|
||||
Py_RETURN_NONE;
|
||||
}
|
||||
|
||||
PyDoc_STRVAR(appendleft_doc, "Add an element to the left side of the deque.");
|
||||
|
||||
static PyObject *
|
||||
deque_extend(dequeobject *deque, PyObject *iterable)
|
||||
{
|
||||
|
@ -266,6 +283,7 @@ deque_extend(dequeobject *deque, PyObject *iterable)
|
|||
deque->len++;
|
||||
deque->rightindex++;
|
||||
deque->rightblock->data[deque->rightindex] = item;
|
||||
TRIM(deque, deque_popleft);
|
||||
}
|
||||
Py_DECREF(it);
|
||||
if (PyErr_Occurred())
|
||||
|
@ -303,6 +321,7 @@ deque_extendleft(dequeobject *deque, PyObject *iterable)
|
|||
deque->len++;
|
||||
deque->leftindex--;
|
||||
deque->leftblock->data[deque->leftindex] = item;
|
||||
TRIM(deque, deque_pop);
|
||||
}
|
||||
Py_DECREF(it);
|
||||
if (PyErr_Occurred())
|
||||
|
@ -579,8 +598,11 @@ deque_nohash(PyObject *self)
|
|||
static PyObject *
|
||||
deque_copy(PyObject *deque)
|
||||
{
|
||||
return PyObject_CallFunctionObjArgs((PyObject *)(Py_Type(deque)),
|
||||
deque, NULL);
|
||||
if (((dequeobject *)deque)->maxlen == -1)
|
||||
return PyObject_CallFunction((PyObject *)(Py_Type(deque)), "O", deque, NULL);
|
||||
else
|
||||
return PyObject_CallFunction((PyObject *)(Py_Type(deque)), "Oi",
|
||||
deque, ((dequeobject *)deque)->maxlen, NULL);
|
||||
}
|
||||
|
||||
PyDoc_STRVAR(copy_doc, "Return a shallow copy of a deque.");
|
||||
|
@ -588,21 +610,29 @@ PyDoc_STRVAR(copy_doc, "Return a shallow copy of a deque.");
|
|||
static PyObject *
|
||||
deque_reduce(dequeobject *deque)
|
||||
{
|
||||
PyObject *dict, *result, *it;
|
||||
PyObject *dict, *result, *aslist;
|
||||
|
||||
dict = PyObject_GetAttrString((PyObject *)deque, "__dict__");
|
||||
if (dict == NULL) {
|
||||
if (dict == NULL)
|
||||
PyErr_Clear();
|
||||
dict = Py_None;
|
||||
Py_INCREF(dict);
|
||||
}
|
||||
it = PyObject_GetIter((PyObject *)deque);
|
||||
if (it == NULL) {
|
||||
Py_DECREF(dict);
|
||||
aslist = PySequence_List((PyObject *)deque);
|
||||
if (aslist == NULL) {
|
||||
Py_XDECREF(dict);
|
||||
return NULL;
|
||||
}
|
||||
result = Py_BuildValue("O()ON", Py_Type(deque), dict, it);
|
||||
Py_DECREF(dict);
|
||||
if (dict == NULL) {
|
||||
if (deque->maxlen == -1)
|
||||
result = Py_BuildValue("O(O)", Py_Type(deque), aslist);
|
||||
else
|
||||
result = Py_BuildValue("O(Oi)", Py_Type(deque), aslist, deque->maxlen);
|
||||
} else {
|
||||
if (deque->maxlen == -1)
|
||||
result = Py_BuildValue("O(OO)O", Py_Type(deque), aslist, Py_None, dict);
|
||||
else
|
||||
result = Py_BuildValue("O(Oi)O", Py_Type(deque), aslist, deque->maxlen, dict);
|
||||
}
|
||||
Py_XDECREF(dict);
|
||||
Py_DECREF(aslist);
|
||||
return result;
|
||||
}
|
||||
|
||||
|
@ -626,8 +656,11 @@ deque_repr(PyObject *deque)
|
|||
Py_ReprLeave(deque);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
result = PyUnicode_FromFormat("deque(%R)", aslist);
|
||||
if (((dequeobject *)deque)->maxlen != -1)
|
||||
result = PyUnicode_FromFormat("deque(%R, maxlen=%i)", aslist,
|
||||
((dequeobject *)deque)->maxlen);
|
||||
else
|
||||
result = PyUnicode_FromFormat("deque(%R)", aslist);
|
||||
Py_DECREF(aslist);
|
||||
Py_ReprLeave(deque);
|
||||
return result;
|
||||
|
@ -712,13 +745,25 @@ done:
|
|||
}
|
||||
|
||||
static int
|
||||
deque_init(dequeobject *deque, PyObject *args, PyObject *kwds)
|
||||
deque_init(dequeobject *deque, PyObject *args, PyObject *kwdargs)
|
||||
{
|
||||
PyObject *iterable = NULL;
|
||||
PyObject *maxlenobj = NULL;
|
||||
int maxlen = -1;
|
||||
char *kwlist[] = {"iterable", "maxlen", 0};
|
||||
|
||||
if (!PyArg_UnpackTuple(args, "deque", 0, 1, &iterable))
|
||||
if (!PyArg_ParseTupleAndKeywords(args, kwdargs, "|OO:deque", kwlist, &iterable, &maxlenobj))
|
||||
return -1;
|
||||
|
||||
if (maxlenobj != NULL && maxlenobj != Py_None) {
|
||||
maxlen = PyInt_AsLong(maxlenobj);
|
||||
if (maxlen == -1 && PyErr_Occurred())
|
||||
return -1;
|
||||
if (maxlen < 0) {
|
||||
PyErr_SetString(PyExc_ValueError, "maxlen must be non-negative");
|
||||
return -1;
|
||||
}
|
||||
}
|
||||
deque->maxlen = maxlen;
|
||||
if (iterable != NULL) {
|
||||
PyObject *rv = deque_extend(deque, iterable);
|
||||
if (rv == NULL)
|
||||
|
@ -773,7 +818,7 @@ static PyMethodDef deque_methods[] = {
|
|||
};
|
||||
|
||||
PyDoc_STRVAR(deque_doc,
|
||||
"deque(iterable) --> deque object\n\
|
||||
"deque(iterable[, maxlen]) --> deque object\n\
|
||||
\n\
|
||||
Build an ordered collection accessible from endpoints only.");
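
Given the signature documented above, the keyword handling added in deque_init() behaves roughly like this (illustrative sketch, not part of the patch):

    from collections import deque

    deque("abc", maxlen=2)       # deque(['b', 'c'], maxlen=2); extend() trims as it fills
    deque("abc")                 # no maxlen given: unbounded, exactly as before
    deque("abc", maxlen=-1)      # raises ValueError: maxlen must be non-negative
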
|
||||
|
||||
|
@ -1063,7 +1108,7 @@ defdict_copy(defdictobject *dd)
|
|||
whose class constructor has the same signature. Subclasses that
|
||||
define a different constructor signature must override copy().
|
||||
*/
|
||||
return PyObject_CallFunctionObjArgs((PyObject *)Py_Type(dd),
|
||||
return PyObject_CallFunctionObjArgs((PyObject*)Py_Type(dd),
|
||||
dd->default_factory, dd, NULL);
|
||||
}
|
||||
|
||||
|
|
|
@ -1631,17 +1631,21 @@ static struct fielddesc formattable[] = {
|
|||
/* XXX Hm, sizeof(int) == sizeof(long) doesn't hold on every platform */
|
||||
/* As soon as we can get rid of the type codes, this is no longer a problem */
|
||||
#if SIZEOF_LONG == 4
|
||||
{ 'l', l_set, l_get, &ffi_type_sint, l_set_sw, l_get_sw},
|
||||
{ 'L', L_set, L_get, &ffi_type_uint, L_set_sw, L_get_sw},
|
||||
{ 'l', l_set, l_get, &ffi_type_sint32, l_set_sw, l_get_sw},
|
||||
{ 'L', L_set, L_get, &ffi_type_uint32, L_set_sw, L_get_sw},
|
||||
#elif SIZEOF_LONG == 8
|
||||
{ 'l', l_set, l_get, &ffi_type_slong, l_set_sw, l_get_sw},
|
||||
{ 'L', L_set, L_get, &ffi_type_ulong, L_set_sw, L_get_sw},
|
||||
{ 'l', l_set, l_get, &ffi_type_sint64, l_set_sw, l_get_sw},
|
||||
{ 'L', L_set, L_get, &ffi_type_uint64, L_set_sw, L_get_sw},
|
||||
#else
|
||||
# error
|
||||
#endif
|
||||
#ifdef HAVE_LONG_LONG
|
||||
{ 'q', q_set, q_get, &ffi_type_slong, q_set_sw, q_get_sw},
|
||||
{ 'Q', Q_set, Q_get, &ffi_type_ulong, Q_set_sw, Q_get_sw},
|
||||
#if SIZEOF_LONG_LONG == 8
|
||||
{ 'q', q_set, q_get, &ffi_type_sint64, q_set_sw, q_get_sw},
|
||||
{ 'Q', Q_set, Q_get, &ffi_type_uint64, Q_set_sw, Q_get_sw},
|
||||
#else
|
||||
# error
|
||||
#endif
|
||||
#endif
|
||||
{ 'P', P_set, P_get, &ffi_type_pointer},
|
||||
{ 'z', z_set, z_get, &ffi_type_pointer},
|
||||
|
@ -1764,11 +1768,13 @@ ffi_type ffi_type_sint64 = { 8, LONG_LONG_ALIGN, FFI_TYPE_SINT64 };
|
|||
|
||||
ffi_type ffi_type_float = { sizeof(float), FLOAT_ALIGN, FFI_TYPE_FLOAT };
|
||||
ffi_type ffi_type_double = { sizeof(double), DOUBLE_ALIGN, FFI_TYPE_DOUBLE };
|
||||
|
||||
#ifdef ffi_type_longdouble
|
||||
#undef ffi_type_longdouble
|
||||
#endif
|
||||
ffi_type ffi_type_longdouble = { sizeof(long double), LONGDOUBLE_ALIGN,
|
||||
FFI_TYPE_LONGDOUBLE };
|
||||
|
||||
/* ffi_type ffi_type_longdouble */
|
||||
|
||||
ffi_type ffi_type_pointer = { sizeof(void *), VOID_P_ALIGN, FFI_TYPE_POINTER };
|
||||
|
||||
/*---------------- EOF ----------------*/
|
||||
|
|
|
@ -28,7 +28,7 @@
|
|||
|
||||
#include <stdlib.h>
|
||||
|
||||
extern void ffi_call_osf(void *, unsigned long, unsigned, void *, void (*)());
|
||||
extern void ffi_call_osf(void *, unsigned long, unsigned, void *, void (*)(void));
|
||||
extern void ffi_closure_osf(void);
|
||||
|
||||
|
||||
|
@ -58,7 +58,7 @@ ffi_prep_cif_machdep(ffi_cif *cif)
|
|||
}
|
||||
|
||||
void
|
||||
ffi_call(ffi_cif *cif, void (*fn)(), void *rvalue, void **avalue)
|
||||
ffi_call(ffi_cif *cif, void (*fn)(void), void *rvalue, void **avalue)
|
||||
{
|
||||
unsigned long *stack, *argp;
|
||||
long i, avn;
|
||||
|
|
|
@ -259,10 +259,10 @@ ffi_prep_cif_machdep(ffi_cif *cif)
|
|||
return FFI_OK;
|
||||
}
|
||||
|
||||
extern int ffi_call_unix (struct ia64_args *, PTR64, void (*)(), UINT64);
|
||||
extern int ffi_call_unix (struct ia64_args *, PTR64, void (*)(void), UINT64);
|
||||
|
||||
void
|
||||
ffi_call(ffi_cif *cif, void (*fn)(), void *rvalue, void **avalue)
|
||||
ffi_call(ffi_cif *cif, void (*fn)(void), void *rvalue, void **avalue)
|
||||
{
|
||||
struct ia64_args *stack;
|
||||
long i, avn, gpcount, fpcount;
|
||||
|
@ -387,7 +387,7 @@ ffi_call(ffi_cif *cif, void (*fn)(), void *rvalue, void **avalue)
|
|||
gp pointer to the closure. This allows the function entry code to
|
||||
both retrieve the user data, and to restore the correct gp pointer. */
|
||||
|
||||
extern void ffi_closure_unix ();
|
||||
extern void ffi_closure_unix (void);
|
||||
|
||||
ffi_status
|
||||
ffi_prep_closure (ffi_closure* closure,
|
||||
|
|
|
@ -445,14 +445,14 @@ ffi_status ffi_prep_cif_machdep(ffi_cif *cif)
|
|||
/* Low level routine for calling O32 functions */
|
||||
extern int ffi_call_O32(void (*)(char *, extended_cif *, int, int),
|
||||
extended_cif *, unsigned,
|
||||
unsigned, unsigned *, void (*)());
|
||||
unsigned, unsigned *, void (*)(void));
|
||||
|
||||
/* Low level routine for calling N32 functions */
|
||||
extern int ffi_call_N32(void (*)(char *, extended_cif *, int, int),
|
||||
extended_cif *, unsigned,
|
||||
unsigned, unsigned *, void (*)());
|
||||
unsigned, unsigned *, void (*)(void));
|
||||
|
||||
void ffi_call(ffi_cif *cif, void (*fn)(), void *rvalue, void **avalue)
|
||||
void ffi_call(ffi_cif *cif, void (*fn)(void), void *rvalue, void **avalue)
|
||||
{
|
||||
extended_cif ecif;
|
||||
|
||||
|
|
|
@ -345,12 +345,12 @@ extern void ffi_call_LINUX(void (*)(UINT32 *, extended_cif *, unsigned),
|
|||
/*@out@*/ extended_cif *,
|
||||
unsigned, unsigned,
|
||||
/*@out@*/ unsigned *,
|
||||
void (*fn)());
|
||||
void (*fn)(void));
|
||||
/*@=declundef@*/
|
||||
/*@=exportheader@*/
|
||||
|
||||
void ffi_call(/*@dependent@*/ ffi_cif *cif,
|
||||
void (*fn)(),
|
||||
void (*fn)(void),
|
||||
/*@out@*/ void *rvalue,
|
||||
/*@dependent@*/ void **avalue)
|
||||
{
|
||||
|
|
|
@ -756,17 +756,17 @@ ffi_prep_cif_machdep (ffi_cif *cif)
|
|||
extern void ffi_call_SYSV(/*@out@*/ extended_cif *,
|
||||
unsigned, unsigned,
|
||||
/*@out@*/ unsigned *,
|
||||
void (*fn)());
|
||||
void (*fn)(void));
|
||||
extern void FFI_HIDDEN ffi_call_LINUX64(/*@out@*/ extended_cif *,
|
||||
unsigned long, unsigned long,
|
||||
/*@out@*/ unsigned long *,
|
||||
void (*fn)());
|
||||
void (*fn)(void));
|
||||
/*@=declundef@*/
|
||||
/*@=exportheader@*/
|
||||
|
||||
void
|
||||
ffi_call(/*@dependent@*/ ffi_cif *cif,
|
||||
void (*fn)(),
|
||||
void (*fn)(void),
|
||||
/*@out@*/ void *rvalue,
|
||||
/*@dependent@*/ void **avalue)
|
||||
{
|
||||
|
|
|
@ -88,7 +88,7 @@ extern void ffi_call_SYSV(unsigned,
|
|||
void (*)(unsigned char *, extended_cif *),
|
||||
unsigned,
|
||||
void *,
|
||||
void (*fn)());
|
||||
void (*fn)(void));
|
||||
|
||||
extern void ffi_closure_SYSV(void);
|
||||
|
||||
|
@ -480,7 +480,7 @@ ffi_prep_cif_machdep(ffi_cif *cif)
|
|||
|
||||
void
|
||||
ffi_call(ffi_cif *cif,
|
||||
void (*fn)(),
|
||||
void (*fn)(void),
|
||||
void *rvalue,
|
||||
void **avalue)
|
||||
{
|
||||
|
|
|
@ -358,13 +358,13 @@ int ffi_v9_layout_struct(ffi_type *arg, int off, char *ret, char *intg, char *fl
|
|||
|
||||
#ifdef SPARC64
|
||||
extern int ffi_call_v9(void *, extended_cif *, unsigned,
|
||||
unsigned, unsigned *, void (*fn)());
|
||||
unsigned, unsigned *, void (*fn)(void));
|
||||
#else
|
||||
extern int ffi_call_v8(void *, extended_cif *, unsigned,
|
||||
unsigned, unsigned *, void (*fn)());
|
||||
unsigned, unsigned *, void (*fn)(void));
|
||||
#endif
|
||||
|
||||
void ffi_call(ffi_cif *cif, void (*fn)(), void *rvalue, void **avalue)
|
||||
void ffi_call(ffi_cif *cif, void (*fn)(void), void *rvalue, void **avalue)
|
||||
{
|
||||
extended_cif ecif;
|
||||
void *rval = rvalue;
|
||||
|
|
|
@ -0,0 +1,238 @@
|
|||
/*----------------------------------------------------------------------
|
||||
Copyright (c) 1999-2001, Digital Creations, Fredericksburg, VA, USA
|
||||
and Andrew Kuchling. All rights reserved.
|
||||
|
||||
Redistribution and use in source and binary forms, with or without
|
||||
modification, are permitted provided that the following conditions are
|
||||
met:
|
||||
|
||||
o Redistributions of source code must retain the above copyright
|
||||
notice, this list of conditions, and the disclaimer that follows.
|
||||
|
||||
o Redistributions in binary form must reproduce the above copyright
|
||||
notice, this list of conditions, and the following disclaimer in
|
||||
the documentation and/or other materials provided with the
|
||||
distribution.
|
||||
|
||||
o Neither the name of Digital Creations nor the names of its
|
||||
contributors may be used to endorse or promote products derived
|
||||
from this software without specific prior written permission.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY DIGITAL CREATIONS AND CONTRIBUTORS *AS
|
||||
IS* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
|
||||
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
|
||||
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL DIGITAL
|
||||
CREATIONS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
|
||||
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
|
||||
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
|
||||
OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
|
||||
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
|
||||
TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
|
||||
USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
|
||||
DAMAGE.
|
||||
------------------------------------------------------------------------*/
|
||||
|
||||
|
||||
/*
|
||||
* Handwritten code to wrap version 3.x of the Berkeley DB library,
|
||||
* written to replace a SWIG-generated file. It has since been updated
|
||||
* to compile with BerkeleyDB versions 3.2 through 4.2.
|
||||
*
|
||||
* This module was started by Andrew Kuchling to remove the dependency
|
||||
* on SWIG in a package by Gregory P. Smith who based his work on a
|
||||
* similar package by Robin Dunn <robin@alldunn.com> which wrapped
|
||||
* Berkeley DB 2.7.x.
|
||||
*
|
||||
* Development of this module then returned full circle back to Robin Dunn
|
||||
* who worked on behalf of Digital Creations to complete the wrapping of
|
||||
* the DB 3.x API and to build a solid unit test suite. Robin has
|
||||
* since gone on to other projects (wxPython).
|
||||
*
|
||||
* Gregory P. Smith <greg@krypto.org> is once again the maintainer.
|
||||
*
|
||||
* Use the pybsddb-users@lists.sf.net mailing list for all questions.
|
||||
* Things can change faster than the header of this file is updated. This
|
||||
* file is shared with the PyBSDDB project at SourceForge:
|
||||
*
|
||||
* http://pybsddb.sf.net
|
||||
*
|
||||
* This file should remain backward compatible with Python 2.1, but see PEP
|
||||
* 291 for the most current backward compatibility requirements:
|
||||
*
|
||||
* http://www.python.org/peps/pep-0291.html
|
||||
*
|
||||
* This module contains 6 types:
|
||||
*
|
||||
* DB (Database)
|
||||
* DBCursor (Database Cursor)
|
||||
* DBEnv (database environment)
|
||||
* DBTxn (An explicit database transaction)
|
||||
* DBLock (A lock handle)
|
||||
* DBSequence (Sequence)
|
||||
*
|
||||
*/
|
||||
|
||||
/* --------------------------------------------------------------------- */
|
||||
|
||||
/*
|
||||
* Portions of this module, associated unit tests and build scripts are the
|
||||
* result of a contract with The Written Word (http://thewrittenword.com/)
|
||||
* Many thanks go out to them for causing me to raise the bar on quality and
|
||||
* functionality, resulting in a better bsddb3 package for all of us to use.
|
||||
*
|
||||
* --Robin
|
||||
*/
|
||||
|
||||
/* --------------------------------------------------------------------- */
|
||||
|
||||
/*
|
||||
* Work to split it up into a separate header and to add a C API was
|
||||
* contributed by Duncan Grisby <duncan@tideway.com>. See here:
|
||||
* http://sourceforge.net/tracker/index.php?func=detail&aid=1551895&group_id=13900&atid=313900
|
||||
*/
|
||||
|
||||
/* --------------------------------------------------------------------- */
|
||||
|
||||
#ifndef _BSDDB_H_
|
||||
#define _BSDDB_H_
|
||||
|
||||
#include <db.h>
|
||||
|
||||
|
||||
/* 40 = 4.0, 33 = 3.3; this will break if the minor revision is > 9 */
|
||||
#define DBVER (DB_VERSION_MAJOR * 10 + DB_VERSION_MINOR)
|
||||
#if DB_VERSION_MINOR > 9
|
||||
#error "eek! DBVER can't handle minor versions > 9"
|
||||
#endif
|
||||
|
||||
#define PY_BSDDB_VERSION "4.6.0"
|
||||
|
||||
/* Python object definitions */
|
||||
|
||||
struct behaviourFlags {
|
||||
/* What is the default behaviour when DB->get or DBCursor->get returns a
|
||||
DB_NOTFOUND || DB_KEYEMPTY error? Return None or raise an exception? */
|
||||
unsigned int getReturnsNone : 1;
|
||||
/* What is the default behaviour for DBCursor.set* methods when DBCursor->get
|
||||
* returns a DB_NOTFOUND || DB_KEYEMPTY error? Return None or raise? */
|
||||
unsigned int cursorSetReturnsNone : 1;
|
||||
};
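
At the Python level these two flag bits correspond to the set_get_returns_none() switch; a rough sketch, assuming the standard bsddb API and a hypothetical database file:

    from bsddb import db

    d = db.DB()
    d.open("example.db", dbtype=db.DB_HASH, flags=db.DB_CREATE)

    d.get(b"no such key")        # with getReturnsNone set (the default), returns None
    d.set_get_returns_none(0)    # clear the behaviour flags for this handle
    # d.get(b"no such key")      # would now raise db.DBNotFoundError instead
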
|
||||
|
||||
|
||||
typedef struct {
|
||||
PyObject_HEAD
|
||||
DB_ENV* db_env;
|
||||
u_int32_t flags; /* saved flags from open() */
|
||||
int closed;
|
||||
struct behaviourFlags moduleFlags;
|
||||
PyObject *in_weakreflist; /* List of weak references */
|
||||
} DBEnvObject;
|
||||
|
||||
|
||||
typedef struct {
|
||||
PyObject_HEAD
|
||||
DB* db;
|
||||
DBEnvObject* myenvobj; /* PyObject containing the DB_ENV */
|
||||
u_int32_t flags; /* saved flags from open() */
|
||||
u_int32_t setflags; /* saved flags from set_flags() */
|
||||
int haveStat;
|
||||
struct behaviourFlags moduleFlags;
|
||||
#if (DBVER >= 33)
|
||||
PyObject* associateCallback;
|
||||
PyObject* btCompareCallback;
|
||||
int primaryDBType;
|
||||
#endif
|
||||
PyObject *in_weakreflist; /* List of weak references */
|
||||
} DBObject;
|
||||
|
||||
|
||||
typedef struct {
|
||||
PyObject_HEAD
|
||||
DBC* dbc;
|
||||
DBObject* mydb;
|
||||
PyObject *in_weakreflist; /* List of weak references */
|
||||
} DBCursorObject;
|
||||
|
||||
|
||||
typedef struct {
|
||||
PyObject_HEAD
|
||||
DB_TXN* txn;
|
||||
PyObject *env;
|
||||
PyObject *in_weakreflist; /* List of weak references */
|
||||
} DBTxnObject;
|
||||
|
||||
|
||||
typedef struct {
|
||||
PyObject_HEAD
|
||||
DB_LOCK lock;
|
||||
PyObject *in_weakreflist; /* List of weak references */
|
||||
} DBLockObject;
|
||||
|
||||
|
||||
#if (DBVER >= 43)
|
||||
typedef struct {
|
||||
PyObject_HEAD
|
||||
DB_SEQUENCE* sequence;
|
||||
DBObject* mydb;
|
||||
PyObject *in_weakreflist; /* List of weak references */
|
||||
} DBSequenceObject;
|
||||
static PyTypeObject DBSequence_Type;
|
||||
#endif
|
||||
|
||||
|
||||
/* API structure for use by C code */
|
||||
|
||||
/* To access the structure from an external module, use code like the
|
||||
following (error checking missed out for clarity):
|
||||
|
||||
BSDDB_api* bsddb_api;
|
||||
PyObject* mod;
|
||||
PyObject* cobj;
|
||||
|
||||
mod = PyImport_ImportModule("bsddb._bsddb");
|
||||
// Use "bsddb3._pybsddb" if you're using the standalone pybsddb add-on.
|
||||
cobj = PyObject_GetAttrString(mod, "api");
|
||||
api = (BSDDB_api*)PyCObject_AsVoidPtr(cobj);
|
||||
Py_DECREF(cobj);
|
||||
Py_DECREF(mod);
|
||||
|
||||
The structure's members must not be changed.
|
||||
*/
|
||||
|
||||
typedef struct {
|
||||
/* Type objects */
|
||||
PyTypeObject* db_type;
|
||||
PyTypeObject* dbcursor_type;
|
||||
PyTypeObject* dbenv_type;
|
||||
PyTypeObject* dbtxn_type;
|
||||
PyTypeObject* dblock_type;
|
||||
#if (DBVER >= 43)
|
||||
PyTypeObject* dbsequence_type;
|
||||
#endif
|
||||
|
||||
/* Functions */
|
||||
int (*makeDBError)(int err);
|
||||
|
||||
} BSDDB_api;
|
||||
|
||||
|
||||
#ifndef COMPILING_BSDDB_C
|
||||
|
||||
/* If not inside _bsddb.c, define type check macros that use the api
|
||||
structure. The calling code must have a value named bsddb_api
|
||||
pointing to the api structure.
|
||||
*/
|
||||
|
||||
#define DBObject_Check(v) ((v)->ob_type == bsddb_api->db_type)
|
||||
#define DBCursorObject_Check(v) ((v)->ob_type == bsddb_api->dbcursor_type)
|
||||
#define DBEnvObject_Check(v) ((v)->ob_type == bsddb_api->dbenv_type)
|
||||
#define DBTxnObject_Check(v) ((v)->ob_type == bsddb_api->dbtxn_type)
|
||||
#define DBLockObject_Check(v) ((v)->ob_type == bsddb_api->dblock_type)
|
||||
#if (DBVER >= 43)
|
||||
#define DBSequenceObject_Check(v) ((v)->ob_type == bsddb_api->dbsequence_type)
|
||||
#endif
|
||||
|
||||
#endif // COMPILING_BSDDB_C
|
||||
|
||||
|
||||
#endif // _BSDDB_H_
|
|
@ -2032,6 +2032,7 @@ static PyTypeObject ifilterfalse_type = {
|
|||
typedef struct {
|
||||
PyObject_HEAD
|
||||
Py_ssize_t cnt;
|
||||
PyObject *long_cnt; /* Arbitrarily large count when cnt >= PY_SSIZE_T_MAX */
|
||||
} countobject;
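
The cnt/long_cnt pair above is what lets count() keep going past PY_SSIZE_T_MAX; the functions that follow hand off from the fast Py_ssize_t counter to an arbitrary-precision object. A rough Python sketch, assuming sys.maxsize equals PY_SSIZE_T_MAX on the build in question:

    import sys
    from itertools import count

    c = count(sys.maxsize - 1)
    next(c)        # sys.maxsize - 1  (fast Py_ssize_t path)
    next(c)        # sys.maxsize      (hand-off to the long_cnt path)
    next(c)        # sys.maxsize + 1  (no OverflowError any more)
    repr(c)        # now renders the long counter via %R, showing sys.maxsize + 2
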
|
||||
|
||||
static PyTypeObject count_type;
|
||||
|
@ -2041,37 +2042,89 @@ count_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
|
|||
{
|
||||
countobject *lz;
|
||||
Py_ssize_t cnt = 0;
|
||||
PyObject *cnt_arg = NULL;
|
||||
PyObject *long_cnt = NULL;
|
||||
|
||||
if (type == &count_type && !_PyArg_NoKeywords("count()", kwds))
|
||||
return NULL;
|
||||
|
||||
if (!PyArg_ParseTuple(args, "|n:count", &cnt))
|
||||
if (!PyArg_UnpackTuple(args, "count", 0, 1, &cnt_arg))
|
||||
return NULL;
|
||||
|
||||
if (cnt_arg != NULL) {
|
||||
cnt = PyInt_AsSsize_t(cnt_arg);
|
||||
if (cnt == -1 && PyErr_Occurred()) {
|
||||
PyErr_Clear();
|
||||
if (!PyLong_Check(cnt_arg)) {
|
||||
PyErr_SetString(PyExc_TypeError, "an integer is required");
|
||||
return NULL;
|
||||
}
|
||||
long_cnt = cnt_arg;
|
||||
Py_INCREF(long_cnt);
|
||||
cnt = PY_SSIZE_T_MAX;
|
||||
}
|
||||
}
|
||||
|
||||
/* create countobject structure */
|
||||
lz = (countobject *)PyObject_New(countobject, &count_type);
|
||||
if (lz == NULL)
|
||||
if (lz == NULL) {
|
||||
Py_XDECREF(long_cnt);
|
||||
return NULL;
|
||||
}
|
||||
lz->cnt = cnt;
|
||||
lz->long_cnt = long_cnt;
|
||||
|
||||
return (PyObject *)lz;
|
||||
}
|
||||
|
||||
static void
|
||||
count_dealloc(countobject *lz)
|
||||
{
|
||||
Py_XDECREF(lz->long_cnt);
|
||||
PyObject_Del(lz);
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
count_nextlong(countobject *lz)
|
||||
{
|
||||
static PyObject *one = NULL;
|
||||
PyObject *cnt;
|
||||
PyObject *stepped_up;
|
||||
|
||||
if (lz->long_cnt == NULL) {
|
||||
lz->long_cnt = PyInt_FromSsize_t(PY_SSIZE_T_MAX);
|
||||
if (lz->long_cnt == NULL)
|
||||
return NULL;
|
||||
}
|
||||
if (one == NULL) {
|
||||
one = PyInt_FromLong(1);
|
||||
if (one == NULL)
|
||||
return NULL;
|
||||
}
|
||||
cnt = lz->long_cnt;
|
||||
assert(cnt != NULL);
|
||||
stepped_up = PyNumber_Add(cnt, one);
|
||||
if (stepped_up == NULL)
|
||||
return NULL;
|
||||
lz->long_cnt = stepped_up;
|
||||
return cnt;
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
count_next(countobject *lz)
|
||||
{
|
||||
if (lz->cnt == PY_SSIZE_T_MAX) {
|
||||
PyErr_SetString(PyExc_OverflowError,
|
||||
"cannot count beyond PY_SSIZE_T_MAX");
|
||||
return NULL;
|
||||
}
|
||||
if (lz->cnt == PY_SSIZE_T_MAX)
|
||||
return count_nextlong(lz);
|
||||
return PyInt_FromSsize_t(lz->cnt++);
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
count_repr(countobject *lz)
|
||||
{
|
||||
return PyUnicode_FromFormat("count(%zd)", lz->cnt);
|
||||
if (lz->cnt != PY_SSIZE_T_MAX)
|
||||
return PyUnicode_FromFormat("count(%zd)", lz->cnt);
|
||||
|
||||
return PyUnicode_FromFormat("count(%R)", lz->long_cnt);
|
||||
}
|
||||
|
||||
PyDoc_STRVAR(count_doc,
|
||||
|
@ -2086,7 +2139,7 @@ static PyTypeObject count_type = {
|
|||
sizeof(countobject), /* tp_basicsize */
|
||||
0, /* tp_itemsize */
|
||||
/* methods */
|
||||
(destructor)PyObject_Del, /* tp_dealloc */
|
||||
(destructor)count_dealloc, /* tp_dealloc */
|
||||
0, /* tp_print */
|
||||
0, /* tp_getattr */
|
||||
0, /* tp_setattr */
|
||||
|
|
|
@ -367,6 +367,7 @@ Py_Main(int argc, char **argv)
|
|||
if (fstat(fileno(fp), &sb) == 0 &&
|
||||
S_ISDIR(sb.st_mode)) {
|
||||
fprintf(stderr, "%s: '%s' is a directory, cannot continue\n", argv[0], filename);
|
||||
fclose(fp);
|
||||
return 1;
|
||||
}
|
||||
}
|
||||
|
|
|
@ -3,6 +3,9 @@
|
|||
/ Hacked for Unix by AMK
|
||||
/ $Id$
|
||||
|
||||
/ Modified to support mmap with offset - to map a 'window' of a file
|
||||
/ Author: Yotam Medini yotamm@mellanox.co.il
|
||||
/
|
||||
/ mmapmodule.cpp -- map a view of a file into memory
|
||||
/
|
||||
/ todo: need permission flags, perhaps a 'chsize' analog
|
||||
|
@ -31,6 +34,16 @@ my_getpagesize(void)
|
|||
GetSystemInfo(&si);
|
||||
return si.dwPageSize;
|
||||
}
|
||||
|
||||
static int
|
||||
my_getallocationgranularity (void)
|
||||
{
|
||||
|
||||
SYSTEM_INFO si;
|
||||
GetSystemInfo(&si);
|
||||
return si.dwAllocationGranularity;
|
||||
}
|
||||
|
||||
#endif
|
||||
|
||||
#ifdef UNIX
|
||||
|
@ -43,6 +56,8 @@ my_getpagesize(void)
|
|||
{
|
||||
return sysconf(_SC_PAGESIZE);
|
||||
}
|
||||
|
||||
#define my_getallocationgranularity my_getpagesize
|
||||
#else
|
||||
#define my_getpagesize getpagesize
|
||||
#endif
|
||||
|
@ -74,7 +89,8 @@ typedef struct {
|
|||
PyObject_HEAD
|
||||
char * data;
|
||||
size_t size;
|
||||
size_t pos;
|
||||
size_t pos; /* relative to offset */
|
||||
size_t offset;
|
||||
int exports;
|
||||
|
||||
#ifdef MS_WINDOWS
|
||||
|
@ -398,18 +414,22 @@ mmap_resize_method(mmap_object *self,
|
|||
#ifdef MS_WINDOWS
|
||||
} else {
|
||||
DWORD dwErrCode = 0;
|
||||
DWORD newSizeLow, newSizeHigh;
|
||||
DWORD off_hi, off_lo, newSizeLow, newSizeHigh;
|
||||
/* First, unmap the file view */
|
||||
UnmapViewOfFile(self->data);
|
||||
/* Close the mapping object */
|
||||
CloseHandle(self->map_handle);
|
||||
/* Move to the desired EOF position */
|
||||
#if SIZEOF_SIZE_T > 4
|
||||
newSizeHigh = (DWORD)(new_size >> 32);
|
||||
newSizeLow = (DWORD)(new_size & 0xFFFFFFFF);
|
||||
newSizeHigh = (DWORD)((self->offset + new_size) >> 32);
|
||||
newSizeLow = (DWORD)((self->offset + new_size) & 0xFFFFFFFF);
|
||||
off_hi = (DWORD)(self->offset >> 32);
|
||||
off_lo = (DWORD)(self->offset & 0xFFFFFFFF);
|
||||
#else
|
||||
newSizeHigh = 0;
|
||||
newSizeLow = (DWORD)new_size;
|
||||
off_hi = 0;
|
||||
off_lo = (DWORD)self->offset;
|
||||
#endif
|
||||
SetFilePointer(self->file_handle,
|
||||
newSizeLow, &newSizeHigh, FILE_BEGIN);
|
||||
|
@ -420,15 +440,15 @@ mmap_resize_method(mmap_object *self,
|
|||
self->file_handle,
|
||||
NULL,
|
||||
PAGE_READWRITE,
|
||||
newSizeHigh,
|
||||
newSizeLow,
|
||||
0,
|
||||
0,
|
||||
self->tagname);
|
||||
if (self->map_handle != NULL) {
|
||||
self->data = (char *) MapViewOfFile(self->map_handle,
|
||||
FILE_MAP_WRITE,
|
||||
0,
|
||||
0,
|
||||
0);
|
||||
off_hi,
|
||||
off_lo,
|
||||
new_size);
|
||||
if (self->data != NULL) {
|
||||
self->size = new_size;
|
||||
Py_INCREF(Py_None);
|
||||
|
@ -651,7 +671,7 @@ mmap_subscript(mmap_object *self, PyObject *item)
|
|||
return NULL;
|
||||
if (i < 0)
|
||||
i += self->size;
|
||||
if (i < 0 || i > self->size) {
|
||||
if (i < 0 || (size_t)i > self->size) {
|
||||
PyErr_SetString(PyExc_IndexError,
|
||||
"mmap index out of range");
|
||||
return NULL;
|
||||
|
@ -753,7 +773,7 @@ mmap_ass_subscript(mmap_object *self, PyObject *item, PyObject *value)
|
|||
return -1;
|
||||
if (i < 0)
|
||||
i += self->size;
|
||||
if (i < 0 || i > self->size) {
|
||||
if (i < 0 || (size_t)i > self->size) {
|
||||
PyErr_SetString(PyExc_IndexError,
|
||||
"mmap index out of range");
|
||||
return -1;
|
||||
|
@ -882,15 +902,18 @@ static PyTypeObject mmap_object_type = {
|
|||
Returns -1 on error, with an appropriate Python exception raised. On
|
||||
success, the map size is returned. */
|
||||
static Py_ssize_t
|
||||
_GetMapSize(PyObject *o)
|
||||
_GetMapSize(PyObject *o, const char* param)
|
||||
{
|
||||
if (o == NULL)
|
||||
return 0;
|
||||
if (PyIndex_Check(o)) {
|
||||
Py_ssize_t i = PyNumber_AsSsize_t(o, PyExc_OverflowError);
|
||||
if (i==-1 && PyErr_Occurred())
|
||||
return -1;
|
||||
if (i < 0) {
|
||||
PyErr_SetString(PyExc_OverflowError,
|
||||
"memory mapped size must be positive");
|
||||
PyErr_Format(PyExc_OverflowError,
|
||||
"memory mapped %s must be positive",
|
||||
param);
|
||||
return -1;
|
||||
}
|
||||
return i;
|
||||
|
@ -908,22 +931,25 @@ new_mmap_object(PyObject *self, PyObject *args, PyObject *kwdict)
|
|||
struct stat st;
|
||||
#endif
|
||||
mmap_object *m_obj;
|
||||
PyObject *map_size_obj = NULL;
|
||||
Py_ssize_t map_size;
|
||||
PyObject *map_size_obj = NULL, *offset_obj = NULL;
|
||||
Py_ssize_t map_size, offset;
|
||||
int fd, flags = MAP_SHARED, prot = PROT_WRITE | PROT_READ;
|
||||
int devzero = -1;
|
||||
int access = (int)ACCESS_DEFAULT;
|
||||
static char *keywords[] = {"fileno", "length",
|
||||
"flags", "prot",
|
||||
"access", NULL};
|
||||
"access", "offset", NULL};
|
||||
|
||||
if (!PyArg_ParseTupleAndKeywords(args, kwdict, "iO|iii", keywords,
|
||||
if (!PyArg_ParseTupleAndKeywords(args, kwdict, "iO|iiiO", keywords,
|
||||
&fd, &map_size_obj, &flags, &prot,
|
||||
&access))
|
||||
&access, &offset_obj))
|
||||
return NULL;
|
||||
map_size = _GetMapSize(map_size_obj);
|
||||
map_size = _GetMapSize(map_size_obj, "size");
|
||||
if (map_size < 0)
|
||||
return NULL;
|
||||
offset = _GetMapSize(offset_obj, "offset");
|
||||
if (offset < 0)
|
||||
return NULL;
|
||||
|
||||
if ((access != (int)ACCESS_DEFAULT) &&
|
||||
((flags != MAP_SHARED) || (prot != (PROT_WRITE | PROT_READ))))
|
||||
|
@ -958,7 +984,7 @@ new_mmap_object(PyObject *self, PyObject *args, PyObject *kwdict)
|
|||
if (fstat(fd, &st) == 0 && S_ISREG(st.st_mode)) {
|
||||
if (map_size == 0) {
|
||||
map_size = st.st_size;
|
||||
} else if ((size_t)map_size > st.st_size) {
|
||||
} else if ((size_t)offset + (size_t)map_size > st.st_size) {
|
||||
PyErr_SetString(PyExc_ValueError,
|
||||
"mmap length is greater than file size");
|
||||
return NULL;
|
||||
|
@ -971,6 +997,7 @@ new_mmap_object(PyObject *self, PyObject *args, PyObject *kwdict)
|
|||
m_obj->size = (size_t) map_size;
|
||||
m_obj->pos = (size_t) 0;
|
||||
m_obj->exports = 0;
|
||||
m_obj->offset = offset;
|
||||
if (fd == -1) {
|
||||
m_obj->fd = -1;
|
||||
/* Assume the caller wants to map anonymous memory.
|
||||
|
@ -997,10 +1024,10 @@ new_mmap_object(PyObject *self, PyObject *args, PyObject *kwdict)
|
|||
return NULL;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
m_obj->data = mmap(NULL, map_size,
|
||||
prot, flags,
|
||||
fd, 0);
|
||||
fd, offset);
|
||||
|
||||
if (devzero != -1) {
|
||||
close(devzero);
|
||||
|
@ -1022,10 +1049,12 @@ static PyObject *
|
|||
new_mmap_object(PyObject *self, PyObject *args, PyObject *kwdict)
|
||||
{
|
||||
mmap_object *m_obj;
|
||||
PyObject *map_size_obj = NULL;
|
||||
Py_ssize_t map_size;
|
||||
DWORD size_hi; /* upper 32 bits of m_obj->size */
|
||||
DWORD size_lo; /* lower 32 bits of m_obj->size */
|
||||
PyObject *map_size_obj = NULL, *offset_obj = NULL;
|
||||
Py_ssize_t map_size, offset;
|
||||
DWORD off_hi; /* upper 32 bits of offset */
|
||||
DWORD off_lo; /* lower 32 bits of offset */
|
||||
DWORD size_hi; /* upper 32 bits of size */
|
||||
DWORD size_lo; /* lower 32 bits of size */
|
||||
char *tagname = "";
|
||||
DWORD dwErr = 0;
|
||||
int fileno;
|
||||
|
@ -1034,11 +1063,11 @@ new_mmap_object(PyObject *self, PyObject *args, PyObject *kwdict)
|
|||
DWORD flProtect, dwDesiredAccess;
|
||||
static char *keywords[] = { "fileno", "length",
|
||||
"tagname",
|
||||
"access", NULL };
|
||||
"access", "offset", NULL };
|
||||
|
||||
if (!PyArg_ParseTupleAndKeywords(args, kwdict, "iO|zi", keywords,
|
||||
if (!PyArg_ParseTupleAndKeywords(args, kwdict, "iO|ziO", keywords,
|
||||
&fileno, &map_size_obj,
|
||||
&tagname, &access)) {
|
||||
&tagname, &access, &offset_obj)) {
|
||||
return NULL;
|
||||
}
|
||||
|
||||
|
@ -1060,9 +1089,12 @@ new_mmap_object(PyObject *self, PyObject *args, PyObject *kwdict)
|
|||
"mmap invalid access parameter.");
|
||||
}
|
||||
|
||||
map_size = _GetMapSize(map_size_obj);
|
||||
map_size = _GetMapSize(map_size_obj, "size");
|
||||
if (map_size < 0)
|
||||
return NULL;
|
||||
offset = _GetMapSize(offset_obj, "offset");
|
||||
if (offset < 0)
|
||||
return NULL;
|
||||
|
||||
/* assume -1 and 0 both mean invalid filedescriptor
|
||||
to 'anonymously' map memory.
|
||||
|
@ -1092,6 +1124,7 @@ new_mmap_object(PyObject *self, PyObject *args, PyObject *kwdict)
|
|||
m_obj->file_handle = INVALID_HANDLE_VALUE;
|
||||
m_obj->map_handle = INVALID_HANDLE_VALUE;
|
||||
m_obj->tagname = NULL;
|
||||
m_obj->offset = offset;
|
||||
|
||||
if (fh) {
|
||||
/* It is necessary to duplicate the handle, so the
|
||||
|
@ -1161,12 +1194,18 @@ new_mmap_object(PyObject *self, PyObject *args, PyObject *kwdict)
|
|||
* right by 32, so we need different code.
|
||||
*/
|
||||
#if SIZEOF_SIZE_T > 4
|
||||
size_hi = (DWORD)(m_obj->size >> 32);
|
||||
size_lo = (DWORD)(m_obj->size & 0xFFFFFFFF);
|
||||
size_hi = (DWORD)((offset + m_obj->size) >> 32);
|
||||
size_lo = (DWORD)((offset + m_obj->size) & 0xFFFFFFFF);
|
||||
off_hi = (DWORD)(offset >> 32);
|
||||
off_lo = (DWORD)(offset & 0xFFFFFFFF);
|
||||
#else
|
||||
size_hi = 0;
|
||||
size_lo = (DWORD)m_obj->size;
|
||||
size_lo = (DWORD)(offset + m_obj->size);
|
||||
off_hi = 0;
|
||||
off_lo = (DWORD)offset;
|
||||
#endif
|
||||
/* For files, it would be sufficient to pass 0 as size.
|
||||
For anonymous maps, we have to pass the size explicitly. */
|
||||
m_obj->map_handle = CreateFileMapping(m_obj->file_handle,
|
||||
NULL,
|
||||
flProtect,
|
||||
|
@ -1176,8 +1215,8 @@ new_mmap_object(PyObject *self, PyObject *args, PyObject *kwdict)
|
|||
if (m_obj->map_handle != NULL) {
|
||||
m_obj->data = (char *) MapViewOfFile(m_obj->map_handle,
|
||||
dwDesiredAccess,
|
||||
0,
|
||||
0,
|
||||
off_hi,
|
||||
off_lo,
|
||||
0);
|
||||
if (m_obj->data != NULL)
|
||||
return (PyObject *)m_obj;
|
||||
|
@ -1252,6 +1291,8 @@ PyMODINIT_FUNC
|
|||
|
||||
setint(dict, "PAGESIZE", (long)my_getpagesize());
|
||||
|
||||
setint(dict, "ALLOCATIONGRANULARITY", (long)my_getallocationgranularity());
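
Putting the pieces together (the new "offset" keyword, the _GetMapSize(..., "offset") check, and the ALLOCATIONGRANULARITY constant registered here), typical use from Python might look like the sketch below; the file name is hypothetical and the file is assumed to be large enough:

    import mmap

    with open("data.bin", "rb") as f:
        # Map a 4 KiB window starting partway into the file.  The offset is
        # expected to be a multiple of mmap.ALLOCATIONGRANULARITY (the page
        # size on Unix systems).
        win = mmap.mmap(f.fileno(), 4096,
                        access=mmap.ACCESS_READ,
                        offset=mmap.ALLOCATIONGRANULARITY)
        data = win[:16]          # bytes at file offsets ALLOCATIONGRANULARITY .. +15
        win.close()
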
|
||||
|
||||
setint(dict, "ACCESS_READ", ACCESS_READ);
|
||||
setint(dict, "ACCESS_WRITE", ACCESS_WRITE);
|
||||
setint(dict, "ACCESS_COPY", ACCESS_COPY);
|
||||
|
|
|
@ -9,8 +9,6 @@
|
|||
|
||||
#include "Python.h"
|
||||
|
||||
typedef PyDictEntry dictentry;
|
||||
typedef PyDictObject dictobject;
|
||||
|
||||
/* Set a key error with the specified argument, wrapping it in a
|
||||
* tuple automatically so that tuple keys are not unpacked as the
|
||||
|
@ -116,14 +114,14 @@ approach, using repeated multiplication by x in GF(2**n) where an irreducible
|
|||
polynomial for each table size was chosen such that x was a primitive root.
|
||||
Christian Tismer later extended that to use division by x instead, as an
|
||||
efficient way to get the high bits of the hash code into play. This scheme
|
||||
also gave excellent collision statistics, but was more expensive: two if-tests
|
||||
were required inside the loop; computing "the next" index took about the same
|
||||
number of operations but without as much potential parallelism (e.g.,
|
||||
computing 5*j can go on at the same time as computing 1+perturb in the above,
|
||||
and then shifting perturb can be done while the table index is being masked);
|
||||
and the dictobject struct required a member to hold the table's polynomial.
|
||||
In Tim's experiments the current scheme ran faster, produced equally good
|
||||
collision statistics, needed less code & used less memory.
|
||||
also gave excellent collision statistics, but was more expensive: two
|
||||
if-tests were required inside the loop; computing "the next" index took about
|
||||
the same number of operations but without as much potential parallelism
|
||||
(e.g., computing 5*j can go on at the same time as computing 1+perturb in the
|
||||
above, and then shifting perturb can be done while the table index is being
|
||||
masked); and the PyDictObject struct required a member to hold the table's
|
||||
polynomial. In Tim's experiments the current scheme ran faster, produced
|
||||
equally good collision statistics, needed less code & used less memory.
|
||||
|
||||
Theoretical Python 2.5 headache: hash codes are only C "long", but
|
||||
sizeof(Py_ssize_t) > sizeof(long) may be possible. In that case, and if a
|
||||
|
@ -137,7 +135,7 @@ which point everyone will have terabytes of RAM on 64-bit boxes).
|
|||
*/
|
||||
|
||||
/* Object used as dummy key to fill deleted entries */
|
||||
static PyObject *dummy = NULL; /* Initialized by first call to newdictobject() */
|
||||
static PyObject *dummy = NULL; /* Initialized by first call to newPyDictObject() */
|
||||
|
||||
#ifdef Py_REF_DEBUG
|
||||
PyObject *
|
||||
|
@ -148,8 +146,8 @@ _PyDict_Dummy(void)
|
|||
#endif
|
||||
|
||||
/* forward declarations */
|
||||
static dictentry *
|
||||
lookdict_unicode(dictobject *mp, PyObject *key, long hash);
|
||||
static PyDictEntry *
|
||||
lookdict_unicode(PyDictObject *mp, PyObject *key, long hash);
|
||||
|
||||
#ifdef SHOW_CONVERSION_COUNTS
|
||||
static long created = 0L;
|
||||
|
@ -192,7 +190,7 @@ static int num_free_dicts = 0;
|
|||
PyObject *
|
||||
PyDict_New(void)
|
||||
{
|
||||
register dictobject *mp;
|
||||
register PyDictObject *mp;
|
||||
if (dummy == NULL) { /* Auto-initialize dummy */
|
||||
dummy = PyUnicode_FromString("<dummy key>");
|
||||
if (dummy == NULL)
|
||||
|
@ -213,7 +211,7 @@ PyDict_New(void)
|
|||
assert (mp->ma_table == mp->ma_smalltable);
|
||||
assert (mp->ma_mask == PyDict_MINSIZE - 1);
|
||||
} else {
|
||||
mp = PyObject_GC_New(dictobject, &PyDict_Type);
|
||||
mp = PyObject_GC_New(PyDictObject, &PyDict_Type);
|
||||
if (mp == NULL)
|
||||
return NULL;
|
||||
EMPTY_TO_MINSIZE(mp);
|
||||
|
@@ -245,20 +243,20 @@ lookdict() is general-purpose, and may return NULL if (and only if) a
comparison raises an exception (this was new in Python 2.5).
lookdict_unicode() below is specialized to string keys, comparison of which can
never raise an exception; that function can never return NULL. For both, when
the key isn't found a dictentry* is returned for which the me_value field is
the key isn't found a PyDictEntry* is returned for which the me_value field is
NULL; this is the slot in the dict at which the key would have been found, and
the caller can (if it wishes) add the <key, value> pair to the returned
dictentry*.
PyDictEntry*.
*/
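A caller-side illustration of that contract (a hedged sketch, not code from this file: has_key is an invented helper, but the NULL and me_value tests are exactly the ones described above and used later by _PyDict_Contains):

#include <Python.h>

/* Returns 1 if key is present, 0 if absent, -1 if the key comparison
   raised an exception (the only case in which lookdict returns NULL). */
static int
has_key(PyDictObject *mp, PyObject *key, long hash)
{
    PyDictEntry *ep = mp->ma_lookup(mp, key, hash);
    if (ep == NULL)
        return -1;                   /* comparison raised */
    return ep->me_value != NULL;     /* NULL value marks an empty slot */
}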
static dictentry *
|
||||
lookdict(dictobject *mp, PyObject *key, register long hash)
|
||||
static PyDictEntry *
|
||||
lookdict(PyDictObject *mp, PyObject *key, register long hash)
|
||||
{
|
||||
register size_t i;
|
||||
register size_t perturb;
|
||||
register dictentry *freeslot;
|
||||
register PyDictEntry *freeslot;
|
||||
register size_t mask = (size_t)mp->ma_mask;
|
||||
dictentry *ep0 = mp->ma_table;
|
||||
register dictentry *ep;
|
||||
PyDictEntry *ep0 = mp->ma_table;
|
||||
register PyDictEntry *ep;
|
||||
register int cmp;
|
||||
PyObject *startkey;
|
||||
|
||||
|
@ -354,15 +352,15 @@ unicode_eq(PyObject *aa, PyObject *bb)
|
|||
*
|
||||
* This is valuable because dicts with only unicode keys are very common.
|
||||
*/
|
||||
static dictentry *
|
||||
lookdict_unicode(dictobject *mp, PyObject *key, register long hash)
|
||||
static PyDictEntry *
|
||||
lookdict_unicode(PyDictObject *mp, PyObject *key, register long hash)
|
||||
{
|
||||
register size_t i;
|
||||
register size_t perturb;
|
||||
register dictentry *freeslot;
|
||||
register PyDictEntry *freeslot;
|
||||
register size_t mask = (size_t)mp->ma_mask;
|
||||
dictentry *ep0 = mp->ma_table;
|
||||
register dictentry *ep;
|
||||
PyDictEntry *ep0 = mp->ma_table;
|
||||
register PyDictEntry *ep;
|
||||
|
||||
/* Make sure this function doesn't have to handle non-unicode keys,
|
||||
including subclasses of str; e.g., one reason to subclass
|
||||
|
@ -413,10 +411,10 @@ Eats a reference to key and one to value.
|
|||
Returns -1 if an error occurred, or 0 on success.
|
||||
*/
|
||||
static int
|
||||
insertdict(register dictobject *mp, PyObject *key, long hash, PyObject *value)
|
||||
insertdict(register PyDictObject *mp, PyObject *key, long hash, PyObject *value)
|
||||
{
|
||||
PyObject *old_value;
|
||||
register dictentry *ep;
|
||||
register PyDictEntry *ep;
|
||||
typedef PyDictEntry *(*lookupfunc)(PyDictObject *, PyObject *, long);
|
||||
|
||||
assert(mp->ma_lookup != NULL);
|
||||
|
@ -456,14 +454,14 @@ Note that no refcounts are changed by this routine; if needed, the caller
|
|||
is responsible for incref'ing `key` and `value`.
|
||||
*/
|
||||
static void
|
||||
insertdict_clean(register dictobject *mp, PyObject *key, long hash,
|
||||
insertdict_clean(register PyDictObject *mp, PyObject *key, long hash,
|
||||
PyObject *value)
|
||||
{
|
||||
register size_t i;
|
||||
register size_t perturb;
|
||||
register size_t mask = (size_t)mp->ma_mask;
|
||||
dictentry *ep0 = mp->ma_table;
|
||||
register dictentry *ep;
|
||||
PyDictEntry *ep0 = mp->ma_table;
|
||||
register PyDictEntry *ep;
|
||||
|
||||
i = hash & mask;
|
||||
ep = &ep0[i];
|
||||
|
@ -485,13 +483,13 @@ items again. When entries have been deleted, the new table may
|
|||
actually be smaller than the old one.
|
||||
*/
|
||||
static int
|
||||
dictresize(dictobject *mp, Py_ssize_t minused)
|
||||
dictresize(PyDictObject *mp, Py_ssize_t minused)
|
||||
{
|
||||
Py_ssize_t newsize;
|
||||
dictentry *oldtable, *newtable, *ep;
|
||||
PyDictEntry *oldtable, *newtable, *ep;
|
||||
Py_ssize_t i;
|
||||
int is_oldtable_malloced;
|
||||
dictentry small_copy[PyDict_MINSIZE];
|
||||
PyDictEntry small_copy[PyDict_MINSIZE];
|
||||
|
||||
assert(minused >= 0);
|
||||
|
||||
|
@ -530,7 +528,7 @@ dictresize(dictobject *mp, Py_ssize_t minused)
|
|||
}
|
||||
}
|
||||
else {
|
||||
newtable = PyMem_NEW(dictentry, newsize);
|
||||
newtable = PyMem_NEW(PyDictEntry, newsize);
|
||||
if (newtable == NULL) {
|
||||
PyErr_NoMemory();
|
||||
return -1;
|
||||
|
@ -541,7 +539,7 @@ dictresize(dictobject *mp, Py_ssize_t minused)
|
|||
assert(newtable != oldtable);
|
||||
mp->ma_table = newtable;
|
||||
mp->ma_mask = newsize - 1;
|
||||
memset(newtable, 0, sizeof(dictentry) * newsize);
|
||||
memset(newtable, 0, sizeof(PyDictEntry) * newsize);
|
||||
mp->ma_used = 0;
|
||||
i = mp->ma_fill;
|
||||
mp->ma_fill = 0;
|
||||
|
@ -581,8 +579,8 @@ PyObject *
|
|||
PyDict_GetItem(PyObject *op, PyObject *key)
|
||||
{
|
||||
long hash;
|
||||
dictobject *mp = (dictobject *)op;
|
||||
dictentry *ep;
|
||||
PyDictObject *mp = (PyDictObject *)op;
|
||||
PyDictEntry *ep;
|
||||
PyThreadState *tstate;
|
||||
if (!PyDict_Check(op))
|
||||
return NULL;
|
||||
|
@ -628,8 +626,8 @@ PyObject *
|
|||
PyDict_GetItemWithError(PyObject *op, PyObject *key)
|
||||
{
|
||||
long hash;
|
||||
dictobject *mp = (dictobject *)op;
|
||||
dictentry *ep;
|
||||
PyDictObject*mp = (PyDictObject *)op;
|
||||
PyDictEntry *ep;
|
||||
|
||||
if (!PyDict_Check(op)) {
|
||||
PyErr_BadInternalCall();
|
||||
|
@ -659,7 +657,7 @@ PyDict_GetItemWithError(PyObject *op, PyObject *key)
|
|||
int
|
||||
PyDict_SetItem(register PyObject *op, PyObject *key, PyObject *value)
|
||||
{
|
||||
register dictobject *mp;
|
||||
register PyDictObject *mp;
|
||||
register long hash;
|
||||
register Py_ssize_t n_used;
|
||||
|
||||
|
@ -669,7 +667,7 @@ PyDict_SetItem(register PyObject *op, PyObject *key, PyObject *value)
|
|||
}
|
||||
assert(key);
|
||||
assert(value);
|
||||
mp = (dictobject *)op;
|
||||
mp = (PyDictObject *)op;
|
||||
if (!PyUnicode_CheckExact(key) ||
|
||||
(hash = ((PyUnicodeObject *) key)->hash) == -1)
|
||||
{
|
||||
|
@ -705,9 +703,9 @@ PyDict_SetItem(register PyObject *op, PyObject *key, PyObject *value)
|
|||
int
|
||||
PyDict_DelItem(PyObject *op, PyObject *key)
|
||||
{
|
||||
register dictobject *mp;
|
||||
register PyDictObject *mp;
|
||||
register long hash;
|
||||
register dictentry *ep;
|
||||
register PyDictEntry *ep;
|
||||
PyObject *old_value, *old_key;
|
||||
|
||||
if (!PyDict_Check(op)) {
|
||||
|
@ -721,7 +719,7 @@ PyDict_DelItem(PyObject *op, PyObject *key)
|
|||
if (hash == -1)
|
||||
return -1;
|
||||
}
|
||||
mp = (dictobject *)op;
|
||||
mp = (PyDictObject *)op;
|
||||
ep = (mp->ma_lookup)(mp, key, hash);
|
||||
if (ep == NULL)
|
||||
return -1;
|
||||
|
@ -743,18 +741,18 @@ PyDict_DelItem(PyObject *op, PyObject *key)
|
|||
void
|
||||
PyDict_Clear(PyObject *op)
|
||||
{
|
||||
dictobject *mp;
|
||||
dictentry *ep, *table;
|
||||
PyDictObject *mp;
|
||||
PyDictEntry *ep, *table;
|
||||
int table_is_malloced;
|
||||
Py_ssize_t fill;
|
||||
dictentry small_copy[PyDict_MINSIZE];
|
||||
PyDictEntry small_copy[PyDict_MINSIZE];
|
||||
#ifdef Py_DEBUG
|
||||
Py_ssize_t i, n;
|
||||
#endif
|
||||
|
||||
if (!PyDict_Check(op))
|
||||
return;
|
||||
mp = (dictobject *)op;
|
||||
mp = (PyDictObject *)op;
|
||||
#ifdef Py_DEBUG
|
||||
n = mp->ma_mask + 1;
|
||||
i = 0;
|
||||
|
@@ -829,15 +827,15 @@ PyDict_Next(PyObject *op, Py_ssize_t *ppos, PyObject **pkey, PyObject **pvalue)
{
register Py_ssize_t i;
register Py_ssize_t mask;
register dictentry *ep;
register PyDictEntry *ep;

if (!PyDict_Check(op))
return 0;
i = *ppos;
if (i < 0)
return 0;
ep = ((dictobject *)op)->ma_table;
mask = ((dictobject *)op)->ma_mask;
ep = ((PyDictObject *)op)->ma_table;
mask = ((PyDictObject *)op)->ma_mask;
while (i <= mask && ep[i].me_value == NULL)
i++;
*ppos = i+1;
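The hunks above and below only rename the internal types, so the public PyDict_Next contract is unchanged; the canonical caller-side loop (standard, documented C-API usage, wrapped here in an illustrative helper) is:

#include <Python.h>

/* Iterate a dict with PyDict_Next: pos must start at 0, key and value are
   borrowed references, and the dict must not be resized during the loop. */
static Py_ssize_t
count_non_none_values(PyObject *dict)
{
    PyObject *key, *value;
    Py_ssize_t pos = 0, count = 0;

    while (PyDict_Next(dict, &pos, &key, &value)) {
        if (value != Py_None)
            count++;
    }
    return count;
}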
@ -856,15 +854,15 @@ _PyDict_Next(PyObject *op, Py_ssize_t *ppos, PyObject **pkey, PyObject **pvalue,
|
|||
{
|
||||
register Py_ssize_t i;
|
||||
register Py_ssize_t mask;
|
||||
register dictentry *ep;
|
||||
register PyDictEntry *ep;
|
||||
|
||||
if (!PyDict_Check(op))
|
||||
return 0;
|
||||
i = *ppos;
|
||||
if (i < 0)
|
||||
return 0;
|
||||
ep = ((dictobject *)op)->ma_table;
|
||||
mask = ((dictobject *)op)->ma_mask;
|
||||
ep = ((PyDictObject *)op)->ma_table;
|
||||
mask = ((PyDictObject *)op)->ma_mask;
|
||||
while (i <= mask && ep[i].me_value == NULL)
|
||||
i++;
|
||||
*ppos = i+1;
|
||||
|
@ -881,9 +879,9 @@ _PyDict_Next(PyObject *op, Py_ssize_t *ppos, PyObject **pkey, PyObject **pvalue,
|
|||
/* Methods */
|
||||
|
||||
static void
|
||||
dict_dealloc(register dictobject *mp)
|
||||
dict_dealloc(register PyDictObject *mp)
|
||||
{
|
||||
register dictentry *ep;
|
||||
register PyDictEntry *ep;
|
||||
Py_ssize_t fill = mp->ma_fill;
|
||||
PyObject_GC_UnTrack(mp);
|
||||
Py_TRASHCAN_SAFE_BEGIN(mp)
|
||||
|
@ -904,7 +902,7 @@ dict_dealloc(register dictobject *mp)
|
|||
}
|
||||
|
||||
static PyObject *
|
||||
dict_repr(dictobject *mp)
|
||||
dict_repr(PyDictObject *mp)
|
||||
{
|
||||
Py_ssize_t i;
|
||||
PyObject *s, *temp, *colon = NULL;
|
||||
|
@ -983,17 +981,17 @@ Done:
|
|||
}
|
||||
|
||||
static Py_ssize_t
|
||||
dict_length(dictobject *mp)
|
||||
dict_length(PyDictObject *mp)
|
||||
{
|
||||
return mp->ma_used;
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
dict_subscript(dictobject *mp, register PyObject *key)
|
||||
dict_subscript(PyDictObject *mp, register PyObject *key)
|
||||
{
|
||||
PyObject *v;
|
||||
long hash;
|
||||
dictentry *ep;
|
||||
PyDictEntry *ep;
|
||||
assert(mp->ma_table != NULL);
|
||||
if (!PyUnicode_CheckExact(key) ||
|
||||
(hash = ((PyUnicodeObject *) key)->hash) == -1) {
|
||||
|
@ -1027,7 +1025,7 @@ dict_subscript(dictobject *mp, register PyObject *key)
|
|||
}
|
||||
|
||||
static int
|
||||
dict_ass_sub(dictobject *mp, PyObject *v, PyObject *w)
|
||||
dict_ass_sub(PyDictObject *mp, PyObject *v, PyObject *w)
|
||||
{
|
||||
if (w == NULL)
|
||||
return PyDict_DelItem((PyObject *)mp, v);
|
||||
|
@ -1042,11 +1040,11 @@ static PyMappingMethods dict_as_mapping = {
|
|||
};
|
||||
|
||||
static PyObject *
|
||||
dict_keys(register dictobject *mp)
|
||||
dict_keys(register PyDictObject *mp)
|
||||
{
|
||||
register PyObject *v;
|
||||
register Py_ssize_t i, j;
|
||||
dictentry *ep;
|
||||
PyDictEntry *ep;
|
||||
Py_ssize_t mask, n;
|
||||
|
||||
again:
|
||||
|
@ -1076,11 +1074,11 @@ dict_keys(register dictobject *mp)
|
|||
}
|
||||
|
||||
static PyObject *
|
||||
dict_values(register dictobject *mp)
|
||||
dict_values(register PyDictObject *mp)
|
||||
{
|
||||
register PyObject *v;
|
||||
register Py_ssize_t i, j;
|
||||
dictentry *ep;
|
||||
PyDictEntry *ep;
|
||||
Py_ssize_t mask, n;
|
||||
|
||||
again:
|
||||
|
@ -1110,13 +1108,13 @@ dict_values(register dictobject *mp)
|
|||
}
|
||||
|
||||
static PyObject *
|
||||
dict_items(register dictobject *mp)
|
||||
dict_items(register PyDictObject *mp)
|
||||
{
|
||||
register PyObject *v;
|
||||
register Py_ssize_t i, j, n;
|
||||
Py_ssize_t mask;
|
||||
PyObject *item, *key, *value;
|
||||
dictentry *ep;
|
||||
PyDictEntry *ep;
|
||||
|
||||
/* Preallocate the list of tuples, to avoid allocations during
|
||||
* the loop over the items, which could trigger GC, which
|
||||
|
@ -1178,7 +1176,7 @@ dict_fromkeys(PyObject *cls, PyObject *args)
|
|||
return NULL;
|
||||
|
||||
if (PyDict_CheckExact(d) && PyAnySet_CheckExact(seq)) {
|
||||
dictobject *mp = (dictobject *)d;
|
||||
PyDictObject *mp = (PyDictObject *)d;
|
||||
Py_ssize_t pos = 0;
|
||||
PyObject *key;
|
||||
long hash;
|
||||
|
@ -1342,7 +1340,7 @@ PyDict_Merge(PyObject *a, PyObject *b, int override)
|
|||
{
|
||||
register PyDictObject *mp, *other;
|
||||
register Py_ssize_t i;
|
||||
dictentry *entry;
|
||||
PyDictEntry *entry;
|
||||
|
||||
/* We accept for the argument either a concrete dictionary object,
|
||||
* or an abstract "mapping" object. For the former, we can do
|
||||
|
@ -1353,9 +1351,9 @@ PyDict_Merge(PyObject *a, PyObject *b, int override)
|
|||
PyErr_BadInternalCall();
|
||||
return -1;
|
||||
}
|
||||
mp = (dictobject*)a;
|
||||
mp = (PyDictObject*)a;
|
||||
if (PyDict_CheckExact(b)) {
|
||||
other = (dictobject*)b;
|
||||
other = (PyDictObject*)b;
|
||||
if (other == mp || other->ma_used == 0)
|
||||
/* a.update(a) or a.update({}); nothing to do */
|
||||
return 0;
|
||||
|
@ -1435,7 +1433,7 @@ PyDict_Merge(PyObject *a, PyObject *b, int override)
|
|||
}
|
||||
|
||||
static PyObject *
|
||||
dict_copy(register dictobject *mp)
|
||||
dict_copy(register PyDictObject *mp)
|
||||
{
|
||||
return PyDict_Copy((PyObject*)mp);
|
||||
}
|
||||
|
@ -1465,7 +1463,7 @@ PyDict_Size(PyObject *mp)
|
|||
PyErr_BadInternalCall();
|
||||
return -1;
|
||||
}
|
||||
return ((dictobject *)mp)->ma_used;
|
||||
return ((PyDictObject *)mp)->ma_used;
|
||||
}
|
||||
|
||||
PyObject *
|
||||
|
@ -1475,7 +1473,7 @@ PyDict_Keys(PyObject *mp)
|
|||
PyErr_BadInternalCall();
|
||||
return NULL;
|
||||
}
|
||||
return dict_keys((dictobject *)mp);
|
||||
return dict_keys((PyDictObject *)mp);
|
||||
}
|
||||
|
||||
PyObject *
|
||||
|
@ -1485,7 +1483,7 @@ PyDict_Values(PyObject *mp)
|
|||
PyErr_BadInternalCall();
|
||||
return NULL;
|
||||
}
|
||||
return dict_values((dictobject *)mp);
|
||||
return dict_values((PyDictObject *)mp);
|
||||
}
|
||||
|
||||
PyObject *
|
||||
|
@ -1495,7 +1493,7 @@ PyDict_Items(PyObject *mp)
|
|||
PyErr_BadInternalCall();
|
||||
return NULL;
|
||||
}
|
||||
return dict_items((dictobject *)mp);
|
||||
return dict_items((PyDictObject *)mp);
|
||||
}
|
||||
|
||||
/* Return 1 if dicts equal, 0 if not, -1 if error.
|
||||
|
@ -1503,7 +1501,7 @@ PyDict_Items(PyObject *mp)
|
|||
* Uses only Py_EQ comparison.
|
||||
*/
|
||||
static int
|
||||
dict_equal(dictobject *a, dictobject *b)
|
||||
dict_equal(PyDictObject *a, PyDictObject *b)
|
||||
{
|
||||
Py_ssize_t i;
|
||||
|
||||
|
@ -1550,7 +1548,7 @@ dict_richcompare(PyObject *v, PyObject *w, int op)
|
|||
res = Py_NotImplemented;
|
||||
}
|
||||
else if (op == Py_EQ || op == Py_NE) {
|
||||
cmp = dict_equal((dictobject *)v, (dictobject *)w);
|
||||
cmp = dict_equal((PyDictObject *)v, (PyDictObject *)w);
|
||||
if (cmp < 0)
|
||||
return NULL;
|
||||
res = (cmp == (op == Py_EQ)) ? Py_True : Py_False;
|
||||
|
@ -1562,10 +1560,10 @@ dict_richcompare(PyObject *v, PyObject *w, int op)
|
|||
}
|
||||
|
||||
static PyObject *
|
||||
dict_contains(register dictobject *mp, PyObject *key)
|
||||
dict_contains(register PyDictObject *mp, PyObject *key)
|
||||
{
|
||||
long hash;
|
||||
dictentry *ep;
|
||||
PyDictEntry *ep;
|
||||
|
||||
if (!PyUnicode_CheckExact(key) ||
|
||||
(hash = ((PyUnicodeObject *) key)->hash) == -1) {
|
||||
|
@ -1580,13 +1578,13 @@ dict_contains(register dictobject *mp, PyObject *key)
|
|||
}
|
||||
|
||||
static PyObject *
|
||||
dict_get(register dictobject *mp, PyObject *args)
|
||||
dict_get(register PyDictObject *mp, PyObject *args)
|
||||
{
|
||||
PyObject *key;
|
||||
PyObject *failobj = Py_None;
|
||||
PyObject *val = NULL;
|
||||
long hash;
|
||||
dictentry *ep;
|
||||
PyDictEntry *ep;
|
||||
|
||||
if (!PyArg_UnpackTuple(args, "get", 1, 2, &key, &failobj))
|
||||
return NULL;
|
||||
|
@ -1609,13 +1607,13 @@ dict_get(register dictobject *mp, PyObject *args)
|
|||
|
||||
|
||||
static PyObject *
|
||||
dict_setdefault(register dictobject *mp, PyObject *args)
|
||||
dict_setdefault(register PyDictObject *mp, PyObject *args)
|
||||
{
|
||||
PyObject *key;
|
||||
PyObject *failobj = Py_None;
|
||||
PyObject *val = NULL;
|
||||
long hash;
|
||||
dictentry *ep;
|
||||
PyDictEntry *ep;
|
||||
|
||||
if (!PyArg_UnpackTuple(args, "setdefault", 1, 2, &key, &failobj))
|
||||
return NULL;
|
||||
|
@ -1641,17 +1639,17 @@ dict_setdefault(register dictobject *mp, PyObject *args)
|
|||
|
||||
|
||||
static PyObject *
|
||||
dict_clear(register dictobject *mp)
|
||||
dict_clear(register PyDictObject *mp)
|
||||
{
|
||||
PyDict_Clear((PyObject *)mp);
|
||||
Py_RETURN_NONE;
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
dict_pop(dictobject *mp, PyObject *args)
|
||||
dict_pop(PyDictObject *mp, PyObject *args)
|
||||
{
|
||||
long hash;
|
||||
dictentry *ep;
|
||||
PyDictEntry *ep;
|
||||
PyObject *old_value, *old_key;
|
||||
PyObject *key, *deflt = NULL;
|
||||
|
||||
|
@ -1694,10 +1692,10 @@ dict_pop(dictobject *mp, PyObject *args)
|
|||
}
|
||||
|
||||
static PyObject *
|
||||
dict_popitem(dictobject *mp)
|
||||
dict_popitem(PyDictObject *mp)
|
||||
{
|
||||
Py_ssize_t i = 0;
|
||||
dictentry *ep;
|
||||
PyDictEntry *ep;
|
||||
PyObject *res;
|
||||
|
||||
/* Allocate the result tuple before checking the size. Believe it
|
||||
|
@ -1776,7 +1774,7 @@ dict_tp_clear(PyObject *op)
|
|||
extern PyTypeObject PyDictIterKey_Type; /* Forward */
|
||||
extern PyTypeObject PyDictIterValue_Type; /* Forward */
|
||||
extern PyTypeObject PyDictIterItem_Type; /* Forward */
|
||||
static PyObject *dictiter_new(dictobject *, PyTypeObject *);
|
||||
static PyObject *dictiter_new(PyDictObject *, PyTypeObject *);
|
||||
|
||||
|
||||
PyDoc_STRVAR(contains__doc__,
|
||||
|
@ -1859,8 +1857,8 @@ int
|
|||
PyDict_Contains(PyObject *op, PyObject *key)
|
||||
{
|
||||
long hash;
|
||||
dictobject *mp = (dictobject *)op;
|
||||
dictentry *ep;
|
||||
PyDictObject *mp = (PyDictObject *)op;
|
||||
PyDictEntry *ep;
|
||||
|
||||
if (!PyUnicode_CheckExact(key) ||
|
||||
(hash = ((PyUnicodeObject *) key)->hash) == -1) {
|
||||
|
@ -1876,8 +1874,8 @@ PyDict_Contains(PyObject *op, PyObject *key)
|
|||
int
|
||||
_PyDict_Contains(PyObject *op, PyObject *key, long hash)
|
||||
{
|
||||
dictobject *mp = (dictobject *)op;
|
||||
dictentry *ep;
|
||||
PyDictObject *mp = (PyDictObject *)op;
|
||||
PyDictEntry *ep;
|
||||
|
||||
ep = (mp->ma_lookup)(mp, key, hash);
|
||||
return ep == NULL ? -1 : (ep->me_value != NULL);
|
||||
|
@ -1924,7 +1922,7 @@ dict_init(PyObject *self, PyObject *args, PyObject *kwds)
|
|||
}
|
||||
|
||||
static PyObject *
|
||||
dict_iter(dictobject *dict)
|
||||
dict_iter(PyDictObject *dict)
|
||||
{
|
||||
return dictiter_new(dict, &PyDictIterKey_Type);
|
||||
}
|
||||
|
@ -1943,7 +1941,7 @@ PyDoc_STRVAR(dictionary_doc,
|
|||
PyTypeObject PyDict_Type = {
|
||||
PyVarObject_HEAD_INIT(&PyType_Type, 0)
|
||||
"dict",
|
||||
sizeof(dictobject),
|
||||
sizeof(PyDictObject),
|
||||
0,
|
||||
(destructor)dict_dealloc, /* tp_dealloc */
|
||||
0, /* tp_print */
|
||||
|
@ -2028,7 +2026,7 @@ PyDict_DelItemString(PyObject *v, const char *key)
|
|||
|
||||
typedef struct {
|
||||
PyObject_HEAD
|
||||
dictobject *di_dict; /* Set to NULL when iterator is exhausted */
|
||||
PyDictObject *di_dict; /* Set to NULL when iterator is exhausted */
|
||||
Py_ssize_t di_used;
|
||||
Py_ssize_t di_pos;
|
||||
PyObject* di_result; /* reusable result tuple for iteritems */
|
||||
|
@ -2036,7 +2034,7 @@ typedef struct {
|
|||
} dictiterobject;
|
||||
|
||||
static PyObject *
|
||||
dictiter_new(dictobject *dict, PyTypeObject *itertype)
|
||||
dictiter_new(PyDictObject *dict, PyTypeObject *itertype)
|
||||
{
|
||||
dictiterobject *di;
|
||||
di = PyObject_New(dictiterobject, itertype);
|
||||
|
@ -2089,8 +2087,8 @@ static PyObject *dictiter_iternextkey(dictiterobject *di)
|
|||
{
|
||||
PyObject *key;
|
||||
register Py_ssize_t i, mask;
|
||||
register dictentry *ep;
|
||||
dictobject *d = di->di_dict;
|
||||
register PyDictEntry *ep;
|
||||
PyDictObject *d = di->di_dict;
|
||||
|
||||
if (d == NULL)
|
||||
return NULL;
|
||||
|
@ -2161,8 +2159,8 @@ static PyObject *dictiter_iternextvalue(dictiterobject *di)
|
|||
{
|
||||
PyObject *value;
|
||||
register Py_ssize_t i, mask;
|
||||
register dictentry *ep;
|
||||
dictobject *d = di->di_dict;
|
||||
register PyDictEntry *ep;
|
||||
PyDictObject *d = di->di_dict;
|
||||
|
||||
if (d == NULL)
|
||||
return NULL;
|
||||
|
@ -2233,8 +2231,8 @@ static PyObject *dictiter_iternextitem(dictiterobject *di)
|
|||
{
|
||||
PyObject *key, *value, *result = di->di_result;
|
||||
register Py_ssize_t i, mask;
|
||||
register dictentry *ep;
|
||||
dictobject *d = di->di_dict;
|
||||
register PyDictEntry *ep;
|
||||
PyDictObject *d = di->di_dict;
|
||||
|
||||
if (d == NULL)
|
||||
return NULL;
|
||||
|
@ -2324,7 +2322,7 @@ PyTypeObject PyDictIterItem_Type = {
|
|||
|
||||
typedef struct {
|
||||
PyObject_HEAD
|
||||
dictobject *dv_dict;
|
||||
PyDictObject *dv_dict;
|
||||
} dictviewobject;
|
||||
|
||||
|
||||
|
@ -2363,7 +2361,7 @@ dictview_new(PyObject *dict, PyTypeObject *type)
|
|||
if (dv == NULL)
|
||||
return NULL;
|
||||
Py_INCREF(dict);
|
||||
dv->dv_dict = (dictobject *)dict;
|
||||
dv->dv_dict = (PyDictObject *)dict;
|
||||
return (PyObject *)dv;
|
||||
}
|
||||
|
||||
|
|
|
@ -7,6 +7,7 @@ typedef struct {
|
|||
long en_index; /* current index of enumeration */
|
||||
PyObject* en_sit; /* secondary iterator of enumeration */
|
||||
PyObject* en_result; /* result tuple */
|
||||
PyObject* en_longindex; /* index for sequences >= LONG_MAX */
|
||||
} enumobject;
|
||||
|
||||
static PyObject *
|
||||
|
@ -25,6 +26,7 @@ enum_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
|
|||
return NULL;
|
||||
en->en_index = 0;
|
||||
en->en_sit = PyObject_GetIter(seq);
|
||||
en->en_longindex = NULL;
|
||||
if (en->en_sit == NULL) {
|
||||
Py_DECREF(en);
|
||||
return NULL;
|
||||
|
@ -43,6 +45,7 @@ enum_dealloc(enumobject *en)
|
|||
PyObject_GC_UnTrack(en);
|
||||
Py_XDECREF(en->en_sit);
|
||||
Py_XDECREF(en->en_result);
|
||||
Py_XDECREF(en->en_longindex);
|
||||
Py_Type(en)->tp_free(en);
|
||||
}
|
||||
|
||||
|
@ -51,9 +54,52 @@ enum_traverse(enumobject *en, visitproc visit, void *arg)
|
|||
{
|
||||
Py_VISIT(en->en_sit);
|
||||
Py_VISIT(en->en_result);
|
||||
Py_VISIT(en->en_longindex);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
enum_next_long(enumobject *en, PyObject* next_item)
|
||||
{
|
||||
static PyObject *one = NULL;
|
||||
PyObject *result = en->en_result;
|
||||
PyObject *next_index;
|
||||
PyObject *stepped_up;
|
||||
|
||||
if (en->en_longindex == NULL) {
|
||||
en->en_longindex = PyInt_FromLong(LONG_MAX);
|
||||
if (en->en_longindex == NULL)
|
||||
return NULL;
|
||||
}
|
||||
if (one == NULL) {
|
||||
one = PyInt_FromLong(1);
|
||||
if (one == NULL)
|
||||
return NULL;
|
||||
}
|
||||
next_index = en->en_longindex;
|
||||
assert(next_index != NULL);
|
||||
stepped_up = PyNumber_Add(next_index, one);
|
||||
if (stepped_up == NULL)
|
||||
return NULL;
|
||||
en->en_longindex = stepped_up;
|
||||
|
||||
if (result->ob_refcnt == 1) {
|
||||
Py_INCREF(result);
|
||||
Py_DECREF(PyTuple_GET_ITEM(result, 0));
|
||||
Py_DECREF(PyTuple_GET_ITEM(result, 1));
|
||||
} else {
|
||||
result = PyTuple_New(2);
|
||||
if (result == NULL) {
|
||||
Py_DECREF(next_index);
|
||||
Py_DECREF(next_item);
|
||||
return NULL;
|
||||
}
|
||||
}
|
||||
PyTuple_SET_ITEM(result, 0, next_index);
|
||||
PyTuple_SET_ITEM(result, 1, next_item);
|
||||
return result;
|
||||
}
|
||||
|
||||
static PyObject *
|
||||
enum_next(enumobject *en)
|
||||
{
|
||||
|
@ -62,16 +108,13 @@ enum_next(enumobject *en)
|
|||
PyObject *result = en->en_result;
|
||||
PyObject *it = en->en_sit;
|
||||
|
||||
if (en->en_index == LONG_MAX) {
|
||||
PyErr_SetString(PyExc_OverflowError,
|
||||
"enumerate() is limited to LONG_MAX items");
|
||||
return NULL;
|
||||
}
|
||||
|
||||
next_item = (*Py_Type(it)->tp_iternext)(it);
|
||||
if (next_item == NULL)
|
||||
return NULL;
|
||||
|
||||
if (en->en_index == LONG_MAX)
|
||||
return enum_next_long(en, next_item);
|
||||
|
||||
next_index = PyInt_FromLong(en->en_index);
|
||||
if (next_index == NULL) {
|
||||
Py_DECREF(next_item);
|
||||
|
|
|
@@ -463,10 +463,10 @@ list_repeat(PyListObject *a, Py_ssize_t n)
if (n < 0)
n = 0;
size = Py_Size(a) * n;
if (size == 0)
return PyList_New(0);
if (n && size/n != Py_Size(a))
return PyErr_NoMemory();
if (size == 0)
return PyList_New(0);
np = (PyListObject *) PyList_New(size);
if (np == NULL)
return NULL;

@@ -633,7 +633,7 @@ static PyObject *
list_inplace_repeat(PyListObject *self, Py_ssize_t n)
{
PyObject **items;
Py_ssize_t size, i, j, p;
Py_ssize_t size, i, j, p, newsize;


size = PyList_GET_SIZE(self);

@@ -648,7 +648,10 @@ list_inplace_repeat(PyListObject *self, Py_ssize_t n)
return (PyObject *)self;
}

if (list_resize(self, size*n) == -1)
newsize = size * n;
if (newsize/n != size)
return PyErr_NoMemory();
if (list_resize(self, newsize) == -1)
return NULL;

p = size;
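Both hunks above rely on the same division-based overflow test; isolated into a hedged sketch (the helper name is invented, the check itself is the one in the diff):

#include <Python.h>

/* size = a * n overflowed exactly when dividing the product back by n does
   not recover a; n == 0 is excluded first so the division is always safe. */
static int
checked_mul(Py_ssize_t a, Py_ssize_t n, Py_ssize_t *out)
{
    Py_ssize_t size = a * n;
    if (n && size / n != a)
        return -1;            /* overflow: caller raises MemoryError */
    *out = size;
    return 0;
}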
|
||||
|
|
|
@@ -423,7 +423,12 @@ _PyObject_Str(PyObject *v)
if (Py_Type(v)->tp_str == NULL)
return PyObject_Repr(v);

/* It is possible for a type to have a tp_str representation that loops
infinitely. */
if (Py_EnterRecursiveCall(" while getting the str of an object"))
return NULL;
res = (*Py_Type(v)->tp_str)(v);
Py_LeaveRecursiveCall();
if (res == NULL)
return NULL;
if (!(PyString_Check(res) || PyUnicode_Check(res))) {
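The guard added here is the general C-API pairing for any call that may re-enter arbitrary Python code; a minimal sketch of the pattern (the wrapped call and the message string are illustrative):

#include <Python.h>

/* Bump the recursion counter (returns nonzero and sets RuntimeError once
   the limit is hit), do the potentially re-entrant call, always pop. */
static PyObject *
repr_with_guard(PyObject *v)
{
    PyObject *res;

    if (Py_EnterRecursiveCall(" while getting an object's repr"))
        return NULL;
    res = PyObject_Repr(v);      /* may call back into Python code */
    Py_LeaveRecursiveCall();
    return res;
}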
|
||||
|
|
|
@@ -598,15 +598,15 @@ PyObject *PyString_DecodeEscape(const char *s,
case '0': case '1': case '2': case '3':
case '4': case '5': case '6': case '7':
c = s[-1] - '0';
if ('0' <= *s && *s <= '7') {
if (s < end && '0' <= *s && *s <= '7') {
c = (c<<3) + *s++ - '0';
if ('0' <= *s && *s <= '7')
if (s < end && '0' <= *s && *s <= '7')
c = (c<<3) + *s++ - '0';
}
*p++ = c;
break;
case 'x':
if (ISXDIGIT(s[0]) && ISXDIGIT(s[1])) {
if (s+1 < end && ISXDIGIT(s[0]) && ISXDIGIT(s[1])) {
unsigned int x = 0;
c = Py_CHARMASK(*s);
s++;
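The shape of the fix in this hunk, pulled out into a standalone hedged sketch (parse_octal is an invented helper; the point is that *s, and for \x also s[1], is only read after proving the byte exists before end):

/* Parse up to two more octal digits after the first one, never reading
   past `end`; the caller truncates the 0..511 result to a byte.  The
   caller has already consumed the first digit, hence s[-1]. */
static int
parse_octal(const char **sp, const char *end)
{
    const char *s = *sp;
    int c = s[-1] - '0';
    if (s < end && '0' <= *s && *s <= '7') {
        c = (c << 3) + *s++ - '0';
        if (s < end && '0' <= *s && *s <= '7')
            c = (c << 3) + *s++ - '0';
    }
    *sp = s;
    return c;
}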
|
||||
|
|
|
@@ -195,13 +195,25 @@ tuplerepr(PyTupleObject *v)
if (n == 0)
return PyUnicode_FromString("()");

/* While not mutable, it is still possible to end up with a cycle in a
tuple through an object that stores itself within a tuple (and thus
infinitely asks for the repr of itself). This should only be
possible within a type. */
i = Py_ReprEnter((PyObject *)v);
if (i != 0) {
return i > 0 ? PyString_FromString("(...)") : NULL;
}
|
||||
|
||||
pieces = PyTuple_New(n);
|
||||
if (pieces == NULL)
|
||||
return NULL;
|
||||
|
||||
/* Do repr() on each element. */
|
||||
for (i = 0; i < n; ++i) {
|
||||
if (Py_EnterRecursiveCall(" while getting the repr of a tuple"))
|
||||
goto Done;
|
||||
s = PyObject_Repr(v->ob_item[i]);
|
||||
Py_LeaveRecursiveCall();
|
||||
if (s == NULL)
|
||||
goto Done;
|
||||
PyTuple_SET_ITEM(pieces, i, s);
|
||||
|
@ -236,6 +248,7 @@ tuplerepr(PyTupleObject *v)
|
|||
|
||||
Done:
|
||||
Py_DECREF(pieces);
|
||||
Py_ReprLeave((PyObject *)v);
|
||||
return result;
|
||||
}
|
||||
|
||||
|
|
|
@ -2671,7 +2671,10 @@ PyObject *PyUnicode_DecodeUnicodeEscape(const char *s,
|
|||
startinpos = s-starts;
|
||||
/* \ - Escapes */
|
||||
s++;
|
||||
switch (*s++) {
|
||||
c = *s++;
|
||||
if (s > end)
|
||||
c = '\0'; /* Invalid after \ */
|
||||
switch (c) {
|
||||
|
||||
/* \x escapes */
|
||||
case '\n': break;
|
||||
|
@ -2690,9 +2693,9 @@ PyObject *PyUnicode_DecodeUnicodeEscape(const char *s,
|
|||
case '0': case '1': case '2': case '3':
|
||||
case '4': case '5': case '6': case '7':
|
||||
x = s[-1] - '0';
|
||||
if ('0' <= *s && *s <= '7') {
|
||||
if (s < end && '0' <= *s && *s <= '7') {
|
||||
x = (x<<3) + *s++ - '0';
|
||||
if ('0' <= *s && *s <= '7')
|
||||
if (s < end && '0' <= *s && *s <= '7')
|
||||
x = (x<<3) + *s++ - '0';
|
||||
}
|
||||
*p++ = x;
|
||||
|
|
|
@ -377,11 +377,11 @@ Py_NO_ENABLE_SHARED to find out. Also support MS_NO_COREDLL for b/w compat */
|
|||
define these.
|
||||
If some compiler does not provide them, modify the #if appropriately. */
|
||||
#if defined(_MSC_VER)
|
||||
#if _MSC_VER > 1201
|
||||
#if _MSC_VER > 1300
|
||||
#define HAVE_UINTPTR_T 1
|
||||
#define HAVE_INTPTR_T 1
|
||||
#else
|
||||
/* VC6 & eVC4 don't support the C99 LL suffix for 64-bit integer literals */
|
||||
/* VC6, VS 2002 and eVC4 don't support the C99 LL suffix for 64-bit integer literals */
|
||||
#define Py_LL(x) x##I64
|
||||
#endif /* _MSC_VER > 1200 */
|
||||
#endif /* _MSC_VER */
|
||||
|
|
|
@@ -124,7 +124,7 @@ addnfa(nfagrammar *gr, char *name)

nf = newnfa(name);
gr->gr_nfa = (nfa **)PyObject_REALLOC(gr->gr_nfa,
sizeof(nfa) * (gr->gr_nnfas + 1));
sizeof(nfa*) * (gr->gr_nnfas + 1));
if (gr->gr_nfa == NULL)
Py_FatalError("out of mem");
gr->gr_nfa[gr->gr_nnfas++] = nf;
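The one-character fix above is the classic element-size bug when growing an array of pointers; the same mistake and fix in a standalone hedged sketch (plain realloc is used here only so the example is self-contained):

#include <stdlib.h>

typedef struct nfa nfa;   /* opaque here; only pointers to it are stored */

/* Grow a table of nfa POINTERS by one slot: the allocation must be sized
   in sizeof(nfa *) units, not sizeof(nfa), which is what the old code did. */
static nfa **
grow_nfa_table(nfa **table, size_t count)
{
    return (nfa **)realloc(table, sizeof(nfa *) * (count + 1));
}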
|
||||
|
@ -487,6 +487,7 @@ makedfa(nfagrammar *gr, nfa *nf, dfa *d)
|
|||
convert(d, xx_nstates, xx_state);
|
||||
|
||||
/* XXX cleanup */
|
||||
PyObject_FREE(xx_state);
|
||||
}
|
||||
|
||||
static void
|
||||
|
|
|
@ -1539,7 +1539,7 @@ ast_for_binop(struct compiling *c, const node *n)
|
|||
tmp_result = BinOp(result, newoperator, tmp,
|
||||
LINENO(next_oper), next_oper->n_col_offset,
|
||||
c->c_arena);
|
||||
if (!tmp)
|
||||
if (!tmp_result)
|
||||
return NULL;
|
||||
result = tmp_result;
|
||||
}
|
||||
|
|
|
@ -1611,6 +1611,84 @@ builtin_sum(PyObject *self, PyObject *args)
|
|||
Py_INCREF(result);
|
||||
}
|
||||
|
||||
#ifndef SLOW_SUM
/* Fast addition by keeping temporary sums in C instead of new Python objects.
   Assumes all inputs are the same type. If the assumption fails, default
   to the more general routine.
*/
|
||||
if (PyInt_CheckExact(result)) {
|
||||
long i_result = PyInt_AS_LONG(result);
|
||||
Py_DECREF(result);
|
||||
result = NULL;
|
||||
while(result == NULL) {
|
||||
item = PyIter_Next(iter);
|
||||
if (item == NULL) {
|
||||
Py_DECREF(iter);
|
||||
if (PyErr_Occurred())
|
||||
return NULL;
|
||||
return PyInt_FromLong(i_result);
|
||||
}
|
||||
if (PyInt_CheckExact(item)) {
|
||||
long b = PyInt_AS_LONG(item);
|
||||
long x = i_result + b;
|
||||
if ((x^i_result) >= 0 || (x^b) >= 0) {
|
||||
i_result = x;
|
||||
Py_DECREF(item);
|
||||
continue;
|
||||
}
|
||||
}
|
||||
/* Either overflowed or is not an int. Restore real objects and process normally */
|
||||
result = PyInt_FromLong(i_result);
|
||||
temp = PyNumber_Add(result, item);
|
||||
Py_DECREF(result);
|
||||
Py_DECREF(item);
|
||||
result = temp;
|
||||
if (result == NULL) {
|
||||
Py_DECREF(iter);
|
||||
return NULL;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (PyFloat_CheckExact(result)) {
|
||||
double f_result = PyFloat_AS_DOUBLE(result);
|
||||
Py_DECREF(result);
|
||||
result = NULL;
|
||||
while(result == NULL) {
|
||||
item = PyIter_Next(iter);
|
||||
if (item == NULL) {
|
||||
Py_DECREF(iter);
|
||||
if (PyErr_Occurred())
|
||||
return NULL;
|
||||
return PyFloat_FromDouble(f_result);
|
||||
}
|
||||
if (PyFloat_CheckExact(item)) {
|
||||
PyFPE_START_PROTECT("add", return 0)
|
||||
f_result += PyFloat_AS_DOUBLE(item);
|
||||
PyFPE_END_PROTECT(f_result)
|
||||
Py_DECREF(item);
|
||||
continue;
|
||||
}
|
||||
if (PyInt_CheckExact(item)) {
|
||||
PyFPE_START_PROTECT("add", return 0)
|
||||
f_result += (double)PyInt_AS_LONG(item);
|
||||
PyFPE_END_PROTECT(f_result)
|
||||
Py_DECREF(item);
|
||||
continue;
|
||||
}
|
||||
result = PyFloat_FromDouble(f_result);
|
||||
temp = PyNumber_Add(result, item);
|
||||
Py_DECREF(result);
|
||||
Py_DECREF(item);
|
||||
result = temp;
|
||||
if (result == NULL) {
|
||||
Py_DECREF(iter);
|
||||
return NULL;
|
||||
}
|
||||
}
|
||||
}
|
||||
#endif
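The only non-obvious line in the int fast path above is its overflow test; in isolation (an illustrative helper, the expression itself is taken verbatim from the diff):

/* Signed addition x = a + b overflowed iff x disagrees in sign with BOTH
   operands, i.e. neither (x ^ a) nor (x ^ b) is non-negative. */
static int
add_longs_checked(long a, long b, long *out)
{
    long x = a + b;
    if ((x ^ a) >= 0 || (x ^ b) >= 0) {
        *out = x;
        return 0;
    }
    return -1;    /* overflow: fall back to PyNumber_Add, as sum() does */
}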
|
||||
|
||||
for(;;) {
|
||||
item = PyIter_Next(iter);
|
||||
if (item == NULL) {
|
||||
|
|
|
@ -1009,6 +1009,7 @@ PyMarshal_ReadLongFromFile(FILE *fp)
|
|||
RFILE rf;
|
||||
rf.fp = fp;
|
||||
rf.strings = NULL;
|
||||
rf.ptr = rf.end = NULL;
|
||||
return r_long(&rf);
|
||||
}
|
||||
|
||||
|
@ -1082,6 +1083,7 @@ PyMarshal_ReadObjectFromFile(FILE *fp)
|
|||
rf.fp = fp;
|
||||
rf.strings = PyList_New(0);
|
||||
rf.depth = 0;
|
||||
rf.ptr = rf.end = NULL;
|
||||
result = r_object(&rf);
|
||||
Py_DECREF(rf.strings);
|
||||
return result;
|
||||
|
|
|
@ -13172,6 +13172,138 @@ fi
|
|||
|
||||
|
||||
# Check for use of the system libffi library
|
||||
if test "${ac_cv_header_ffi_h+set}" = set; then
|
||||
{ echo "$as_me:$LINENO: checking for ffi.h" >&5
|
||||
echo $ECHO_N "checking for ffi.h... $ECHO_C" >&6; }
|
||||
if test "${ac_cv_header_ffi_h+set}" = set; then
|
||||
echo $ECHO_N "(cached) $ECHO_C" >&6
|
||||
fi
|
||||
{ echo "$as_me:$LINENO: result: $ac_cv_header_ffi_h" >&5
|
||||
echo "${ECHO_T}$ac_cv_header_ffi_h" >&6; }
|
||||
else
|
||||
# Is the header compilable?
|
||||
{ echo "$as_me:$LINENO: checking ffi.h usability" >&5
|
||||
echo $ECHO_N "checking ffi.h usability... $ECHO_C" >&6; }
|
||||
cat >conftest.$ac_ext <<_ACEOF
|
||||
/* confdefs.h. */
|
||||
_ACEOF
|
||||
cat confdefs.h >>conftest.$ac_ext
|
||||
cat >>conftest.$ac_ext <<_ACEOF
|
||||
/* end confdefs.h. */
|
||||
$ac_includes_default
|
||||
#include <ffi.h>
|
||||
_ACEOF
|
||||
rm -f conftest.$ac_objext
|
||||
if { (ac_try="$ac_compile"
|
||||
case "(($ac_try" in
|
||||
*\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
|
||||
*) ac_try_echo=$ac_try;;
|
||||
esac
|
||||
eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5
|
||||
(eval "$ac_compile") 2>conftest.er1
|
||||
ac_status=$?
|
||||
grep -v '^ *+' conftest.er1 >conftest.err
|
||||
rm -f conftest.er1
|
||||
cat conftest.err >&5
|
||||
echo "$as_me:$LINENO: \$? = $ac_status" >&5
|
||||
(exit $ac_status); } && {
|
||||
test -z "$ac_c_werror_flag" ||
|
||||
test ! -s conftest.err
|
||||
} && test -s conftest.$ac_objext; then
|
||||
ac_header_compiler=yes
|
||||
else
|
||||
echo "$as_me: failed program was:" >&5
|
||||
sed 's/^/| /' conftest.$ac_ext >&5
|
||||
|
||||
ac_header_compiler=no
|
||||
fi
|
||||
|
||||
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
|
||||
{ echo "$as_me:$LINENO: result: $ac_header_compiler" >&5
|
||||
echo "${ECHO_T}$ac_header_compiler" >&6; }
|
||||
|
||||
# Is the header present?
|
||||
{ echo "$as_me:$LINENO: checking ffi.h presence" >&5
|
||||
echo $ECHO_N "checking ffi.h presence... $ECHO_C" >&6; }
|
||||
cat >conftest.$ac_ext <<_ACEOF
|
||||
/* confdefs.h. */
|
||||
_ACEOF
|
||||
cat confdefs.h >>conftest.$ac_ext
|
||||
cat >>conftest.$ac_ext <<_ACEOF
|
||||
/* end confdefs.h. */
|
||||
#include <ffi.h>
|
||||
_ACEOF
|
||||
if { (ac_try="$ac_cpp conftest.$ac_ext"
|
||||
case "(($ac_try" in
|
||||
*\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
|
||||
*) ac_try_echo=$ac_try;;
|
||||
esac
|
||||
eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5
|
||||
(eval "$ac_cpp conftest.$ac_ext") 2>conftest.er1
|
||||
ac_status=$?
|
||||
grep -v '^ *+' conftest.er1 >conftest.err
|
||||
rm -f conftest.er1
|
||||
cat conftest.err >&5
|
||||
echo "$as_me:$LINENO: \$? = $ac_status" >&5
|
||||
(exit $ac_status); } >/dev/null && {
|
||||
test -z "$ac_c_preproc_warn_flag$ac_c_werror_flag" ||
|
||||
test ! -s conftest.err
|
||||
}; then
|
||||
ac_header_preproc=yes
|
||||
else
|
||||
echo "$as_me: failed program was:" >&5
|
||||
sed 's/^/| /' conftest.$ac_ext >&5
|
||||
|
||||
ac_header_preproc=no
|
||||
fi
|
||||
|
||||
rm -f conftest.err conftest.$ac_ext
|
||||
{ echo "$as_me:$LINENO: result: $ac_header_preproc" >&5
|
||||
echo "${ECHO_T}$ac_header_preproc" >&6; }
|
||||
|
||||
# So? What about this header?
|
||||
case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in
|
||||
yes:no: )
|
||||
{ echo "$as_me:$LINENO: WARNING: ffi.h: accepted by the compiler, rejected by the preprocessor!" >&5
|
||||
echo "$as_me: WARNING: ffi.h: accepted by the compiler, rejected by the preprocessor!" >&2;}
|
||||
{ echo "$as_me:$LINENO: WARNING: ffi.h: proceeding with the compiler's result" >&5
|
||||
echo "$as_me: WARNING: ffi.h: proceeding with the compiler's result" >&2;}
|
||||
ac_header_preproc=yes
|
||||
;;
|
||||
no:yes:* )
|
||||
{ echo "$as_me:$LINENO: WARNING: ffi.h: present but cannot be compiled" >&5
|
||||
echo "$as_me: WARNING: ffi.h: present but cannot be compiled" >&2;}
|
||||
{ echo "$as_me:$LINENO: WARNING: ffi.h: check for missing prerequisite headers?" >&5
|
||||
echo "$as_me: WARNING: ffi.h: check for missing prerequisite headers?" >&2;}
|
||||
{ echo "$as_me:$LINENO: WARNING: ffi.h: see the Autoconf documentation" >&5
|
||||
echo "$as_me: WARNING: ffi.h: see the Autoconf documentation" >&2;}
|
||||
{ echo "$as_me:$LINENO: WARNING: ffi.h: section \"Present But Cannot Be Compiled\"" >&5
|
||||
echo "$as_me: WARNING: ffi.h: section \"Present But Cannot Be Compiled\"" >&2;}
|
||||
{ echo "$as_me:$LINENO: WARNING: ffi.h: proceeding with the preprocessor's result" >&5
|
||||
echo "$as_me: WARNING: ffi.h: proceeding with the preprocessor's result" >&2;}
|
||||
{ echo "$as_me:$LINENO: WARNING: ffi.h: in the future, the compiler will take precedence" >&5
|
||||
echo "$as_me: WARNING: ffi.h: in the future, the compiler will take precedence" >&2;}
|
||||
( cat <<\_ASBOX
|
||||
## ------------------------------------------------ ##
|
||||
## Report this to http://www.python.org/python-bugs ##
|
||||
## ------------------------------------------------ ##
|
||||
_ASBOX
|
||||
) | sed "s/^/$as_me: WARNING: /" >&2
|
||||
;;
|
||||
esac
|
||||
{ echo "$as_me:$LINENO: checking for ffi.h" >&5
|
||||
echo $ECHO_N "checking for ffi.h... $ECHO_C" >&6; }
|
||||
if test "${ac_cv_header_ffi_h+set}" = set; then
|
||||
echo $ECHO_N "(cached) $ECHO_C" >&6
|
||||
else
|
||||
ac_cv_header_ffi_h=$ac_header_preproc
|
||||
fi
|
||||
{ echo "$as_me:$LINENO: result: $ac_cv_header_ffi_h" >&5
|
||||
echo "${ECHO_T}$ac_cv_header_ffi_h" >&6; }
|
||||
|
||||
fi
|
||||
|
||||
|
||||
{ echo "$as_me:$LINENO: checking for --with-system-ffi" >&5
|
||||
echo $ECHO_N "checking for --with-system-ffi... $ECHO_C" >&6; }
|
||||
|
||||
|
@ -13181,8 +13313,11 @@ if test "${with_system_ffi+set}" = set; then
|
|||
fi
|
||||
|
||||
|
||||
if test -z "$with_system_ffi"
|
||||
then with_system_ffi="no"
|
||||
if test -z "$with_system_ffi" && test "$ac_cv_header_ffi_h" = yes; then
|
||||
case "$ac_sys_system/`uname -m`" in
|
||||
Linux/arm*) with_system_ffi="yes"; CONFIG_ARGS="$CONFIG_ARGS --with-system-ffi";;
|
||||
*) with_system_ffi="no"
|
||||
esac
|
||||
fi
|
||||
{ echo "$as_me:$LINENO: result: $with_system_ffi" >&5
|
||||
echo "${ECHO_T}$with_system_ffi" >&6; }
|
||||
|
|
|
@ -1724,12 +1724,16 @@ LIBS="$withval $LIBS"
|
|||
[AC_MSG_RESULT(no)])
|
||||
|
||||
# Check for use of the system libffi library
|
||||
AC_CHECK_HEADER(ffi.h)
|
||||
AC_MSG_CHECKING(for --with-system-ffi)
|
||||
AC_ARG_WITH(system_ffi,
|
||||
AC_HELP_STRING(--with-system-ffi, build _ctypes module using an installed ffi library))
|
||||
|
||||
if test -z "$with_system_ffi"
|
||||
then with_system_ffi="no"
|
||||
if test -z "$with_system_ffi" && test "$ac_cv_header_ffi_h" = yes; then
|
||||
case "$ac_sys_system/`uname -m`" in
|
||||
Linux/arm*) with_system_ffi="yes"; CONFIG_ARGS="$CONFIG_ARGS --with-system-ffi";;
|
||||
*) with_system_ffi="no"
|
||||
esac
|
||||
fi
|
||||
AC_MSG_RESULT($with_system_ffi)
|
||||
|
||||
|
|
setup.py
|
@@ -773,6 +773,7 @@ class PyBuildExt(build_ext):
# some unusual system configurations (e.g. the directory
# is on an NFS server that goes away).
exts.append(Extension('_bsddb', ['_bsddb.c'],
depends = ['bsddb.h'],
library_dirs=dblib_dir,
runtime_library_dirs=dblib_dir,
include_dirs=db_incs,

@@ -1091,7 +1092,7 @@ class PyBuildExt(build_ext):

# Platform-specific libraries
if platform in ('linux2', 'freebsd4', 'freebsd5', 'freebsd6',
'freebsd7'):
'freebsd7', 'freebsd8'):
exts.append( Extension('ossaudiodev', ['ossaudiodev.c']) )
else:
missing.append('ossaudiodev')
|
||||
|
|