Merged revisions 53304-53433,53435-53450 via svnmerge from

svn+ssh://pythondev@svn.python.org/python/trunk

........
  r53304 | vinay.sajip | 2007-01-09 15:50:28 +0100 (Tue, 09 Jan 2007) | 1 line

  Bug #1627575: Added _open() method to FileHandler which can be used to reopen files. The FileHandler instance now saves the encoding (which can be None) in an attribute called "encoding".
........
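  As a rough sketch (not part of this change set), a FileHandler subclass can use the
  new _open() helper together with the saved "encoding" attribute to reopen its log
  file on demand; the subclass and method names below are illustrative only:

      import logging

      class ReopeningFileHandler(logging.FileHandler):
          def reopen(self):
              # close the current stream and reopen baseFilename with the
              # original mode and encoding via the new _open() helper
              self.stream.close()
              self.stream = self._open()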
  r53305 | vinay.sajip | 2007-01-09 15:51:36 +0100 (Tue, 09 Jan 2007) | 1 line

  Added entry about addition of _open() method to logging.FileHandler.
........
  r53306 | vinay.sajip | 2007-01-09 15:54:56 +0100 (Tue, 09 Jan 2007) | 1 line

  Added a docstring
........
  r53316 | thomas.heller | 2007-01-09 20:19:33 +0100 (Tue, 09 Jan 2007) | 4 lines

  Verify the sizes of the basic ctypes data types against the struct
  module.

  Will backport to release25-maint.
........
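  The idea behind the check, sketched here against the public APIs of both modules,
  is simply to compare ctypes' sizeof() with struct.calcsize() for matching type codes:

      from ctypes import sizeof, c_int, c_long, c_double
      from struct import calcsize

      # the simple ctypes types reuse the struct format characters
      for ctype, fmt in ((c_int, "i"), (c_long, "l"), (c_double, "d")):
          assert sizeof(ctype) == calcsize(fmt), (ctype, fmt)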
  r53340 | gustavo.niemeyer | 2007-01-10 17:13:40 +0100 (Wed, 10 Jan 2007) | 3 lines

  Mention in the int() docstring that a base of zero has a special meaning, as
  stated in http://docs.python.org/lib/built-in-funcs.html as well.
........
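  With a base of zero the radix is taken from the literal's prefix, as for Python
  source code integers:

      int("0x1f", 0)   # 31
      int("010", 0)    # 8 (a leading zero means octal on Python 2)
      int("42", 0)     # 42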
  r53341 | gustavo.niemeyer | 2007-01-10 17:15:48 +0100 (Wed, 10 Jan 2007) | 2 lines

  Minor change in int() docstring for proper spacing.
........
  r53358 | thomas.heller | 2007-01-10 21:12:13 +0100 (Wed, 10 Jan 2007) | 1 line

  Change the ctypes version number to "1.1.0".
........
  r53361 | thomas.heller | 2007-01-10 21:51:19 +0100 (Wed, 10 Jan 2007) | 1 line

  Must change the version number in the _ctypes extension as well.
........
  r53362 | guido.van.rossum | 2007-01-11 00:12:56 +0100 (Thu, 11 Jan 2007) | 3 lines

  Fix the signature of log_error().  (A subclass that did the right thing
  was getting complaints from pychecker.)
........
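  With the corrected signature an override lines up with log_message(); a minimal
  sketch (the handler class name is illustrative):

      import BaseHTTPServer

      class Handler(BaseHTTPServer.BaseHTTPRequestHandler):
          def log_error(self, format, *args):
              # route errors through log_message with an extra prefix
              self.log_message("error: " + format, *args)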
  r53370 | matthias.klose | 2007-01-11 11:26:31 +0100 (Thu, 11 Jan 2007) | 2 lines

  - Make the documentation match the code and the docstring
........
  r53375 | matthias.klose | 2007-01-11 12:44:04 +0100 (Thu, 11 Jan 2007) | 2 lines

  - idle: Honor the "Cancel" action in the save dialog (Debian bug #299092).
........
  r53381 | raymond.hettinger | 2007-01-11 19:22:55 +0100 (Thu, 11 Jan 2007) | 1 line

  SF #1486663 -- Allow keyword args in subclasses of set() and frozenset().
........
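  A minimal illustration of the fixed behaviour (the subclass is hypothetical; before
  the fix the keyword argument raised a TypeError):

      class TaggedSet(set):
          def __init__(self, iterable=(), tag=None):
              set.__init__(self, iterable)
              self.tag = tag

      s = TaggedSet([1, 2, 3], tag="demo")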
  r53388 | thomas.heller | 2007-01-11 22:18:56 +0100 (Thu, 11 Jan 2007) | 4 lines

  Fixes for 64-bit Windows: In ctypes.wintypes, correct the definitions
  of HANDLE, WPARAM, LPARAM data types.  Make parameterless foreign
  function calls work.
........
  r53390 | thomas.heller | 2007-01-11 22:23:12 +0100 (Thu, 11 Jan 2007) | 2 lines

  Correct the comments: the code is right.
........
  r53393 | brett.cannon | 2007-01-12 08:27:52 +0100 (Fri, 12 Jan 2007) | 3 lines

  Fix error where the end of a funcdesc environment was accidentally moved too
  far down.
........
  r53397 | anthony.baxter | 2007-01-12 10:35:56 +0100 (Fri, 12 Jan 2007) | 3 lines

  add parsetok.h as a dependency - previously, changing this file didn't
  cause the right files to be rebuilt.
........
  r53401 | thomas.heller | 2007-01-12 21:08:19 +0100 (Fri, 12 Jan 2007) | 3 lines

  Avoid warnings in the test suite because ctypes.wintypes cannot be
  imported on non-windows systems.
........
  r53402 | thomas.heller | 2007-01-12 21:17:34 +0100 (Fri, 12 Jan 2007) | 6 lines

  patch #1610795: BSD version of ctypes.util.find_library, by Martin
  Kammerhofer.

  release25-maint backport candidate, but the release manager has to
  decide.
........
  r53403 | thomas.heller | 2007-01-12 21:21:53 +0100 (Fri, 12 Jan 2007) | 3 lines

  patch #1610795: BSD version of ctypes.util.find_library, by Martin
  Kammerhofer.
........
  r53406 | brett.cannon | 2007-01-13 01:29:49 +0100 (Sat, 13 Jan 2007) | 2 lines

  Deprecate the sets module.
........
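  Code using the deprecated module can switch to the built-in types, roughly:

      # sets.Set(...)          ->  set(...)
      # sets.ImmutableSet(...) ->  frozenset(...)
      mutable = set(["a", "b"])
      frozen = frozenset(mutable)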
  r53407 | georg.brandl | 2007-01-13 13:31:51 +0100 (Sat, 13 Jan 2007) | 3 lines

  Fix typo.
........
  r53409 | marc-andre.lemburg | 2007-01-13 22:00:08 +0100 (Sat, 13 Jan 2007) | 16 lines

  Bump version number and change copyright year.

  Add new API linux_distribution() which supports reading the full distribution
  name and also knows how to parse LSB-style release files.

  Redirect the old dist() API to the new API (using the short distribution name
  taken from the release file filename).

  Add branch and revision to _sys_version().

  Add work-around for Cygwin to libc_ver().

  Add support for IronPython (thanks to Anthony Baxter) and make
  Jython support more robust.
........
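  A usage sketch of the new and redirected APIs (values depend on the host system;
  on non-Linux platforms the given defaults are returned):

      import platform

      name, version, ident = platform.linux_distribution()
      # dist() now delegates to linux_distribution() and keeps the short name
      short, version, ident = platform.dist()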
  r53410 | neal.norwitz | 2007-01-13 22:22:37 +0100 (Sat, 13 Jan 2007) | 1 line

  Fix grammar in docstrings
........
  r53411 | marc-andre.lemburg | 2007-01-13 23:32:21 +0100 (Sat, 13 Jan 2007) | 9 lines

  Add parameter sys_version to _sys_version().

  Change the cache for _sys_version() to take the parameter into account.

  Add support for parsing the IronPython 1.0.1 sys.version value - even
  though it still returns '1.0.0'; the version string no longer includes
  the patch level.
........
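  The parsed fields end up behind platform's accessor functions; assuming the
  platform module from this change set is in use:

      import platform

      platform.python_implementation()   # 'CPython', 'Jython' or 'IronPython'
      platform.python_version()          # e.g. '2.6.0', patchlevel included
      platform.python_build()            # (buildno, builddate)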
  r53412 | peter.astrand | 2007-01-13 23:35:35 +0100 (Sat, 13 Jan 2007) | 1 line

  Fix for bug #1634343: allow specifying empty arguments on Windows
........
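  The fix is visible through the helper subprocess uses to build the Windows command
  line, where an empty argument must survive as an explicit pair of quotes:

      import subprocess

      assert subprocess.list2cmdline(["ab", ""]) == 'ab ""'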
  r53414 | marc-andre.lemburg | 2007-01-13 23:59:36 +0100 (Sat, 13 Jan 2007) | 14 lines

  Add Python implementation to the machine details.

  Pretty-print the Python version used for running PyBench.

  Let the user know when calibration has finished.

  [ 1563844 ] pybench support for IronPython:

  Simplify Unicode version detection.

  Make garbage collection and check interval settings optional if
  the Python implementation doesn't support these (e.g. IronPython).
........
  r53415 | marc-andre.lemburg | 2007-01-14 00:13:54 +0100 (Sun, 14 Jan 2007) | 5 lines

  Use defaults if sys.executable isn't set (e.g. on Jython).

  This change allows running PyBench under Jython.
........
  r53416 | marc-andre.lemburg | 2007-01-14 00:15:33 +0100 (Sun, 14 Jan 2007) | 3 lines

  Jython doesn't have sys.setcheckinterval() - ignore it in that case.
........
  r53420 | gerhard.haering | 2007-01-14 02:43:50 +0100 (Sun, 14 Jan 2007) | 29 lines

  Merged changes from standalone version 2.3.3. This should probably all be
  merged into the 2.5 maintenance branch:

  - self->statement was not checked while fetching data, which could
    lead to crashes if you used the pysqlite API in unusual ways.
    Closing the cursor and continuing to fetch data was enough to trigger a crash.

  - Converters are stored in a converters dictionary. The converter name
    is uppercased first. The old upper-casing algorithm was wrong and
    was replaced by a simple call to the Python string's upper() method
    instead.

  - Applied patch by Glyph Lefkowitz that fixes the problem with
    subsequent SQLITE_SCHEMA errors.

  - Improvement to the row type: rows can now be iterated over and have a keys()
    method. This improves compatibility with both tuple and dict a lot.

  - A bugfix for the subsecond resolution in timestamps.

  - Corrected the way the flags PARSE_DECLTYPES and PARSE_COLNAMES are
    checked for. Now they work as documented.

  - gcc on Linux sucks. It exports all symbols by default in shared
    libraries, so if symbols are not unique it can lead to problems with
    symbol lookup.  pysqlite used to crash under Apache when mod_cache
    was enabled because both modules had the symbol cache_init. I fixed
    this by applying the prefix pysqlite_ almost everywhere. Sigh.
........
  r53423 | guido.van.rossum | 2007-01-14 04:46:33 +0100 (Sun, 14 Jan 2007) | 2 lines

  Remove a dependency of this test on $COLUMNS.
........
  r53425 | ka-ping.yee | 2007-01-14 05:25:15 +0100 (Sun, 14 Jan 2007) | 3 lines

  Handle old-style instances more gracefully (display documentation on
  the relevant class instead of documentation on <type 'instance'>).
........
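  A small sketch of the improved behaviour on Python 2 (the class name is illustrative):

      import pydoc

      class Legacy:              # old-style class
          def ping(self):
              "Do nothing."

      # help()/pydoc.doc() now documents Legacy itself instead of the
      # generic <type 'instance'>
      pydoc.doc(Legacy())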
  r53440 | vinay.sajip | 2007-01-14 22:49:59 +0100 (Sun, 14 Jan 2007) | 1 line

  Added WatchedFileHandler (based on SF patch #1598415)
........
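  A usage sketch (the log path is illustrative); the handler reopens the file if
  logrotate or newsyslog moves it aside:

      import logging
      from logging.handlers import WatchedFileHandler

      handler = WatchedFileHandler("/var/log/app.log")
      handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
      logging.getLogger("app").addHandler(handler)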
  r53441 | vinay.sajip | 2007-01-14 22:50:50 +0100 (Sun, 14 Jan 2007) | 1 line

  Added documentation for WatchedFileHandler (based on SF patch #1598415)
........
  r53442 | guido.van.rossum | 2007-01-15 01:02:35 +0100 (Mon, 15 Jan 2007) | 2 lines

  Doc patch matching r53434 (htonl etc. now always take/return positive ints).
........
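  The byte-order helpers now consistently take and return positive integers:

      import socket

      n = socket.htonl(1)            # 16777216 on little-endian hosts, 1 on big-endian
      assert socket.ntohl(n) == 1
      assert socket.ntohs(socket.htons(0x1234)) == 0x1234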
Thomas Wouters 2007-01-15 15:49:28 +00:00
parent c5c6f24aaf
commit fc7bb8c786
48 changed files with 1199 additions and 667 deletions

View File

@@ -989,10 +989,11 @@ The \class{FileHandler} class, located in the core \module{logging}
package, sends logging output to a disk file. It inherits the output
functionality from \class{StreamHandler}.
-\begin{classdesc}{FileHandler}{filename\optional{, mode}}
+\begin{classdesc}{FileHandler}{filename\optional{, mode\optional{, encoding}}}
Returns a new instance of the \class{FileHandler} class. The specified
file is opened and used as the stream for logging. If \var{mode} is
-not specified, \constant{'a'} is used. By default, the file grows
+not specified, \constant{'a'} is used. If \var{encoding} is not \var{None},
+it is used to open the file with that encoding. By default, the file grows
indefinitely.
\end{classdesc}
@@ -1004,6 +1005,41 @@ Closes the file.
Outputs the record to the file.
\end{methoddesc}
\subsubsection{WatchedFileHandler}
\versionadded{2.6}
The \class{WatchedFileHandler} class, located in the \module{logging.handlers}
module, is a \class{FileHandler} which watches the file it is logging to.
If the file changes, it is closed and reopened using the file name.
A file change can happen because of usage of programs such as \var{newsyslog}
and \var{logrotate} which perform log file rotation. This handler, intended
for use under Unix/Linux, watches the file to see if it has changed since the
last emit. (A file is deemed to have changed if its device or inode have
changed.) If the file has changed, the old file stream is closed, and the file
opened to get a new stream.
This handler is not appropriate for use under Windows, because under Windows
open log files cannot be moved or renamed - logging opens the files with
exclusive locks - and so there is no need for such a handler. Furthermore,
\var{ST_INO} is not supported under Windows; \function{stat()} always returns
zero for this value.
\begin{classdesc}{WatchedFileHandler}{filename\optional{,mode\optional{,
encoding}}}
Returns a new instance of the \class{WatchedFileHandler} class. The specified
file is opened and used as the stream for logging. If \var{mode} is
not specified, \constant{'a'} is used. If \var{encoding} is not \var{None},
it is used to open the file with that encoding. By default, the file grows
indefinitely.
\end{classdesc}
\begin{methoddesc}{emit}{record}
Outputs the record to the file, but first checks to see if the file has
changed. If it has, the existing stream is flushed and closed and the file
opened again, before outputting the record to the file.
\end{methoddesc}
\subsubsection{RotatingFileHandler}
The \class{RotatingFileHandler} class, located in the \module{logging.handlers}

View File

@@ -185,7 +185,7 @@ or may raise the following exceptions:
The server didn't reply properly to the \samp{HELO} greeting.
\item[\exception{SMTPAuthenticationError}]
The server didn't accept the username/password combination.
-\item[\exception{SMTPError}]
+\item[\exception{SMTPException}]
No suitable authentication method was found.
\end{description}
\end{methoddesc}

View File

@@ -331,25 +331,25 @@ Availability: \UNIX.
\end{funcdesc}
\begin{funcdesc}{ntohl}{x}
-Convert 32-bit integers from network to host byte order. On machines
+Convert 32-bit positive integers from network to host byte order. On machines
where the host byte order is the same as network byte order, this is a
no-op; otherwise, it performs a 4-byte swap operation.
\end{funcdesc}
\begin{funcdesc}{ntohs}{x}
-Convert 16-bit integers from network to host byte order. On machines
+Convert 16-bit positive integers from network to host byte order. On machines
where the host byte order is the same as network byte order, this is a
no-op; otherwise, it performs a 2-byte swap operation.
\end{funcdesc}
\begin{funcdesc}{htonl}{x}
-Convert 32-bit integers from host to network byte order. On machines
+Convert 32-bit positive integers from host to network byte order. On machines
where the host byte order is the same as network byte order, this is a
no-op; otherwise, it performs a 4-byte swap operation.
\end{funcdesc}
\begin{funcdesc}{htons}{x}
-Convert 16-bit integers from host to network byte order. On machines
+Convert 16-bit positive integers from host to network byte order. On machines
where the host byte order is the same as network byte order, this is a
no-op; otherwise, it performs a 2-byte swap operation.
\end{funcdesc}

View File

@@ -281,6 +281,7 @@ Execute the \class{unittest.TestSuite} instance \var{suite}.
The optional argument \var{testclass} accepts one of the test classes in the
suite so as to print out more detailed information on where the testing suite
originated from.
+\end{funcdesc}
The \module{test.test_support} module defines the following classes:
@@ -299,4 +300,3 @@ Temporarily set the environment variable \code{envvar} to the value of
Temporarily unset the environment variable \code{envvar}.
\end{methoddesc}
-\end{funcdesc}

View File

@@ -396,7 +396,7 @@ class BaseHTTPRequestHandler(SocketServer.StreamRequestHandler):
        self.log_message('"%s" %s %s',
                         self.requestline, str(code), str(size))
-    def log_error(self, *args):
+    def log_error(self, format, *args):
        """Log an error.
        This is called when a request cannot be fulfilled. By
@@ -408,7 +408,7 @@ class BaseHTTPRequestHandler(SocketServer.StreamRequestHandler):
        """
-        self.log_message(*args)
+        self.log_message(format, *args)
    def log_message(self, format, *args):
        """Log an arbitrary message.

View File

@@ -5,7 +5,7 @@
import os as _os, sys as _sys
-__version__ = "1.0.1"
+__version__ = "1.1.0"
from _ctypes import Union, Structure, Array
from _ctypes import _Pointer
@@ -133,6 +133,18 @@ elif _os.name == "posix":
from _ctypes import sizeof, byref, addressof, alignment, resize
from _ctypes import _SimpleCData
def _check_size(typ, typecode=None):
# Check if sizeof(ctypes_type) against struct.calcsize. This
# should protect somewhat against a misconfigured libffi.
from struct import calcsize
if typecode is None:
# Most _type_ codes are the same as used in struct
typecode = typ._type_
actual, required = sizeof(typ), calcsize(typecode)
if actual != required:
raise SystemError("sizeof(%s) wrong: %d instead of %d" % \
(typ, actual, required))
class py_object(_SimpleCData): class py_object(_SimpleCData):
_type_ = "O" _type_ = "O"
def __repr__(self): def __repr__(self):
@ -140,18 +152,23 @@ class py_object(_SimpleCData):
return super(py_object, self).__repr__() return super(py_object, self).__repr__()
except ValueError: except ValueError:
return "%s(<NULL>)" % type(self).__name__ return "%s(<NULL>)" % type(self).__name__
_check_size(py_object, "P")
class c_short(_SimpleCData): class c_short(_SimpleCData):
_type_ = "h" _type_ = "h"
_check_size(c_short)
class c_ushort(_SimpleCData): class c_ushort(_SimpleCData):
_type_ = "H" _type_ = "H"
_check_size(c_ushort)
class c_long(_SimpleCData): class c_long(_SimpleCData):
_type_ = "l" _type_ = "l"
_check_size(c_long)
class c_ulong(_SimpleCData): class c_ulong(_SimpleCData):
_type_ = "L" _type_ = "L"
_check_size(c_ulong)
if _calcsize("i") == _calcsize("l"): if _calcsize("i") == _calcsize("l"):
# if int and long have the same size, make c_int an alias for c_long # if int and long have the same size, make c_int an alias for c_long
@ -160,15 +177,19 @@ if _calcsize("i") == _calcsize("l"):
else: else:
class c_int(_SimpleCData): class c_int(_SimpleCData):
_type_ = "i" _type_ = "i"
_check_size(c_int)
class c_uint(_SimpleCData): class c_uint(_SimpleCData):
_type_ = "I" _type_ = "I"
_check_size(c_uint)
class c_float(_SimpleCData): class c_float(_SimpleCData):
_type_ = "f" _type_ = "f"
_check_size(c_float)
class c_double(_SimpleCData): class c_double(_SimpleCData):
_type_ = "d" _type_ = "d"
_check_size(c_double)
if _calcsize("l") == _calcsize("q"): if _calcsize("l") == _calcsize("q"):
# if long and long long have the same size, make c_longlong an alias for c_long # if long and long long have the same size, make c_longlong an alias for c_long
@ -177,33 +198,40 @@ if _calcsize("l") == _calcsize("q"):
else: else:
class c_longlong(_SimpleCData): class c_longlong(_SimpleCData):
_type_ = "q" _type_ = "q"
_check_size(c_longlong)
class c_ulonglong(_SimpleCData): class c_ulonglong(_SimpleCData):
_type_ = "Q" _type_ = "Q"
## def from_param(cls, val): ## def from_param(cls, val):
## return ('d', float(val), val) ## return ('d', float(val), val)
## from_param = classmethod(from_param) ## from_param = classmethod(from_param)
_check_size(c_ulonglong)
class c_ubyte(_SimpleCData): class c_ubyte(_SimpleCData):
_type_ = "B" _type_ = "B"
c_ubyte.__ctype_le__ = c_ubyte.__ctype_be__ = c_ubyte c_ubyte.__ctype_le__ = c_ubyte.__ctype_be__ = c_ubyte
# backward compatibility: # backward compatibility:
##c_uchar = c_ubyte ##c_uchar = c_ubyte
_check_size(c_ubyte)
class c_byte(_SimpleCData): class c_byte(_SimpleCData):
_type_ = "b" _type_ = "b"
c_byte.__ctype_le__ = c_byte.__ctype_be__ = c_byte c_byte.__ctype_le__ = c_byte.__ctype_be__ = c_byte
_check_size(c_byte)
class c_char(_SimpleCData): class c_char(_SimpleCData):
_type_ = "c" _type_ = "c"
c_char.__ctype_le__ = c_char.__ctype_be__ = c_char c_char.__ctype_le__ = c_char.__ctype_be__ = c_char
_check_size(c_char)
class c_char_p(_SimpleCData): class c_char_p(_SimpleCData):
_type_ = "z" _type_ = "z"
_check_size(c_char_p, "P")
class c_void_p(_SimpleCData): class c_void_p(_SimpleCData):
_type_ = "P" _type_ = "P"
c_voidp = c_void_p # backwards compatibility (to a bug) c_voidp = c_void_p # backwards compatibility (to a bug)
_check_size(c_void_p)
# This cache maps types to pointers to them. # This cache maps types to pointers to them.
_pointer_type_cache = {} _pointer_type_cache = {}

View File

@ -32,12 +32,32 @@ if sys.platform == "win32" and sizeof(c_void_p) == sizeof(c_int):
# or wrong calling convention # or wrong calling convention
self.assertRaises(ValueError, IsWindow, None) self.assertRaises(ValueError, IsWindow, None)
if sys.platform == "win32":
class FunctionCallTestCase(unittest.TestCase):
if is_resource_enabled("SEH"): if is_resource_enabled("SEH"):
def test_SEH(self): def test_SEH(self):
# Call functions with invalid arguments, and make sure that access violations # Call functions with invalid arguments, and make sure
# are trapped and raise an exception. # that access violations are trapped and raise an
# exception.
self.assertRaises(WindowsError, windll.kernel32.GetModuleHandleA, 32) self.assertRaises(WindowsError, windll.kernel32.GetModuleHandleA, 32)
def test_noargs(self):
# This is a special case on win32 x64
windll.user32.GetDesktopWindow()
class TestWintypes(unittest.TestCase):
def test_HWND(self):
from ctypes import wintypes
self.failUnlessEqual(sizeof(wintypes.HWND), sizeof(c_void_p))
def test_PARAM(self):
from ctypes import wintypes
self.failUnlessEqual(sizeof(wintypes.WPARAM),
sizeof(c_void_p))
self.failUnlessEqual(sizeof(wintypes.LPARAM),
sizeof(c_void_p))
class Structures(unittest.TestCase): class Structures(unittest.TestCase):
def test_struct_by_value(self): def test_struct_by_value(self):

View File

@ -46,23 +46,16 @@ elif os.name == "posix":
import re, tempfile, errno import re, tempfile, errno
def _findLib_gcc(name): def _findLib_gcc(name):
expr = '[^\(\)\s]*lib%s\.[^\(\)\s]*' % name expr = r'[^\(\)\s]*lib%s\.[^\(\)\s]*' % re.escape(name)
fdout, ccout = tempfile.mkstemp() fdout, ccout = tempfile.mkstemp()
os.close(fdout) os.close(fdout)
cmd = 'if type gcc &>/dev/null; then CC=gcc; else CC=cc; fi;' \ cmd = 'if type gcc >/dev/null 2>&1; then CC=gcc; else CC=cc; fi;' \
'$CC -Wl,-t -o ' + ccout + ' 2>&1 -l' + name '$CC -Wl,-t -o ' + ccout + ' 2>&1 -l' + name
try: try:
fdout, outfile = tempfile.mkstemp() f = os.popen(cmd)
os.close(fdout) trace = f.read()
fd = os.popen(cmd) f.close()
trace = fd.read()
err = fd.close()
finally: finally:
try:
os.unlink(outfile)
except OSError as e:
if e.errno != errno.ENOENT:
raise
try: try:
os.unlink(ccout) os.unlink(ccout)
except OSError as e: except OSError as e:
@ -73,29 +66,58 @@ elif os.name == "posix":
return None return None
return res.group(0) return res.group(0)
def _findLib_ld(name):
expr = '/[^\(\)\s]*lib%s\.[^\(\)\s]*' % name
res = re.search(expr, os.popen('/sbin/ldconfig -p 2>/dev/null').read())
if not res:
# Hm, this works only for libs needed by the python executable.
cmd = 'ldd %s 2>/dev/null' % sys.executable
res = re.search(expr, os.popen(cmd).read())
if not res:
return None
return res.group(0)
def _get_soname(f): def _get_soname(f):
# assuming GNU binutils / ELF
if not f:
return None
cmd = "objdump -p -j .dynamic 2>/dev/null " + f cmd = "objdump -p -j .dynamic 2>/dev/null " + f
res = re.search(r'\sSONAME\s+([^\s]+)', os.popen(cmd).read()) res = re.search(r'\sSONAME\s+([^\s]+)', os.popen(cmd).read())
if not res: if not res:
return None return None
return res.group(1) return res.group(1)
def find_library(name): if (sys.platform.startswith("freebsd")
lib = _findLib_ld(name) or _findLib_gcc(name) or sys.platform.startswith("openbsd")
if not lib: or sys.platform.startswith("dragonfly")):
return None
return _get_soname(lib) def _num_version(libname):
# "libxyz.so.MAJOR.MINOR" => [ MAJOR, MINOR ]
parts = libname.split(".")
nums = []
try:
while parts:
nums.insert(0, int(parts.pop()))
except ValueError:
pass
return nums or [ sys.maxint ]
def find_library(name):
ename = re.escape(name)
expr = r':-l%s\.\S+ => \S*/(lib%s\.\S+)' % (ename, ename)
res = re.findall(expr,
os.popen('/sbin/ldconfig -r 2>/dev/null').read())
if not res:
return _get_soname(_findLib_gcc(name))
res.sort(cmp= lambda x,y: cmp(_num_version(x), _num_version(y)))
return res[-1]
else:
def _findLib_ldconfig(name):
# XXX assuming GLIBC's ldconfig (with option -p)
expr = r'/[^\(\)\s]*lib%s\.[^\(\)\s]*' % re.escape(name)
res = re.search(expr,
os.popen('/sbin/ldconfig -p 2>/dev/null').read())
if not res:
# Hm, this works only for libs needed by the python executable.
cmd = 'ldd %s 2>/dev/null' % sys.executable
res = re.search(expr, os.popen(cmd).read())
if not res:
return None
return res.group(0)
def find_library(name):
return _get_soname(_findLib_ldconfig(name) or _findLib_gcc(name))
################################################################ ################################################################
# test code # test code

View File

@@ -34,8 +34,14 @@ LPCOLESTR = LPOLESTR = OLESTR = c_wchar_p
LPCWSTR = LPWSTR = c_wchar_p
LPCSTR = LPSTR = c_char_p
-WPARAM = c_uint  # WPARAM is defined as UINT_PTR (unsigned type)
-LPARAM = c_long  # LPARAM is defined as LONG_PTR (signed type)
if sizeof(c_long) == sizeof(c_void_p):
WPARAM = c_ulong
LPARAM = c_long
elif sizeof(c_longlong) == sizeof(c_void_p):
WPARAM = c_ulonglong
LPARAM = c_longlong
ATOM = WORD
LANGID = WORD
@@ -48,7 +54,7 @@ LCID = DWORD
################################################################
# HANDLE types
-HANDLE = c_ulong  # in the header files: void *
+HANDLE = c_void_p # in the header files: void *
HACCEL = HANDLE
HBITMAP = HANDLE

View File

@@ -819,7 +819,7 @@ class EditorWindow(object):
    def close(self):
        reply = self.maybesave()
-        if reply != "cancel":
+        if str(reply) != "cancel":
            self._close()
        return reply

View File

@@ -41,8 +41,8 @@ except ImportError:
__author__ = "Vinay Sajip <vinay_sajip@red-dove.com>"
__status__ = "production"
-__version__ = "0.5.0.0"
+__version__ = "0.5.0.1"
-__date__    = "08 January 2007"
+__date__    = "09 January 2007"
#---------------------------------------------------------------------------
#   Miscellaneous module data
@@ -764,17 +764,15 @@ class FileHandler(StreamHandler):
        """
        Open the specified file and use it as the stream for logging.
        """
-        if codecs is None:
-            encoding = None
-        if encoding is None:
-            stream = open(filename, mode)
-        else:
-            stream = codecs.open(filename, mode, encoding)
-        StreamHandler.__init__(self, stream)
        #keep the absolute path, otherwise derived classes which use this
        #may come a cropper when the current directory changes
+        if codecs is None:
+            encoding = None
        self.baseFilename = os.path.abspath(filename)
        self.mode = mode
+        self.encoding = encoding
+        stream = self._open()
+        StreamHandler.__init__(self, stream)
    def close(self):
        """
@@ -784,6 +782,17 @@ class FileHandler(StreamHandler):
        self.stream.close()
        StreamHandler.close(self)
def _open(self):
"""
Open the current base file with the (original) mode and encoding.
Return the resulting stream.
"""
if self.encoding is None:
stream = open(self.baseFilename, self.mode)
else:
stream = codecs.open(self.baseFilename, self.mode, self.encoding)
return stream
#---------------------------------------------------------------------------
#   Manager classes and functions
#---------------------------------------------------------------------------

View File

@@ -32,6 +32,7 @@ try:
    import cPickle as pickle
except ImportError:
    import pickle
+from stat import ST_DEV, ST_INO
try:
    import codecs
@@ -286,6 +287,54 @@ class TimedRotatingFileHandler(BaseRotatingHandler):
        self.stream = open(self.baseFilename, 'w')
        self.rolloverAt = self.rolloverAt + self.interval
class WatchedFileHandler(logging.FileHandler):
"""
A handler for logging to a file, which watches the file
to see if it has changed while in use. This can happen because of
usage of programs such as newsyslog and logrotate which perform
log file rotation. This handler, intended for use under Unix,
watches the file to see if it has changed since the last emit.
(A file has changed if its device or inode have changed.)
If it has changed, the old file stream is closed, and the file
opened to get a new stream.
This handler is not appropriate for use under Windows, because
under Windows open files cannot be moved or renamed - logging
opens the files with exclusive locks - and so there is no need
for such a handler. Furthermore, ST_INO is not supported under
Windows; stat always returns zero for this value.
This handler is based on a suggestion and patch by Chad J.
Schroeder.
"""
def __init__(self, filename, mode='a', encoding=None):
logging.FileHandler.__init__(self, filename, mode, encoding)
stat = os.stat(self.baseFilename)
self.dev, self.ino = stat[ST_DEV], stat[ST_INO]
def emit(self, record):
"""
Emit a record.
First check if the underlying file has changed, and if it
has, close the old stream and reopen the file to get the
current stream.
"""
if not os.path.exists(self.baseFilename):
stat = None
changed = 1
else:
stat = os.stat(self.baseFilename)
changed = (stat[ST_DEV] != self.dev) or (stat[ST_INO] != self.ino)
if changed:
self.stream.flush()
self.stream.close()
self.stream = self._open()
if stat is None:
stat = os.stat(self.baseFilename)
self.dev, self.ino = stat[ST_DEV], stat[ST_INO]
logging.FileHandler.emit(self, record)
class SocketHandler(logging.Handler):
    """
    A handler class which writes logging records, in pickle format, to

View File

@@ -28,12 +28,15 @@
#    Betancourt, Randall Hopper, Karl Putland, John Farrell, Greg
#    Andruk, Just van Rossum, Thomas Heller, Mark R. Levinson, Mark
#    Hammond, Bill Tutt, Hans Nowak, Uwe Zessin (OpenVMS support),
-#    Colin Kong, Trent Mick, Guido van Rossum
+#    Colin Kong, Trent Mick, Guido van Rossum, Anthony Baxter
#
#    History:
#
#    <see CVS and SVN checkin messages for history>
#
# 1.0.6 - added linux_distribution()
# 1.0.5 - fixed Java support to allow running the module on Jython
# 1.0.4 - added IronPython support
#    1.0.3 - added normalization of Windows system name
#    1.0.2 - added more Windows support
#    1.0.1 - reformatted to make doc.py happy
@@ -88,7 +91,7 @@
__copyright__ = """
    Copyright (c) 1999-2000, Marc-Andre Lemburg; mailto:mal@lemburg.com
-    Copyright (c) 2000-2003, eGenix.com Software GmbH; mailto:info@egenix.com
+    Copyright (c) 2000-2007, eGenix.com Software GmbH; mailto:info@egenix.com
    Permission to use, copy, modify, and distribute this software and its
    documentation for any purpose and without fee or royalty is hereby granted,
@@ -107,7 +110,7 @@ __copyright__ = """
"""
-__version__ = '1.0.4'
+__version__ = '1.0.6'
import sys,string,os,re
@@ -136,6 +139,11 @@ def libc_ver(executable=sys.executable,lib='',version='',
        The file is read and scanned in chunks of chunksize bytes.
    """
if hasattr(os.path, 'realpath'):
# Python 2.2 introduced os.path.realpath(); it is used
# here to work around problems with Cygwin not being
# able to open symlinks for reading
executable = os.path.realpath(executable)
    f = open(executable,'rb')
    binary = f.read(chunksize)
    pos = 0
@@ -218,14 +226,124 @@ def _dist_try_harder(distname,version,id):
    return distname,version,id
_release_filename = re.compile(r'(\w+)[-_](release|version)')
-_release_version = re.compile(r'([\d.]+)[^(]*(?:\((.+)\))?')
+_lsb_release_version = re.compile(r'(.+)'
' release '
'([\d.]+)'
'[^(]*(?:\((.+)\))?')
_release_version = re.compile(r'([^0-9]+)'
'(?: release )?'
'([\d.]+)'
'[^(]*(?:\((.+)\))?')
# Note:In supported_dists below we need 'fedora' before 'redhat' as in # See also http://www.novell.com/coolsolutions/feature/11251.html
# Fedora redhat-release is a link to fedora-release. # and http://linuxmafia.com/faq/Admin/release-files.html
# and http://data.linux-ntfs.org/rpm/whichrpm
# and http://www.die.net/doc/linux/man/man1/lsb_release.1.html
_supported_dists = ('SuSE', 'debian', 'fedora', 'redhat', 'centos',
'mandrake', 'rocks', 'slackware', 'yellowdog',
'gentoo', 'UnitedLinux')
def _parse_release_file(firstline):
# Parse the first line
m = _lsb_release_version.match(firstline)
if m is not None:
# LSB format: "distro release x.x (codename)"
return tuple(m.groups())
# Pre-LSB format: "distro x.x (codename)"
m = _release_version.match(firstline)
if m is not None:
return tuple(m.groups())
# Unkown format... take the first two words
l = string.split(string.strip(firstline))
if l:
version = l[0]
if len(l) > 1:
id = l[1]
else:
id = ''
return '', version, id
def _test_parse_release_file():
for input, output in (
# Examples of release file contents:
('SuSE Linux 9.3 (x86-64)', ('SuSE Linux ', '9.3', 'x86-64'))
('SUSE LINUX 10.1 (X86-64)', ('SUSE LINUX ', '10.1', 'X86-64'))
('SUSE LINUX 10.1 (i586)', ('SUSE LINUX ', '10.1', 'i586'))
('Fedora Core release 5 (Bordeaux)', ('Fedora Core', '5', 'Bordeaux'))
('Red Hat Linux release 8.0 (Psyche)', ('Red Hat Linux', '8.0', 'Psyche'))
('Red Hat Linux release 9 (Shrike)', ('Red Hat Linux', '9', 'Shrike'))
('Red Hat Enterprise Linux release 4 (Nahant)', ('Red Hat Enterprise Linux', '4', 'Nahant'))
('CentOS release 4', ('CentOS', '4', None))
('Rocks release 4.2.1 (Cydonia)', ('Rocks', '4.2.1', 'Cydonia'))
):
parsed = _parse_release_file(input)
if parsed != output:
print (input, parsed)
def linux_distribution(distname='', version='', id='',
supported_dists=_supported_dists,
full_distribution_name=1):
""" Tries to determine the name of the Linux OS distribution name.
The function first looks for a distribution release file in
/etc and then reverts to _dist_try_harder() in case no
suitable files are found.
supported_dists may be given to define the set of Linux
distributions to look for. It defaults to a list of currently
supported Linux distributions identified by their release file
name.
If full_distribution_name is true (default), the full
distribution read from the OS is returned. Otherwise the short
name taken from supported_dists is used.
Returns a tuple (distname,version,id) which default to the
args given as parameters.
"""
try:
etc = os.listdir('/etc')
except os.error:
# Probably not a Unix system
return distname,version,id
etc.sort()
for file in etc:
m = _release_filename.match(file)
if m is not None:
_distname,dummy = m.groups()
if _distname in supported_dists:
distname = _distname
break
else:
return _dist_try_harder(distname,version,id)
# Read the first line
f = open('/etc/'+file, 'r')
firstline = f.readline()
f.close()
_distname, _version, _id = _parse_release_file(firstline)
if _distname and full_distribution_name:
distname = _distname
if _version:
version = _version
if _id:
id = _id
return distname, version, id
# To maintain backwards compatibility:
def dist(distname='',version='',id='', def dist(distname='',version='',id='',
supported_dists=('SuSE', 'debian', 'fedora', 'redhat', 'mandrake')): supported_dists=_supported_dists):
""" Tries to determine the name of the Linux OS distribution name. """ Tries to determine the name of the Linux OS distribution name.
@ -237,38 +355,9 @@ def dist(distname='',version='',id='',
args given as parameters. args given as parameters.
""" """
try: return linux_distribution(distname, version, id,
etc = os.listdir('/etc') supported_dists=supported_dists,
except os.error: full_distribution_name=0)
# Probably not a Unix system
return distname,version,id
for file in etc:
m = _release_filename.match(file)
if m:
_distname,dummy = m.groups()
if _distname in supported_dists:
distname = _distname
break
else:
return _dist_try_harder(distname,version,id)
f = open('/etc/'+file,'r')
firstline = f.readline()
f.close()
m = _release_version.search(firstline)
if m:
_version,_id = m.groups()
if _version:
version = _version
if _id:
id = _id
else:
# Unkown format... take the first two words
l = string.split(string.strip(firstline))
if l:
version = l[0]
if len(l) > 1:
id = l[1]
return distname,version,id
class _popen: class _popen:
@ -357,7 +446,7 @@ def popen(cmd, mode='r', bufsize=None):
else: else:
return popen(cmd,mode,bufsize) return popen(cmd,mode,bufsize)
def _norm_version(version,build=''): def _norm_version(version, build=''):
""" Normalize the version and build strings and return a single """ Normalize the version and build strings and return a single
version string using the format major.minor.build (or patchlevel). version string using the format major.minor.build (or patchlevel).
@ -378,7 +467,7 @@ _ver_output = re.compile(r'(?:([\w ]+) ([\w.]+) '
'.*' '.*'
'Version ([\d.]+))') 'Version ([\d.]+))')
def _syscmd_ver(system='',release='',version='', def _syscmd_ver(system='', release='', version='',
supported_platforms=('win32','win16','dos','os2')): supported_platforms=('win32','win16','dos','os2')):
@ -418,7 +507,7 @@ def _syscmd_ver(system='',release='',version='',
# Parse the output # Parse the output
info = string.strip(info) info = string.strip(info)
m = _ver_output.match(info) m = _ver_output.match(info)
if m: if m is not None:
system,release,version = m.groups() system,release,version = m.groups()
# Strip trailing dots from version and release # Strip trailing dots from version and release
if release[-1] == '.': if release[-1] == '.':
@ -615,8 +704,11 @@ def _java_getprop(name,default):
from java.lang import System from java.lang import System
try: try:
return System.getProperty(name) value = System.getProperty(name)
except: if value is None:
return default
return value
except AttributeError:
return default return default
def java_ver(release='',vendor='',vminfo=('','',''),osinfo=('','','')): def java_ver(release='',vendor='',vminfo=('','',''),osinfo=('','','')):
@ -637,20 +729,20 @@ def java_ver(release='',vendor='',vminfo=('','',''),osinfo=('','','')):
except ImportError: except ImportError:
return release,vendor,vminfo,osinfo return release,vendor,vminfo,osinfo
vendor = _java_getprop('java.vendor',vendor) vendor = _java_getprop('java.vendor', vendor)
release = _java_getprop('java.version',release) release = _java_getprop('java.version', release)
vm_name,vm_release,vm_vendor = vminfo vm_name, vm_release, vm_vendor = vminfo
vm_name = _java_getprop('java.vm.name',vm_name) vm_name = _java_getprop('java.vm.name', vm_name)
vm_vendor = _java_getprop('java.vm.vendor',vm_vendor) vm_vendor = _java_getprop('java.vm.vendor', vm_vendor)
vm_release = _java_getprop('java.vm.version',vm_release) vm_release = _java_getprop('java.vm.version', vm_release)
vminfo = vm_name,vm_release,vm_vendor vminfo = vm_name, vm_release, vm_vendor
os_name,os_version,os_arch = osinfo os_name, os_version, os_arch = osinfo
os_arch = _java_getprop('java.os.arch',os_arch) os_arch = _java_getprop('java.os.arch', os_arch)
os_name = _java_getprop('java.os.name',os_name) os_name = _java_getprop('java.os.name', os_name)
os_version = _java_getprop('java.os.version',os_version) os_version = _java_getprop('java.os.version', os_version)
osinfo = os_name,os_version,os_arch osinfo = os_name, os_version, os_arch
return release,vendor,vminfo,osinfo return release, vendor, vminfo, osinfo
### System name aliasing ### System name aliasing
@ -716,7 +808,7 @@ def _platform(*args):
# Format the platform string # Format the platform string
platform = string.join( platform = string.join(
map(string.strip, map(string.strip,
filter(len,args)), filter(len, args)),
'-') '-')
# Cleanup some possible filename obstacles... # Cleanup some possible filename obstacles...
@ -871,7 +963,10 @@ def architecture(executable=sys.executable,bits='',linkage=''):
bits = str(size*8) + 'bit' bits = str(size*8) + 'bit'
# Get data from the 'file' system command # Get data from the 'file' system command
output = _syscmd_file(executable,'') if executable:
output = _syscmd_file(executable, '')
else:
output = ''
if not output and \ if not output and \
executable == sys.executable: executable == sys.executable:
@ -960,6 +1055,10 @@ def uname():
release,version,csd,ptype = win32_ver() release,version,csd,ptype = win32_ver()
if release and version: if release and version:
use_syscmd_ver = 0 use_syscmd_ver = 0
# XXX Should try to parse the PROCESSOR_* environment variables
# available on Win XP and later; see
# http://support.microsoft.com/kb/888731 and
# http://www.geocities.com/rick_lively/MANUALS/ENV/MSWIN/PROCESSI.HTM
# Try the 'ver' system command available on some # Try the 'ver' system command available on some
# platforms # platforms
@ -1092,36 +1191,136 @@ def processor():
### Various APIs for extracting information from sys.version ### Various APIs for extracting information from sys.version
_sys_version_parser = re.compile(r'([\w.+]+)\s*' _sys_version_parser = re.compile(
'\(#?([^,]+),\s*([\w ]+),\s*([\w :]+)\)\s*' r'([\w.+]+)\s*'
'\[([^\]]+)\]?') '\(#?([^,]+),\s*([\w ]+),\s*([\w :]+)\)\s*'
_sys_version_cache = None '\[([^\]]+)\]?')
def _sys_version(): _jython_sys_version_parser = re.compile(
r'([\d\.]+)')
_ironpython_sys_version_parser = re.compile(
r'IronPython\s*'
'([\d\.]+)'
'(?: \(([\d\.]+)\))?'
' on (.NET [\d\.]+)')
_sys_version_cache = {}
def _sys_version(sys_version=None):
""" Returns a parsed version of Python's sys.version as tuple """ Returns a parsed version of Python's sys.version as tuple
(version, buildno, builddate, compiler) referring to the Python (name, version, branch, revision, buildno, builddate, compiler)
version, build number, build date/time as string and the compiler referring to the Python implementation name, version, branch,
identification string. revision, build number, build date/time as string and the compiler
identification string.
Note that unlike the Python sys.version, the returned value Note that unlike the Python sys.version, the returned value
for the Python version will always include the patchlevel (it for the Python version will always include the patchlevel (it
defaults to '.0'). defaults to '.0').
""" The function returns empty strings for tuple entries that
global _sys_version_cache cannot be determined.
if _sys_version_cache is not None: sys_version may be given to parse an alternative version
return _sys_version_cache string, e.g. if the version was read from a different Python
version, buildno, builddate, buildtime, compiler = \ interpreter.
_sys_version_parser.match(sys.version).groups()
builddate = builddate + ' ' + buildtime """
# Get the Python version
if sys_version is None:
sys_version = sys.version
# Try the cache first
result = _sys_version_cache.get(sys_version, None)
if result is not None:
return result
# Parse it
if sys_version[:10] == 'IronPython':
# IronPython
name = 'IronPython'
match = _ironpython_sys_version_parser.match(sys_version)
if match is None:
raise ValueError(
'failed to parse IronPython sys.version: %s' %
repr(sys_version))
version, alt_version, compiler = match.groups()
branch = ''
revision = ''
buildno = ''
builddate = ''
elif sys.platform[:4] == 'java':
# Jython
name = 'Jython'
match = _jython_sys_version_parser.match(sys_version)
if match is None:
raise ValueError(
'failed to parse Jython sys.version: %s' %
repr(sys_version))
version, = match.groups()
branch = ''
revision = ''
compiler = sys.platform
buildno = ''
builddate = ''
else:
# CPython
match = _sys_version_parser.match(sys_version)
if match is None:
raise ValueError(
'failed to parse CPython sys.version: %s' %
repr(sys_version))
version, buildno, builddate, buildtime, compiler = \
match.groups()
if hasattr(sys, 'subversion'):
# sys.subversion was added in Python 2.5
name, branch, revision = sys.subversion
else:
name = 'CPython'
branch = ''
revision = ''
builddate = builddate + ' ' + buildtime
# Add the patchlevel version if missing
l = string.split(version, '.') l = string.split(version, '.')
if len(l) == 2: if len(l) == 2:
l.append('0') l.append('0')
version = string.join(l, '.') version = string.join(l, '.')
_sys_version_cache = (version, buildno, builddate, compiler)
return _sys_version_cache # Build and cache the result
result = (name, version, branch, revision, buildno, builddate, compiler)
_sys_version_cache[sys_version] = result
return result
def _test_sys_version():
_sys_version_cache.clear()
for input, output in (
('2.4.3 (#1, Jun 21 2006, 13:54:21) \n[GCC 3.3.4 (pre 3.3.5 20040809)]',
('CPython', '2.4.3', '', '', '1', 'Jun 21 2006 13:54:21', 'GCC 3.3.4 (pre 3.3.5 20040809)')),
('IronPython 1.0.60816 on .NET 2.0.50727.42',
('IronPython', '1.0.60816', '', '', '', '', '.NET 2.0.50727.42')),
('IronPython 1.0 (1.0.61005.1977) on .NET 2.0.50727.42',
('IronPython', '1.0.0', '', '', '', '', '.NET 2.0.50727.42')),
):
parsed = _sys_version(input)
if parsed != output:
print (input, parsed)
def python_implementation():
""" Returns a string identifying the Python implementation.
Currently, the following implementations are identified:
'CPython' (C implementation of Python),
'IronPython' (.NET implementation of Python),
'Jython' (Java implementation of Python).
"""
return _sys_version()[0]
def python_version(): def python_version():
@ -1131,7 +1330,9 @@ def python_version():
will always include the patchlevel (it defaults to 0). will always include the patchlevel (it defaults to 0).
""" """
return _sys_version()[0] if hasattr(sys, 'version_info'):
return '%i.%i.%i' % sys.version_info[:3]
return _sys_version()[1]
def python_version_tuple(): def python_version_tuple():
@ -1142,7 +1343,36 @@ def python_version_tuple():
will always include the patchlevel (it defaults to 0). will always include the patchlevel (it defaults to 0).
""" """
return string.split(_sys_version()[0], '.') if hasattr(sys, 'version_info'):
return sys.version_info[:3]
return tuple(string.split(_sys_version()[1], '.'))
def python_branch():
""" Returns a string identifying the Python implementation
branch.
For CPython this is the Subversion branch from which the
Python binary was built.
If not available, an empty string is returned.
"""
return _sys_version()[2]
def python_revision():
""" Returns a string identifying the Python implementation
revision.
For CPython this is the Subversion revision from which the
Python binary was built.
If not available, an empty string is returned.
"""
return _sys_version()[3]
def python_build(): def python_build():
@ -1150,7 +1380,7 @@ def python_build():
build number and date as strings. build number and date as strings.
""" """
return _sys_version()[1:3] return _sys_version()[4:6]
def python_compiler(): def python_compiler():
@ -1158,7 +1388,7 @@ def python_compiler():
Python. Python.
""" """
return _sys_version()[3] return _sys_version()[6]
### The Opus Magnum of platform strings :-) ### The Opus Magnum of platform strings :-)
@ -1219,7 +1449,7 @@ def platform(aliased=0, terse=0):
elif system == 'Java': elif system == 'Java':
# Java platforms # Java platforms
r,v,vminfo,(os_name,os_version,os_arch) = java_ver() r,v,vminfo,(os_name,os_version,os_arch) = java_ver()
if terse: if terse or not os_name:
platform = _platform(system,release,version) platform = _platform(system,release,version)
else: else:
platform = _platform(system,release,version, platform = _platform(system,release,version,

View File

@@ -1448,6 +1448,9 @@ def locate(path, forceload=0):
text = TextDoc()
html = HTMLDoc()
class _OldStyleClass: pass
_OLD_INSTANCE_TYPE = type(_OldStyleClass())
def resolve(thing, forceload=0):
    """Given an object or a path to an object, get the object and its name."""
    if isinstance(thing, str):
@@ -1468,12 +1471,16 @@ def doc(thing, title='Python Library Documentation: %s', forceload=0):
        desc += ' in ' + name[:name.rfind('.')]
    elif module and module is not object:
        desc += ' in module ' + module.__name__
-    if not (inspect.ismodule(object) or
-            inspect.isclass(object) or
-            inspect.isroutine(object) or
-            inspect.isgetsetdescriptor(object) or
-            inspect.ismemberdescriptor(object) or
-            isinstance(object, property)):
+    if type(object) is _OLD_INSTANCE_TYPE:
+        # If the passed object is an instance of an old-style class,
+        # document its available methods instead of its value.
+        object = object.__class__
+    elif not (inspect.ismodule(object) or
+              inspect.isclass(object) or
+              inspect.isroutine(object) or
+              inspect.isgetsetdescriptor(object) or
+              inspect.ismemberdescriptor(object) or
+              isinstance(object, property)):
        # If the passed object is a piece of data or an instance,
        # document its available methods instead of its value.
        object = type(object)

View File

@@ -68,7 +68,7 @@ def register_adapters_and_converters():
        timepart_full = timepart.split(".")
        hours, minutes, seconds = map(int, timepart_full[0].split(":"))
        if len(timepart_full) == 2:
-            microseconds = int(float("0." + timepart_full[1]) * 1000000)
+            microseconds = int(timepart_full[1])
        else:
            microseconds = 0

View File

@@ -91,7 +91,7 @@ class RowFactoryTests(unittest.TestCase):
                                   list),
                        "row is not instance of list")
-    def CheckSqliteRow(self):
+    def CheckSqliteRowIndex(self):
        self.con.row_factory = sqlite.Row
        row = self.con.execute("select 1 as a, 2 as b").fetchone()
        self.failUnless(isinstance(row,
@@ -110,6 +110,27 @@ class RowFactoryTests(unittest.TestCase):
        self.failUnless(col1 == 1, "by index: wrong result for column 0")
        self.failUnless(col2 == 2, "by index: wrong result for column 1")
def CheckSqliteRowIter(self):
"""Checks if the row object is iterable"""
self.con.row_factory = sqlite.Row
row = self.con.execute("select 1 as a, 2 as b").fetchone()
for col in row:
pass
def CheckSqliteRowAsTuple(self):
"""Checks if the row object can be converted to a tuple"""
self.con.row_factory = sqlite.Row
row = self.con.execute("select 1 as a, 2 as b").fetchone()
t = tuple(row)
def CheckSqliteRowAsDict(self):
"""Checks if the row object can be correctly converted to a dictionary"""
self.con.row_factory = sqlite.Row
row = self.con.execute("select 1 as a, 2 as b").fetchone()
d = dict(row)
self.failUnlessEqual(d["a"], row["a"])
self.failUnlessEqual(d["b"], row["b"])
    def tearDown(self):
        self.con.close()

View File

@@ -69,6 +69,16 @@ class RegressionTests(unittest.TestCase):
        cur.execute('select 1 as "foo baz"')
        self.failUnlessEqual(cur.description[0][0], "foo baz")
def CheckStatementAvailable(self):
# pysqlite up to 2.3.2 crashed on this, because the active statement handle was not checked
# before trying to fetch data from it. close() destroys the active statement ...
con = sqlite.connect(":memory:", detect_types=sqlite.PARSE_DECLTYPES)
cur = con.cursor()
cur.execute("select 4 union select 5")
cur.close()
cur.fetchone()
cur.fetchone()
def suite():
    regression_suite = unittest.makeSuite(RegressionTests, "Check")
    return unittest.TestSuite((regression_suite,))

View File

@@ -112,6 +112,7 @@ class DeclTypesTests(unittest.TestCase):
        # and implement two custom ones
        sqlite.converters["BOOL"] = lambda x: bool(int(x))
        sqlite.converters["FOO"] = DeclTypesTests.Foo
+        sqlite.converters["WRONG"] = lambda x: "WRONG"
    def tearDown(self):
        del sqlite.converters["FLOAT"]
@@ -123,7 +124,7 @@
    def CheckString(self):
        # default
        self.cur.execute("insert into test(s) values (?)", ("foo",))
-        self.cur.execute("select s from test")
+        self.cur.execute('select s as "s [WRONG]" from test')
        row = self.cur.fetchone()
        self.failUnlessEqual(row[0], "foo")
@@ -210,26 +211,32 @@ class DeclTypesTests(unittest.TestCase):
class ColNamesTests(unittest.TestCase):
    def setUp(self):
-        self.con = sqlite.connect(":memory:", detect_types=sqlite.PARSE_COLNAMES|sqlite.PARSE_DECLTYPES)
+        self.con = sqlite.connect(":memory:", detect_types=sqlite.PARSE_COLNAMES)
        self.cur = self.con.cursor()
        self.cur.execute("create table test(x foo)")
        sqlite.converters["FOO"] = lambda x: "[%s]" % x
        sqlite.converters["BAR"] = lambda x: "<%s>" % x
        sqlite.converters["EXC"] = lambda x: 5/0
+        sqlite.converters["B1B1"] = lambda x: "MARKER"
    def tearDown(self):
        del sqlite.converters["FOO"]
        del sqlite.converters["BAR"]
        del sqlite.converters["EXC"]
+        del sqlite.converters["B1B1"]
        self.cur.close()
        self.con.close()
-    def CheckDeclType(self):
+    def CheckDeclTypeNotUsed(self):
"""
Assures that the declared type is not used when PARSE_DECLTYPES
is not set.
"""
        self.cur.execute("insert into test(x) values (?)", ("xxx",))
        self.cur.execute("select x from test")
        val = self.cur.fetchone()[0]
-        self.failUnlessEqual(val, "[xxx]")
+        self.failUnlessEqual(val, "xxx")
    def CheckNone(self):
        self.cur.execute("insert into test(x) values (?)", (None,))
@@ -247,6 +254,11 @@ class ColNamesTests(unittest.TestCase):
        # whitespace should be stripped.
        self.failUnlessEqual(self.cur.description[0][0], "x")
def CheckCaseInConverterName(self):
self.cur.execute("""select 'other' as "x [b1b1]\"""")
val = self.cur.fetchone()[0]
self.failUnlessEqual(val, "MARKER")
    def CheckCursorDescriptionNoRow(self):
        """
        cursor.description should at least provide the column name(s), even if
@@ -340,6 +352,13 @@ class DateTimeTests(unittest.TestCase):
        ts2 = self.cur.fetchone()[0]
        self.failUnlessEqual(ts, ts2)
def CheckDateTimeSubSecondsFloatingPoint(self):
ts = sqlite.Timestamp(2004, 2, 14, 7, 15, 0, 510241)
self.cur.execute("insert into test(ts) values (?)", (ts,))
self.cur.execute("select ts from test")
ts2 = self.cur.fetchone()[0]
self.failUnlessEqual(ts, ts2)
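
The new sub-second test boils down to this round trip (a sketch; it relies on the default "timestamp" converter the sqlite3 module registers, so a column declared as timestamp plus PARSE_DECLTYPES is enough):

    import datetime
    import sqlite3

    con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
    cur = con.cursor()
    cur.execute("create table test(ts timestamp)")

    # Microseconds must survive the round trip through the default converter.
    ts = datetime.datetime(2004, 2, 14, 7, 15, 0, 510241)
    cur.execute("insert into test(ts) values (?)", (ts,))
    cur.execute("select ts from test")
    assert cur.fetchone()[0] == ts
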
def suite(): def suite():
sqlite_type_suite = unittest.makeSuite(SqliteTypeTests, "Check") sqlite_type_suite = unittest.makeSuite(SqliteTypeTests, "Check")
decltypes_type_suite = unittest.makeSuite(DeclTypesTests, "Check") decltypes_type_suite = unittest.makeSuite(DeclTypesTests, "Check")



@ -500,7 +500,7 @@ def list2cmdline(seq):
if result: if result:
result.append(' ') result.append(' ')
needquote = (" " in arg) or ("\t" in arg) needquote = (" " in arg) or ("\t" in arg) or arg == ""
if needquote: if needquote:
result.append('"') result.append('"')


@ -132,7 +132,6 @@ class AllTest(unittest.TestCase):
self.check_all("rlcompleter") self.check_all("rlcompleter")
self.check_all("robotparser") self.check_all("robotparser")
self.check_all("sched") self.check_all("sched")
self.check_all("sets")
self.check_all("sgmllib") self.check_all("sgmllib")
self.check_all("shelve") self.check_all("shelve")
self.check_all("shlex") self.check_all("shlex")


@ -1500,8 +1500,16 @@ class TestHelp(BaseTest):
self.assertHelpEquals(_expected_help_long_opts_first) self.assertHelpEquals(_expected_help_long_opts_first)
def test_help_title_formatter(self): def test_help_title_formatter(self):
self.parser.formatter = TitledHelpFormatter() save = os.environ.get("COLUMNS")
self.assertHelpEquals(_expected_help_title_formatter) try:
os.environ["COLUMNS"] = "80"
self.parser.formatter = TitledHelpFormatter()
self.assertHelpEquals(_expected_help_title_formatter)
finally:
if save is not None:
os.environ["COLUMNS"] = save
else:
del os.environ["COLUMNS"]
def test_wrap_columns(self): def test_wrap_columns(self):
# Ensure that wrapping respects $COLUMNS environment variable. # Ensure that wrapping respects $COLUMNS environment variable.


@ -476,6 +476,16 @@ class SetSubclass(set):
class TestSetSubclass(TestSet): class TestSetSubclass(TestSet):
thetype = SetSubclass thetype = SetSubclass
class SetSubclassWithKeywordArgs(set):
def __init__(self, iterable=[], newarg=None):
set.__init__(self, iterable)
class TestSetSubclassWithKeywordArgs(TestSet):
def test_keywords_in_subclass(self):
'SF bug #1486663 -- this used to erroneously raise a TypeError'
SetSubclassWithKeywordArgs(newarg=1)
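
SF #1486663 was that the set/frozenset constructors rejected keyword arguments outright, even when a subclass consumed them in its own __init__; after the fix a subclass like the hypothetical one below works:

    class TaggedSet(set):
        # Hypothetical subclass: the extra keyword argument is consumed here.
        def __init__(self, iterable=(), tag=None):
            set.__init__(self, iterable)
            self.tag = tag

    s = TaggedSet([1, 2, 3], tag="small")
    assert sorted(s) == [1, 2, 3] and s.tag == "small"

    # This used to raise a TypeError from the base type before the fix.
    TaggedSet(tag="empty")
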
class TestFrozenSet(TestJointOps): class TestFrozenSet(TestJointOps):
thetype = frozenset thetype = frozenset
@ -1454,6 +1464,7 @@ def test_main(verbose=None):
test_classes = ( test_classes = (
TestSet, TestSet,
TestSetSubclass, TestSetSubclass,
TestSetSubclassWithKeywordArgs,
TestFrozenSet, TestFrozenSet,
TestFrozenSetSubclass, TestFrozenSetSubclass,
TestSetOfSets, TestSetOfSets,


@ -430,6 +430,8 @@ class ProcessTestCase(unittest.TestCase):
'"a\\\\b c" d e') '"a\\\\b c" d e')
self.assertEqual(subprocess.list2cmdline(['a\\\\b\\ c', 'd', 'e']), self.assertEqual(subprocess.list2cmdline(['a\\\\b\\ c', 'd', 'e']),
'"a\\\\b\\ c" d e') '"a\\\\b\\ c" d e')
self.assertEqual(subprocess.list2cmdline(['ab', '']),
'ab ""')
def test_poll(self): def test_poll(self):


@ -484,6 +484,8 @@ Parser/metagrammar.o: $(srcdir)/Parser/metagrammar.c
Parser/tokenizer_pgen.o: $(srcdir)/Parser/tokenizer.c Parser/tokenizer_pgen.o: $(srcdir)/Parser/tokenizer.c
Parser/pgenmain.o: $(srcdir)/Include/parsetok.h
$(AST_H): $(AST_ASDL) $(ASDLGEN_FILES) $(AST_H): $(AST_ASDL) $(ASDLGEN_FILES)
$(ASDLGEN) -h $(AST_H_DIR) $(AST_ASDL) $(ASDLGEN) -h $(AST_H_DIR) $(AST_ASDL)
@ -537,6 +539,7 @@ PYTHON_HEADERS= \
Include/moduleobject.h \ Include/moduleobject.h \
Include/object.h \ Include/object.h \
Include/objimpl.h \ Include/objimpl.h \
Include/parsetok.h \
Include/patchlevel.h \ Include/patchlevel.h \
Include/pyarena.h \ Include/pyarena.h \
Include/pydebug.h \ Include/pydebug.h \


@ -4749,7 +4749,7 @@ init_ctypes(void)
#endif #endif
PyModule_AddObject(m, "FUNCFLAG_CDECL", PyInt_FromLong(FUNCFLAG_CDECL)); PyModule_AddObject(m, "FUNCFLAG_CDECL", PyInt_FromLong(FUNCFLAG_CDECL));
PyModule_AddObject(m, "FUNCFLAG_PYTHONAPI", PyInt_FromLong(FUNCFLAG_PYTHONAPI)); PyModule_AddObject(m, "FUNCFLAG_PYTHONAPI", PyInt_FromLong(FUNCFLAG_PYTHONAPI));
PyModule_AddStringConstant(m, "__version__", "1.0.1"); PyModule_AddStringConstant(m, "__version__", "1.1.0");
PyModule_AddObject(m, "_memmove_addr", PyLong_FromVoidPtr(memmove)); PyModule_AddObject(m, "_memmove_addr", PyLong_FromVoidPtr(memmove));
PyModule_AddObject(m, "_memset_addr", PyLong_FromVoidPtr(memset)); PyModule_AddObject(m, "_memset_addr", PyLong_FromVoidPtr(memset));


@ -224,7 +224,8 @@ ffi_call(/*@dependent@*/ ffi_cif *cif,
#else #else
case FFI_SYSV: case FFI_SYSV:
/*@-usedef@*/ /*@-usedef@*/
return ffi_call_AMD64(ffi_prep_args, &ecif, cif->bytes, /* Function call needs at least 40 bytes stack size, on win64 AMD64 */
return ffi_call_AMD64(ffi_prep_args, &ecif, cif->bytes ? cif->bytes : 40,
cif->flags, ecif.rvalue, fn); cif->flags, ecif.rvalue, fn);
/*@=usedef@*/ /*@=usedef@*/
break; break;


@ -25,11 +25,11 @@
#include <limits.h> #include <limits.h>
/* only used internally */ /* only used internally */
Node* new_node(PyObject* key, PyObject* data) pysqlite_Node* pysqlite_new_node(PyObject* key, PyObject* data)
{ {
Node* node; pysqlite_Node* node;
node = (Node*) (NodeType.tp_alloc(&NodeType, 0)); node = (pysqlite_Node*) (pysqlite_NodeType.tp_alloc(&pysqlite_NodeType, 0));
if (!node) { if (!node) {
return NULL; return NULL;
} }
@ -46,7 +46,7 @@ Node* new_node(PyObject* key, PyObject* data)
return node; return node;
} }
void node_dealloc(Node* self) void pysqlite_node_dealloc(pysqlite_Node* self)
{ {
Py_DECREF(self->key); Py_DECREF(self->key);
Py_DECREF(self->data); Py_DECREF(self->data);
@ -54,7 +54,7 @@ void node_dealloc(Node* self)
self->ob_type->tp_free((PyObject*)self); self->ob_type->tp_free((PyObject*)self);
} }
int cache_init(Cache* self, PyObject* args, PyObject* kwargs) int pysqlite_cache_init(pysqlite_Cache* self, PyObject* args, PyObject* kwargs)
{ {
PyObject* factory; PyObject* factory;
int size = 10; int size = 10;
@ -86,10 +86,10 @@ int cache_init(Cache* self, PyObject* args, PyObject* kwargs)
return 0; return 0;
} }
void cache_dealloc(Cache* self) void pysqlite_cache_dealloc(pysqlite_Cache* self)
{ {
Node* node; pysqlite_Node* node;
Node* delete_node; pysqlite_Node* delete_node;
if (!self->factory) { if (!self->factory) {
/* constructor failed, just get out of here */ /* constructor failed, just get out of here */
@ -112,14 +112,14 @@ void cache_dealloc(Cache* self)
self->ob_type->tp_free((PyObject*)self); self->ob_type->tp_free((PyObject*)self);
} }
PyObject* cache_get(Cache* self, PyObject* args) PyObject* pysqlite_cache_get(pysqlite_Cache* self, PyObject* args)
{ {
PyObject* key = args; PyObject* key = args;
Node* node; pysqlite_Node* node;
Node* ptr; pysqlite_Node* ptr;
PyObject* data; PyObject* data;
node = (Node*)PyDict_GetItem(self->mapping, key); node = (pysqlite_Node*)PyDict_GetItem(self->mapping, key);
if (node) { if (node) {
/* an entry for this key already exists in the cache */ /* an entry for this key already exists in the cache */
@ -186,7 +186,7 @@ PyObject* cache_get(Cache* self, PyObject* args)
return NULL; return NULL;
} }
node = new_node(key, data); node = pysqlite_new_node(key, data);
if (!node) { if (!node) {
return NULL; return NULL;
} }
@ -211,9 +211,9 @@ PyObject* cache_get(Cache* self, PyObject* args)
return node->data; return node->data;
} }
PyObject* cache_display(Cache* self, PyObject* args) PyObject* pysqlite_cache_display(pysqlite_Cache* self, PyObject* args)
{ {
Node* ptr; pysqlite_Node* ptr;
PyObject* prevkey; PyObject* prevkey;
PyObject* nextkey; PyObject* nextkey;
PyObject* fmt_args; PyObject* fmt_args;
@ -265,20 +265,20 @@ PyObject* cache_display(Cache* self, PyObject* args)
} }
static PyMethodDef cache_methods[] = { static PyMethodDef cache_methods[] = {
{"get", (PyCFunction)cache_get, METH_O, {"get", (PyCFunction)pysqlite_cache_get, METH_O,
PyDoc_STR("Gets an entry from the cache or calls the factory function to produce one.")}, PyDoc_STR("Gets an entry from the cache or calls the factory function to produce one.")},
{"display", (PyCFunction)cache_display, METH_NOARGS, {"display", (PyCFunction)pysqlite_cache_display, METH_NOARGS,
PyDoc_STR("For debugging only.")}, PyDoc_STR("For debugging only.")},
{NULL, NULL} {NULL, NULL}
}; };
PyTypeObject NodeType = { PyTypeObject pysqlite_NodeType = {
PyObject_HEAD_INIT(NULL) PyObject_HEAD_INIT(NULL)
0, /* ob_size */ 0, /* ob_size */
MODULE_NAME "Node", /* tp_name */ MODULE_NAME "Node", /* tp_name */
sizeof(Node), /* tp_basicsize */ sizeof(pysqlite_Node), /* tp_basicsize */
0, /* tp_itemsize */ 0, /* tp_itemsize */
(destructor)node_dealloc, /* tp_dealloc */ (destructor)pysqlite_node_dealloc, /* tp_dealloc */
0, /* tp_print */ 0, /* tp_print */
0, /* tp_getattr */ 0, /* tp_getattr */
0, /* tp_setattr */ 0, /* tp_setattr */
@ -315,13 +315,13 @@ PyTypeObject NodeType = {
0 /* tp_free */ 0 /* tp_free */
}; };
PyTypeObject CacheType = { PyTypeObject pysqlite_CacheType = {
PyObject_HEAD_INIT(NULL) PyObject_HEAD_INIT(NULL)
0, /* ob_size */ 0, /* ob_size */
MODULE_NAME ".Cache", /* tp_name */ MODULE_NAME ".Cache", /* tp_name */
sizeof(Cache), /* tp_basicsize */ sizeof(pysqlite_Cache), /* tp_basicsize */
0, /* tp_itemsize */ 0, /* tp_itemsize */
(destructor)cache_dealloc, /* tp_dealloc */ (destructor)pysqlite_cache_dealloc, /* tp_dealloc */
0, /* tp_print */ 0, /* tp_print */
0, /* tp_getattr */ 0, /* tp_getattr */
0, /* tp_setattr */ 0, /* tp_setattr */
@ -352,24 +352,24 @@ PyTypeObject CacheType = {
0, /* tp_descr_get */ 0, /* tp_descr_get */
0, /* tp_descr_set */ 0, /* tp_descr_set */
0, /* tp_dictoffset */ 0, /* tp_dictoffset */
(initproc)cache_init, /* tp_init */ (initproc)pysqlite_cache_init, /* tp_init */
0, /* tp_alloc */ 0, /* tp_alloc */
0, /* tp_new */ 0, /* tp_new */
0 /* tp_free */ 0 /* tp_free */
}; };
extern int cache_setup_types(void) extern int pysqlite_cache_setup_types(void)
{ {
int rc; int rc;
NodeType.tp_new = PyType_GenericNew; pysqlite_NodeType.tp_new = PyType_GenericNew;
CacheType.tp_new = PyType_GenericNew; pysqlite_CacheType.tp_new = PyType_GenericNew;
rc = PyType_Ready(&NodeType); rc = PyType_Ready(&pysqlite_NodeType);
if (rc < 0) { if (rc < 0) {
return rc; return rc;
} }
rc = PyType_Ready(&CacheType); rc = PyType_Ready(&pysqlite_CacheType);
return rc; return rc;
} }


@ -29,15 +29,15 @@
* dictionary. The list items are of type 'Node' and the dictionary has the * dictionary. The list items are of type 'Node' and the dictionary has the
* nodes as values. */ * nodes as values. */
typedef struct _Node typedef struct _pysqlite_Node
{ {
PyObject_HEAD PyObject_HEAD
PyObject* key; PyObject* key;
PyObject* data; PyObject* data;
long count; long count;
struct _Node* prev; struct _pysqlite_Node* prev;
struct _Node* next; struct _pysqlite_Node* next;
} Node; } pysqlite_Node;
typedef struct typedef struct
{ {
@ -50,24 +50,24 @@ typedef struct
/* the factory callable */ /* the factory callable */
PyObject* factory; PyObject* factory;
Node* first; pysqlite_Node* first;
Node* last; pysqlite_Node* last;
/* if set, decrement the factory function when the Cache is deallocated. /* if set, decrement the factory function when the Cache is deallocated.
* this is almost always desirable, but not in the pysqlite context */ * this is almost always desirable, but not in the pysqlite context */
int decref_factory; int decref_factory;
} Cache; } pysqlite_Cache;
extern PyTypeObject NodeType; extern PyTypeObject pysqlite_NodeType;
extern PyTypeObject CacheType; extern PyTypeObject pysqlite_CacheType;
int node_init(Node* self, PyObject* args, PyObject* kwargs); int pysqlite_node_init(pysqlite_Node* self, PyObject* args, PyObject* kwargs);
void node_dealloc(Node* self); void pysqlite_node_dealloc(pysqlite_Node* self);
int cache_init(Cache* self, PyObject* args, PyObject* kwargs); int pysqlite_cache_init(pysqlite_Cache* self, PyObject* args, PyObject* kwargs);
void cache_dealloc(Cache* self); void pysqlite_cache_dealloc(pysqlite_Cache* self);
PyObject* cache_get(Cache* self, PyObject* args); PyObject* pysqlite_cache_get(pysqlite_Cache* self, PyObject* args);
int cache_setup_types(void); int pysqlite_cache_setup_types(void);
#endif #endif


@ -32,7 +32,7 @@
#include "pythread.h" #include "pythread.h"
static int connection_set_isolation_level(Connection* self, PyObject* isolation_level); static int pysqlite_connection_set_isolation_level(pysqlite_Connection* self, PyObject* isolation_level);
void _sqlite3_result_error(sqlite3_context* ctx, const char* errmsg, int len) void _sqlite3_result_error(sqlite3_context* ctx, const char* errmsg, int len)
@ -43,11 +43,11 @@ void _sqlite3_result_error(sqlite3_context* ctx, const char* errmsg, int len)
#if SQLITE_VERSION_NUMBER >= 3003003 #if SQLITE_VERSION_NUMBER >= 3003003
sqlite3_result_error(ctx, errmsg, len); sqlite3_result_error(ctx, errmsg, len);
#else #else
PyErr_SetString(OperationalError, errmsg); PyErr_SetString(pysqlite_OperationalError, errmsg);
#endif #endif
} }
int connection_init(Connection* self, PyObject* args, PyObject* kwargs) int pysqlite_connection_init(pysqlite_Connection* self, PyObject* args, PyObject* kwargs)
{ {
static char *kwlist[] = {"database", "timeout", "detect_types", "isolation_level", "check_same_thread", "factory", "cached_statements", NULL, NULL}; static char *kwlist[] = {"database", "timeout", "detect_types", "isolation_level", "check_same_thread", "factory", "cached_statements", NULL, NULL};
@ -82,7 +82,7 @@ int connection_init(Connection* self, PyObject* args, PyObject* kwargs)
Py_END_ALLOW_THREADS Py_END_ALLOW_THREADS
if (rc != SQLITE_OK) { if (rc != SQLITE_OK) {
_seterror(self->db); _pysqlite_seterror(self->db);
return -1; return -1;
} }
@ -95,10 +95,10 @@ int connection_init(Connection* self, PyObject* args, PyObject* kwargs)
Py_INCREF(isolation_level); Py_INCREF(isolation_level);
} }
self->isolation_level = NULL; self->isolation_level = NULL;
connection_set_isolation_level(self, isolation_level); pysqlite_connection_set_isolation_level(self, isolation_level);
Py_DECREF(isolation_level); Py_DECREF(isolation_level);
self->statement_cache = (Cache*)PyObject_CallFunction((PyObject*)&CacheType, "Oi", self, cached_statements); self->statement_cache = (pysqlite_Cache*)PyObject_CallFunction((PyObject*)&pysqlite_CacheType, "Oi", self, cached_statements);
if (PyErr_Occurred()) { if (PyErr_Occurred()) {
return -1; return -1;
} }
@ -135,41 +135,41 @@ int connection_init(Connection* self, PyObject* args, PyObject* kwargs)
return -1; return -1;
} }
self->Warning = Warning; self->Warning = pysqlite_Warning;
self->Error = Error; self->Error = pysqlite_Error;
self->InterfaceError = InterfaceError; self->InterfaceError = pysqlite_InterfaceError;
self->DatabaseError = DatabaseError; self->DatabaseError = pysqlite_DatabaseError;
self->DataError = DataError; self->DataError = pysqlite_DataError;
self->OperationalError = OperationalError; self->OperationalError = pysqlite_OperationalError;
self->IntegrityError = IntegrityError; self->IntegrityError = pysqlite_IntegrityError;
self->InternalError = InternalError; self->InternalError = pysqlite_InternalError;
self->ProgrammingError = ProgrammingError; self->ProgrammingError = pysqlite_ProgrammingError;
self->NotSupportedError = NotSupportedError; self->NotSupportedError = pysqlite_NotSupportedError;
return 0; return 0;
} }
/* Empty the entire statement cache of this connection */ /* Empty the entire statement cache of this connection */
void flush_statement_cache(Connection* self) void pysqlite_flush_statement_cache(pysqlite_Connection* self)
{ {
Node* node; pysqlite_Node* node;
Statement* statement; pysqlite_Statement* statement;
node = self->statement_cache->first; node = self->statement_cache->first;
while (node) { while (node) {
statement = (Statement*)(node->data); statement = (pysqlite_Statement*)(node->data);
(void)statement_finalize(statement); (void)pysqlite_statement_finalize(statement);
node = node->next; node = node->next;
} }
Py_DECREF(self->statement_cache); Py_DECREF(self->statement_cache);
self->statement_cache = (Cache*)PyObject_CallFunction((PyObject*)&CacheType, "O", self); self->statement_cache = (pysqlite_Cache*)PyObject_CallFunction((PyObject*)&pysqlite_CacheType, "O", self);
Py_DECREF(self); Py_DECREF(self);
self->statement_cache->decref_factory = 0; self->statement_cache->decref_factory = 0;
} }
void reset_all_statements(Connection* self) void pysqlite_reset_all_statements(pysqlite_Connection* self)
{ {
int i; int i;
PyObject* weakref; PyObject* weakref;
@ -179,12 +179,12 @@ void reset_all_statements(Connection* self)
weakref = PyList_GetItem(self->statements, i); weakref = PyList_GetItem(self->statements, i);
statement = PyWeakref_GetObject(weakref); statement = PyWeakref_GetObject(weakref);
if (statement != Py_None) { if (statement != Py_None) {
(void)statement_reset((Statement*)statement); (void)pysqlite_statement_reset((pysqlite_Statement*)statement);
} }
} }
} }
void connection_dealloc(Connection* self) void pysqlite_connection_dealloc(pysqlite_Connection* self)
{ {
Py_XDECREF(self->statement_cache); Py_XDECREF(self->statement_cache);
@ -208,7 +208,7 @@ void connection_dealloc(Connection* self)
self->ob_type->tp_free((PyObject*)self); self->ob_type->tp_free((PyObject*)self);
} }
PyObject* connection_cursor(Connection* self, PyObject* args, PyObject* kwargs) PyObject* pysqlite_connection_cursor(pysqlite_Connection* self, PyObject* args, PyObject* kwargs)
{ {
static char *kwlist[] = {"factory", NULL, NULL}; static char *kwlist[] = {"factory", NULL, NULL};
PyObject* factory = NULL; PyObject* factory = NULL;
@ -220,34 +220,34 @@ PyObject* connection_cursor(Connection* self, PyObject* args, PyObject* kwargs)
return NULL; return NULL;
} }
if (!check_thread(self) || !check_connection(self)) { if (!pysqlite_check_thread(self) || !pysqlite_check_connection(self)) {
return NULL; return NULL;
} }
if (factory == NULL) { if (factory == NULL) {
factory = (PyObject*)&CursorType; factory = (PyObject*)&pysqlite_CursorType;
} }
cursor = PyObject_CallFunction(factory, "O", self); cursor = PyObject_CallFunction(factory, "O", self);
if (cursor && self->row_factory != Py_None) { if (cursor && self->row_factory != Py_None) {
Py_XDECREF(((Cursor*)cursor)->row_factory); Py_XDECREF(((pysqlite_Cursor*)cursor)->row_factory);
Py_INCREF(self->row_factory); Py_INCREF(self->row_factory);
((Cursor*)cursor)->row_factory = self->row_factory; ((pysqlite_Cursor*)cursor)->row_factory = self->row_factory;
} }
return cursor; return cursor;
} }
PyObject* connection_close(Connection* self, PyObject* args) PyObject* pysqlite_connection_close(pysqlite_Connection* self, PyObject* args)
{ {
int rc; int rc;
if (!check_thread(self)) { if (!pysqlite_check_thread(self)) {
return NULL; return NULL;
} }
flush_statement_cache(self); pysqlite_flush_statement_cache(self);
if (self->db) { if (self->db) {
Py_BEGIN_ALLOW_THREADS Py_BEGIN_ALLOW_THREADS
@ -255,7 +255,7 @@ PyObject* connection_close(Connection* self, PyObject* args)
Py_END_ALLOW_THREADS Py_END_ALLOW_THREADS
if (rc != SQLITE_OK) { if (rc != SQLITE_OK) {
_seterror(self->db); _pysqlite_seterror(self->db);
return NULL; return NULL;
} else { } else {
self->db = NULL; self->db = NULL;
@ -271,17 +271,17 @@ PyObject* connection_close(Connection* self, PyObject* args)
* *
* 0 => error; 1 => ok * 0 => error; 1 => ok
*/ */
int check_connection(Connection* con) int pysqlite_check_connection(pysqlite_Connection* con)
{ {
if (!con->db) { if (!con->db) {
PyErr_SetString(ProgrammingError, "Cannot operate on a closed database."); PyErr_SetString(pysqlite_ProgrammingError, "Cannot operate on a closed database.");
return 0; return 0;
} else { } else {
return 1; return 1;
} }
} }
PyObject* _connection_begin(Connection* self) PyObject* _pysqlite_connection_begin(pysqlite_Connection* self)
{ {
int rc; int rc;
const char* tail; const char* tail;
@ -292,7 +292,7 @@ PyObject* _connection_begin(Connection* self)
Py_END_ALLOW_THREADS Py_END_ALLOW_THREADS
if (rc != SQLITE_OK) { if (rc != SQLITE_OK) {
_seterror(self->db); _pysqlite_seterror(self->db);
goto error; goto error;
} }
@ -300,7 +300,7 @@ PyObject* _connection_begin(Connection* self)
if (rc == SQLITE_DONE) { if (rc == SQLITE_DONE) {
self->inTransaction = 1; self->inTransaction = 1;
} else { } else {
_seterror(self->db); _pysqlite_seterror(self->db);
} }
Py_BEGIN_ALLOW_THREADS Py_BEGIN_ALLOW_THREADS
@ -308,7 +308,7 @@ PyObject* _connection_begin(Connection* self)
Py_END_ALLOW_THREADS Py_END_ALLOW_THREADS
if (rc != SQLITE_OK && !PyErr_Occurred()) { if (rc != SQLITE_OK && !PyErr_Occurred()) {
_seterror(self->db); _pysqlite_seterror(self->db);
} }
error: error:
@ -320,13 +320,13 @@ error:
} }
} }
PyObject* connection_commit(Connection* self, PyObject* args) PyObject* pysqlite_connection_commit(pysqlite_Connection* self, PyObject* args)
{ {
int rc; int rc;
const char* tail; const char* tail;
sqlite3_stmt* statement; sqlite3_stmt* statement;
if (!check_thread(self) || !check_connection(self)) { if (!pysqlite_check_thread(self) || !pysqlite_check_connection(self)) {
return NULL; return NULL;
} }
@ -335,7 +335,7 @@ PyObject* connection_commit(Connection* self, PyObject* args)
rc = sqlite3_prepare(self->db, "COMMIT", -1, &statement, &tail); rc = sqlite3_prepare(self->db, "COMMIT", -1, &statement, &tail);
Py_END_ALLOW_THREADS Py_END_ALLOW_THREADS
if (rc != SQLITE_OK) { if (rc != SQLITE_OK) {
_seterror(self->db); _pysqlite_seterror(self->db);
goto error; goto error;
} }
@ -343,14 +343,14 @@ PyObject* connection_commit(Connection* self, PyObject* args)
if (rc == SQLITE_DONE) { if (rc == SQLITE_DONE) {
self->inTransaction = 0; self->inTransaction = 0;
} else { } else {
_seterror(self->db); _pysqlite_seterror(self->db);
} }
Py_BEGIN_ALLOW_THREADS Py_BEGIN_ALLOW_THREADS
rc = sqlite3_finalize(statement); rc = sqlite3_finalize(statement);
Py_END_ALLOW_THREADS Py_END_ALLOW_THREADS
if (rc != SQLITE_OK && !PyErr_Occurred()) { if (rc != SQLITE_OK && !PyErr_Occurred()) {
_seterror(self->db); _pysqlite_seterror(self->db);
} }
} }
@ -364,24 +364,24 @@ error:
} }
} }
PyObject* connection_rollback(Connection* self, PyObject* args) PyObject* pysqlite_connection_rollback(pysqlite_Connection* self, PyObject* args)
{ {
int rc; int rc;
const char* tail; const char* tail;
sqlite3_stmt* statement; sqlite3_stmt* statement;
if (!check_thread(self) || !check_connection(self)) { if (!pysqlite_check_thread(self) || !pysqlite_check_connection(self)) {
return NULL; return NULL;
} }
if (self->inTransaction) { if (self->inTransaction) {
reset_all_statements(self); pysqlite_reset_all_statements(self);
Py_BEGIN_ALLOW_THREADS Py_BEGIN_ALLOW_THREADS
rc = sqlite3_prepare(self->db, "ROLLBACK", -1, &statement, &tail); rc = sqlite3_prepare(self->db, "ROLLBACK", -1, &statement, &tail);
Py_END_ALLOW_THREADS Py_END_ALLOW_THREADS
if (rc != SQLITE_OK) { if (rc != SQLITE_OK) {
_seterror(self->db); _pysqlite_seterror(self->db);
goto error; goto error;
} }
@ -389,14 +389,14 @@ PyObject* connection_rollback(Connection* self, PyObject* args)
if (rc == SQLITE_DONE) { if (rc == SQLITE_DONE) {
self->inTransaction = 0; self->inTransaction = 0;
} else { } else {
_seterror(self->db); _pysqlite_seterror(self->db);
} }
Py_BEGIN_ALLOW_THREADS Py_BEGIN_ALLOW_THREADS
rc = sqlite3_finalize(statement); rc = sqlite3_finalize(statement);
Py_END_ALLOW_THREADS Py_END_ALLOW_THREADS
if (rc != SQLITE_OK && !PyErr_Occurred()) { if (rc != SQLITE_OK && !PyErr_Occurred()) {
_seterror(self->db); _pysqlite_seterror(self->db);
} }
} }
@ -410,7 +410,7 @@ error:
} }
} }
void _set_result(sqlite3_context* context, PyObject* py_val) void _pysqlite_set_result(sqlite3_context* context, PyObject* py_val)
{ {
long longval; long longval;
const char* buffer; const char* buffer;
@ -445,7 +445,7 @@ void _set_result(sqlite3_context* context, PyObject* py_val)
} }
} }
PyObject* _build_py_params(sqlite3_context *context, int argc, sqlite3_value** argv) PyObject* _pysqlite_build_py_params(sqlite3_context *context, int argc, sqlite3_value** argv)
{ {
PyObject* args; PyObject* args;
int i; int i;
@ -512,7 +512,7 @@ PyObject* _build_py_params(sqlite3_context *context, int argc, sqlite3_value** a
return args; return args;
} }
void _func_callback(sqlite3_context* context, int argc, sqlite3_value** argv) void _pysqlite_func_callback(sqlite3_context* context, int argc, sqlite3_value** argv)
{ {
PyObject* args; PyObject* args;
PyObject* py_func; PyObject* py_func;
@ -524,14 +524,14 @@ void _func_callback(sqlite3_context* context, int argc, sqlite3_value** argv)
py_func = (PyObject*)sqlite3_user_data(context); py_func = (PyObject*)sqlite3_user_data(context);
args = _build_py_params(context, argc, argv); args = _pysqlite_build_py_params(context, argc, argv);
if (args) { if (args) {
py_retval = PyObject_CallObject(py_func, args); py_retval = PyObject_CallObject(py_func, args);
Py_DECREF(args); Py_DECREF(args);
} }
if (py_retval) { if (py_retval) {
_set_result(context, py_retval); _pysqlite_set_result(context, py_retval);
Py_DECREF(py_retval); Py_DECREF(py_retval);
} else { } else {
if (_enable_callback_tracebacks) { if (_enable_callback_tracebacks) {
@ -545,7 +545,7 @@ void _func_callback(sqlite3_context* context, int argc, sqlite3_value** argv)
PyGILState_Release(threadstate); PyGILState_Release(threadstate);
} }
static void _step_callback(sqlite3_context *context, int argc, sqlite3_value** params) static void _pysqlite_step_callback(sqlite3_context *context, int argc, sqlite3_value** params)
{ {
PyObject* args; PyObject* args;
PyObject* function_result = NULL; PyObject* function_result = NULL;
@ -581,7 +581,7 @@ static void _step_callback(sqlite3_context *context, int argc, sqlite3_value** p
goto error; goto error;
} }
args = _build_py_params(context, argc, params); args = _pysqlite_build_py_params(context, argc, params);
if (!args) { if (!args) {
goto error; goto error;
} }
@ -605,7 +605,7 @@ error:
PyGILState_Release(threadstate); PyGILState_Release(threadstate);
} }
void _final_callback(sqlite3_context* context) void _pysqlite_final_callback(sqlite3_context* context)
{ {
PyObject* function_result = NULL; PyObject* function_result = NULL;
PyObject** aggregate_instance; PyObject** aggregate_instance;
@ -634,7 +634,7 @@ void _final_callback(sqlite3_context* context)
} }
_sqlite3_result_error(context, "user-defined aggregate's 'finalize' method raised error", -1); _sqlite3_result_error(context, "user-defined aggregate's 'finalize' method raised error", -1);
} else { } else {
_set_result(context, function_result); _pysqlite_set_result(context, function_result);
} }
error: error:
@ -644,7 +644,7 @@ error:
PyGILState_Release(threadstate); PyGILState_Release(threadstate);
} }
void _drop_unused_statement_references(Connection* self) void _pysqlite_drop_unused_statement_references(pysqlite_Connection* self)
{ {
PyObject* new_list; PyObject* new_list;
PyObject* weakref; PyObject* weakref;
@ -676,7 +676,7 @@ void _drop_unused_statement_references(Connection* self)
self->statements = new_list; self->statements = new_list;
} }
PyObject* connection_create_function(Connection* self, PyObject* args, PyObject* kwargs) PyObject* pysqlite_connection_create_function(pysqlite_Connection* self, PyObject* args, PyObject* kwargs)
{ {
static char *kwlist[] = {"name", "narg", "func", NULL, NULL}; static char *kwlist[] = {"name", "narg", "func", NULL, NULL};
@ -691,11 +691,11 @@ PyObject* connection_create_function(Connection* self, PyObject* args, PyObject*
return NULL; return NULL;
} }
rc = sqlite3_create_function(self->db, name, narg, SQLITE_UTF8, (void*)func, _func_callback, NULL, NULL); rc = sqlite3_create_function(self->db, name, narg, SQLITE_UTF8, (void*)func, _pysqlite_func_callback, NULL, NULL);
if (rc != SQLITE_OK) { if (rc != SQLITE_OK) {
/* Workaround for SQLite bug: no error code or string is available here */ /* Workaround for SQLite bug: no error code or string is available here */
PyErr_SetString(OperationalError, "Error creating function"); PyErr_SetString(pysqlite_OperationalError, "Error creating function");
return NULL; return NULL;
} else { } else {
PyDict_SetItem(self->function_pinboard, func, Py_None); PyDict_SetItem(self->function_pinboard, func, Py_None);
@ -705,7 +705,7 @@ PyObject* connection_create_function(Connection* self, PyObject* args, PyObject*
} }
} }
PyObject* connection_create_aggregate(Connection* self, PyObject* args, PyObject* kwargs) PyObject* pysqlite_connection_create_aggregate(pysqlite_Connection* self, PyObject* args, PyObject* kwargs)
{ {
PyObject* aggregate_class; PyObject* aggregate_class;
@ -719,10 +719,10 @@ PyObject* connection_create_aggregate(Connection* self, PyObject* args, PyObject
return NULL; return NULL;
} }
rc = sqlite3_create_function(self->db, name, n_arg, SQLITE_UTF8, (void*)aggregate_class, 0, &_step_callback, &_final_callback); rc = sqlite3_create_function(self->db, name, n_arg, SQLITE_UTF8, (void*)aggregate_class, 0, &_pysqlite_step_callback, &_pysqlite_final_callback);
if (rc != SQLITE_OK) { if (rc != SQLITE_OK) {
/* Workaround for SQLite bug: no error code or string is available here */ /* Workaround for SQLite bug: no error code or string is available here */
PyErr_SetString(OperationalError, "Error creating aggregate"); PyErr_SetString(pysqlite_OperationalError, "Error creating aggregate");
return NULL; return NULL;
} else { } else {
PyDict_SetItem(self->function_pinboard, aggregate_class, Py_None); PyDict_SetItem(self->function_pinboard, aggregate_class, Py_None);
@ -732,7 +732,7 @@ PyObject* connection_create_aggregate(Connection* self, PyObject* args, PyObject
} }
} }
int _authorizer_callback(void* user_arg, int action, const char* arg1, const char* arg2 , const char* dbname, const char* access_attempt_source) static int _authorizer_callback(void* user_arg, int action, const char* arg1, const char* arg2 , const char* dbname, const char* access_attempt_source)
{ {
PyObject *ret; PyObject *ret;
int rc; int rc;
@ -762,7 +762,7 @@ int _authorizer_callback(void* user_arg, int action, const char* arg1, const cha
return rc; return rc;
} }
PyObject* connection_set_authorizer(Connection* self, PyObject* args, PyObject* kwargs) PyObject* pysqlite_connection_set_authorizer(pysqlite_Connection* self, PyObject* args, PyObject* kwargs)
{ {
PyObject* authorizer_cb; PyObject* authorizer_cb;
@ -777,7 +777,7 @@ PyObject* connection_set_authorizer(Connection* self, PyObject* args, PyObject*
rc = sqlite3_set_authorizer(self->db, _authorizer_callback, (void*)authorizer_cb); rc = sqlite3_set_authorizer(self->db, _authorizer_callback, (void*)authorizer_cb);
if (rc != SQLITE_OK) { if (rc != SQLITE_OK) {
PyErr_SetString(OperationalError, "Error setting authorizer callback"); PyErr_SetString(pysqlite_OperationalError, "Error setting authorizer callback");
return NULL; return NULL;
} else { } else {
PyDict_SetItem(self->function_pinboard, authorizer_cb, Py_None); PyDict_SetItem(self->function_pinboard, authorizer_cb, Py_None);
@ -787,11 +787,11 @@ PyObject* connection_set_authorizer(Connection* self, PyObject* args, PyObject*
} }
} }
int check_thread(Connection* self) int pysqlite_check_thread(pysqlite_Connection* self)
{ {
if (self->check_same_thread) { if (self->check_same_thread) {
if (PyThread_get_thread_ident() != self->thread_ident) { if (PyThread_get_thread_ident() != self->thread_ident) {
PyErr_Format(ProgrammingError, PyErr_Format(pysqlite_ProgrammingError,
"SQLite objects created in a thread can only be used in that same thread." "SQLite objects created in a thread can only be used in that same thread."
"The object was created in thread id %ld and this is thread id %ld", "The object was created in thread id %ld and this is thread id %ld",
self->thread_ident, PyThread_get_thread_ident()); self->thread_ident, PyThread_get_thread_ident());
@ -803,22 +803,22 @@ int check_thread(Connection* self)
return 1; return 1;
} }
static PyObject* connection_get_isolation_level(Connection* self, void* unused) static PyObject* pysqlite_connection_get_isolation_level(pysqlite_Connection* self, void* unused)
{ {
Py_INCREF(self->isolation_level); Py_INCREF(self->isolation_level);
return self->isolation_level; return self->isolation_level;
} }
static PyObject* connection_get_total_changes(Connection* self, void* unused) static PyObject* pysqlite_connection_get_total_changes(pysqlite_Connection* self, void* unused)
{ {
if (!check_connection(self)) { if (!pysqlite_check_connection(self)) {
return NULL; return NULL;
} else { } else {
return Py_BuildValue("i", sqlite3_total_changes(self->db)); return Py_BuildValue("i", sqlite3_total_changes(self->db));
} }
} }
static int connection_set_isolation_level(Connection* self, PyObject* isolation_level) static int pysqlite_connection_set_isolation_level(pysqlite_Connection* self, PyObject* isolation_level)
{ {
PyObject* res; PyObject* res;
PyObject* begin_statement; PyObject* begin_statement;
@ -834,7 +834,7 @@ static int connection_set_isolation_level(Connection* self, PyObject* isolation_
Py_INCREF(Py_None); Py_INCREF(Py_None);
self->isolation_level = Py_None; self->isolation_level = Py_None;
res = connection_commit(self, NULL); res = pysqlite_connection_commit(self, NULL);
if (!res) { if (!res) {
return -1; return -1;
} }
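
Note the commit in the None branch: assigning isolation_level commits any transaction that is still open, and None then puts the connection into autocommit mode. Sketch:

    import sqlite3

    con = sqlite3.connect(":memory:")            # implicit transactions by default
    con.execute("create table t(x)")
    con.execute("insert into t(x) values (1)")   # opens a transaction

    con.isolation_level = None                   # commits the pending insert,
                                                 # then switches to autocommit
    con.execute("insert into t(x) values (2)")   # committed immediately
    assert con.execute("select count(*) from t").fetchone()[0] == 2
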
@ -866,10 +866,10 @@ static int connection_set_isolation_level(Connection* self, PyObject* isolation_
return 0; return 0;
} }
PyObject* connection_call(Connection* self, PyObject* args, PyObject* kwargs) PyObject* pysqlite_connection_call(pysqlite_Connection* self, PyObject* args, PyObject* kwargs)
{ {
PyObject* sql; PyObject* sql;
Statement* statement; pysqlite_Statement* statement;
PyObject* weakref; PyObject* weakref;
int rc; int rc;
@ -877,22 +877,22 @@ PyObject* connection_call(Connection* self, PyObject* args, PyObject* kwargs)
return NULL; return NULL;
} }
_drop_unused_statement_references(self); _pysqlite_drop_unused_statement_references(self);
statement = PyObject_New(Statement, &StatementType); statement = PyObject_New(pysqlite_Statement, &pysqlite_StatementType);
if (!statement) { if (!statement) {
return NULL; return NULL;
} }
rc = statement_create(statement, self, sql); rc = pysqlite_statement_create(statement, self, sql);
if (rc != SQLITE_OK) { if (rc != SQLITE_OK) {
if (rc == PYSQLITE_TOO_MUCH_SQL) { if (rc == PYSQLITE_TOO_MUCH_SQL) {
PyErr_SetString(Warning, "You can only execute one statement at a time."); PyErr_SetString(pysqlite_Warning, "You can only execute one statement at a time.");
} else if (rc == PYSQLITE_SQL_WRONG_TYPE) { } else if (rc == PYSQLITE_SQL_WRONG_TYPE) {
PyErr_SetString(Warning, "SQL is of wrong type. Must be string or unicode."); PyErr_SetString(pysqlite_Warning, "SQL is of wrong type. Must be string or unicode.");
} else { } else {
_seterror(self->db); _pysqlite_seterror(self->db);
} }
Py_DECREF(statement); Py_DECREF(statement);
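
The "one statement at a time" Warning above is exactly what execute() raises when handed a batch of SQL; executescript() is the intended way to run several statements at once. Sketch:

    import sqlite3

    con = sqlite3.connect(":memory:")

    try:
        con.execute("create table a(x); create table b(y)")
    except sqlite3.Warning:
        pass    # "You can only execute one statement at a time."

    # executescript() accepts a whole batch.
    con.executescript("""
        create table a(x);
        create table b(y);
        insert into a(x) values (1);
    """)
    assert con.execute("select x from a").fetchone()[0] == 1
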
@ -918,7 +918,7 @@ error:
return (PyObject*)statement; return (PyObject*)statement;
} }
PyObject* connection_execute(Connection* self, PyObject* args, PyObject* kwargs) PyObject* pysqlite_connection_execute(pysqlite_Connection* self, PyObject* args, PyObject* kwargs)
{ {
PyObject* cursor = 0; PyObject* cursor = 0;
PyObject* result = 0; PyObject* result = 0;
@ -949,7 +949,7 @@ error:
return cursor; return cursor;
} }
PyObject* connection_executemany(Connection* self, PyObject* args, PyObject* kwargs) PyObject* pysqlite_connection_executemany(pysqlite_Connection* self, PyObject* args, PyObject* kwargs)
{ {
PyObject* cursor = 0; PyObject* cursor = 0;
PyObject* result = 0; PyObject* result = 0;
@ -980,7 +980,7 @@ error:
return cursor; return cursor;
} }
PyObject* connection_executescript(Connection* self, PyObject* args, PyObject* kwargs) PyObject* pysqlite_connection_executescript(pysqlite_Connection* self, PyObject* args, PyObject* kwargs)
{ {
PyObject* cursor = 0; PyObject* cursor = 0;
PyObject* result = 0; PyObject* result = 0;
@ -1014,7 +1014,7 @@ error:
/* ------------------------- COLLATION CODE ------------------------ */ /* ------------------------- COLLATION CODE ------------------------ */
static int static int
collation_callback( pysqlite_collation_callback(
void* context, void* context,
int text1_length, const void* text1_data, int text1_length, const void* text1_data,
int text2_length, const void* text2_data) int text2_length, const void* text2_data)
@ -1063,11 +1063,11 @@ finally:
} }
static PyObject * static PyObject *
connection_interrupt(Connection* self, PyObject* args) pysqlite_connection_interrupt(pysqlite_Connection* self, PyObject* args)
{ {
PyObject* retval = NULL; PyObject* retval = NULL;
if (!check_connection(self)) { if (!pysqlite_check_connection(self)) {
goto finally; goto finally;
} }
@ -1081,7 +1081,7 @@ finally:
} }
static PyObject * static PyObject *
connection_create_collation(Connection* self, PyObject* args) pysqlite_connection_create_collation(pysqlite_Connection* self, PyObject* args)
{ {
PyObject* callable; PyObject* callable;
PyObject* uppercase_name = 0; PyObject* uppercase_name = 0;
@ -1090,7 +1090,7 @@ connection_create_collation(Connection* self, PyObject* args)
char* chk; char* chk;
int rc; int rc;
if (!check_thread(self) || !check_connection(self)) { if (!pysqlite_check_thread(self) || !pysqlite_check_connection(self)) {
goto finally; goto finally;
} }
@ -1111,7 +1111,7 @@ connection_create_collation(Connection* self, PyObject* args)
{ {
chk++; chk++;
} else { } else {
PyErr_SetString(ProgrammingError, "invalid character in collation name"); PyErr_SetString(pysqlite_ProgrammingError, "invalid character in collation name");
goto finally; goto finally;
} }
} }
@ -1131,10 +1131,10 @@ connection_create_collation(Connection* self, PyObject* args)
PyString_AsString(uppercase_name), PyString_AsString(uppercase_name),
SQLITE_UTF8, SQLITE_UTF8,
(callable != Py_None) ? callable : NULL, (callable != Py_None) ? callable : NULL,
(callable != Py_None) ? collation_callback : NULL); (callable != Py_None) ? pysqlite_collation_callback : NULL);
if (rc != SQLITE_OK) { if (rc != SQLITE_OK) {
PyDict_DelItem(self->collations, uppercase_name); PyDict_DelItem(self->collations, uppercase_name);
_seterror(self->db); _pysqlite_seterror(self->db);
goto finally; goto finally;
} }
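
create_collation() is the Python-level entry point for this collation code; as the check above shows, names are restricted to letters, digits and underscores, and passing None for the callable removes a collation again. A usage sketch:

    import sqlite3

    def collate_reverse(a, b):
        # A collation returns negative, zero or positive, like cmp().
        if a == b:
            return 0
        return 1 if a < b else -1

    con = sqlite3.connect(":memory:")
    con.create_collation("reverse", collate_reverse)

    con.execute("create table t(s)")
    con.executemany("insert into t(s) values (?)", [("a",), ("b",), ("c",)])
    rows = con.execute("select s from t order by s collate reverse").fetchall()
    assert [r[0] for r in rows] == ["c", "b", "a"]

    con.create_collation("reverse", None)   # None unregisters the collation
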
@ -1155,63 +1155,63 @@ static char connection_doc[] =
PyDoc_STR("SQLite database connection object."); PyDoc_STR("SQLite database connection object.");
static PyGetSetDef connection_getset[] = { static PyGetSetDef connection_getset[] = {
{"isolation_level", (getter)connection_get_isolation_level, (setter)connection_set_isolation_level}, {"isolation_level", (getter)pysqlite_connection_get_isolation_level, (setter)pysqlite_connection_set_isolation_level},
{"total_changes", (getter)connection_get_total_changes, (setter)0}, {"total_changes", (getter)pysqlite_connection_get_total_changes, (setter)0},
{NULL} {NULL}
}; };
static PyMethodDef connection_methods[] = { static PyMethodDef connection_methods[] = {
{"cursor", (PyCFunction)connection_cursor, METH_VARARGS|METH_KEYWORDS, {"cursor", (PyCFunction)pysqlite_connection_cursor, METH_VARARGS|METH_KEYWORDS,
PyDoc_STR("Return a cursor for the connection.")}, PyDoc_STR("Return a cursor for the connection.")},
{"close", (PyCFunction)connection_close, METH_NOARGS, {"close", (PyCFunction)pysqlite_connection_close, METH_NOARGS,
PyDoc_STR("Closes the connection.")}, PyDoc_STR("Closes the connection.")},
{"commit", (PyCFunction)connection_commit, METH_NOARGS, {"commit", (PyCFunction)pysqlite_connection_commit, METH_NOARGS,
PyDoc_STR("Commit the current transaction.")}, PyDoc_STR("Commit the current transaction.")},
{"rollback", (PyCFunction)connection_rollback, METH_NOARGS, {"rollback", (PyCFunction)pysqlite_connection_rollback, METH_NOARGS,
PyDoc_STR("Roll back the current transaction.")}, PyDoc_STR("Roll back the current transaction.")},
{"create_function", (PyCFunction)connection_create_function, METH_VARARGS|METH_KEYWORDS, {"create_function", (PyCFunction)pysqlite_connection_create_function, METH_VARARGS|METH_KEYWORDS,
PyDoc_STR("Creates a new function. Non-standard.")}, PyDoc_STR("Creates a new function. Non-standard.")},
{"create_aggregate", (PyCFunction)connection_create_aggregate, METH_VARARGS|METH_KEYWORDS, {"create_aggregate", (PyCFunction)pysqlite_connection_create_aggregate, METH_VARARGS|METH_KEYWORDS,
PyDoc_STR("Creates a new aggregate. Non-standard.")}, PyDoc_STR("Creates a new aggregate. Non-standard.")},
{"set_authorizer", (PyCFunction)connection_set_authorizer, METH_VARARGS|METH_KEYWORDS, {"set_authorizer", (PyCFunction)pysqlite_connection_set_authorizer, METH_VARARGS|METH_KEYWORDS,
PyDoc_STR("Sets authorizer callback. Non-standard.")}, PyDoc_STR("Sets authorizer callback. Non-standard.")},
{"execute", (PyCFunction)connection_execute, METH_VARARGS, {"execute", (PyCFunction)pysqlite_connection_execute, METH_VARARGS,
PyDoc_STR("Executes a SQL statement. Non-standard.")}, PyDoc_STR("Executes a SQL statement. Non-standard.")},
{"executemany", (PyCFunction)connection_executemany, METH_VARARGS, {"executemany", (PyCFunction)pysqlite_connection_executemany, METH_VARARGS,
PyDoc_STR("Repeatedly executes a SQL statement. Non-standard.")}, PyDoc_STR("Repeatedly executes a SQL statement. Non-standard.")},
{"executescript", (PyCFunction)connection_executescript, METH_VARARGS, {"executescript", (PyCFunction)pysqlite_connection_executescript, METH_VARARGS,
PyDoc_STR("Executes a multiple SQL statements at once. Non-standard.")}, PyDoc_STR("Executes a multiple SQL statements at once. Non-standard.")},
{"create_collation", (PyCFunction)connection_create_collation, METH_VARARGS, {"create_collation", (PyCFunction)pysqlite_connection_create_collation, METH_VARARGS,
PyDoc_STR("Creates a collation function. Non-standard.")}, PyDoc_STR("Creates a collation function. Non-standard.")},
{"interrupt", (PyCFunction)connection_interrupt, METH_NOARGS, {"interrupt", (PyCFunction)pysqlite_connection_interrupt, METH_NOARGS,
PyDoc_STR("Abort any pending database operation. Non-standard.")}, PyDoc_STR("Abort any pending database operation. Non-standard.")},
{NULL, NULL} {NULL, NULL}
}; };
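
For context, the non-standard create_function/create_aggregate entries in the method table map onto the user-facing API like this (a sketch; sign and Longest are made-up examples):

    import sqlite3

    def sign(x):
        return (x > 0) - (x < 0)

    class Longest:
        # Aggregate classes provide step() and finalize().
        def __init__(self):
            self.best = ""
        def step(self, value):
            if value is not None and len(value) > len(self.best):
                self.best = value
        def finalize(self):
            return self.best

    con = sqlite3.connect(":memory:")
    con.create_function("sign", 1, sign)          # name, number of args, callable
    con.create_aggregate("longest", 1, Longest)   # name, number of args, class

    con.execute("create table t(n, s)")
    con.executemany("insert into t values (?, ?)",
                    [(-5, "ab"), (0, "abcd"), (7, "c")])
    assert sorted(r[0] for r in con.execute("select sign(n) from t")) == [-1, 0, 1]
    assert con.execute("select longest(s) from t").fetchone()[0] == "abcd"
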
static struct PyMemberDef connection_members[] = static struct PyMemberDef connection_members[] =
{ {
{"Warning", T_OBJECT, offsetof(Connection, Warning), RO}, {"Warning", T_OBJECT, offsetof(pysqlite_Connection, Warning), RO},
{"Error", T_OBJECT, offsetof(Connection, Error), RO}, {"Error", T_OBJECT, offsetof(pysqlite_Connection, Error), RO},
{"InterfaceError", T_OBJECT, offsetof(Connection, InterfaceError), RO}, {"InterfaceError", T_OBJECT, offsetof(pysqlite_Connection, InterfaceError), RO},
{"DatabaseError", T_OBJECT, offsetof(Connection, DatabaseError), RO}, {"DatabaseError", T_OBJECT, offsetof(pysqlite_Connection, DatabaseError), RO},
{"DataError", T_OBJECT, offsetof(Connection, DataError), RO}, {"DataError", T_OBJECT, offsetof(pysqlite_Connection, DataError), RO},
{"OperationalError", T_OBJECT, offsetof(Connection, OperationalError), RO}, {"OperationalError", T_OBJECT, offsetof(pysqlite_Connection, OperationalError), RO},
{"IntegrityError", T_OBJECT, offsetof(Connection, IntegrityError), RO}, {"IntegrityError", T_OBJECT, offsetof(pysqlite_Connection, IntegrityError), RO},
{"InternalError", T_OBJECT, offsetof(Connection, InternalError), RO}, {"InternalError", T_OBJECT, offsetof(pysqlite_Connection, InternalError), RO},
{"ProgrammingError", T_OBJECT, offsetof(Connection, ProgrammingError), RO}, {"ProgrammingError", T_OBJECT, offsetof(pysqlite_Connection, ProgrammingError), RO},
{"NotSupportedError", T_OBJECT, offsetof(Connection, NotSupportedError), RO}, {"NotSupportedError", T_OBJECT, offsetof(pysqlite_Connection, NotSupportedError), RO},
{"row_factory", T_OBJECT, offsetof(Connection, row_factory)}, {"row_factory", T_OBJECT, offsetof(pysqlite_Connection, row_factory)},
{"text_factory", T_OBJECT, offsetof(Connection, text_factory)}, {"text_factory", T_OBJECT, offsetof(pysqlite_Connection, text_factory)},
{NULL} {NULL}
}; };
PyTypeObject ConnectionType = { PyTypeObject pysqlite_ConnectionType = {
PyObject_HEAD_INIT(NULL) PyObject_HEAD_INIT(NULL)
0, /* ob_size */ 0, /* ob_size */
MODULE_NAME ".Connection", /* tp_name */ MODULE_NAME ".Connection", /* tp_name */
sizeof(Connection), /* tp_basicsize */ sizeof(pysqlite_Connection), /* tp_basicsize */
0, /* tp_itemsize */ 0, /* tp_itemsize */
(destructor)connection_dealloc, /* tp_dealloc */ (destructor)pysqlite_connection_dealloc, /* tp_dealloc */
0, /* tp_print */ 0, /* tp_print */
0, /* tp_getattr */ 0, /* tp_getattr */
0, /* tp_setattr */ 0, /* tp_setattr */
@ -1221,7 +1221,7 @@ PyTypeObject ConnectionType = {
0, /* tp_as_sequence */ 0, /* tp_as_sequence */
0, /* tp_as_mapping */ 0, /* tp_as_mapping */
0, /* tp_hash */ 0, /* tp_hash */
(ternaryfunc)connection_call, /* tp_call */ (ternaryfunc)pysqlite_connection_call, /* tp_call */
0, /* tp_str */ 0, /* tp_str */
0, /* tp_getattro */ 0, /* tp_getattro */
0, /* tp_setattro */ 0, /* tp_setattro */
@ -1242,14 +1242,14 @@ PyTypeObject ConnectionType = {
0, /* tp_descr_get */ 0, /* tp_descr_get */
0, /* tp_descr_set */ 0, /* tp_descr_set */
0, /* tp_dictoffset */ 0, /* tp_dictoffset */
(initproc)connection_init, /* tp_init */ (initproc)pysqlite_connection_init, /* tp_init */
0, /* tp_alloc */ 0, /* tp_alloc */
0, /* tp_new */ 0, /* tp_new */
0 /* tp_free */ 0 /* tp_free */
}; };
extern int connection_setup_types(void) extern int pysqlite_connection_setup_types(void)
{ {
ConnectionType.tp_new = PyType_GenericNew; pysqlite_ConnectionType.tp_new = PyType_GenericNew;
return PyType_Ready(&ConnectionType); return PyType_Ready(&pysqlite_ConnectionType);
} }


@ -66,7 +66,7 @@ typedef struct
/* thread identification of the thread the connection was created in */ /* thread identification of the thread the connection was created in */
long thread_ident; long thread_ident;
Cache* statement_cache; pysqlite_Cache* statement_cache;
/* A list of weak references to statements used within this connection */ /* A list of weak references to statements used within this connection */
PyObject* statements; PyObject* statements;
@ -106,24 +106,23 @@ typedef struct
PyObject* InternalError; PyObject* InternalError;
PyObject* ProgrammingError; PyObject* ProgrammingError;
PyObject* NotSupportedError; PyObject* NotSupportedError;
} Connection; } pysqlite_Connection;
extern PyTypeObject ConnectionType; extern PyTypeObject pysqlite_ConnectionType;
PyObject* connection_alloc(PyTypeObject* type, int aware); PyObject* pysqlite_connection_alloc(PyTypeObject* type, int aware);
void connection_dealloc(Connection* self); void pysqlite_connection_dealloc(pysqlite_Connection* self);
PyObject* connection_cursor(Connection* self, PyObject* args, PyObject* kwargs); PyObject* pysqlite_connection_cursor(pysqlite_Connection* self, PyObject* args, PyObject* kwargs);
PyObject* connection_close(Connection* self, PyObject* args); PyObject* pysqlite_connection_close(pysqlite_Connection* self, PyObject* args);
PyObject* _connection_begin(Connection* self); PyObject* _pysqlite_connection_begin(pysqlite_Connection* self);
PyObject* connection_begin(Connection* self, PyObject* args); PyObject* pysqlite_connection_commit(pysqlite_Connection* self, PyObject* args);
PyObject* connection_commit(Connection* self, PyObject* args); PyObject* pysqlite_connection_rollback(pysqlite_Connection* self, PyObject* args);
PyObject* connection_rollback(Connection* self, PyObject* args); PyObject* pysqlite_connection_new(PyTypeObject* type, PyObject* args, PyObject* kw);
PyObject* connection_new(PyTypeObject* type, PyObject* args, PyObject* kw); int pysqlite_connection_init(pysqlite_Connection* self, PyObject* args, PyObject* kwargs);
int connection_init(Connection* self, PyObject* args, PyObject* kwargs);
int check_thread(Connection* self); int pysqlite_check_thread(pysqlite_Connection* self);
int check_connection(Connection* con); int pysqlite_check_connection(pysqlite_Connection* con);
int connection_setup_types(void); int pysqlite_connection_setup_types(void);
#endif #endif


@ -34,9 +34,9 @@
#define INT32_MAX 2147483647 #define INT32_MAX 2147483647
#endif #endif
PyObject* cursor_iternext(Cursor *self); PyObject* pysqlite_cursor_iternext(pysqlite_Cursor* self);
static StatementKind detect_statement_type(char* statement) static pysqlite_StatementKind detect_statement_type(char* statement)
{ {
char buf[20]; char buf[20];
char* src; char* src;
@ -74,11 +74,11 @@ static StatementKind detect_statement_type(char* statement)
} }
} }
int cursor_init(Cursor* self, PyObject* args, PyObject* kwargs) int pysqlite_cursor_init(pysqlite_Cursor* self, PyObject* args, PyObject* kwargs)
{ {
Connection* connection; pysqlite_Connection* connection;
if (!PyArg_ParseTuple(args, "O!", &ConnectionType, &connection)) if (!PyArg_ParseTuple(args, "O!", &pysqlite_ConnectionType, &connection))
{ {
return -1; return -1;
} }
@ -109,20 +109,20 @@ int cursor_init(Cursor* self, PyObject* args, PyObject* kwargs)
Py_INCREF(Py_None); Py_INCREF(Py_None);
self->row_factory = Py_None; self->row_factory = Py_None;
if (!check_thread(self->connection)) { if (!pysqlite_check_thread(self->connection)) {
return -1; return -1;
} }
return 0; return 0;
} }
void cursor_dealloc(Cursor* self) void pysqlite_cursor_dealloc(pysqlite_Cursor* self)
{ {
int rc; int rc;
/* Reset the statement if the user has not closed the cursor */ /* Reset the statement if the user has not closed the cursor */
if (self->statement) { if (self->statement) {
rc = statement_reset(self->statement); rc = pysqlite_statement_reset(self->statement);
Py_DECREF(self->statement); Py_DECREF(self->statement);
} }
@ -137,7 +137,7 @@ void cursor_dealloc(Cursor* self)
self->ob_type->tp_free((PyObject*)self); self->ob_type->tp_free((PyObject*)self);
} }
PyObject* _get_converter(PyObject* key) PyObject* _pysqlite_get_converter(PyObject* key)
{ {
PyObject* upcase_key; PyObject* upcase_key;
PyObject* retval; PyObject* retval;
@ -153,7 +153,7 @@ PyObject* _get_converter(PyObject* key)
return retval; return retval;
} }
int build_row_cast_map(Cursor* self) int pysqlite_build_row_cast_map(pysqlite_Cursor* self)
{ {
int i; int i;
const char* type_start = (const char*)-1; const char* type_start = (const char*)-1;
@ -175,7 +175,7 @@ int build_row_cast_map(Cursor* self)
for (i = 0; i < sqlite3_column_count(self->statement->st); i++) { for (i = 0; i < sqlite3_column_count(self->statement->st); i++) {
converter = NULL; converter = NULL;
if (self->connection->detect_types | PARSE_COLNAMES) { if (self->connection->detect_types & PARSE_COLNAMES) {
colname = sqlite3_column_name(self->statement->st, i); colname = sqlite3_column_name(self->statement->st, i);
if (colname) { if (colname) {
for (pos = colname; *pos != 0; pos++) { for (pos = colname; *pos != 0; pos++) {
@ -190,7 +190,7 @@ int build_row_cast_map(Cursor* self)
break; break;
} }
converter = _get_converter(key); converter = _pysqlite_get_converter(key);
Py_DECREF(key); Py_DECREF(key);
break; break;
} }
@ -198,7 +198,7 @@ int build_row_cast_map(Cursor* self)
} }
} }
if (!converter && self->connection->detect_types | PARSE_DECLTYPES) { if (!converter && self->connection->detect_types & PARSE_DECLTYPES) {
decltype = sqlite3_column_decltype(self->statement->st, i); decltype = sqlite3_column_decltype(self->statement->st, i);
if (decltype) { if (decltype) {
for (pos = decltype;;pos++) { for (pos = decltype;;pos++) {
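
The two | to & changes above are a real bug fix hidden among the renames: OR-ing the flag into detect_types is always non-zero, so the old code took the column-name and decltype branches even when the user asked for no type detection. A bitwise AND tests the individual flag; in Python terms, with the module's real constants:

    import sqlite3

    detect_types = 0                                   # no type detection requested

    # Buggy form: OR-ing the flag in makes the test unconditionally true.
    assert (detect_types | sqlite3.PARSE_COLNAMES) != 0

    # Fixed form: AND-ing isolates the one bit that matters.
    assert (detect_types & sqlite3.PARSE_COLNAMES) == 0

    detect_types = sqlite3.PARSE_DECLTYPES
    assert detect_types & sqlite3.PARSE_DECLTYPES
    assert not (detect_types & sqlite3.PARSE_COLNAMES)
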
@ -211,7 +211,7 @@ int build_row_cast_map(Cursor* self)
} }
} }
converter = _get_converter(py_decltype); converter = _pysqlite_get_converter(py_decltype);
Py_DECREF(py_decltype); Py_DECREF(py_decltype);
} }
} }
@ -234,7 +234,7 @@ int build_row_cast_map(Cursor* self)
return 0; return 0;
} }
PyObject* _build_column_name(const char* colname) PyObject* _pysqlite_build_column_name(const char* colname)
{ {
const char* pos; const char* pos;
@ -253,7 +253,7 @@ PyObject* _build_column_name(const char* colname)
} }
} }
PyObject* unicode_from_string(const char* val_str, int optimize) PyObject* pysqlite_unicode_from_string(const char* val_str, int optimize)
{ {
const char* check; const char* check;
int is_ascii = 0; int is_ascii = 0;
@ -285,7 +285,7 @@ PyObject* unicode_from_string(const char* val_str, int optimize)
* Precondidition: * Precondidition:
* - sqlite3_step() has been called before and it returned SQLITE_ROW. * - sqlite3_step() has been called before and it returned SQLITE_ROW.
*/ */
PyObject* _fetch_one_row(Cursor* self) PyObject* _pysqlite_fetch_one_row(pysqlite_Cursor* self)
{ {
int i, numcols; int i, numcols;
PyObject* row; PyObject* row;
@ -356,10 +356,10 @@ PyObject* _fetch_one_row(Cursor* self)
} else if (coltype == SQLITE_TEXT) { } else if (coltype == SQLITE_TEXT) {
val_str = (const char*)sqlite3_column_text(self->statement->st, i); val_str = (const char*)sqlite3_column_text(self->statement->st, i);
if ((self->connection->text_factory == (PyObject*)&PyUnicode_Type) if ((self->connection->text_factory == (PyObject*)&PyUnicode_Type)
|| (self->connection->text_factory == OptimizedUnicode)) { || (self->connection->text_factory == pysqlite_OptimizedUnicode)) {
converted = unicode_from_string(val_str, converted = pysqlite_unicode_from_string(val_str,
self->connection->text_factory == OptimizedUnicode ? 1 : 0); self->connection->text_factory == pysqlite_OptimizedUnicode ? 1 : 0);
if (!converted) { if (!converted) {
colname = sqlite3_column_name(self->statement->st, i); colname = sqlite3_column_name(self->statement->st, i);
@ -368,7 +368,7 @@ PyObject* _fetch_one_row(Cursor* self)
} }
PyOS_snprintf(buf, sizeof(buf) - 1, "Could not decode to UTF-8 column '%s' with text '%s'", PyOS_snprintf(buf, sizeof(buf) - 1, "Could not decode to UTF-8 column '%s' with text '%s'",
colname , val_str); colname , val_str);
PyErr_SetString(OperationalError, buf); PyErr_SetString(pysqlite_OperationalError, buf);
} }
} else if (self->connection->text_factory == (PyObject*)&PyString_Type) { } else if (self->connection->text_factory == (PyObject*)&PyString_Type) {
converted = PyString_FromString(val_str); converted = PyString_FromString(val_str);
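
The branches above are selected by connection.text_factory; at the Python level the options look roughly like this (a sketch for the Python 2 API, where OptimizedUnicode returns str for pure-ASCII text and unicode otherwise, which is what the optimize flag passed to pysqlite_unicode_from_string implements):

    import sqlite3

    con = sqlite3.connect(":memory:")
    cur = con.cursor()

    cur.execute("select 'plain ascii'")
    assert type(cur.fetchone()[0]) is unicode        # default: always unicode

    con.text_factory = str                           # raw UTF-8 bytestrings instead
    cur.execute("select 'plain ascii'")
    assert type(cur.fetchone()[0]) is str

    con.text_factory = sqlite3.OptimizedUnicode      # str for ASCII, unicode otherwise
    cur.execute("select 'plain ascii'")
    assert type(cur.fetchone()[0]) is str
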
@ -406,7 +406,7 @@ PyObject* _fetch_one_row(Cursor* self)
return row; return row;
} }
PyObject* _query_execute(Cursor* self, int multiple, PyObject* args) PyObject* _pysqlite_query_execute(pysqlite_Cursor* self, int multiple, PyObject* args)
{ {
PyObject* operation; PyObject* operation;
PyObject* operation_bytestr = NULL; PyObject* operation_bytestr = NULL;
@ -425,7 +425,7 @@ PyObject* _query_execute(Cursor* self, int multiple, PyObject* args)
PyObject* second_argument = NULL; PyObject* second_argument = NULL;
long rowcount = 0; long rowcount = 0;
if (!check_thread(self->connection) || !check_connection(self->connection)) { if (!pysqlite_check_thread(self->connection) || !pysqlite_check_connection(self->connection)) {
return NULL; return NULL;
} }
@ -492,7 +492,7 @@ PyObject* _query_execute(Cursor* self, int multiple, PyObject* args)
if (self->statement != NULL) { if (self->statement != NULL) {
/* There is an active statement */ /* There is an active statement */
rc = statement_reset(self->statement); rc = pysqlite_statement_reset(self->statement);
} }
if (PyString_Check(operation)) { if (PyString_Check(operation)) {
@ -525,7 +525,7 @@ PyObject* _query_execute(Cursor* self, int multiple, PyObject* args)
case STATEMENT_INSERT: case STATEMENT_INSERT:
case STATEMENT_REPLACE: case STATEMENT_REPLACE:
if (!self->connection->inTransaction) { if (!self->connection->inTransaction) {
result = _connection_begin(self->connection); result = _pysqlite_connection_begin(self->connection);
if (!result) { if (!result) {
goto error; goto error;
} }
@ -536,7 +536,7 @@ PyObject* _query_execute(Cursor* self, int multiple, PyObject* args)
/* it's a DDL statement or something similar /* it's a DDL statement or something similar
- we better COMMIT first so it works for all cases */ - we better COMMIT first so it works for all cases */
if (self->connection->inTransaction) { if (self->connection->inTransaction) {
result = connection_commit(self->connection, NULL); result = pysqlite_connection_commit(self->connection, NULL);
if (!result) { if (!result) {
goto error; goto error;
} }
@ -545,7 +545,7 @@ PyObject* _query_execute(Cursor* self, int multiple, PyObject* args)
break; break;
case STATEMENT_SELECT: case STATEMENT_SELECT:
if (multiple) { if (multiple) {
PyErr_SetString(ProgrammingError, PyErr_SetString(pysqlite_ProgrammingError,
"You cannot execute SELECT statements in executemany()."); "You cannot execute SELECT statements in executemany().");
goto error; goto error;
} }
@ -563,11 +563,11 @@ PyObject* _query_execute(Cursor* self, int multiple, PyObject* args)
} }
if (self->statement) { if (self->statement) {
(void)statement_reset(self->statement); (void)pysqlite_statement_reset(self->statement);
Py_DECREF(self->statement); Py_DECREF(self->statement);
} }
self->statement = (Statement*)cache_get(self->connection->statement_cache, func_args); self->statement = (pysqlite_Statement*)pysqlite_cache_get(self->connection->statement_cache, func_args);
Py_DECREF(func_args); Py_DECREF(func_args);
if (!self->statement) { if (!self->statement) {
@ -576,19 +576,19 @@ PyObject* _query_execute(Cursor* self, int multiple, PyObject* args)
if (self->statement->in_use) { if (self->statement->in_use) {
Py_DECREF(self->statement); Py_DECREF(self->statement);
self->statement = PyObject_New(Statement, &StatementType); self->statement = PyObject_New(pysqlite_Statement, &pysqlite_StatementType);
if (!self->statement) { if (!self->statement) {
goto error; goto error;
} }
rc = statement_create(self->statement, self->connection, operation); rc = pysqlite_statement_create(self->statement, self->connection, operation);
if (rc != SQLITE_OK) { if (rc != SQLITE_OK) {
self->statement = 0; self->statement = 0;
goto error; goto error;
} }
} }
statement_reset(self->statement); pysqlite_statement_reset(self->statement);
statement_mark_dirty(self->statement); pysqlite_statement_mark_dirty(self->statement);
while (1) { while (1) {
parameters = PyIter_Next(parameters_iter); parameters = PyIter_Next(parameters_iter);
@ -596,27 +596,37 @@ PyObject* _query_execute(Cursor* self, int multiple, PyObject* args)
break; break;
} }
statement_mark_dirty(self->statement); pysqlite_statement_mark_dirty(self->statement);
statement_bind_parameters(self->statement, parameters); pysqlite_statement_bind_parameters(self->statement, parameters);
if (PyErr_Occurred()) { if (PyErr_Occurred()) {
goto error; goto error;
} }
if (build_row_cast_map(self) != 0) { if (pysqlite_build_row_cast_map(self) != 0) {
PyErr_SetString(OperationalError, "Error while building row_cast_map"); PyErr_SetString(pysqlite_OperationalError, "Error while building row_cast_map");
goto error; goto error;
} }
-        rc = _sqlite_step_with_busyhandler(self->statement->st, self->connection);
-        if (rc != SQLITE_DONE && rc != SQLITE_ROW) {
-            rc = statement_reset(self->statement);
-            if (rc == SQLITE_SCHEMA) {
-                rc = statement_recompile(self->statement, parameters);
-                if (rc == SQLITE_OK) {
-                    rc = _sqlite_step_with_busyhandler(self->statement->st, self->connection);
-                } else {
-                    _seterror(self->connection->db);
-                    goto error;
-                }
+        /* Keep trying the SQL statement until the schema stops changing. */
+        while (1) {
+            /* Actually execute the SQL statement. */
+            rc = _sqlite_step_with_busyhandler(self->statement->st, self->connection);
+            if (rc == SQLITE_DONE || rc == SQLITE_ROW) {
+                /* If it worked, let's get out of the loop */
+                break;
+            }
+            /* Something went wrong. Re-set the statement and try again. */
+            rc = pysqlite_statement_reset(self->statement);
+            if (rc == SQLITE_SCHEMA) {
+                /* If this was a result of the schema changing, let's try
+                   again. */
+                rc = pysqlite_statement_recompile(self->statement, parameters);
+                if (rc == SQLITE_OK) {
+                    continue;
+                } else {
+                    /* If the database gave us an error, promote it to Python. */
+                    _pysqlite_seterror(self->connection->db);
+                    goto error;
+                }
             } else {
@ -628,7 +638,7 @@ PyObject* _query_execute(Cursor* self, int multiple, PyObject* args)
PyErr_Clear(); PyErr_Clear();
} }
} }
_seterror(self->connection->db); _pysqlite_seterror(self->connection->db);
goto error; goto error;
} }
} }
@ -649,7 +659,7 @@ PyObject* _query_execute(Cursor* self, int multiple, PyObject* args)
if (!descriptor) { if (!descriptor) {
goto error; goto error;
} }
PyTuple_SetItem(descriptor, 0, _build_column_name(sqlite3_column_name(self->statement->st, i))); PyTuple_SetItem(descriptor, 0, _pysqlite_build_column_name(sqlite3_column_name(self->statement->st, i)));
Py_INCREF(Py_None); PyTuple_SetItem(descriptor, 1, Py_None); Py_INCREF(Py_None); PyTuple_SetItem(descriptor, 1, Py_None);
Py_INCREF(Py_None); PyTuple_SetItem(descriptor, 2, Py_None); Py_INCREF(Py_None); PyTuple_SetItem(descriptor, 2, Py_None);
Py_INCREF(Py_None); PyTuple_SetItem(descriptor, 3, Py_None); Py_INCREF(Py_None); PyTuple_SetItem(descriptor, 3, Py_None);
@ -663,13 +673,13 @@ PyObject* _query_execute(Cursor* self, int multiple, PyObject* args)
if (rc == SQLITE_ROW) { if (rc == SQLITE_ROW) {
if (multiple) { if (multiple) {
PyErr_SetString(ProgrammingError, "executemany() can only execute DML statements."); PyErr_SetString(pysqlite_ProgrammingError, "executemany() can only execute DML statements.");
goto error; goto error;
} }
self->next_row = _fetch_one_row(self); self->next_row = _pysqlite_fetch_one_row(self);
} else if (rc == SQLITE_DONE && !multiple) { } else if (rc == SQLITE_DONE && !multiple) {
statement_reset(self->statement); pysqlite_statement_reset(self->statement);
Py_DECREF(self->statement); Py_DECREF(self->statement);
self->statement = 0; self->statement = 0;
} }
@ -698,7 +708,7 @@ PyObject* _query_execute(Cursor* self, int multiple, PyObject* args)
} }
if (multiple) { if (multiple) {
rc = statement_reset(self->statement); rc = pysqlite_statement_reset(self->statement);
} }
Py_XDECREF(parameters); Py_XDECREF(parameters);
} }
@ -717,17 +727,17 @@ error:
} }
} }
PyObject* cursor_execute(Cursor* self, PyObject* args) PyObject* pysqlite_cursor_execute(pysqlite_Cursor* self, PyObject* args)
{ {
return _query_execute(self, 0, args); return _pysqlite_query_execute(self, 0, args);
} }
PyObject* cursor_executemany(Cursor* self, PyObject* args) PyObject* pysqlite_cursor_executemany(pysqlite_Cursor* self, PyObject* args)
{ {
return _query_execute(self, 1, args); return _pysqlite_query_execute(self, 1, args);
} }
PyObject* cursor_executescript(Cursor* self, PyObject* args) PyObject* pysqlite_cursor_executescript(pysqlite_Cursor* self, PyObject* args)
{ {
PyObject* script_obj; PyObject* script_obj;
PyObject* script_str = NULL; PyObject* script_str = NULL;
@ -741,7 +751,7 @@ PyObject* cursor_executescript(Cursor* self, PyObject* args)
return NULL; return NULL;
} }
if (!check_thread(self->connection) || !check_connection(self->connection)) { if (!pysqlite_check_thread(self->connection) || !pysqlite_check_connection(self->connection)) {
return NULL; return NULL;
} }
@ -760,7 +770,7 @@ PyObject* cursor_executescript(Cursor* self, PyObject* args)
} }
/* commit first */ /* commit first */
result = connection_commit(self->connection, NULL); result = pysqlite_connection_commit(self->connection, NULL);
if (!result) { if (!result) {
goto error; goto error;
} }
@ -778,7 +788,7 @@ PyObject* cursor_executescript(Cursor* self, PyObject* args)
&statement, &statement,
&script_cstr); &script_cstr);
if (rc != SQLITE_OK) { if (rc != SQLITE_OK) {
_seterror(self->connection->db); _pysqlite_seterror(self->connection->db);
goto error; goto error;
} }
@ -790,13 +800,13 @@ PyObject* cursor_executescript(Cursor* self, PyObject* args)
if (rc != SQLITE_DONE) { if (rc != SQLITE_DONE) {
(void)sqlite3_finalize(statement); (void)sqlite3_finalize(statement);
_seterror(self->connection->db); _pysqlite_seterror(self->connection->db);
goto error; goto error;
} }
rc = sqlite3_finalize(statement); rc = sqlite3_finalize(statement);
if (rc != SQLITE_OK) { if (rc != SQLITE_OK) {
_seterror(self->connection->db); _pysqlite_seterror(self->connection->db);
goto error; goto error;
} }
} }
@ -805,7 +815,7 @@ error:
Py_XDECREF(script_str); Py_XDECREF(script_str);
if (!statement_completed) { if (!statement_completed) {
PyErr_SetString(ProgrammingError, "you did not provide a complete SQL statement"); PyErr_SetString(pysqlite_ProgrammingError, "you did not provide a complete SQL statement");
} }
if (PyErr_Occurred()) { if (PyErr_Occurred()) {
@ -816,25 +826,25 @@ error:
} }
} }
PyObject* cursor_getiter(Cursor *self) PyObject* pysqlite_cursor_getiter(pysqlite_Cursor *self)
{ {
Py_INCREF(self); Py_INCREF(self);
return (PyObject*)self; return (PyObject*)self;
} }
PyObject* cursor_iternext(Cursor *self) PyObject* pysqlite_cursor_iternext(pysqlite_Cursor *self)
{ {
PyObject* next_row_tuple; PyObject* next_row_tuple;
PyObject* next_row; PyObject* next_row;
int rc; int rc;
if (!check_thread(self->connection) || !check_connection(self->connection)) { if (!pysqlite_check_thread(self->connection) || !pysqlite_check_connection(self->connection)) {
return NULL; return NULL;
} }
if (!self->next_row) { if (!self->next_row) {
if (self->statement) { if (self->statement) {
(void)statement_reset(self->statement); (void)pysqlite_statement_reset(self->statement);
Py_DECREF(self->statement); Py_DECREF(self->statement);
self->statement = NULL; self->statement = NULL;
} }
@ -851,25 +861,27 @@ PyObject* cursor_iternext(Cursor *self)
next_row = next_row_tuple; next_row = next_row_tuple;
} }
-    rc = _sqlite_step_with_busyhandler(self->statement->st, self->connection);
-    if (rc != SQLITE_DONE && rc != SQLITE_ROW) {
-        Py_DECREF(next_row);
-        _seterror(self->connection->db);
-        return NULL;
-    }
-
-    if (rc == SQLITE_ROW) {
-        self->next_row = _fetch_one_row(self);
-    }
+    if (self->statement) {
+        rc = _sqlite_step_with_busyhandler(self->statement->st, self->connection);
+        if (rc != SQLITE_DONE && rc != SQLITE_ROW) {
+            Py_DECREF(next_row);
+            _pysqlite_seterror(self->connection->db);
+            return NULL;
+        }
+
+        if (rc == SQLITE_ROW) {
+            self->next_row = _pysqlite_fetch_one_row(self);
+        }
+    }
return next_row; return next_row;
} }
PyObject* cursor_fetchone(Cursor* self, PyObject* args) PyObject* pysqlite_cursor_fetchone(pysqlite_Cursor* self, PyObject* args)
{ {
PyObject* row; PyObject* row;
row = cursor_iternext(self); row = pysqlite_cursor_iternext(self);
if (!row && !PyErr_Occurred()) { if (!row && !PyErr_Occurred()) {
Py_INCREF(Py_None); Py_INCREF(Py_None);
return Py_None; return Py_None;
@ -878,7 +890,7 @@ PyObject* cursor_fetchone(Cursor* self, PyObject* args)
return row; return row;
} }
PyObject* cursor_fetchmany(Cursor* self, PyObject* args) PyObject* pysqlite_cursor_fetchmany(pysqlite_Cursor* self, PyObject* args)
{ {
PyObject* row; PyObject* row;
PyObject* list; PyObject* list;
@ -898,7 +910,7 @@ PyObject* cursor_fetchmany(Cursor* self, PyObject* args)
row = Py_None; row = Py_None;
while (row) { while (row) {
row = cursor_iternext(self); row = pysqlite_cursor_iternext(self);
if (row) { if (row) {
PyList_Append(list, row); PyList_Append(list, row);
Py_DECREF(row); Py_DECREF(row);
@ -919,7 +931,7 @@ PyObject* cursor_fetchmany(Cursor* self, PyObject* args)
} }
} }
PyObject* cursor_fetchall(Cursor* self, PyObject* args) PyObject* pysqlite_cursor_fetchall(pysqlite_Cursor* self, PyObject* args)
{ {
PyObject* row; PyObject* row;
PyObject* list; PyObject* list;
@ -933,7 +945,7 @@ PyObject* cursor_fetchall(Cursor* self, PyObject* args)
row = (PyObject*)Py_None; row = (PyObject*)Py_None;
while (row) { while (row) {
row = cursor_iternext(self); row = pysqlite_cursor_iternext(self);
if (row) { if (row) {
PyList_Append(list, row); PyList_Append(list, row);
Py_DECREF(row); Py_DECREF(row);
@ -948,21 +960,21 @@ PyObject* cursor_fetchall(Cursor* self, PyObject* args)
} }
} }
PyObject* pysqlite_noop(Connection* self, PyObject* args) PyObject* pysqlite_noop(pysqlite_Connection* self, PyObject* args)
{ {
/* don't care, return None */ /* don't care, return None */
Py_INCREF(Py_None); Py_INCREF(Py_None);
return Py_None; return Py_None;
} }
PyObject* cursor_close(Cursor* self, PyObject* args) PyObject* pysqlite_cursor_close(pysqlite_Cursor* self, PyObject* args)
{ {
if (!check_thread(self->connection) || !check_connection(self->connection)) { if (!pysqlite_check_thread(self->connection) || !pysqlite_check_connection(self->connection)) {
return NULL; return NULL;
} }
if (self->statement) { if (self->statement) {
(void)statement_reset(self->statement); (void)pysqlite_statement_reset(self->statement);
Py_DECREF(self->statement); Py_DECREF(self->statement);
self->statement = 0; self->statement = 0;
} }
@ -972,19 +984,19 @@ PyObject* cursor_close(Cursor* self, PyObject* args)
} }
static PyMethodDef cursor_methods[] = { static PyMethodDef cursor_methods[] = {
{"execute", (PyCFunction)cursor_execute, METH_VARARGS, {"execute", (PyCFunction)pysqlite_cursor_execute, METH_VARARGS,
PyDoc_STR("Executes a SQL statement.")}, PyDoc_STR("Executes a SQL statement.")},
{"executemany", (PyCFunction)cursor_executemany, METH_VARARGS, {"executemany", (PyCFunction)pysqlite_cursor_executemany, METH_VARARGS,
PyDoc_STR("Repeatedly executes a SQL statement.")}, PyDoc_STR("Repeatedly executes a SQL statement.")},
{"executescript", (PyCFunction)cursor_executescript, METH_VARARGS, {"executescript", (PyCFunction)pysqlite_cursor_executescript, METH_VARARGS,
PyDoc_STR("Executes a multiple SQL statements at once. Non-standard.")}, PyDoc_STR("Executes a multiple SQL statements at once. Non-standard.")},
{"fetchone", (PyCFunction)cursor_fetchone, METH_NOARGS, {"fetchone", (PyCFunction)pysqlite_cursor_fetchone, METH_NOARGS,
PyDoc_STR("Fetches several rows from the resultset.")}, PyDoc_STR("Fetches several rows from the resultset.")},
{"fetchmany", (PyCFunction)cursor_fetchmany, METH_VARARGS, {"fetchmany", (PyCFunction)pysqlite_cursor_fetchmany, METH_VARARGS,
PyDoc_STR("Fetches all rows from the resultset.")}, PyDoc_STR("Fetches all rows from the resultset.")},
{"fetchall", (PyCFunction)cursor_fetchall, METH_NOARGS, {"fetchall", (PyCFunction)pysqlite_cursor_fetchall, METH_NOARGS,
PyDoc_STR("Fetches one row from the resultset.")}, PyDoc_STR("Fetches one row from the resultset.")},
{"close", (PyCFunction)cursor_close, METH_NOARGS, {"close", (PyCFunction)pysqlite_cursor_close, METH_NOARGS,
PyDoc_STR("Closes the cursor.")}, PyDoc_STR("Closes the cursor.")},
{"setinputsizes", (PyCFunction)pysqlite_noop, METH_VARARGS, {"setinputsizes", (PyCFunction)pysqlite_noop, METH_VARARGS,
PyDoc_STR("Required by DB-API. Does nothing in pysqlite.")}, PyDoc_STR("Required by DB-API. Does nothing in pysqlite.")},
@ -995,25 +1007,25 @@ static PyMethodDef cursor_methods[] = {
static struct PyMemberDef cursor_members[] = static struct PyMemberDef cursor_members[] =
{ {
{"connection", T_OBJECT, offsetof(Cursor, connection), RO}, {"connection", T_OBJECT, offsetof(pysqlite_Cursor, connection), RO},
{"description", T_OBJECT, offsetof(Cursor, description), RO}, {"description", T_OBJECT, offsetof(pysqlite_Cursor, description), RO},
{"arraysize", T_INT, offsetof(Cursor, arraysize), 0}, {"arraysize", T_INT, offsetof(pysqlite_Cursor, arraysize), 0},
{"lastrowid", T_OBJECT, offsetof(Cursor, lastrowid), RO}, {"lastrowid", T_OBJECT, offsetof(pysqlite_Cursor, lastrowid), RO},
{"rowcount", T_OBJECT, offsetof(Cursor, rowcount), RO}, {"rowcount", T_OBJECT, offsetof(pysqlite_Cursor, rowcount), RO},
{"row_factory", T_OBJECT, offsetof(Cursor, row_factory), 0}, {"row_factory", T_OBJECT, offsetof(pysqlite_Cursor, row_factory), 0},
{NULL} {NULL}
}; };
static char cursor_doc[] = static char cursor_doc[] =
PyDoc_STR("SQLite database cursor class."); PyDoc_STR("SQLite database cursor class.");
PyTypeObject CursorType = { PyTypeObject pysqlite_CursorType = {
PyObject_HEAD_INIT(NULL) PyObject_HEAD_INIT(NULL)
0, /* ob_size */ 0, /* ob_size */
MODULE_NAME ".Cursor", /* tp_name */ MODULE_NAME ".Cursor", /* tp_name */
sizeof(Cursor), /* tp_basicsize */ sizeof(pysqlite_Cursor), /* tp_basicsize */
0, /* tp_itemsize */ 0, /* tp_itemsize */
(destructor)cursor_dealloc, /* tp_dealloc */ (destructor)pysqlite_cursor_dealloc, /* tp_dealloc */
0, /* tp_print */ 0, /* tp_print */
0, /* tp_getattr */ 0, /* tp_getattr */
0, /* tp_setattr */ 0, /* tp_setattr */
@ -1034,8 +1046,8 @@ PyTypeObject CursorType = {
0, /* tp_clear */ 0, /* tp_clear */
0, /* tp_richcompare */ 0, /* tp_richcompare */
0, /* tp_weaklistoffset */ 0, /* tp_weaklistoffset */
(getiterfunc)cursor_getiter, /* tp_iter */ (getiterfunc)pysqlite_cursor_getiter, /* tp_iter */
(iternextfunc)cursor_iternext, /* tp_iternext */ (iternextfunc)pysqlite_cursor_iternext, /* tp_iternext */
cursor_methods, /* tp_methods */ cursor_methods, /* tp_methods */
cursor_members, /* tp_members */ cursor_members, /* tp_members */
0, /* tp_getset */ 0, /* tp_getset */
@ -1044,14 +1056,14 @@ PyTypeObject CursorType = {
0, /* tp_descr_get */ 0, /* tp_descr_get */
0, /* tp_descr_set */ 0, /* tp_descr_set */
0, /* tp_dictoffset */ 0, /* tp_dictoffset */
(initproc)cursor_init, /* tp_init */ (initproc)pysqlite_cursor_init, /* tp_init */
0, /* tp_alloc */ 0, /* tp_alloc */
0, /* tp_new */ 0, /* tp_new */
0 /* tp_free */ 0 /* tp_free */
}; };
extern int cursor_setup_types(void) extern int pysqlite_cursor_setup_types(void)
{ {
CursorType.tp_new = PyType_GenericNew; pysqlite_CursorType.tp_new = PyType_GenericNew;
return PyType_Ready(&CursorType); return PyType_Ready(&pysqlite_CursorType);
} }
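A minimal Python-level sketch of the behaviour the new SQLITE_SCHEMA retry loop in _pysqlite_query_execute is meant to cover (the database file and table names below are invented, not part of the patch): another connection changes the schema while this connection holds a cached statement, and execute() recompiles and retries instead of surfacing an error.

    import sqlite3

    writer = sqlite3.connect('demo.db')
    writer.execute('CREATE TABLE t (x INTEGER)')
    writer.commit()

    reader = sqlite3.connect('demo.db')
    reader.execute('SELECT x FROM t').fetchall()       # statement is now cached

    writer.execute('CREATE TABLE u (y INTEGER)')        # schema changes underneath
    writer.commit()

    # If SQLite reports SQLITE_SCHEMA for the cached statement, the while(1)
    # loop above recompiles it and steps again, so this call should succeed.
    print reader.execute('SELECT x FROM t').fetchall()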


@ -32,40 +32,40 @@
typedef struct typedef struct
{ {
PyObject_HEAD PyObject_HEAD
Connection* connection; pysqlite_Connection* connection;
PyObject* description; PyObject* description;
PyObject* row_cast_map; PyObject* row_cast_map;
int arraysize; int arraysize;
PyObject* lastrowid; PyObject* lastrowid;
PyObject* rowcount; PyObject* rowcount;
PyObject* row_factory; PyObject* row_factory;
Statement* statement; pysqlite_Statement* statement;
/* the next row to be returned, NULL if no next row available */ /* the next row to be returned, NULL if no next row available */
PyObject* next_row; PyObject* next_row;
} Cursor; } pysqlite_Cursor;
typedef enum { typedef enum {
STATEMENT_INVALID, STATEMENT_INSERT, STATEMENT_DELETE, STATEMENT_INVALID, STATEMENT_INSERT, STATEMENT_DELETE,
STATEMENT_UPDATE, STATEMENT_REPLACE, STATEMENT_SELECT, STATEMENT_UPDATE, STATEMENT_REPLACE, STATEMENT_SELECT,
STATEMENT_OTHER STATEMENT_OTHER
} StatementKind; } pysqlite_StatementKind;
extern PyTypeObject CursorType; extern PyTypeObject pysqlite_CursorType;
int cursor_init(Cursor* self, PyObject* args, PyObject* kwargs); int pysqlite_cursor_init(pysqlite_Cursor* self, PyObject* args, PyObject* kwargs);
void cursor_dealloc(Cursor* self); void pysqlite_cursor_dealloc(pysqlite_Cursor* self);
PyObject* cursor_execute(Cursor* self, PyObject* args); PyObject* pysqlite_cursor_execute(pysqlite_Cursor* self, PyObject* args);
PyObject* cursor_executemany(Cursor* self, PyObject* args); PyObject* pysqlite_cursor_executemany(pysqlite_Cursor* self, PyObject* args);
PyObject* cursor_getiter(Cursor *self); PyObject* pysqlite_cursor_getiter(pysqlite_Cursor *self);
PyObject* cursor_iternext(Cursor *self); PyObject* pysqlite_cursor_iternext(pysqlite_Cursor *self);
PyObject* cursor_fetchone(Cursor* self, PyObject* args); PyObject* pysqlite_cursor_fetchone(pysqlite_Cursor* self, PyObject* args);
PyObject* cursor_fetchmany(Cursor* self, PyObject* args); PyObject* pysqlite_cursor_fetchmany(pysqlite_Cursor* self, PyObject* args);
PyObject* cursor_fetchall(Cursor* self, PyObject* args); PyObject* pysqlite_cursor_fetchall(pysqlite_Cursor* self, PyObject* args);
PyObject* pysqlite_noop(Connection* self, PyObject* args); PyObject* pysqlite_noop(pysqlite_Connection* self, PyObject* args);
PyObject* cursor_close(Cursor* self, PyObject* args); PyObject* pysqlite_cursor_close(pysqlite_Cursor* self, PyObject* args);
int cursor_setup_types(void); int pysqlite_cursor_setup_types(void);
#define UNKNOWN (-1) #define UNKNOWN (-1)
#endif #endif


@ -57,7 +57,7 @@ microprotocols_add(PyTypeObject *type, PyObject *proto, PyObject *cast)
PyObject* key; PyObject* key;
int rc; int rc;
if (proto == NULL) proto = (PyObject*)&SQLitePrepareProtocolType; if (proto == NULL) proto = (PyObject*)&pysqlite_PrepareProtocolType;
key = Py_BuildValue("(OO)", (PyObject*)type, proto); key = Py_BuildValue("(OO)", (PyObject*)type, proto);
if (!key) { if (!key) {
@ -78,7 +78,7 @@ microprotocols_adapt(PyObject *obj, PyObject *proto, PyObject *alt)
PyObject *adapter, *key; PyObject *adapter, *key;
/* we don't check for exact type conformance as specified in PEP 246 /* we don't check for exact type conformance as specified in PEP 246
because the SQLitePrepareProtocolType type is abstract and there is no because the pysqlite_PrepareProtocolType type is abstract and there is no
way to get a quotable object to be its instance */ way to get a quotable object to be its instance */
/* look for an adapter in the registry */ /* look for an adapter in the registry */
@ -125,17 +125,17 @@ microprotocols_adapt(PyObject *obj, PyObject *proto, PyObject *alt)
} }
/* else set the right exception and return NULL */ /* else set the right exception and return NULL */
PyErr_SetString(ProgrammingError, "can't adapt"); PyErr_SetString(pysqlite_ProgrammingError, "can't adapt");
return NULL; return NULL;
} }
/** module-level functions **/ /** module-level functions **/
PyObject * PyObject *
psyco_microprotocols_adapt(Cursor *self, PyObject *args) psyco_microprotocols_adapt(pysqlite_Cursor *self, PyObject *args)
{ {
PyObject *obj, *alt = NULL; PyObject *obj, *alt = NULL;
PyObject *proto = (PyObject*)&SQLitePrepareProtocolType; PyObject *proto = (PyObject*)&pysqlite_PrepareProtocolType;
if (!PyArg_ParseTuple(args, "O|OO", &obj, &proto, &alt)) return NULL; if (!PyArg_ParseTuple(args, "O|OO", &obj, &proto, &alt)) return NULL;
return microprotocols_adapt(obj, proto, alt); return microprotocols_adapt(obj, proto, alt);


@ -52,7 +52,7 @@ extern PyObject *microprotocols_adapt(
PyObject *obj, PyObject *proto, PyObject *alt); PyObject *obj, PyObject *proto, PyObject *alt);
extern PyObject * extern PyObject *
psyco_microprotocols_adapt(Cursor* self, PyObject *args); psyco_microprotocols_adapt(pysqlite_Cursor* self, PyObject *args);
#define psyco_microprotocols_adapt_doc \ #define psyco_microprotocols_adapt_doc \
"adapt(obj, protocol, alternate) -> adapt obj to given protocol. Non-standard." "adapt(obj, protocol, alternate) -> adapt obj to given protocol. Non-standard."


@ -35,9 +35,9 @@
/* static objects at module-level */ /* static objects at module-level */
-PyObject* Error, *Warning, *InterfaceError, *DatabaseError, *InternalError,
-    *OperationalError, *ProgrammingError, *IntegrityError, *DataError,
-    *NotSupportedError, *OptimizedUnicode;
+PyObject* pysqlite_Error, *pysqlite_Warning, *pysqlite_InterfaceError, *pysqlite_DatabaseError,
+    *pysqlite_InternalError, *pysqlite_OperationalError, *pysqlite_ProgrammingError,
+    *pysqlite_IntegrityError, *pysqlite_DataError, *pysqlite_NotSupportedError, *pysqlite_OptimizedUnicode;
PyObject* converters; PyObject* converters;
int _enable_callback_tracebacks; int _enable_callback_tracebacks;
@ -67,7 +67,7 @@ static PyObject* module_connect(PyObject* self, PyObject* args, PyObject*
} }
if (factory == NULL) { if (factory == NULL) {
factory = (PyObject*)&ConnectionType; factory = (PyObject*)&pysqlite_ConnectionType;
} }
result = PyObject_Call(factory, args, kwargs); result = PyObject_Call(factory, args, kwargs);
@ -115,7 +115,7 @@ static PyObject* module_enable_shared_cache(PyObject* self, PyObject* args, PyOb
rc = sqlite3_enable_shared_cache(do_enable); rc = sqlite3_enable_shared_cache(do_enable);
if (rc != SQLITE_OK) { if (rc != SQLITE_OK) {
PyErr_SetString(OperationalError, "Changing the shared_cache flag failed"); PyErr_SetString(pysqlite_OperationalError, "Changing the shared_cache flag failed");
return NULL; return NULL;
} else { } else {
Py_INCREF(Py_None); Py_INCREF(Py_None);
@ -133,7 +133,7 @@ static PyObject* module_register_adapter(PyObject* self, PyObject* args, PyObjec
return NULL; return NULL;
} }
microprotocols_add(type, (PyObject*)&SQLitePrepareProtocolType, caster); microprotocols_add(type, (PyObject*)&pysqlite_PrepareProtocolType, caster);
Py_INCREF(Py_None); Py_INCREF(Py_None);
return Py_None; return Py_None;
@ -141,36 +141,29 @@ static PyObject* module_register_adapter(PyObject* self, PyObject* args, PyObjec
static PyObject* module_register_converter(PyObject* self, PyObject* args, PyObject* kwargs) static PyObject* module_register_converter(PyObject* self, PyObject* args, PyObject* kwargs)
{ {
-    char* orig_name;
-    char* name = NULL;
-    char* c;
+    PyObject* orig_name;
+    PyObject* name = NULL;
     PyObject* callable;
     PyObject* retval = NULL;

-    if (!PyArg_ParseTuple(args, "sO", &orig_name, &callable)) {
+    if (!PyArg_ParseTuple(args, "SO", &orig_name, &callable)) {
         return NULL;
     }

-    /* convert the name to lowercase */
-    name = PyMem_Malloc(strlen(orig_name) + 2);
+    /* convert the name to upper case */
+    name = PyObject_CallMethod(orig_name, "upper", "");
     if (!name) {
         goto error;
     }
-    strcpy(name, orig_name);
-    for (c = name; *c != (char)0; c++) {
-        *c = (*c) & 0xDF;
-    }

-    if (PyDict_SetItemString(converters, name, callable) != 0) {
+    if (PyDict_SetItem(converters, name, callable) != 0) {
         goto error;
     }

     Py_INCREF(Py_None);
     retval = Py_None;
 error:
-    if (name) {
-        PyMem_Free(name);
-    }
+    Py_XDECREF(name);
     return retval;
 }
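For reference, a hedged sketch of the Python-level API this feeds (the POINT type and converter below are invented): converter names are stored upper-cased, so a converter registered under one casing still matches a column declared with another.

    import sqlite3

    # invented converter: turn "x,y" strings back into a tuple of ints
    sqlite3.register_converter("point", lambda s: tuple(map(int, s.split(","))))

    con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
    con.execute("CREATE TABLE t (p POINT)")
    con.execute("INSERT INTO t (p) VALUES (?)", ("1,2",))
    print con.execute("SELECT p FROM t").fetchone()[0]   # (1, 2)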
@ -184,7 +177,7 @@ static PyObject* enable_callback_tracebacks(PyObject* self, PyObject* args, PyOb
return Py_None; return Py_None;
} }
void converters_init(PyObject* dict) static void converters_init(PyObject* dict)
{ {
converters = PyDict_New(); converters = PyDict_New();
if (!converters) { if (!converters) {
@ -265,28 +258,28 @@ PyMODINIT_FUNC init_sqlite3(void)
module = Py_InitModule("_sqlite3", module_methods); module = Py_InitModule("_sqlite3", module_methods);
if (!module || if (!module ||
(row_setup_types() < 0) || (pysqlite_row_setup_types() < 0) ||
(cursor_setup_types() < 0) || (pysqlite_cursor_setup_types() < 0) ||
(connection_setup_types() < 0) || (pysqlite_connection_setup_types() < 0) ||
(cache_setup_types() < 0) || (pysqlite_cache_setup_types() < 0) ||
(statement_setup_types() < 0) || (pysqlite_statement_setup_types() < 0) ||
(prepare_protocol_setup_types() < 0) (pysqlite_prepare_protocol_setup_types() < 0)
) { ) {
return; return;
} }
Py_INCREF(&ConnectionType); Py_INCREF(&pysqlite_ConnectionType);
PyModule_AddObject(module, "Connection", (PyObject*) &ConnectionType); PyModule_AddObject(module, "Connection", (PyObject*) &pysqlite_ConnectionType);
Py_INCREF(&CursorType); Py_INCREF(&pysqlite_CursorType);
PyModule_AddObject(module, "Cursor", (PyObject*) &CursorType); PyModule_AddObject(module, "Cursor", (PyObject*) &pysqlite_CursorType);
Py_INCREF(&CacheType); Py_INCREF(&pysqlite_CacheType);
PyModule_AddObject(module, "Statement", (PyObject*)&StatementType); PyModule_AddObject(module, "Statement", (PyObject*)&pysqlite_StatementType);
Py_INCREF(&StatementType); Py_INCREF(&pysqlite_StatementType);
PyModule_AddObject(module, "Cache", (PyObject*) &CacheType); PyModule_AddObject(module, "Cache", (PyObject*) &pysqlite_CacheType);
Py_INCREF(&SQLitePrepareProtocolType); Py_INCREF(&pysqlite_PrepareProtocolType);
PyModule_AddObject(module, "PrepareProtocol", (PyObject*) &SQLitePrepareProtocolType); PyModule_AddObject(module, "PrepareProtocol", (PyObject*) &pysqlite_PrepareProtocolType);
Py_INCREF(&RowType); Py_INCREF(&pysqlite_RowType);
PyModule_AddObject(module, "Row", (PyObject*) &RowType); PyModule_AddObject(module, "Row", (PyObject*) &pysqlite_RowType);
if (!(dict = PyModule_GetDict(module))) { if (!(dict = PyModule_GetDict(module))) {
goto error; goto error;
@ -294,67 +287,67 @@ PyMODINIT_FUNC init_sqlite3(void)
/*** Create DB-API Exception hierarchy */ /*** Create DB-API Exception hierarchy */
if (!(Error = PyErr_NewException(MODULE_NAME ".Error", PyExc_StandardError, NULL))) { if (!(pysqlite_Error = PyErr_NewException(MODULE_NAME ".Error", PyExc_StandardError, NULL))) {
goto error; goto error;
} }
PyDict_SetItemString(dict, "Error", Error); PyDict_SetItemString(dict, "Error", pysqlite_Error);
if (!(Warning = PyErr_NewException(MODULE_NAME ".Warning", PyExc_StandardError, NULL))) { if (!(pysqlite_Warning = PyErr_NewException(MODULE_NAME ".Warning", PyExc_StandardError, NULL))) {
goto error; goto error;
} }
PyDict_SetItemString(dict, "Warning", Warning); PyDict_SetItemString(dict, "Warning", pysqlite_Warning);
/* Error subclasses */ /* Error subclasses */
if (!(InterfaceError = PyErr_NewException(MODULE_NAME ".InterfaceError", Error, NULL))) { if (!(pysqlite_InterfaceError = PyErr_NewException(MODULE_NAME ".InterfaceError", pysqlite_Error, NULL))) {
goto error; goto error;
} }
PyDict_SetItemString(dict, "InterfaceError", InterfaceError); PyDict_SetItemString(dict, "InterfaceError", pysqlite_InterfaceError);
if (!(DatabaseError = PyErr_NewException(MODULE_NAME ".DatabaseError", Error, NULL))) { if (!(pysqlite_DatabaseError = PyErr_NewException(MODULE_NAME ".DatabaseError", pysqlite_Error, NULL))) {
goto error; goto error;
} }
PyDict_SetItemString(dict, "DatabaseError", DatabaseError); PyDict_SetItemString(dict, "DatabaseError", pysqlite_DatabaseError);
/* DatabaseError subclasses */ /* pysqlite_DatabaseError subclasses */
if (!(InternalError = PyErr_NewException(MODULE_NAME ".InternalError", DatabaseError, NULL))) { if (!(pysqlite_InternalError = PyErr_NewException(MODULE_NAME ".InternalError", pysqlite_DatabaseError, NULL))) {
goto error; goto error;
} }
PyDict_SetItemString(dict, "InternalError", InternalError); PyDict_SetItemString(dict, "InternalError", pysqlite_InternalError);
if (!(OperationalError = PyErr_NewException(MODULE_NAME ".OperationalError", DatabaseError, NULL))) { if (!(pysqlite_OperationalError = PyErr_NewException(MODULE_NAME ".OperationalError", pysqlite_DatabaseError, NULL))) {
goto error; goto error;
} }
PyDict_SetItemString(dict, "OperationalError", OperationalError); PyDict_SetItemString(dict, "OperationalError", pysqlite_OperationalError);
if (!(ProgrammingError = PyErr_NewException(MODULE_NAME ".ProgrammingError", DatabaseError, NULL))) { if (!(pysqlite_ProgrammingError = PyErr_NewException(MODULE_NAME ".ProgrammingError", pysqlite_DatabaseError, NULL))) {
goto error; goto error;
} }
PyDict_SetItemString(dict, "ProgrammingError", ProgrammingError); PyDict_SetItemString(dict, "ProgrammingError", pysqlite_ProgrammingError);
if (!(IntegrityError = PyErr_NewException(MODULE_NAME ".IntegrityError", DatabaseError,NULL))) { if (!(pysqlite_IntegrityError = PyErr_NewException(MODULE_NAME ".IntegrityError", pysqlite_DatabaseError,NULL))) {
goto error; goto error;
} }
PyDict_SetItemString(dict, "IntegrityError", IntegrityError); PyDict_SetItemString(dict, "IntegrityError", pysqlite_IntegrityError);
if (!(DataError = PyErr_NewException(MODULE_NAME ".DataError", DatabaseError, NULL))) { if (!(pysqlite_DataError = PyErr_NewException(MODULE_NAME ".DataError", pysqlite_DatabaseError, NULL))) {
goto error; goto error;
} }
PyDict_SetItemString(dict, "DataError", DataError); PyDict_SetItemString(dict, "DataError", pysqlite_DataError);
if (!(NotSupportedError = PyErr_NewException(MODULE_NAME ".NotSupportedError", DatabaseError, NULL))) { if (!(pysqlite_NotSupportedError = PyErr_NewException(MODULE_NAME ".NotSupportedError", pysqlite_DatabaseError, NULL))) {
goto error; goto error;
} }
PyDict_SetItemString(dict, "NotSupportedError", NotSupportedError); PyDict_SetItemString(dict, "NotSupportedError", pysqlite_NotSupportedError);
/* We just need "something" unique for OptimizedUnicode. It does not really /* We just need "something" unique for pysqlite_OptimizedUnicode. It does not really
* need to be a string subclass. Just anything that can act as a special * need to be a string subclass. Just anything that can act as a special
* marker for us. So I pulled PyCell_Type out of my magic hat. * marker for us. So I pulled PyCell_Type out of my magic hat.
*/ */
Py_INCREF((PyObject*)&PyCell_Type); Py_INCREF((PyObject*)&PyCell_Type);
OptimizedUnicode = (PyObject*)&PyCell_Type; pysqlite_OptimizedUnicode = (PyObject*)&PyCell_Type;
PyDict_SetItemString(dict, "OptimizedUnicode", OptimizedUnicode); PyDict_SetItemString(dict, "OptimizedUnicode", pysqlite_OptimizedUnicode);
/* Set integer constants */ /* Set integer constants */
for (i = 0; _int_constants[i].constant_name != 0; i++) { for (i = 0; _int_constants[i].constant_name != 0; i++) {


@ -25,20 +25,20 @@
#define PYSQLITE_MODULE_H #define PYSQLITE_MODULE_H
#include "Python.h" #include "Python.h"
#define PYSQLITE_VERSION "2.3.2" #define PYSQLITE_VERSION "2.3.3"
extern PyObject* Error; extern PyObject* pysqlite_Error;
extern PyObject* Warning; extern PyObject* pysqlite_Warning;
extern PyObject* InterfaceError; extern PyObject* pysqlite_InterfaceError;
extern PyObject* DatabaseError; extern PyObject* pysqlite_DatabaseError;
extern PyObject* InternalError; extern PyObject* pysqlite_InternalError;
extern PyObject* OperationalError; extern PyObject* pysqlite_OperationalError;
extern PyObject* ProgrammingError; extern PyObject* pysqlite_ProgrammingError;
extern PyObject* IntegrityError; extern PyObject* pysqlite_IntegrityError;
extern PyObject* DataError; extern PyObject* pysqlite_DataError;
extern PyObject* NotSupportedError; extern PyObject* pysqlite_NotSupportedError;
extern PyObject* OptimizedUnicode; extern PyObject* pysqlite_OptimizedUnicode;
/* the functions time.time() and time.sleep() */ /* the functions time.time() and time.sleep() */
extern PyObject* time_time; extern PyObject* time_time;


@ -23,23 +23,23 @@
#include "prepare_protocol.h" #include "prepare_protocol.h"
int prepare_protocol_init(SQLitePrepareProtocol* self, PyObject* args, PyObject* kwargs) int pysqlite_prepare_protocol_init(pysqlite_PrepareProtocol* self, PyObject* args, PyObject* kwargs)
{ {
return 0; return 0;
} }
void prepare_protocol_dealloc(SQLitePrepareProtocol* self) void pysqlite_prepare_protocol_dealloc(pysqlite_PrepareProtocol* self)
{ {
self->ob_type->tp_free((PyObject*)self); self->ob_type->tp_free((PyObject*)self);
} }
PyTypeObject SQLitePrepareProtocolType= { PyTypeObject pysqlite_PrepareProtocolType= {
PyObject_HEAD_INIT(NULL) PyObject_HEAD_INIT(NULL)
0, /* ob_size */ 0, /* ob_size */
MODULE_NAME ".PrepareProtocol", /* tp_name */ MODULE_NAME ".PrepareProtocol", /* tp_name */
sizeof(SQLitePrepareProtocol), /* tp_basicsize */ sizeof(pysqlite_PrepareProtocol), /* tp_basicsize */
0, /* tp_itemsize */ 0, /* tp_itemsize */
(destructor)prepare_protocol_dealloc, /* tp_dealloc */ (destructor)pysqlite_prepare_protocol_dealloc, /* tp_dealloc */
0, /* tp_print */ 0, /* tp_print */
0, /* tp_getattr */ 0, /* tp_getattr */
0, /* tp_setattr */ 0, /* tp_setattr */
@ -70,15 +70,15 @@ PyTypeObject SQLitePrepareProtocolType= {
0, /* tp_descr_get */ 0, /* tp_descr_get */
0, /* tp_descr_set */ 0, /* tp_descr_set */
0, /* tp_dictoffset */ 0, /* tp_dictoffset */
(initproc)prepare_protocol_init, /* tp_init */ (initproc)pysqlite_prepare_protocol_init, /* tp_init */
0, /* tp_alloc */ 0, /* tp_alloc */
0, /* tp_new */ 0, /* tp_new */
0 /* tp_free */ 0 /* tp_free */
}; };
extern int prepare_protocol_setup_types(void) extern int pysqlite_prepare_protocol_setup_types(void)
{ {
SQLitePrepareProtocolType.tp_new = PyType_GenericNew; pysqlite_PrepareProtocolType.tp_new = PyType_GenericNew;
SQLitePrepareProtocolType.ob_type= &PyType_Type; pysqlite_PrepareProtocolType.ob_type= &PyType_Type;
return PyType_Ready(&SQLitePrepareProtocolType); return PyType_Ready(&pysqlite_PrepareProtocolType);
} }


@ -28,14 +28,14 @@
typedef struct typedef struct
{ {
PyObject_HEAD PyObject_HEAD
} SQLitePrepareProtocol; } pysqlite_PrepareProtocol;
extern PyTypeObject SQLitePrepareProtocolType; extern PyTypeObject pysqlite_PrepareProtocolType;
int prepare_protocol_init(SQLitePrepareProtocol* self, PyObject* args, PyObject* kwargs); int pysqlite_prepare_protocol_init(pysqlite_PrepareProtocol* self, PyObject* args, PyObject* kwargs);
void prepare_protocol_dealloc(SQLitePrepareProtocol* self); void pysqlite_prepare_protocol_dealloc(pysqlite_PrepareProtocol* self);
int prepare_protocol_setup_types(void); int pysqlite_prepare_protocol_setup_types(void);
#define UNKNOWN (-1) #define UNKNOWN (-1)
#endif #endif


@ -25,7 +25,7 @@
#include "cursor.h" #include "cursor.h"
#include "sqlitecompat.h" #include "sqlitecompat.h"
void row_dealloc(Row* self) void pysqlite_row_dealloc(pysqlite_Row* self)
{ {
Py_XDECREF(self->data); Py_XDECREF(self->data);
Py_XDECREF(self->description); Py_XDECREF(self->description);
@ -33,10 +33,10 @@ void row_dealloc(Row* self)
self->ob_type->tp_free((PyObject*)self); self->ob_type->tp_free((PyObject*)self);
} }
int row_init(Row* self, PyObject* args, PyObject* kwargs) int pysqlite_row_init(pysqlite_Row* self, PyObject* args, PyObject* kwargs)
{ {
PyObject* data; PyObject* data;
Cursor* cursor; pysqlite_Cursor* cursor;
self->data = 0; self->data = 0;
self->description = 0; self->description = 0;
@ -45,7 +45,7 @@ int row_init(Row* self, PyObject* args, PyObject* kwargs)
return -1; return -1;
} }
if (!PyObject_IsInstance((PyObject*)cursor, (PyObject*)&CursorType)) { if (!PyObject_IsInstance((PyObject*)cursor, (PyObject*)&pysqlite_CursorType)) {
PyErr_SetString(PyExc_TypeError, "instance of cursor required for first argument"); PyErr_SetString(PyExc_TypeError, "instance of cursor required for first argument");
return -1; return -1;
} }
@ -64,7 +64,7 @@ int row_init(Row* self, PyObject* args, PyObject* kwargs)
return 0; return 0;
} }
PyObject* row_subscript(Row* self, PyObject* idx) PyObject* pysqlite_row_subscript(pysqlite_Row* self, PyObject* idx)
{ {
long _idx; long _idx;
char* key; char* key;
@ -133,32 +133,63 @@ PyObject* row_subscript(Row* self, PyObject* idx)
} }
} }
Py_ssize_t row_length(Row* self, PyObject* args, PyObject* kwargs) Py_ssize_t pysqlite_row_length(pysqlite_Row* self, PyObject* args, PyObject* kwargs)
{ {
return PyTuple_GET_SIZE(self->data); return PyTuple_GET_SIZE(self->data);
} }
-static int row_print(Row* self, FILE *fp, int flags)
+PyObject* pysqlite_row_keys(pysqlite_Row* self, PyObject* args, PyObject* kwargs)
+{
+    PyObject* list;
+    int nitems, i;
+
+    list = PyList_New(0);
+    if (!list) {
+        return NULL;
+    }
+    nitems = PyTuple_Size(self->description);
+
+    for (i = 0; i < nitems; i++) {
+        if (PyList_Append(list, PyTuple_GET_ITEM(PyTuple_GET_ITEM(self->description, i), 0)) != 0) {
+            Py_DECREF(list);
+            return NULL;
+        }
+    }
+
+    return list;
+}
+
+static int pysqlite_row_print(pysqlite_Row* self, FILE *fp, int flags)
 {
     return (&PyTuple_Type)->tp_print(self->data, fp, flags);
 }
static PyObject* pysqlite_iter(pysqlite_Row* self)
{
return PyObject_GetIter(self->data);
}
PyMappingMethods row_as_mapping = { PyMappingMethods pysqlite_row_as_mapping = {
/* mp_length */ (lenfunc)row_length, /* mp_length */ (lenfunc)pysqlite_row_length,
/* mp_subscript */ (binaryfunc)row_subscript, /* mp_subscript */ (binaryfunc)pysqlite_row_subscript,
/* mp_ass_subscript */ (objobjargproc)0, /* mp_ass_subscript */ (objobjargproc)0,
}; };
static PyMethodDef pysqlite_row_methods[] = {
{"keys", (PyCFunction)pysqlite_row_keys, METH_NOARGS,
PyDoc_STR("Returns the keys of the row.")},
{NULL, NULL}
};
PyTypeObject RowType = {
PyTypeObject pysqlite_RowType = {
PyObject_HEAD_INIT(NULL) PyObject_HEAD_INIT(NULL)
0, /* ob_size */ 0, /* ob_size */
MODULE_NAME ".Row", /* tp_name */ MODULE_NAME ".Row", /* tp_name */
sizeof(Row), /* tp_basicsize */ sizeof(pysqlite_Row), /* tp_basicsize */
0, /* tp_itemsize */ 0, /* tp_itemsize */
(destructor)row_dealloc, /* tp_dealloc */ (destructor)pysqlite_row_dealloc, /* tp_dealloc */
(printfunc)row_print, /* tp_print */ (printfunc)pysqlite_row_print, /* tp_print */
0, /* tp_getattr */ 0, /* tp_getattr */
0, /* tp_setattr */ 0, /* tp_setattr */
0, /* tp_compare */ 0, /* tp_compare */
@ -174,13 +205,13 @@ PyTypeObject RowType = {
0, /* tp_as_buffer */ 0, /* tp_as_buffer */
Py_TPFLAGS_DEFAULT|Py_TPFLAGS_BASETYPE, /* tp_flags */ Py_TPFLAGS_DEFAULT|Py_TPFLAGS_BASETYPE, /* tp_flags */
0, /* tp_doc */ 0, /* tp_doc */
0, /* tp_traverse */ (traverseproc)0, /* tp_traverse */
0, /* tp_clear */ 0, /* tp_clear */
0, /* tp_richcompare */ 0, /* tp_richcompare */
0, /* tp_weaklistoffset */ 0, /* tp_weaklistoffset */
0, /* tp_iter */ (getiterfunc)pysqlite_iter, /* tp_iter */
0, /* tp_iternext */ 0, /* tp_iternext */
0, /* tp_methods */ pysqlite_row_methods, /* tp_methods */
0, /* tp_members */ 0, /* tp_members */
0, /* tp_getset */ 0, /* tp_getset */
0, /* tp_base */ 0, /* tp_base */
@ -188,15 +219,15 @@ PyTypeObject RowType = {
0, /* tp_descr_get */ 0, /* tp_descr_get */
0, /* tp_descr_set */ 0, /* tp_descr_set */
0, /* tp_dictoffset */ 0, /* tp_dictoffset */
(initproc)row_init, /* tp_init */ (initproc)pysqlite_row_init, /* tp_init */
0, /* tp_alloc */ 0, /* tp_alloc */
0, /* tp_new */ 0, /* tp_new */
0 /* tp_free */ 0 /* tp_free */
}; };
extern int row_setup_types(void) extern int pysqlite_row_setup_types(void)
{ {
RowType.tp_new = PyType_GenericNew; pysqlite_RowType.tp_new = PyType_GenericNew;
RowType.tp_as_mapping = &row_as_mapping; pysqlite_RowType.tp_as_mapping = &pysqlite_row_as_mapping;
return PyType_Ready(&RowType); return PyType_Ready(&pysqlite_RowType);
} }
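A short sketch (not from the patch; table and values invented) of the two Row features added above, keys() and iteration, as seen from Python:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.row_factory = sqlite3.Row
    con.execute("CREATE TABLE person (name TEXT, age INTEGER)")
    con.execute("INSERT INTO person VALUES (?, ?)", ("Ada", 36))

    row = con.execute("SELECT name, age FROM person").fetchone()
    print row.keys()              # column names: ['name', 'age']
    print list(row)               # iteration yields the values in column order
    print row["name"], row[1]     # subscripting by name or index worked before this change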


@ -30,10 +30,10 @@ typedef struct _Row
PyObject_HEAD PyObject_HEAD
PyObject* data; PyObject* data;
PyObject* description; PyObject* description;
} Row; } pysqlite_Row;
extern PyTypeObject RowType; extern PyTypeObject pysqlite_RowType;
int row_setup_types(void); int pysqlite_row_setup_types(void);
#endif #endif


@ -29,7 +29,7 @@
#include "sqlitecompat.h" #include "sqlitecompat.h"
/* prototypes */ /* prototypes */
int check_remaining_sql(const char* tail); static int pysqlite_check_remaining_sql(const char* tail);
typedef enum { typedef enum {
LINECOMMENT_1, LINECOMMENT_1,
@ -40,7 +40,7 @@ typedef enum {
NORMAL NORMAL
} parse_remaining_sql_state; } parse_remaining_sql_state;
int statement_create(Statement* self, Connection* connection, PyObject* sql) int pysqlite_statement_create(pysqlite_Statement* self, pysqlite_Connection* connection, PyObject* sql)
{ {
const char* tail; const char* tail;
int rc; int rc;
@ -77,7 +77,7 @@ int statement_create(Statement* self, Connection* connection, PyObject* sql)
self->db = connection->db; self->db = connection->db;
if (rc == SQLITE_OK && check_remaining_sql(tail)) { if (rc == SQLITE_OK && pysqlite_check_remaining_sql(tail)) {
(void)sqlite3_finalize(self->st); (void)sqlite3_finalize(self->st);
self->st = NULL; self->st = NULL;
rc = PYSQLITE_TOO_MUCH_SQL; rc = PYSQLITE_TOO_MUCH_SQL;
@ -86,7 +86,7 @@ int statement_create(Statement* self, Connection* connection, PyObject* sql)
return rc; return rc;
} }
int statement_bind_parameter(Statement* self, int pos, PyObject* parameter) int pysqlite_statement_bind_parameter(pysqlite_Statement* self, int pos, PyObject* parameter)
{ {
int rc = SQLITE_OK; int rc = SQLITE_OK;
long longval; long longval;
@ -133,7 +133,7 @@ int statement_bind_parameter(Statement* self, int pos, PyObject* parameter)
return rc; return rc;
} }
void statement_bind_parameters(Statement* self, PyObject* parameters) void pysqlite_statement_bind_parameters(pysqlite_Statement* self, PyObject* parameters)
{ {
PyObject* current_param; PyObject* current_param;
PyObject* adapted; PyObject* adapted;
@ -154,19 +154,19 @@ void statement_bind_parameters(Statement* self, PyObject* parameters)
binding_name = sqlite3_bind_parameter_name(self->st, i); binding_name = sqlite3_bind_parameter_name(self->st, i);
Py_END_ALLOW_THREADS Py_END_ALLOW_THREADS
if (!binding_name) { if (!binding_name) {
PyErr_Format(ProgrammingError, "Binding %d has no name, but you supplied a dictionary (which has only names).", i); PyErr_Format(pysqlite_ProgrammingError, "Binding %d has no name, but you supplied a dictionary (which has only names).", i);
return; return;
} }
binding_name++; /* skip first char (the colon) */ binding_name++; /* skip first char (the colon) */
current_param = PyDict_GetItemString(parameters, binding_name); current_param = PyDict_GetItemString(parameters, binding_name);
if (!current_param) { if (!current_param) {
PyErr_Format(ProgrammingError, "You did not supply a value for binding %d.", i); PyErr_Format(pysqlite_ProgrammingError, "You did not supply a value for binding %d.", i);
return; return;
} }
Py_INCREF(current_param); Py_INCREF(current_param);
adapted = microprotocols_adapt(current_param, (PyObject*)&SQLitePrepareProtocolType, NULL); adapted = microprotocols_adapt(current_param, (PyObject*)&pysqlite_PrepareProtocolType, NULL);
if (adapted) { if (adapted) {
Py_DECREF(current_param); Py_DECREF(current_param);
} else { } else {
@ -174,11 +174,11 @@ void statement_bind_parameters(Statement* self, PyObject* parameters)
adapted = current_param; adapted = current_param;
} }
rc = statement_bind_parameter(self, i, adapted); rc = pysqlite_statement_bind_parameter(self, i, adapted);
Py_DECREF(adapted); Py_DECREF(adapted);
if (rc != SQLITE_OK) { if (rc != SQLITE_OK) {
PyErr_Format(InterfaceError, "Error binding parameter :%s - probably unsupported type.", binding_name); PyErr_Format(pysqlite_InterfaceError, "Error binding parameter :%s - probably unsupported type.", binding_name);
return; return;
} }
} }
@ -186,7 +186,7 @@ void statement_bind_parameters(Statement* self, PyObject* parameters)
/* parameters passed as sequence */ /* parameters passed as sequence */
num_params = PySequence_Length(parameters); num_params = PySequence_Length(parameters);
if (num_params != num_params_needed) { if (num_params != num_params_needed) {
PyErr_Format(ProgrammingError, "Incorrect number of bindings supplied. The current statement uses %d, and there are %d supplied.", PyErr_Format(pysqlite_ProgrammingError, "Incorrect number of bindings supplied. The current statement uses %d, and there are %d supplied.",
num_params_needed, num_params); num_params_needed, num_params);
return; return;
} }
@ -195,7 +195,7 @@ void statement_bind_parameters(Statement* self, PyObject* parameters)
if (!current_param) { if (!current_param) {
return; return;
} }
adapted = microprotocols_adapt(current_param, (PyObject*)&SQLitePrepareProtocolType, NULL); adapted = microprotocols_adapt(current_param, (PyObject*)&pysqlite_PrepareProtocolType, NULL);
if (adapted) { if (adapted) {
Py_DECREF(current_param); Py_DECREF(current_param);
@ -204,18 +204,18 @@ void statement_bind_parameters(Statement* self, PyObject* parameters)
adapted = current_param; adapted = current_param;
} }
rc = statement_bind_parameter(self, i + 1, adapted); rc = pysqlite_statement_bind_parameter(self, i + 1, adapted);
Py_DECREF(adapted); Py_DECREF(adapted);
if (rc != SQLITE_OK) { if (rc != SQLITE_OK) {
PyErr_Format(InterfaceError, "Error binding parameter %d - probably unsupported type.", i); PyErr_Format(pysqlite_InterfaceError, "Error binding parameter %d - probably unsupported type.", i);
return; return;
} }
} }
} }
} }
int statement_recompile(Statement* self, PyObject* params) int pysqlite_statement_recompile(pysqlite_Statement* self, PyObject* params)
{ {
const char* tail; const char* tail;
int rc; int rc;
@ -250,7 +250,7 @@ int statement_recompile(Statement* self, PyObject* params)
return rc; return rc;
} }
int statement_finalize(Statement* self) int pysqlite_statement_finalize(pysqlite_Statement* self)
{ {
int rc; int rc;
@ -267,7 +267,7 @@ int statement_finalize(Statement* self)
return rc; return rc;
} }
int statement_reset(Statement* self) int pysqlite_statement_reset(pysqlite_Statement* self)
{ {
int rc; int rc;
@ -286,12 +286,12 @@ int statement_reset(Statement* self)
return rc; return rc;
} }
void statement_mark_dirty(Statement* self) void pysqlite_statement_mark_dirty(pysqlite_Statement* self)
{ {
self->in_use = 1; self->in_use = 1;
} }
void statement_dealloc(Statement* self) void pysqlite_statement_dealloc(pysqlite_Statement* self)
{ {
int rc; int rc;
@ -320,7 +320,7 @@ void statement_dealloc(Statement* self)
* *
* Returns 1 if there is more left than should be. 0 if ok. * Returns 1 if there is more left than should be. 0 if ok.
*/ */
int check_remaining_sql(const char* tail) static int pysqlite_check_remaining_sql(const char* tail)
{ {
const char* pos = tail; const char* pos = tail;
@ -382,13 +382,13 @@ int check_remaining_sql(const char* tail)
return 0; return 0;
} }
PyTypeObject StatementType = { PyTypeObject pysqlite_StatementType = {
PyObject_HEAD_INIT(NULL) PyObject_HEAD_INIT(NULL)
0, /* ob_size */ 0, /* ob_size */
MODULE_NAME ".Statement", /* tp_name */ MODULE_NAME ".Statement", /* tp_name */
sizeof(Statement), /* tp_basicsize */ sizeof(pysqlite_Statement), /* tp_basicsize */
0, /* tp_itemsize */ 0, /* tp_itemsize */
(destructor)statement_dealloc, /* tp_dealloc */ (destructor)pysqlite_statement_dealloc, /* tp_dealloc */
0, /* tp_print */ 0, /* tp_print */
0, /* tp_getattr */ 0, /* tp_getattr */
0, /* tp_setattr */ 0, /* tp_setattr */
@ -408,7 +408,7 @@ PyTypeObject StatementType = {
0, /* tp_traverse */ 0, /* tp_traverse */
0, /* tp_clear */ 0, /* tp_clear */
0, /* tp_richcompare */ 0, /* tp_richcompare */
offsetof(Statement, in_weakreflist), /* tp_weaklistoffset */ offsetof(pysqlite_Statement, in_weakreflist), /* tp_weaklistoffset */
0, /* tp_iter */ 0, /* tp_iter */
0, /* tp_iternext */ 0, /* tp_iternext */
0, /* tp_methods */ 0, /* tp_methods */
@ -425,8 +425,8 @@ PyTypeObject StatementType = {
0 /* tp_free */ 0 /* tp_free */
}; };
extern int statement_setup_types(void) extern int pysqlite_statement_setup_types(void)
{ {
StatementType.tp_new = PyType_GenericNew; pysqlite_StatementType.tp_new = PyType_GenericNew;
return PyType_Ready(&StatementType); return PyType_Ready(&pysqlite_StatementType);
} }


@ -39,21 +39,21 @@ typedef struct
PyObject* sql; PyObject* sql;
int in_use; int in_use;
PyObject* in_weakreflist; /* List of weak references */ PyObject* in_weakreflist; /* List of weak references */
} Statement; } pysqlite_Statement;
extern PyTypeObject StatementType; extern PyTypeObject pysqlite_StatementType;
int statement_create(Statement* self, Connection* connection, PyObject* sql); int pysqlite_statement_create(pysqlite_Statement* self, pysqlite_Connection* connection, PyObject* sql);
void statement_dealloc(Statement* self); void pysqlite_statement_dealloc(pysqlite_Statement* self);
int statement_bind_parameter(Statement* self, int pos, PyObject* parameter); int pysqlite_statement_bind_parameter(pysqlite_Statement* self, int pos, PyObject* parameter);
void statement_bind_parameters(Statement* self, PyObject* parameters); void pysqlite_statement_bind_parameters(pysqlite_Statement* self, PyObject* parameters);
int statement_recompile(Statement* self, PyObject* parameters); int pysqlite_statement_recompile(pysqlite_Statement* self, PyObject* parameters);
int statement_finalize(Statement* self); int pysqlite_statement_finalize(pysqlite_Statement* self);
int statement_reset(Statement* self); int pysqlite_statement_reset(pysqlite_Statement* self);
void statement_mark_dirty(Statement* self); void pysqlite_statement_mark_dirty(pysqlite_Statement* self);
int statement_setup_types(void); int pysqlite_statement_setup_types(void);
#endif #endif


@ -24,8 +24,7 @@
#include "module.h" #include "module.h"
#include "connection.h" #include "connection.h"
-int _sqlite_step_with_busyhandler(sqlite3_stmt* statement, Connection* connection
-)
+int _sqlite_step_with_busyhandler(sqlite3_stmt* statement, pysqlite_Connection* connection)
{ {
int rc; int rc;
@ -40,7 +39,7 @@ int _sqlite_step_with_busyhandler(sqlite3_stmt* statement, Connection* connectio
* Checks the SQLite error code and sets the appropriate DB-API exception. * Checks the SQLite error code and sets the appropriate DB-API exception.
* Returns the error code (0 means no error occurred). * Returns the error code (0 means no error occurred).
*/ */
int _seterror(sqlite3* db) int _pysqlite_seterror(sqlite3* db)
{ {
int errorcode; int errorcode;
@ -53,7 +52,7 @@ int _seterror(sqlite3* db)
break; break;
case SQLITE_INTERNAL: case SQLITE_INTERNAL:
case SQLITE_NOTFOUND: case SQLITE_NOTFOUND:
PyErr_SetString(InternalError, sqlite3_errmsg(db)); PyErr_SetString(pysqlite_InternalError, sqlite3_errmsg(db));
break; break;
case SQLITE_NOMEM: case SQLITE_NOMEM:
(void)PyErr_NoMemory(); (void)PyErr_NoMemory();
@ -71,23 +70,23 @@ int _seterror(sqlite3* db)
case SQLITE_PROTOCOL: case SQLITE_PROTOCOL:
case SQLITE_EMPTY: case SQLITE_EMPTY:
case SQLITE_SCHEMA: case SQLITE_SCHEMA:
PyErr_SetString(OperationalError, sqlite3_errmsg(db)); PyErr_SetString(pysqlite_OperationalError, sqlite3_errmsg(db));
break; break;
case SQLITE_CORRUPT: case SQLITE_CORRUPT:
PyErr_SetString(DatabaseError, sqlite3_errmsg(db)); PyErr_SetString(pysqlite_DatabaseError, sqlite3_errmsg(db));
break; break;
case SQLITE_TOOBIG: case SQLITE_TOOBIG:
PyErr_SetString(DataError, sqlite3_errmsg(db)); PyErr_SetString(pysqlite_DataError, sqlite3_errmsg(db));
break; break;
case SQLITE_CONSTRAINT: case SQLITE_CONSTRAINT:
case SQLITE_MISMATCH: case SQLITE_MISMATCH:
PyErr_SetString(IntegrityError, sqlite3_errmsg(db)); PyErr_SetString(pysqlite_IntegrityError, sqlite3_errmsg(db));
break; break;
case SQLITE_MISUSE: case SQLITE_MISUSE:
PyErr_SetString(ProgrammingError, sqlite3_errmsg(db)); PyErr_SetString(pysqlite_ProgrammingError, sqlite3_errmsg(db));
break; break;
default: default:
PyErr_SetString(DatabaseError, sqlite3_errmsg(db)); PyErr_SetString(pysqlite_DatabaseError, sqlite3_errmsg(db));
break; break;
} }
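A hedged illustration of the mapping _pysqlite_seterror implements (the schema below is invented): a UNIQUE constraint violation comes back from SQLite as SQLITE_CONSTRAINT and reaches Python as IntegrityError.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (name TEXT UNIQUE)")
    con.execute("INSERT INTO t (name) VALUES ('x')")
    try:
        con.execute("INSERT INTO t (name) VALUES ('x')")
    except sqlite3.IntegrityError, exc:
        print "IntegrityError:", exc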


@ -28,11 +28,11 @@
#include "sqlite3.h" #include "sqlite3.h"
#include "connection.h" #include "connection.h"
int _sqlite_step_with_busyhandler(sqlite3_stmt* statement, Connection* connection); int _sqlite_step_with_busyhandler(sqlite3_stmt* statement, pysqlite_Connection* connection);
/** /**
* Checks the SQLite error code and sets the appropriate DB-API exception. * Checks the SQLite error code and sets the appropriate DB-API exception.
* Returns the error code (0 means no error occurred). * Returns the error code (0 means no error occurred).
*/ */
int _seterror(sqlite3* db); int _pysqlite_seterror(sqlite3* db);
#endif #endif


@ -1050,8 +1050,9 @@ Convert a string or number to an integer, if possible. A floating point\n\
argument will be truncated towards zero (this does not include a string\n\ argument will be truncated towards zero (this does not include a string\n\
representation of a floating point number!) When converting a string, use\n\ representation of a floating point number!) When converting a string, use\n\
the optional base. It is an error to supply a base when converting a\n\ the optional base. It is an error to supply a base when converting a\n\
-non-string. If the argument is outside the integer range a long object\n\
-will be returned instead.");
+non-string. If base is zero, the proper base is guessed based on the\n\
+string content. If the argument is outside the integer range a\n\
+long object will be returned instead.");
static PyNumberMethods int_as_number = { static PyNumberMethods int_as_number = {
(binaryfunc)int_add, /*nb_add*/ (binaryfunc)int_add, /*nb_add*/
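A quick illustration of the base-zero behaviour the docstring now mentions: the base is inferred from the literal's prefix, as with Python source code literals.

    print int("0x1f", 0)   # 31 (hex prefix)
    print int("070", 0)    # 56 (leading zero means octal here)
    print int("42", 0)     # 42 (plain decimal)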


@ -1024,7 +1024,7 @@ frozenset_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{ {
PyObject *iterable = NULL, *result; PyObject *iterable = NULL, *result;
if (!_PyArg_NoKeywords("frozenset()", kwds)) if (type == &PyFrozenSet_Type && !_PyArg_NoKeywords("frozenset()", kwds))
return NULL; return NULL;
if (!PyArg_UnpackTuple(args, type->tp_name, 0, 1, &iterable)) if (!PyArg_UnpackTuple(args, type->tp_name, 0, 1, &iterable))
@ -1068,7 +1068,7 @@ PySet_Fini(void)
static PyObject * static PyObject *
set_new(PyTypeObject *type, PyObject *args, PyObject *kwds) set_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{ {
if (!_PyArg_NoKeywords("set()", kwds)) if (type == &PySet_Type && !_PyArg_NoKeywords("set()", kwds))
return NULL; return NULL;
return make_new_set(type, NULL); return make_new_set(type, NULL);
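A sketch of what SF #1486663 enables (the subclass below is invented): keyword arguments now pass through set_new/frozenset_new when the type is a subclass, so constructors like this stop raising TypeError.

    class TaggedSet(set):
        def __init__(self, iterable=(), tag=None):
            set.__init__(self, iterable)
            self.tag = tag

    s = TaggedSet([1, 2, 3], tag="demo")
    print sorted(s), s.tag   # [1, 2, 3] demo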


@ -3126,7 +3126,7 @@ init_ast(void)
if (PyDict_SetItemString(d, "AST", (PyObject*)AST_type) < 0) return; if (PyDict_SetItemString(d, "AST", (PyObject*)AST_type) < 0) return;
if (PyModule_AddIntConstant(m, "PyCF_ONLY_AST", PyCF_ONLY_AST) < 0) if (PyModule_AddIntConstant(m, "PyCF_ONLY_AST", PyCF_ONLY_AST) < 0)
return; return;
if (PyModule_AddStringConstant(m, "__version__", "53170") < 0) if (PyModule_AddStringConstant(m, "__version__", "53349") < 0)
return; return;
if (PyDict_SetItemString(d, "mod", (PyObject*)mod_type) < 0) return; if (PyDict_SetItemString(d, "mod", (PyObject*)mod_type) < 0) return;
if (PyDict_SetItemString(d, "Module", (PyObject*)Module_type) < 0) if (PyDict_SetItemString(d, "Module", (PyObject*)Module_type) < 0)


@ -34,7 +34,7 @@ NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
WITH THE USE OR PERFORMANCE OF THIS SOFTWARE ! WITH THE USE OR PERFORMANCE OF THIS SOFTWARE !
""" """
import sys, time, operator, string import sys, time, operator, string, platform
from CommandLine import * from CommandLine import *
try: try:
@ -102,27 +102,26 @@ def get_timer(timertype):
def get_machine_details(): def get_machine_details():
-    import platform
     if _debug:
         print 'Getting machine details...'
     buildno, builddate = platform.python_build()
     python = platform.python_version()
-    if python > '2.0':
-        try:
-            unichr(100000)
-        except ValueError:
-            # UCS2 build (standard)
-            unicode = 'UCS2'
-        else:
-            # UCS4 build (most recent Linux distros)
-            unicode = 'UCS4'
-    else:
-        unicode = None
+    try:
+        unichr(100000)
+    except ValueError:
+        # UCS2 build (standard)
+        unicode = 'UCS2'
+    except NameError:
+        unicode = None
+    else:
+        # UCS4 build (most recent Linux distros)
+        unicode = 'UCS4'
bits, linkage = platform.architecture() bits, linkage = platform.architecture()
return { return {
'platform': platform.platform(), 'platform': platform.platform(),
'processor': platform.processor(), 'processor': platform.processor(),
'executable': sys.executable, 'executable': sys.executable,
'implementation': platform.python_implementation(),
'python': platform.python_version(), 'python': platform.python_version(),
'compiler': platform.python_compiler(), 'compiler': platform.python_compiler(),
'buildno': buildno, 'buildno': buildno,
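The rewritten probe above no longer gates on the interpreter version string; it simply tries unichr() and treats its absence as "no unicode support". In isolation the logic looks roughly like this (a sketch assuming Python 2-era semantics, where narrow builds raise ValueError for code points above 0xFFFF):

    def guess_unicode_width():
        # UCS2 (narrow) builds cannot create code points above 0xFFFF;
        # UCS4 (wide) builds can; a missing unichr() means no unicode at all.
        try:
            unichr(100000)
        except ValueError:
            return 'UCS2'
        except NameError:
            return None
        else:
            return 'UCS4'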
@@ -134,17 +133,18 @@ def get_machine_details():
 def print_machine_details(d, indent=''):
     l = ['Machine Details:',
          ' Platform ID: %s' % d.get('platform', 'n/a'),
          ' Processor: %s' % d.get('processor', 'n/a'),
          '',
          'Python:',
+         ' Implementation: %s' % d.get('implementation', 'n/a'),
          ' Executable: %s' % d.get('executable', 'n/a'),
          ' Version: %s' % d.get('python', 'n/a'),
          ' Compiler: %s' % d.get('compiler', 'n/a'),
          ' Bits: %s' % d.get('bits', 'n/a'),
          ' Build: %s (#%s)' % (d.get('builddate', 'n/a'),
                                d.get('buildno', 'n/a')),
          ' Unicode: %s' % d.get('unicode', 'n/a'),
         ]
     print indent + string.join(l, '\n' + indent) + '\n'
@@ -499,8 +499,9 @@ class Benchmark:
     def calibrate(self):
-        print 'Calibrating tests. Please wait...'
+        print 'Calibrating tests. Please wait...',
         if self.verbose:
+            print
             print
             print 'Test min max'
             print '-' * LINE
@@ -514,6 +515,11 @@ class Benchmark:
                       (name,
                        min(test.overhead_times) * MILLI_SECONDS,
                        max(test.overhead_times) * MILLI_SECONDS)
+        if self.verbose:
+            print
+            print 'Done with the calibration.'
+        else:
+            print 'done.'
         print
     def run(self):
@@ -830,7 +836,9 @@ python pybench.py -s p25.pybench -c p21.pybench
         print '-' * LINE
         print 'PYBENCH %s' % __version__
         print '-' * LINE
-        print '* using Python %s' % (string.split(sys.version)[0])
+        print '* using %s %s' % (
+            platform.python_implementation(),
+            string.join(string.split(sys.version), ' '))
         # Switch off garbage collection
         if not withgc:
@@ -839,15 +847,23 @@ python pybench.py -s p25.pybench -c p21.pybench
             except ImportError:
                 print '* Python version doesn\'t support garbage collection'
             else:
-                gc.disable()
-                print '* disabled garbage collection'
+                try:
+                    gc.disable()
+                except NotImplementedError:
+                    print '* Python version doesn\'t support gc.disable'
+                else:
+                    print '* disabled garbage collection'
         # "Disable" sys check interval
         if not withsyscheck:
             # Too bad the check interval uses an int instead of a long...
             value = 2147483647
-            sys.setcheckinterval(value)
-            print '* system check interval set to maximum: %s' % value
+            try:
+                sys.setcheckinterval(value)
+            except (AttributeError, NotImplementedError):
+                print '* Python version doesn\'t support sys.setcheckinterval'
+            else:
+                print '* system check interval set to maximum: %s' % value
         if timer == TIMER_SYSTIMES_PROCESSTIME:
             import systimes
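Both changes follow the same defensive pattern: attempt the CPython-specific tuning call and fall back gracefully on implementations that lack it. Pulled out of the benchmark driver, the idea is roughly (a sketch only; the helper names are illustrative, not from the patch):

    import gc, sys

    def disable_gc_if_possible():
        # Some implementations expose the gc module without gc.disable().
        try:
            gc.disable()
        except NotImplementedError:
            print '* Python version doesn\'t support gc.disable'
        else:
            print '* disabled garbage collection'

    def maximize_check_interval(value=2147483647):
        # sys.setcheckinterval() may be absent or unimplemented elsewhere.
        try:
            sys.setcheckinterval(value)
        except (AttributeError, NotImplementedError):
            print '* Python version doesn\'t support sys.setcheckinterval'
        else:
            print '* system check interval set to maximum: %s' % value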