\chapter{Defining New Types \label{defining-new-types}}
\sectionauthor{Michael Hudson}{mwh@python.net}
\sectionauthor{Dave Kuhlman}{dkuhlman@rexx.com}
\sectionauthor{Jim Fulton}{jim@zope.com}

As mentioned in the last chapter, Python allows the writer of an extension module to define new types that can be manipulated from Python code, much like strings and lists in core Python. This is not hard; the code for all extension types follows a pattern, but there are some details that you need to understand before you can get started.

\begin{notice}
The way new types are defined changed dramatically (and for the better) in Python 2.2. This document describes how to define new types for Python 2.2 and later. If you need to support older versions of Python, you will need to refer to older versions of this documentation.
\end{notice}

\section{The Basics \label{dnt-basics}}

The Python runtime sees all Python objects as variables of type \ctype{PyObject*}. A \ctype{PyObject} is not a very magnificent object --- it just contains the refcount and a pointer to the object's ``type object''. This is where the action is; the type object determines which (C) functions get called when, for instance, an attribute gets looked up on an object or it is multiplied by another object. These C functions are called ``type methods'' to distinguish them from things like \code{[].append} (which we call ``object methods''). So, if you want to define a new object type, you need to create a new type object.

This sort of thing can only be explained by example, so here's a minimal, but complete, module that defines a new type:

\verbatiminput{noddy.c}

Now that's quite a bit to take in at once, but hopefully bits will seem familiar from the last chapter.
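The refcount and type pointer just described can be observed from within Python itself; here is a minimal, CPython-specific sketch using only the standard \module{sys} module:

```python
import sys

x = []
# Every object carries a pointer to its type object;
# the type object determines the object's behavior.
assert type(x) is list
# Every object also carries a reference count (CPython bookkeeping).
# getrefcount reports at least 2 here: the binding 'x' plus the
# temporary reference held by the argument to getrefcount itself.
assert sys.getrefcount(x) >= 2
```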
The first bit that will be new is:

\begin{verbatim}
typedef struct {
    PyObject_HEAD
} noddy_NoddyObject;
\end{verbatim}

This is what a Noddy object will contain---in this case, nothing more than every Python object contains, namely a refcount and a pointer to a type object. These are the fields the \code{PyObject_HEAD} macro brings in. The reason for the macro is to standardize the layout and to enable special debugging fields in debug builds.

Note that there is no semicolon after the \code{PyObject_HEAD} macro; one is included in the macro definition. Be wary of adding one by accident; it's easy to do from habit, and your compiler might not complain, but someone else's probably will! (On Windows, MSVC is known to call this an error and refuse to compile the code.)

For contrast, let's take a look at the corresponding definition for standard Python integers:

\begin{verbatim}
typedef struct {
    PyObject_HEAD
    long ob_ival;
} PyIntObject;
\end{verbatim}

Moving on, we come to the crunch --- the type object.
\begin{verbatim}
static PyTypeObject noddy_NoddyType = {
    PyObject_HEAD_INIT(NULL)
    0,                         /*ob_size*/
    "noddy.Noddy",             /*tp_name*/
    sizeof(noddy_NoddyObject), /*tp_basicsize*/
    0,                         /*tp_itemsize*/
    0,                         /*tp_dealloc*/
    0,                         /*tp_print*/
    0,                         /*tp_getattr*/
    0,                         /*tp_setattr*/
    0,                         /*tp_compare*/
    0,                         /*tp_repr*/
    0,                         /*tp_as_number*/
    0,                         /*tp_as_sequence*/
    0,                         /*tp_as_mapping*/
    0,                         /*tp_hash */
    0,                         /*tp_call*/
    0,                         /*tp_str*/
    0,                         /*tp_getattro*/
    0,                         /*tp_setattro*/
    0,                         /*tp_as_buffer*/
    Py_TPFLAGS_DEFAULT,        /*tp_flags*/
    "Noddy objects",           /* tp_doc */
    0,                         /* tp_traverse */
    0,                         /* tp_clear */
    0,                         /* tp_richcompare */
    0,                         /* tp_weaklistoffset */
    0,                         /* tp_iter */
    0,                         /* tp_iternext */
    0,                         /* tp_methods */
    0,                         /* tp_members */
    0,                         /* tp_getset */
    0,                         /* tp_base */
    0,                         /* tp_dict */
    0,                         /* tp_descr_get */
    0,                         /* tp_descr_set */
    0,                         /* tp_dictoffset */
    0,                         /* tp_init */
    0,                         /* tp_alloc */
    PyType_GenericNew,         /* tp_new */
};
\end{verbatim}

Now if you go and look up the definition of \ctype{PyTypeObject} in \file{object.h} you'll see that it has many more fields than the definition above. The remaining fields will be filled with zeros by the C compiler, and it's common practice to not specify them explicitly unless you need them.

This is so important that we're going to pick the top of it apart still further:

\begin{verbatim}
    PyObject_HEAD_INIT(NULL)
\end{verbatim}

This line is a bit of a wart; what we'd like to write is:

\begin{verbatim}
    PyObject_HEAD_INIT(&PyType_Type)
\end{verbatim}

as the type of a type object is ``type'', but this isn't strictly conforming C and some compilers complain. Fortunately, this member will be filled in for us by \cfunction{PyType_Ready()}.

\begin{verbatim}
    0,                         /* ob_size */
\end{verbatim}

The \member{ob_size} field of the header is not used; its presence in the type structure is a historical artifact that is maintained for binary compatibility with extension modules compiled for older versions of Python. Always set this field to zero.
\begin{verbatim}
    "noddy.Noddy",             /* tp_name */
\end{verbatim}

The name of our type. This will appear in the default textual representation of our objects and in some error messages, for example:

\begin{verbatim}
>>> "" + noddy.Noddy()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: cannot add type "noddy.Noddy" to string
\end{verbatim}

Note that the name is a dotted name that includes both the module name and the name of the type within the module. The module in this case is \module{noddy} and the type is \class{Noddy}, so we set the type name to \class{noddy.Noddy}.

\begin{verbatim}
    sizeof(noddy_NoddyObject), /* tp_basicsize */
\end{verbatim}

This is so that Python knows how much memory to allocate when you call \cfunction{PyObject_New}.

\begin{verbatim}
    0,                         /* tp_itemsize */
\end{verbatim}

This has to do with variable length objects like lists and strings. Ignore this for now.

Skipping a number of type methods that we don't provide, we set the class flags to \constant{Py_TPFLAGS_DEFAULT}.

\begin{verbatim}
    Py_TPFLAGS_DEFAULT,        /*tp_flags*/
\end{verbatim}

All types should include this constant in their flags. It enables all of the members defined by the current version of Python.

We provide a doc string for the type in \member{tp_doc}.

\begin{verbatim}
    "Noddy objects",           /* tp_doc */
\end{verbatim}

Now we get into the type methods, the things that make your objects different from the others. We aren't going to implement any of these in this version of the module. We'll expand this example later to have more interesting behavior.

For now, all we want to be able to do is to create new \class{Noddy} objects. To enable object creation, we have to provide a \member{tp_new} implementation. In this case, we can just use the default implementation provided by the API function \cfunction{PyType_GenericNew}.
\begin{verbatim}
    PyType_GenericNew,         /* tp_new */
\end{verbatim}

All the other type methods are \NULL; we'll go over them in a later section.

Everything else in the file should be familiar, except for some code in \cfunction{initnoddy}:

\begin{verbatim}
    if (PyType_Ready(&noddy_NoddyType) < 0)
        return;
\end{verbatim}

This initializes the \class{Noddy} type, filling in a number of members, including \member{ob_type} that we initially set to \NULL.

\begin{verbatim}
    PyModule_AddObject(m, "Noddy", (PyObject *)&noddy_NoddyType);
\end{verbatim}

This adds the type to the module dictionary. This allows us to create \class{Noddy} instances by calling the \class{Noddy} class:

\begin{verbatim}
import noddy
mynoddy = noddy.Noddy()
\end{verbatim}

That's it! All that remains is to build it; put the above code in a file called \file{noddy.c} and

\begin{verbatim}
from distutils.core import setup, Extension
setup(name="noddy", version="1.0",
      ext_modules=[Extension("noddy", ["noddy.c"])])
\end{verbatim}

in a file called \file{setup.py}; then typing

\begin{verbatim}
$ python setup.py build
\end{verbatim} %$ <-- bow to font-lock ;-(

at a shell should produce a file \file{noddy.so} in a subdirectory; move to that directory and fire up Python --- you should be able to \code{import noddy} and play around with Noddy objects.

That wasn't so hard, was it? Of course, the current Noddy type is pretty uninteresting. It has no data and doesn't do anything. It can't even be subclassed.

\subsection{Adding data and methods to the Basic example}

Let's expand the basic example to add some data and methods. Let's also make the type usable as a base class. We'll create a new module, \module{noddy2}, that adds these capabilities:

\verbatiminput{noddy2.c}

This version of the module has a number of changes.

We've added an extra include:

\begin{verbatim}
#include "structmember.h"
\end{verbatim}

This include provides declarations that we use to handle attributes, as described a bit later.
The name of the \class{Noddy} object structure has been shortened to \class{Noddy}. The type object name has been shortened to \class{NoddyType}.

The \class{Noddy} type now has three data attributes, \var{first}, \var{last}, and \var{number}. The \var{first} and \var{last} variables are Python strings containing first and last names. The \var{number} attribute is an integer.

The object structure is updated accordingly:

\begin{verbatim}
typedef struct {
    PyObject_HEAD
    PyObject *first;
    PyObject *last;
    int number;
} Noddy;
\end{verbatim}

Because we now have data to manage, we have to be more careful about object allocation and deallocation. At a minimum, we need a deallocation method:

\begin{verbatim}
static void
Noddy_dealloc(Noddy* self)
{
    Py_XDECREF(self->first);
    Py_XDECREF(self->last);
    self->ob_type->tp_free((PyObject*)self);
}
\end{verbatim}

which is assigned to the \member{tp_dealloc} member:

\begin{verbatim}
    (destructor)Noddy_dealloc, /*tp_dealloc*/
\end{verbatim}

This method decrements the reference counts of the two Python attributes. We use \cfunction{Py_XDECREF} here because the \member{first} and \member{last} members could be \NULL. It then calls the \member{tp_free} member of the object's type to free the object's memory. Note that the object's type might not be \class{NoddyType}, because the object may be an instance of a subclass.
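There is no direct pure-Python spelling of \cfunction{Py_DECREF}, but the effect of the last reference disappearing can be sketched with a \method{__del__} method, which is roughly what a \member{tp_dealloc} slot implements in C. This is a CPython-specific sketch, and the \class{Tracker} class is a made-up example:

```python
import sys

class Tracker(object):
    """Counts deallocations, the way a tp_dealloc slot would observe them."""
    deleted = 0

    def __del__(self):          # rough analog of the tp_dealloc slot
        Tracker.deleted += 1

t = Tracker()
# At least 2 references here: the binding 't' plus getrefcount's argument.
assert sys.getrefcount(t) >= 2
del t                           # drop the last reference; CPython reclaims immediately
assert Tracker.deleted == 1
```

Because CPython reclaims objects as soon as the reference count hits zero, the \method{__del__} call happens deterministically at the \code{del} statement; other Python implementations may defer it.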
We want to make sure that the first and last names are initialized to empty strings, so we provide a new method:

\begin{verbatim}
static PyObject *
Noddy_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
    Noddy *self;

    self = (Noddy *)type->tp_alloc(type, 0);
    if (self != NULL) {
        self->first = PyString_FromString("");
        if (self->first == NULL) {
            Py_DECREF(self);
            return NULL;
        }

        self->last = PyString_FromString("");
        if (self->last == NULL) {
            Py_DECREF(self);
            return NULL;
        }

        self->number = 0;
    }

    return (PyObject *)self;
}
\end{verbatim}

and install it in the \member{tp_new} member:

\begin{verbatim}
    Noddy_new,                 /* tp_new */
\end{verbatim}

The new member is responsible for creating (as opposed to initializing) objects of the type. It is exposed in Python as the \method{__new__} method. See the paper titled ``Unifying types and classes in Python'' for a detailed discussion of the \method{__new__} method. One reason to implement a new method is to assure the initial values of instance variables. In this case, we use the new method to make sure that the initial values of the members \member{first} and \member{last} are not \NULL. If we didn't care whether the initial values were \NULL, we could have used \cfunction{PyType_GenericNew} as our new method, as we did before. \cfunction{PyType_GenericNew} initializes all of the instance variable members to \NULL.

The new method is a static method that is passed the type being instantiated and any arguments passed when the type was called, and that returns the new object created. New methods always accept positional and keyword arguments, but they often ignore the arguments, leaving the argument handling to initializer methods. Note that if the type supports subclassing, the type passed may not be the type being defined. The new method calls the \member{tp_alloc} slot to allocate memory. We don't fill the \member{tp_alloc} slot ourselves.
Rather \cfunction{PyType_Ready()} fills it for us by inheriting it from our base class, which is \class{object} by default. Most types use the default allocation.

We provide an initialization function:

\begin{verbatim}
static int
Noddy_init(Noddy *self, PyObject *args, PyObject *kwds)
{
    PyObject *first=NULL, *last=NULL;

    static char *kwlist[] = {"first", "last", "number", NULL};

    if (! PyArg_ParseTupleAndKeywords(args, kwds, "|OOi", kwlist,
                                      &first, &last,
                                      &self->number))
        return -1;

    if (first) {
        Py_XDECREF(self->first);
        Py_INCREF(first);
        self->first = first;
    }

    if (last) {
        Py_XDECREF(self->last);
        Py_INCREF(last);
        self->last = last;
    }

    return 0;
}
\end{verbatim}

by filling the \member{tp_init} slot.

\begin{verbatim}
    (initproc)Noddy_init,      /* tp_init */
\end{verbatim}

The \member{tp_init} slot is exposed in Python as the \method{__init__} method. It is used to initialize an object after it's created. Note that, unlike the new method, an initializer returns an \ctype{int}: \code{0} on success and \code{-1} on failure. Unlike the new method, we can't guarantee that the initializer is called. The initializer isn't called when unpickling objects and it can be overridden. Our initializer accepts arguments to provide initial values for our instance. Initializers always accept positional and keyword arguments.

We want to expose our instance variables as attributes. There are a number of ways to do that. The simplest way is to define member definitions:

\begin{verbatim}
static PyMemberDef Noddy_members[] = {
    {"first", T_OBJECT_EX, offsetof(Noddy, first), 0,
     "first name"},
    {"last", T_OBJECT_EX, offsetof(Noddy, last), 0,
     "last name"},
    {"number", T_INT, offsetof(Noddy, number), 0,
     "noddy number"},
    {NULL}  /* Sentinel */
};
\end{verbatim}

and put the definitions in the \member{tp_members} slot:

\begin{verbatim}
    Noddy_members,             /* tp_members */
\end{verbatim}

Each member definition has a member name, type, offset, access flags and documentation string. See the ``Generic Attribute Management'' section below for details.
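The division of labor between \member{tp_new} and \member{tp_init} corresponds to \method{__new__} and \method{__init__} in Python. Here is a rough pure-Python sketch of the behavior the C code above implements:

```python
class Noddy(object):
    def __new__(cls, *args, **kwds):
        # Like Noddy_new: create the object and guarantee the members exist.
        self = super(Noddy, cls).__new__(cls)
        self.first = ""
        self.last = ""
        self.number = 0
        return self

    def __init__(self, first=None, last=None, number=0):
        # Like Noddy_init: overwrite the defaults only for values passed in.
        if first is not None:
            self.first = first
        if last is not None:
            self.last = last
        self.number = number

n = Noddy()
assert (n.first, n.last, n.number) == ("", "", 0)
m = Noddy("jim", "fulton", 42)
assert (m.first, m.last, m.number) == ("jim", "fulton", 42)
```

As in the C version, \method{__new__} guarantees the attributes exist even if \method{__init__} is never run (for example during unpickling).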
A disadvantage of this approach is that it doesn't provide a way to restrict the types of objects that can be assigned to the Python attributes. We expect the first and last names to be strings, but any Python objects can be assigned. Further, the attributes can be deleted, setting the C pointers to \NULL. Even though we can make sure the members are initialized to non-\NULL{} values, the members can be set to \NULL{} if the attributes are deleted.

We define a single method, \method{name}, that outputs the object's name as the concatenation of the first and last names.

\begin{verbatim}
static PyObject *
Noddy_name(Noddy* self)
{
    static PyObject *format = NULL;
    PyObject *args, *result;

    if (format == NULL) {
        format = PyString_FromString("%s %s");
        if (format == NULL)
            return NULL;
    }

    if (self->first == NULL) {
        PyErr_SetString(PyExc_AttributeError, "first");
        return NULL;
    }

    if (self->last == NULL) {
        PyErr_SetString(PyExc_AttributeError, "last");
        return NULL;
    }

    args = Py_BuildValue("OO", self->first, self->last);
    if (args == NULL)
        return NULL;

    result = PyString_Format(format, args);
    Py_DECREF(args);

    return result;
}
\end{verbatim}

The method is implemented as a C function that takes a \class{Noddy} (or \class{Noddy} subclass) instance as the first argument. Methods always take an instance as the first argument. Methods often take positional and keyword arguments as well, but in this case we don't take any and don't need to accept a positional argument tuple or keyword argument dictionary. This method is equivalent to the Python method:

\begin{verbatim}
def name(self):
    return "%s %s" % (self.first, self.last)
\end{verbatim}

Note that we have to check for the possibility that our \member{first} and \member{last} members are \NULL. This is because they can be deleted, in which case they are set to \NULL. It would be better to prevent deletion of these attributes and to restrict the attribute values to be strings. We'll see how to do that in the next section.
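The same failure mode can be seen in pure Python: once an attribute is deleted, \method{name()} raises \exception{AttributeError}, just as the \NULL{} checks in the C version anticipate. A sketch:

```python
class Noddy(object):
    def __init__(self, first="", last=""):
        self.first = first
        self.last = last

    def name(self):
        # Equivalent of Noddy_name(); relies on both attributes existing.
        return "%s %s" % (self.first, self.last)

n = Noddy("jim", "fulton")
assert n.name() == "jim fulton"

del n.first              # nothing prevents deleting the attribute...
raised = False
try:
    n.name()             # ...so name() now fails, like the C NULL checks
except AttributeError:
    raised = True
assert raised
```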
Now that we've defined the method, we need to create an array of method definitions:

\begin{verbatim}
static PyMethodDef Noddy_methods[] = {
    {"name", (PyCFunction)Noddy_name, METH_NOARGS,
     "Return the name, combining the first and last name"
    },
    {NULL}  /* Sentinel */
};
\end{verbatim}

and assign them to the \member{tp_methods} slot:

\begin{verbatim}
    Noddy_methods,             /* tp_methods */
\end{verbatim}

Note that we used the \constant{METH_NOARGS} flag to indicate that the method is passed no arguments.

Finally, we'll make our type usable as a base class. We've written our methods carefully so far so that they don't make any assumptions about the type of the object being created or used, so all we need to do is to add the \constant{Py_TPFLAGS_BASETYPE} to our class flag definition:

\begin{verbatim}
    Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /*tp_flags*/
\end{verbatim}

We rename \cfunction{initnoddy} to \cfunction{initnoddy2} and update the module name passed to \cfunction{Py_InitModule3}.

Finally, we update our \file{setup.py} file to build the new module:

\begin{verbatim}
from distutils.core import setup, Extension
setup(name="noddy", version="1.0",
      ext_modules=[
         Extension("noddy", ["noddy.c"]),
         Extension("noddy2", ["noddy2.c"]),
         ])
\end{verbatim}

\subsection{Providing finer control over data attributes}

In this section, we'll provide finer control over how the \member{first} and \member{last} attributes are set in the \class{Noddy} example. In the previous version of our module, the instance variables \member{first} and \member{last} could be set to non-string values or even deleted. We want to make sure that these attributes always contain strings.

\verbatiminput{noddy3.c}

To provide greater control over the \member{first} and \member{last} attributes, we'll use custom getter and setter functions.
Here are the functions for getting and setting the \member{first} attribute:

\begin{verbatim}
static PyObject *
Noddy_getfirst(Noddy *self, void *closure)
{
    Py_INCREF(self->first);
    return self->first;
}

static int
Noddy_setfirst(Noddy *self, PyObject *value, void *closure)
{
    if (value == NULL) {
        PyErr_SetString(PyExc_TypeError, "Cannot delete the first attribute");
        return -1;
    }

    if (! PyString_Check(value)) {
        PyErr_SetString(PyExc_TypeError,
                        "The first attribute value must be a string");
        return -1;
    }

    Py_DECREF(self->first);
    Py_INCREF(value);
    self->first = value;

    return 0;
}
\end{verbatim}

The getter function is passed a \class{Noddy} object and a ``closure'', which is a void pointer. In this case, the closure is ignored. (The closure supports an advanced usage in which definition data is passed to the getter and setter. This could, for example, be used to allow a single set of getter and setter functions that decide the attribute to get or set based on data in the closure.)

The setter function is passed the \class{Noddy} object, the new value, and the closure. The new value may be \NULL, in which case the attribute is being deleted. In our setter, we raise an error if the attribute is deleted or if the attribute value is not a string.

We create an array of \ctype{PyGetSetDef} structures:

\begin{verbatim}
static PyGetSetDef Noddy_getseters[] = {
    {"first",
     (getter)Noddy_getfirst, (setter)Noddy_setfirst,
     "first name",
     NULL},
    {"last",
     (getter)Noddy_getlast, (setter)Noddy_setlast,
     "last name",
     NULL},
    {NULL}  /* Sentinel */
};
\end{verbatim}

and register it in the \member{tp_getset} slot:

\begin{verbatim}
    Noddy_getseters,           /* tp_getset */
\end{verbatim}

to register our attribute getters and setters.

The last item in a \ctype{PyGetSetDef} structure is the closure mentioned above. In this case, we aren't using the closure, so we just pass \NULL.
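Getter and setter pairs registered through \member{tp_getset} behave like Python properties. A rough pure-Python equivalent of the checks above (the underscore-prefixed helpers are made-up names for this sketch):

```python
class Noddy(object):
    def __init__(self):
        self._first = ""

    def _getfirst(self):
        # like Noddy_getfirst: hand out the current value
        return self._first

    def _setfirst(self, value):
        # like Noddy_setfirst: reject non-string values
        if not isinstance(value, str):
            raise TypeError("The first attribute value must be a string")
        self._first = value

    def _delfirst(self):
        # deletion reaches the C setter as NULL; here it is a deleter
        raise TypeError("Cannot delete the first attribute")

    first = property(_getfirst, _setfirst, _delfirst, "first name")

n = Noddy()
n.first = "jim"
assert n.first == "jim"
rejected = False
try:
    n.first = 42          # non-string assignment is refused
except TypeError:
    rejected = True
assert rejected
```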
We also remove the member definitions for these attributes:

\begin{verbatim}
static PyMemberDef Noddy_members[] = {
    {"number", T_INT, offsetof(Noddy, number), 0,
     "noddy number"},
    {NULL}  /* Sentinel */
};
\end{verbatim}

With these changes, we can assure that the \member{first} and \member{last} members are never \NULL{} so we can remove checks for \NULL{} values in almost all cases. This means that most of the \cfunction{Py_XDECREF} calls can be converted to \cfunction{Py_DECREF} calls. The only place we can't change these calls is in the deallocator, where there is the possibility that the initialization of these members failed in the constructor.

We also rename the module initialization function and module name in the initialization function, as we did before, and we add an extra definition to the \file{setup.py} file.

\section{Type Methods \label{dnt-type-methods}}

This section aims to give a quick fly-by on the various type methods you can implement and what they do. Here is the definition of \ctype{PyTypeObject}, with some fields only used in debug builds omitted:

\verbatiminput{typestruct.h}

Now that's a \emph{lot} of methods. Don't worry too much though --- if you have a type you want to define, the chances are very good that you will only implement a handful of these.

As you probably expect by now, we're going to go over this and give more information about the various handlers. We won't go in the order they are defined in the structure, because there is a lot of historical baggage that impacts the ordering of the fields; be sure your type initialization keeps the fields in the right order! It's often easiest to find an example that includes all the fields you need (even if they're initialized to \code{0}) and then change the values to suit your new type.

\begin{verbatim}
char *tp_name; /* For printing */
\end{verbatim}

The name of the type --- as mentioned in the last section, this will appear in various places, almost entirely for diagnostic purposes.
Try to choose something that will be helpful in such a situation!

\begin{verbatim}
int tp_basicsize, tp_itemsize; /* For allocation */
\end{verbatim}

These fields tell the runtime how much memory to allocate when new objects of this type are created. Python has some built-in support for variable length structures (think: strings, lists) which is where the \member{tp_itemsize} field comes in. This will be dealt with later.

\begin{verbatim}
char *tp_doc;
\end{verbatim}

Here you can put a string (or its address) that you want returned when the Python script references \code{obj.__doc__} to retrieve the docstring.

Now we come to the basic type methods---the ones most extension types will implement.

\subsection{Finalization and De-allocation}

\index{object!deallocation}
\index{deallocation, object}
\index{object!finalization}
\index{finalization, of objects}

\begin{verbatim}
destructor tp_dealloc;
\end{verbatim}

This function is called when the reference count of the instance of your type is reduced to zero and the Python interpreter wants to reclaim it. If your type has memory to free or other clean-up to perform, put it here. The object itself needs to be freed here as well. Here is an example of this function:

\begin{verbatim}
static void
newdatatype_dealloc(newdatatypeobject * obj)
{
    free(obj->obj_UnderlyingDatatypePtr);
    obj->ob_type->tp_free(obj);
}
\end{verbatim}

One important requirement of the deallocator function is that it leaves any pending exceptions alone. This is important since deallocators are frequently called as the interpreter unwinds the Python stack; when the stack is unwound due to an exception (rather than normal returns), nothing is done to protect the deallocators from seeing that an exception has already been set. Any actions which a deallocator performs which may cause additional Python code to be executed may detect that an exception has been set. This can lead to misleading errors from the interpreter.
The proper way to protect against this is to save a pending exception before performing the unsafe action, and restoring it when done. This can be done using the \cfunction{PyErr_Fetch()}\ttindex{PyErr_Fetch()} and \cfunction{PyErr_Restore()}\ttindex{PyErr_Restore()} functions:

\begin{verbatim}
static void
my_dealloc(PyObject *obj)
{
    MyObject *self = (MyObject *) obj;
    PyObject *cbresult;

    if (self->my_callback != NULL) {
        PyObject *err_type, *err_value, *err_traceback;
        int have_error = PyErr_Occurred() ? 1 : 0;

        if (have_error)
            PyErr_Fetch(&err_type, &err_value, &err_traceback);

        cbresult = PyObject_CallObject(self->my_callback, NULL);
        if (cbresult == NULL)
            PyErr_WriteUnraisable(self->my_callback);
        else
            Py_DECREF(cbresult);

        if (have_error)
            PyErr_Restore(err_type, err_value, err_traceback);

        Py_DECREF(self->my_callback);
    }
    obj->ob_type->tp_free((PyObject*)self);
}
\end{verbatim}

\subsection{Object Presentation}

In Python, there are three ways to generate a textual representation of an object: the \function{repr()}\bifuncindex{repr} function (or equivalent backtick syntax), the \function{str()}\bifuncindex{str} function, and the \keyword{print} statement. For most objects, the \keyword{print} statement is equivalent to the \function{str()} function, but it is possible to special-case printing to a \ctype{FILE*} if necessary; this should only be done if efficiency is identified as a problem and profiling suggests that creating a temporary string object to be written to a file is too expensive.

These handlers are all optional, and most types need to implement at most the \member{tp_str} and \member{tp_repr} handlers.

\begin{verbatim}
reprfunc tp_repr;
reprfunc tp_str;
printfunc tp_print;
\end{verbatim}

The \member{tp_repr} handler should return a string object containing a representation of the instance for which it is called.
Here is a simple example:

\begin{verbatim}
static PyObject *
newdatatype_repr(newdatatypeobject * obj)
{
    return PyString_FromFormat("Repr-ified_newdatatype{{size:%d}}",
                               obj->obj_UnderlyingDatatypePtr->size);
}
\end{verbatim}

If no \member{tp_repr} handler is specified, the interpreter will supply a representation that uses the type's \member{tp_name} and a uniquely-identifying value for the object.

The \member{tp_str} handler is to \function{str()} what the \member{tp_repr} handler described above is to \function{repr()}; that is, it is called when Python code calls \function{str()} on an instance of your object. Its implementation is very similar to the \member{tp_repr} function, but the resulting string is intended for human consumption. If \member{tp_str} is not specified, the \member{tp_repr} handler is used instead.

Here is a simple example:

\begin{verbatim}
static PyObject *
newdatatype_str(newdatatypeobject * obj)
{
    return PyString_FromFormat("Stringified_newdatatype{{size:%d}}",
                               obj->obj_UnderlyingDatatypePtr->size);
}
\end{verbatim}

The print function will be called whenever Python needs to ``print'' an instance of the type. For example, if \code{node} is an instance of type \class{TreeNode}, then the print function is called when Python code calls:

\begin{verbatim}
print node
\end{verbatim}

There is a flags argument and one flag, \constant{Py_PRINT_RAW}, and it suggests that you print without string quotes and possibly without interpreting escape sequences.

The print function receives a file object as an argument. You will likely want to write to that file object.
Here is a sample print function:

\begin{verbatim}
static int
newdatatype_print(newdatatypeobject *obj, FILE *fp, int flags)
{
    if (flags & Py_PRINT_RAW) {
        fprintf(fp, "<{newdatatype object--size: %d}>",
                obj->obj_UnderlyingDatatypePtr->size);
    }
    else {
        fprintf(fp, "\"<{newdatatype object--size: %d}>\"",
                obj->obj_UnderlyingDatatypePtr->size);
    }
    return 0;
}
\end{verbatim}

\subsection{Attribute Management}

For every object which can support attributes, the corresponding type must provide the functions that control how the attributes are resolved. There needs to be a function which can retrieve attributes (if any are defined), and another to set attributes (if setting attributes is allowed). Removing an attribute is a special case, for which the new value passed to the handler is \NULL.

Python supports two pairs of attribute handlers; a type that supports attributes only needs to implement the functions for one pair. The difference is that one pair takes the name of the attribute as a \ctype{char*}, while the other accepts a \ctype{PyObject*}. Each type can use whichever pair makes more sense for the implementation's convenience.

\begin{verbatim}
getattrfunc  tp_getattr;   /* char * version */
setattrfunc  tp_setattr;
/* ... */
getattrofunc tp_getattro;  /* PyObject * version */
setattrofunc tp_setattro;
\end{verbatim}

If accessing attributes of an object is always a simple operation (this will be explained shortly), there are generic implementations which can be used to provide the \ctype{PyObject*} version of the attribute management functions. The actual need for type-specific attribute handlers almost completely disappeared starting with Python 2.2, though there are many examples which have not been updated to use some of the new generic mechanism that is available.

\subsubsection{Generic Attribute Management}

\versionadded{2.2}

Most extension types only use \emph{simple} attributes. So, what makes the attributes simple?
There are only a couple of conditions that must be met:

\begin{enumerate}
\item The name of the attributes must be known when \cfunction{PyType_Ready()} is called.
\item No special processing is needed to record that an attribute was looked up or set, nor do actions need to be taken based on the value.
\end{enumerate}

Note that this list does not place any restrictions on the values of the attributes, when the values are computed, or how relevant data is stored.

When \cfunction{PyType_Ready()} is called, it uses three tables referenced by the type object to create \emph{descriptors} which are placed in the dictionary of the type object. Each descriptor controls access to one attribute of the instance object. Each of the tables is optional; if all three are \NULL, instances of the type will only have attributes that are inherited from their base type, and should leave the \member{tp_getattro} and \member{tp_setattro} fields \NULL{} as well, allowing the base type to handle attributes.

The tables are declared as three fields of the type object:

\begin{verbatim}
struct PyMethodDef *tp_methods;
struct PyMemberDef *tp_members;
struct PyGetSetDef *tp_getset;
\end{verbatim}

If \member{tp_methods} is not \NULL, it must refer to an array of \ctype{PyMethodDef} structures. Each entry in the table is an instance of this structure:

\begin{verbatim}
typedef struct PyMethodDef {
    char        *ml_name;  /* method name */
    PyCFunction  ml_meth;  /* implementation function */
    int          ml_flags; /* flags */
    char        *ml_doc;   /* docstring */
} PyMethodDef;
\end{verbatim}

One entry should be defined for each method provided by the type; no entries are needed for methods inherited from a base type. One additional entry is needed at the end; it is a sentinel that marks the end of the array. The \member{ml_name} field of the sentinel must be \NULL.

XXX Need to refer to some unified discussion of the structure fields, shared with the next section.
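In Python terms, each \member{tp_methods} entry becomes a method whose name and docstring come from the \member{ml_name} and \member{ml_doc} fields, and the docstring is then available through ordinary introspection. A pure-Python sketch of the equivalent:

```python
class Noddy(object):
    def name(self):
        "Return the name, combining the first and last name"
        return "%s %s" % (self.first, self.last)

# The docstring supplied in the table is retrievable from the class,
# just as ml_doc is for a method defined in C.
assert Noddy.name.__doc__ == (
    "Return the name, combining the first and last name")
```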
The second table is used to define attributes which map directly to data stored in the instance. A variety of primitive C types are supported, and access may be read-only or read-write. The structures in the table are defined as:

\begin{verbatim}
typedef struct PyMemberDef {
    char *name;
    int   type;
    int   offset;
    int   flags;
    char *doc;
} PyMemberDef;
\end{verbatim}

For each entry in the table, a descriptor will be constructed and added to the type which will be able to extract a value from the instance structure. The \member{type} field should contain one of the type codes defined in the \file{structmember.h} header; the value will be used to determine how to convert Python values to and from C values. The \member{flags} field is used to store flags which control how the attribute can be accessed.

XXX Need to move some of this to a shared section!

The following flag constants are defined in \file{structmember.h}; they may be combined using bitwise-OR.

\begin{tableii}{l|l}{constant}{Constant}{Meaning}
  \lineii{READONLY \ttindex{READONLY}}{Never writable.}
  \lineii{RO \ttindex{RO}}{Shorthand for \constant{READONLY}.}
  \lineii{READ_RESTRICTED \ttindex{READ_RESTRICTED}}{Not readable in restricted mode.}
  \lineii{WRITE_RESTRICTED \ttindex{WRITE_RESTRICTED}}{Not writable in restricted mode.}
  \lineii{RESTRICTED \ttindex{RESTRICTED}}{Not readable or writable in restricted mode.}
\end{tableii}

An interesting advantage of using the \member{tp_members} table to build descriptors that are used at runtime is that any attribute defined this way can have an associated docstring simply by providing the text in the table. An application can use the introspection API to retrieve the descriptor from the class object, and get the docstring using its \member{__doc__} attribute.

As with the \member{tp_methods} table, a sentinel entry with a \member{name} value of \NULL{} is required.

% XXX Descriptors need to be explained in more detail somewhere, but
% not here.
% % Descriptor objects have two handler functions which correspond to % the \member{tp_getattro} and \member{tp_setattro} handlers. The % \method{__get__()} handler is a function which is passed the % descriptor, instance, and type objects, and returns the value of the % attribute, or it returns \NULL{} and sets an exception. The % \method{__set__()} handler is passed the descriptor, instance, type, % and new value; \subsubsection{Type-specific Attribute Management} For simplicity, only the \ctype{char*} version will be demonstrated here; the type of the name parameter is the only difference between the \ctype{char*} and \ctype{PyObject*} flavors of the interface. This example effectively does the same thing as the generic example above, but does not use the generic support added in Python 2.2. The value in showing this is two-fold: it demonstrates how basic attribute management can be done in a way that is portable to older versions of Python, and explains how the handler functions are called, so that if you do need to extend their functionality, you'll understand what needs to be done. The \member{tp_getattr} handler is called when the object requires an attribute look-up. It is called in the same situations where the \method{__getattr__()} method of a class would be called. A likely way to handle this is (1) to implement a set of functions (such as \cfunction{newdatatype_getSize()} and \cfunction{newdatatype_setSize()} in the example below), (2) provide a method table listing these functions, and (3) provide a getattr function that returns the result of a lookup in that table. The method table uses the same structure as the \member{tp_methods} field of the type object. 
Here is an example:

\begin{verbatim}
static PyMethodDef newdatatype_methods[] = {
    {"getSize", (PyCFunction)newdatatype_getSize, METH_VARARGS,
     "Return the current size."},
    {"setSize", (PyCFunction)newdatatype_setSize, METH_VARARGS,
     "Set the size."},
    {NULL, NULL, 0, NULL}           /* sentinel */
};

static PyObject *
newdatatype_getattr(newdatatypeobject *obj, char *name)
{
    return Py_FindMethod(newdatatype_methods, (PyObject *)obj, name);
}
\end{verbatim}

The \member{tp_setattr} handler is called when the
\method{__setattr__()} or \method{__delattr__()} method of a class
instance would be called.  When an attribute should be deleted, the
third parameter will be \NULL.  Here is an example that simply raises
an exception; if this were really all you wanted, the
\member{tp_setattr} handler should be set to \NULL.

\begin{verbatim}
static int
newdatatype_setattr(newdatatypeobject *obj, char *name, PyObject *v)
{
    (void)PyErr_Format(PyExc_RuntimeError,
                       "Read-only attribute: %s", name);
    return -1;
}
\end{verbatim}

\subsection{Object Comparison}

\begin{verbatim}
cmpfunc tp_compare;
\end{verbatim}

The \member{tp_compare} handler is called when comparisons are needed
and the object does not implement the specific rich comparison method
which matches the requested comparison.  (It is always used if defined
and the \cfunction{PyObject_Compare()} or \cfunction{PyObject_Cmp()}
functions are used, or if \function{cmp()} is used from Python.)  It
is analogous to the \method{__cmp__()} method.  This function should
return \code{-1} if \var{obj1} is less than \var{obj2}, \code{0} if
they are equal, and \code{1} if \var{obj1} is greater than \var{obj2}.
(It was previously allowed to return arbitrary negative or positive
integers for less than and greater than, respectively; as of Python
2.2, this is no longer allowed.  In the future, other return values
may be assigned a different meaning.)

A \member{tp_compare} handler may raise an exception.  In this case it
should return a negative value.
The caller has to test for the exception using
\cfunction{PyErr_Occurred()}.  Here is a sample implementation:

\begin{verbatim}
static int
newdatatype_compare(newdatatypeobject *obj1, newdatatypeobject *obj2)
{
    int result;

    if (obj1->obj_UnderlyingDatatypePtr->size <
        obj2->obj_UnderlyingDatatypePtr->size) {
        result = -1;
    }
    else if (obj1->obj_UnderlyingDatatypePtr->size >
             obj2->obj_UnderlyingDatatypePtr->size) {
        result = 1;
    }
    else {
        result = 0;
    }
    return result;
}
\end{verbatim}

\subsection{Abstract Protocol Support}

Python supports a variety of \emph{abstract} `protocols'; the specific
routines used to implement these protocols are documented in the
\citetitle[../api/api.html]{Python/C API Reference Manual} in the
chapter ``\ulink{Abstract Objects Layer}{../api/abstract.html}.''

A number of these abstract interfaces were defined early in the
development of the Python implementation.  In particular, the number,
mapping, and sequence protocols have been part of Python since the
beginning.  Other protocols have been added over time.  For protocols
which depend on several handler routines from the type implementation,
the older protocols have been defined as optional blocks of handlers
referenced by the type object.  For newer protocols there are
additional slots in the main type object, with a flag bit being set to
indicate that the slots are present and should be checked by the
interpreter.  (The flag bit does not indicate that the slot values are
non-\NULL.  The flag may be set to indicate the presence of a slot,
but a slot may still be unfilled.)

\begin{verbatim}
PyNumberMethods   *tp_as_number;
PySequenceMethods *tp_as_sequence;
PyMappingMethods  *tp_as_mapping;
\end{verbatim}

If you wish your object to be able to act like a number, a sequence,
or a mapping object, then you place the address of a structure that
implements the C type \ctype{PyNumberMethods},
\ctype{PySequenceMethods}, or \ctype{PyMappingMethods}, respectively,
in the corresponding field.
It is up to you to fill in this structure with appropriate values.
You can find examples of the use of each of these in the
\file{Objects} directory of the Python source distribution.

\begin{verbatim}
hashfunc tp_hash;
\end{verbatim}

This function, if you choose to provide it, should return a hash
number for an instance of your datatype.  Here is a moderately
pointless example:

\begin{verbatim}
static long
newdatatype_hash(newdatatypeobject *obj)
{
    long result;
    result = obj->obj_UnderlyingDatatypePtr->size;
    result = result * 3;
    return result;
}
\end{verbatim}

\begin{verbatim}
ternaryfunc tp_call;
\end{verbatim}

This function is called when an instance of your datatype is
``called''; for example, if \code{obj1} is an instance of your
datatype and the Python script contains \code{obj1('hello')}, the
\member{tp_call} handler is invoked.

This function takes three arguments:

\begin{enumerate}
\item \var{arg1} is the instance of the datatype which is the subject
      of the call.  If the call is \code{obj1('hello')}, then
      \var{arg1} is \code{obj1}.

\item \var{arg2} is a tuple containing the arguments to the call.  You
      can use \cfunction{PyArg_ParseTuple()} to extract the arguments.

\item \var{arg3} is a dictionary of keyword arguments that were
      passed.  If this is non-\NULL{} and you support keyword
      arguments, use \cfunction{PyArg_ParseTupleAndKeywords()} to
      extract the arguments.  If you do not want to support keyword
      arguments and this is non-\NULL, raise a \exception{TypeError}
      with a message saying that keyword arguments are not supported.
\end{enumerate}

Here is a desultory example of the implementation of the call
function.

\begin{verbatim}
/* Implement the call function.
 *    obj is the instance receiving the call.
 *    args is a tuple containing the arguments to the call, in this
 *         case 3 strings.
 */
static PyObject *
newdatatype_call(newdatatypeobject *obj, PyObject *args, PyObject *other)
{
    PyObject *result;
    char *arg1;
    char *arg2;
    char *arg3;

    if (!PyArg_ParseTuple(args, "sss:call", &arg1, &arg2, &arg3)) {
        return NULL;
    }
    result = PyString_FromFormat(
        "Returning -- value: [%d] arg1: [%s] arg2: [%s] arg3: [%s]\n",
        obj->obj_UnderlyingDatatypePtr->size, arg1, arg2, arg3);
    printf("%s", PyString_AS_STRING(result));
    return result;
}
\end{verbatim}

XXX some fields need to be added here...

\begin{verbatim}
/* Added in release 2.2 */
/* Iterators */
getiterfunc tp_iter;
iternextfunc tp_iternext;
\end{verbatim}

These functions provide support for the iterator protocol.  Any object
which wishes to support iteration over its contents (which may be
generated during iteration) must implement the \code{tp_iter} handler.
Objects which are returned by a \code{tp_iter} handler must implement
both the \code{tp_iter} and \code{tp_iternext} handlers.  Both
handlers take exactly one parameter, the instance for which they are
being called, and return a new reference.  In the case of an error,
they should set an exception and return \NULL.

For an object which represents an iterable collection, the
\code{tp_iter} handler must return an iterator object.  The iterator
object is responsible for maintaining the state of the iteration.  For
collections which can support multiple iterators which do not
interfere with each other (as lists and tuples do), a new iterator
should be created and returned.  Objects which can only be iterated
over once (usually due to side effects of iteration) should implement
this handler by returning a new reference to themselves, and should
also implement the \code{tp_iternext} handler.  File objects are an
example of such an iterator.  Iterator objects should implement both
handlers.
The \code{tp_iter} handler should return a new reference to the
iterator (this is the same as the \code{tp_iter} handler for objects
which can only be iterated over destructively).  The
\code{tp_iternext} handler should return a new reference to the next
object in the iteration if there is one.  If the iteration has reached
the end, it may return \NULL{} without setting an exception or it may
set \exception{StopIteration}; avoiding the exception can yield
slightly better performance.  If an actual error occurs, it should set
an exception and return \NULL.

\subsection{Supporting the Cycle Collector
            \label{example-cycle-support}}

This example shows only enough of the implementation of an extension
type to show how the garbage collector support needs to be added.  It
shows the definition of the object structure, the
\member{tp_traverse}, \member{tp_clear} and \member{tp_dealloc}
implementations, the type structure, and a constructor --- the module
initialization needed to export the constructor to Python is not shown
as there are no special considerations there for the collector.  To
make this interesting, assume that the module exposes ways for the
\member{container} field of the object to be modified.  Note that
since no checks are made on the type of the object used to initialize
\member{container}, we have to assume that it may be a container.

\verbatiminput{cycle-gc.c}

Full details on the APIs related to the cycle detector are in
\ulink{Supporting Cyclic Garbage
Collection}{../api/supporting-cycle-detection.html} in the
\citetitle[../api/api.html]{Python/C API Reference Manual}.

\subsection{More Suggestions}

Remember that you can omit most of these functions, in which case you
provide \code{0} as a value.  There are type definitions for each of
the functions you must provide.  They are in \file{object.h} in the
Python include directory that comes with the source distribution of
Python.
In order to learn how to implement any specific method for your new
datatype, do the following: Download and unpack the Python source
distribution.  Go to the \file{Objects} directory, then search the C
source files for \code{tp_} plus the function you want (for example,
\code{tp_print} or \code{tp_compare}).  You will find examples of the
function you want to implement.

When you need to verify that an object is an instance of the type you
are implementing, use the \cfunction{PyObject_TypeCheck()} function.
A sample of its use might be something like the following:

\begin{verbatim}
if (!PyObject_TypeCheck(some_object, &MyType)) {
    PyErr_SetString(PyExc_TypeError, "arg #1 not a mything");
    return NULL;
}
\end{verbatim}