symbol table for each top-level compilation unit. The information in
the symbol table allows the elimination of the later optimize() pass;
the bytecode generation emits the correct opcodes.
The current version passes the complete regression test, but may still
contain some bugs; it's a fairly substantial revision. The code also
adds an assert() and a test that may lead to a Py_FatalError(); I
expect to remove these before 2.1 beta 1.
The symbol table (struct symtable) is described in comments in the
code.
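For illustration only (the symtable module and the source string here are
not part of this checkin; they just show the Python-visible side of the
same per-scope information):

    import symtable

    src = "def f(a):\n    b = a + 1\n    return b\n"
    top = symtable.symtable(src, "<example>", "exec")

    for scope in top.get_children():      # one table per nested scope
        print(scope.get_name())           # 'f'
        for sym in scope.get_symbols():
            # e.g. 'a' is a parameter and local; 'b' is local only
            print(sym.get_name(), sym.is_parameter(), sym.is_local())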
The changes affect the several com_XXX() functions that were used to
emit LOAD_NAME and its ilk. The primary interface for this bytecode
is now com_addop_varname() which takes a kind and a name, where kind
is one of VAR_LOAD, VAR_STORE, or VAR_DELETE.
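Purely as an illustration from the Python side (dis is not part of this
change), the three kinds correspond to the familiar name opcodes:

    import dis

    # VAR_LOAD / VAR_STORE / VAR_DELETE show up as the usual name opcodes.
    code = compile("x = y\ndel x\n", "<example>", "exec")
    dis.dis(code)   # expect LOAD_NAME y, STORE_NAME x, DELETE_NAME x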
There are many other smaller changes:
- The name mangling code is no longer contained in ifdefs. There are
two functions that expose the mangling logic: com_mangle() and
symtable_mangle(). (A small example of the mangling follows this list.)
- The com_error() function can accept NULL for its first argument;
this is useful when is_constant_false() is called during symbol
table generation.
- The loop index names used by list comprehensions have been changed
from __1__ to [1], so that they cannot be accessed by Python code.
- In com_funcdef(), com_argdefs() is now called before the body of the
function is compiled. This provides consistency with com_lambdef()
and symtable_funcdef().
- Helpers do_pad(), dump(), and DUMP() are added to aid in debugging
the compiler.
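The mangling itself is the usual private-name transformation; a small
Python-level illustration (the Account class is made up, and this shows
behavior only, not the C helpers):

    class Account:
        def __init__(self):
            self.__balance = 0          # stored as _Account__balance

    acct = Account()
    print(hasattr(acct, "__balance"))   # False: the plain name doesn't exist
    print(acct._Account__balance)       # 0: the mangled name does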
Also fixes two long-standing bugs (present in 2.0):
1. .join() didn't check that the result size fit in an int.
2. string.join(s) when len(s)==1 returned s[0] regardless of s[0]'s
type; e.g., "".join([3]) returned 3 (overly optimistic optimization).
I resisted a keen temptation to make .join() apply str() automagically.
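Both fixes are visible from Python; a quick sketch of the second one as it
behaves after the fix:

    print("".join(["only"]))     # 'only' -- still the single element
    try:
        "".join([3])             # used to return 3; now a type error
    except TypeError as exc:
        print("TypeError:", exc)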
for SocketServer.py (inherited by TCPServer)
Luke wrote:
The socketserver code, with a little bit of tweaking, can be made
sufficiently general to service "requests" of any kind, not just requests
that arrive over sockets. The BaseServer class was created, for example,
to poll a table in a MySQL database every 2 seconds; each entry in the
table can be allocated a Handler which deals with that entry.
With this patch, using BaseServer and ThreadedServer classes, the creation
of the server that reads and handles MySQL table entries instead of a
socket was utterly trivial: about 50 lines of python code.
You may consider this code to be utterly useless [why would anyone else
want to do anything like this???] - you are entitled to your opinion. If you
think so, then think of this: have you considered how to cleanly add SSL to
the TCPSocketServer? What about using shared memory as the
communications mechanism for a server, instead of sockets? What about
communication using files?
The SocketServer code is extremely good and very useful. It's just that,
as it stands, it is tied to sockets, which makes it less generally useful.
I heartily approve of this idea.
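As a rough sketch of the pattern Luke describes (QueueServer, EchoHandler
and the queue-as-request-source are made up for illustration; socketserver
is the Python 3 spelling of SocketServer, and handle_request() is spelled
out here because the stock version waits on a file descriptor):

    import queue
    import socketserver

    class QueueServer(socketserver.BaseServer):
        """Serve 'requests' pulled from a Queue instead of a socket."""

        def __init__(self, request_queue, RequestHandlerClass):
            super().__init__(None, RequestHandlerClass)
            self.request_queue = request_queue

        def get_request(self):
            # A 'request' is whatever get_request() returns; here it is
            # a queue item, paired with a dummy client address.
            return self.request_queue.get(), "queue"

        def handle_request(self):
            # Mirrors the documented request flow: get, verify, process.
            request, client_address = self.get_request()
            if self.verify_request(request, client_address):
                try:
                    self.process_request(request, client_address)
                except Exception:
                    self.handle_error(request, client_address)
                    self.close_request(request)

    class EchoHandler(socketserver.BaseRequestHandler):
        def handle(self):
            print("handling:", self.request)

    if __name__ == "__main__":
        q = queue.Queue()
        q.put({"id": 1, "payload": "hello"})
        server = QueueServer(q, EchoHandler)
        server.handle_request()    # handling: {'id': 1, 'payload': 'hello'}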
I found a case where rich comparison of unequal recursive objects gave
unintuitive results. In a discussion with Tim, where we discovered
that our intuition on when a<=b should be true was failing, we decided
to outlaw ordering comparisons on recursive objects. (Once we have
fixed our intuition and designed a matching algorithm that's practical
and reasonable to implement, we can allow such orderings again.)
- Refactored the recursive-object comparison framework; more is now
done in the support routines so less needs to be done in the calling
routines (even at the expense of slowing it down a bit -- this
should normally never be invoked, it's mostly just there to avoid
blowing up the interpreter).
- Changed the framework so that the comparison operator used is also
stored. (The dictionary now stores triples (v, w, op) instead of
pairs (v, w).)
- Changed the nesting limit to a more reasonable small 20; this only
slows down comparisons of very deeply nested objects (unlikely to
occur in practice), while speeding up comparisons of recursive
objects (previously, this would first waste time and space on 500
nested comparisons before it would start detecting recursion).
- Changed rich comparisons for recursive objects to raise a ValueError
exception when recursion is detected for ordering operators (<, <=,
>, >=). (A small example follows this list.)
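A tiny reproduction of the case being outlawed, as mentioned in the last
item above (hedged: the change described here raises ValueError; later
interpreters detect the runaway recursion differently, so both are caught):

    a = []; a.append(a)          # a list that contains itself
    b = []; b.append(b)

    try:
        a <= b                   # ordering two recursive objects
    except (ValueError, RuntimeError) as exc:
        # ValueError per this change; RecursionError (a RuntimeError
        # subclass) on interpreters without the recursion bookkeeping.
        print(type(exc).__name__)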
Unrelated change:
- Moved PyObject_Unicode() to just under PyObject_Str(), where it
belongs. MAL's patch must've inserted it in a random spot between two
functions in the file -- between two helpers for rich comparison...
wraps to 80 chars, and adds some really hacky setting of compiler
options when CC and LDSHARED are given on the make command line.
(The Distutils should probably provide a utility function to
automatically handle a number of common environment variables.)
Check additional include directories for SSL
Don't build modules that are linked into the Python binary statically
Factored the detection of Tkinter out into a method, since it's
the most complicated module to set up
Simplify the logic for detecting Tkinter
- Changed description of rich comparisons to emphasize that < and >
(etc.) are each other's reflection. Also use this word in the note
about the demise of __rcmp__.
Complex numbers now raise exceptions when compared using <, <=, > or >=.
NOTE: This is a tentative change: this means that cmp() involving
complex numbers will raise an exception when the numbers differ, and
that in turn means that e.g. dictionaries and certain other compounds
(e.g. UserLists) containing complex numbers can't be compared either.
So we'll have to decide whether this is acceptable. The alpha test
cycle is a good time to keep an eye on this!
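For the record, the Python-level effect (in a current interpreter the
exception is a TypeError; equality is unaffected):

    print((1 + 2j) == (1 + 2j))    # True: equality still works
    try:
        sorted([3 + 0j, 1 + 0j])   # sorting needs <, which complex refuses
    except TypeError as exc:
        print("TypeError:", exc)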
- In count(), remove(), index(): call RichCompare(Py_EQ).
- Get rid of array_compare(), in favor of new array_richcompare() (a
near clone of list_richcompare()).
- Aligned items in array_methods initializer and comments for type
struct initializer.
- Folded a few long lines.
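Seen from Python, the array changes just mean the usual equality-based
behavior; an illustrative sketch:

    from array import array

    a = array('i', [1, 2, 2, 3])
    b = array('i', [1, 2, 2, 3])

    print(a == b)        # True, via the new array_richcompare()
    print(a.count(2))    # 2 -- count() compares items with ==
    print(a.index(3))    # 3 -- index() likewise
    a.remove(2)          # removes the first item equal to 2
    print(a.tolist())    # [1, 2, 3]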
- Use PyObject_RichCompareBool() when comparing keys; this makes the
error handling cleaner.
- There were two implementations for dictionary comparison, an old one
(#ifdef'ed out) and a new one. Got rid of the old one, which was
abandoned years ago.
- In the characterize() function, part of dictionary comparison, use
PyObject_RichCompareBool() to compare keys and values instead. But
continue to use PyObject_Compare() for comparing the final
(deciding) elements.
- Align the comments in the type struct initializer.
Note: I don't implement rich comparison for dictionaries -- there
doesn't seem to be much to be gained. (The existing comparison
already decides that shorter dicts are always smaller than longer
dicts.)
- tuplecontains(): call RichCompare(Py_EQ).
- Get rid of tuplecompare(), in favor of new tuplerichcompare() (a
clone of list_richcompare()).
- Aligned the comments for large struct initializers.
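The tuplecontains() change just means membership testing uses equality
(with the usual identity shortcut), as for lists; for example:

    t = (1, 2.0, "three")
    print(2 in t)          # True: 2 == 2.0, so Py_EQ decides, not identity
    print("three" in t)    # True
    print(3 in t)          # False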
earlier coercion changes, not by rich comparisons. When a coercion
function returns 1 (meaning it cannot do the coercion), it should not
INCREF the arguments. When no __coerce__() method was found,
instance_coerce() originally returned 0, pretending it had done the
coercion. Neil changed the return
value to 1, more accurately reflecting that it didn't do anything, but
forgot to take out the two INCREF calls.