module. (Small problem: struct.pack() won't deal with the Python long
ints returned by struct.unpack() for the 'L' format. Worked around
that for now.)
-- The whole implementation is now more table-driven.
-- Unsigned integers. Format characters 'B', 'H', 'I' and 'L'
mean unsigned byte, short, int and long. For 'I' and 'L', the return
value is a Python long integer if a Python plain integer can't
represent the required range (note: this depends only on the size of
the relevant C types, not on the sign of the actual value).
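For example (a minimal sketch, assuming a platform where the standard
'L' is 4 bytes, so the value below won't fit in a plain integer there):

    import struct

    data = struct.pack('=L', 4294967295)  # largest 32-bit unsigned value
    (value,) = struct.unpack('=L', data)  # a long integer where a plain int can't hold it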
-- A new format character 's' packs/unpacks a string. When given a
count prefix, this is the size of the string, not a repeat count as it
is for the other format characters; e.g. '10s' means a single 10-byte
string, while '10c' means 10 characters. For packing, the string is
truncated or padded with null bytes as appropriate to make it fit.
For unpacking, the resulting string always has exactly the specified
number of bytes. As a special case, '0s' means a single, empty
string (while '0c' means 0 characters).
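For example (a sketch using modern bytes literals; at the time these
were plain strings):

    import struct

    packed = struct.pack('10s', b'hello')  # padded with five null bytes to fit
    (s,) = struct.unpack('10s', packed)    # s is b'hello\x00\x00\x00\x00\x00', exactly 10 bytes
    struct.pack('3s', b'hello')            # truncated to b'hel'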
-- Various byte order options. The first character of the format
string determines the byte order, size and alignment, as follows:
First character   Byte order              Size and alignment
'@'               native                  native
'='               native                  standard
'<'               little-endian           standard
'>'               big-endian              standard
'!'               network (= big-endian)  standard
If the first character is not one of these, '@' is assumed.
Native byte order is big-endian or little-endian, depending on the
host system (e.g. Motorola and Sun are big-endian; Intel and DEC are
little-endian).
Native size and alignment are determined using the C compiler's sizeof
expression. This is always combined with native byte order.
Standard size and alignment are as follows: no alignment is required
for any type (so you have to use pad bytes); short is 2 bytes; int and
long are 4 bytes. In this mode, there is no support for float and
double.
Note the difference between '@' and '=': both use native byte order,
but the size and alignment of the latter are standardized.
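The difference shows up in struct.calcsize(); a sketch (native sizes
and padding vary by platform):

    import struct

    struct.calcsize('@hi')  # native: typically 8, two pad bytes align the int
    struct.calcsize('=hi')  # standard: always 6 (2 + 4); insert 'x' pad bytes yourself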
The form '!' is available for those poor souls who can't remember
whether network byte order is big-endian or little-endian.
There is no way to indicate non-native byte order (i.e. force
byte-swapping); use the appropriate choice of '<' or '>'.
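A quick illustration of the byte order prefixes:

    import struct

    struct.pack('<H', 1)  # little-endian: b'\x01\x00'
    struct.pack('>H', 1)  # big-endian:    b'\x00\x01'
    struct.pack('!H', 1)  # network order, identical to '>'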
all the module's attributes are present and creates a small criss-cross
window for 5 seconds (example from the documentation :-). A more
comprehensive test would probably be useful... but maybe overkill.
test_rotor.py: New test of the rotor module.
test_*: converted to the new test harness. GvR note! test_signal.py
works interactively (i.e. when verbose=1) but does not work inside the
test harness. It must be a timing issue, but I haven't figured it out
yet.
unchecked error return values, and where appropriate,
PyArg_ParseTuple() style argument parsing.
I also changed some function names and converted all malloc/free calls
to PyMem_NEW/PyMem_DEL.
Some stylistic changes and formatting standardization.
take an optional string key, but if key is not given, the method does
nothing! In the rewrite (see upcoming check-in), I left things this
way, but here I document that this is the case.
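A sketch of the odd behavior (assuming the method in question is the
rotor object's setkey()):

    import rotor

    r = rotor.newrotor('initial key')
    r.setkey()           # no key given: silently does nothing
    r.setkey('new key')  # with a key, the rotor really is re-keyed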
'verbose' flag à la GvR's updated test harness architecture.
Old way:
    verbose = 0
    if __name__ == '__main__':
        verbose = 1
New way:
    from test_support import verbose
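A test module then guards its chatty output on that flag; a
hypothetical snippet:

    from test_support import verbose

    if verbose:
        print('running the verbose-only part of the test...')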
Some other small readability and functionality updates.
[NOTE: testall.py and autotest.py might go away soon; I've
played with Guido's new regrtest.py script and it seems to work well.
I'll wait until Guido gives the word to completely switch over -- and
change the Makefile too!]
- Where optional arguments were being used, converted to
PyArg_ParseTuple() style instead of nested PyArg_Parse() style.
- Check for and handle many potential error conditions that were never
being tested.
- internal reg_* functions renamed to regobj_* (makes it easier to
figure out which are global regex functions and which are for regex
objects).
- reg_group (now regobj_group) was quite extensively reworked. It no
longer recurses to do its job (by factoring core functionality into
a separate function that knows about string and integer indexes).
- some minor formatting fixes.
- regex_set_syntax() now invalidates the cache. Without this change
(in the example below), the second search would produce different
output depending on whether the first search were performed or not
(since performing the first search would cache the compiled object
with RE_SYNTAX_EMACS, causing the second test to unexpectedly fail).
    import regex
    from regex_syntax import RE_SYNTAX_AWK

    regex.search('(a+)|(b+)', 'cdb')        # compiled and cached under the default (emacs) syntax
    prev = regex.set_syntax(RE_SYNTAX_AWK)  # now also invalidates the cache
    regex.search('(a+)|(b+)', 'cdb')        # recompiled under awk syntax; same result either way