- 'delete' is a C++ keyword; use 'del_table' instead
- apply Py_CHARMASK() to del_table[i] before using it as an index
*** this fixes a bug that was just reported on the list ***
- if the translation didn't make any changes, INCREF and return
the original string
- when del_table is empty or omitted, don't copy the translation
table to a table of ints (should be a bit faster)
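A minimal sketch of the Py_CHARMASK() fix above, assuming a 256-entry
membership table (the names here are illustrative, not stropmodule.c's
actual ones); without the mask, a byte >= 0x80 indexes the table with a
negative value on platforms where char is signed:

    #include "Python.h"

    /* Mark every byte of del_table in a 256-entry "delete set". */
    static void
    build_delete_set(const char *del_table, int del_len, char delete_set[256])
    {
        int i;
        for (i = 0; i < del_len; i++)
            delete_set[(int) Py_CHARMASK(del_table[i])] = 1;
    }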
Rewrote maketrans() to avoid copying the table (2-3% faster).
The rewrite turned out to be a performance hit instead. Urg. Reverted.
strop_joinfields(): re-instate optimizations for lists and tuples, but
support arbitrary other kinds of sequences as well.
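A hedged sketch of the sequence handling described above (not the
actual strop_joinfields() code): keep the fast paths for lists and
tuples, and fall back to the generic sequence protocol for anything
else.

    #include "Python.h"

    /* Fetch field i of any sequence-like object. */
    static PyObject *
    get_field(PyObject *seq, int i)
    {
        if (PyList_Check(seq))
            return PyList_GetItem(seq, i);   /* borrowed ref, fast path */
        if (PyTuple_Check(seq))
            return PyTuple_GetItem(seq, i);  /* borrowed ref, fast path */
        return PySequence_GetItem(seq, i);   /* new ref, generic path */
    }

Note that the fast paths return borrowed references while
PySequence_GetItem() returns a new reference, so the caller has to keep
track of which case applies.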
- split_whitespace(): slightly better memory ref handling when errors
occur.
- strop_joinfields(): First argument can now be any sequence-protocol
conformant object.
- strop_find(), strop_rfind(): Use PyArg_ParseTuple for optional
arguments
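A sketch of the PyArg_ParseTuple() style now used for the optional
start/end arguments (the format string, defaults, and old int-based
"s#" lengths are assumptions, not the module's exact code):

    #include <limits.h>
    #include "Python.h"

    static PyObject *
    strop_find_sketch(PyObject *self, PyObject *args)
    {
        char *s, *sub;
        int len, n;
        int i = 0, last = INT_MAX;    /* defaults: search the whole string */

        if (!PyArg_ParseTuple(args, "s#s#|ii", &s, &len, &sub, &n, &i, &last))
            return NULL;
        /* ... search s[i:last] for sub ... */
        return PyInt_FromLong(-1L);   /* placeholder result */
    }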
- strop_lower(), strop_upper(): Factor logic into a common function
do_casechange().
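An illustrative sketch of that factoring (the helper's shape is an
assumption): one function does the copy loop, parameterized on the
per-character conversion.

    #include <ctype.h>
    #include "Python.h"

    /* lower() and upper() share the loop; only the conversion differs. */
    static void
    do_casechange_sketch(const char *in, char *out, int len, int (*conv)(int))
    {
        int i;
        for (i = 0; i < len; i++)
            out[i] = (char) conv(Py_CHARMASK(in[i]));
    }

    /* do_casechange_sketch(s, buf, n, tolower);  -- for lower()
       do_casechange_sketch(s, buf, n, toupper);  -- for upper() */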
- strop_atoi(), strop_atol(): Use PyArg_ParseTuple.
- strop_maketrans(): arguments used to be optional, although the
documentation doesn't reflect this. Make the source conform to the
docs. Arguments are required, but two empty strings will return the
identity translation table.
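A sketch of the behavior described above (not the module's actual
code): start from the 256-byte identity table and overlay the from/to
mapping; with two empty strings the second loop never runs and the
identity table is returned unchanged.

    #include "Python.h"

    /* frm and to must have the same length n. */
    static void
    fill_translation_table(const char *frm, const char *to, int n, char table[256])
    {
        int i;
        for (i = 0; i < 256; i++)
            table[i] = (char) i;                       /* identity */
        for (i = 0; i < n; i++)
            table[(int) Py_CHARMASK(frm[i])] = to[i];  /* overlay mapping */
    }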
- General pass fixing up formatting, and checking for return values.
- Use unsigned int/long types where appropriate, and use the new
PyLong_FromUnsignedLong() and PyLong_AsUnsignedLong() interfaces instead.
Semantic change: the 'I' format will now always return a long int.
- Conform to standard Python C coding styles.
- All static symbols were renamed and shortened.
- Eyeballed all return values and memory references.
- Fixed a bug in signal.pause() so that exceptions raised in signal
handlers are now properly caught after pause() returns.
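A hedged, simplified sketch of the pause() fix (not the module's exact
code): after pause() returns because a signal arrived, run the pending
Python-level handlers so an exception raised there propagates instead
of being lost.

    #include <unistd.h>
    #include "Python.h"

    static PyObject *
    signal_pause_sketch(PyObject *self, PyObject *args)
    {
        (void) pause();            /* returns once a signal is delivered */
        if (PyErr_CheckSignals())  /* runs handlers; -1 if one raised */
            return NULL;           /* propagate the handler's exception */
        Py_INCREF(Py_None);
        return Py_None;
    }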
- Removed SIGCPU and SIGFSZ. We surmise that these were typos for the
previously missing SIGXCPU and SIGXFSZ.
-- The whole implementation is now more table-driven.
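An illustrative sketch of what "table-driven" means here (the struct
and table names are assumptions, not structmodule.c's actual
declarations): each format character maps to an entry giving its size,
alignment, and pack/unpack routines, with one table per mode.

    #include "Python.h"

    typedef struct _formatdef {
        char format;            /* format character, e.g. 'h', 'I', 'd' */
        int size;               /* size in bytes in this mode */
        int alignment;          /* required alignment in this mode */
        PyObject *(*unpack)(const char *p, const struct _formatdef *f);
        int (*pack)(char *p, PyObject *v, const struct _formatdef *f);
    } formatdef;

    /* Hypothetical fragment of a "standard size" table. */
    static const formatdef std_table_sketch[] = {
        { 'b', 1, 0, NULL, NULL },
        { 'h', 2, 0, NULL, NULL },
        { 'i', 4, 0, NULL, NULL },
        { 'l', 4, 0, NULL, NULL },
        { 0,   0, 0, NULL, NULL }
    };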
-- Unsigned integers. Format characters 'B', 'H', 'I' and 'L'
mean unsigned byte, short, int and long. For 'I' and 'L', the return
value is a Python long integer if a Python plain integer can't
represent the required range (note: this is dependent on the size of
the relevant C types only, not on the sign of the actual value).
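A minimal sketch of the rule just described (not the module's code):
whether a plain int or a long is returned depends only on whether the
C type's full unsigned range fits in a C long.

    #include <string.h>
    #include "Python.h"

    static PyObject *
    unpack_uint_sketch(const char *p)
    {
        unsigned int x;
        memcpy(&x, p, sizeof x);
        if (sizeof(x) < sizeof(long))    /* full range fits in a C long */
            return PyInt_FromLong((long) x);
        return PyLong_FromUnsignedLong((unsigned long) x);
    }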
-- A new format character 's' packs/unpacks a string. When given a
count prefix, this is the size of the string, not a repeat count like
for the other format characters; e.g. '10s' means a single 10-byte
string, while '10c' means 10 characters. For packing, the string is
truncated or padded with null bytes as appropriate to make it fit.
For unpacking, the resulting string always has exactly the specified
number of bytes. As a special case, '0s' means a single, empty
string (while '0c' means 0 characters).
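A sketch of the 's' packing rule (illustrative, not the module's
code): always write exactly num bytes, truncating the value or padding
with null bytes as needed.

    #include <string.h>

    static void
    pack_string_sketch(char *buf, size_t num, const char *value, size_t value_len)
    {
        size_t n = value_len < num ? value_len : num;
        memcpy(buf, value, n);           /* copy what fits */
        memset(buf + n, '\0', num - n);  /* pad; no-op when n == num */
    }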
-- Various byte order options. The first character of the format
string determines the byte order, size and alignment, as follows:
First character   Byte order              Size and alignment
'@'               native                  native
'='               native                  standard
'<'               little-endian           standard
'>'               big-endian              standard
'!'               network (= big-endian)  standard
If the first character is not one of these, '@' is assumed.
Native byte order is big-endian or little-endian, depending on the
host system (e.g. Motorola and Sun are big-endian; Intel and DEC are
little-endian).
Native size and alignment are determined using the C compiler's sizeof
expression. This is always combined with native byte order.
Standard size and alignment are as follows: no alignment is required
for any type (so you have to use pad bytes); short is 2 bytes; int and
long are 4 bytes. In this mode, there is no support for float and
double.
Note the difference between '@' and '=': both use native byte order,
but the size and alignment of the latter is standardized.
The form '!' is available for those poor souls who can't remember
whether network byte order is big-endian or little-endian.
There is no way to indicate non-native byte order (i.e. force
byte-swapping); use the appropriate choice of '<' or '>'.
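A hedged sketch of the prefix handling just described (the enum and
function names are illustrative): the first character selects byte
order and size/alignment mode, and anything else falls back to '@'.

    enum byteorder { ORDER_NATIVE, ORDER_LITTLE, ORDER_BIG };
    enum sizemode  { SIZE_NATIVE, SIZE_STANDARD };

    static void
    parse_prefix_sketch(const char **pfmt, enum byteorder *order, enum sizemode *mode)
    {
        switch (**pfmt) {
        case '<': *order = ORDER_LITTLE; *mode = SIZE_STANDARD; break;
        case '>':
        case '!': *order = ORDER_BIG;    *mode = SIZE_STANDARD; break;
        case '=': *order = ORDER_NATIVE; *mode = SIZE_STANDARD; break;
        case '@': *order = ORDER_NATIVE; *mode = SIZE_NATIVE;   break;
        default:  *order = ORDER_NATIVE; *mode = SIZE_NATIVE;   return;
        }
        (*pfmt)++;    /* consume the prefix character */
    }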
Added checks for previously unchecked error return values and, where
appropriate, converted to PyArg_ParseTuple() style argument parsing.
I also changed some function names and converted all malloc/free calls
to PyMem_NEW/PyMem_DEL.
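A small sketch of the malloc/free conversion together with the kind of
error check that was added (the buffer and its size are hypothetical):

    #include <string.h>
    #include "Python.h"

    static PyObject *
    buffer_example_sketch(int n)
    {
        char *buf = PyMem_NEW(char, n);  /* was: malloc(n) */
        if (buf == NULL)
            return PyErr_NoMemory();     /* failure is now reported */
        memset(buf, 0, (size_t) n);
        /* ... use buf ... */
        PyMem_DEL(buf);                  /* was: free(buf) */
        Py_INCREF(Py_None);
        return Py_None;
    }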
Some stylistic changes and formatting standardization.
- Where optional arguments were being used, converted to
PyArg_ParseTuple() style instead of nested PyArg_Parse() style.
- Check for and handle many potential error conditions that were never
being tested.
- internal reg_* functions renamed to regobj_* (makes it easier to
figure out which are global regex functions and which are for regex
objects).
- reg_group (now regobj_group) was quite extensively reworked. It no
  longer recurses to do its job; the core functionality was factored into
  a separate function that knows about both string and integer indexes.
- some minor formatting fixes.
- regex_set_syntax() now invalidates the cache. Without this change
(in the example below), the second search would produce different
output depending on whether the first search was performed or not
(since performing the first search would cache the compiled object
with RE_SYNTAX_EMACS, causing the second test to unexpectedly fail).
regex.search('(a+)|(b+)', 'cdb')
prev = regex.set_syntax(RE_SYNTAX_AWK)
regex.search('(a+)|(b+)', 'cdb')
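An illustrative sketch of the invalidation (the variable names are
assumptions, not regexmodule.c's actual ones): set_syntax() now drops
the cached compiled pattern, so the next module-level search recompiles
it under the new syntax.

    #include "Python.h"

    static PyObject *cache_pat = NULL;   /* last pattern compiled */
    static PyObject *cache_prog = NULL;  /* its compiled regex object */

    static void
    invalidate_cache_sketch(void)
    {
        Py_XDECREF(cache_pat);  cache_pat = NULL;
        Py_XDECREF(cache_prog); cache_prog = NULL;
    }

    /* regex_set_syntax() calls this before returning the old syntax. */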
This is needed because if a configure option has " as its value,
it will be rendered as {"}; after stripping one level of quoting it's
just ", on which splitlist will barf.
We removed SIGCPU and SIGFSZ, but we (Jeremy and I) are actually
unsure whether
these were typos or if there are systems that use these alternate
names. We've checked Solaris, SunOS, and IRIX; they contain only the
SIGX* names.
1. Renamed
2. Several coding styles were being used here, owing to the multiple
contributors. I tried to convert everything to standard "python"
coding style for indentation, paren and brace placement, etc.
3. There were several potential error conditions that were never being
checked, and where I saw them, I added checks of return values,
etc. I'm pretty sure I got them all.
4. There was some old-style (pre-PyArg_ParseTuple) argument
extraction; this was converted to use PyArg_ParseTuple.
All changes compile and run with the new test_select.py module, at
least on my Solaris/Sparc box.
Two interesting problems in nis_maplist(). First, it is possible that
clnt_create() will return NULL. This was being caught, but no Python
error was being set. I use clnt_spcreateerror() to generate the value
of the exception.
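A sketch of that error handling (the exception object, connection
parameters, and header locations are assumptions and vary by
platform):

    #include <rpc/rpc.h>
    #include <rpcsvc/yp_prot.h>
    #include <rpcsvc/ypclnt.h>
    #include "Python.h"

    /* NisError is assumed to be the module's exception object. */
    static CLIENT *
    nis_connect_sketch(char *server, PyObject *NisError)
    {
        CLIENT *clnt = clnt_create(server, YPPROG, YPVERS, "tcp");
        if (clnt == NULL)
            /* clnt_spcreateerror() says why the connection failed */
            PyErr_SetString(NisError, clnt_spcreateerror("clnt_create"));
        return clnt;
    }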
But why would clnt_create() fail? It's because no server was being
found. And why was this? It was because nis_maplist() tried only to
get the NIS master for the first map in the aliases list, which is
passwd.byname, and guess what? That's the one NIS map CNRI does *not*
export! So the yp_master() call was failing to return a valid
server. I now cycle through all the map aliases until I find a valid
master. If no valid master is found, a different exception is set.
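An illustrative sketch of the aliases loop (the map list and its
contents are assumptions, not nismodule.c's actual data): try each
known map until yp_master() reports a valid master host.

    #include <rpcsvc/ypclnt.h>

    /* Hypothetical map list; the real module's alias table differs. */
    static char *maps_sketch[] = {
        "passwd.byname", "group.byname", "hosts.byname", NULL
    };

    static char *
    find_master_sketch(char *domain)
    {
        char *master;
        char **map;

        for (map = maps_sketch; *map != NULL; map++) {
            if (yp_master(domain, *map, &master) == 0)
                return master;   /* first map with a valid master */
        }
        return NULL;             /* caller sets a Python exception */
    }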
I'm not sure this is the completely correct way to do all this, but
short of rewriting the entire nismodule.c (to expose the proper API to
Python), it should do the trick.