where their capabilities intersect. Would be nice if people using non-
MSVC compilers (Borland etc) took a whack at doing something similar for
them (this code relies on the MS _cwait function).
[ #510644 ] test_curses segfaults
If we use the *object* *allocator*, we should use the *object* *deallocator*,
not the *raw memory* deallocator (confused yet?).
I think this was what caused segfaults when pymalloc was enabled.
Even if it wasn't the cause, it's still wrong.
2.2.1 candidate.
[ #504284 ] Last build problems on AIX
I'm ignoring the suggestion that this should be an autoconf test in the
interests of having a fix today. Feel free to quibble.
new.instancemethod() -- the instancemethod object is now a perfectly
general container.
This fixes SF bug #503091 (Pedro Rodriquez): new.instancemethod fails
for new classes
This is a 2.2.1 candidate.
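For illustration, a minimal sketch of the now-working case (class and
function names are made up):

    import new

    class C(object):          # new-style class
        pass

    def greet(self):
        return "hello"

    m = new.instancemethod(greet, C(), C)   # used to fail for new-style classes
    print m()                                # prints "hello"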
Patch from Mark Hammond, plus code rearrangement and comments from me.
posix_do_stat(): Windows-specific code could try to free() stack
memory in some cases when a path ending with a forward or backward slash
was passed to os.stat().
metaclass, reported by Dan Parisien.
Objects that are instances of custom metaclasses, i.e. whose ob_type
is a subclass of PyType_Type, should be pickled the same as new-style
classes (objects whose ob_type is PyType_Type). This can't be done
through the existing dispatch switches, and the __reduce__ trick
doesn't work for these, since it finds the unbound __reduce__ for
instances of the class (inherited from PyBaseObject_Type). So check
explicitly using PyType_IsSubtype().
binascii_b2a_base64(): We didn't allocate enough buffer space for very
short inputs (e.g., a 1-byte input can produce a 5-byte output, but we
only allocated 2 bytes). I expect that malloc overheads absorbed the
overrun in practice, but computing a correct upper bound is a very simple
change.
got a barrage of compile errors that didn't make sense to the C++ brain:
MSVC does not allow C (but does allow C++) initializers to contain
data addresses supplied by other DLLs. So changed the initializers here
to use dummy nulls, and changed module init to plug in the foreign
addresses at runtime (manually simulating what C++ does by magic). Tested
on Windows, and Guido tested on Linux (thanks!). BTW, the *point* is that
people are going to use this module as a template for writing their own
subtypes, and it's unusual for extension authors to build their extensions
into Python directly (separate DLLs are the norm on Windows); so it's
better if we give them a template that works <wink>.
work with Mac OS X Aqua-Tk, all nicely within ifdefs.
The process is not for the faint of heart, though: you need to download
and install the (alpha) Aqua-Tk, obtain a few needed X11 headers from
somewhere else and then everything builds. To run scripts using Tkinter
you must build with --enable-framework, build Python.app in Mac/OSX
and run your Tkinter scripts with that. Then, about half the tests in
Demo/tkinter work (or at least do something).
Checking this in anyway because it shouldn't break anything, and newer
versions of Aqua-Tk will streamline the process.
backed out of broken minimal repeat patch from July
also fixed a couple of minor potential resource leaks in pattern_subx
(Guido had already fixed the big one)
type.__module__ behavior.
This adds the module name and a dot in front of the type name in every
type object initializer, except for built-in types (and those that
already had this). Note that it touches lots of Mac modules -- I have
no way to test these but the changes look right. Apologies if they're
not. This also touches the weakref docs, which contains a sample type
object initializer. It also touches the mmap test output, because the
mmap type's repr is included in that output. It touches object.h to
put the correct description in a comment.
The OSS Programmer's Reference (www.4front-tech.com)
states:
*Setting Sampling Parameters
There are three parameters which affect the sound
quality (and therefore memory and bandwidth
requirements) of sampled audio data. These are:
** sample format (sometimes called number of bits)
** number of channels (mono or stereo), and
** sampling rate (speed)
NOTE:
It is important to always set these parameters in the
above order. Setting sampling rate before the number
of channels doesn't work with all devices.
Anthony Roach.
Release the global interpreter lock around platform spawn calls.
Bugfix candidate? Hard to say; I favor "yes, bugfix".
These clearly *should* have been releasing the GIL all along, if for no
other reason than compatibility with the similar os.system(). But it's
possible some program out there is (a) multithreaded, (b) calling a spawn
function with P_WAIT, and (c) relying on the spawn call to block all their
threads until the spawned program completes. I think it's very unlikely
anyone is doing that on purpose, but someone may be doing so by accident.
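A rough sketch of the pattern that could notice the change (program path is
a placeholder); after this checkin, other threads keep running while the
spawned program executes:

    import os, threading

    def worker():
        # P_WAIT blocks this thread until the child exits, but no longer
        # blocks every other Python thread while it waits.
        os.spawnl(os.P_WAIT, "/bin/sleep", "sleep", "5")

    threading.Thread(target=worker).start()
    print "main thread keeps running"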
casts with a variable oself that has the proper type. A smart
compiler may put this thing into a register.
(I'm not sure what good this does except satisfy my desire to
understand this function; I got a report about an uninitialized read
from Insure++ about this function and it hurt my eyes to even look at
it. I gotta run away or I'll get tempted to reformat the entire
file...)
(At least for the repeatable test case that Tim produced.)
pattern_subx(): Add missing DECREF(filter) in both exit branches
(normal and error return). Also fix a DECREF(args) that should
certainly be a DECREF(match) -- because it's inside if (!args) and
right after allocation of match.
The st_future slot of the symtable is not freed by PySymtable_Free()
because it is shared by the symtable and compiling structs in
compile.c. Since it is shared, it is explicitly deallocated when the
compiling struct is freed.
Big Hammer to implement -Qnew as PEP 238 says it should work (a global
option affecting all instances of "/").
pydebug.h, main.c, pythonrun.c: define a private _Py_QnewFlag flag, true
iff -Qnew is passed on the command line. This should go away (as the
comments say) when true division becomes The Rule. This is
deliberately not exposed to runtime inspection or modification: it's
a one-way one-shot switch to pretend you're using Python 3.
ceval.c: when _Py_QnewFlag is set, treat BINARY_DIVIDE as
BINARY_TRUE_DIVIDE.
test_{descr, generators, zipfile}.py: fiddle so these pass under
-Qnew too. This was just a matter of s!/!//! in test_generators and
test_zipfile. test_descr was trickier, as testbinop() relies on the
assumption that "/" is the same as calling a "__div__" method; put
a temporary hack there to call "__truediv__" instead when the method
name is "__div__" and 1/2 evaluates to 0.5.
Three standard tests still fail under -Qnew (on Windows; somebody
please try the Linux tests with -Qnew too! Linux runs a whole bunch
of tests Windows doesn't):
test_augassign
test_class
test_coercion
I can't stay awake longer to stare at this (be my guest). Offhand
cures weren't obvious, nor was it even obvious that cures are possible
without major hackery.
Question: when -Qnew is in effect, should calls to __div__ magically
change into calls to __truediv__? See "major hackery" at tail end of
last paragraph <wink>.
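For illustration only, a tiny script showing the switch (run with and
without -Qnew on the command line; the file name is a placeholder):

    # python script.py       -> 1 / 2 prints 0 (classic division)
    # python -Qnew script.py -> 1 / 2 prints 0.5 (BINARY_DIVIDE acts as
    #                           BINARY_TRUE_DIVIDE)
    print 1 / 2
    print 1 // 2    # floor division: 0 either way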
parser_tuple2st() and a failure to propagate an error in
build_node_children() (masking yet another leak, of course!).
This closes SF bug #485133 (confirmed by Insure++).
Bugfix candidate.
A numerically naive computation of output buffer size caused crashes
and spurious MemoryErrors for reasonable arguments.
audioop_ratecv(): Avoid spurious overflow by careful reworking of the
buffer size computations, triggering MemoryError if and only if the
final buffer size can't be represented in a C int (although
PyString_FromStringAndSize may legitimately raise MemoryError even if
it does fit in a C int). All reasonable arguments should work as
intended now, and all unreasonable arguments should be caught.
find_class(): We no longer mask all exceptions[1] by transforming them
into SystemError. The latter is definitely not the right thing to do,
so we let any exceptions that occur in the PyObject_GetAttr() call to
simply propagate up if they occur.
[1] Note that pickle only masked ImportError, KeyError, and
AttributeError, but cPickle masked all exceptions.
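A minimal sketch of the visible effect (the hand-written pickle below is an
illustration, not from the checkin):

    import cPickle

    bad = "cno_such_module\nNoClass\n."   # GLOBAL opcode for a missing module
    try:
        cPickle.loads(bad)
    except ImportError, e:
        # Previously cPickle turned this into SystemError; now the real
        # ImportError propagates.
        print "ImportError:", e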
This gives mmap() on Windows the ability to create read-only, write-
through and copy-on-write mmaps. A new keyword argument is introduced
because the mmap() signatures diverged between Windows and Unix, so
while they (now) both support this functionality, there wasn't a way to
spell it in a common way without introducing a new spelling gimmick.
The old spellings are still accepted, so there isn't a backward-
compatibility issue here.
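A sketch of the new spelling, assuming the keyword is named 'access' and
takes the mmap.ACCESS_* constants (the file name is a placeholder):

    import mmap

    f = open("data.bin", "rb")
    # Read-only mapping; ACCESS_WRITE and ACCESS_COPY select write-through
    # and copy-on-write respectively.
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    print m[:16]
    m.close()
    f.close()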
objects to save in gc.garbage. This should be the last change needed to
fix SF bug 477059: "__del__ on new classes vs. GC".
Note that this change slightly changes the behavior of the collector.
Before, if a cycle was found that contained instances with __del__
methods then all instance objects in that cycle were saved in
gc.garbage. Now, only objects with __del__ methods are saved in
gc.garbage.
When moving objects with a __del__ attribute to a special list, look
for __del__ on new-style classes with the HEAPTYPE flag set as well.
(HEAPTYPE means the class was created by a class statement.)
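A small illustration of the new behavior (a sketch; the class name is made
up):

    import gc

    class Finalized(object):
        def __del__(self):
            pass

    a = Finalized()
    a.cycle = [a]          # the instance and the list form a reference cycle
    del a
    gc.collect()
    # Only the Finalized instance ends up in gc.garbage; the plain list in
    # the same cycle is no longer saved there.
    print gc.garbage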
to lists of values, giving the contents of all the ADD_INFO records
seen so far. This is initialized aggressively when the log file is
opened, so that whoever is looking at the log reader can always see
the initial data loaded into the data stream. ADD_INFO events later
in the log file continue to be reported to the application layer as
before.
Add a new method, addinfo(), to the profiler. This can be used to
insert additional ADD_INFO records into the profiler log.
Fix the tp_flags and tp_name slots on the type objects.
"socket.socket" -- on Windows, "socket.socket" is the wrapper class.
Also added the module name to the SSL type (which is not a new-style
class -- I don't want to mess with it yet).
constructor acts just like socket() before. All three arguments have
a sensible default now; socket() is equivalent to
socket(AF_INET, SOCK_STREAM).
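In other words (sketch):

    import socket

    s1 = socket.socket()                                    # now allowed
    s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # equivalent spelling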
One minor issue: the socket() function and the SocketType had
different doc strings; socket.__doc__ gave the signature,
SocketType.__doc__ gave the methods. I've merged these for now, but
maybe the list of methods is no longer necessary since it can easily
be recovered through socket.__dict__.keys(). The problem with keeping
it is that the total doc string is a bit long (34 lines -- it scrolls
off a standard tty screen).
Another general issue with the socket module is that it's a big mess.
There are pages and pages of random platform #ifdefs, and the naming
conventions are totally wrong: it uses Py prefixes and CapWords for
static functions. That's a cleanup for another day... (Also I think
the big starting comment that summarizes the API can go -- it's a
repeat of the docstring.)
error occurs, and doesn't return a count. (This is my second patch
from SF patch #474307, with small change to the docstring for send().)
2.1.2 "bugfix" candidate.
Replace some tortuous code that was trying to be clever but forgot to
DECREF the key and value, by more longwinded but obviously correct
code.
(Inspired by but not copying the fix from SF patch #475033.)
STRICT_SYSV_CURSES when compiling curses module on HP/UX. Generalize
access to _flags on systems where WINDOW is opaque. Fixes bugs
#432497, #422265, and the curses parts of #467145 and #473150.
strings, not C strings)
removed USE_PYTHON defines, and related sre.py helpers
skip calling the subx helper if the template is callable.
interestingly enough, this means that
def callback(m):
    return literal
result = pattern.sub(callback, string)
is much faster than
result = pattern.sub(literal, string)
removed (conceptually flawed) getliteral helper; the new sub/subn code
uses a faster code path for literal replacement strings, but doesn't
(yet) look for literal patterns.
added STATE_OFFSET macro, and use it to convert state.start/ptr to
char indexes
Allow passing strings to the .border() method
Correct some error messages ("1 or 4" -> "1 to 4")
Bump version number
Tweak code formatting
Update my e-mail address
This adds unsetenv to posix, and uses it in the __delitem__ method of
os.environ.
(XXX Should we change the preferred name for putenv to setenv, for
consistency?)
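A minimal sketch of the effect (the variable name is a placeholder):

    import os

    os.environ["MY_SCRATCH_VAR"] = "1"   # putenv() under the hood
    # Deleting the key now also calls unsetenv(), so the variable really
    # disappears from the process environment.
    del os.environ["MY_SCRATCH_VAR"]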
This was submitted by Moshe, but apparently he's too busy to check it
in himself. He wrote:
Here is a function in GNU readline called add_history,
which is used to manage the history list. Though Python
uses this function internally, it does not expose it to
the Python programmer. This patch adds direct interface
to this function with documentation.
This could be used by friendly modules to "seed" the
history with commands.
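For example (sketch):

    import readline

    # Seed the interactive history; the user can now recall this line
    # with the up-arrow key.
    readline.add_history('print "hello from the seeded history"')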
This is a big one, touching lots of files. Some of the platforms
aren't tested yet. Briefly, this changes the return value of the
os/posix functions stat(), fstat(), statvfs(), fstatvfs(), and the
time functions localtime(), gmtime(), and strptime() from tuples into
pseudo-sequences. When accessed as a sequence, they behave exactly as
before. But they also have attributes like st_mtime or tm_year. The
stat return value, moreover, has a few platform-specific attributes
that are not available through the sequence interface (because
everybody expects the sequence to have a fixed length, these couldn't
be added there). If your platform's struct stat doesn't define
st_blksize, st_blocks or st_rdev, they won't be accessible from Python
either.
(Still missing is a documentation update.)
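A quick illustration (the path is a placeholder):

    import os

    st = os.stat("/etc/passwd")
    print st[8]          # sequence access still works (st_mtime slot)
    print st.st_mtime    # new attribute access
    print len(st)        # fixed length, as before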
Quoth the OpenSSL RAND_add man page:
OpenSSL makes sure that the PRNG state is unique for each
thread. On systems that provide /dev/urandom, the
randomness device is used to seed the PRNG transparently.
However, on all other systems, the application is
responsible for seeding the PRNG by calling RAND_add(),
RAND_egd(3) or RAND_load_file(3).
I decided to expose RAND_add() because it's general and RAND_egd()
because it's a useful special case. RAND_load_file() didn't seem to
offer much over RAND_add(), so I skipped it. Also supplied
RAND_status() which returns true if the PRNG is seeded and false if
not.
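A sketch of the intended use (only available when the socket module is
built with SSL support; the entropy estimate below is arbitrary):

    import socket

    if not socket.RAND_status():          # true once the PRNG is seeded
        seed = open("/dev/random", "rb").read(16)
        socket.RAND_add(seed, 16.0)       # data plus an entropy estimate in bytes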
Apparently this patch (rev 2.41) replaced all the good old "s#"
formats in PyArg_ParseTuple() with "S". Then it did
PyString_FromStringAndSize() to get back the values set up by the
"s#" format. It also incref'd and decref'd the string obtained by
"S" even though the argument tuple had a reference to it.
Replace PyString_AsString() calls with PyString_AS_STRING().
A good rule of thumb -- if you never check the return value of
PyString_AsString() to see if it's NULL, you ought to be using the
macro <wink>.
Many functions used a local variable called return_error, which was
initialized to zero. If an error occurred, it was set to true. Most
of the code paths checked it and were only executed if return_error was
false. goto is clearer.
The code also seemed to be written under the curious assumption that
calling Py_DECREF() on a local variable would assign the variable to
NULL. As a result, more of the error-exit code paths returned an
object that had a reference count of zero instead of just returning
NULL. Fixed the code to explicitly assign NULL after the DECREF.
A bit more reformatting, but not much.
XXX Need a much better test suite for zlib, since the current tests
don't exercise any of this broken code.
This changes Pythread_start_thread() to return the thread ID, or -1
for an error. (It's technically an incompatible API change, but I
doubt anyone calls it.)
Mostly by Toby Dickenson and Titus Brown.
Add an optional argument to a decompression object's decompress()
method. The argument specifies the maximum length of the return
value. If the uncompressed data exceeds this length, the excess data
is stored as the unconsumed_tail attribute. (Not to be confused with
unused_data, which is a separate issue.)
Difference from SF patch: Default value for unconsumed_tail is ""
rather than None. It's simpler if the attribute is always a string.
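For example (sketch):

    import zlib

    data = zlib.compress("x" * 100000)
    d = zlib.decompressobj()
    chunk = d.decompress(data, 4096)          # return at most 4096 bytes
    print len(chunk), len(d.unconsumed_tail)  # the rest is held for later
    more = d.decompress(d.unconsumed_tail, 4096)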
Added support for saving the names of the functions observed into the
profile log.
Added support for using the profiler to measure coverage without collecting
timing information (which is the slow part). Coverage logs can also be
substantially smaller than profiling logs where per-line information is
being collected.
Updated comments on the log format; corrected record type values in some
of the record descriptions.
Raise ValueError when an object contains an arbitrarily nested
reference to itself. (The previous fix just produced invalid
pickles.)
Solution is very much like Py_ReprEnter() and Py_ReprLeave():
fast_save_enter() and fast_save_leave() that tracks the fast_container
limit and keeps a fast_memo of objects currently being pickled.
The cost of the solution is moderately expensive for deeply nested
structures, but it still seems to be faster than normal pickling,
based on tests with deeply nested lists.
Once FAST_LIMIT is exceeded, the new code is about twice as slow as
fast-mode code that doesn't check for recursion. It's still twice as
fast as the normal pickling code. In the absence of deeply nested
structures, I couldn't measure a difference.
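A minimal sketch of the failure mode that now raises cleanly (this assumes
the Pickler's 'fast' attribute is settable from Python):

    import cPickle
    from cStringIO import StringIO

    p = cPickle.Pickler(StringIO(), 1)
    p.fast = 1               # fast mode: no memo, so no cycle detection
    l = []
    l.append(l)              # directly self-referential list
    p.dump(l)                # now raises ValueError instead of writing an
                             # invalid pickle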
To whoever changed a bunch of (PyCFunction) casts to
(PyNoArgsFunction) in PyMethodDef initializers: don't do that. The
cast is to shut the compiler up. The compiler wants the function
pointer initializer to be a PyCFunction.
"for <var> in <testlist> may no longer be a single test followed by
a comma. This solves SF bug #431886. Note that if the testlist
contains more than one test, a trailing comma is still allowed, for
maximum backward compatibility; but this example is not:
[(x, y) for x in range(10), for y in range(10)]
                          ^
The fix involved creating a new nonterminal 'testlist_safe' whose
definition doesn't allow the trailing comma if there's only one test:
testlist_safe: test [(',' test)+ [',']]
a misunderstanding of the refcont behavior of the 'O' format code in
PyArg_ParseTuple() and Py_BuildValue(), respectively.
- pobj is only a borrowed reference, so should *not* be DECREF'ed at
the end. This was the cause of SF bug #470635.
- The Py_BuildValue() call would leak the object produced by
makesockaddr(). (I found this by eyeballing the code.)
Add a fast_container member to Picklerobject. If fast is true, then
fast_container counts the depth of nested container calls. If the
depth exceeds FAST_LIMIT (2000), the fast flag is ignored and the
normal checks occur. This approach is much like the approach for
preventing stack overflow for comparison and reprs of recursive objects
(e.g. [[...]]).
- Fast container used for save_list(), save_dict(), and
save_inst().
XXX Not clear which other save_xxx() functions should use it.
Make Picklerobject into new-style types, using PyObject_GenericGetAttr()
and PyObject_GenericSetAttr().
- Use PyMemberDef for binary and fast members
- Use PyGetSetDef for persistent_id, inst_persistent_id, memo, and
PicklingError.
XXX Not all of these seem like they need to use getset, but it's
not clear why the old getattr() and setattr() had such odd
semantics. One change is that the getvalue() attribute will
exist on all Picklers, not just list-based picklers; I think
this is a more rational interface.
There is a long laundry list of other changes:
- Remove unused #defines for PyList_SET_ITEM() etc.
- Make some of the indentation consistent
- Replace uses of cPickle_PyMapping_HasKey() where the first
argument is self->memo with calls to PyDict_GetItem(), because
self->memo must be a dictionary.
- Don't bother to check if cPickle_PyMapping_HasKey() returns < 0,
because it can only return 0 or 1.
- Replace uses of PyObject_CallObject() with PyObject_Call(), when
we can guarantee that the argument tuple is really a tuple.
Performance impacts of these changes:
- 5% speedup for normal pickling
- No change to fast-mode pickling.
XXX Really need tests for all the features in cPickle that aren't in
pickle.
The platform requires 8-byte alignment for doubles, but the GC header
was 12 bytes and that threw off the natural alignment of the double
members of a subtype of complex. The fix puts the GC header into a
union with a double as the other member, to force no-looser-than
double alignment of GC headers. On boxes that require 8-byte alignment
for doubles, this may add pad bytes to the GC header accordingly; ditto
for platforms that *prefer* 8-byte alignment for doubles. On platforms
that don't care, it shouldn't change the memory layout (because the
size of the old GC header is certainly greater than the size of a double
on all platforms, so unioning with a double shouldn't change size or
alignment on such boxes).
Use #define X509_NAME_MAXLEN for server/issuer length on an SSL
object.
Update doc strings for socket.ssl() and ssl methods read() and
write().
PySSL_SSLwrite(): Check return value and raise exception on error.
Use int for len instead of size_t. (All the functions the size_t value
was passed to or from expected an int!)
PySSL_SSLread(): Check return value of PyArg_ParseTuple()! More
robust checks of return values from SSL_read().
Change all the local names that start with SSL to start with PySSL.
The OpenSSL library defines lots of calls that start with "SSL_". The
calls for Python's SSL objects also started with "SSL_". This choice
made it really confusing to figure out which calls were to the library
and which calls were local to the file.
Add PySSL_SetError() that sets an exception based on the information
from SSL_get_error(). This function will eventually replace all the
calls that set it with an error message that is based on the name of
the call that failed rather than the reason it failed. (Example: If
SSL_connect() failed it used to report "SSL_connect error" now it will
offer a specific message about why SSL_connect failed.)
XXX It might be helpful to augment the error message generated
below with the name of the SSL function that generated the error.
I expect it's obvious most of the time.
Remove several unnecessary INCREFs in the module's constructor call.
PyDict_SetItem() and friends do the INCREF for you.
In SSL_dealloc(), free/dealloc them only if they're non-NULL.
Fixes some obvious core dumps, but not sure yet if there are more
semantics to the SSL calls that would affect the dealloc.
XXX [1] These changes aren't tested very thoroughly, because regrtest
doesn't do any SSL tests. I've done some trivial tests on my own, but
don't really know how to use the key and cert files. In one case, an
SSL-level error causes Python to dump core. I'll get that fixed in the
next round of changes.
XXX [2] The checkin removes the x_attr member of the SSLObject struct.
I'm not sure if this is kosher for backwards compatibility at the
binary level. Perhaps it's safer to keep the member but keep it
assigned to NULL.
And the leaks?
newSSLObject() called PyDict_New(), stored the result in x_attr
without checking it, and later stored NULL in x_attr without doing
anything to the dict. So the dict always leaks. There is no further
reference to x_attr, so I just removed it completely.
The error cases in newSSLObject() passed the return value of
PyString_FromString() directly to PyErr_SetObject().
PyErr_SetObject() expects a borrowed reference, so the string leaked.
This simplifies the rounding in _PyObject_VAR_SIZE, allows restoring the
pre-rounding calling sequence, and allows some nice little simplifications
in its callers. I'm still making it return a size_t, though.
As Guido suggested, this makes the new subclassing code substantially
simpler. But the mechanics of doing it w/ C macro semantics are a mess,
and _PyObject_VAR_SIZE has a new calling sequence now.
Question: The PyObject_NEW_VAR macro appears to be part of the public API.
Regardless of what it expands to, the notion that it has to round up the
memory it allocates is new, and extensions containing the old
PyObject_NEW_VAR macro expansion (which was embedded in the
PyObject_NEW_VAR expansion) won't do this rounding. But the rounding
isn't actually *needed* except for new-style instances with dict pointers
after a variable-length blob of embedded data. So my guess is that we do
not need to bump the API version for this (as the rounding isn't needed
for anything an extension can do unless it's recompiled anyway). What's
your guess?
pad memory to properly align the __dict__ pointer in all cases.
gcmodule.c/objimpl.h, _PyObject_GC_Malloc:
+ Added a "padding" argument so that this flavor of malloc can allocate
enough bytes for alignment padding (it can't know this is needed, but
its callers do).
typeobject.c, PyType_GenericAlloc:
+ Allocated enough bytes to align the __dict__ pointer.
+ Sped and simplified the round-up-to-PTRSIZE logic.
+ Added blank lines so I could parse the if/else blocks <0.7 wink>.
Generalize PyLong_AsLongLong to accept int arguments too. The real point
is so that PyArg_ParseTuple's 'L' code does too. That code was
undocumented (AFAICT), so I documented it.
#424002.
Refactor init_path_from_argv0() and rename to copy_absolute(); add
absolutize() which does the same in-place.
Clean up whitespace (leading tabs -> spaces, delete trailing
spaces/tabs).
Add raise_exception() to the _testcapi module. It isn't a test, but
the C API exists only to support test_exceptions. raise_exception()
takes two arguments -- an exception class and an integer specifying
how many arguments it should be called with.
test_exceptions uses BadException() to test the interpreter's behavior
when there is a problem instantiating the exception. test_capi1()
calls it with too many arguments. test_capi2() causes an exception to
be raised in the Python code of the constructor.
no backwards compatibility to worry about, so I just pushed the
'closure' struct member to the back -- it's never used in the current
code base (I may eliminate it, but that's more work because the getter
and setter signatures would have to change.)
As examples, I added actual docstrings to the getset attributes of a
few types: file.closed, xxsubtype.spamdict.state.
compatibility, this required changing the structure type to PyMemberDef in
all places where an array of "struct memberlist" structures is declared and
referenced from a type's tp_members slot;
"struct memberlist" is now only used by old code that still calls
PyMember_Get/Set. The code in PyObject_GenericGetAttr/SetAttr now
calls the new APIs PyMember_GetOne/SetOne, which take a PyMemberDef
argument.
As examples, I added actual docstrings to the attributes of a few
types: file, complex, instance method, super, and xxsubtype.spamlist.
Also converted the symtable to new style getattr.
#462270: sub-tle difference between pre.sub and sre.sub. PRE ignored
an empty match at the previous location, SRE didn't.
also synced with Secret Labs "sreopen" codebase.
Curious: the MS docs say stati64 etc are supported even on Win95, but
Win95 doesn't support a filesystem that allows partitions > 2 Gb.
test_largefile: This was opening its test file in text mode. I have no
idea how that worked under Win64, but it sure needs binary mode on Win98.
BTW, on Win98 test_largefile runs quickly (under a second).
requires that errno ever get set, and it looks like glibc is already
playing that game. New rules:
+ Never use HUGE_VAL. Use the new Py_HUGE_VAL instead.
+ Never believe errno. If overflow is the only thing you're interested in,
use the new Py_OVERFLOWED(x) macro. If you're interested in any libm
errors, use the new Py_SET_ERANGE_IF_OVERFLOW(x) macro, which attempts
to set errno the way C89 said it worked.
Unfortunately, none of these are reliable, but they work on Windows and I
*expect* under glibc too.
getting Infs, NaNs, or nonsense in 2.1 and before; in yesterday's CVS we
were getting OverflowError; but these functions always make good sense
for positive arguments, no matter how large).
PEP 238. Changes:
- add a new flag variable Py_DivisionWarningFlag, declared in
pydebug.h, defined in object.c, set in main.c, and used in
{int,long,float,complex}object.c. When this flag is set, the
classic division operator issues a DeprecationWarning message.
- add a new API PyRun_SimpleStringFlags() to match
PyRun_SimpleString(). The main() function calls this so that
commands run with -c can also benefit from -Dnew.
- While I was at it, I changed the usage message in main() somewhat:
alphabetized the options, split it in *four* parts to fit in under
512 bytes (not that I still believe this is necessary -- doc strings
elsewhere are much longer), and perhaps most visibly, don't display
the full list of options on each command line error. Instead, the
full list is only displayed when -h is used, and otherwise a brief
reminder of -h is displayed. When -h is used, write to stdout so
that you can do `python -h | more'.
Notes:
- I don't want to use the -W option to control whether the classic
division warning is issued or not, because the machinery to decide
whether to display the warning or not is very expensive (it involves
calling into the warnings.py module). You can use -Werror to turn
the warnings into exceptions though.
- The -Dnew option doesn't select future division for all of the
program -- only for the __main__ module. I don't know if I'll ever
change this -- it would require changes to the .pyc file magic
number to do it right, and a more global notion of compiler flags.
- You can usefully combine -Dwarn and -Dnew: this gives the __main__
module new division, and warns about classic division everywhere
else.
visit_finalizer_reachable since it's the same as visit_reachable.
Rename visit_reachable to visit_move. Objects can now have the GC type
flag set, reachable by tp_traverse and not be in a GC linked list. This
should make the collector more robust and easier to use by extension
module writers. Add memory management functions for container objects
(new, del, resize).
pyport.h: typedef a new Py_intptr_t type.
DELICATE ASSUMPTION: That HAVE_UINTPTR_T implies intptr_t is
available as well as uintptr_t. If that turns out not to be
true, things must get uglier (C99 wants both, so I think it's
an assumption we're *likely* to get away with).
thread_nt.h, PyThread_start_new_thread: MS _beginthread is documented
as returning unsigned long; no idea why uintptr_t was being used.
Others: Always use Py_[u]intptr_t, never [u]intptr_t directly.
This patch attempts to do to cPickle what Guido did
for pickle.py v 1.50. That is: save_global tries
importing the module, and fetching the name from the
module. If that fails, or the returned object is not
the same one we started with, it raises a
PicklingError. (All this so pickling a lambda will
fail at save time, rather than load time).
right way"). Fiddle __future__.py to use them.
Jeremy's pyassem.py may also want to use them (by-hand duplication of
magic numbers is brittle), but leaving that to his judgment.
Beef up __future__'s test to verify the exported feature names appear
correct.
- Do not compile unicodeobject, unicodectype, and unicodedata if Unicode is disabled
- check for Py_USING_UNICODE in all places that use Unicode functions
- disables unicode literals, and the builtin functions
- add the types.StringTypes list
- remove Unicode literals from most tests.
the "#ifdef MS_WINDOWS" to "#ifdef SELECT_USES_HEAP" and by
setting SELECT_USES_HEAP when FD_SETSIZE > 1024.
The indirection seems useful since this subtly changes the path
that "normal" Windows programs take (where Timmie sez FD_SETSIZE =
512). If that's a problem for Windows, he has only one place to
change.
Peter Schneider-Kamp.
Clarified some docstrings in the spirit of the patch; left out the
degrees() and radians() functions (see the patch comments on SF).
- Add an explicit call to PyType_Ready(&PyList_Type) to pythonrun.c
(just for the heck of it, really -- we should either explicitly
ready all types, or none).
New functions getnameinfo, getaddrinfo. New exceptions socket.gaierror,
socket.herror. Various new constants, in particular AF_INET6 and error
codes and parameters for getaddrinfo.
AF_INET6 support in setipaddr, makesockaddr, getsockaddr, getsockaddrlen,
gethost_common, PySocket_gethostbyaddr.
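For example (sketch; host and port are placeholders):

    import socket

    # Each result is (family, socktype, proto, canonname, sockaddr); with
    # IPv6 support compiled in, AF_INET6 results can show up here too.
    for res in socket.getaddrinfo("www.python.org", 80, 0, socket.SOCK_STREAM):
        print res

    print socket.getnameinfo(("127.0.0.1", 80), 0)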
The ob_sval member of a string object isn't necessarily aligned to better
than a native long, so the new "q" and "Q" struct codes can't get away w/
casting tricks on platforms where LONG_LONG requires stricter-than-long
alignment. After I thought of a few elaborate workarounds, Guido bashed
me over the head with the obvious memcpy approach, herewith implemented.
Check for slice/item deletion, which calls slice/item assignment with a NULL
value, and raise a TypeError instead of coredumping. Bugreport and suggested
fix by Alex Martelli.
that info to code dynamically compiled *by* code compiled with generators
enabled. Doesn't yet work because there's still no way to tell the parser
that "yield" is OK (unlike nested_scopes, the parser has its fingers in
this too).
Replaced PyEval_GetNestedScopes by a more-general
PyEval_MergeCompilerFlags. Perhaps I should not have? I doubted it was
*intended* to be part of the public API, so just did.
Include sys/poll.h if it was found by the configure script. The OpenGroup
spec says poll.h is the correct header file to use, so that file is
preferred.
Also note that it isn't just Linux nice() that is broken: at least FreeBSD
and BSDI also have this problem. os.nice() should probably just be emulated
using getpriority()/setpriority(), if they are available, but I'll get to
that later.
This patch allows the readline module to build cleanly with GNU
readline 4.2 without breaking the build for earlier GNU readline
versions. The configure script checks for the presence of
rl_completion_matches in libreadline.
unicodeobject.h, which forces sizeof(Py_UNICODE) == sizeof(Py_UCS4).
(this may be good enough for platforms that don't have a 16-bit
type. The UTF-16 codecs don't work, though)
Protect several more uses of constants with #ifdefs; these are necessary on
(at least) SCO OpenServer 5. Fixes a non-SF-submitted bugreport by Michael
Kent.
the new PyLong_{As,From}{Unsigned,}LongLong tests, so the bulk of the
code is in the new #include file testcapi_long.h, which generates
different code depending on how macros are set. This sucks, but I couldn't
think of anything that sucked less.
UNIX headache? If we still maintain dependencies by hand, someone who
knows what they're doing should teach whatever needs it that
_testcapimodule.c includes testcapi_long.h.
Replaced PyLong_{As,From}{Unsigned,}LongLong guts with calls
to _PyLong_{As,From}ByteArray.
_testcapimodule.c:
Added strong tests of PyLong_{As,From}{Unsigned,}LongLong.
Fixes SF bug #432552 PyLong_AsLongLong() problems.
Possible bugfix candidate, but the fix relies on code added to longobject
to support the new q/Q structmodule format codes.
functions. I intend to replace their guts with calls to the new
_PyLong_{As,From}ByteArray() functions, but AFAICT there's no tests for
them at all now; I also suspect PyLong_AsLongLong() isn't catching all
overflow cases, but without a std test to demonstrate that why should you
believe me <wink>.
Also added a raiseTestError() utility function.
This completes the q/Q project.
longobject.c _PyLong_AsByteArray: The original code had a gross bug:
the most-significant Python digit doesn't necessarily have SHIFT
significant bits, and you really need to count how many copies of the sign
bit it has else spurious overflow errors result.
test_struct.py: This now does exhaustive std q/Q testing at, and on both
sides of, all relevant power-of-2 boundaries, both positive and negative.
NEWS: Added brief dict news while I was at it.
native mode, and only when config #defines HAVE_LONG_LONG. Standard mode
will eventually treat them as 8-byte ints across all platforms, but that
likely requires a new set of routines in longobject.c first (while
sizeof(long) >= 4 is guaranteed by C, there's nothing in C we can rely
on x-platform to hold 8 bytes of int, so we'll have to roll our own;
I'm thinking of a simple pair of conversion functions, Python long
to/from sized vector of unsigned bytes; that may be useful for GMP
conversions too; std q/Q would call them with size fixed at 8).
test_struct.py: In addition to adding some native-mode 'q' and 'Q' tests,
got rid of unused code, and repaired a non-portable assumption about
native sizeof(short) (it isn't 2 on some Cray boxes).
libstruct.tex: In addition to adding a bit of 'q'/'Q' docs (more needed
later), removed an erroneous footnote about 'I' behavior.
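A small illustration of the native-mode codes (only meaningful on builds
with HAVE_LONG_LONG):

    import struct

    packed = struct.pack("q", -(2L ** 40))
    print len(packed)                    # 8 on typical platforms
    print struct.unpack("q", packed)[0]  # -1099511627776
    print struct.unpack("Q", struct.pack("Q", 2L ** 63))[0]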
*are* obsolete; three variables and the maketrans() function are not
(yet) obsolete.
Add a compensating warnings.filterwarnings() call to test_strop.py.
Add this to the NEWS.
return a (C) long: PyArg_ParseTuple and Py_BuildValue may not let us get
at the size_t we really want, but C int is clearly too small for a 64-bit
box, and both the start parameter and the return value should work for
large mapped files even on 32-bit boxes. The code really needs to be
rethought from scratch (not by me, though ...).
1) it didn't obey the "start" parameter (and when it does, we must validate
the value)
2) the return value needs to be an absolute index, rather than relative to
some arbitrary point in the file
(checking CVS, it appears this method never worked; these changes bring it
into line with typical .find() behavior)
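A sketch of the corrected behavior (the file name is a placeholder):

    import mmap

    f = open("data.bin", "r+b")
    m = mmap.mmap(f.fileno(), 0)
    # find() now honors the start offset and returns an absolute index
    # into the mapping, or -1 if the substring isn't there.
    print m.find("needle", 16)
    m.close()
    f.close()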
When getting a string buffer for a string we just created, use
PyString_AS_STRING() instead of PyString_AsString() to avoid the
call overhead and extra type check.
meaning infinity -- but at least warn about it in the code! I pissed
away a couple hours on this today, and don't wish the same on the next
in line.
Bugfix candidate.
means "replace everything". But the string module, string.replace()
and test_string.py believe a 0 count means "replace nothing".
"Nothing" wins, strop loses.
Bugfix candidate.
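In other words (sketch):

    import string

    # A count of 0 means "replace nothing"; strop used to treat it as
    # "replace everything".
    print string.replace("aaa", "a", "b", 0)   # -> "aaa"
    print string.replace("aaa", "a", "b")      # -> "bbb"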
Platform blew up on "123".replace("123", ""). Michael Hudson pinned the
blame on platform malloc(0) returning NULL.
This is a candidate for all bugfix releases.
This closes SF bug #231328.
Added all constants needed to use the functions defined in this module
that are not defined elsewhere (the O_* symbols are available in the
os module). No additional modules are needed to use this now.
PyObject_AsFileDescriptor() -- it does the same thing everywhere, so
use it the same way everyone else does so that exceptions are
consistent. This means we have less code here, and we do not need to
resort to hackish ways of getting the Python-visible function name to
fdconv().
this could cause invalid paths to be returned for AF_UNIX sockets on some
platforms (including FreeBSD 4.2-RELEASE), apparently because there is
no assurance that the address will be nul-terminated when filled in by the
kernel.
PySocketSock_recvfrom(): Use PyString_AS_STRING() to get the data pointer
of a string we create ourselves; there is no need for the extra type
check from PyString_AsString().
This closes SF bug #416573.
This patch does several things to termios:
(1) changes all functions to be METH_VARARGS
(2) changes all functions to be able to take a file object as the
first parameter, as per
http://mail.python.org/pipermail/python-dev/2001-February/012701.html
(3) give better error messages
(4) removes a bunch of comments that just repeat the docstrings
(5) #includes <termio.h> before #including <sys/ioctl.h> so more
#constants are actually #defined.
(6) a couple of docstring tweaks
I have tested this minimally (i.e. it builds, and
doesn't blow up too embarrassingly) on OSF1/alpha and
on one of the sf compile farm's solaris boxes, and
rather more comprehensively on my linux/x86 box.
It still needs to be tested on all the other platforms
we build termios on.
This closes the code portion of SF patch #417081.
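For example (sketch):

    import sys, termios

    # tcgetattr() and friends now accept a file object (anything with a
    # fileno() method) as well as an integer file descriptor.
    attrs = termios.tcgetattr(sys.stdin)
    print attrs    # [iflag, oflag, cflag, lflag, ispeed, ospeed, cc]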
while not generally a good idea, this is used by RDF users, and works
to implement RDF-style namespace+localname concatenation as defined
in the RDF specifications. (This also corrects a backwards-compatibility
bug.)
Be more conservative while clearing out handlers; set the slot in the
self->handlers array to NULL before DECREFing the callback.
Still more adjustments to make the code style internally consistent.
OpenSSL versions before 0.9.5. This is just too experimental to be
worth it, especially since the user would have to do some severe
hacking of the Modules/Setup file to even enable the EGD code, and
without the EGD code it would always spit out a warning on some
systems -- even when socket.ssl() is not used. Fixing that properly
is not my job; the EGD patch is clearly not so important that it
should hold up the 2.1 release.
always:
- #undef HAVE_CONFIG_H (because otherwise chardefs.h tries to include
strings.h)
- #include readline.h and history.h
and we never declare any readline function prototypes ourselves.
This makes it compile with readline 4.2, albeit with a few warnings.
Some of the remaining warnings are about completion_matches(), which
is renamed to rl_completion_matches().
I've tested it with various other versions, from 2.0 up, and they all
seem to work (some with warnings) -- but only on Red Hat Linux 6.2.
Fixing the warnings for readline 4.2 would break compatibility with
3.0 (and maybe even earlier versions), and readline doesn't seem to
have a way to test for its version at compile time, so I'd rather
leave the warnings in than break compilation with older versions.
problem reported by Neil Schemenauer on python-dev on 4/12/01, with
subject "Problem with SSL and socketmodule on Debian Potato?".
It's tentative because Moshe objected, but Martin rebutted, and Moshe
seems unavailable for comments.
(Note that with OpenSSL 0.9.6a, I get a lot of compilation warnings
for socketmodule.c -- I'm assuming I can safely ignore these until 2.1
is released.)
before calling any callbacks. This is important
since the callback objects only look at themselves
to determine that they are invalid. This change
avoids a segfault when callbacks use a different
reference to an object in the process of being
deallocated.
This fixes SF bug #415660.
(with modification of existing dict elements!).
This is part of SF patch #409864: lazy fix for Ping's bizarre scoping
crash.
The adaptation I made to Michael's patch was to change the error
handling to avoid masking other errors (moving the specific error
message to inside test_dict_inner()), and to insert a test for
dict==NULL at the start.
Michael Hudson suggested this fix for the Tru64 problem (SF bug
232597). It looks reasonable, it works on Tru64, and it doesn't break
anything on Linux, so I say go for it.
of 2-space and 4-space indents. Whatever, when I saw the checkin diff it
was clear that what my editor thinks a tab means didn't match this module's
belief. Removed all the tabs from the lines I added and changed, left
everything else alone.
pickled into the signed(!) 4-byte BININT format, so were getting unpickled
again as negative ints. Repaired that.
Added some minimal docs at the top about what I've learned about the pickle
format codes (little of which was obvious from staring at the code,
although that's partly because all the size-related bugs greatly obscured
the true intent of the code).
Happy side effect: because save_int() needed to grow a *proper* range
check in order to fix this bug, it can now use the more-efficient BININT1,
BININT2 and BININT formats when the long's value is small enough to fit
in a signed 4-byte int (before this, on a sizeof(long)==8 box it always
used the general INT format for negative ints).
test_cpickle works again on sizeof(long)==8 machines. test_pickle is
still busted big-time.
binary pickle, and the latter contains a pickle of a negative Python
int i written on a sizeof(long)==4 box (and whether by cPickle or
pickle.py), it's read incorrectly as i + 2**32. The patch repairs that,
and allows test_cpickle.py (to which I added a relevant test case earlier
today) to work again on sizeof(long)==8 boxes.
There's another (at least one) sizeof(long)==8 binary pickle bug, but in
pickle.py instead. That bug is still there, and test_pickle.py doesn't
catch it yet (try pickling and unpickling, e.g., 1 << 46).
where sizeof(long)==8. This *was* broken on boxes where signed right
shifts didn't sign-extend, but not elsewhere. Unfortunately, apart
from the Cray T3E I don't know of such a box, and Guido has so far
refused to buy me any Cray machines for home Python testing <wink>.
More immediately interesting would be if someone could please test
this on *any* sizeof(long)==8 box, to make sure I didn't break it.
handling of EAGAIN.
This may or may not fix the problem for me (Mandrake 7.2 on a Dell
Optiplex GX110 desktop): I can't hear the output, but it does pass the
test now. It doesn't fix the problem for Fred (Mandrake 7.2 on a Dell
Inspiron 7500 which has the Maestro sound drivers). Fred suspects
that it's the kernel version in combination with the driver.
gives the CVS revision of this file even if it does not include the
extra RCS "$Revision: " cruft.
initpyexpat(): Use get_version_string() instead of hard-coding magic
indexes into the RCS string (which may be affected by export options).
tracked as soon as it is clear; this can decrease the number of roots for
the cycle detector sooner rather than later in applications which hold on
to weak references beyond the time of the invalidation.
If a module has a future statement enabling nested scopes, they are
also enable for the exec statement and the functions compile() and
execfile() if they occur in the module.
If Python is run with the -i option, which enters interactive mode
after executing a script, and the script it runs enables nested
scopes, they are also enabled in interactive mode.
XXX The use of -i with -c "from __future__ import nested_scopes" is
not supported. What's the point?
To support these changes, many function variants have been added to
pythonrun.c. All the variants names end with Flags and they take an
extra PyCompilerFlags * argument. It is possible that this complexity
will be eliminated in a future version of the interpreter in which
nested scopes are not optional.
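A small illustration of the first point (sketch):

    from __future__ import nested_scopes

    # Because this module enables nested scopes, code compiled here by
    # compile()/exec gets them too: the lambda below can see 'x'.
    src = "def outer(x):\n    return (lambda: x)()\nprint outer(42)"
    exec compile(src, "<string>", "exec")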
with free variables. Thanks to Martin v. Loewis for finding two of
the problems. This fixes SF bug #405583.
There is also a C API change: PyFrame_New() is reverting to its
pre-2.1 signature. The change introduced by nested scopes was a
mistake. XXX Is this okay between beta releases?
cell_clear(), the GC helper, must decref its reference to break
cycles.
frame_dealloc() must dealloc all cell vars and free vars in addition
to locals.
eval_code2() setup code must INCREF cells it copies out of the
closure.
The STORE_DEREF opcode implementation must DECREF the object it passes
to PyCell_Set().
these can be missing on some (all?) Irix and Tru64 versions.
Protect the CRTSCTS value with a cast; this can be a larger value on
Solaris/SPARC.
This should fix SF tracker items #405092, #405350, and #405355.
defined and export both names.
Solaris also does not define CBAUDEX; it is not clear that CBAUDEXT
(which is defined there) is the same thing, so we only protect against
the lack of CBAUDEX.
Reported by Greg V. Wilson.
Fixed recno support (keys are integers rather than strings).
Work around DB bug that causes stdin to be closed by rnopen() when the
DB file needed to exist but did not (no longer segfaults).
This closes SF tracker patch #403445.
Also wrapped some long lines and added whitespace around operators -- FLD.
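A sketch of recno usage after the fix (the file name is a placeholder):

    import bsddb

    db = bsddb.rnopen("/tmp/example.db", "c")   # record-number (recno) database
    db[1] = "first record"                       # keys are integers, not strings
    print db[1]
    db.close()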
the internal API function to release the interned strings as the very
last thing before returning status. This aids in memory use debugging
because it eliminates a huge source of noise from the reports. This
is never called during normal (non-debugging) use because releasing
the interned strings slows Python's shutdown and isn't necessary
anyway because the system will always reclaim the memory.
(Fred, I'll leave the doc changes to you, because I don't know if you
want to delete libsoundex.tex or leave it in.
Someone else will have to tweak PC/os2vacpp/{config.c,makefile} and
PCbuild/pythoncore.dsp, both of which refer to soundex.c)
The bug report title isn't correct, but was on the right track.
Rev 2.13 applied a patch intended to improve asinh and acosh, but the
author mistakenly replaced the body of asin with their new code for asinh.
See bug report for all the gory details.
This patch: (a) puts the "new" (as of 2.13) asinh code into the asinh
function; and, (b) repairs asin via what Abramowitz & Stegun say it should
be (which is probably the same as what 2.12 did for asin, although I got
tired of matching parentheses before being 100% sure of that -- and I don't
care! The source of the old code is a mystery, and I *know* why I picked
the new code.).
* fixes the zlib decompress sync flush bug as reported in bug #124981
* avoids repeat calls to (in|de)flateEnd when destroying (de)compression
objects
* raises exception when allocating unused_data fails
* fixes memory leak when allocating unused_data fails
* raises exception when allocating decompress data fails
* removes vestigial code from decompress flush now that decompression
returns all available data
* tidies code so object compress/decompress/flush routines are consistent
compiled only for some versions of Expat, but was no longer needed as the
new implementation works for all versions. Keeping it created multiple
definitions for Expat 1.2, which caused compilation to fail.
in_callback field that's set to true whenever a callback into an
event handler is in progress. Needed for:
set_error(): Add line number and offset information to the exception
as attributes, so users don't need to parse the text of the
message.
set_error_attr(): New helper function for set_error().
xmlparse_GetInputContext(): New function of the parser object;
returns the document source for an event during a callback, None
at all other times.
xmlparse_SetParamEntityParsing(): Make the signature consistent with
the other parser methods (use xmlparseobject* for self instead of
PyObject*).
initpyexpat(): Don't lose the reference to the exception class!
call_with_frame(),
getcode(): Re-indent to be consistent with the rest of the file.
_testcapimodule.c
make sure PyList_Reverse doesn't blow up again
getargs.c
assert args isn't NULL at the top of vgetargs1 instead of
waiting for a NULL-pointer dereference at the end
Fix the .binary() method of mpz objects for 64-bit systems.
[Also removed a lot of trailing whitespace elsewhere in the file. --FLD]
This closes SF patch #103547.
of nested functions. Either is allowed in a function if it contains
no defs or lambdas or the defs and lambdas it contains have no free
variables. If a function is itself nested and has free variables,
either is illegal.
Revise the symtable to use a PySymtableEntryObject, which holds all
the relevant information for a scope, rather than using a bunch of
st_cur_XXX pointers in the symtable struct. The changes simplify the
internal management of the current symtable scope and of the stack.
Added new C source file: Python/symtable.c. (Does the Windows build
process need to be updated?)
As part of these changes, the initial _symtable module interface
introduced in 2.1a2 is replaced. A dictionary of
PySymtableEntryObjects is returned.
now carry a 'code' attribute that gives the Expat error
number.
Added support for additional handlers for Expat 1.95.*, including
XmlDeclHandler, EntityDeclHandler, ElementDeclHandler, and
AttlistDeclHandler. Associated constants are in the 'model'
sub-object.
Added two new attributes to the parser object: ordered_attributes and
specified_attributes. These are used to control how attributes are
reported and which attributes are reported.
Adds support for raw packets (AF_PACKET) under Linux. I haven't
tested this code thoroughly; it compiles and the basic calls all work
without crashing. Not sure what to actually do with raw sockets though.
Not sure what other platforms this might be useful for.
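A rough sketch of what the new support allows (Linux only, requires root;
0x0003 is ETH_P_ALL):

    import socket

    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(0x0003))
    frame = s.recv(2048)     # one raw link-layer frame
    print len(frame)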
This involves changing the zlib build process to build zlib itself from
sources and then use that library. The comments are also updated to reflect
the new official home of zlib, and Windows-specific notes about the build
process have been added.
"Partial" as the code uses sys.prefix in an attempt to locate 'w9xpopen.exe', but sys.prefix is not set if Python can't find it itself. So this _still_ fails in Pythonwin, but I am committing the patch for 2 reasons:
* Embedded apps that set sys.prefix or use PYTHONHOME will work
* The exception raised on failure to find the executable is far more obvious
directory. Modify meaning of -s option to specify the Modules directory.
Add -l option to specify library source directory when building extension
modules. Perhaps these names should be switched to avoid breaking old
code. Add -c compiler option to use when emitting rules to build object files.
supposed to be declared in system include files (with a proper prototype.)
Should be moved to a platform-specific block if anyone finds out which
broken platforms need it :-)
Participate in garbage collection if available.
Potentially decref handlers in clear_handlers.
Partially reindent.
Put synthetic frame object on the stack to support better error output.
Expose Python codecs to pyexpat.
Add new Expat 1.2 handlers and API.
Fix memory leak: release self->handlers.
Do not expect PyModule_AddObject and PyModule_AddStringConstant in 2.0b1.
Raise exception in ParseFile.
ctime, gmtime and localtime optional, defaulting to 'the current time' in
all cases. Adjust docs, add news item. Also convert all argument-handling to
METH_VARARGS. Closes SF patch #103265.
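For example (sketch):

    import time

    print time.ctime()       # argument now optional: defaults to the current time
    print time.localtime()
    print time.gmtime()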
implementation details inside the ucnhash module.
also cleaned up the unicode copyright blurb a little; Secret Labs'
internal revision history isn't that interesting...
- In count(), remove(), index(): call RichCompare(Py_EQ).
- Get rid of array_compare(), in favor of new array_richcompare() (a
near clone of list_compare()).
- Aligned items in array_methods initializer and comments for type
struct initializer.
- Folded a few long lines.
The final piece of this change...
Strip down Setup.config.in and Setup.dist to the minimal sets required
to get a working Python; setup.py will handle the rest
builds during which he forgot to uncomment crucial library lines in
Setup, walks into Guido's East End nightclub with a tactical nuclear
weapon on his shoulder. Said nuclear weapon is promptly deployed
exactly where it will do the most good, right in the middle of
configure.in.
With this patch, the set of libraries autoconfigured in is extended to
include ndbm, gdbm, and crypt. This essentially eliminates any need to
tweak Setup for a normal Linux build.
"'E was a fair man. Cruel, but fair."
of dbmmodule dynamically by default (otherwise it can pull in
dependencies with libdb that croak pybsddb3). This change moves the
Setup line for dbmmodule to Setup.config.in.
(bugs #115903, #115696)
This is based on a patch by Darrel Gallion. I'm not 100%
sure about this fix, but I haven't managed to come up with
any test case it cannot handle...
-- added some more docstrings
-- fixed typo in scanner class (#125531)
-- the multiline flag (?m) shouldn't affect the \Z operator (#127259)
-- fixed non-greedy backtracking bug (#123769, #127259)
-- added sre.DEBUG flag (currently dumps the parsed pattern structure)
-- fixed a couple of glitches in groupdict (the #126587 memory leak
had already been fixed by AMK)
xreadlines inserted themselves in between the two) and clarify that the
normal socket module should be commented out. (Someone also suggested the
latter on c.l.py some time ago, I forget who, sorry.)
Wasn't built on Windows; not in config.c either.
Module init function missing DL_EXPORT magic.
test_xreadline output file obviously wrong (started w/ "test_xrl").
test program very unclear about what was expected.
bugs #126161 and 123634).
The solution doesn't use the unicode-escape encoding; that has other
problems (it seems not 100% reversible). Rather, it transforms the
input Unicode object slightly before encoding it using
raw-unicode-escape, so that the decoding will reconstruct the original
string: backslash and newline characters are translated into their
\uXXXX counterparts.
This is backwards incompatible for strings containing backslashes, but
for some of those strings, the pickling was already broken.
Note that SF bug #123634 complains specifically that cPickle fails to
unpickle the pickle for u'' (the empty Unicode string) correctly.
This was an off-by-one error in load_unicode().
XXX Ugliness: in order to do the modified raw-unicode-escape, I've
cut-and-pasted a copy of PyUnicode_EncodeRawUnicodeEscape() into this
file that also encodes '\\' and '\n'. It might be nice to migrate
this into the Unicode implementation and give this encoding a new name
('half-raw-unicode-escape'? 'pickle-unicode-escape'?); that would help
pickle.py too. But right now I can't be bothered with the necessary
infrastructural changes.
documented, and as is reasonable (since it is optional, but there's
another argument following it that may require you to specify a
value). This solves SF bug 121887.
regardless of whether the system getopt() does what we want. This avoids the
hassle with prototypes and externs, and the check to see if the system
getopt() does what we want. Prefix optind, optarg and opterr with _PyOS_ to
avoid name clashes. Add new include file to define the right symbols. Fix
Demo/pyserv/pyserv.c to include getopt.h itself, instead of relying on
Python to provide it.
build on SGI":
* Check for 'sgi' preprocessor symbol, not '__sgi__'
* Surround individual character macros with #ifdef's, instead of making them
all rely on STRICT_SYSV_CURSES
-- fixed negative lookbehind to work correctly at the beginning
of the target string (bug #117242)
-- improved syntax check; you can no longer refer to a group
inside itself (bug #110866)
Direct use of interp->result is deprecated; changing this to
Tcl_GetStringResult(interp) everywhere fixed the problem of losing the
error message with TclError exceptions, on Windows.
libm result is 0). Cautiously add a few libm exception test cases:
1. That exp(-huge) returns 0 without exception.
2. That exp(+huge) triggers OverflowError.
3. That sqrt(-1) raises ValueError specifically (apparently under glibc linked
with -lieee, it was raising OverflowError due to an accident of the way
mathmodule.c's CHECK() macro happened to deal with Infs and NaNs under gcc).
a Z_BUF_ERROR while decompressing. If it is, assume that this means
the data being decompressed is bad and raise an exception, instead of
just assuming that Z_BUF_ERROR always means that more space is required.
read the header from the .au file and do a sanity check
pass only the data to the audio device
call flush() so that program does not exit until playback is complete
call all the other methods to verify that they work minimally
call setparameters with a bunch of bogus arguments
linuxaudiodev.c:
use explicit O_WRONLY and O_RDONLY instead of 1 and 0
add a string name to each of the entries in audio_types[]
add AFMT_A_LAW to the list of known formats
add x_mode attribute to lad object, stores imode from open call
test ioctl return value as == -1, not < 0
in read() method, resize string before return
add getptr() method, which does an ioctl on GETIPTR or GETOPTR
depending on x_mode
in setparameters() method, do better error checking and raise
ValueErrors; also use ioctl calls recommended by Open Sound
System Programmer's Guide (www.opensound.com)
use PyModule_AddXXX to define names in module
operations are defined. This will, hopefully, clarify
some of the logic.
Added close test to raise proper error when operations
are performed on closed StringIOs.
Added a position argument to the truncate method.
Added a size argument to readline.
Added PyArg_Parse calls for methods that don't take arguments to
make sure they don't take arguments.
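A small illustration of the new behavior (sketch; the exact exception
raised on a closed object is assumed to be ValueError):

    from cStringIO import StringIO

    s = StringIO("line one\nline two\n")
    print s.readline(4)       # readline() now takes an optional size -> "line"
    s.close()
    try:
        s.read()
    except ValueError:
        print "operations on a closed StringIO now raise an error"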
http://sourceforge.net/bugs/?func=detailbug&bug_id=113803&group_id=5470
Add Unicode support and error handling to AsString(). Both AsString()
and Merge() now return NULL and set a proper Python exception
condition when an error happens; Merge() and other callers of
AsString() check for errors from AsString(). Also fixed cleanup in
Merge() and Tkapp_Call() return cleanup code; the fv array was not
necessarily completely initialized, causing calls to ckfree() with
garbage arguments!
(Also reindented some lines that were longer than 80 chars and
reformatted some code that used an alien coding standard.)
Versions are defined for Windows and Unix; the Unix
flavor uses sysconf() to get the page size; this avoids
the use of getpagesize(), which is deprecated and
requires an additional library on some platforms
(specifically, Reliant UNIX).
This partially closes SourceForge bug #113797.
copied strings from environment variables and argv[0] into
fixed-length buffers without checking their length.
Reported by Stan Bubrouski; advice on fix from John Viega.
Add definitions of INT_MAX and LONG_MAX to pyport.h.
Remove includes of limits.h and conditional definitions of INT_MAX
and LONG_MAX elsewhere.
This closes SourceForge patch #101659 and bug #115323.
undefined. According to MvL, this is safe: the MS_SYNC flag means that
msync() returns when all I/O operations are scheduled; without it, it
waits until they are complete, which is acceptable behavior.
- fixed attributions
- moved decomposition data to a separate table, in preparation
for step 3 (which won't happen before 2.0 final, promise!)
- use relative paths in the generator script
I have a lot more stuff in the works for 2.1, but let's leave
that for another day...
"xml.parsers.expat.error", so it will reflect the public name of the
exception rather than the internal name.
Also change some of the initialization to use the new PyModule_Add*()
convenience functions.
collector will be saved in gc.garbage. This is useful for debugging a
program that creates reference cycles.
- Fix else statements in gcmodule.c to conform to Python coding standards.
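The saved-garbage debugging mode described above is presumably enabled with
gc.DEBUG_SAVEALL (an assumption; the flag name is not given here). A rough
sketch:

    import gc

    gc.set_debug(gc.DEBUG_SAVEALL)
    # ... run code that creates reference cycles ...
    gc.collect()
    print gc.garbage    # every unreachable object the collector found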
subset of Win32 ShellExecute's functionality. Guido wants this because
IDLE's Help -> Docs function currently crashes his machine because of a
conflict between his version of Norton AntiVirus (6.10.20) and MS's
_popen. Docs for startfile are being mailed to Fred (or just read the
docstring -- it tells the whole story).
Changed webbrowser.py to use os.startfile instead of os.popen on Windows.
Changed IDLE's EditorWindow.py to pass an absolute path for the docs
(hardcoding ShellExecute's "directory" arg to "." as used to be done let
IDLE work, but made the startfile command exceedingly obscure for other
uses -- the MS docs are terrible, of course, & still not sure I
understand it).
Note that Windows Python must link with shell32.lib now! That's where
ShellExecute lives.
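For example (sketch; the path is a placeholder):

    import os

    # Windows only: open the file with whatever application is associated
    # with its extension, like double-clicking it in Explorer.
    os.startfile(r"C:\Python22\Doc\index.html")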
data and default handlers -- a new reference was being passed to
Py_BuildValue() for the "O" format character; using "N" plugs the leak.
Fixed two other (minor) leaks that occurred on various error conditions.
Removed uses of the UNLESS macro, which makes code hard to read, and is
Evil.