PyMem_*/PyObject_* allocator mismatches. At least I hope this fixes them all.
This reverts part of my change from yesterday that converted everything
in Parser/*.c to use the PyObject_* API. The encoding doesn't really need
to use PyMem_*; however, it uses new_string(), which must return PyMem_*
memory in order to handle the result of PyOS_Readline(), which returns
PyMem_* memory. If there were two versions of new_string(), one that
returned PyMem_* for tokens and one that returned PyObject_* for
encodings, that could also fix this problem (see the sketch below).
I'm not sure which version would be clearer.
This seems to fix both Guido's and Phillip's problems, so it's good enough
for now. After this change, it would be good to review Parser/*.c
for consistent use of the 2 memory APIs.
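A minimal sketch of the two-variant idea, with hypothetical names and
bodies (not the actual Parser code):

    #include "Python.h"
    #include <string.h>

    /* Hypothetical: token strings stay in the PyMem_* world, matching
       memory returned by PyOS_Readline(). */
    static char *
    new_string_pymem(const char *s, size_t len)
    {
        char *result = (char *)PyMem_MALLOC(len + 1);
        if (result != NULL) {
            memcpy(result, s, len);
            result[len] = '\0';
        }
        return result;  /* caller releases with PyMem_FREE */
    }

    /* Hypothetical: encoding names move to the PyObject_* world. */
    static char *
    new_string_pyobject(const char *s, size_t len)
    {
        char *result = (char *)PyObject_MALLOC(len + 1);
        if (result != NULL) {
            memcpy(result, s, len);
            result[len] = '\0';
        }
        return result;  /* caller releases with PyObject_FREE */
    }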
malloc/realloc type functions, as well as renaming one variable called 'new'
in tokenizer.c. Still lots more to be done; going to be checking in one
chunk at a time or the patch will be massively huge. Still compiles OK with
gcc.
This was the result of inconsistent use of the PyMem_* and PyObject_*
allocators. By switching to the PyObject_* allocator almost everywhere,
this removes the inconsistency.
objimpl.h, pymem.h: Stop mapping PyMem_{Del, DEL} and PyMem_{Free, FREE}
to PyObject_{Free, FREE} in a release build. They're aliases for the
system free() now.
_subprocess.c/sp_handle_dealloc(): Since the memory was originally
obtained via PyObject_NEW, it must be released via PyObject_FREE (or
_DEL).
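A hedged illustration of the pairing rule behind that fix; the struct
below is a stand-in, not the real _subprocess.c type:

    #include "Python.h"

    typedef struct {
        PyObject_HEAD
        long handle;    /* stand-in field */
    } sp_handle_object;

    static void
    sp_handle_dealloc(sp_handle_object *self)
    {
        /* The object came from PyObject_NEW (obmalloc), so it must be
           released with PyObject_FREE.  With PyMem_FREE now aliasing the
           system free(), a mismatch would crash in a release build. */
        PyObject_FREE(self);
    }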
pythonrun.c, tokenizer.c, parsermodule.c: I lost count of the number of
PyObject vs PyMem mismatches in these -- it's like the specific
function called at each site was picked at random, sometimes even with
memory obtained via PyMem getting released via PyObject. Changed most
to use PyObject uniformly, since the blobs allocated are predictably
small in most cases, and obmalloc is generally faster than the system
malloc for such small blocks.
If extension modules in real life prove as sloppy as Python's front
end, we'll have to revert the objimpl.h + pymem.h part of this patch.
Note that no problems will show up in a debug build (all calls still go
thru obmalloc then). Problems will show up only in a release build, most
likely as segfaults.
This will hopefully get rid of some Coverity warnings, be a hint to
developers, and be marginally faster.
Some asserts were added where the type is currently known to be valid
but depends on values computed by another function.
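A hedged sketch of the pattern, assuming struct tok_state from
Parser/tokenizer.h; the helper name is made up:

    #include "Python.h"
    #include "tokenizer.h"   /* Parser/tokenizer.h: struct tok_state */
    #include <assert.h>

    static int
    check_tok_buffer(struct tok_state *tok)
    {
        /* tok->buf is filled in by another function (the tokenizer's
           input setup); the assert documents that dependency and turns a
           stray NULL into a loud debug-build failure instead of a
           mysterious crash later. */
        assert(tok != NULL && tok->buf != NULL);
        return 0;
    }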
but without a specified encoding: decoding_fgets() (and decoding_feof()) can
return NULL and fiddle with the 'tok' struct, making tok->buf NULL. This is
okay in the other cases of calls to decoding_*(), it seems, but not in this
one.
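A sketch of the guard the fix amounts to; the exact call site in
Parser/tokenizer.c differs, so treat the arguments here as illustrative:

    /* decoding_fgets() can return NULL *and* leave tok->buf NULL, so
       check both before doing any pointer arithmetic on the buffer. */
    if (decoding_fgets(tok->inp, (int)(tok->end - tok->inp), tok) == NULL
            || tok->buf == NULL) {
        tok->done = E_DECODE;   /* report the decode failure */
        return EOF;
    }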
This should get a test added somewhere, but the test suite doesn't seem to
test encoding anywhere (although plenty of tests use it).
It seems to me that decoding errors in other places in the code (like at the
start of a token, instead of in the middle of one) make the code end up
adding small integers to NULL pointers, but happen to check for error states
before using the calculated new pointers. I haven't been able to trigger any
other crashes, in any case.
I would nominate this file for a complete rewrite for Py3k. The whole
decoding trick is too bolted-on for my tastes.
Call error_ret() in decode_str(). It was called in some other places,
but seemed inconsistent. It is safe to call PyTokenizer_Free() after
calling error_ret().
- SF Bug #772896, unknown encoding results in MemoryError, which is not helpful
I will only backport the segfault fix. I'll let Anthony decide if he wants
the other changes backported. I will do the backport if asked.
modes like non-interactive modes. This allows non-Latin-1 users
to write Unicode strings directly and sets Japanese users free from
weird manual escaping <wink> in shift_jis environments.
(Reviewed by Martin v. Loewis)
-- replace them with slightly faster PyObject_Call(o,a,NULL). (The
difference is that the latter requires a to be a tuple; the former
allows other values and wraps them in a tuple if necessary; it
involves two more levels of C function calls to accomplish all that.)
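For reference, a minimal hedged sketch of the stricter call; callable and
value are assumed to be valid pointers, and error handling is pared down:

    /* PyObject_Call() requires the positional args to already be a tuple;
       skipping the wrap-if-necessary logic is where the speedup comes from. */
    PyObject *args = PyTuple_Pack(1, value);       /* build (value,) */
    if (args == NULL)
        return NULL;
    PyObject *result = PyObject_Call(callable, args, NULL);
    Py_DECREF(args);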
Highlights: import and friends will understand any of \r, \n and \r\n
as end of line. Python file input will do the same if you use mode 'U'.
Everything can be disabled by configuring with --without-universal-newlines.
See PEP 278 for details.
This introduces:
- A new operator // that means floor division (the kind of division
where 1/2 is 0).
- The "future division" statement ("from __future__ import division)
which changes the meaning of the / operator to implement "true
division" (where 1/2 is 0.5).
- New overloadable operators __truediv__ and __floordiv__.
- New slots in the PyNumberMethods struct for true and floor division,
new abstract APIs for them, new opcodes, and so on.
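The new abstract APIs named above can be exercised directly from C; a
minimal hedged sketch (refcount and error handling elided, with
PyLong_FromLong standing in for the era's PyInt_FromLong):

    #include "Python.h"

    PyObject *one = PyLong_FromLong(1);
    PyObject *two = PyLong_FromLong(2);
    PyObject *floor_q = PyNumber_FloorDivide(one, two);  /* 1 // 2 -> 0 */
    PyObject *true_q  = PyNumber_TrueDivide(one, two);   /* 1 / 2  -> 0.5 */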
I emphasize that without the future division statement, the semantics
of / will remain unchanged until Python 3.0.
Not yet implemented are warnings (default off) when / is used with int
or long arguments.
This has been on display since 7/31 as SF patch #443474.
Flames to /dev/null.
Work around intrcheck.c's desire to pass 'PyErr_CheckSignals' to
'Py_AddPendingCall' by providing a (static) wrapper function that has the
right number of arguments.
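A sketch of such a wrapper; the adapter's name is hypothetical, but
Py_AddPendingCall() really does expect an int (*)(void *):

    /* PyErr_CheckSignals() takes no argument, so adapt it to the
       int (*)(void *) shape that Py_AddPendingCall() requires. */
    static int
    pending_check_signals(void *arg)
    {
        (void)arg;                     /* unused */
        return PyErr_CheckSignals();   /* 0 on success, -1 with exception set */
    }

    /* registration site: */
    Py_AddPendingCall(pending_check_signals, NULL);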
used for indentation-related errors. This patch includes Ping's
improvements to indentation-related error messages.
Closes SourceForge patches #100734 and #100856.