longer needs to be public, and shouldn't be, because all datetime
objects are immutable. The Python implementation has changed
accordingly, but the C implementation still needs to change.
The 4th item can be None or an iterator yielding list items, which are
used to append() or extend() the object. The 5th item can be None or
an iterator yielding a dict's (key, value) pairs, which are stuffed
into the object using __setitem__.
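For concreteness, a hedged sketch of a class exploiting both items;
the class and its attributes are invented for illustration:

    import pickle

    class Bag:
        """Hypothetical container with a list part and a dict part."""
        def __init__(self):
            self.items = []
            self.mapping = {}

        def append(self, item):             # consumed from the 4th item
            self.items.append(item)

        def __setitem__(self, key, value):  # consumed from the 5th item
            self.mapping[key] = value

        def __reduce__(self):
            # (callable, args, state, list-items iter, dict-items iter)
            return (Bag, (), None,
                    iter(self.items), iter(self.mapping.items()))

    b = Bag()
    b.append(1)
    b["k"] = "v"
    b2 = pickle.loads(pickle.dumps(b))
    assert b2.items == [1] and b2.mapping == {"k": "v"}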
Also (as a separate, though related, feature) add "batching" for list
and dict items. If you pickled a dict or list with a million items in
the past, it would push a million items onto the stack. It now pushes
at most 1000 items onto the stack at a time, using repeated APPENDS or
SETITEMS opcodes. (For lists, I hope that using many short extend()
calls doesn't exhibit quadratic behavior.)
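The shape of the batching loop is simple; a standalone sketch (the
names are mine, not pickle.py's exact internals):

    from itertools import islice

    _BATCHSIZE = 1000  # the batch size mentioned above

    def batched(items, size=_BATCHSIZE):
        """Yield chunks of at most `size` items, so the unpickler never
        holds more than one batch on the stack between APPENDS/SETITEMS
        opcodes."""
        it = iter(items)
        while True:
            chunk = list(islice(it, size))
            if not chunk:
                return
            yield chunk

    # e.g. a million-item list becomes 1000 chunks of 1000:
    assert sum(len(c) for c in batched(range(10**6))) == 10**6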
__module__ is the string name of the module the function was defined
in, just like __module__ of classes. In some cases, particularly for
C functions, the __module__ may be None.
Change PyCFunction_New() from a function to a macro, but keep an
unused copy of the function around so that we don't change the binary
API.
Change pickle's save_global() to use whichmodule() if __module__ is
None, but add the __module__ logic to whichmodule() since it might be
used outside of pickle.
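A rough sketch of the combined lookup (not the exact pickle.py code):

    import sys

    def whichmodule(obj, name):
        """Prefer __module__ when it's set; otherwise scan sys.modules
        for a module that has `obj` bound to `name`, falling back to
        __main__."""
        module = getattr(obj, "__module__", None)
        if module is not None:
            return module
        for modname, mod in list(sys.modules.items()):
            if modname != "__main__" and getattr(mod, name, None) is obj:
                return modname
        return "__main__"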
error handlers in the Unicode codecs: negative
positions are treated as being relative to the end of
the input, and out-of-bounds positions result in an
IndexError.
Also update the PEP and include an explanation of
this in the documentation for codecs.register_error.
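To illustrate the new rule (the handler name is ad hoc; only the
negative-position treatment is the point):

    import codecs

    def restart_near_end(exc):
        # An error handler returns (replacement, resume_position); with
        # this change, a negative resume position counts from the end
        # of the input.  On the 4-byte input below, -3 means "resume at
        # len(input) - 3", i.e. index 1.
        return ("?", -3)

    codecs.register_error("demo-negative", restart_near_end)
    print(b"\xffabc".decode("ascii", "demo-negative"))  # -> ?abc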
Fixes a small bug in iconv_codecs: if the position
returned from the callback is negative, *add* it to the
size instead of subtracting it.
From SF patch #677429.
M rpc.py
SF Bug 676398: Doesn't handle non-built-in exceptions
1. Move exception formatting to the subprocess; this allows subclassing of
exceptions, including subclasses created in the shell, without
introducing excessive complexity in the RPC mechanism.
2. Provide access to linecache from subprocess to support this.
classes have a __reduce__ that returns (self.__class__,
self.__getstate__()). tzinfo.__reduce__() is a bit smarter, calling
__getinitargs__ and __getstate__ if they exist, and falling back to
__dict__ if it exists and isn't empty.
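In outline, that smarter __reduce__() behaves like this hedged sketch
(the class name is invented; the logic mirrors the description):

    class ReduceLikeTzinfo:
        def __reduce__(self):
            getinitargs = getattr(self, "__getinitargs__", None)
            args = getinitargs() if getinitargs else ()
            getstate = getattr(self, "__getstate__", None)
            if getstate is not None:
                state = getstate()
            else:
                state = getattr(self, "__dict__", None) or None  # drop empty dict
            if state is None:
                return (self.__class__, args)
            return (self.__class__, args, state)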
for this iconv() implementation in the init function.
For encoding: use a byteswapped version of the input if
necessary.
For decoding: byteswap every piece returned by iconv()
if necessary (but not those pieces returned from the
callback).
Comment out test_sane() in the test script, because
whether this works depends on whether byte swapping
is necessary or not (and on Py_UNICODE_SIZE).
on the type instead of self.save(t). This defeated the purpose of
NEWOBJ, because it didn't generate a BINGET opcode when t was already
memoized; worse, it would generate multiple BINPUT opcodes for
the same type! pickletools.dis() doesn't like this.
How did I find this? I was playing with picklesize.py in the datetime
sandbox, and noticed that protocol 2 pickles for multiple objects were
in fact larger than protocol 1 pickles! That was suspicious, so I
decided to disassemble one of the pickles.
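The same kind of check is easy to reproduce (T is just a stand-in
class here):

    import pickle, pickletools

    class T:
        pass

    blob = pickle.dumps([T(), T()], 2)   # protocol 2
    pickletools.dis(blob)  # the second instance's class should come
                           # back via BINGET, not a fresh GLOBAL plus
                           # another BINPUT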
This really needs a unit test, but I'm exhausted. I'll be late for
work as it is. :-(
the same function, don't save the state or write a BUILD opcode. This
is so that a type (e.g. datetime :-) can support protocol 2 using
__getnewargs__ while also supporting protocol 0 and 1 using
__getstate__. (Without this, the state would be pickled twice with
protocol 2, unless __getstate__ is defined to return None, which
breaks protocol 0 and 1.)
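A hedged sketch of the pattern this enables (the class is invented,
and this only outlines the idea, not datetime's real code):

    class Stamp:
        def __new__(cls, payload=b""):
            self = object.__new__(cls)
            self.payload = payload
            return self

        def __getnewargs__(self):        # protocol 2: args for __new__
            return (self.payload,)

        # The same function object, so under the rule above protocol 2
        # writes the payload once (as NEWOBJ args) and skips BUILD.
        __getstate__ = __getnewargs__

        def __setstate__(self, state):   # protocols 0 and 1 use the state
            self.payload = state[0]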
popped a MARK, but without stack emulation the disassembler couldn't
know that, and subsequent indentation got hosed.
Now the disassembler does do enough stack emulation to catch this. While
I was at it, also added lots of sanity checks for other stack operations,
and correct use of the memo. This goes (I think) a long way toward being
a "pickle verifier" now too.
types. The special handling for these can now be removed from save_newobj().
Add some testing for this.
Also add support for setting the 'fast' flag on the Python Pickler class,
which suppresses use of the memo.
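Usage is just an attribute on the pickler (the sample data is
arbitrary):

    import io, pickle

    out = io.BytesIO()
    p = pickle.Pickler(out, 1)
    p.fast = True             # suppress the memo, per the flag above
    p.dump(["a", "b", "c"])   # fine; self-referential data would recurse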
only get run by test_pickle.py now (& not by test_cpickle.py). This
should be undone when protocol 2 is implemented in cPickle too.
test_cpickle should pass again.
object.__reduce__, do a getattr() on the class so we can explicitly
test for it. The reduce()-calling code becomes a bit more regular as
a result.
Also add support for slots: if an object has slots, the default state is
(dict, slots) where dict is the __dict__ or None, and slots is a dict
mapping slot names to slot values. We do a best-effort approach to
find slot names, assuming the __slots__ fields of classes aren't
modified after class definition time to misrepresent the actual list
of slots defined by a class.
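The resulting state shape, spelled out (Slotted and restore are
illustrative only; restore mirrors what happens at unpickling time):

    class Slotted:
        __slots__ = ("x", "y")

    # Under the scheme above, an instance with x=1, y=2 and no __dict__
    # gets the default state (None, {"x": 1, "y": 2}).

    def restore(obj, state):
        d, slots = state
        if d:
            obj.__dict__.update(d)
        for name, value in (slots or {}).items():
            setattr(obj, name, value)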
of the opcode character instead (but stripping the quotes).
Added a proto 2 test section for the canonical recursive-tuple case.
Note that since pickle's save_tuple() takes different paths depending on
tuple length now, beefier tests are really needed (but not in pickletools);
the "short tuple" case tried here was actually broken yesterday, and it's
subtle stuff, so it needs to be tested.
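For reference, the canonical case looks like this:

    import pickle

    # A tuple that reaches itself through a contained list:
    inner = []
    t = (inner,)
    inner.append(t)

    t2 = pickle.loads(pickle.dumps(t, 2))
    assert t2[0][0] is t2   # the cycle survives the round trip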
be one of 0, 1 or 2).
I should note that the previous checkin also added NEWOBJ support to
the unpickler -- but there's nothing yet that generates this.
some notion of low-level efficiency. Undid that, but left one routine
alone: save_inst() claims it has a reason for not using memoize().
I don't understand that comment, so added an XXX comment there.