message/delivery-status clause, and genericize it to handle all (other)
message/* content types. This lets us correctly parse 2 more of Anthony's
MIME torture tests (specifically, the message/external-body examples).
bunch of module globals that aren't used.
__maxheaderlen -> _maxheaderlen
_handle_multipart(): This should be more RFC compliant now, and does match the
updated/fixed semantics for preamble and epilogue.
(standard) tests, and doesn't throw parse errors. I still need to throw
Anthony's torture test at it, but I wanted to get this checked in and off my
disk.
> ----------------------------
> revision 1.20.4.4
> date: 2003/06/12 09:14:17; author: anthonybaxter; state: Exp; lines: +13 -6
> preamble is None when missing, not ''.
> Handle a couple of bogus formatted messages - now parses my main testsuite.
> Handle message/external-body.
> ----------------------------
> revision 1.20.4.3
> date: 2003/06/12 07:16:40; author: anthonybaxter; state: Exp; lines: +6 -4
> epilogue-processing is now the same as the old parser - the newline at the
> end of the line with the --endboundary-- is included as part of the epilogue.
> Note that any whitespace after the boundary is _not_ part of the epilogue.
> ----------------------------
> revision 1.20.4.2
> date: 2003/06/12 06:39:09; author: anthonybaxter; state: Exp; lines: +6 -4
> message/delivery-status fixed.
> HeaderParser fixed.
> ----------------------------
> revision 1.20.4.1
> date: 2003/06/12 06:08:56; author: anthonybaxter; state: Exp; lines: +163 -129
> A work-in-progress snapshot of the new parser. A couple of known problems:
>
> - first (blank) line of MIME epilogues is being consumed
> - message/delivery-status isn't quite right
>
> It still needs a lot of cleanup, but right now it parses a whole lot of
> badness that the old parser failed on. I also need to think about adding
> back the old 'strict' flag in some way.
> =============================================================================
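As a quick illustration of the preamble semantics noted in the log above (a minimal sketch using the modern email API spellings; the message text is invented):

    from email import message_from_string

    # A multipart message with no text before the first boundary.
    raw = (
        "Content-Type: multipart/mixed; boundary=BND\n"
        "\n"
        "--BND\n"
        "Content-Type: text/plain\n"
        "\n"
        "hello\n"
        "--BND--\n"
    )

    msg = message_from_string(raw)
    # With the fixed semantics, a missing preamble is None, not ''.
    assert msg.preamble is None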
patch removes dependencies on the old unsupported KoreanCodecs package
and the alternative JapaneseCodecs package. Since both of those
provide aliases for their codecs, this removal just makes the generic
codec names work.
We needed to make slight changes to __init__() as well.
This will be backported to Python 2.3 when its branch freeze is over.
gets done when maxheaderlen <> 0. The header really gets wrapped via
the email.Header.Header class, which has a more sophisticated
algorithm than just splitting on semi-colons.
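For reference, the caller-side view of that deferral, as a minimal sketch (using the modern lower-case email.header spelling of the class; the header value is invented):

    from email.header import Header

    value = ("A deliberately long Subject value that needs folding; "
             "it contains several clauses; separated by semicolons")

    # Header.encode() does the wrapping, trying higher-level syntactic
    # breaks (e.g. semicolons) before falling back to whitespace.
    h = Header(value, header_name='Subject', maxlinelen=60)
    print(h.encode())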
quotes. Fixes SF bug #794466, with the essential patch provided by
Stuart D. Gathman. Specifically,
_parseparam(), _get_params_preserve(): Use the parsing function that
takes quotes into account, as given (essentially) in the bug report's
test program.
Backport candidate.
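The gist of the quote-aware split, as a rough sketch (an illustrative re-implementation of the idea, not the actual patch; the helper name is invented):

    def split_params_respecting_quotes(value):
        """Split a parameter list on ';', but never inside double quotes.
        (Backslash escapes are ignored here to keep the sketch short.)"""
        parts, current, in_quotes = [], [], False
        for ch in value:
            if ch == '"':
                in_quotes = not in_quotes
            if ch == ';' and not in_quotes:
                parts.append(''.join(current).strip())
                current = []
            else:
                current.append(ch)
        parts.append(''.join(current).strip())
        return parts

    # The semicolon inside the quoted filename must not start a new parameter.
    print(split_params_respecting_quotes('attachment; filename="foo;bar.txt"'))
    # -> ['attachment', 'filename="foo;bar.txt"']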
test_rfc2231_no_language_or_charset_in_boundary(),
test_rfc2231_no_language_or_charset_in_charset(): New tests for proper
decoding of some RFC 2231 headers.
Backport candidate (as was the Utils.py 1.25 change) to both Python
2.3.1 and 2.2.4 -- will do momentarily.
can be None, and what to do in that situation.
get_filename(), get_boundary(), get_content_charset(): Make sure these
handle RFC 2231 headers without a CHARSET field.
Backport candidate (as was the Utils.py 1.25 change) to both Python
2.3.1 and 2.2.4 -- will do momentarily.
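A small sketch of the case in question, using the modern email API (the message and filename are invented; the point is that the empty charset field no longer trips these methods up):

    from email import message_from_string

    # An RFC 2231 parameter whose CHARSET (and language) fields are empty.
    raw = (
        "Content-Type: text/plain\n"
        "Content-Disposition: attachment; filename*=''foo%20bar.txt\n"
        "\n"
        "body\n"
    )

    msg = message_from_string(raw)
    # Should come back as a plain string (roughly 'foo bar.txt') instead of
    # blowing up on the missing charset.
    print(msg.get_filename())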
in some locales. This code simplifies the boundary algorithm to use
randint(), which is what we wanted anyway.
Bump package version to 2.5.3.
Backport candidate for Python 2.2.3
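Roughly the shape of a randint()-based boundary (an illustrative sketch, not the actual _make_boundary() code):

    import random

    def make_boundary_sketch():
        # random.randint() yields only digits, so the boundary cannot pick
        # up a locale-dependent decimal point the way a formatted float can.
        token = random.randint(0, 2**63 - 1)
        return '=' * 15 + '%020d' % token + '=='

    print(make_boundary_sketch())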
long header lines is now (properly) in the Header class. So we no
longer need _split_header() and we'll just defer to Header.encode()
when we have a plain string.
_encode_chunks(): Pass maxlinelen in instead of always using
self._maxlinelen, so we can adjust for shorter initial lines.
Pass this value through to _max_append().
encode(): Weave maxlinelen through to the _encode_chunks() call.
_split_ascii(): When recursively splitting a line on spaces
(i.e. lower level syntactic split), don't append the whole returned
string. Instead, split it on linejoiners and extend the lines up to
the last line (for proper packing). Calculate the linelen based on
the last element in this list.
part itself is longer than maxlen, and we aren't already splitting on
whitespace, then we recursively split the part on whitespace and
append that to this list.
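In rough outline, the recursion described above looks something like this (an illustrative sketch with invented names, not the real _split_ascii(); the real code additionally re-packs the resulting pieces into lines up to maxlen):

    def split_ascii_sketch(line, maxlen, splitchars=';, '):
        """Split on the highest-level delimiter first; if a piece is still
        too long, recurse on the next (lower-level) delimiter."""
        if len(line) <= maxlen or not splitchars:
            return [line]
        delim, rest = splitchars[0], splitchars[1:]
        pieces = []
        for part in line.split(delim):
            part = part.strip()
            if not part:
                continue
            if len(part) > maxlen and rest:
                # Recursively split the oversized part on the lower-level
                # delimiter and extend, rather than appending the joined
                # result as one big string.
                pieces.extend(split_ascii_sketch(part, maxlen, rest))
            else:
                pieces.append(part)
        return pieces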
preserve spaces at the encoded/unencoded word boundaries. RFC 2047 is
ambiguous here, but most people expect the space to be preserved.
Really closes SF bug #640110.
_split(): New implementation of ASCII line splitting which should do a
better job and not be subject to the various weird artifacts (bugs)
reported. This should also do a better job of higher-level syntactic
splits by trying first to split on semis, then commas, then
whitespace.
Use a Timbot-ly binary search for optimal non-ASCII split points for
better packing of header lines. This also lets us remove one
recursive call. Don't pass in firstline, but instead pass in the
actual line length we're shooting for. Also pass in the list of split
characters.
encode(): Pass in the list of split characters so applications can
have some control over what "higher level syntactic breaks" are.
Also, decode_header(): Transform binascii.Errors, which can occur when
decoding a base64 RFC 2047 header with bogus data, into an
email.Errors.HeaderParseError. Closes SF bug #696712.
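Both halves of that are visible through the public API; a minimal sketch using the modern email.header / email.errors spellings (header values invented):

    from email.header import Header, decode_header
    from email.errors import HeaderParseError

    # Callers can steer the "higher level syntactic breaks" themselves.
    h = Header('first clause; second clause; third clause',
               header_name='X-Test', maxlinelen=40)
    print(h.encode(splitchars=';'))

    # A bogus base64 payload in an RFC 2047 encoded word may now surface
    # as HeaderParseError rather than a raw binascii.Error.
    try:
        print(decode_header('=?utf-8?b?a?='))
    except HeaderParseError:
        print('bad base64 surfaced as HeaderParseError')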
_handle_multipart(): Ensure that if the preamble exists but does not
end in a newline, a newline is still added. Without this, the
boundary separator will end up on the preamble line, breaking the MIME
structure.
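Seen from the flattening side (a minimal sketch with an invented message):

    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    msg = MIMEMultipart()
    msg.attach(MIMEText('hello'))
    # A preamble that does not end in a newline...
    msg.preamble = 'This is a MIME message'
    flat = msg.as_string()
    # ...should still be followed by one, so the first boundary starts in
    # column 0 instead of being glued onto the preamble text.
    print('\nThis is a MIME message\n--' in flat)   # expected: True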
_make_boundary(): Handle differences in the decimal point character
based on the locale.
Charset: Alias __repr__ to __str__ for debugging.
header_encode(): When calling quopriMIME.header_encode(), set
maxlinelen=None so that the lower level function doesn't (also) try to
wrap/fold the line.
_max_append(): Change the comparison so that the new string is
concatenated if it's less than or equal to the max length.
header_encode(): Allow for maxlinelen == None to mean, don't do any
line splitting. This is because this module is mostly used by higher
level abstractions (Header.py) which already ensures line lengths. We
do this in a cheapo way by setting the max_encoding to some insanely
<100k wink> large number.
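Toy versions of both ideas, as illustrative stand-ins for the real _max_append() and header_encode():

    import sys

    def max_append_sketch(chunks, s, maxlen, extra=''):
        """Append s to the last chunk when the combined length is <= maxlen
        (note: <=, not <); otherwise start a new chunk."""
        if not chunks:
            chunks.append(s)
        elif len(chunks[-1]) + len(extra) + len(s) <= maxlen:
            chunks[-1] += extra + s
        else:
            chunks.append(s)

    def header_encode_sketch(words, maxlinelen=76):
        # maxlinelen=None means "don't split at all"; the cheapo trick is
        # to substitute an insanely large limit instead of adding another
        # code path.
        if maxlinelen is None:
            maxlinelen = sys.maxsize
        chunks = []
        for word in words:
            max_append_sketch(chunks, word, maxlinelen, extra=' ')
        return chunks

    print(header_encode_sketch(['spam', 'eggs', 'ham'], maxlinelen=9))
    # -> ['spam eggs', 'ham']
    print(header_encode_sketch(['spam', 'eggs', 'ham'], maxlinelen=None))
    # -> ['spam eggs ham']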
because the test file, msg_26.txt, which has \r\n line endings, was
getting munged by cvs, which knows to do line ending conversions for
text files. But we want \r\n to be preserved on all platforms, so we
cvs admin'd the file to be -kb (binary), which means we have to open
the file in binary mode to preserve these line ends. Hopefully this
will be the end of the thrashing on this issue (but probably not).
Test passes on *nix now, and Tim confirms it passes on Windows. We'll
leave it to Jack to test MacOS.
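The principle in miniature (a self-contained sketch using a throwaway temp file rather than msg_26.txt itself):

    import os
    import tempfile

    # A file stored with \r\n line endings, like msg_26.txt.
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, 'wb') as fp:
        fp.write(b'line one\r\nline two\r\n')

    with open(path, 'rb') as fp:
        assert b'\r\n' in fp.read()    # binary mode: line ends preserved

    with open(path, 'r') as fp:
        assert '\r' not in fp.read()   # text mode: newlines get translated

    os.remove(path)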
is passed straight through to the unicode() and ustr.encode() calls.
I think it's the best we can do to address the UnicodeErrors in badly
encoded headers such as is described in SF bug #648119.
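A rough modern-API analogue of what that buys you (the bogus bytes here are invented; errors='replace' is the knob being passed straight through to the decode):

    from email.header import Header

    # Bytes that are not valid utf-8; with errors='strict' this would raise
    # UnicodeDecodeError, but 'replace' is handed through to the decode.
    bogus = b'caf\xe9 au lait'
    h = Header(bogus, charset='utf-8', errors='replace')
    print(h.encode())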