1. It's not portable; different processors order the bytes
differently.
2. It's very wasteful of space. In most texts, the majority of the code
points are less than 127, or less than 255, so a lot of space is occupied
by zero bytes. The above string takes 24 bytes compared to the 6
bytes needed for an ASCII representation. Increased RAM usage doesn't
matter too much (desktop computers have megabytes of RAM, and strings
aren't usually that large), but expanding our usage of disk and
network bandwidth by a factor of 4 is intolerable.
3. It's not compatible with existing C functions such as ``strlen()``,
so a new family of wide string functions would need to be used.
4. Many Internet standards are defined in terms of textual data, and
can't handle content with embedded zero bytes.
Generally people don't use this encoding, instead choosing other
encodings that are more efficient and convenient.
Encodings don't have to handle every possible Unicode character, and
most encodings don't. For example, Python's default encoding is the
'ascii' encoding. The rules for converting a Unicode string into the
ASCII encoding are simple; for each code point:
1. If the code point is <128, each byte is the same as the value of the
code point.
2. If the code point is 128 or greater, the Unicode string can't
be represented in this encoding. (Python raises a
``UnicodeEncodeError`` exception in this case.)
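For example, using the string ``.encode()`` method that's described
later in this document::

    >>> u'abc'.encode('ascii')          # every code point is < 128
    'abc'
    >>> u'abc\u00e9'.encode('ascii')    # U+00E9 is 233, which is >= 128
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 3:
    ordinal not in range(128)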
Latin-1, also known as ISO-8859-1, is a similar encoding. Unicode
code points 0-255 are identical to the Latin-1 values, so converting
to this encoding simply requires converting code points to byte
values; if a code point larger than 255 is encountered, the string
can't be encoded into Latin-1.
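A short session shows both outcomes::

    >>> u'caf\u00e9'.encode('latin-1')   # U+00E9 is within the 0-255 range
    'caf\xe9'
    >>> u'\u1234'.encode('latin-1')      # U+1234 is larger than 255
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    UnicodeEncodeError: 'latin-1' codec can't encode character u'\u1234' in position 0:
    ordinal not in range(256)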
Encodings don't have to be simple one-to-one mappings like Latin-1.
Consider IBM's EBCDIC, which was used on IBM mainframes. Letter
values weren't in one block: 'a' through 'i' had values from 129 to
137, but 'j' through 'r' were 145 through 153. If you wanted to use
EBCDIC as an encoding, you'd probably use some sort of lookup table to
perform the conversion, but this is largely an internal detail.
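Such a conversion might look something like the following sketch; the
``TO_EBCDIC`` table and ``to_ebcdic()`` function are purely
illustrative, and only a few of the 256 table entries are shown::

    # Hypothetical fragment of a Unicode-to-EBCDIC mapping table; a real
    # codec would cover the whole character set and handle errors.
    TO_EBCDIC = {u'a': 0x81, u'b': 0x82, u'i': 0x89,  # 'a'-'i': 129-137
                 u'j': 0x91, u'r': 0x99}              # 'j'-'r': 145-153

    def to_ebcdic(ustr):
        # Look up each code point and emit the corresponding byte value.
        return ''.join(chr(TO_EBCDIC[c]) for c in ustr)

    print repr(to_ebcdic(u'air'))   # prints '\x81\x89\x99'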
UTF-8 is one of the most commonly used encodings. UTF stands for
"Unicode Transformation Format", and the '8' means that 8-bit numbers
are used in the encoding. (There's also a UTF-16 encoding, but it's
less frequently used than UTF-8.) UTF-8 uses the following rules:
1. If the code point is <128, it's represented by the corresponding byte value.
2. If the code point is between 128 and 0x7ff, it's turned into two byte values
between 128 and 255.
3. Code points >0x7ff are turned into three- or four-byte sequences, where
each byte of the sequence is between 128 and 255.
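You can see these rules in action by encoding characters from each
range::

    >>> u'a'.encode('utf-8')        # code point < 128: one byte
    'a'
    >>> u'\u00e9'.encode('utf-8')   # U+00E9 is between 128 and 0x7ff: two bytes
    '\xc3\xa9'
    >>> u'\u20ac'.encode('utf-8')   # U+20AC is > 0x7ff: three bytes
    '\xe2\x82\xac'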
UTF-8 has several convenient properties:
1. It can handle any Unicode code point.
2. A Unicode string is turned into a string of bytes containing no embedded zero bytes. This avoids byte-ordering issues, and means UTF-8 strings can be processed by C functions such as ``strcpy()`` and sent through protocols that can't handle zero bytes.
3. A string of ASCII text is also valid UTF-8 text, as the example
after this list demonstrates.
4. UTF-8 is fairly compact; the majority of code points are turned into two bytes, and values less than 128 occupy only a single byte.
5. If bytes are corrupted or lost, it's possible to determine the start of the next UTF-8-encoded code point and resynchronize. It's also unlikely that random 8-bit data will look like valid UTF-8.
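The third property is easy to check: encoding an ASCII-only Unicode
string as UTF-8, or decoding ASCII bytes as UTF-8, changes nothing::

    >>> u'abc'.encode('utf-8')
    'abc'
    >>> 'abc'.decode('utf-8')
    u'abc'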
References
''''''''''''''
The Unicode Consortium site at <http://www.unicode.org> has character
charts, a glossary, and PDF versions of the Unicode specification. Be
prepared for some difficult reading.
<http://www.unicode.org/history/> is a chronology of the origin and
development of Unicode.
To help understand the standard, Jukka Korpela has written an
introductory guide to reading the Unicode character tables,
available at <http://www.cs.tut.fi/~jkorpela/unicode/guide.html>.
Roman Czyborra wrote another explanation of Unicode's basic principles;
it's at <http://czyborra.com/unicode/characters.html>.
Czyborra has written a number of other Unicode-related documents,
available from <http://czyborra.com>.
Two other good introductory articles were written by Joel Spolsky
<http://www.joelonsoftware.com/articles/Unicode.html> and Jason
Orendorff <http://www.jorendorff.com/articles/unicode/>. If this
introduction didn't make things clear to you, you should try reading
one of these alternate articles before continuing.
Wikipedia entries are often helpful; see the entries for "character
encoding" <http://en.wikipedia.org/wiki/Character_encoding> and UTF-8
<http://en.wikipedia.org/wiki/UTF-8>, for example.
Python's Unicode Support
------------------------
Now that you've learned the rudiments of Unicode, we can look at
Python's Unicode features.
The Unicode Type
'''''''''''''''''''
Unicode strings are expressed as instances of the ``unicode`` type,
one of Python's repertoire of built-in types. It derives from an
abstract type called ``basestring``, which is also an ancestor of the
``str`` type; you can therefore check if a value is a string type with
``isinstance(value, basestring)``. Under the hood, Python represents
Unicode strings as either 16- or 32-bit integers, depending on how
the Python interpreter was compiled.
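For example, both 8-bit strings and Unicode strings pass the
``basestring`` check, while non-string values don't::

    >>> isinstance('byte string', basestring)
    True
    >>> isinstance(u'unicode string', basestring)
    True
    >>> isinstance(5, basestring)
    False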
The ``unicode()`` constructor has the signature ``unicode(string[, encoding, errors])``.
All of its arguments should be 8-bit strings. The first argument is converted
to Unicode using the specified encoding; if you leave off the ``encoding`` argument,
the ASCII encoding is used for the conversion, so characters greater than 127 will
be treated as errors::
>>> unicode('abcdef')
u'abcdef'
>>> s = unicode('abcdef')
>>> type(s)
<type 'unicode'>
>>> unicode('abcdef' + chr(255))
Traceback (most recent call last):
File "<stdin>", line 1, in ?
UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 6:
ordinal not in range(128)
The ``errors`` argument specifies the response when the input string can't be converted according to the encoding's rules. Legal values for this argument
are 'strict' (raise a ``UnicodeDecodeError`` exception),
'replace' (add U+FFFD, 'REPLACEMENT CHARACTER'),
or 'ignore' (just leave the character out of the Unicode result).
The following examples show the differences::
>>> unicode('\x80abc', errors='strict')
Traceback (most recent call last):
File "<stdin>", line 1, in ?
UnicodeDecodeError: 'ascii' codec can't decode byte 0x80 in position 0:
ordinal not in range(128)
>>> unicode('\x80abc', errors='replace')
u'\ufffdabc'
>>> unicode('\x80abc', errors='ignore')
u'abc'
Encodings are specified as strings containing the encoding's name.
Python 2.4 comes with roughly 100 different encodings; see the Python
Library Reference at
<http://docs.python.org/lib/standard-encodings.html> for a list. Some
encodings have multiple names; for example, 'latin-1', 'iso_8859_1'
and '8859' are all synonyms for the same encoding.
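For example, encoding with any of these synonyms produces the same
result::

    >>> u'\u00e9'.encode('latin-1') == u'\u00e9'.encode('iso_8859_1')
    True
    >>> u'\u00e9'.encode('8859')
    '\xe9'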
One-character Unicode strings can also be created with the
``unichr()`` built-in function, which takes integers and returns a
Unicode string of length 1 that contains the corresponding code point.
The reverse operation is the built-in ``ord()`` function that takes a
one-character Unicode string and returns the code point value::
>>> unichr(40960)
u'\ua000'
>>> ord(u'\ua000')
40960
Instances of the ``unicode`` type have many of the same methods as
the 8-bit string type for operations such as searching and formatting::
>>> s = u'Was ever feather so lightly blown to and fro as this multitude?'
>>> s.count('e')
5
>>> s.find('feather')
9
>>> s.find('bird')
-1
>>> s.replace('feather', 'sand')
u'Was ever sand so lightly blown to and fro as this multitude?'
>>> s.upper()
u'WAS EVER FEATHER SO LIGHTLY BLOWN TO AND FRO AS THIS MULTITUDE?'
Note that the arguments to these methods can be Unicode strings or 8-bit strings.
8-bit strings will be converted to Unicode before carrying out the operation;
Python's default ASCII encoding will be used, so characters greater than 127 will cause an exception::
>>> s.find('Was\x9f')
Traceback (most recent call last):
File "<stdin>", line 1, in ?
UnicodeDecodeError: 'ascii' codec can't decode byte 0x9f in position 3: ordinal not in range(128)
>>> s.find(u'Was\x9f')
-1
Much Python code that operates on strings will therefore work with
Unicode strings without requiring any changes to the code. (Input and
output code needs more updating for Unicode; more on this later.)
Another important method is ``.encode([encoding], [errors='strict'])``,
which returns an 8-bit string version of the
Unicode string, encoded in the requested encoding. The ``errors``
parameter is the same as the parameter of the ``unicode()``
constructor, with one additional possibility; as well as 'strict',
'ignore', and 'replace', you can also pass 'xmlcharrefreplace' which
uses XML's character references. The following example shows the
different results::
>>> u = unichr(40960) + u'abcd' + unichr(1972)
>>> u.encode('utf-8')
'\xea\x80\x80abcd\xde\xb4'
>>> u.encode('ascii')
Traceback (most recent call last):
File "<stdin>", line 1, in ?
UnicodeEncodeError: 'ascii' codec can't encode character u'\ua000' in position 0: ordinal not in range(128)
>>> u.encode('ascii', 'ignore')
'abcd'
>>> u.encode('ascii', 'replace')
'?abcd?'
>>> u.encode('ascii', 'xmlcharrefreplace')
'ꀀabcd޴'
Python's 8-bit strings have a ``.decode([encoding], [errors])`` method
that interprets the string using the given encoding::
>>> u = unichr(40960) + u'abcd' + unichr(1972) # Assemble a string
>>> utf8_version = u.encode('utf-8') # Encode as UTF-8
>>> type(utf8_version), utf8_version
(<type 'str'>, '\xea\x80\x80abcd\xde\xb4')
>>> u2 = utf8_version.decode('utf-8') # Decode using UTF-8
>>> u == u2 # The two strings match
True
The low-level routines for registering and accessing the available
encodings are found in the ``codecs`` module. However, the encoding
and decoding functions returned by this module are usually more
low-level than is comfortable, so I'm not going to describe the
``codecs`` module here. If you need to implement a completely new
encoding, you'll need to learn about the ``codecs`` module interfaces,
but implementing encodings is a specialized task that also won't be
covered here. Consult the Python documentation to learn more about
this module.
The most commonly used part of the ``codecs`` module is the
``codecs.open()`` function which will be discussed in the section
on input and output.
Unicode Literals in Python Source Code
''''''''''''''''''''''''''''''''''''''''''
In Python source code, Unicode literals are written as strings
prefixed with the 'u' or 'U' character: ``u'abcdefghijk'``. Specific
code points can be written using the ``\u`` escape sequence, which is
followed by four hex digits giving the code point. The ``\U`` escape
sequence is similar, but expects 8 hex digits, not 4.
Unicode literals can also use the same escape sequences as 8-bit
strings, including ``\x``, but ``\x`` only takes two hex digits so it
can't express an arbitrary code point. Octal escapes can go up to
U+01ff, which is octal 777.
::
>>> s = u"a\xac\u1234\u20ac\U00008000"
^^^^ two-digit hex escape
^^^^^^ four-digit Unicode escape
^^^^^^^^^^ eight-digit Unicode escape
>>> for c in s: print ord(c),
...
97 172 4660 8364 32768
Using escape sequences for code points greater than 127 is fine in
small doses, but becomes an annoyance if you're using many accented
characters, as you would in a program with messages in French or some
other accent-using language. You can also assemble strings using the
``unichr()`` built-in function, but this is even more tedious.
Ideally, you'd want to be able to write literals in your language's
natural encoding. You could then edit Python source code with your
favorite editor which would display the accented characters naturally,
and have the right characters used at runtime.
Python supports writing Unicode literals in any encoding, but you have
to declare the encoding being used. This is done by including a
special comment as either the first or second line of the source
file::
#!/usr/bin/env python
# -*- coding: latin-1 -*-
u = u'abcdé'
print ord(u[-1])
The syntax is inspired by Emacs's notation for specifying variables local to a file.
Emacs supports many different variables, but Python only supports 'coding'.
The ``-*-`` symbols indicate that the comment is special; within them,
you must supply the name ``coding`` and the name of your chosen encoding,
separated by ``':'``.
If you don't include such a comment, the default encoding used will be
ASCII. Versions of Python before 2.4 were Euro-centric and assumed
Latin-1 as a default encoding for string literals; in Python 2.4,
characters greater than 127 still work but result in a warning. For
example, the following program has no encoding declaration::
#!/usr/bin/env python
u = u'abcdé'
print ord(u[-1])
When you run it with Python 2.4, it will output the following warning::
amk:~$ python p263.py
sys:1: DeprecationWarning: Non-ASCII character '\xe9'
in file p263.py on line 2, but no encoding declared;
see http://www.python.org/peps/pep-0263.html for details
Unicode Properties
'''''''''''''''''''
The Unicode specification includes a database of information about
code points. For each code point that's defined, the information
includes the character's name, its category, and the numeric value if
applicable (Unicode has characters representing the Roman numerals and
fractions such as one-third and four-fifths). There are also
properties related to the code point's use in bidirectional text and
other display-related properties.
The following program, a short sketch that relies on the standard
``unicodedata`` module, displays some information about several
characters, and prints the numeric value of one particular
character::
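    import unicodedata

    u = unichr(233) + unichr(0x0bf2) + unichr(3972) + unichr(6000) + unichr(13231)

    for i, c in enumerate(u):
        # Print the index, code point, category, and character name.
        print i, '%04x' % ord(c), unicodedata.category(c),
        print unicodedata.name(c)

    # Get the numeric value of the second character, U+0BF2;
    # this prints 1000.0.
    print unicodedata.numeric(u[1])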