.. _regex-howto:
****************************
Regular Expression HOWTO
****************************
:Author: A.M. Kuchling <amk@amk.ca>
.. TODO:
   Document lookbehind assertions
   Better way of displaying a RE, a string, and what it matches
   Mention optional argument to match.groups()
   Unicode (at least a reference)

.. topic:: Abstract

   This document is an introductory tutorial to using regular expressions in Python
   with the :mod:`re` module. It provides a gentler introduction than the
   corresponding section in the Library Reference.

Introduction
============
Regular expressions (called REs, or regexes, or regex patterns) are essentially
a tiny, highly specialized programming language embedded inside Python and made
available through the :mod:`re` module. Using this little language, you specify
the rules for the set of possible strings that you want to match; this set might
contain English sentences, or e-mail addresses, or TeX commands, or anything you
like. You can then ask questions such as "Does this string match the pattern?",
or "Is there a match for the pattern anywhere in this string?". You can also
use REs to modify a string or to split it apart in various ways.
Regular expression patterns are compiled into a series of bytecodes which are
then executed by a matching engine written in C. For advanced use, it may be
necessary to pay careful attention to how the engine will execute a given RE,
and write the RE in a certain way in order to produce bytecode that runs faster.
Optimization isn't covered in this document, because it requires that you have a
good understanding of the matching engine's internals.
The regular expression language is relatively small and restricted, so not all
possible string processing tasks can be done using regular expressions. There
are also tasks that *can* be done with regular expressions, but the expressions
turn out to be very complicated. In these cases, you may be better off writing
Python code to do the processing; while Python code will be slower than an
elaborate regular expression, it will also probably be more understandable.
Simple Patterns
===============
We'll start by learning about the simplest possible regular expressions. Since
regular expressions are used to operate on strings, we'll begin with the most
common task: matching characters.
For a detailed explanation of the computer science underlying regular
expressions (deterministic and non-deterministic finite automata), you can refer
to almost any textbook on writing compilers.
Matching Characters
-------------------
Most letters and characters will simply match themselves. For example, the
regular expression ``test`` will match the string ``test`` exactly. (You can
enable a case-insensitive mode that would let this RE match ``Test`` or ``TEST``
as well; more about this later.)
There are exceptions to this rule; some characters are special
:dfn:`metacharacters`, and don't match themselves. Instead, they signal that
some out-of-the-ordinary thing should be matched, or they affect other portions
of the RE by repeating them or changing their meaning. Much of this document is
devoted to discussing various metacharacters and what they do.
Here's a complete list of the metacharacters; their meanings will be discussed
in the rest of this HOWTO.

.. code-block:: none

   . ^ $ * + ? { } [ ] \ | ( )

The first metacharacters we'll look at are ``[`` and ``]``. They're used for
specifying a character class, which is a set of characters that you wish to
match. Characters can be listed individually, or a range of characters can be
indicated by giving two characters and separating them by a ``'-'``. For
example, ``[abc]`` will match any of the characters ``a``, ``b``, or ``c``; this
is the same as ``[a-c]``, which uses a range to express the same set of
characters. If you wanted to match only lowercase letters, your RE would be
``[a-z]``.
Metacharacters are not active inside classes. For example, ``[akm$]`` will
match any of the characters ``'a'``, ``'k'``, ``'m'``, or ``'$'``; ``'$'`` is
usually a metacharacter, but inside a character class it's stripped of its
special nature.
You can match the characters not listed within the class by :dfn:`complementing`
the set. This is indicated by including a ``'^'`` as the first character of the
class; ``'^'`` outside a character class will simply match the ``'^'``
character. For example, ``[^5]`` will match any character except ``'5'``.
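Here's a quick illustrative sketch of character classes at the interpreter
prompt, using the :func:`re.findall` function that's covered later in this
HOWTO::

>>> import re
>>> re.findall('[abc]', 'cabbage')    # any of 'a', 'b', or 'c'
['c', 'a', 'b', 'b', 'a']
>>> re.findall('[^5]', '35545')       # any character except '5'
['3', '4']
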
Perhaps the most important metacharacter is the backslash, ``\``. As in Python
string literals, the backslash can be followed by various characters to signal
various special sequences. It's also used to escape all the metacharacters so
you can still match them in patterns; for example, if you need to match a ``[``
or ``\``, you can precede them with a backslash to remove their special
meaning: ``\[`` or ``\\``.
Some of the special sequences beginning with ``'\'`` represent
predefined sets of characters that are often useful, such as the set
of digits, the set of letters, or the set of anything that isn't
whitespace.
Let's take an example: ``\w`` matches any alphanumeric character. If
the regex pattern is expressed in bytes, this is equivalent to the
class ``[a-zA-Z0-9_]``. If the regex pattern is a string, ``\w`` will
match all the characters marked as letters in the Unicode database
provided by the :mod:`unicodedata` module. You can use the more
restricted definition of ``\w`` in a string pattern by supplying the
:const:`re.ASCII` flag when compiling the regular expression.
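Here's a small illustrative sketch of the difference, again using
:func:`re.findall` (covered later in this HOWTO)::

>>> import re
>>> re.findall(r'\w', 'abc Schön')
['a', 'b', 'c', 'S', 'c', 'h', 'ö', 'n']
>>> re.findall(r'\w', 'abc Schön', re.ASCII)
['a', 'b', 'c', 'S', 'c', 'h', 'n']
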
The following list of special sequences isn't complete. For a complete
list of sequences and expanded class definitions for Unicode string
patterns, see the last part of :ref:`Regular Expression Syntax
<re-syntax>` in the Standard Library reference. In general, the
Unicode versions match any character that's in the appropriate
category in the Unicode database.

``\d``
   Matches any decimal digit; this is equivalent to the class ``[0-9]``.

``\D``
   Matches any non-digit character; this is equivalent to the class ``[^0-9]``.

``\s``
   Matches any whitespace character; this is equivalent to the class
   ``[ \t\n\r\f\v]``.

``\S``
   Matches any non-whitespace character; this is equivalent to the class
   ``[^ \t\n\r\f\v]``.

``\w``
   Matches any alphanumeric character; this is equivalent to the class
   ``[a-zA-Z0-9_]``.

``\W``
   Matches any non-alphanumeric character; this is equivalent to the class
   ``[^a-zA-Z0-9_]``.

These sequences can be included inside a character class. For example,
``[\s,.]`` is a character class that will match any whitespace character, or
``','`` or ``'.'``.
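A short sketch of these sequences, both on their own and inside a character
class::

>>> import re
>>> re.findall(r'\d', 'Python 3.13.1')
['3', '1', '3', '1']
>>> re.findall(r'[\s,.]', 'A sentence, short.')
[' ', ',', ' ', '.']
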
The final metacharacter in this section is ``.``. It matches anything except a
newline character, and there's an alternate mode (:const:`re.DOTALL`) where it will
match even a newline. ``.`` is often used where you want to match "any
character".
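Here's a brief sketch of ``.`` with and without :const:`re.DOTALL`, using
:func:`re.match`, which is covered in the next section::

>>> import re
>>> re.match('a.c', 'abc')
<re.Match object; span=(0, 3), match='abc'>
>>> print(re.match('a.c', 'a\nc'))
None
>>> re.match('a.c', 'a\nc', re.DOTALL)
<re.Match object; span=(0, 3), match='a\nc'>
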
Repeating Things
----------------
Being able to match varying sets of characters is the first thing regular
expressions can do that isn't already possible with the methods available on
strings. However, if that was the only additional capability of regexes, they
wouldn't be much of an advance. Another capability is that you can specify that
portions of the RE must be repeated a certain number of times.
The first metacharacter for repeating things that we'll look at is ``*``. ``*``
doesn't match the literal character ``'*'``; instead, it specifies that the
previous character can be matched zero or more times, instead of exactly once.
For example, ``ca*t`` will match ``'ct'`` (0 ``'a'`` characters), ``'cat'`` (1 ``'a'``),
``'caaat'`` (3 ``'a'`` characters), and so forth.
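A quick sketch of this at the interpreter prompt (:func:`re.match` is covered
later in this HOWTO)::

>>> import re
>>> for s in 'ct', 'cat', 'caaat':
...     print(re.match('ca*t', s))
...
<re.Match object; span=(0, 2), match='ct'>
<re.Match object; span=(0, 3), match='cat'>
<re.Match object; span=(0, 5), match='caaat'>
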
Repetitions such as ``*`` are :dfn:`greedy`; when repeating a RE, the matching
engine will try to repeat it as many times as possible. If later portions of the
pattern don't match, the matching engine will then back up and try again with
fewer repetitions.
A step-by-step example will make this more obvious. Let's consider the
expression ``a[bcd]*b``. This matches the letter ``'a'``, zero or more letters
from the class ``[bcd]``, and finally ends with a ``'b'``. Now imagine matching
this RE against the string ``'abcbd'``.
+------+-----------+---------------------------------+
| Step | Matched   | Explanation                     |
+======+===========+=================================+
| 1    | ``a``     | The ``a`` in the RE matches.    |
+------+-----------+---------------------------------+
| 2    | ``abcbd`` | The engine matches ``[bcd]*``,  |
|      |           | going as far as it can, which   |
|      |           | is to the end of the string.    |
+------+-----------+---------------------------------+
| 3    | *Failure* | The engine tries to match       |
|      |           | ``b``, but the current position |
|      |           | is at the end of the string, so |
|      |           | it fails.                       |
+------+-----------+---------------------------------+
| 4    | ``abcb``  | Back up, so that ``[bcd]*``     |
|      |           | matches one less character.     |
+------+-----------+---------------------------------+
| 5    | *Failure* | Try ``b`` again, but the        |
|      |           | current position is at the last |
|      |           | character, which is a ``'d'``.  |
+------+-----------+---------------------------------+
| 6    | ``abc``   | Back up again, so that          |
|      |           | ``[bcd]*`` is only matching     |
|      |           | ``bc``.                         |
+------+-----------+---------------------------------+
| 7    | ``abcb``  | Try ``b`` again.  This time     |
|      |           | the character at the            |
|      |           | current position is ``'b'``, so |
|      |           | it succeeds.                    |
+------+-----------+---------------------------------+

The end of the RE has now been reached, and it has matched ``'abcb'``. This
demonstrates how the matching engine goes as far as it can at first, and if no
match is found it will then progressively back up and retry the rest of the RE
again and again. It will back up until it has tried zero matches for
``[bcd]*``, and if that subsequently fails, the engine will conclude that the
string doesn't match the RE at all.
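The table's outcome can be checked with a quick interpreter sketch::

>>> import re
>>> re.match('a[bcd]*b', 'abcbd')
<re.Match object; span=(0, 4), match='abcb'>
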
Another repeating metacharacter is ``+``, which matches one or more times. Pay
careful attention to the difference between ``*`` and ``+``; ``*`` matches
*zero* or more times, so whatever's being repeated may not be present at all,
while ``+`` requires at least *one* occurrence. To use a similar example,
``ca+t`` will match ``'cat'`` (1 ``'a'``), ``'caaat'`` (3 ``'a'``\ s), but won't
match ``'ct'``.
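A short sketch of the difference::

>>> import re
>>> print(re.match('ca+t', 'ct'))
None
>>> re.match('ca+t', 'caaat')
<re.Match object; span=(0, 5), match='caaat'>
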
There are two more repeating qualifiers. The question mark character, ``?``,
matches either once or zero times; you can think of it as marking something as
being optional. For example, ``home-?brew`` matches either ``'homebrew'`` or
``'home-brew'``.
The most complicated repeated qualifier is ``{m,n}``, where *m* and *n* are
decimal integers. This qualifier means there must be at least *m* repetitions,
and at most *n*. For example, ``a/{1,3}b`` will match ``'a/b'``, ``'a//b'``, and
``'a///b'``. It won't match ``'ab'``, which has no slashes, or ``'a////b'``, which
has four.
You can omit either *m* or *n*; in that case, a reasonable value is assumed for
the missing value. Omitting *m* is interpreted as a lower limit of 0, while
omitting *n* results in an upper bound of infinity.
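Here's a brief sketch of ``?`` and ``{m,n}`` in action::

>>> import re
>>> re.match('home-?brew', 'homebrew')
<re.Match object; span=(0, 8), match='homebrew'>
>>> re.match('a/{1,3}b', 'a//b')
<re.Match object; span=(0, 4), match='a//b'>
>>> print(re.match('a/{1,3}b', 'a////b'))
None
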
Readers of a reductionist bent may notice that the three other qualifiers can
all be expressed using this notation. ``{0,}`` is the same as ``*``, ``{1,}``
is equivalent to ``+``, and ``{0,1}`` is the same as ``?``. It's better to use
``*``, ``+``, or ``?`` when you can, simply because they're shorter and easier
to read.
Using Regular Expressions
=========================
Now that we've looked at some simple regular expressions, how do we actually use
them in Python? The :mod:`re` module provides an interface to the regular
expression engine, allowing you to compile REs into objects and then perform
matches with them.
Compiling Regular Expressions
-----------------------------
Regular expressions are compiled into pattern objects, which have
methods for various operations such as searching for pattern matches or
performing string substitutions. ::
>>> import re
>>> p = re.compile('ab*')
>>> p
re.compile('ab*')
:func:`re.compile` also accepts an optional *flags* argument, used to enable
various special features and syntax variations. We'll go over the available
settings later, but for now a single example will do::
>>> p = re.compile('ab*', re.IGNORECASE)
The RE is passed to :func:`re.compile` as a string. REs are handled as strings
because regular expressions aren't part of the core Python language, and no
special syntax was created for expressing them. (There are applications that
don't need REs at all, so there's no need to bloat the language specification by
including them.) Instead, the :mod:`re` module is simply a C extension module
included with Python, just like the :mod:`socket` or :mod:`zlib` modules.
Putting REs in strings keeps the Python language simpler, but has one
disadvantage which is the topic of the next section.
The Backslash Plague
--------------------
As stated earlier, regular expressions use the backslash character (``'\'``) to
indicate special forms or to allow special characters to be used without
invoking their special meaning. This conflicts with Python's usage of the same
character for the same purpose in string literals.
Let's say you want to write a RE that matches the string ``\section``, which
might be found in a LaTeX file. To figure out what to write in the program
code, start with the desired string to be matched. Next, you must escape any
backslashes and other metacharacters by preceding them with a backslash,
resulting in the string ``\\section``. The resulting string that must be passed
to :func:`re.compile` must be ``\\section``. However, to express this as a
Python string literal, both backslashes must be escaped *again*.
+-------------------+------------------------------------------+
| Characters        | Stage                                    |
+===================+==========================================+
| ``\section``      | Text string to be matched                |
+-------------------+------------------------------------------+
| ``\\section``     | Escaped backslash for :func:`re.compile` |
+-------------------+------------------------------------------+
| ``"\\\\section"`` | Escaped backslashes for a string literal |
+-------------------+------------------------------------------+

In short, to match a literal backslash, one has to write ``'\\\\'`` as the RE
string, because the regular expression must be ``\\``, and each backslash must
be expressed as ``\\`` inside a regular Python string literal. In REs that
feature backslashes repeatedly, this leads to lots of repeated backslashes and
makes the resulting strings difficult to understand.
The solution is to use Python's raw string notation for regular expressions;
backslashes are not handled in any special way in a string literal prefixed with
``'r'``, so ``r"\n"`` is a two-character string containing ``'\'`` and ``'n'``,
while ``"\n"`` is a one-character string containing a newline. Regular
expressions will often be written in Python code using this raw string notation.
+-------------------+------------------+
| Regular String    | Raw string       |
+===================+==================+
| ``"ab*"``         | ``r"ab*"``       |
+-------------------+------------------+
| ``"\\\\section"`` | ``r"\\section"`` |
+-------------------+------------------+
| ``"\\w+\\s+\\1"`` | ``r"\w+\s+\1"``  |
+-------------------+------------------+

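Here's a small sketch of the difference raw strings make; note that both
spellings of the backslash-matching RE below behave identically::

>>> len('\n'), len(r'\n')
(1, 2)
>>> re.search('\\\\', r'C:\some\path')
<re.Match object; span=(2, 3), match='\\'>
>>> re.search(r'\\', r'C:\some\path')
<re.Match object; span=(2, 3), match='\\'>
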
Performing Matches
------------------
Once you have an object representing a compiled regular expression, what do you
do with it? Pattern objects have several methods and attributes.
Only the most significant ones will be covered here; consult the :mod:`re` docs
for a complete listing.
+------------------+-----------------------------------------------+
| Method/Attribute | Purpose                                       |
+==================+===============================================+
| ``match()``      | Determine if the RE matches at the beginning  |
|                  | of the string.                                |
+------------------+-----------------------------------------------+
| ``search()``     | Scan through a string, looking for any        |
|                  | location where this RE matches.               |
+------------------+-----------------------------------------------+
| ``findall()``    | Find all substrings where the RE matches, and |
|                  | returns them as a list.                       |
+------------------+-----------------------------------------------+
| ``finditer()``   | Find all substrings where the RE matches, and |
|                  | returns them as an :term:`iterator`.          |
+------------------+-----------------------------------------------+

:meth:`~re.Pattern.match` and :meth:`~re.Pattern.search` return ``None`` if no match can be found. If
they're successful, a :ref:`match object <match-objects>` instance is returned,
containing information about the match: where it starts and ends, the substring
it matched, and more.
You can learn about this by interactively experimenting with the :mod:`re`
module. If you have :mod:`tkinter` available, you may also want to look at
:source:`Tools/demo/redemo.py`, a demonstration program included with the
Python distribution. It allows you to enter REs and strings, and displays
whether the RE matches or fails. :file:`redemo.py` can be quite useful when
trying to debug a complicated RE.
This HOWTO uses the standard Python interpreter for its examples. First, run the
Python interpreter, import the :mod:`re` module, and compile a RE::
>>> import re
>>> p = re.compile('[a-z]+')
>>> p
re.compile('[a-z]+')
Now, you can try matching various strings against the RE ``[a-z]+``. An empty
string shouldn't match at all, since ``+`` means 'one or more repetitions'.
:meth:`~re.Pattern.match` should return ``None`` in this case, which will cause the
interpreter to print no output. You can explicitly print the result of
:meth:`!match` to make this clear. ::
>>> p.match("")
>>> print(p.match(""))
None
Now, let's try it on a string that it should match, such as ``tempo``. In this
case, :meth:`~re.Pattern.match` will return a :ref:`match object <match-objects>`, so you
should store the result in a variable for later use. ::
>>> m = p.match('tempo')
>>> m
<re.Match object; span=(0, 5), match='tempo'>
Now you can query the :ref:`match object <match-objects>` for information
about the matching string. Match object instances
also have several methods and attributes; the most important ones are:
+------------------+--------------------------------------------+
| Method/Attribute | Purpose                                    |
+==================+============================================+
| ``group()``      | Return the string matched by the RE        |
+------------------+--------------------------------------------+
| ``start()``      | Return the starting position of the match  |
+------------------+--------------------------------------------+
| ``end()``        | Return the ending position of the match    |
+------------------+--------------------------------------------+
| ``span()``       | Return a tuple containing the (start, end) |
|                  | positions of the match                     |
+------------------+--------------------------------------------+

Trying these methods will soon clarify their meaning::
>>> m.group()
'tempo'
>>> m.start(), m.end()
(0, 5)
>>> m.span()
(0, 5)
:meth:`~re.Match.group` returns the substring that was matched by the RE. :meth:`~re.Match.start`
and :meth:`~re.Match.end` return the starting and ending index of the match. :meth:`~re.Match.span`
returns both start and end indexes in a single tuple. Since the :meth:`~re.Pattern.match`
method only checks if the RE matches at the start of a string, :meth:`!start`
will always be zero. However, the :meth:`~re.Pattern.search` method of patterns
scans through the string, so the match may not start at zero in that
case. ::
>>> print(p.match('::: message'))
None
>>> m = p.search('::: message'); print(m)
<re.Match object; span=(4, 11), match='message'>
>>> m.group()
'message'
>>> m.span()
(4, 11)
In actual programs, the most common style is to store the
:ref:`match object <match-objects>` in a variable, and then check if it was
``None``. This usually looks like::
p = re.compile( ... )
m = p.match( 'string goes here' )
if m:
    print('Match found: ', m.group())
else:
    print('No match')
Two pattern methods return all of the matches for a pattern.
:meth:`~re.Pattern.findall` returns a list of matching strings::
>>> p = re.compile(r'\d+')
>>> p.findall('12 drummers drumming, 11 pipers piping, 10 lords a-leaping')
['12', '11', '10']
:meth:`~re.Pattern.findall` has to create the entire list before it can be returned as the
result. The :meth:`~re.Pattern.finditer` method returns a sequence of
:ref:`match object <match-objects>` instances as an :term:`iterator`::
>>> iterator = p.finditer('12 drummers drumming, 11 ... 10 ...')
>>> iterator #doctest: +ELLIPSIS
<callable_iterator object at 0x...>
>>> for match in iterator:
...     print(match.span())
...
(0, 2)
(22, 24)
(29, 31)
Module-Level Functions
----------------------
You don't have to create a pattern object and call its methods; the
:mod:`re` module also provides top-level functions called :func:`~re.match`,
:func:`~re.search`, :func:`~re.findall`, :func:`~re.sub`, and so forth. These functions
take the same arguments as the corresponding pattern method with
the RE string added as the first argument, and still return either ``None`` or a
:ref:`match object <match-objects>` instance. ::
>>> print(re.match(r'From\s+', 'Fromage amk'))
None
>>> re.match(r'From\s+', 'From amk Thu May 14 19:12:10 1998') #doctest: +ELLIPSIS
<re.Match object; span=(0, 5), match='From '>
Under the hood, these functions simply create a pattern object for you
and call the appropriate method on it. They also store the compiled
object in a cache, so future calls using the same RE won't need to
parse the pattern again and again.
Should you use these module-level functions, or should you get the
pattern and call its methods yourself? If you're accessing a regex
within a loop, pre-compiling it will save a few function calls.
Outside of loops, there's not much difference thanks to the internal
cache.
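For instance, here's a rough sketch of the pre-compiled style (the pattern name
and sample strings are invented for illustration)::
>>> word_re = re.compile(r'\w+')
>>> for line in ['one two', 'three four']:
...     print(word_re.findall(line))
...
['one', 'two']
['three', 'four']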
Compilation Flags
-----------------
Compilation flags let you modify some aspects of how regular expressions work.
Flags are available in the :mod:`re` module under two names, a long name such as
:const:`IGNORECASE` and a short, one-letter form such as :const:`I`. (If you're
familiar with Perl's pattern modifiers, the one-letter forms use the same
letters; the short form of :const:`re.VERBOSE` is :const:`re.X`, for example.)
Multiple flags can be specified by bitwise OR-ing them; ``re.I | re.M`` sets
both the :const:`I` and :const:`M` flags, for example.
Here's a table of the available flags, followed by a more detailed explanation
of each one.
+---------------------------------+--------------------------------------------+
| Flag                            | Meaning                                    |
+=================================+============================================+
| :const:`ASCII`, :const:`A`      | Makes several escapes like ``\w``, ``\b``, |
|                                 | ``\s`` and ``\d`` match only on ASCII      |
|                                 | characters with the respective property.   |
+---------------------------------+--------------------------------------------+
| :const:`DOTALL`, :const:`S`     | Make ``.`` match any character, including  |
|                                 | newlines.                                  |
+---------------------------------+--------------------------------------------+
| :const:`IGNORECASE`, :const:`I` | Do case-insensitive matches.               |
+---------------------------------+--------------------------------------------+
| :const:`LOCALE`, :const:`L`     | Do a locale-aware match.                   |
+---------------------------------+--------------------------------------------+
| :const:`MULTILINE`, :const:`M`  | Multi-line matching, affecting ``^`` and   |
|                                 | ``$``.                                     |
+---------------------------------+--------------------------------------------+
| :const:`VERBOSE`, :const:`X`    | Enable verbose REs, which can be organized |
| (for 'extended')                | more cleanly and understandably.           |
+---------------------------------+--------------------------------------------+
.. data:: I
IGNORECASE
:noindex:
Perform case-insensitive matching; character class and literal strings will
match letters by ignoring case. For example, ``[A-Z]`` will match lowercase
letters, too. Full Unicode matching also works unless the :const:`ASCII`
flag is used to disable non-ASCII matches. When the Unicode patterns
``[a-z]`` or ``[A-Z]`` are used in combination with the :const:`IGNORECASE`
flag, they will match the 52 ASCII letters and 4 additional non-ASCII
letters: 'İ' (U+0130, Latin capital letter I with dot above), 'ı' (U+0131,
Latin small letter dotless i), 'ſ' (U+017F, Latin small letter long s) and
'K' (U+212A, Kelvin sign). ``Spam`` will match ``'Spam'``, ``'spam'``,
``'spAM'``, or ``'ſpam'`` (the latter is matched only in Unicode mode).
This lowercasing doesn't take the current locale into account;
it will if you also set the :const:`LOCALE` flag.
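For example, here's a quick sketch at the interactive prompt (the sample
strings are invented)::
>>> print(re.match('spam', 'SPAM'))
None
>>> re.match('spam', 'SPAM', re.IGNORECASE)
<re.Match object; span=(0, 4), match='SPAM'>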
.. data:: L
LOCALE
:noindex:
Make ``\w``, ``\W``, ``\b``, ``\B`` and case-insensitive matching dependent
on the current locale instead of the Unicode database.
Locales are a feature of the C library intended to help in writing programs
that take account of language differences. For example, if you're
processing encoded French text, you'd want to be able to write ``\w+`` to
match words, but ``\w`` only matches the character class ``[A-Za-z]`` in
bytes patterns; it won't match bytes corresponding to ``é`` or ``ç``.
If your system is configured properly and a French locale is selected,
certain C functions will tell the program that the byte corresponding to
``é`` should also be considered a letter.
Setting the :const:`LOCALE` flag when compiling a regular expression will cause
the resulting compiled object to use these C functions for ``\w``; this is
slower, but also enables ``\w+`` to match French words as you'd expect.
The use of this flag is discouraged in Python 3 as the locale mechanism
is very unreliable, it only handles one "culture" at a time, and it only
works with 8-bit locales. Unicode matching is already enabled by default
in Python 3 for Unicode (str) patterns, and it is able to handle different
locales/languages.
.. data:: M
MULTILINE
:noindex:
(``^`` and ``$`` haven't been explained yet; they'll be introduced in section
:ref:`more-metacharacters`.)
Usually ``^`` matches only at the beginning of the string, and ``$`` matches
only at the end of the string and immediately before the newline (if any) at the
end of the string. When this flag is specified, ``^`` matches at the beginning
of the string and at the beginning of each line within the string, immediately
following each newline. Similarly, the ``$`` metacharacter matches either at
the end of the string or at the end of each line (immediately preceding each
newline).
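For example, here's a small sketch contrasting the default behaviour with
:const:`MULTILINE` (the sample string is invented)::
>>> re.compile('^From').findall('From here\nto there\nFrom me to you')
['From']
>>> re.compile('^From', re.MULTILINE).findall('From here\nto there\nFrom me to you')
['From', 'From']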
.. data:: S
DOTALL
:noindex:
Makes the ``'.'`` special character match any character at all, including a
newline; without this flag, ``'.'`` will match anything *except* a newline.
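A brief sketch of the difference, using an invented two-line string::
>>> print(re.match('a.b', 'a\nb'))
None
>>> re.match('a.b', 'a\nb', re.DOTALL)
<re.Match object; span=(0, 3), match='a\nb'>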
.. data:: A
ASCII
:noindex:
Make ``\w``, ``\W``, ``\b``, ``\B``, ``\s`` and ``\S`` perform ASCII-only
matching instead of full Unicode matching. This is only meaningful for
Unicode patterns, and is ignored for byte patterns.
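For example, here's a short sketch with an invented sample string::
>>> re.findall(r'\w+', 'voilà café')
['voilà', 'café']
>>> re.findall(r'\w+', 'voilà café', re.ASCII)
['voil', 'caf']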
.. data:: X
VERBOSE
:noindex:
This flag allows you to write regular expressions that are more readable by
granting you more flexibility in how you can format them. When this flag has
been specified, whitespace within the RE string is ignored, except when the
whitespace is in a character class or preceded by an unescaped backslash; this
lets you organize and indent the RE more clearly. This flag also lets you put
comments within a RE that will be ignored by the engine; comments are marked by
a ``'#'`` that's neither in a character class nor preceded by an unescaped
backslash.
For example, here's a RE that uses :const:`re.VERBOSE`; see how much easier it
is to read? ::
charref = re.compile(r"""
 &[#]                # Start of a numeric entity reference
 (
     0[0-7]+         # Octal form
   | [0-9]+          # Decimal form
   | x[0-9a-fA-F]+   # Hexadecimal form
 )
 ;                   # Trailing semicolon
""", re.VERBOSE)
Without the verbose setting, the RE would look like this::
charref = re.compile("&#(0[0-7]+"
"|[0-9]+"
"|x[0-9a-fA-F]+);")
In the above example, Python's automatic concatenation of string literals has
been used to break up the RE into smaller pieces, but it's still more difficult
to understand than the version using :const:`re.VERBOSE`.
More Pattern Power
==================
So far we've only covered a part of the features of regular expressions. In
this section, we'll cover some new metacharacters, and how to use groups to
retrieve portions of the text that was matched.
.. _more-metacharacters:
More Metacharacters
-------------------
There are some metacharacters that we haven't covered yet. Most of them will be
covered in this section.
Some of the remaining metacharacters to be discussed are :dfn:`zero-width
assertions`. They don't cause the engine to advance through the string;
instead, they consume no characters at all, and simply succeed or fail. For
example, ``\b`` is an assertion that the current position is located at a word
boundary; the position isn't changed by the ``\b`` at all. This means that
zero-width assertions should never be repeated, because if they match once at a
given location, they can obviously be matched an infinite number of times.
``|``
Alternation, or the "or" operator. If *A* and *B* are regular expressions,
``A|B`` will match any string that matches either *A* or *B*. ``|`` has very
low precedence in order to make it work reasonably when you're alternating
multi-character strings. ``Crow|Servo`` will match either ``'Crow'`` or ``'Servo'``,
not ``'Cro'``, a ``'w'`` or an ``'S'``, and ``'ervo'``.
To match a literal ``'|'``, use ``\|``, or enclose it inside a character class,
as in ``[|]``.
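For instance, a brief sketch with invented sample strings::
>>> p = re.compile('Crow|Servo')
>>> p.search('ToServo: engage').group()
'Servo'
>>> p.search('CrowBot ready').group()
'Crow'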
``^``
Matches at the beginning of lines. Unless the :const:`MULTILINE` flag has been
set, this will only match at the beginning of the string. In :const:`MULTILINE`
mode, this also matches immediately after each newline within the string.
For example, if you wish to match the word ``From`` only at the beginning of a
line, the RE to use is ``^From``. ::
>>> print(re.search('^From', 'From Here to Eternity')) #doctest: +ELLIPSIS
<re.Match object; span=(0, 4), match='From'>
>>> print(re.search('^From', 'Reciting From Memory'))
None
To match a literal ``'^'``, use ``\^``.
``$``
Matches at the end of a line, which is defined as either the end of the string,
or any location followed by a newline character. ::
>>> print(re.search('}$', '{block}')) #doctest: +ELLIPSIS
<re.Match object; span=(6, 7), match='}'>
>>> print(re.search('}$', '{block} '))
None
>>> print(re.search('}$', '{block}\n')) #doctest: +ELLIPSIS
<re.Match object; span=(6, 7), match='}'>
To match a literal ``'$'``, use ``\$`` or enclose it inside a character class,
as in ``[$]``.
``\A``
Matches only at the start of the string. When not in :const:`MULTILINE` mode,
``\A`` and ``^`` are effectively the same. In :const:`MULTILINE` mode, they're
different: ``\A`` still matches only at the beginning of the string, but ``^``
may match at any location inside the string that follows a newline character.
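A small sketch of the difference in :const:`MULTILINE` mode (the sample string
is invented)::
>>> print(re.search(r'\AFrom', 'Dear sir\nFrom me', re.MULTILINE))
None
>>> re.search('^From', 'Dear sir\nFrom me', re.MULTILINE)
<re.Match object; span=(9, 13), match='From'>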
``\Z``
Matches only at the end of the string.
``\b``
Word boundary. This is a zero-width assertion that matches only at the
beginning or end of a word. A word is defined as a sequence of alphanumeric
characters, so the end of a word is indicated by whitespace or a
non-alphanumeric character.
The following example matches ``class`` only when it's a complete word; it won't
match when it's contained inside another word. ::
>>> p = re.compile(r'\bclass\b')
>>> print(p.search('no class at all'))
<re.Match object; span=(3, 8), match='class'>
>>> print(p.search('the declassified algorithm'))
None
>>> print(p.search('one subclass is'))
None
There are two subtleties you should remember when using this special sequence.
First, this is the worst collision between Python's string literals and regular
expression sequences. In Python's string literals, ``\b`` is the backspace
character, ASCII value 8. If you're not using raw strings, then Python will
convert the ``\b`` to a backspace, and your RE won't match as you expect it to.
The following example looks the same as our previous RE, but omits the ``'r'``
in front of the RE string. ::
>>> p = re.compile('\bclass\b')
>>> print(p.search('no class at all'))
None
>>> print(p.search('\b' + 'class' + '\b'))
<re.Match object; span=(0, 7), match='\x08class\x08'>
Second, inside a character class, where there's no use for this assertion,
``\b`` represents the backspace character, for compatibility with Python's
string literals.
``\B``
Another zero-width assertion, this is the opposite of ``\b``, only matching when
the current position is not at a word boundary.
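For example, compare this with the ``\b`` examples above, using the same
sample strings::
>>> p = re.compile(r'\Bclass\B')
>>> print(p.search('no class at all'))
None
>>> p.search('the declassified algorithm')
<re.Match object; span=(6, 11), match='class'>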
Grouping
--------
Frequently you need to obtain more information than just whether the RE matched
or not. Regular expressions are often used to dissect strings by writing a RE
divided into several subgroups which match different components of interest.
For example, an RFC-822 header line is divided into a header name and a value,
separated by a ``':'``, like this::
From: author@example.com
User-Agent: Thunderbird 1.5.0.9 (X11/20061227)
MIME-Version: 1.0
To: editor@example.com
This can be handled by writing a regular expression which matches an entire
header line, and has one group which matches the header name, and another group
which matches the header's value.
Groups are marked by the ``'('``, ``')'`` metacharacters. ``'('`` and ``')'``
have much the same meaning as they do in mathematical expressions; they group
together the expressions contained inside them, and you can repeat the contents
of a group with a repeating qualifier, such as ``*``, ``+``, ``?``, or
``{m,n}``. For example, ``(ab)*`` will match zero or more repetitions of
``ab``. ::
>>> p = re.compile('(ab)*')
>>> print(p.match('ababababab').span())
(0, 10)
Groups indicated with ``'('``, ``')'`` also capture the starting and ending
index of the text that they match; this can be retrieved by passing an argument
to :meth:`~re.Match.group`, :meth:`~re.Match.start`, :meth:`~re.Match.end`, and
:meth:`~re.Match.span`. Groups are
numbered starting with 0. Group 0 is always present; it's the whole RE, so
:ref:`match object <match-objects>` methods all have group 0 as their default
argument. Later we'll see how to express groups that don't capture the span
of text that they match. ::
>>> p = re.compile('(a)b')
>>> m = p.match('ab')
>>> m.group()
'ab'
>>> m.group(0)
'ab'
Subgroups are numbered from left to right, from 1 upward. Groups can be nested;
to determine the number, just count the opening parenthesis characters, going
from left to right. ::
>>> p = re.compile('(a(b)c)d')
>>> m = p.match('abcd')
>>> m.group(0)
'abcd'
>>> m.group(1)
'abc'
>>> m.group(2)
'b'
:meth:`~re.Match.group` can be passed multiple group numbers at a time, in which case it
will return a tuple containing the corresponding values for those groups. ::
>>> m.group(2,1,2)
('b', 'abc', 'b')
The :meth:`~re.Match.groups` method returns a tuple containing the strings for all the
subgroups, from 1 up to however many there are. ::
>>> m.groups()
('abc', 'b')
Backreferences in a pattern allow you to specify that the contents of an earlier
capturing group must also be found at the current location in the string. For
example, ``\1`` will succeed if the exact contents of group 1 can be found at
the current position, and fails otherwise. Remember that Python's string
literals also use a backslash followed by numbers to allow including arbitrary
characters in a string, so be sure to use a raw string when incorporating
backreferences in a RE.
For example, the following RE detects doubled words in a string. ::
>>> p = re.compile(r'\b(\w+)\s+\1\b')
>>> p.search('Paris in the the spring').group()
'the the'
Backreferences like this aren't often useful for just searching through a string
--- there are few text formats which repeat data in this way --- but you'll soon
find out that they're *very* useful when performing string substitutions.
Non-capturing and Named Groups
------------------------------
Elaborate REs may use many groups, both to capture substrings of interest, and
to group and structure the RE itself. In complex REs, it becomes difficult to
keep track of the group numbers. There are two features which help with this
problem. Both of them use a common syntax for regular expression extensions, so
we'll look at that first.
Perl 5 is well known for its powerful additions to standard regular expressions.
For these new features the Perl developers couldn't choose new single-keystroke metacharacters
or new special sequences beginning with ``\`` without making Perl's regular
expressions confusingly different from standard REs. If they chose ``&`` as a
new metacharacter, for example, old expressions would be assuming that ``&`` was
a regular character and wouldn't have escaped it by writing ``\&`` or ``[&]``.
The solution chosen by the Perl developers was to use ``(?...)`` as the
extension syntax. ``?`` immediately after a parenthesis was a syntax error
because the ``?`` would have nothing to repeat, so this didn't introduce any
compatibility problems. The characters immediately after the ``?`` indicate
what extension is being used, so ``(?=foo)`` is one thing (a positive lookahead
assertion) and ``(?:foo)`` is something else (a non-capturing group containing
the subexpression ``foo``).
Python supports several of Perl's extensions and adds an extension
syntax to Perl's extension syntax. If the first character after the
question mark is a ``P``, you know that it's an extension that's
specific to Python.
Now that we've looked at the general extension syntax, we can return
to the features that simplify working with groups in complex REs.
Sometimes you'll want to use a group to denote a part of a regular expression,
but aren't interested in retrieving the group's contents. You can make this fact
explicit by using a non-capturing group: ``(?:...)``, where you can replace the
``...`` with any other regular expression. ::
>>> m = re.match("([abc])+", "abc")
>>> m.groups()
('c',)
>>> m = re.match("(?:[abc])+", "abc")
>>> m.groups()
()
Except for the fact that you can't retrieve the contents of what the group
matched, a non-capturing group behaves exactly the same as a capturing group;
you can put anything inside it, repeat it with a repetition metacharacter such
as ``*``, and nest it within other groups (capturing or non-capturing).
``(?:...)`` is particularly useful when modifying an existing pattern, since you
can add new groups without changing how all the other groups are numbered. It
should be mentioned that there's no performance difference in searching between
capturing and non-capturing groups; neither form is any faster than the other.
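For instance, here's a small sketch (with invented sample strings): adding an
optional non-capturing prefix leaves the existing group numbers untouched. ::
>>> m = re.match(r'(\w+) (\w+)', 'Hello world')
>>> m.group(2)
'world'
>>> m = re.match(r'(?:Re: )?(\w+) (\w+)', 'Re: Hello world')
>>> m.group(2)
'world'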
A more significant feature is named groups: instead of referring to them by
numbers, groups can be referenced by a name.
The syntax for a named group is one of the Python-specific extensions:
``(?P<name>...)``. *name* is, obviously, the name of the group. Named groups
behave exactly like capturing groups, and additionally associate a name
with a group. The :ref:`match object <match-objects>` methods that deal with
capturing groups all accept either integers that refer to the group by number
or strings that contain the desired group's name. Named groups are still
given numbers, so you can retrieve information about a group in two ways::
>>> p = re.compile(r'(?P<word>\b\w+\b)')
>>> m = p.search( '(((( Lots of punctuation )))' )
>>> m.group('word')
'Lots'
>>> m.group(1)
'Lots'
Named groups are handy because they let you use easily-remembered names, instead
of having to remember numbers. Here's an example RE from the :mod:`imaplib`
module::
InternalDate = re.compile(r'INTERNALDATE "'
        r'(?P<day>[ 123][0-9])-(?P<mon>[A-Z][a-z][a-z])-'
        r'(?P<year>[0-9][0-9][0-9][0-9])'
        r' (?P<hour>[0-9][0-9]):(?P<min>[0-9][0-9]):(?P<sec>[0-9][0-9])'
        r' (?P<zonen>[-+])(?P<zoneh>[0-9][0-9])(?P<zonem>[0-9][0-9])'
        r'"')
It's obviously much easier to retrieve ``m.group('zonem')``, instead of having
to remember to retrieve group 9.
The syntax for backreferences in an expression such as ``(...)\1`` refers to the
number of the group. There's naturally a variant that uses the group name
instead of the number. This is another Python extension: ``(?P=name)`` indicates
that the contents of the group called *name* should again be matched at the
current point. The regular expression for finding doubled words,
``\b(\w+)\s+\1\b`` can also be written as ``\b(?P<word>\w+)\s+(?P=word)\b``::
>>> p = re.compile(r'\b(?P<word>\w+)\s+(?P=word)\b')
>>> p.search('Paris in the the spring').group()
'the the'
Lookahead Assertions
--------------------
Another zero-width assertion is the lookahead assertion. Lookahead assertions
are available in both positive and negative form, and look like this:
``(?=...)``
Positive lookahead assertion. This succeeds if the contained regular
expression, represented here by ``...``, successfully matches at the current
location, and fails otherwise. But, once the contained expression has been
tried, the matching engine doesn't advance at all; the rest of the pattern is
tried right where the assertion started.
``(?!...)``
Negative lookahead assertion. This is the opposite of the positive assertion;
it succeeds if the contained expression *doesn't* match at the current position
in the string.
To make this concrete, let's look at a case where a lookahead is useful.
Consider a simple pattern to match a filename and split it apart into a base
name and an extension, separated by a ``.``. For example, in ``news.rc``,
``news`` is the base name, and ``rc`` is the filename's extension.
The pattern to match this is quite simple:
``.*[.].*$``
Notice that the ``.`` needs to be treated specially because it's a
metacharacter, so it's inside a character class to only match that
specific character. Also notice the trailing ``$``; this is added to
ensure that all the rest of the string must be included in the
extension. This regular expression matches ``foo.bar`` and
``autoexec.bat`` and ``sendmail.cf`` and ``printers.conf``.
Now, consider complicating the problem a bit; what if you want to match
filenames where the extension is not ``bat``? Some incorrect attempts:
``.*[.][^b].*$`` The first attempt above tries to exclude ``bat`` by requiring
that the first character of the extension is not a ``b``. This is wrong,
because the pattern also doesn't match ``foo.bar``.
``.*[.]([^b]..|.[^a].|..[^t])$``
The expression gets messier when you try to patch up the first solution by
requiring one of the following cases to match: the first character of the
extension isn't ``b``; the second character isn't ``a``; or the third character
isn't ``t``. This accepts ``foo.bar`` and rejects ``autoexec.bat``, but it
requires a three-letter extension and won't accept a filename with a two-letter
extension such as ``sendmail.cf``. We'll complicate the pattern again in an
effort to fix it.
``.*[.]([^b].?.?|.[^a]?.?|..?[^t]?)$``
In the third attempt, the second and third letters are all made optional in
order to allow matching extensions shorter than three characters, such as
``sendmail.cf``.
The pattern's getting really complicated now, which makes it hard to read and
understand. Worse, if the problem changes and you want to exclude both ``bat``
and ``exe`` as extensions, the pattern would get even more complicated and
confusing.
A negative lookahead cuts through all this confusion:
``.*[.](?!bat$)[^.]*$`` The negative lookahead means: if the expression ``bat``
doesn't match at this point, try the rest of the pattern; if ``bat$`` does
match, the whole pattern will fail. The trailing ``$`` is required to ensure
that something like ``sample.batch``, where the extension only starts with
``bat``, will be allowed. The ``[^.]*`` makes sure that the pattern works
when there are multiple dots in the filename.
Excluding another filename extension is now easy; simply add it as an
alternative inside the assertion. The following pattern excludes filenames that
end in either ``bat`` or ``exe``:
``.*[.](?!bat$|exe$)[^.]*$``
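Here's a quick sketch of this pattern at the interactive prompt (the filenames
are just examples)::
>>> p = re.compile(r'.*[.](?!bat$|exe$)[^.]*$')
>>> print(p.match('autoexec.bat'))
None
>>> print(p.match('setup.exe'))
None
>>> p.match('sendmail.cf')
<re.Match object; span=(0, 11), match='sendmail.cf'>
>>> p.match('sample.batch')
<re.Match object; span=(0, 12), match='sample.batch'>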
Modifying Strings
=================
Up to this point, we've simply performed searches against a static string.
Regular expressions are also commonly used to modify strings in various ways,
using the following pattern methods:
+------------------+-----------------------------------------------+
| Method/Attribute | Purpose                                       |
+==================+===============================================+
| ``split()``      | Split the string into a list, splitting it    |
|                  | wherever the RE matches                       |
+------------------+-----------------------------------------------+
| ``sub()``        | Find all substrings where the RE matches, and |
|                  | replace them with a different string          |
+------------------+-----------------------------------------------+
| ``subn()``       | Does the same thing as :meth:`!sub`, but      |
|                  | returns the new string and the number of      |
|                  | replacements                                  |
+------------------+-----------------------------------------------+
Splitting Strings
-----------------
The :meth:`~re.Pattern.split` method of a pattern splits a string apart
wherever the RE matches, returning a list of the pieces. It's similar to the
:meth:`~str.split` method of strings but provides much more generality in the
delimiters that you can split by; string :meth:`!split` only supports splitting by
whitespace or by a fixed string. As you'd expect, there's a module-level
:func:`re.split` function, too.
.. method:: .split(string[, maxsplit=0])
:noindex:
Split *string* by the matches of the regular expression. If capturing
parentheses are used in the RE, then their contents will also be returned as
part of the resulting list. If *maxsplit* is nonzero, at most *maxsplit* splits
are performed.
You can limit the number of splits made by passing a value for *maxsplit*.
When *maxsplit* is nonzero, at most *maxsplit* splits will be made, and the
remainder of the string is returned as the final element of the list. In the
following example, the delimiter is any sequence of non-alphanumeric characters.
::
>>> p = re.compile(r'\W+')
>>> p.split('This is a test, short and sweet, of split().')
['This', 'is', 'a', 'test', 'short', 'and', 'sweet', 'of', 'split', '']
>>> p.split('This is a test, short and sweet, of split().', 3)
['This', 'is', 'a', 'test, short and sweet, of split().']
Sometimes you're not only interested in what the text between delimiters is, but
also need to know what the delimiter was. If capturing parentheses are used in
the RE, then their values are also returned as part of the list. Compare the
following calls::
>>> p = re.compile(r'\W+')
>>> p2 = re.compile(r'(\W+)')
>>> p.split('This... is a test.')
['This', 'is', 'a', 'test', '']
>>> p2.split('This... is a test.')
['This', '... ', 'is', ' ', 'a', ' ', 'test', '.', '']
The module-level function :func:`re.split` adds the RE to be used as the first
argument, but is otherwise the same. ::
>>> re.split(r'[\W]+', 'Words, words, words.')
['Words', 'words', 'words', '']
>>> re.split(r'([\W]+)', 'Words, words, words.')
['Words', ', ', 'words', ', ', 'words', '.', '']
>>> re.split(r'[\W]+', 'Words, words, words.', maxsplit=1)
['Words', 'words, words.']
Search and Replace
------------------
Another common task is to find all the matches for a pattern, and replace them
with a different string. The :meth:`~re.Pattern.sub` method takes a replacement value,
which can be either a string or a function, and the string to be processed.
.. method:: .sub(replacement, string[, count=0])
:noindex:
Returns the string obtained by replacing the leftmost non-overlapping
occurrences of the RE in *string* by the replacement *replacement*. If the
pattern isn't found, *string* is returned unchanged.
The optional argument *count* is the maximum number of pattern occurrences to be
replaced; *count* must be a non-negative integer. The default value of 0 means
to replace all occurrences.
Here's a simple example of using the :meth:`~re.Pattern.sub` method. It replaces colour
names with the word ``colour``::
>>> p = re.compile('(blue|white|red)')
>>> p.sub('colour', 'blue socks and red shoes')
'colour socks and colour shoes'
>>> p.sub('colour', 'blue socks and red shoes', count=1)
'colour socks and red shoes'
The :meth:`~re.Pattern.subn` method does the same work, but returns a 2-tuple containing the
new string value and the number of replacements that were performed::
>>> p = re.compile('(blue|white|red)')
>>> p.subn('colour', 'blue socks and red shoes')
('colour socks and colour shoes', 2)
>>> p.subn('colour', 'no colours at all')
('no colours at all', 0)
Empty matches for the pattern are replaced only when they're not adjacent to a
previous empty match. ::
>>> p = re.compile('x*')
>>> p.sub('-', 'abxd')
'-a-b--d-'
If *replacement* is a string, any backslash escapes in it are processed. That
is, ``\n`` is converted to a single newline character, ``\r`` is converted to a
carriage return, and so forth. Unknown escapes such as ``\&`` are left alone.
Backreferences, such as ``\6``, are replaced with the substring matched by the
corresponding group in the RE. This lets you incorporate portions of the
original text in the resulting replacement string.
This example matches the word ``section`` followed by a string enclosed in
``{``, ``}``, and changes ``section`` to ``subsection``::
>>> p = re.compile('section{ ( [^}]* ) }', re.VERBOSE)
>>> p.sub(r'subsection{\1}','section{First} section{second}')
'subsection{First} subsection{second}'
There's also a syntax for referring to named groups as defined by the
``(?P<name>...)`` syntax. ``\g<name>`` will use the substring matched by the
group named ``name``, and ``\g<number>`` uses the corresponding group number.
``\g<2>`` is therefore equivalent to ``\2``, but isn't ambiguous in a
replacement string such as ``\g<2>0``. (``\20`` would be interpreted as a
reference to group 20, not a reference to group 2 followed by the literal
character ``'0'``.) The following substitutions are all equivalent, but use all
three variations of the replacement string. ::
>>> p = re.compile('section{ (?P<name> [^}]* ) }', re.VERBOSE)
>>> p.sub(r'subsection{\1}','section{First}')
'subsection{First}'
>>> p.sub(r'subsection{\g<1>}','section{First}')
'subsection{First}'
>>> p.sub(r'subsection{\g<name>}','section{First}')
'subsection{First}'
*replacement* can also be a function, which gives you even more control. If
*replacement* is a function, the function is called for every non-overlapping
occurrence of *pattern*. On each call, the function is passed a
:ref:`match object <match-objects>` argument for the match and can use this
information to compute the desired replacement string and return it.
In the following example, the replacement function translates decimals into
hexadecimal::
>>> def hexrepl(match):
...     "Return the hex string for a decimal number"
...     value = int(match.group())
...     return hex(value)
...
>>> p = re.compile(r'\d+')
>>> p.sub(hexrepl, 'Call 65490 for printing, 49152 for user code.')
'Call 0xffd2 for printing, 0xc000 for user code.'
When using the module-level :func:`re.sub` function, the pattern is passed as
the first argument. The pattern may be provided as an object or as a string; if
you need to specify regular expression flags, you must either use a
pattern object as the first parameter, or use embedded modifiers in the
pattern string, e.g. ``sub("(?i)b+", "x", "bbbb BBBB")`` returns ``'x x'``.
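For instance, a short sketch of both options::
>>> re.sub('(?i)b+', 'x', 'bbbb BBBB')
'x x'
>>> re.sub(re.compile('b+', re.IGNORECASE), 'x', 'bbbb BBBB')
'x x'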
Common Problems
===============
Regular expressions are a powerful tool for some applications, but in some ways
their behaviour isn't intuitive and at times they don't behave the way you may
expect them to. This section will point out some of the most common pitfalls.
Use String Methods
------------------
Sometimes using the :mod:`re` module is a mistake. If you're matching a fixed
string, or a single character class, and you're not using any :mod:`re` features
such as the :const:`~re.IGNORECASE` flag, then the full power of regular expressions
may not be required. Strings have several methods for performing operations with
fixed strings and they're usually much faster, because the implementation is a
single small C loop that's been optimized for the purpose, instead of the large,
more generalized regular expression engine.
One example might be replacing a single fixed string with another one; for
example, you might replace ``word`` with ``deed``. :func:`re.sub` seems like the
function to use for this, but consider the :meth:`~str.replace` method. Note that
:meth:`!replace` will also replace ``word`` inside words, turning ``swordfish``
into ``sdeedfish``, but the naive RE ``word`` would have done that, too. (To
avoid performing the substitution on parts of words, the pattern would have to
be ``\bword\b``, in order to require that ``word`` have a word boundary on
either side. This takes the job beyond :meth:`!replace`'s abilities.)
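A rough sketch of the difference, with an invented sample string::
>>> 'swordfish and one word'.replace('word', 'deed')
'sdeedfish and one deed'
>>> re.sub(r'\bword\b', 'deed', 'swordfish and one word')
'swordfish and one deed'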
Another common task is deleting every occurrence of a single character from a
string or replacing it with another single character. You might do this with
something like ``re.sub('\n', ' ', S)``, but :meth:`~str.translate` is capable of
doing both tasks and will be faster than any regular expression operation can
be.
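For example, here's a minimal sketch of both tasks (the sample string is
invented for illustration)::
>>> 'one\ntwo\nthree'.translate(str.maketrans('\n', ' '))
'one two three'
>>> 'one\ntwo\nthree'.translate({ord('\n'): None})
'onetwothree'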
In short, before turning to the :mod:`re` module, consider whether your problem
can be solved with a faster and simpler string method.
match() versus search()
-----------------------
The :func:`~re.match` function only checks if the RE matches at the beginning of the
string while :func:`~re.search` will scan forward through the string for a match.
It's important to keep this distinction in mind. Remember, :func:`!match` will
only report a successful match which will start at 0; if the match wouldn't
start at zero, :func:`!match` will *not* report it. ::
>>> print(re.match('super', 'superstition').span())
(0, 5)
>>> print(re.match('super', 'insuperable'))
None
On the other hand, :func:`~re.search` will scan forward through the string,
reporting the first match it finds. ::
>>> print(re.search('super', 'superstition').span())
(0, 5)
>>> print(re.search('super', 'insuperable').span())
(2, 7)
Sometimes you'll be tempted to keep using :func:`re.match`, and just add ``.*``
to the front of your RE. Resist this temptation and use :func:`re.search`
instead. The regular expression compiler does some analysis of REs in order to
speed up the process of looking for a match. One such analysis figures out what
the first character of a match must be; for example, a pattern starting with
``Crow`` must match starting with a ``'C'``. The analysis lets the engine
quickly scan through the string looking for the starting character, only trying
the full match if a ``'C'`` is found.
Adding ``.*`` defeats this optimization, requiring scanning to the end of the
string and then backtracking to find a match for the rest of the RE. Use
:func:`re.search` instead.
Greedy versus Non-Greedy
------------------------
When repeating a regular expression, as in ``a*``, the resulting action is to
consume as much of the string as possible. This fact often bites you when
you're trying to match a pair of balanced delimiters, such as the angle brackets
surrounding an HTML tag. The naive pattern for matching a single HTML tag
doesn't work because of the greedy nature of ``.*``. ::
>>> s = '<html><head><title>Title</title>'
>>> len(s)
32
>>> print(re.match('<.*>', s).span())
(0, 32)
>>> print(re.match('<.*>', s).group())
<html><head><title>Title</title>
The RE matches the ``'<'`` in ``'<html>'``, and the ``.*`` consumes the rest of
the string. There's still more left in the RE, though, and the ``>`` can't
match at the end of the string, so the regular expression engine has to
backtrack character by character until it finds a match for the ``>``. The
final match extends from the ``'<'`` in ``'<html>'`` to the ``'>'`` in
``'</title>'``, which isn't what you want.
In this case, the solution is to use the non-greedy qualifiers ``*?``, ``+?``,
``??``, or ``{m,n}?``, which match as *little* text as possible. In the above
example, the ``'>'`` is tried immediately after the first ``'<'`` matches, and
when it fails, the engine advances a character at a time, retrying the ``'>'``
at every step. This produces just the right result::
>>> print(re.match('<.*?>', s).group())
<html>
(Note that parsing HTML or XML with regular expressions is painful.
Quick-and-dirty patterns will handle common cases, but HTML and XML have special
cases that will break the obvious regular expression; by the time you've written
a regular expression that handles all of the possible cases, the patterns will
be *very* complicated. Use an HTML or XML parser module for such tasks.)
Using re.VERBOSE
----------------
By now you've probably noticed that regular expressions are a very compact
notation, but they're not terribly readable. REs of moderate complexity can
become lengthy collections of backslashes, parentheses, and metacharacters,
making them difficult to read and understand.
For such REs, specifying the :const:`re.VERBOSE` flag when compiling the regular
expression can be helpful, because it allows you to format the regular
expression more clearly.
The ``re.VERBOSE`` flag has several effects. Whitespace in the regular
expression that *isn't* inside a character class is ignored. This means that an
expression such as ``dog | cat`` is equivalent to the less readable ``dog|cat``,
but ``[a b]`` will still match the characters ``'a'``, ``'b'``, or a space. In
addition, you can also put comments inside a RE; comments extend from a ``#``
character to the next newline. When used with triple-quoted strings, this
enables REs to be formatted more neatly::
pat = re.compile(r"""
 \s*                 # Skip leading whitespace
 (?P<header>[^:]+)   # Header name
 \s* :               # Whitespace, and a colon
 (?P<value>.*?)      # The header's value -- *? used to
                     # lose the following trailing whitespace
 \s*$                # Trailing whitespace to end-of-line
""", re.VERBOSE)
This is far more readable than::
pat = re.compile(r"\s*(?P<header>[^:]+)\s*:(?P<value>.*?)\s*$")
Feedback
========
Regular expressions are a complicated topic. Did this document help you
understand them? Were there parts that were unclear, or were there problems you
encountered that weren't covered here? If so, please send suggestions for
improvements to the author.
The most complete book on regular expressions is almost certainly Jeffrey
Friedl's Mastering Regular Expressions, published by O'Reilly. Unfortunately,
it exclusively concentrates on Perl and Java's flavours of regular expressions,
and doesn't contain any Python material at all, so it won't be useful as a
reference for programming in Python. (The first edition covered Python's
now-removed :mod:`!regex` module, which won't help you much.) Consider checking
it out from your library.