In GH-15823 the pattern was changed from `libpython*.so*` to `*.so*`, which
matches too greedily for some packagers. For instance, it trips up
`debian/README.source`. A more specific pattern fixes this issue.
This fixes the issue discussed in https://bugs.python.org/issue22377,
following the review comments made by Paul Ganssle (@pganssle):
* It clarifies which values are acceptable in the table.
* It extends the note with clearer information on the valid values.
https://bugs.python.org/issue22377
* bpo-20928: bring ElementTree's XInclude support on par with the implementation in lxml by adding support for recursive includes and a base URL.
* bpo-20928: Support xincluding the same file multiple times, just not recursively.
* bpo-20928: Add a 'max_depth' parameter to xinclude that limits the maximum recursion depth, 6 by default (a usage sketch follows this list).
* Add news entry for updated ElementInclude support
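A minimal usage sketch of the extended API, assuming a local `document.xml` that pulls in other files via XInclude:

from xml.etree import ElementTree, ElementInclude

tree = ElementTree.parse("document.xml")  # hypothetical input file
# base_url resolves relative hrefs inside included documents; max_depth
# caps recursive expansion (default 6) and guards against include loops.
ElementInclude.include(tree.getroot(), base_url="document.xml", max_depth=3)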
Add ast.unparse(), a function in the ast module that can be used to unparse an
ast.AST object and produce a string of code that would produce an equivalent
ast.AST object when parsed.
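A quick round-trip sketch of the new function:

import ast

tree = ast.parse("x = (1 + 2) * 3")
source = ast.unparse(tree)  # e.g. "x = (1 + 2) * 3"
# Parsing the generated source yields an equivalent tree.
assert ast.dump(ast.parse(source)) == ast.dump(tree)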
Extra newlines are removed at the end of non-shell files. If the file contains only newlines after other trailing whitespace is stripped, all of them are removed, as patchcheck.py does.
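A minimal sketch of that rule, assuming `text` holds a file's contents with per-line trailing whitespace already stripped:

lines = text.split("\n")
while lines and lines[-1] == "":
    lines.pop()  # drop every blank line at EOF
# Non-empty files keep exactly one final newline; a file that
# contained only newlines ends up completely empty.
text = "\n".join(lines) + ("\n" if lines else "")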
The previous code raised a `KeyError` in both the Python and C implementations.
This was caused by a malformed input specifying a memo index that did not exist
in the memo structure, where pickle stores the objects it has already seen.
The malformed input triggered either a `BINGET` or `LONG_BINGET` load
from the memo, leading to a `KeyError` because the referenced index was bogus.
https://bugs.python.org/issue38876
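A minimal reproduction sketch, assuming a hand-crafted pickle stream whose `BINGET` refers to an empty memo:

import pickle

# b'h' is BINGET with a one-byte memo index, b'.' is STOP; index 0 was
# never stored, so the memo lookup cannot succeed.
malformed = b"h\x00."
try:
    pickle.loads(malformed)
except pickle.UnpicklingError as exc:  # previously a bare KeyError
    print(exc)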
This patch enables downstream projects that inspect a TypedDict subclass at runtime to tell which keys are optional.
This is essential for generating test data with Hypothesis or for validating inputs with typeguard or pydantic.
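A sketch of the resulting introspection, assuming the patch exposes the `__required_keys__` and `__optional_keys__` attributes:

from typing import TypedDict

class Movie(TypedDict):
    title: str

class MovieDetails(Movie, total=False):
    year: int  # optional: declared in a total=False section

print(MovieDetails.__required_keys__)  # frozenset({'title'})
print(MovieDetails.__optional_keys__)  # frozenset({'year'})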
* fix HTTP Digest handling in request.py
There is a bug triggered when a server replies to a request with `WWW-Authenticate: Digest` where `qop="auth,auth-int"` rather than merely `qop="auth"`. Having both `auth` and `auth-int` is legitimate according to the `qop-options` rule in §3.2.1 of RFC 2617 (https://www.ietf.org/rfc/rfc2617.txt):
> qop-options = "qop" "=" <"> 1#qop-value <">
> qop-value = "auth" | "auth-int" | token
> **qop-options**: [...] If present, it is a quoted string **of one or more** tokens indicating the "quality of protection" values supported by the server. The value `"auth"` indicates authentication; the value `"auth-int"` indicates authentication with integrity protection
This description is confirmed by the definition of the `<n>#<m>rule` extended BNF pattern in §2.1 of RFC 2616 (https://www.ietf.org/rfc/rfc2616.txt) as 'a comma-separated list of `rule` with at least <n> and at most <m> items'.
When this reply is parsed by `get_authorization`, request.py only tests the value for identity with `'auth'`, failing to recognize it as one of the supported modes the server announced, and claims that `"qop 'auth,auth-int' is not supported"`.
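A hedged sketch of the required parsing change (not the exact patch): treat the advertised value as the comma-separated list the grammar allows instead of comparing it whole:

qop = 'auth,auth-int'  # as announced by the server
# The old check was effectively `qop == 'auth'`, which rejects the
# compliant reply above.
supported = [v.strip() for v in qop.split(',')]
if 'auth' in supported:
    pass  # proceed and answer with qop="auth"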
* 📜🤖 Added by blurb_it.
* bpo-38686 review fix: remember why.
* fix trailing space in Lib/urllib/request.py
Co-Authored-By: Brandt Bucher <brandtbucher@gmail.com>
new_interpreter() now calls _PyBuiltin_Init() to create the builtins
module and calls _PyImport_FixupBuiltin(), rather than using
_PyImport_FindBuiltin(tstate, "builtins").
pycore_init_builtins() is now responsible for initializing
interp->builtins_copy: _PyImport_Init() is inlined into it and the
function is removed.
If _PyImport_FixupExtensionObject() is called from a subinterpreter,
leave extensions unchanged and don't copy the module dictionary
into def->m_base.m_copy.
The Y2K reference is not needed as it only points out that Python's use
of C standard functions doesn't generally suffer from Y2K issues; the
point regarding conventions for conversion of 2-digit years in
:func:`strptime` is still valid.
The regex http.cookiejar.LOOSE_HTTP_DATE_RE was vulnerable to regular
expression denial of service (REDoS).
LOOSE_HTTP_DATE_RE.match is called when using http.cookiejar.CookieJar
to parse Set-Cookie headers returned by a server.
Processing a response from a malicious HTTP server can lead to extreme
CPU usage and block execution for a long time.
The regex contained multiple overlapping \s* capture groups.
Ignoring the ?-optional capture groups, the regex could be simplified to
\d+-\w+-\d+(\s*\s*\s*)$
Therefore, a long sequence of spaces can trigger bad performance.
Matching a malicious string such as
LOOSE_HTTP_DATE_RE.match("1-c-1" + (" " * 2000) + "!")
caused catastrophic backtracking.
The fix removes ambiguity about which \s* should match a particular
space.
You can create a malicious server that responds with Set-Cookie headers
to attack all Python programs that access it, e.g.:
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_set_cookie_value(n_spaces):
    spaces = " " * n_spaces
    expiry = f"1-c-1{spaces}!"
    return f"b;Expires={expiry}"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.log_request(204)
        self.send_response_only(204)  # Don't bother sending Server and Date
        n_spaces = (
            int(self.path[1:])  # Can GET e.g. /100 to test shorter sequences
            if len(self.path) > 1 else
            65506  # Max header line length 65536
        )
        value = make_set_cookie_value(n_spaces)
        for i in range(99):  # Not necessary, but we can have up to 100 header lines
            self.send_header("Set-Cookie", value)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 44020), Handler).serve_forever()
This server returns 99 Set-Cookie headers. Each has 65506 spaces.
Extracting the cookies will pretty much never complete.
Vulnerable client using the example at the bottom of
https://docs.python.org/3/library/http.cookiejar.html :
import http.cookiejar, urllib.request
cj = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
r = opener.open("http://localhost:44020/")
The popular requests library was also vulnerable without any additional
options (as it uses http.cookiejar by default):
import requests
requests.get("http://localhost:44020/")
* Regression test for http.cookiejar REDoS
If we regress, this test will take a very long time.
* Improve performance of http.cookiejar.ISO_DATE_RE
A string like
"444444" + (" " * 2000) + "A"
could cause poor performance due to the two overlapping \s* groups,
although this was not as serious as the REDoS in LOOSE_HTTP_DATE_RE.
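A quick check of both improvements (the patterns are module-level names in http.cookiejar):

from http.cookiejar import ISO_DATE_RE, LOOSE_HTTP_DATE_RE

# With the fixes applied, both matches fail fast instead of
# backtracking for a long time.
ISO_DATE_RE.match("444444" + (" " * 2000) + "A")
LOOSE_HTTP_DATE_RE.match("1-c-1" + (" " * 2000) + "!")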
The is_cgi() function of the http.server library does not currently handle a
CGI script if one of the cgi_directories is located in a sub-directory of
the given path. Since is_cgi() in the CGIHTTPRequestHandler class separates
the given path into (dir, rest) at the first '/' it sees, a multi-level
path like /sub/dir/cgi-bin/hello.py is divided into head=/sub,
rest=dir/cgi-bin/hello.py, and only '/sub' is checked against
cgi_directories = [..., '/sub/dir/cgi-bin'].
This patch makes is_cgi() keep expanding the dir part to the next '/'
and checking whether that expanded path exists in cgi_directories.
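A hedged sketch of the amended lookup (names and structure simplified from the actual CGIHTTPRequestHandler code):

def find_cgi_dir(path, cgi_directories):
    # Grow the head one '/' at a time and test each prefix, instead
    # of splitting only at the first '/'.
    dir_sep = path.find('/', 1)
    while dir_sep > 0:
        head, rest = path[:dir_sep], path[dir_sep + 1:]
        if head in cgi_directories:
            return head, rest
        dir_sep = path.find('/', dir_sep + 1)
    return None

# '/sub/dir/cgi-bin/hello.py' is now matched against '/sub',
# '/sub/dir' and '/sub/dir/cgi-bin' in turn:
print(find_cgi_dir('/sub/dir/cgi-bin/hello.py',
                   ['/cgi-bin', '/htbin', '/sub/dir/cgi-bin']))
# -> ('/sub/dir/cgi-bin', 'hello.py')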
Signed-off-by: Siwon Kang <kkangshawn@gmail.com>
https://bugs.python.org/issue38863