Éric Araujo 2011-06-19 18:53:31 +02:00
parent 12f2bffce1
commit 348c572dcf
2 changed files with 2 additions and 2 deletions


@@ -110,7 +110,7 @@ class Crawler(BaseClient):
     :param follow_externals: tell if following external links is needed or
                              not. Default is False.
     :param mirrors_url: the url to look on for DNS records giving mirror
-                        adresses.
+                        addresses.
     :param mirrors: a list of mirrors (see PEP 381).
     :param timeout: time in seconds to consider a url has timeouted.
     :param mirrors_max_tries": number of times to try requesting informations


@@ -22,7 +22,7 @@ I think of something like that:
 Then, the server must have only one port to rely on, eg.
->>> server.fulladress()
+>>> server.fulladdress()
 "http://ip:port/"
 It could be simple to have one HTTP server, relaying the requests to the two