12.18 robotparser -- Parser for robots.txt
This module provides a single class, RobotFileParser, which answers questions about whether a particular user agent can fetch a URL on the web site that published the robots.txt file. For more details on the structure of robots.txt files, see http://info.webcrawler.com/mak/projects/robots/norobots.html.
- RobotFileParser ()
- This class provides a set of methods to read, parse and answer questions about a single robots.txt file.
- set_url (url)
- Sets the URL referring to a robots.txt file.
- read ()
- Reads the robots.txt URL and feeds it to the parser.
- parse (lines)
- Parses the lines argument; see the sketch after this list.
- can_fetch (useragent, url)
- Returns true if the useragent is allowed to fetch the url according to the rules contained in the parsed robots.txt file.
- mtime ()
- Returns the time the robots.txt file was last fetched. This is useful for long-running web spiders that need to check for new robots.txt files periodically; a sketch of this re-check pattern follows the example below.
- modified ()
- Sets the time the robots.txt file was last fetched to the current time.
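Because parse() accepts the file's lines directly, a robots.txt body can be fed to the parser without fetching it over the network. The following is a minimal sketch; the rules and the example.com URLs are illustrative assumptions, not taken from a real site.

>>> import robotparser
>>> rp = robotparser.RobotFileParser()
>>> # Hypothetical robots.txt contents; any sequence of lines works.
>>> lines = ["User-agent: *", "Disallow: /cgi-bin/"]
>>> rp.parse(lines)
>>> rp.can_fetch("*", "http://example.com/cgi-bin/search")
0
>>> rp.can_fetch("*", "http://example.com/")
1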
The following example demonstrates basic use of the RobotFileParser class.
>>> import robotparser
>>> rp = robotparser.RobotFileParser()
>>> rp.set_url("http://www.musi-cal.com/robots.txt")
>>> rp.read()
>>> rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search?city=San+Francisco")
0
>>> rp.can_fetch("*", "http://www.musi-cal.com/")
1
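Building on the methods above, mtime() and modified() can be combined to re-fetch a stale robots.txt file in a long-running spider. This is a minimal sketch; the one-hour threshold and the helper name can_fetch_fresh are illustrative assumptions.

import time
import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("http://www.musi-cal.com/robots.txt")
rp.read()
rp.modified()  # record the fetch time

def can_fetch_fresh(rp, useragent, url, max_age=3600):
    # Re-read robots.txt when the cached copy is older than
    # max_age seconds (an assumed one-hour threshold).
    if time.time() - rp.mtime() > max_age:
        rp.read()
        rp.modified()
    return rp.can_fetch(useragent, url)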