This module provides a single class, RobotFileParser, which answers questions about whether or not a particular user agent can fetch a URL on the web site that published the robots.txt file. For more details on the structure of robots.txt files, see http://info.webcrawler.com/mak/projects/robots/norobots.html.
This class provides a set of methods to read, parse and answer questions about a single robots.txt file.
mtime()
Returns the time the robots.txt file was last fetched. This is useful for long-running web spiders that need to check for new robots.txt files periodically.

modified()
Sets the time the robots.txt file was last fetched to the current time.
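Because mtime() and modified() only record a timestamp, it is up to the caller to decide when a cached robots.txt file has gone stale and should be fetched again. The following is a minimal sketch of that pattern; the MAX_AGE constant and the allowed() helper are illustrative names, not part of the module:

import time
import robotparser

MAX_AGE = 3600  # assumed refresh interval in seconds (not part of the module)

rp = robotparser.RobotFileParser()
rp.set_url("http://www.musi-cal.com/robots.txt")
rp.read()
rp.modified()  # record when the file was fetched

def allowed(useragent, url):
    # Re-fetch robots.txt if the cached copy is older than MAX_AGE,
    # then answer the permission question from the (possibly refreshed) rules.
    if time.time() - rp.mtime() > MAX_AGE:
        rp.read()
        rp.modified()
    return rp.can_fetch(useragent, url)

print allowed("*", "http://www.musi-cal.com/")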
The following example demonstrates basic use of the RobotFileParser class.
>>> import robotparser
>>> rp = robotparser.RobotFileParser()
>>> rp.set_url("http://www.musi-cal.com/robots.txt")
>>> rp.read()
>>> rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search?city=San+Francisco")
False
>>> rp.can_fetch("*", "http://www.musi-cal.com/")
True