nutch-dev mailing list archives

From "Enrique Berlanga (JIRA)" <>
Subject [jira] Created: (NUTCH-938) Impossible to fetch sites with robots.txt
Date Tue, 23 Nov 2010 17:52:13 GMT
Impossible to fetch sites with robots.txt 

                 Key: NUTCH-938
             Project: Nutch
          Issue Type: Bug
          Components: fetcher
    Affects Versions: 1.2
         Environment: Red Hat, Nutch 1.2, Java 1.6
            Reporter: Enrique Berlanga

When crawling a site with a robots.txt file like this:
User-agent: *
Disallow: /
no links are followed. 
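
For context, here is a tiny illustration in plain Java (not Nutch's actual parser) of why that file blocks everything: a "Disallow: /" rule under "User-agent: *" prefix-matches every path, so every URL on the site is denied to every crawler.

public class RobotsDemo {
    // Basic robots.txt semantics: a URL is denied when its path starts
    // with a Disallow prefix for the matching user-agent.
    static boolean allowed(String path, String disallowPrefix) {
        return !path.startsWith(disallowPrefix);
    }

    public static void main(String[] args) {
        System.out.println(allowed("/", "/"));           // false
        System.out.println(allowed("/index.html", "/")); // false
        System.out.println(allowed("/any/page", "/"));   // false
    }
}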

The values set for the "protocol.plugin.check.blocking" and "protocol.plugin.check.robots"
properties don't matter, because they are overridden in the class org.apache.nutch.fetcher.Fetcher:

    // set non-blocking & no-robots mode for HTTP protocol plugins.
    getConf().setBoolean(Protocol.CHECK_BLOCKING, false);
    getConf().setBoolean(Protocol.CHECK_ROBOTS, false);
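
To make the effect concrete, here is a minimal, self-contained sketch (class name hypothetical) of why the user's setting is lost: Configuration.setBoolean() simply replaces the stored value, so the hard-coded call in Fetcher wins over anything in nutch-site.xml.

import org.apache.hadoop.conf.Configuration;

public class OverrideDemo {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.setBoolean("protocol.plugin.check.robots", true);  // the user's intent
        conf.setBoolean("protocol.plugin.check.robots", false); // Fetcher's hard-coded override
        // Prints "false": the user's value is gone.
        System.out.println(conf.getBoolean("protocol.plugin.check.robots", true));
    }
}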

False is the desired value, but in the FetcherThread inner class, robot rules are checked anyway,
ignoring the configuration:
    RobotRules rules = protocol.getRobotRules(fit.url, fit.datum);
    if (!rules.isAllowed(fit.u)) {
      LOG.debug("Denied by robots.txt: " + fit.url);
      // ...
    }

I suppose there is no problem in disabling that part of the code directly for the HTTP protocol.
If so, I could submit a patch as soon as possible (along the lines of the sketch below) to get past this.
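
For illustration, a self-contained sketch (all names hypothetical, not an actual patch) of the guard I have in mind: remember the value the user actually configured before Fetcher overrides it, and only apply the FetcherThread robots check when that value is true.

public class RobotsGuardSketch {
    private final boolean checkRobots;

    public RobotsGuardSketch(boolean userConfiguredCheckRobots) {
        // Captured from the job configuration *before* Fetcher forces it to false.
        this.checkRobots = userConfiguredCheckRobots;
    }

    // Mirrors the FetcherThread decision: robots.txt only vetoes the fetch
    // when robots checking is actually enabled.
    public boolean shouldFetch(boolean allowedByRobots) {
        return !checkRobots || allowedByRobots;
    }

    public static void main(String[] args) {
        RobotsGuardSketch guard = new RobotsGuardSketch(false);
        // "Disallow: /" denies the URL, but the guard lets it through
        // because the user disabled robots checking.
        System.out.println(guard.shouldFetch(false)); // true
    }
}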

Thanks in advance

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
