nutch-dev mailing list archives

From Doğacan Güney (JIRA) <j...@apache.org>
Subject [jira] Commented: (NUTCH-446) RobotRulesParser should ignore Crawl-delay values of other bots in robots.txt
Date Thu, 10 May 2007 12:47:15 GMT

    [ https://issues.apache.org/jira/browse/NUTCH-446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12494734 ]

Doğacan Güney commented on NUTCH-446:
-------------------------------------

So, does anyone have objections to this? It fixes an annoying (albeit rare) bug in which Nutch
either doesn't fetch pages even though it is allowed to, or behaves too politely or too
impolitely. And it doesn't seem to break anything.

> RobotRulesParser should ignore Crawl-delay values of other bots in robots.txt
> -----------------------------------------------------------------------------
>
>                 Key: NUTCH-446
>                 URL: https://issues.apache.org/jira/browse/NUTCH-446
>             Project: Nutch
>          Issue Type: Bug
>          Components: fetcher
>    Affects Versions: 0.9.0
>            Reporter: Doğacan Güney
>            Priority: Minor
>             Fix For: 1.0.0
>
>         Attachments: crawl-delay.patch, crawl-delay_test.patch
>
>
> RobotRulesParser doesn't check addRules when reading the Crawl-delay value, so the
Nutch bot will pick up another robot's Crawl-delay value from robots.txt.
> Let me try to be more clear:
> User-agent: foobot
> Crawl-delay: 3600
> User-agent: *
> Disallow: /baz
> Given such a robots.txt file, the Nutch bot will use 3600 as its Crawl-delay
> value, no matter what the Nutch bot's name actually is.
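
The bug described above can be illustrated with a minimal sketch (this is not Nutch's actual RobotRulesParser; the class and method names are hypothetical): the fix is to honor a Crawl-delay directive only while inside a User-agent block that matches our agent, i.e. to guard the Crawl-delay branch with the same addRules flag that guards Allow/Disallow rules.

```java
import java.util.Arrays;
import java.util.List;

/** Hypothetical sketch of the fixed parsing logic, not Nutch's real parser. */
public class CrawlDelaySketch {

    /** Returns the Crawl-delay that applies to the given agent, or -1 if none. */
    static long crawlDelayFor(String agent, List<String> robotsTxtLines) {
        boolean addRules = false; // inside a User-agent block that matches us?
        long delay = -1;
        for (String line : robotsTxtLines) {
            String l = line.trim().toLowerCase();
            if (l.startsWith("user-agent:")) {
                String name = l.substring("user-agent:".length()).trim();
                addRules = name.equals("*") || name.equals(agent.toLowerCase());
            } else if (l.startsWith("crawl-delay:") && addRules) {
                // Without the addRules guard, foobot's 3600 would apply to every bot.
                delay = Long.parseLong(l.substring("crawl-delay:".length()).trim());
            }
        }
        return delay;
    }

    public static void main(String[] args) {
        List<String> robots = Arrays.asList(
            "User-agent: foobot",
            "Crawl-delay: 3600",
            "User-agent: *",
            "Disallow: /baz");
        // nutchbot matches only the "*" group, which sets no Crawl-delay:
        System.out.println(crawlDelayFor("nutchbot", robots)); // -1
        // foobot matches its own group and gets the 3600-second delay:
        System.out.println(crawlDelayFor("foobot", robots));   // 3600
    }
}
```

With the guard removed (the pre-patch behavior), the Crawl-delay line would be read unconditionally and nutchbot would also end up with 3600.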

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

