nutch-dev mailing list archives

From "Markus Jelsma (JIRA)" <>
Subject [jira] [Commented] (NUTCH-1067) Configure minimum throughput for fetcher
Date Wed, 14 Sep 2011 12:19:09 GMT


Markus Jelsma commented on NUTCH-1067:

Committed fixes for NUTCH-1102 (the originating issue) for 1.4 in rev. 1170557. Everything works
again with a clean checkout. My apologies for letting myself be fooled by not running ant
clean more regularly.

Thanks Julien for being so prompt!

> Configure minimum throughput for fetcher
> ----------------------------------------
>                 Key: NUTCH-1067
>                 URL:
>             Project: Nutch
>          Issue Type: New Feature
>          Components: fetcher
>            Reporter: Markus Jelsma
>            Assignee: Markus Jelsma
>            Priority: Minor
>             Fix For: 1.4
>         Attachments: NUTCH-1045-1.4-v2.patch, NUTCH-1067-1.4-1.patch, NUTCH-1067-1.4-2.patch,
NUTCH-1067-1.4-3.patch, NUTCH-1067-1.4-4.patch
> Large fetches can contain many URLs for the same domain. These can be very slow
to crawl due to politeness delays from robots.txt, e.g. 10s per URL. Once all other URLs have
been fetched, these queues can stall the entire fetcher: 60 URLs can then take 10 minutes or
even more. This can usually be dealt with using the time bomb, but the time bomb value is hard
to determine.
> This patch adds a fetcher.throughput.threshold setting: the minimum number of
pages per second below which the fetcher gives up. It doesn't use the global number of pages
divided by running time, but records the actual number of pages processed in the previous
second. This value is compared with the configured threshold.
> Besides the check, the fetcher's status is also updated with the actual number of pages
per second and bytes per second.
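The per-second check described above can be sketched roughly as follows. This is an illustrative sketch only, not the actual patch code: the class and method names are assumptions, and only the fetcher.throughput.threshold property name comes from the description.

```java
// Illustrative sketch of a per-second fetcher throughput check
// (hypothetical names; only the threshold concept comes from the issue).
public class ThroughputCheck {
  private final int threshold;     // minimum pages per second; <= 0 disables the check
  private int pagesThisSecond = 0; // pages fetched since the last one-second tick

  public ThroughputCheck(int threshold) {
    this.threshold = threshold;
  }

  // Called once for every page the fetcher processes.
  public void pageFetched() {
    pagesThisSecond++;
  }

  // Called once per second: returns true if throughput in the *previous*
  // second fell below the threshold (the patch deliberately does not use
  // the global pages/running-time average).
  public boolean belowThreshold() {
    boolean below = threshold > 0 && pagesThisSecond < threshold;
    pagesThisSecond = 0; // reset; only the last second's count matters
    return below;
  }
}
```

In an actual deployment the threshold would be read from the Nutch configuration (e.g. nutch-site.xml) rather than hard-coded.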

This message is automatically generated by JIRA.
