nutch-dev mailing list archives

From "ASF GitHub Bot (JIRA)" <>
Subject [jira] [Commented] (NUTCH-2501) Take into account $NUTCH_HEAPSIZE when crawling using crawl script
Date Fri, 26 Jan 2018 08:56:00 GMT


ASF GitHub Bot commented on NUTCH-2501:

sebastian-nagel commented on a change in pull request #279: NUTCH-2501: Take NUTCH_HEAPSIZE into account when crawling using crawl script

 File path: src/bin/crawl
 @@ -171,6 +175,8 @@ fi
 Review comment:
   Why should the heap size depend on the number of reducers? For a large-scale crawl the
reducers will run independently on different nodes, possibly also sequentially if there are
not enough computing resources available. Since the heap size setting is also used for the
map tasks, and it's often not possible to force a fixed number of map tasks, it's better to
define the heap size per task (usually via the corresponding Hadoop per-task options).
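A minimal sketch of the per-task approach the comment argues for: instead of scaling one client-side heap value by the reducer count, derive identical per-task JVM options from $NUTCH_HEAPSIZE and pass them as generic Hadoop options. The property names (`mapreduce.map.java.opts`, `mapreduce.reduce.java.opts`) are the standard Hadoop 2.x ones; this is an illustration, not the actual bin/crawl patch.

```shell
#!/bin/sh
# Hypothetical sketch: build per-task heap options from $NUTCH_HEAPSIZE
# so every map and reduce task gets the same -Xmx limit, regardless of
# how many tasks the cluster schedules or whether they run sequentially.
NUTCH_HEAPSIZE="${NUTCH_HEAPSIZE:-1024}"   # heap per task in MB, default 1 GB

# Generic Hadoop options applied to both map and reduce task JVMs
# (standard Hadoop 2.x property names, assumed here):
COMMON_OPTIONS="-D mapreduce.map.java.opts=-Xmx${NUTCH_HEAPSIZE}m \
-D mapreduce.reduce.java.opts=-Xmx${NUTCH_HEAPSIZE}m"

# A crawl step would then receive these options, e.g.:
#   bin/nutch generate $COMMON_OPTIONS "$CRAWL_PATH"/crawldb "$CRAWL_PATH"/segments
echo "$COMMON_OPTIONS"
```

The point of the design: a per-task limit holds whether tasks run in parallel on different nodes or sequentially on one, whereas a single client-side heap sized for N reducers over- or under-provisions each task.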

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:

> Take into account $NUTCH_HEAPSIZE when crawling using crawl script
> ------------------------------------------------------------------
>                 Key: NUTCH-2501
>                 URL:
>             Project: Nutch
>          Issue Type: Improvement
>            Reporter: Moreno Feltscher
>            Assignee: Lewis John McGibbney
>            Priority: Major

This message was sent by Atlassian JIRA
