nutch-dev mailing list archives

From "ASF GitHub Bot (JIRA)" <>
Subject [jira] [Commented] (NUTCH-2501) Take into account $NUTCH_HEAPSIZE when crawling using crawl script
Date Mon, 29 Jan 2018 12:33:00 GMT


ASF GitHub Bot commented on NUTCH-2501:

sebastian-nagel commented on a change in pull request #279: NUTCH-2501: Take NUTCH_HEAPSIZE into account when crawling using crawl script

 File path: src/bin/crawl
 @@ -171,6 +175,8 @@ fi
 Review comment:
  In local mode all reducer tasks run in a single JVM instance. Only in pseudo-distributed
mode could this make some sense, provided the remaining resources (e.g. number of CPUs)
allow all reduce tasks to run in parallel. In distributed mode you want to define the maximum
heap size based on the configuration of your cluster nodes, because that determines (in
combination with other resource limits) how many tasks can run in parallel on each node.
The heap size configured for a single task usually defines what is required to run that task
without hitting an out-of-memory error. The YARN resource manager verifies that the heap
size configured for the job tasks does not exceed the resource limits configured on the
cluster nodes; otherwise the job will fail.
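The distinction above can be sketched in shell. This is a hypothetical fragment, not the actual code of src/bin/crawl or of PR #279: it derives a JVM heap flag from $NUTCH_HEAPSIZE (assumed to be megabytes, with an assumed default of 1000) and applies it only in local mode, where every map and reduce task shares one JVM; in distributed mode the per-task heap is left to the job and cluster configuration.

```shell
#!/bin/sh
# Hypothetical sketch: map NUTCH_HEAPSIZE onto a -Xmx option.
NUTCH_HEAPSIZE="${NUTCH_HEAPSIZE:-1000}"    # assumed default of 1000 MB
JAVA_HEAP_MAX="-Xmx${NUTCH_HEAPSIZE}m"

if [ -z "$HADOOP_HOME" ]; then
  # Local mode: one JVM runs all tasks, so a single heap flag is enough.
  echo "local mode, using heap option: $JAVA_HEAP_MAX"
else
  # (Pseudo-)distributed mode: per-task heap comes from the job
  # configuration, and YARN checks it against the node resource limits.
  echo "distributed mode, heap set via job/cluster configuration"
fi
```

In a real cluster the per-task settings referred to here would be properties such as mapreduce.map.memory.mb and mapreduce.map.java.opts, which YARN validates against the container limits of each node.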

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:

> Take into account $NUTCH_HEAPSIZE when crawling using crawl script
> ------------------------------------------------------------------
>                 Key: NUTCH-2501
>                 URL:
>             Project: Nutch
>          Issue Type: Improvement
>            Reporter: Moreno Feltscher
>            Assignee: Lewis John McGibbney
>            Priority: Major

This message was sent by Atlassian JIRA
