mahout-user mailing list archives

From: Tamas Jambor <jambo...@googlemail.com>
Subject: Re: new to hadoop
Date: Mon, 03 May 2010 16:10:02 GMT
Thanks. One step closer: I needed to assign -Xmx2048m to make it work.
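(Roughly like this in conf/mapred-site.xml, following the mapred.child.java.opts suggestion below -- a sketch, not the exact file:

  <property>
    <name>mapred.child.java.opts</name>
    <!-- up to 2GB of heap per map/reduce child JVM -->
    <value>-Xmx2048m</value>
  </property>
)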

Now I get the following error:

Task attempt_201005031452_0003_m_000001_0 failed to report status for 
600 seconds. Killing!

Then it reassigns the task to other nodes, but they all fail the same way.

This happens with the third job
(RecommenderJob-UserVectorToCooccurrenceMapper-UserVectorToCooccurrenceReducer).
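(If I understand it right, the 600 seconds is the default mapred.task.timeout of 600000 ms, i.e. the attempt is killed because it reports no progress for that long. A sketch of raising it in conf/mapred-site.xml, assuming the job is genuinely slow rather than hung:

  <property>
    <name>mapred.task.timeout</name>
    <!-- milliseconds a task may go without reporting progress; 0 disables the timeout -->
    <value>1800000</value>
  </property>
)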

By the way, I don't understand why the mapper runs the task on only 2
nodes; when I run the sample MapReduce word-count example, it uses all
the available nodes.

Tamas


On 03/05/2010 11:51, Sean Owen wrote:
> Not sure I understand the question -- all jobs need to run for the
> recommendations to complete. It is a process with about 5 distinct
> mapreduces. Which one fails with an OOME? They have names; you can see
> them in the console.
>
> Are you giving Hadoop workers enough memory? By default they can only
> use something like 64MB, which is far too little. You need to, for example, in
> conf/mapred-site.xml, add a new property named
> “mapred.child.java.opts” with value “-Xmx1024m” to give workers up to
> 1GB of heap. They probably don't need that much, but might as well not
> limit it.
>    

