mahout-user mailing list archives

From Sean Owen <sro...@gmail.com>
Subject Re: new to hadoop
Date Mon, 03 May 2010 10:51:50 GMT
Not sure I understand the question -- all jobs need to run for the
recommendations to complete. It is a process of about five distinct
MapReduce jobs. Which one fails with an OOME? They have names, which
you can see in the console.

Are you giving the Hadoop workers enough memory? By default they can
only use around 64MB, which is far too little. You need to, for
example, add a new property named "mapred.child.java.opts" with the
value "-Xmx1024m" to conf/mapred-site.xml to give workers up to 1GB of
heap. They probably don't need that much, but you might as well not
limit it.
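Concretely, that property goes inside the <configuration> element of conf/mapred-site.xml; a minimal sketch of the entry described above:

```xml
<!-- conf/mapred-site.xml -->
<configuration>
  <property>
    <!-- JVM options passed to each task's child JVM -->
    <name>mapred.child.java.opts</name>
    <!-- allow up to 1GB of heap per worker -->
    <value>-Xmx1024m</value>
  </property>
</configuration>
```

Restart the TaskTrackers (or resubmit in a fresh local run) so the new setting is picked up by subsequently launched tasks.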
