hadoop-mapreduce-dev mailing list archives

From Karthik Kambatla <ka...@cloudera.com>
Subject Re: JVM vs container memory configs
Date Mon, 06 May 2013 16:59:40 GMT
Thanks a lot for your inputs, Bobby and Harsh.

Harsh - Agree that we should raise the heap sizes, particularly for your
second reason. Once we arrive at a good default value, I think we should
reduce the memory.mb values, if possible, to increase the number of
containers and the parallelism in the cluster.
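
To make the parallelism point concrete (the numbers below are purely
illustrative, not proposed defaults): on a NodeManager offering 8 GB to
containers, the per-task memory.mb directly caps how many task containers
can run on that node at once:

    yarn.nodemanager.resource.memory-mb = 8192
    mapreduce.map.memory.mb = 1024  ->  at most 8192/1024 = 8 map containers per node
    mapreduce.map.memory.mb = 512   ->  at most 8192/512  = 16 map containers per node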

Would it be right to say that the required headroom between the heap size and
the container size doesn't depend on what the task does or on the task's
input? If that is the case, I intend to run a few experiments to find a
reasonable value for this headroom.
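
As a strawman for those experiments (placeholder numbers, only to make the
heap/container relationship concrete), the layout would look something like:

    mapred.child.java.opts   = -Xmx800m
    mapreduce.map.memory.mb  = 1024     (800 MB heap + ~224 MB headroom)

where the headroom has to cover the non-heap parts of the JVM (permgen,
thread stacks, direct/native buffers) and anything else charged to the
container.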

On Sun, May 5, 2013 at 9:11 PM, Harsh J <harsh@cloudera.com> wrote:

> While I think work should be done to bring the two numbers closer together,
> we should ideally raise the JVM heap value rather than lower the memory.mb
> resource request of MR tasks. Otherwise, with YARN, users will start seeing
> more containers per node than before.
> It's also good to raise the heap plus the mappers' sort buffer memory now,
> since the default HDFS block size has also doubled to 128 MB. I think we
> already have a JIRA open for this.
>> Hi
>> While looking into MAPREDUCE-5207 (adding defaults for
>> mapreduce.{map|reduce}.memory.mb), I was wondering how much headroom should
>> be left on top of mapred.child.java.opts (or other similar JVM opts) for
>> the container memory itself?
>> Currently, mapred.child.java.opts (per mapred-default.xml) is set to a
>> 200 MB heap (-Xmx200m) by default. The default for
>> mapreduce.{map|reduce}.memory.mb is 1024 in the code, which is
>> significantly higher than the 200 MB value.
>> Do we need more than 100 MB of non-JVM memory per container? If so, does
>> it make sense to make that headroom a config property in itself, and to
>> have the code verify that the three values are consistent?
>> Thanks
>> Karthik
