spark-user mailing list archives

From Greg Hill <greg.h...@RACKSPACE.COM>
Subject Re: Spark on YARN driver memory allocation bug?
Date Thu, 09 Oct 2014 14:05:39 GMT
$MASTER is 'yarn-cluster' in spark-env.sh

spark-submit --driver-memory 12424m --class org.apache.spark.examples.SparkPi /usr/lib/spark-yarn/lib/spark-examples*.jar 1000
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000006fd280000, 4342677504, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 4342677504 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/jvm-3525/hs_error.log
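As a side note on the log above, the allocation that failed is roughly 4 GiB; a quick sanity check of the size (a throwaway snippet, not part of the original thread):

```python
# Convert the failed commit_memory size from the hs_err log to GiB.
bytes_failed = 4342677504
gib = bytes_failed / 2**30
print(round(gib, 2))  # about 4.04 GiB
```

So the client-side JVM is dying while trying to commit a multi-gigabyte chunk of heap on a machine that doesn't have that much free memory, which is consistent with --driver-memory being applied locally.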


From: Andrew Or <andrew@databricks.com>
Date: Wednesday, October 8, 2014 3:25 PM
To: Greg <greg.hill@rackspace.com>
Cc: "user@spark.apache.org" <user@spark.apache.org>
Subject: Re: Spark on YARN driver memory allocation bug?

Hi Greg,

It does seem like a bug. What is the particular exception message that you see?

Andrew

2014-10-08 12:12 GMT-07:00 Greg Hill <greg.hill@rackspace.com>:
So, I think this is a bug, but I wanted to get some feedback before reporting it as such. On Spark on YARN 1.1.0, if you specify a --driver-memory value higher than the memory available on the client machine, Spark errors out because it fails to allocate that much memory locally. This happens even in yarn-cluster mode. Shouldn't it allocate that memory only on the YARN node that is going to run the driver process, not on the local client machine?

Greg
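One possible workaround to try (an untested sketch, not a confirmed fix): pass the driver memory as a Spark conf property instead of via --driver-memory, on the chance that spark-submit only applies the dedicated flag to the client JVM's own heap settings. The --conf flag is part of spark-submit; whether this actually sidesteps the client-side allocation is an assumption here.

```shell
# Hypothetical workaround: request driver memory via spark.driver.memory
# rather than --driver-memory, so the client launcher JVM may keep its
# default heap while the YARN-side driver container still gets 12g.
spark-submit \
  --conf spark.driver.memory=12g \
  --class org.apache.spark.examples.SparkPi \
  /usr/lib/spark-yarn/lib/spark-examples*.jar 1000
```

If that still fails the same way, it would suggest the client applies spark.driver.memory to its own JVM regardless of how it is supplied.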


