spark-user mailing list archives

From Tsai Li Ming <mailingl...@ltsai.com>
Subject Re: Setting SPARK_MEM higher than available memory in driver
Date Fri, 28 Mar 2014 06:10:14 GMT
Thanks. Got it working.
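
(For readers finding this in the archive: below is a minimal sketch of the kind of invocation Aaron's suggestion leads to. The property name spark.executor.memory comes from his reply; passing it as a Java system property via SPARK_JAVA_OPTS is an assumption based on Spark 0.9-era configuration, and the 100g value and master URL are the placeholders from the original question.)

$ SPARK_JAVA_OPTS="-Dspark.executor.memory=100g" MASTER=spark://XXX:7077 bin/spark-shell

On Spark 1.0 and later, spark-shell forwards spark-submit options instead, so the equivalent would be: bin/spark-shell --master spark://XXX:7077 --executor-memory 100g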

On 28 Mar, 2014, at 2:02 pm, Aaron Davidson <ilikerps@gmail.com> wrote:

> Assuming you're using a new enough version of Spark, you should use spark.executor.memory to set the memory for your executors, without changing the driver memory. See the docs for your version of Spark.
> 
> 
> On Thu, Mar 27, 2014 at 10:48 PM, Tsai Li Ming <mailinglist@ltsai.com> wrote:
> Hi,
> 
> My worker nodes have more memory than the host from which I'm submitting my driver program, but it seems that SPARK_MEM also sets the -Xmx of the spark shell?
> 
> $ SPARK_MEM=100g MASTER=spark://XXX:7077 bin/spark-shell
> 
> Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f736e130000, 205634994176, 0) failed; error='Cannot allocate memory' (errno=12)
> #
> # There is insufficient memory for the Java Runtime Environment to continue.
> # Native memory allocation (malloc) failed to allocate 205634994176 bytes for committing reserved memory.
> 
> I want to allocate at least 100GB of memory per executor. The allocated memory on the executor seems to depend on the -Xmx heap size of the driver?
> 
> Thanks!
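
(For context on the numbers above: 205634994176 bytes is roughly 191.5 GiB, far more than the submitting host has, which is why the driver JVM aborts locally before any executors are launched. SPARK_MEM sizes the shell's own heap as well as the executors'.)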