spark-user mailing list archives

From: Jean-Baptiste Onofré <...@nanthrax.net>
Subject: Re: OutOfMemoryError
Date: Mon, 05 Oct 2015 08:06:12 GMT
Hi Ramkumar,

Did you try increasing the Xmx of the workers?
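
For example, with spark-submit on YARN -- a minimal sketch; the memory
values, executor count, class name (com.example.MyJob) and jar name are
illustrative, not tuned recommendations. The log trace below shows
-Xms1024m/-Xmx1024m, i.e. the default 1g executor heap:

  # --executor-memory sets the heap (-Xmx) of each executor JVM;
  # --driver-memory sizes the driver JVM (the ApplicationMaster in
  # yarn-cluster mode).
  spark-submit \
    --master yarn-cluster \
    --driver-memory 4g \
    --executor-memory 8g \
    --num-executors 10 \
    --class com.example.MyJob \
    my-job.jar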

Regards
JB

On 10/05/2015 08:56 AM, Ramkumar V wrote:
> Hi,
>
> When I submit a Java Spark job in cluster mode, I'm getting the following
> exception.
>
> *LOG TRACE:*
>
> INFO yarn.ExecutorRunnable: Setting up executor with commands:
> List({{JAVA_HOME}}/bin/java, -server, -XX:OnOutOfMemoryError='kill %p',
> -Xms1024m, -Xmx1024m, -Djava.io.tmpdir={{PWD}}/tmp,
> '-Dspark.ui.port=0', '-Dspark.driver.port=48309',
> -Dspark.yarn.app.container.log.dir=<LOG_DIR>,
> org.apache.spark.executor.CoarseGrainedExecutorBackend,
> --driver-url, akka.tcp://sparkDriver@ip:port/user/CoarseGrainedScheduler,
> --executor-id, 2, --hostname, hostname, --cores, 1, --app-id,
> application_1441965028669_9009, --user-class-path, file:$PWD/__app__.jar,
> --user-class-path, file:$PWD/json-20090211.jar,
> 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr).
>
> I have a cluster of 11 machines (9 with 64 GB of memory and 2 with 32 GB).
> My input data is 128 GB in size.
>
> How do I solve this exception? Does it depend on the driver.memory and
> executor.memory settings?
>
>
> *Thanks*,
> <https://in.linkedin.com/in/ramkumarcs31>
>
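
Regarding the driver.memory / executor.memory question above: a minimal Java
sketch, assuming Spark 1.x on YARN. The property names are standard Spark
configuration keys, but the values are illustrative only. Note that
spark.driver.memory must be set before the driver JVM starts, so in cluster
mode it only takes effect when passed via spark-submit, not from application
code.

  import org.apache.spark.SparkConf;
  import org.apache.spark.api.java.JavaSparkContext;

  public class MemoryConfigSketch {
      public static void main(String[] args) {
          // spark.executor.memory becomes the -Xmx of each executor JVM;
          // spark.yarn.executor.memoryOverhead (in MB) adds off-heap
          // headroom to the YARN container request.
          SparkConf conf = new SparkConf()
              .setAppName("MemoryConfigSketch")
              .set("spark.executor.memory", "8g")
              .set("spark.yarn.executor.memoryOverhead", "1024");
          JavaSparkContext sc = new JavaSparkContext(conf);
          // ... job logic ...
          sc.stop();
      }
  }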

-- 
Jean-Baptiste Onofré
jbonofre@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org

