spark-user mailing list archives

From Pa Rö <paul.roewer1...@googlemail.com>
Subject spark submit configuration on yarn
Date Tue, 14 Jul 2015 08:43:27 GMT
Hello community,

I want to run my Spark app on a cluster (Cloudera 5.4.4) with 3 nodes (one PC has an
8-core i7 with 16GB RAM). Now I want to submit my Spark job on YARN (20GB
RAM).

At the moment, my script to submit the job is the following:

export HADOOP_CONF_DIR=/etc/hadoop/conf/
./spark-1.3.0-bin-hadoop2.4/bin/spark-submit \
  --class mgm.tp.bigdata.ma_spark.SparkMain \
  --master yarn-cluster \
  --executor-memory 9G \
  --total-executor-cores 16 \
  ma-spark.jar \
  1000
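
If I understand the spark-submit docs correctly, --total-executor-cores only applies to
standalone and Mesos mode, while on YARN the executor count and cores per executor are
set with --num-executors and --executor-cores. A sketch of that style of submit (the
executor count, cores, and memory below are only placeholder assumptions, not tuned values):

# one executor per node, a few cores and ~4GB each (illustrative assumptions only)
export HADOOP_CONF_DIR=/etc/hadoop/conf/
./spark-1.3.0-bin-hadoop2.4/bin/spark-submit \
  --class mgm.tp.bigdata.ma_spark.SparkMain \
  --master yarn-cluster \
  --num-executors 3 \
  --executor-cores 4 \
  --executor-memory 4G \
  ma-spark.jar \
  1000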

Maybe my current configuration is not optimal?

best regards,
paul
