spark-user mailing list archives

From "梅西0247" <zhen...@dtdream.com>
Subject Re: ApplicationMaster + Fair Scheduler + Dynamic resource allocation
Date Tue, 30 Aug 2016 13:21:35 GMT


1) Is that what you want?
   spark.yarn.am.memory     when using yarn-client
   spark.driver.memory      when using yarn-cluster
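   For example, in yarn-client mode something like this should cap the AM at 512m (a rough
   sketch; the class and jar names are only placeholders):
     spark-submit --master yarn --deploy-mode client \
       --conf spark.yarn.am.memory=512m \
       --class com.example.MyApp myapp.jar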
2) I think you need to set these configs in spark-defaults.conf:
   spark.dynamicAllocation.minExecutors
   spark.dynamicAllocation.maxExecutors
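   For example, something like this in spark-defaults.conf lets dynamic allocation scale
   between one and four executors (the numbers are only examples, tune them for your node):
     spark.dynamicAllocation.minExecutors       1
     spark.dynamicAllocation.initialExecutors   1
     spark.dynamicAllocation.maxExecutors       4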


3) It's not about the Fair Scheduler. Instead of using a MapReduce conf, you need to set an
environment variable, like this:
   export SPARK_EXECUTOR_CORES=6
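   For example, you could export it in conf/spark-env.sh so it applies to every submit, or set
   the equivalent property in spark-defaults.conf (a sketch; 6 is only an example value):
     # conf/spark-env.sh
     export SPARK_EXECUTOR_CORES=6
     # or, equivalently, in spark-defaults.conf
     spark.executor.cores    6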
------------------------------------------------------------------
From: Cleosson José Pirani de Souza <csouza@daitangroup.com>
Sent: Tuesday, 30 August 2016 19:30
To: user <user@spark.apache.org>
Subject: ApplicationMaster + Fair Scheduler + Dynamic resource allocation
Hi,

I am using Spark 1.6.2 and Hadoop 2.7.2 in a single-node cluster (Pseudo-Distributed Operation
settings, for testing purposes). For every Spark application that I submit I get:
  - an ApplicationMaster with 1024 MB of RAM and 1 vcore
  - one container with 1024 MB of RAM and 1 vcore

I have three questions about using dynamic allocation and the Fair Scheduler:
  1) How do I set the ApplicationMaster's max memory to 512m?
  2) How do I get more than one container running per application? (With dynamic allocation I
     cannot set spark.executor.instances.)
  3) I noticed that YARN ignores yarn.app.mapreduce.am.resource.mb,
     yarn.app.mapreduce.am.resource.cpu-vcores and yarn.app.mapreduce.am.command-opts when the
     scheduler is Fair. Am I right?

My settings:

Spark
  # spark-defaults.conf
  spark.driver.memory                512m
  spark.yarn.am.memory               512m
  spark.executor.memory              512m
  spark.executor.cores               2
  spark.dynamicAllocation.enabled    true
  spark.shuffle.service.enabled      true

YARN
  # yarn-site.xml
  yarn.scheduler.maximum-allocation-vcores    32
  yarn.scheduler.minimum-allocation-vcores    1
  yarn.scheduler.maximum-allocation-mb        16384
  yarn.scheduler.minimum-allocation-mb        64
  yarn.scheduler.fair.preemption              true
  yarn.resourcemanager.scheduler.class        org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler
  yarn.nodemanager.aux-services               spark_shuffle

  # mapred-site.xml
  yarn.app.mapreduce.am.resource.mb           512
  yarn.app.mapreduce.am.resource.cpu-vcores   1
  yarn.app.mapreduce.am.command-opts          -Xmx384
  mapreduce.map.memory.mb                     1024
  mapreduce.map.java.opts                     -Xmx768m
  mapreduce.reduce.memory.mb                  1024
  mapreduce.reduce.java.opts                  -Xmx768m
Thanks in advance,
Cleosson