spark-user mailing list archives

From "梅西0247" <>
Subject Re: ApplicationMaster + Fair Scheduler + Dynamic resource allocation
Date Tue, 30 Aug 2016 13:21:35 GMT

1) Is that what you want?
   spark.yarn.am.memory     when yarn-client
   spark.driver.memory      when yarn-cluster
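A minimal sketch of point 1 as a spark-defaults.conf fragment. The split between the two properties is standard Spark-on-YARN behavior; the 512m value comes from the original question:

```
# spark-defaults.conf
# yarn-client mode: the driver runs on the client machine, so the
# ApplicationMaster is a separate lightweight process whose container
# size is controlled by spark.yarn.am.memory
spark.yarn.am.memory   512m

# yarn-cluster mode: the driver runs inside the ApplicationMaster,
# so the AM container size follows spark.driver.memory instead
spark.driver.memory    512m
```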
2) I think you need to set these configs in spark-defaults.conf
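A sketch of the settings point 2 is referring to. The property names are the standard Spark 1.6 dynamic-allocation bounds (with dynamic allocation on, the executor count floats between them instead of being fixed by spark.executor.instances); the bound values here are illustrative, not from the original thread:

```
# spark-defaults.conf
spark.dynamicAllocation.enabled           true
spark.shuffle.service.enabled             true
# executor count floats between these bounds
spark.dynamicAllocation.minExecutors      1
spark.dynamicAllocation.initialExecutors  2
spark.dynamicAllocation.maxExecutors      4
```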

3) It's not about the fair scheduler. Instead of using a MapReduce conf, you need to set an env variable
like this: export SPARK_EXECUTOR_CORES=6
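A minimal sketch of where that export would live, assuming conf/spark-env.sh (any shell that launches spark-submit works as well); the value 6 is just the example from this reply:

```
# conf/spark-env.sh
# read by spark-submit at launch; SPARK_EXECUTOR_CORES is the
# environment-variable counterpart of spark.executor.cores
export SPARK_EXECUTOR_CORES=6
```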
------------------------------------------------------------------
From: Cleosson José Pirani de Souza <>
Sent: Tuesday, 30 August 2016 19:30
To: user <>
Subject: ApplicationMaster + Fair Scheduler + Dynamic resource allocation
I am using Spark 1.6.2 and Hadoop 2.7.2 in a single-node cluster (Pseudo-Distributed Operation settings, for testing purposes). For every Spark application that I submit I get:
  - An ApplicationMaster with 1024 MB of RAM and 1 vcore
  - One container with 1024 MB of RAM and 1 vcore

I have three questions about using dynamic allocation and the Fair Scheduler:
  1) How do I set the ApplicationMaster max memory to 512m?
  2) How do I get more than one container running per application? (Using dynamic allocation I cannot set spark.executor.instances.)
  3) I noticed that YARN ignores spark.executor.cores when the scheduler is Fair, am I right?

My settings:

Spark
  # spark-defaults.conf
  spark.driver.memory              512m
  spark.executor.memory            512m
  spark.executor.cores             2
  spark.dynamicAllocation.enabled  true
  spark.shuffle.service.enabled    true

YARN
  # yarn-site.xml
  yarn.scheduler.maximum-allocation-vcores    32
  yarn.scheduler.minimum-allocation-vcores    1
  yarn.scheduler.maximum-allocation-mb        16384
  yarn.scheduler.minimum-allocation-mb        64
  yarn.scheduler.fair.preemption              true
  yarn.resourcemanager.scheduler.class        org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler
  yarn.nodemanager.aux-services               spark_shuffle

  # mapred-site.xml
  mapreduce.map.memory.mb       512
  mapreduce.map.cpu.vcores      1
  mapreduce.map.java.opts       -Xmx384m
  mapreduce.reduce.java.opts    -Xmx768m
  mapreduce.reduce.memory.mb    1024
Thanks in advance,
Cleosson