spark-user mailing list archives

From Antony Mayi <antonym...@yahoo.com.INVALID>
Subject Re: HW imbalance
Date Mon, 26 Jan 2015 16:25:26 GMT
I should have said I am running in yarn-client mode. All I can see is specifying the generic
executor memory that is then used in all containers.
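To illustrate what I mean by "generic": all I know to set is a single executor memory, either in
spark-defaults.conf or on the command line. A minimal sketch (the values and the app name are just
placeholders, not my actual config):

# spark-defaults.conf - one value requested for every executor container
spark.executor.memory   30g

# or equivalently on the command line (app.py is a placeholder)
spark-submit --master yarn-client --executor-memory 30g --num-executors 12 app.py

Either way the same amount of memory is requested for every container, regardless of which node it
lands on.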

On Monday, 26 January 2015, 16:48, Charles Feduke <charles.feduke@gmail.com> wrote:

 You should look at using Mesos. This should abstract away the individual hosts into a pool
of resources and make the different physical specifications manageable.

I haven't tried configuring Spark Standalone mode to have different specs on different machines
but based on spark-env.sh.template:
# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
it looks like you should be able to mix. (It's not clear to me whether SPARK_WORKER_MEMORY
is uniform across the cluster or applies only to the machine where the config file resides.)
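If spark-env.sh is indeed read per worker, then a sketch like the following might let the bigger
machines offer more memory (values are illustrative and I haven't tested this):

# spark-env.sh on the 36GB nodes (leaving some headroom for the OS)
SPARK_WORKER_CORES=10
SPARK_WORKER_MEMORY=30g

# spark-env.sh on the 128GB nodes
SPARK_WORKER_CORES=10
SPARK_WORKER_MEMORY=120g

The larger workers would then have more memory to hand out to executors on those hosts.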

On Mon Jan 26 2015 at 8:07:51 AM Antony Mayi <antonymayi@yahoo.com.invalid> wrote:

Hi,
is it possible to mix hosts with (significantly) different specs within a cluster (without
wasting the extra resources)? For example, I have 10 nodes with 36GB RAM/10 CPUs and am now trying
to add 3 hosts with 128GB RAM/10 CPUs - is there a way for the Spark executors to utilize the extra
memory (my understanding is that all Spark executors must have the same memory)?
thanks, Antony.
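
PS: my assumption is that on YARN each NodeManager advertises its own capacity via yarn-site.xml,
so a rough sketch of what I was hoping for (values illustrative, untested):

<!-- yarn-site.xml on the 36GB nodes -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>32768</value>  <!-- ~32GB, illustrative -->
</property>

<!-- yarn-site.xml on the 128GB nodes -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>122880</value>  <!-- ~120GB, illustrative -->
</property>

i.e. executors stay the same size, but the bigger nodes would simply fit more of them.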

