spark-user mailing list archives

From: Jean Georges Perrin <...@jgp.net>
Subject: Re: spark.executor.cores
Date: Fri, 15 Jul 2016 12:48:00 GMT
Thanks Nihed, this is one of the tests I did :( still not working



> On Jul 15, 2016, at 8:41 AM, nihed mbarek <nihedmm@gmail.com> wrote:
> 
> can you try with:
>     SparkConf conf = new SparkConf().setAppName("NC Eatery app")
>             .set("spark.executor.memory", "4g")
>             .setMaster("spark://10.0.100.120:7077");
>     if (restId == 0) {
>         conf = conf.set("spark.executor.cores", "22");
>     } else {
>         conf = conf.set("spark.executor.cores", "2");
>     }
>     JavaSparkContext javaSparkContext = new JavaSparkContext(conf);
> 
> On Fri, Jul 15, 2016 at 2:31 PM, Jean Georges Perrin <jgp@jgp.net> wrote:
> Hi,
> 
> Configuration: standalone cluster, Java, Spark 1.6.2, 24 cores
> 
> My process uses all the cores of my server (good), but I am trying to limit it so I can actually submit a second job.
> 
> I tried
> 
>     SparkConf conf = new SparkConf().setAppName("NC Eatery app")
>             .set("spark.executor.memory", "4g")
>             .setMaster("spark://10.0.100.120:7077");
>     if (restId == 0) {
>         conf = conf.set("spark.executor.cores", "22");
>     } else {
>         conf = conf.set("spark.executor.cores", "2");
>     }
>     JavaSparkContext javaSparkContext = new JavaSparkContext(conf);
> 
> and
> 
>     SparkConf conf = new SparkConf().setAppName("NC Eatery app")
>             .set("spark.executor.memory", "4g")
>             .setMaster("spark://10.0.100.120:7077");
>     if (restId == 0) {
>         conf.set("spark.executor.cores", "22");
>     } else {
>         conf.set("spark.executor.cores", "2");
>     }
>     JavaSparkContext javaSparkContext = new JavaSparkContext(conf);
> 
> but it does not seem to take effect. Any hint?
> 
> jg
> 
> -- 
> 
> M'BAREK Med Nihed,
> Fedora Ambassador, TUNISIA, Northern Africa
> http://www.nihed.com
> 
> http://tn.linkedin.com/in/nihed
> 


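A note on the configuration discussed in this thread: on a standalone cluster, spark.executor.cores only sets the number of cores per executor, while the total number of cores an application claims across the cluster is governed by spark.cores.max, which is unbounded by default, so a single job can still grab all 24 cores. Below is a minimal sketch combining the two settings, reusing the app name and master URL from the thread; the class name LimitedCoresApp is hypothetical.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    // Hypothetical driver class illustrating how to cap an application's
    // total core usage on a standalone cluster.
    public class LimitedCoresApp {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                    .setAppName("NC Eatery app")
                    .set("spark.executor.memory", "4g")
                    // Cores used by each executor on a worker.
                    .set("spark.executor.cores", "2")
                    // Upper bound on the total cores this application may
                    // claim across the whole standalone cluster; leaving it
                    // unset lets one job take every available core.
                    .set("spark.cores.max", "22")
                    .setMaster("spark://10.0.100.120:7077");

            JavaSparkContext javaSparkContext = new JavaSparkContext(conf);

            // ... job logic here ...

            javaSparkContext.stop();
        }
    }

With spark.cores.max set to 22 on the 24-core server described above, two cores would stay free for a second submission, which is the goal stated in the original message.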