spark-user mailing list archives

From Sean Owen <>
Subject Re: Executors not utilized properly.
Date Tue, 17 Jun 2014 18:39:54 GMT
It sounds like your job has 9 tasks and all are executing simultaneously in
parallel. This is as good as it gets, right? Are you asking how to break the
work into more tasks, like 120 to match your 10*12 cores? Make your RDD
have more partitions. For example, the textFile method can override the
default number of partitions determined by HDFS splits.
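To make the arithmetic concrete, here is a minimal Scala sketch of the slot math described above. The numbers (10 executors, 12 cores) come from the question below; the `idleSlots` helper and the file path in the comments are hypothetical, used only for illustration.

```scala
object TaskSlots {
  // Simple model: tasks run in parallel across executor cores ("slots").
  // With 10 executors x 12 cores there are 120 slots, so a 9-partition
  // RDD leaves 111 of them idle.
  def idleSlots(executors: Int, coresPerExecutor: Int, tasks: Int): Int = {
    val slots = executors * coresPerExecutor
    math.max(slots - tasks, 0)
  }

  def main(args: Array[String]): Unit = {
    println(idleSlots(10, 12, 9))    // 111 slots idle with only 9 tasks
    println(idleSlots(10, 12, 120))  // 0 idle once the RDD has 120 partitions

    // In Spark itself, the partition count can be raised at load time via
    // textFile's second argument (a minimum partition count), e.g.:
    //   val rdd = sc.textFile("hdfs:///path/to/input", 120)
    // or by reshuffling an existing RDD:
    //   val wider = rdd.repartition(120)
  }
}
```

The trade-off is that `repartition` triggers a shuffle, so raising the partition count at load time is usually cheaper when the source supports it.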
On Jun 17, 2014 5:37 PM, "abhiguruvayya" <> wrote:

> I am creating around 10 executors with 12 cores and 7g memory, but when I
> launch a job, not all executors are being used. For example, if my job has 9
> tasks, only 3 executors are being used with 3 tasks each, and I believe this
> is making my app slower than a MapReduce program for the same use case. Can
> anyone throw some light on executor configuration, if any? How can I use all
> the executors? I am running Spark on YARN with Hadoop 2.4.0.
