spark-user mailing list archives

From Andrew Ehrlich <and...@aehrlich.com>
Subject Re: Heavy Stage Concentration - Ends With Failure
Date Wed, 20 Jul 2016 03:20:42 GMT
Yeah, this is a good suggestion; also check the 25th percentile, median, and 75th percentile to see
how skewed the input data is.
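
If you'd rather get those numbers outside the UI, here is a rough sketch of counting records per
partition (Scala, assuming a spark-shell session; "rdd" is a placeholder for whichever RDD feeds
the slow stage):

  // Rough sketch: count records per partition to eyeball skew.
  // "rdd" is a placeholder for whichever RDD feeds the slow stage.
  val counts = rdd
    .mapPartitionsWithIndex((idx, iter) => Iterator((idx, iter.size)))
    .collect()

  val sizes = counts.map(_._2).sorted
  def pct(p: Double) = sizes(((sizes.length - 1) * p).toInt)
  println(s"min=${sizes.head} p25=${pct(0.25)} median=${pct(0.5)} p75=${pct(0.75)} max=${sizes.last}")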

If you find that the RDD’s partitions are skewed, you can fix it either by changing the
partitioner when you read the files, as already suggested, or by calling repartition(<int>)
on the RDD before the bottleneck to redistribute the data among the partitions via a shuffle.
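
For reference, a minimal sketch of both options (Scala, assuming a spark-shell session where sc
is available; the input path, key extraction, and the partition count of 120 are placeholders,
not recommendations):

  import org.apache.spark.HashPartitioner

  // Option 1: hash-partition by key right after reading, so downstream
  // stages see a more even spread of keys across partitions.
  val lines  = sc.textFile("hdfs:///path/to/input")            // placeholder path
  val byKey  = lines.map(line => (line.split(",")(0), line))   // placeholder key extraction
  val hashed = byKey.partitionBy(new HashPartitioner(120))

  // Option 2: force a shuffle just before the bottleneck to spread the
  // existing data evenly across partitions (and hence executors).
  val rebalanced = hashed.repartition(120)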

> On Jul 19, 2016, at 6:19 PM, Kuchekar <kuchekar.nilesh@gmail.com> wrote:
> 
> Hi,
> 
> Can you check whether the RDD is partitioned correctly, with the right number of partitions
> (if you are setting the partition count manually)? Try using a HashPartitioner while reading
> the files.
> 
> One way you can debug this is by checking the number of records each executor has processed
> compared to the others, in the Stages tab of the Spark UI.
> 
> Kuchekar, Nilesh
> 
> On Tue, Jul 19, 2016 at 8:16 PM, Aaron Jackson <ajackson@pobox.com> wrote:
> Hi,
> 
> I have a cluster with 15 nodes, of which 5 are HDFS nodes.  I kick off a job that creates
> some 120 stages.  Eventually, the active and pending stages reduce down to a small bottleneck,
> and it never fails: the 10 (or so) running tasks are always allocated to the same executor
> on the same host.
> 
> Sooner or later, it runs out of memory ... or some other resource.  It falls over and then
> the tasks are reallocated to another executor.
> 
> Why do we see such a heavy concentration of tasks on a single executor when other executors
> are free?  Were the tasks assigned to an executor when the job was decomposed into stages?
> 

