spark-user mailing list archives

From DB Tsai <dbt...@dbtsai.com>
Subject Re: Spark KMeans hangs at reduceByKey / collectAsMap
Date Tue, 14 Oct 2014 23:26:20 GMT
I saw a similar bottleneck in the reduceByKey operation. Maybe we could
implement a treeReduceByKey to relieve the pressure on the single
executor that ends up reducing a hot key.
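To make the idea concrete: there is no treeReduceByKey in Spark; one possible way to get the same effect is "salting" each key so partial sums are computed by many reducers before a final merge. The sketch below is a plain-Python simulation of that two-phase scheme, not Spark code; all names in it are hypothetical.

```python
# Hypothetical sketch (plain Python, not a Spark API): salt each key so a
# hot key is reduced by `fanout` buckets in parallel, then merge partials.
from collections import defaultdict
import random

def _reduce_all(values, fn):
    acc = values[0]
    for v in values[1:]:
        acc = fn(acc, v)
    return acc

def salted_reduce_by_key(pairs, reduce_fn, fanout=4, seed=0):
    rng = random.Random(seed)
    # Phase 1: spread each key's records across `fanout` salted buckets,
    # so no single "reducer" sees all records for a hot key.
    partial = defaultdict(list)
    for k, v in pairs:
        partial[(k, rng.randrange(fanout))].append(v)
    stage1 = {sk: _reduce_all(vs, reduce_fn) for sk, vs in partial.items()}
    # Phase 2: merge the (at most `fanout`) partial results per original key.
    final = defaultdict(list)
    for (k, _salt), v in stage1.items():
        final[k].append(v)
    return {k: _reduce_all(vs, reduce_fn) for k, vs in final.items()}

# A hot key with 1000 records and a cold key with 3; the reduce function
# must be associative for the two-phase split to be correct.
pairs = [("hot", 1)] * 1000 + [("cold", 2)] * 3
result = salted_reduce_by_key(pairs, lambda a, b: a + b)
# "hot" sums to 1000 and "cold" to 6, same as a plain reduceByKey would give.
```

The key requirement is that the reduce function be associative and commutative, which k-means center aggregation (summing vectors and counts) satisfies.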

Sincerely,

DB Tsai
-------------------------------------------------------
My Blog: https://www.dbtsai.com
LinkedIn: https://www.linkedin.com/in/dbtsai


On Wed, Oct 15, 2014 at 12:16 AM, Burak Yavuz <byavuz@stanford.edu> wrote:
> Hi Ray,
>
> The reduceByKey / collectAsMap step does a lot of computation, so it can take a very
> long time if:
> 1) the number-of-runs parameter is set very high,
> 2) k is set high (you have observed this already), or
> 3) the data is not properly repartitioned.
> It may seem to be hanging, but a lot of computation is going on.
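Burak's third point can be illustrated without Spark. Executors run in parallel, so the wall-clock time of a stage tracks its largest partition; with a skewed partitioning the job can look stuck while one straggler grinds through most of the data. A toy model (plain Python; the per-record cost is an assumed constant for illustration):

```python
# Toy model: a stage finishes only when its largest partition does, so
# skew inflates wall-clock time even though total work is unchanged.
def stage_time(partition_sizes, cost_per_record=1.0):
    # Partitions are processed in parallel; the max partition dominates.
    return max(partition_sizes) * cost_per_record

skewed   = [9_700, 100, 100, 100]           # same 10k records, badly split
balanced = [2_500, 2_500, 2_500, 2_500]     # same 10k records, evenly split
# stage_time(skewed) is ~3.9x stage_time(balanced).
```

This is why checking partition balance in the storage tab (as suggested below) is a useful first diagnostic.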
>
> Did you use a different value for the number of runs?
> If you look at the storage tab, does the data look balanced among executors?
>
> Best,
> Burak
>
> ----- Original Message -----
> From: "Ray" <ray-wang@outlook.com>
> To: user@spark.incubator.apache.org
> Sent: Tuesday, October 14, 2014 2:58:03 PM
> Subject: Re: Spark KMeans hangs at reduceByKey / collectAsMap
>
> Hi Xiangrui,
>
> The input dataset has 1.5 million sparse vectors. Each sparse vector has a
> dimension (cardinality) of 9153 and fewer than 15 nonzero elements.
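A quick back-of-the-envelope on those figures shows why representation matters at this scale. Assuming 8-byte doubles and 4-byte indices (reasonable but assumed sizes; actual JVM overheads differ), dense storage of these vectors would be hundreds of times larger than sparse storage:

```python
# Rough arithmetic on the figures from this thread: 1.5M vectors,
# dimension 9153, fewer than 15 nonzeros each. Byte sizes are assumptions
# (8-byte double values, 4-byte int indices), ignoring JVM object overhead.
DIM, NNZ, N = 9153, 15, 1_500_000

dense_bytes_per_vec  = DIM * 8        # every slot stored as a double
sparse_bytes_per_vec = NNZ * (4 + 8)  # index + value per nonzero

dense_total_gb  = dense_bytes_per_vec * N / 1e9
sparse_total_gb = sparse_bytes_per_vec * N / 1e9
# dense is roughly 110 GB vs roughly 0.27 GB sparse, a ~400x difference.
```

Note that the k-means cluster centers themselves are dense, so shuffling and reducing k centers of dimension 9153 can still be heavy even when the input is sparse.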
>
>
> Yes, when I set num-executors = 200, the Hadoop cluster scheduler shows that the
> application got 201 vCores, and from the Spark UI I can see it got 201
> executors (as shown below).
>
> <http://apache-spark-user-list.1001560.n3.nabble.com/file/n16428/spark_core.png>
>
> <http://apache-spark-user-list.1001560.n3.nabble.com/file/n16428/spark_executor.png>
>
>
>
> Thanks.
>
> Ray
>
>
>
>
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-KMeans-hangs-at-reduceByKey-collectAsMap-tp16413p16428.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
> For additional commands, e-mail: user-help@spark.apache.org
>
>
>
>


