From Ruslan Dautkhanov <dautkha...@gmail.com>
Subject Re: Spark Number of Partitions Recommendations
Date Sat, 01 Aug 2015 21:14:16 GMT
You should also take into account the amount of memory you plan to use.
It's advised not to give each executor too much memory .. otherwise GC
overhead will go up.
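As a rough sketch (the executor counts and sizes below are placeholder
assumptions, not tuned recommendations), you can keep each JVM heap moderate
and scale out with more executors instead:

    import org.apache.spark.SparkConf

    // Sketch only: the values are illustrative. The idea is several
    // moderate heaps rather than a few huge ones, since very large
    // JVM heaps tend to suffer from long GC pauses.
    val conf = new SparkConf()
      .setAppName("ExecutorSizingSketch")
      .set("spark.executor.instances", "10") // scale out ...
      .set("spark.executor.cores", "4")
      .set("spark.executor.memory", "8g")    // ... keep each heap modest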

Btw, why prime numbers?



-- 
Ruslan Dautkhanov

On Wed, Jul 29, 2015 at 3:31 AM, ponkin <alexey.ponkin@ya.ru> wrote:

> Hi Rahul,
>
> Where did you see such a recommendation?
> I personally define the number of partitions with the following formula:
>
> partitions = nextPrimeNumberAbove( K * (--num-executors * --executor-cores) )
>
> where
> nextPrimeNumberAbove(x) - the smallest prime number greater than x
> K - a multiplier; to calibrate it, start with 1 and increase until join
> performance starts to degrade
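
A runnable Scala sketch of the quoted heuristic, for reference (the
trial-division primality test and the example numbers are my own fillers,
not from the original post):

    // Smallest prime strictly greater than x (naive trial division).
    def nextPrimeNumberAbove(x: Int): Int = {
      def isPrime(n: Int): Boolean =
        n > 1 && (2 to math.sqrt(n).toInt).forall(n % _ != 0)
      Iterator.from(x + 1).find(isPrime).get
    }

    val numExecutors  = 10 // assumed value of --num-executors
    val executorCores = 4  // assumed value of --executor-cores
    val k             = 2  // start at 1, raise until joins degrade

    // First prime above 2 * 10 * 4 = 80, i.e. 83.
    val partitions = nextPrimeNumberAbove(k * numExecutors * executorCores)

The result would then be fed to something like rdd.repartition(partitions)
before the join.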
