systemml-dev mailing list archives

From Matthias Boehm <mboe...@googlemail.com>
Subject Re: Decaying performance of SystemML
Date Tue, 11 Jul 2017 17:12:58 GMT
Without any specifics of the scripts or datasets, it's unfortunately hard, if
not impossible, to help you here. However, note that the memory configuration
seems wrong: why would you configure the driver and executors with 2TB if you
only have 256GB per node? You may be observing swapping. Also note that
maxResultSize is irrelevant in case SystemML creates the Spark context, because
we would set it to unlimited anyway.
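
For illustration only (these exact values are assumptions, not tuned numbers),
a configuration that stays within the physical memory of such a node might look
like the following, leaving headroom for the OS and off-heap allocations:

    # per-JVM heap sizes kept well below the physical memory of one node
    spark.driver.memory        20g
    spark.executor.memory      200g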

Regarding generally recommended configurations, it's usually a good idea to use
one executor per worker node, with the number of cores set to the number of
virtual cores. This allows maximum sharing of broadcasts across tasks and hence
reduces memory pressure.
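
For example, on the 32-core nodes mentioned below, this rule of thumb would
translate into something like the following (again, just a sketch with assumed
values, not tuned settings):

    # one large executor per worker node (set instances to the number of worker nodes)
    spark.executor.instances   <number of worker nodes>
    # all virtual cores of a node, so its tasks share a single broadcast copy
    spark.executor.cores       32
    spark.executor.memory      200g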

Regards,
Matthias

On 7/11/2017 9:36 AM, arijit chakraborty wrote:
> Hi,
>
>
> I'm creating a process using SystemML, but after a certain period of time the performance decreases.
>
>
> 1) We get this warning message: WARN TaskSetManager: Stage 25254 contains a task of very large size (3954 KB). The maximum recommended task size is 100 KB.
>
>
> 2) For Spark, we are implementing this setting:
>
>     spark.executor.memory      2048g
>     spark.driver.memory        2048g
>     spark.driver.maxResultSize 2048
>
> Is this good enough, or can we do something else to improve the performance? We tried the Spark setup suggested in the documentation, but it didn't help much.
>
>
> 3) We are running on a system with 244 GB RAM, 32 cores, and 100 GB of hard disk space.
>
>
> It would be great if anyone could guide me on how to improve the performance.
>
>
> Thank you!
>
> Arijit
>
