spark-user mailing list archives

From Sean Owen <so...@cloudera.com>
Subject Re: [MLLib] storageLevel in ALS
Date Wed, 07 Jan 2015 16:41:44 GMT
Ah, Fernando means the usersOut / productsOut RDDs, not the intermediate
links RDDs.
Can you unpersist() them and persist() again at the desired level? The
downside is that this might mean recomputing and re-persisting the RDDs.
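A minimal sketch of that workaround, assuming `model` is a trained MatrixFactorizationModel (hypothetical name); `userFeatures` / `productFeatures` are the public RDDs backing usersOut / productsOut:

```scala
import org.apache.spark.storage.StorageLevel

// Drop the cached copies first; a storage level cannot be changed
// while the RDD is still marked as persisted.
model.userFeatures.unpersist()
model.productFeatures.unpersist()

// Re-persist at a replicated level. This is lazy: the next action on
// these RDDs recomputes them and caches the result at the new level,
// which is the recomputation downside mentioned above.
model.userFeatures.persist(StorageLevel.MEMORY_AND_DISK_2)
model.productFeatures.persist(StorageLevel.MEMORY_AND_DISK_2)
```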

On Wed, Jan 7, 2015 at 5:11 AM, Xiangrui Meng <mengxr@gmail.com> wrote:

> Which Spark version are you using? We made this configurable in 1.1:
>
>
> https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/recommendation/ALS.scala#L202
>
> -Xiangrui
>
> On Tue, Jan 6, 2015 at 12:57 PM, Fernando O. <fotero@gmail.com> wrote:
>
>> Hi,
>>    I was doing some tests with ALS and noticed that if I persist the inner
>> RDDs from a MatrixFactorizationModel, the RDDs are not replicated; it seems
>> like the storage level is hardcoded to MEMORY_AND_DISK. Do you think it
>> makes sense to make that configurable?
>> [image: Inline image 1]
>>
>
>
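For the intermediate links RDDs that Xiangrui's link covers, a sketch assuming the `setIntermediateRDDStorageLevel` setter present in later MLlib ALS versions, with `ratings` as a hypothetical `RDD[Rating]`:

```scala
import org.apache.spark.mllib.recommendation.{ALS, Rating}
import org.apache.spark.storage.StorageLevel

// Configure ALS so its intermediate RDDs are cached serialized
// rather than at the default MEMORY_AND_DISK level.
val als = new ALS()
  .setRank(10)
  .setIterations(10)
als.setIntermediateRDDStorageLevel(StorageLevel.MEMORY_AND_DISK_SER)

// Train on the (hypothetical) ratings RDD.
val model = als.run(ratings)
```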
