[ https://issues.apache.org/jira/browse/SPARK-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269218#comment-15269218 ]
Xiangrui Meng commented on SPARK-15027:
---------------------------------------
Ah, I see the problems now. We do need the hash partitioner to accelerate queries from the
driver, and probably joins as well. What if we repartition the factors with
`repartition(blocks, "id")` before returning them? That should attach a hash partitioner,
though it might differ from the one we used inside ALS. #2 seems like a bug. Could you
provide a minimal example that reproduces it?
Given the pending issues, it seems that we should target this to 2.1. Sounds good?
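The partitioner concern above can be illustrated with a small plain-Python sketch. The two hash functions below are hypothetical stand-ins, not Spark's actual Murmur3 or JVM `hashCode` partitioners; the point is only that two different hash schemes over the same ids generally produce different block layouts, so co-partitioning is not guaranteed.

```python
# Illustration only: why the hash partitioner produced by
# repartition(blocks, "id") can differ from the one ALS used internally.
# Both partitioners here are hypothetical, NOT Spark's real ones.

NUM_BLOCKS = 4

def als_style_partition(item_id: int) -> int:
    # e.g. a simple modulo partitioner over integer ids
    return item_id % NUM_BLOCKS

def repartition_style_partition(item_id: int) -> int:
    # e.g. a multiplicative hash (Knuth's constant), taking the top 2 bits
    return ((item_id * 2654435761) % 2**32) >> 30

ids = range(10)
a = [als_style_partition(i) for i in ids]
b = [repartition_style_partition(i) for i in ids]

# The two layouts generally disagree, so joining data partitioned one way
# with data partitioned the other way would still force a shuffle.
print(a != b)  # True: some ids land in different blocks under each scheme
```

In other words, `repartition(blocks, "id")` gives the factors *a* hash partitioner, which helps driver-side lookups and joins, but it is not necessarily the same partitioning ALS computed with.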
> ALS.train should use DataFrame instead of RDD
> ---------------------------------------------
>
> Key: SPARK-15027
> URL: https://issues.apache.org/jira/browse/SPARK-15027
> Project: Spark
> Issue Type: Improvement
> Components: ML, PySpark
> Affects Versions: 2.0.0
> Reporter: Xiangrui Meng
>
> We should also update `ALS.train` to use `Dataset/DataFrame` instead of `RDD`, to be
> consistent with other APIs under spark.ml; it also leaves room for Tungsten-based optimizations.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org