I understand from SPARK-6799 [1] and the corresponding merge commit [2] that the RDD API is private in SparkR as of Spark 1.4. If I wanted to modify the old k-means and/or logistic regression examples so that the computation happened in Spark, what would be the best direction to go? Sorry if I am missing something obvious, but based on the NAMESPACE file [3] in the SparkR codebase I am having trouble seeing an obvious path forward.

Thanks in advance,

[1] https://issues.apache.org/jira/browse/SPARK-6799
[2] https://github.com/apache/spark/commit/4b91e18d9b7803dbfe1e1cf20b46163d8cb8716c
[3] https://github.com/apache/spark/blob/branch-1.4/R/pkg/NAMESPACE