spark-user mailing list archives

From Deep Pradhan <pradhandeep1...@gmail.com>
Subject Re: Filter using the Vertex Ids
Date Thu, 04 Dec 2014 06:27:26 GMT
But groupByKey() gives me an error saying that it is not a member of
org.apache.spark.rdd.RDD[(Double, org.apache.spark.graphx.VertexId)] when
compiled in the graphx directory of spark-1.0.0. This error does not occur
when I use the same call in the interactive shell.
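[Editor's note: the usual cause of this symptom in Spark 1.0.0 is that groupByKey is defined on PairRDDFunctions, which is reachable only through an implicit conversion; the spark-shell imports that conversion automatically, but standalone code must import it explicitly. A minimal sketch, assuming an existing SparkContext named `sc` and the pair-RDD type from the error message:]

```scala
// In Spark 1.0.0, groupByKey lives on PairRDDFunctions, not on RDD itself.
// The spark-shell brings the implicit conversion into scope for you;
// compiled code needs this import:
import org.apache.spark.SparkContext._   // rddToPairRDDFunctions implicit
import org.apache.spark.graphx.VertexId

// Hypothetical pair RDD matching the type in the error message
val pairs: org.apache.spark.rdd.RDD[(Double, VertexId)] =
  sc.parallelize(Seq((1.0, 1L), (1.0, 2L), (2.0, 3L)))

// With the implicit in scope, groupByKey now resolves,
// yielding an RDD[(Double, Iterable[VertexId])]
val grouped = pairs.groupByKey()
```

(In Spark 1.3 and later this import is no longer needed, since the conversion was moved into the RDD companion object.)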


On Wed, Dec 3, 2014 at 3:49 PM, Ankur Dave <ankurdave@gmail.com> wrote:

> At 2014-12-03 02:13:49 -0800, Deep Pradhan <pradhandeep1991@gmail.com>
> wrote:
> > We cannot do sc.parallelize(List(VertexRDD)), can we?
>
> There's no need to do this, because every VertexRDD is also a pair RDD:
>
>     class VertexRDD[VD] extends RDD[(VertexId, VD)]
>
> You can simply use graph.vertices in place of `a` in my example.
>
> Ankur
>
