spark-user mailing list archives

From Amit Behera <amit.bd...@gmail.com>
Subject groupByKey is not working
Date Fri, 30 Jan 2015 20:20:32 GMT
Hi all,

My sbt file looks like this:

name := "Spark"

version := "1.0"

scalaVersion := "2.10.4"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.0"

libraryDependencies += "net.sf.opencsv" % "opencsv" % "2.3"


*code:*

import org.apache.spark.{SparkConf, SparkContext}
import au.com.bytecode.opencsv.CSVParser

object SparkJob {

  // Parse each partition's lines with its own CSVParser instance;
  // creating the parser inside the function means it never has to
  // be serialized and shipped to executors.
  def pLines(lines: Iterator[String]) = {
    val parser = new CSVParser()
    lines.map { l =>
      val vs = parser.parseLine(l)
      (vs(0), vs(1).toInt)  // (key, value) pairs
    }
  }

  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Spark Job").setMaster("local")
    val sc = new SparkContext(conf)
    val data = sc.textFile("/home/amit/testData.csv").cache()
    val result = data.mapPartitions(pLines).groupByKey
    // val list = result.filter(x => x._1.contains("24050881"))
  }

}


Here groupByKey is not working, but the same thing works from *spark-shell*.
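In case it helps with diagnosis: one likely cause, assuming the failure is a compile error along the lines of "value groupByKey is not a member of org.apache.spark.rdd.RDD[(String, Int)]", is a missing implicit import. In Spark 1.1, groupByKey is defined on PairRDDFunctions and becomes available on an RDD of pairs only via the implicit conversions in the SparkContext companion object; spark-shell imports those implicits automatically, but a compiled application has to import them itself. A minimal sketch under that assumption (the object name GroupByKeyCheck is made up for illustration):

import org.apache.spark.{SparkConf, SparkContext}
// Brings in the implicit rddToPairRDDFunctions conversion that
// adds groupByKey and the other pair-RDD operations (Spark 1.1).
import org.apache.spark.SparkContext._

object GroupByKeyCheck {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("GroupByKeyCheck").setMaster("local")
    val sc = new SparkContext(conf)

    // With the import above, groupByKey compiles on RDD[(String, Int)].
    val pairs = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 3)))
    val grouped = pairs.groupByKey()

    // collect() is an action, so it actually triggers the job;
    // prints e.g. (a,CompactBuffer(1, 2)) and (b,CompactBuffer(3)).
    grouped.collect().foreach(println)

    sc.stop()
  }
}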

Please help me.


Thanks

Amit
