Hi Amit,

What error does it throw?
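
If it is a compile error along the lines of "value groupByKey is not a member of org.apache.spark.rdd.RDD[(String, Int)]", the usual cause on Spark 1.1 is a missing implicit import: spark-shell imports org.apache.spark.SparkContext._ for you automatically, but a compiled job has to do it explicitly. A minimal sketch of what I mean (GroupByKeyCheck is just an illustrative name, not from your code):

import org.apache.spark.{SparkConf, SparkContext}
// Before Spark 1.3, the pair-RDD operations (groupByKey, reduceByKey, ...)
// come from implicit conversions in SparkContext._; spark-shell imports
// them automatically, a compiled job must do it itself.
import org.apache.spark.SparkContext._

object GroupByKeyCheck {
  def main(args: Array[String]) {
    val sc = new SparkContext(new SparkConf().setAppName("GroupByKeyCheck").setMaster("local"))
    val grouped = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 3))).groupByKey()
    grouped.collect().foreach(println)  // e.g. (a,CompactBuffer(1, 2)) and (b,CompactBuffer(3))
    sc.stop()
  }
}

Once those implicits are in scope, groupByKey should resolve outside the shell as well.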

Thanks
Arush

On Sat, Jan 31, 2015 at 1:50 AM, Amit Behera <amit.bdk10@gmail.com> wrote:
Hi all,

My sbt file looks like this:

name := "Spark"

version := "1.0"

scalaVersion := "2.10.4"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.0"

libraryDependencies += "net.sf.opencsv" % "opencsv" % "2.3"

code:

import org.apache.spark.{SparkConf, SparkContext}
import au.com.bytecode.opencsv.CSVParser

object SparkJob {

  // Parse each partition's lines as CSV, keeping (column 0, column 1) pairs.
  // One CSVParser per partition avoids constructing a parser per line.
  def pLines(lines: Iterator[String]) = {
    val parser = new CSVParser()
    lines.map { l =>
      val vs = parser.parseLine(l)
      (vs(0), vs(1).toInt)
    }
  }

  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Spark Job").setMaster("local")
    val sc = new SparkContext(conf)
    val data = sc.textFile("/home/amit/testData.csv").cache()
    val result = data.mapPartitions(pLines).groupByKey
    // val list = result.filter(x => x._1.contains("24050881"))
  }
}

Here groupByKey is not working, but the same code works from spark-shell.
Please help me.

Thanks
Amit  



--

Sigmoid Analytics

Arush Kharbanda || Technical Teamlead

arush@sigmoidanalytics.com || www.sigmoidanalytics.com