Subject: Re: Spark error "value join is not a member of org.apache.spark.rdd.RDD[((String, String), String, String)]"
From: amit tewari
To: Ted Yu
Cc: user@spark.apache.org
Date: Tue, 9 Jun 2015 10:28:43 +0530

Thanks, but Spark 1.2 doesn't yet have DataFrame, I guess?

Regards
Amit

On Tue, Jun 9, 2015 at 10:25 AM, Ted Yu wrote:

> join is an operation on DataFrame.
>
> You can call sc.createDataFrame(myRDD) to obtain a DataFrame, where sc is the sqlContext.
>
> Cheers
>
> On Mon, Jun 8, 2015 at 9:44 PM, amit tewari wrote:
>
>> Hi Dear Spark Users
>>
>> I am very new to Spark/Scala.
>>
>> I am using DataStax (4.7/Spark 1.2.1) and struggling with the following error/issue.
>>
>> I have already tried options like import org.apache.spark.SparkContext._ and the explicit
>> import org.apache.spark.SparkContext.rddToPairRDDFunctions, but the error is not resolved.
>>
>> Help much appreciated.
>>
>> Thanks
>> AT
>>
>> scala> val input1 = sc.textFile("/test7").map(line => line.split(",").map(_.trim))
>> scala> val input2 = sc.textFile("/test8").map(line => line.split(",").map(_.trim))
>> scala> val input11 = input1.map(x => ((x(0), x(1)), x(2), x(3)))
>> scala> val input22 = input2.map(x => ((x(0), x(1)), x(2), x(3)))
>>
>> scala> input11.join(input22).take(10)
>>
>> <console>:66: error: value join is not a member of
>> org.apache.spark.rdd.RDD[((String, String), String, String)]
>>
>>         input11.join(input22).take(10)
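A note on the root cause of the error above: `join` comes from `PairRDDFunctions`, and the implicit conversion that provides it only applies to an RDD of two-element key/value tuples, i.e. `RDD[(K, V)]`. The RDDs built here are three-element tuples, `((String, String), String, String)`, so no `join` member exists (and Amit is right that the DataFrame API only arrived in Spark 1.3). Regrouping each row as key -> value, e.g. `input1.map(x => ((x(0), x(1)), (x(2), x(3))))`, gives `join` the shape it needs. The following is a minimal sketch of that same keyed inner join on plain Scala collections, so it runs without a cluster; the sample rows are hypothetical stand-ins for the contents of /test7 and /test8:

```scala
object JoinSketch {
  // Regroup a 4-column row into key = (col0, col1), value = (col2, col3):
  // the two-element (K, V) shape that PairRDDFunctions.join requires.
  def toPairs(rows: Seq[Array[String]]): Map[(String, String), (String, String)] =
    rows.map(x => ((x(0), x(1)), (x(2), x(3)))).toMap

  // Inner join on the key, analogous to what RDD[(K, V)].join does.
  def join[K, A, B](left: Map[K, A], right: Map[K, B]): Map[K, (A, B)] =
    for ((k, a) <- left; b <- right.get(k)) yield (k, (a, b))

  def main(args: Array[String]): Unit = {
    val input11 = toPairs(Seq(Array("a", "b", "c", "d")))
    val input22 = toPairs(Seq(Array("a", "b", "x", "y")))
    // Rows sharing the key ("a", "b") are joined into one record.
    println(join(input11, input22)) // Map((a,b) -> ((c,d),(x,y)))
  }
}
```

With Spark, the same two `map` calls applied to input1/input2 make `input11.join(input22)` compile, since the implicit conversion to `PairRDDFunctions` then kicks in.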