spark-user mailing list archives

From: Sourigna Phetsarath <gna.phetsar...@teamaol.com>
Subject: Re: using hivecontext with sparksql on cdh 5.3
Date: Fri, 20 Feb 2015 20:48:40 GMT
Correction,

it should be:

HADOOP_CONF_DIR="/etc/hive/conf" --driver-class-path
'/data/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hive/lib/*'
--driver-java-options
'-Dspark.executor.extraClassPath=/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hive/lib/*'
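
For clarity, the whole invocation as a single command would look roughly like
this (a sketch; assumes spark-shell and the same parcel layout as above):

# HADOOP_CONF_DIR is an environment variable, set inline for this launch
HADOOP_CONF_DIR="/etc/hive/conf" spark-shell \
  --driver-class-path '/data/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hive/lib/*' \
  --driver-java-options '-Dspark.executor.extraClassPath=/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hive/lib/*'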

On Fri, Feb 20, 2015 at 3:43 PM, Sourigna Phetsarath <
gna.phetsarath@teamaol.com> wrote:

> Also, you might want to add the Hadoop configs:
>
> HADOOP_CONF_DIR="/etc/hadoop/conf:/etc/hive/conf" --driver-class-path
> '/data/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hive/lib/*'
> --driver-java-options
> '-Dspark.executor.extraClassPath=/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hive/lib/*'
>
> OK, it needs the CDH configs for Hive and Hadoop. Hopefully this works
> for you.
>
>
>
> On Fri, Feb 20, 2015 at 3:41 PM, chirag lakhani <chirag.lakhani@gmail.com>
> wrote:
>
>> Thanks! I am able to log in to Spark now, but I am still getting the same
>> error:
>>
>> scala> sqlContext.sql("FROM analytics.trainingdatafinal SELECT
>> *").collect().foreach(println)
>> 15/02/20 14:40:22 INFO ParseDriver: Parsing command: FROM
>> analytics.trainingdatafinal SELECT *
>> 15/02/20 14:40:22 INFO ParseDriver: Parse Completed
>> 15/02/20 14:40:23 INFO HiveMetaStore: 0: Opening raw store with
>> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
>> 15/02/20 14:40:23 INFO ObjectStore: ObjectStore, initialize called
>> 15/02/20 14:40:23 WARN General: Plugin (Bundle) "org.datanucleus.api.jdo"
>> is already registered. Ensure you dont have multiple JAR versions of the
>> same plugin in the classpath. The URL
>> "file:/data/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/jars/datanucleus-api-jdo-3.2.6.jar"
>> is already registered, and you are trying to register an identical plugin
>> located at URL
>> "file:/data/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hive/lib/datanucleus-api-jdo-3.2.6.jar."
>> 15/02/20 14:40:23 WARN General: Plugin (Bundle)
>> "org.datanucleus.store.rdbms" is already registered. Ensure you dont have
>> multiple JAR versions of the same plugin in the classpath. The URL
>> "file:/data/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/jars/datanucleus-rdbms-3.2.9.jar"
>> is already registered, and you are trying to register an identical plugin
>> located at URL
>> "file:/data/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hive/lib/datanucleus-rdbms-3.2.9.jar."
>> 15/02/20 14:40:23 WARN General: Plugin (Bundle) "org.datanucleus" is
>> already registered. Ensure you dont have multiple JAR versions of the same
>> plugin in the classpath. The URL
>> "file:/data/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/jars/datanucleus-core-3.2.10.jar"
>> is already registered, and you are trying to register an identical plugin
>> located at URL
>> "file:/data/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hive/lib/datanucleus-core-3.2.10.jar."
>> 15/02/20 14:40:23 INFO Persistence: Property datanucleus.cache.level2
>> unknown - will be ignored
>> 15/02/20 14:40:23 INFO Persistence: Property
>> hive.metastore.integral.jdo.pushdown unknown - will be ignored
>> 15/02/20 14:40:25 INFO ObjectStore: Setting MetaStore object pin classes
>> with
>> hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
>> 15/02/20 14:40:25 INFO MetaStoreDirectSql: MySQL check failed, assuming
>> we are not on mysql: Lexical error at line 1, column 5.  Encountered: "@"
>> (64), after : "".
>> 15/02/20 14:40:27 INFO Datastore: The class
>> "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as
>> "embedded-only" so does not have its own datastore table.
>> 15/02/20 14:40:27 INFO Datastore: The class
>> "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as
>> "embedded-only" so does not have its own datastore table.
>> 15/02/20 14:40:28 INFO Datastore: The class
>> "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as
>> "embedded-only" so does not have its own datastore table.
>> 15/02/20 14:40:28 INFO Datastore: The class
>> "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as
>> "embedded-only" so does not have its own datastore table.
>> 15/02/20 14:40:28 INFO Query: Reading in results for query
>> "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used
>> is closing
>> 15/02/20 14:40:28 INFO ObjectStore: Initialized ObjectStore
>> 15/02/20 14:40:28 INFO HiveMetaStore: Added admin role in metastore
>> 15/02/20 14:40:28 INFO HiveMetaStore: Added public role in metastore
>> 15/02/20 14:40:29 INFO HiveMetaStore: No user is added in admin role,
>> since config is empty
>> 15/02/20 14:40:29 INFO SessionState: No Tez session required at this
>> point. hive.execution.engine=mr.
>> 15/02/20 14:40:29 INFO HiveMetaStore: 0: get_table : db=analytics
>> tbl=trainingdatafinal
>> 15/02/20 14:40:29 INFO audit: ugi=hdfs ip=unknown-ip-addr cmd=get_table
>> : db=analytics tbl=trainingdatafinal
>> 15/02/20 14:40:29 ERROR Hive:
>> NoSuchObjectException(message:analytics.trainingdatafinal table not found)
>> at
>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1569)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606)
>> at
>> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:106)
>> at com.sun.proxy.$Proxy24.get_table(Unknown Source)
>> at
>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1008)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606)
>> at
>> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:90)
>> at com.sun.proxy.$Proxy25.getTable(Unknown Source)
>> at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1000)
>> at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:974)
>> at
>> org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:70)
>> at
>> org.apache.spark.sql.hive.HiveContext$$anon$2.org$apache$spark$sql$catalyst$analysis$OverrideCatalog$$super$lookupRelation(HiveContext.scala:253)
>> at
>> org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:141)
>> at
>> org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:141)
>> at scala.Option.getOrElse(Option.scala:120)
>> at
>> org.apache.spark.sql.catalyst.analysis.OverrideCatalog$class.lookupRelation(Catalog.scala:141)
>> at
>> org.apache.spark.sql.hive.HiveContext$$anon$2.lookupRelation(HiveContext.scala:253)
>> at
>> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$5.applyOrElse(Analyzer.scala:143)
>> at
>> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$5.applyOrElse(Analyzer.scala:138)
>> at
>> org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:144)
>> at
>> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:162)
>> at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>> at
>> scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>> at
>> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>> at
>> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>> at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
>> at scala.collection.AbstractIterator.to(Iterator.scala:1157)
>> at
>> scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>> at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
>> at
>> scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>> at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
>> at
>> org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:191)
>> at
>> org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:147)
>> at
>> org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:135)
>> at
>> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:138)
>> at
>> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:137)
>> at
>> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:61)
>> at
>> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:59)
>> at
>> scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
>> at scala.collection.immutable.List.foldLeft(List.scala:84)
>> at
>> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:59)
>> at
>> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:51)
>> at scala.collection.immutable.List.foreach(List.scala:318)
>> at
>> org.apache.spark.sql.catalyst.rules.RuleExecutor.apply(RuleExecutor.scala:51)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.analyzed$lzycompute(SQLContext.scala:411)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.analyzed(SQLContext.scala:411)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.withCachedData$lzycompute(SQLContext.scala:412)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.withCachedData(SQLContext.scala:412)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.optimizedPlan$lzycompute(SQLContext.scala:413)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.optimizedPlan(SQLContext.scala:413)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:418)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:416)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:422)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:422)
>> at org.apache.spark.sql.SchemaRDD.collect(SchemaRDD.scala:444)
>> at $line9.$read$$iwC$$iwC$$iwC$$iwC.<init>(<console>:15)
>> at $line9.$read$$iwC$$iwC$$iwC.<init>(<console>:20)
>> at $line9.$read$$iwC$$iwC.<init>(<console>:22)
>> at $line9.$read$$iwC.<init>(<console>:24)
>> at $line9.$read.<init>(<console>:26)
>> at $line9.$read$.<init>(<console>:30)
>> at $line9.$read$.<clinit>(<console>)
>> at $line9.$eval$.<init>(<console>:7)
>> at $line9.$eval$.<clinit>(<console>)
>> at $line9.$eval.$print(<console>)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606)
>> at
>> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:852)
>> at
>> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1125)
>> at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:674)
>> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:705)
>> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:669)
>> at
>> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:828)
>> at
>> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:873)
>> at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:785)
>> at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:628)
>> at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:636)
>> at org.apache.spark.repl.SparkILoop.loop(SparkILoop.scala:641)
>> at
>> org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:968)
>> at
>> org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:916)
>> at
>> org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:916)
>> at
>> scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>> at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:916)
>> at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1011)
>> at org.apache.spark.repl.Main$.main(Main.scala:31)
>> at org.apache.spark.repl.Main.main(Main.scala)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606)
>> at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
>> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
>> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>>
>> org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found
>> trainingdatafinal
>> at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1004)
>> at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:974)
>> at
>> org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:70)
>> at
>> org.apache.spark.sql.hive.HiveContext$$anon$2.org$apache$spark$sql$catalyst$analysis$OverrideCatalog$$super$lookupRelation(HiveContext.scala:253)
>> at
>> org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:141)
>> at
>> org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:141)
>> at scala.Option.getOrElse(Option.scala:120)
>> at
>> org.apache.spark.sql.catalyst.analysis.OverrideCatalog$class.lookupRelation(Catalog.scala:141)
>> at
>> org.apache.spark.sql.hive.HiveContext$$anon$2.lookupRelation(HiveContext.scala:253)
>> at
>> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$5.applyOrElse(Analyzer.scala:143)
>> at
>> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$5.applyOrElse(Analyzer.scala:138)
>> at
>> org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:144)
>> at
>> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:162)
>> at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>> at
>> scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>> at
>> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>> at
>> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>> at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
>> at scala.collection.AbstractIterator.to(Iterator.scala:1157)
>> at
>> scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>> at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
>> at
>> scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>> at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
>> at
>> org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:191)
>> at
>> org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:147)
>> at
>> org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:135)
>> at
>> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:138)
>> at
>> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:137)
>> at
>> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:61)
>> at
>> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:59)
>> at
>> scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
>> at scala.collection.immutable.List.foldLeft(List.scala:84)
>> at
>> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:59)
>> at
>> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:51)
>> at scala.collection.immutable.List.foreach(List.scala:318)
>> at
>> org.apache.spark.sql.catalyst.rules.RuleExecutor.apply(RuleExecutor.scala:51)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.analyzed$lzycompute(SQLContext.scala:411)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.analyzed(SQLContext.scala:411)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.withCachedData$lzycompute(SQLContext.scala:412)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.withCachedData(SQLContext.scala:412)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.optimizedPlan$lzycompute(SQLContext.scala:413)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.optimizedPlan(SQLContext.scala:413)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:418)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:416)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:422)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:422)
>> at org.apache.spark.sql.SchemaRDD.collect(SchemaRDD.scala:444)
>> at $iwC$$iwC$$iwC$$iwC.<init>(<console>:15)
>> at $iwC$$iwC$$iwC.<init>(<console>:20)
>> at $iwC$$iwC.<init>(<console>:22)
>> at $iwC.<init>(<console>:24)
>> at <init>(<console>:26)
>> at .<init>(<console>:30)
>> at .<clinit>(<console>)
>> at .<init>(<console>:7)
>> at .<clinit>(<console>)
>> at $print(<console>)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606)
>> at
>> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:852)
>> at
>> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1125)
>> at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:674)
>> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:705)
>> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:669)
>> at
>> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:828)
>> at
>> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:873)
>> at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:785)
>> at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:628)
>> at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:636)
>> at org.apache.spark.repl.SparkILoop.loop(SparkILoop.scala:641)
>> at
>> org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:968)
>> at
>> org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:916)
>> at
>> org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:916)
>> at
>> scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>> at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:916)
>> at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1011)
>> at org.apache.spark.repl.Main$.main(Main.scala:31)
>> at org.apache.spark.repl.Main.main(Main.scala)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606)
>> at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
>> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
>> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>>
>>
>> On Fri, Feb 20, 2015 at 3:28 PM, Sourigna Phetsarath <
>> gna.phetsarath@teamaol.com> wrote:
>>
>>> Try it without
>>>
>>> "--master yarn-cluster"
>>>
>>> if you are trying to run a spark-shell. :)
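>>>
>>> For example (a sketch, reusing the classpath flags from elsewhere in the
>>> thread; yarn-client mode should also work, since a shell's driver must
>>> run where it is launched):
>>>
>>> spark-shell \
>>>   --driver-class-path '/data/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hive/lib/*' \
>>>   --driver-java-options '-Dspark.executor.extraClassPath=/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hive/lib/*'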
>>>
>>> On Fri, Feb 20, 2015 at 3:18 PM, chirag lakhani <
>>> chirag.lakhani@gmail.com> wrote:
>>>
>>>> I tried
>>>>
>>>> spark-shell --master yarn-cluster --driver-class-path
>>>> '/data/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hive/lib/*'
>>>> --driver-java-options
>>>> '-Dspark.executor.extraClassPath=/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hive/lib/*'
>>>>
>>>> and I get the following error
>>>>
>>>> Error: Cluster deploy mode is not applicable to Spark shells.
>>>> Run with --help for usage help or --verbose for debug output
>>>>
>>>>
>>>>
>>>> On Fri, Feb 20, 2015 at 2:52 PM, Sourigna Phetsarath <
>>>> gna.phetsarath@teamaol.com> wrote:
>>>>
>>>>> Chirag,
>>>>>
>>>>> This worked for us:
>>>>>
>>>>> spark-submit --master yarn-cluster --driver-class-path
>>>>> '/opt/cloudera/parcels/CDH/lib/hive/lib/*' --driver-java-options
>>>>> '-Dspark.executor.extraClassPath=/opt/cloudera/parcels/CDH/lib/hive/lib/*'
>>>>> ...
>>>>>
>>>>> Let me know if you have any issues.
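>>>>>
>>>>> Filled out, that might look like the following (a sketch; the main
>>>>> class and application jar are hypothetical placeholders, not from this
>>>>> thread):
>>>>>
>>>>> # com.example.MyHiveApp and my-hive-app.jar are illustrative only
>>>>> spark-submit --master yarn-cluster \
>>>>>   --class com.example.MyHiveApp \
>>>>>   --driver-class-path '/opt/cloudera/parcels/CDH/lib/hive/lib/*' \
>>>>>   --driver-java-options '-Dspark.executor.extraClassPath=/opt/cloudera/parcels/CDH/lib/hive/lib/*' \
>>>>>   my-hive-app.jar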
>>>>>
>>>>> On Fri, Feb 20, 2015 at 2:43 PM, chirag lakhani <
>>>>> chirag.lakhani@gmail.com> wrote:
>>>>>
>>>>>> I am trying to access a Hive table using Spark SQL but I am having
>>>>>> trouble. I followed the instructions in a Cloudera community board,
>>>>>> which stated:
>>>>>>
>>>>>> 1) Import the Hive JARs into the classpath:
>>>>>>
>>>>>> export SPARK_CLASSPATH=$(find
>>>>>> /data/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hive/lib/ -name
>>>>>> '*.jar' -print0 | sed 's/\x0/:/g')
>>>>>>
>>>>>> 2) Start the Spark shell:
>>>>>>
>>>>>> spark-shell
>>>>>>
>>>>>> 3) Create a Hive context:
>>>>>>
>>>>>> val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
>>>>>>
>>>>>> 4) Then run the query:
>>>>>>
>>>>>> sqlContext.sql("FROM analytics.trainingdatafinal SELECT
>>>>>> *").collect().foreach(println)
>>>>>>
>>>>>>
>>>>>> When I do this, it seems Spark cannot find the table in the Hive
>>>>>> metastore. I have put all of my Cloudera parcels in a partition starting
>>>>>> with /data, as opposed to the default location used by Cloudera. Any
>>>>>> suggestions on what can be done? I am putting the error below.
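>>>>>>
>>>>>> A quick way to check whether the driver can see the Hive config at all
>>>>>> (a sketch, not from the board's instructions): if hive-site.xml is not
>>>>>> on the classpath, HiveContext silently falls back to a local Derby
>>>>>> metastore, which knows nothing about this table.
>>>>>>
>>>>>> scala> // null here means hive-site.xml is not visible to the driver
>>>>>> scala> getClass.getClassLoader.getResource("hive-site.xml")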
>>>>>>
>>>>>>
>>>>>> 15/02/20 13:43:01 ERROR Hive:
>>>>>> NoSuchObjectException(message:analytics.trainingdatafinal table not found)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1569)
>>>>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>> at
>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>>> at
>>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>> at java.lang.reflect.Method.invoke(Method.java:606)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:106)
>>>>>> at com.sun.proxy.$Proxy24.get_table(Unknown Source)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1008)
>>>>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>> at
>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>>> at
>>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>> at java.lang.reflect.Method.invoke(Method.java:606)
>>>>>> at
>>>>>> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:90)
>>>>>> at com.sun.proxy.$Proxy25.getTable(Unknown Source)
>>>>>> at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1000)
>>>>>> at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:974)
>>>>>> at
>>>>>> org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:70)
>>>>>> at
>>>>>> org.apache.spark.sql.hive.HiveContext$$anon$2.org$apache$spark$sql$catalyst$analysis$OverrideCatalog$$super$lookupRelation(HiveContext.scala:253)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:141)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:141)
>>>>>> at scala.Option.getOrElse(Option.scala:120)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.analysis.OverrideCatalog$class.lookupRelation(Catalog.scala:141)
>>>>>> at
>>>>>> org.apache.spark.sql.hive.HiveContext$$anon$2.lookupRelation(HiveContext.scala:253)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$5.applyOrElse(Analyzer.scala:143)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$5.applyOrElse(Analyzer.scala:138)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:144)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:162)
>>>>>> at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>>>>>> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>>>>>> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>>>>>> at
>>>>>> scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>>>>>> at
>>>>>> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>>>>>> at
>>>>>> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>>>>>> at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
>>>>>> at scala.collection.AbstractIterator.to(Iterator.scala:1157)
>>>>>> at
>>>>>> scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>>>>>> at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
>>>>>> at
>>>>>> scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>>>>>> at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:191)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:147)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:162)
>>>>>> at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>>>>>> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>>>>>> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>>>>>> at
>>>>>> scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>>>>>> at
>>>>>> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>>>>>> at
>>>>>> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>>>>>> at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
>>>>>> at scala.collection.AbstractIterator.to(Iterator.scala:1157)
>>>>>> at
>>>>>> scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>>>>>> at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
>>>>>> at
>>>>>> scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>>>>>> at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:191)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:147)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:135)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:138)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:137)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:61)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:59)
>>>>>> at
>>>>>> scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
>>>>>> at scala.collection.immutable.List.foldLeft(List.scala:84)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:59)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:51)
>>>>>> at scala.collection.immutable.List.foreach(List.scala:318)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.rules.RuleExecutor.apply(RuleExecutor.scala:51)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.analyzed$lzycompute(SQLContext.scala:411)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.analyzed(SQLContext.scala:411)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.withCachedData$lzycompute(SQLContext.scala:412)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.withCachedData(SQLContext.scala:412)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.optimizedPlan$lzycompute(SQLContext.scala:413)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.optimizedPlan(SQLContext.scala:413)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:418)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:416)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:422)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:422)
>>>>>> at org.apache.spark.sql.SchemaRDD.collect(SchemaRDD.scala:444)
>>>>>> at org.apache.spark.sql.SchemaRDD.take(SchemaRDD.scala:446)
>>>>>> at $line9.$read$$iwC$$iwC$$iwC$$iwC.<init>(<console>:15)
>>>>>> at $line9.$read$$iwC$$iwC$$iwC.<init>(<console>:20)
>>>>>> at $line9.$read$$iwC$$iwC.<init>(<console>:22)
>>>>>> at $line9.$read$$iwC.<init>(<console>:24)
>>>>>> at $line9.$read.<init>(<console>:26)
>>>>>> at $line9.$read$.<init>(<console>:30)
>>>>>> at $line9.$read$.<clinit>(<console>)
>>>>>> at $line9.$eval$.<init>(<console>:7)
>>>>>> at $line9.$eval$.<clinit>(<console>)
>>>>>> at $line9.$eval.$print(<console>)
>>>>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>> at
>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>>> at
>>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>> at java.lang.reflect.Method.invoke(Method.java:606)
>>>>>> at
>>>>>> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:852)
>>>>>> at
>>>>>> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1125)
>>>>>> at
>>>>>> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:674)
>>>>>> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:705)
>>>>>> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:669)
>>>>>> at
>>>>>> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:828)
>>>>>> at
>>>>>> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:873)
>>>>>> at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:785)
>>>>>> at
>>>>>> org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:628)
>>>>>> at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:636)
>>>>>> at org.apache.spark.repl.SparkILoop.loop(SparkILoop.scala:641)
>>>>>> at
>>>>>> org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:968)
>>>>>> at
>>>>>> org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:916)
>>>>>> at
>>>>>> org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:916)
>>>>>> at
>>>>>> scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>>>>>> at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:916)
>>>>>> at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1011)
>>>>>> at org.apache.spark.repl.Main$.main(Main.scala:31)
>>>>>> at org.apache.spark.repl.Main.main(Main.scala)
>>>>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>> at
>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>>> at
>>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>> at java.lang.reflect.Method.invoke(Method.java:606)
>>>>>> at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
>>>>>> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
>>>>>> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>>>>>>
>>>>>> org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not
>>>>>> found trainingdatafinal
>>>>>> at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1004)
>>>>>> at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:974)
>>>>>> at
>>>>>> org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:70)
>>>>>> at
>>>>>> org.apache.spark.sql.hive.HiveContext$$anon$2.org$apache$spark$sql$catalyst$analysis$OverrideCatalog$$super$lookupRelation(HiveContext.scala:253)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:141)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:141)
>>>>>> at scala.Option.getOrElse(Option.scala:120)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.analysis.OverrideCatalog$class.lookupRelation(Catalog.scala:141)
>>>>>> at
>>>>>> org.apache.spark.sql.hive.HiveContext$$anon$2.lookupRelation(HiveContext.scala:253)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$5.applyOrElse(Analyzer.scala:143)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$5.applyOrElse(Analyzer.scala:138)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:144)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:162)
>>>>>> at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>>>>>> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>>>>>> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>>>>>> at
>>>>>> scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>>>>>> at
>>>>>> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>>>>>> at
>>>>>> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>>>>>> at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
>>>>>> at scala.collection.AbstractIterator.to(Iterator.scala:1157)
>>>>>> at
>>>>>> scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>>>>>> at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
>>>>>> at
>>>>>> scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>>>>>> at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:191)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:147)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:162)
>>>>>> at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>>>>>> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>>>>>> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>>>>>> at
>>>>>> scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>>>>>> at
>>>>>> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>>>>>> at
>>>>>> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>>>>>> at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
>>>>>> at scala.collection.AbstractIterator.to(Iterator.scala:1157)
>>>>>> at
>>>>>> scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>>>>>> at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
>>>>>> at
>>>>>> scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>>>>>> at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:191)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:147)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:135)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:138)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:137)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:61)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:59)
>>>>>> at
>>>>>> scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
>>>>>> at scala.collection.immutable.List.foldLeft(List.scala:84)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:59)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:51)
>>>>>> at scala.collection.immutable.List.foreach(List.scala:318)
>>>>>> at
>>>>>> org.apache.spark.sql.catalyst.rules.RuleExecutor.apply(RuleExecutor.scala:51)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.analyzed$lzycompute(SQLContext.scala:411)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.analyzed(SQLContext.scala:411)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.withCachedData$lzycompute(SQLContext.scala:412)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.withCachedData(SQLContext.scala:412)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.optimizedPlan$lzycompute(SQLContext.scala:413)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.optimizedPlan(SQLContext.scala:413)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:418)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:416)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:422)
>>>>>> at
>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:422)
>>>>>> at org.apache.spark.sql.SchemaRDD.collect(SchemaRDD.scala:444)
>>>>>> at org.apache.spark.sql.SchemaRDD.take(SchemaRDD.scala:446)
>>>>>> at $iwC$$iwC$$iwC$$iwC.<init>(<console>:15)
>>>>>> at $iwC$$iwC$$iwC.<init>(<console>:20)
>>>>>> at $iwC$$iwC.<init>(<console>:22)
>>>>>> at $iwC.<init>(<console>:24)
>>>>>> at <init>(<console>:26)
>>>>>> at .<init>(<console>:30)
>>>>>> at .<clinit>(<console>)
>>>>>> at .<init>(<console>:7)
>>>>>> at .<clinit>(<console>)
>>>>>> at $print(<console>)
>>>>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>> at
>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>>> at
>>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>> at java.lang.reflect.Method.invoke(Method.java:606)
>>>>>> at
>>>>>> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:852)
>>>>>> at
>>>>>> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1125)
>>>>>> at
>>>>>> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:674)
>>>>>> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:705)
>>>>>> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:669)
>>>>>> at
>>>>>> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:828)
>>>>>> at
>>>>>> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:873)
>>>>>> at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:785)
>>>>>> at
>>>>>> org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:628)
>>>>>> at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:636)
>>>>>> at org.apache.spark.repl.SparkILoop.loop(SparkILoop.scala:641)
>>>>>> at
>>>>>> org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:968)
>>>>>> at
>>>>>> org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:916)
>>>>>> at
>>>>>> org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:916)
>>>>>> at
>>>>>> scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>>>>>> at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:916)
>>>>>> at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1011)
>>>>>> at org.apache.spark.repl.Main$.main(Main.scala:31)
>>>>>> at org.apache.spark.repl.Main.main(Main.scala)
>>>>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>> at
>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>>> at
>>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>> at java.lang.reflect.Method.invoke(Method.java:606)
>>>>>> at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
>>>>>> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
>>>>>> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>>
>>>>> *Gna Phetsarath*
>>>>> C: +1 917.373.7363
>>>>> AIM: sphetsarath20 VVMR: 8890237
>>>>> Address | 54 West 40th Street, New York, NY 10018
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>>
>>>
>>> *Gna Phetsarath*
>>> C: +1 917.373.7363
>>> AIM: sphetsarath20 VVMR: 8890237
>>> Address | 54 West 40th Street, New York, NY 10018
>>>
>>
>>
>
>
> --
>
>
> *Gna Phetsarath*
> C: +1 917.373.7363
> AIM: sphetsarath20 VVMR: 8890237
> Address | 54 West 40th Street, New York, NY 10018
>



-- 


*Gna Phetsarath*
C: +1 917.373.7363
AIM: sphetsarath20 VVMR: 8890237
Address | 54 West 40th Street, New York, NY 10018
