spark-user mailing list archives

From Hmxxyy <hmx...@gmail.com>
Subject Re: How to make ./bin/spark-sql work with hive?
Date Sat, 04 Oct 2014 00:06:38 GMT
No, it is hive 0.12.4.

Let me try your suggestion. It is an existing hive db. I am using the original hive-site.xml
as is.

Sent from my iPhone

> On Oct 3, 2014, at 5:02 PM, Edwin Chiu <edwin.chiu@manage.com> wrote:
> 
> Are you using hive 0.13?
> 
> Switching back to HadoopDefaultAuthenticator in your hive-site.xml is worth a shot:
> 
>     <property>
>       <name>hive.security.authenticator.manager</name>
>       <!--<value>org.apache.hadoop.hive.ql.security.ProxyUserAuthenticator</value>-->
>       <value>org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator</value>
>     </property>
> 
> 
> 
> - Edwin
> 
>> On Fri, Oct 3, 2014 at 4:25 PM, Li HM <hmxxyy@gmail.com> wrote:
>> If I don't have that jar, I am getting the following error:
>> 
>> Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassNotFoundException: org.apache.hcatalog.security.HdfsAuthorizationProvider
>> 	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:286)
>> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:116)
>> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
>> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> 	at java.lang.reflect.Method.invoke(Method.java:601)
>> 	at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
>> 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
>> 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassNotFoundException: org.apache.hcatalog.security.HdfsAuthorizationProvider
>> 	at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:342)
>> 	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:280)
>> 	... 9 more
>> Caused by: java.lang.ClassNotFoundException: org.apache.hcatalog.security.HdfsAuthorizationProvider
>> 	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>> 	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>> 	at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
>> 	at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
>> 	at java.lang.Class.forName0(Native Method)
>> 	at java.lang.Class.forName(Class.java:266)
>> 	at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:335)
>> 	... 10 more
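>> 
>> That class is usually wired in through hive.security.authorization.manager in hive-site.xml (an assumption from the stack trace, not verified against this setup). If that holds, pointing the property back at Hive's built-in provider might remove the HCatalog dependency altogether:
>> 
>>     <property>
>>       <name>hive.security.authorization.manager</name>
>>       <!-- assumed current value, per the ClassNotFoundException above -->
>>       <!--<value>org.apache.hcatalog.security.HdfsAuthorizationProvider</value>-->
>>       <value>org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider</value>
>>     </property>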
>> 
>>> On Fri, Oct 3, 2014 at 3:27 PM, Michael Armbrust <michael@databricks.com> wrote:
>>> Why are you including hcatalog-core.jar?  That is probably causing the issues.
>>> 
>>>> On Fri, Oct 3, 2014 at 3:03 PM, Li HM <hmxxyy@gmail.com> wrote:
>>>> This is my SPARK_CLASSPATH after cleanup:
>>>> SPARK_CLASSPATH=/home/test/lib/hcatalog-core.jar:$SPARK_CLASSPATH
>>>> 
>>>> Now "use mydb" works, but "show tables" and "select * from test" still give exceptions:
>>>> 
>>>> spark-sql> show tables;
>>>> OK
>>>> java.io.IOException: java.io.IOException: Cannot create an instance of InputFormat class org.apache.hadoop.mapred.TextInputFormat as specified in mapredWork!
>>>> 	at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:551)
>>>> 	at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:489)
>>>> 	at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:136)
>>>> 	at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1471)
>>>> 	at org.apache.spark.sql.hive.HiveContext.runHive(HiveContext.scala:305)
>>>> 	at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:272)
>>>> 	at org.apache.spark.sql.hive.execution.NativeCommand.sideEffectResult$lzycompute(NativeCommand.scala:35)
>>>> 	at org.apache.spark.sql.hive.execution.NativeCommand.sideEffectResult(NativeCommand.scala:35)
>>>> 	at org.apache.spark.sql.hive.execution.NativeCommand.execute(NativeCommand.scala:38)
>>>> 	at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd$lzycompute(HiveContext.scala:360)
>>>> 	at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd(HiveContext.scala:360)
>>>> 	at org.apache.spark.sql.SchemaRDDLike$class.$init$(SchemaRDDLike.scala:58)
>>>> 	at org.apache.spark.sql.SchemaRDD.<init>(SchemaRDD.scala:103)
>>>> 	at org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:98)
>>>> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:58)
>>>> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:291)
>>>> 	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
>>>> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:226)
>>>> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
>>>> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>> 	at java.lang.reflect.Method.invoke(Method.java:601)
>>>> 	at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
>>>> 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
>>>> 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>>>> Caused by: java.io.IOException: Cannot create an instance of InputFormat class org.apache.hadoop.mapred.TextInputFormat as specified in mapredWork!
>>>> 	at org.apache.hadoop.hive.ql.exec.FetchOperator.getInputFormatFromCache(FetchOperator.java:223)
>>>> 	at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:379)
>>>> 	at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:515)
>>>> 	... 25 more
>>>> Caused by: java.lang.RuntimeException: Error in configuring object
>>>> 	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
>>>> 	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
>>>> 	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>>>> 	at org.apache.hadoop.hive.ql.exec.FetchOperator.getInputFormatFromCache(FetchOperator.java:219)
>>>> 	... 27 more
>>>> Caused by: java.lang.reflect.InvocationTargetException
>>>> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>> 	at java.lang.reflect.Method.invoke(Method.java:601)
>>>> 	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
>>>> 	... 30 more
>>>> Caused by: java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
>>>> 	at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:135)
>>>> 	at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:175)
>>>> 	at org.apache.hadoop.mapred.TextInputFormat.configure(TextInputFormat.java:45)
>>>> 	... 35 more
>>>> Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found
>>>> 	at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1801)
>>>> 	at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:128)
>>>> 	... 37 more
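>>>> 
>>>> On the LzoCodec part: that usually just means the hadoop-lzo jar isn't on Spark's classpath even though core-site.xml lists the codec. A minimal sketch, with the jar path as a placeholder for wherever hadoop-lzo actually lives:
>>>> 
>>>>     # put the hadoop-lzo jar on the classpath (placeholder path), e.g. in spark-env.sh ...
>>>>     export SPARK_CLASSPATH=/path/to/hadoop-lzo.jar:$SPARK_CLASSPATH
>>>>     # ... or, if LZO isn't actually needed, remove com.hadoop.compression.lzo.LzoCodec
>>>>     # from io.compression.codecs in core-site.xml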
>>>> 
>>>> spark-sql> select * from test;
>>>> java.lang.RuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.server.namenode.ha.IPFailoverProxyProvider not found
>>>> 	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1927)
>>>> 	at org.apache.hadoop.hdfs.NameNodeProxies.getFailoverProxyProviderClass(NameNodeProxies.java:409)
>>>> 	at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:139)
>>>> 	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:579)
>>>> 	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:524)
>>>> 	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
>>>> 	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
>>>> 	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
>>>> 	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
>>>> 	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
>>>> 	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
>>>> 	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
>>>> 	at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:653)
>>>> 	at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:427)
>>>> 	at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:400)
>>>> 	at org.apache.spark.sql.hive.HadoopTableReader$.initializeLocalJobConfFunc(TableReader.scala:250)
>>>> 	at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$8.apply(TableReader.scala:228)
>>>> 	at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$8.apply(TableReader.scala:228)
>>>> 	at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$1.apply(HadoopRDD.scala:149)
>>>> 	at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$1.apply(HadoopRDD.scala:149)
>>>> 	at scala.Option.map(Option.scala:145)
>>>> 	at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:149)
>>>> 	at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:172)
>>>> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
>>>> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
>>>> 	at scala.Option.getOrElse(Option.scala:120)
>>>> 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
>>>> 	at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
>>>> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
>>>> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
>>>> 	at scala.Option.getOrElse(Option.scala:120)
>>>> 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
>>>> 	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
>>>> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
>>>> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
>>>> 	at scala.Option.getOrElse(Option.scala:120)
>>>> 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
>>>> 	at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
>>>> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
>>>> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
>>>> 	at scala.Option.getOrElse(Option.scala:120)
>>>> 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
>>>> 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1135)
>>>> 	at org.apache.spark.rdd.RDD.collect(RDD.scala:774)
>>>> 	at org.apache.spark.sql.hive.HiveContext$QueryExecution.stringResult(HiveContext.scala:415)
>>>> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:59)
>>>> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:291)
>>>> 	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
>>>> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:226)
>>>> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
>>>> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>> 	at java.lang.reflect.Method.invoke(Method.java:601)
>>>> 	at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
>>>> 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
>>>> 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>>>> Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.server.namenode.ha.IPFailoverProxyProvider not found
>>>> 	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1895)
>>>> 	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1919)
>>>> 	... 56 more
>>>> Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.server.namenode.ha.IPFailoverProxyProvider not found
>>>> 	at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1801)
>>>> 	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1893)
>>>> 	... 57 more
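>>>> 
>>>> The IPFailoverProxyProvider error looks like hdfs-site.xml naming an HA failover provider class this Spark build doesn't carry; if so, either add the jar that contains it, or try the stock provider. A sketch, with "mycluster" standing in for the real nameservice id:
>>>> 
>>>>     <property>
>>>>       <name>dfs.client.failover.proxy.provider.mycluster</name>
>>>>       <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>>>>     </property>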
>>>> 
>>>>> On Fri, Oct 3, 2014 at 1:55 AM, Michael Armbrust <michael@databricks.com> wrote:
>>>>> Often java.lang.NoSuchMethodError means that you have more than one version of a library on your classpath, in this case it looks like hive.
>>>>> 
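>>>>> A quick way to check is to list everything hive-related that spark-sql will pick up; since a -Phive build already bundles its own Hive classes in the assembly, any extra hive jars in SPARK_CLASSPATH are suspect (a sketch; adjust to your layout):
>>>>> 
>>>>>     # show each classpath entry on its own line and keep the hive-related ones
>>>>>     echo "$SPARK_CLASSPATH" | tr ':' '\n' | grep -i hive
>>>>> 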
>>>>>> On Thu, Oct 2, 2014 at 8:44 PM, Li HM <hmxxyy@gmail.com> wrote:
>>>>>> I have rebuilt the package with -Phive.
>>>>>> Copied hive-site.xml to conf (I am using hive-0.12).
>>>>>> 
>>>>>> When I run ./bin/spark-sql, I get java.lang.NoSuchMethodError for every command.
>>>>>> 
>>>>>> What am I missing here?
>>>>>> 
>>>>>> Could somebody share what would be the right procedure to make it work?
>>>>>> 
>>>>>> java.lang.NoSuchMethodError: org.apache.hadoop.hive.ql.Driver.getResults(Ljava/util/ArrayList;)Z
>>>>>>         at org.apache.spark.sql.hive.HiveContext.runHive(HiveContext.scala:305)
>>>>>>         at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:272)
>>>>>>         at org.apache.spark.sql.hive.execution.NativeCommand.sideEffectResult$lzycompute(NativeCommand.scala:35)
>>>>>>         at org.apache.spark.sql.hive.execution.NativeCommand.sideEffectResult(NativeCommand.scala:35)
>>>>>>         at org.apache.spark.sql.hive.execution.NativeCommand.execute(NativeCommand.scala:38)
>>>>>>         at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd$lzycompute(HiveContext.scala:360)
>>>>>>         at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd(HiveContext.scala:360)
>>>>>>         at org.apache.spark.sql.SchemaRDDLike$class.$init$(SchemaRDDLike.scala:58)
>>>>>>         at org.apache.spark.sql.SchemaRDD.<init>(SchemaRDD.scala:103)
>>>>>>         at org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:98)
>>>>>>         at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:58)
>>>>>>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:291)
>>>>>>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
>>>>>>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:226)
>>>>>>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
>>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>>         at java.lang.reflect.Method.invoke(Method.java:601)
>>>>>>         at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
>>>>>>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
>>>>>>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>>>>>> 
>>>>>> spark-sql> use mydb; 
>>>>>> OK 
>>>>>> java.lang.NoSuchMethodError: org.apache.hadoop.hive.ql.Driver.getResults(Ljava/util/ArrayList;)Z
>>>>>>         at org.apache.spark.sql.hive.HiveContext.runHive(HiveContext.scala:305)
>>>>>>         at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:272)
>>>>>>         at org.apache.spark.sql.hive.execution.NativeCommand.sideEffectResult$lzycompute(NativeCommand.scala:35)
>>>>>>         at org.apache.spark.sql.hive.execution.NativeCommand.sideEffectResult(NativeCommand.scala:35)
>>>>>>         at org.apache.spark.sql.hive.execution.NativeCommand.execute(NativeCommand.scala:38)
>>>>>>         at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd$lzycompute(HiveContext.scala:360)
>>>>>>         at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd(HiveContext.scala:360)
>>>>>>         at org.apache.spark.sql.SchemaRDDLike$class.$init$(SchemaRDDLike.scala:58)
>>>>>>         at org.apache.spark.sql.SchemaRDD.<init>(SchemaRDD.scala:103)
>>>>>>         at org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:98)
>>>>>>         at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:58)
>>>>>>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:291)
>>>>>>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
>>>>>>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:226)
>>>>>>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
>>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>>         at java.lang.reflect.Method.invoke(Method.java:601)
>>>>>>         at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
>>>>>>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
>>>>>>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>>>>>> 
>>>>>> spark-sql> select count(*) from test; 
>>>>>> java.lang.NoSuchMethodError: com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode;
>>>>>>         at org.apache.spark.util.collection.OpenHashSet.org$apache$spark$util$collection$OpenHashSet$$hashcode(OpenHashSet.scala:261)
>>>>>>         at org.apache.spark.util.collection.OpenHashSet$mcI$sp.getPos$mcI$sp(OpenHashSet.scala:165)
>>>>>>         at org.apache.spark.util.collection.OpenHashSet$mcI$sp.contains$mcI$sp(OpenHashSet.scala:102)
>>>>>>         at org.apache.spark.util.SizeEstimator$$anonfun$visitArray$2.apply$mcVI$sp(SizeEstimator.scala:214)
>>>>>>         at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
>>>>>>         at org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:210)
>>>>>>         at org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:169)
>>>>>>         at org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:161)
>>>>>>         at org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:155)
>>>>>>         at org.apache.spark.util.collection.SizeTracker$class.takeSample(SizeTracker.scala:78)
>>>>>>         at org.apache.spark.util.collection.SizeTracker$class.afterUpdate(SizeTracker.scala:70)
>>>>>>         at org.apache.spark.util.collection.SizeTrackingVector.$plus$eq(SizeTrackingVector.scala:31)
>>>>>>         at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:236)
>>>>>>         at org.apache.spark.storage.MemoryStore.putIterator(MemoryStore.scala:126)
>>>>>>         at org.apache.spark.storage.MemoryStore.putIterator(MemoryStore.scala:104)
>>>>>>         at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:743)
>>>>>>         at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:594)
>>>>>>         at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:865)
>>>>>>         at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:79)
>>>>>>         at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:68)
>>>>>>         at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:36)
>>>>>>         at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:29)
>>>>>>         at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
>>>>>>         at org.apache.spark.SparkContext.broadcast(SparkContext.scala:809)
>>>>>>         at org.apache.spark.sql.hive.HadoopTableReader.<init>(TableReader.scala:68)
>>>>>>         at org.apache.spark.sql.hive.execution.HiveTableScan.<init>(HiveTableScan.scala:68)
>>>>>>         at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$$anonfun$14.apply(HiveStrategies.scala:188)
>>>>>>         at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$$anonfun$14.apply(HiveStrategies.scala:188)
>>>>>>         at org.apache.spark.sql.SQLContext$SparkPlanner.pruneFilterProject(SQLContext.scala:364)
>>>>>>         at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$.apply(HiveStrategies.scala:184)
>>>>>>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>>>>>>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>>>>>>         at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
>>>>>>         at org.apache.spark.sql.catalyst.planning.QueryPlanner.apply(QueryPlanner.scala:59)
>>>>>>         at org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54)
>>>>>>         at org.apache.spark.sql.execution.SparkStrategies$HashAggregation$.apply(SparkStrategies.scala:146)
>>>>>>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>>>>>>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>>>>>>         at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
>>>>>>         at org.apache.spark.sql.catalyst.planning.QueryPlanner.apply(QueryPlanner.scala:59)
>>>>>>         at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:402)
>>>>>>         at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:400)
>>>>>>         at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:406)
>>>>>>         at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:406)
>>>>>>         at org.apache.spark.sql.hive.HiveContext$QueryExecution.stringResult(HiveContext.scala:406)
>>>>>>         at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:59)
>>>>>>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:291)
>>>>>>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
>>>>>>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:226)
>>>>>>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
>>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>>         at java.lang.reflect.Method.invoke(Method.java:601)
>>>>>>         at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
>>>>>>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
>>>>>>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
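>>>>>> 
>>>>>> The hashInt failure is the classic sign of an older Guava (typically the one Hadoop ships) shadowing the newer Guava Spark expects. A sketch for spotting the duplicate, assuming standard SPARK_HOME/HADOOP_HOME layouts:
>>>>>> 
>>>>>>     # list every guava jar the job could pick up; more than one version means a conflict
>>>>>>     find "$SPARK_HOME" "$HADOOP_HOME" -name 'guava-*.jar' 2>/dev/null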

> 
