spark-user mailing list archives

From Koert Kuipers <ko...@tresata.com>
Subject Re: Issue with compiling Scala with Spark 2
Date Sun, 14 Aug 2016 16:16:41 GMT
HiveContext is gone.

SparkSession now combines the functionality of SQLContext and HiveContext (if
Hive support is available).
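[Editor's note: a minimal sketch of the Spark 2 pattern described above, assuming spark-hive 2.0.0 is on the classpath and Hive is configured; the app name and database come from the code quoted later in the thread.]

```scala
import org.apache.spark.sql.SparkSession

object SparkSessionSketch {
  def main(args: Array[String]): Unit = {
    // SparkSession.builder replaces both SQLContext and HiveContext;
    // enableHiveSupport() takes the place of new HiveContext(sc).
    val spark = SparkSession.builder()
      .appName("ETL_scratchpad_dummy")
      .enableHiveSupport()
      .getOrCreate()

    // Former HiveContext.sql(...) calls go through the session directly.
    spark.sql("use oraclehadoop")

    // The underlying SparkContext is still reachable if needed:
    val sc = spark.sparkContext

    spark.stop()
  }
}
```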

On Sun, Aug 14, 2016 at 12:12 PM, Mich Talebzadeh <mich.talebzadeh@gmail.com
> wrote:

> Thanks Koert,
>
> I did that before as well. Anyway, these are the dependencies:
>
> libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"
> libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0"
> libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.0.0"
>
>
> and the error:
>
>
> [info] Compiling 1 Scala source to /data6/hduser/scala/ETL_scratchpad_dummy/target/scala-2.10/classes...
> [error] /data6/hduser/scala/ETL_scratchpad_dummy/src/main/scala/ETL_scratchpad_dummy.scala:4: object hive is not a member of package org.apache.spark.sql
> [error] import org.apache.spark.sql.hive.HiveContext
> [error]                             ^
> [error] /data6/hduser/scala/ETL_scratchpad_dummy/src/main/scala/ETL_scratchpad_dummy.scala:20: object hive is not a member of package org.apache.spark.sql
> [error]   val HiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
>
>
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
> On 14 August 2016 at 17:00, Koert Kuipers <koert@tresata.com> wrote:
>
>> You cannot mix Spark 1 and Spark 2 jars.
>>
>> change this
>> libraryDependencies += "org.apache.spark" %% "spark-hive" % "1.5.1"
>> to
>> libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.0.0"
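[Editor's note: a build.sbt sketch along the lines of the fix above, with the Spark version factored into a single val so the artifacts cannot drift apart; Spark 2.0.0 artifacts are published for Scala 2.11, so scalaVersion must match.]

```scala
// build.sbt: keep all Spark artifacts pinned to one version.
name := "scala"
version := "1.0"
scalaVersion := "2.11.7" // Spark 2.0.0 is built against Scala 2.11

val sparkVersion = "2.0.0"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql"  % sparkVersion,
  "org.apache.spark" %% "spark-hive" % sparkVersion
)
```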
>>
>> On Sun, Aug 14, 2016 at 11:58 AM, Mich Talebzadeh <
>> mich.talebzadeh@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> In Spark 2 I am using sbt or mvn to compile my Scala program. This used
>>> to compile and run perfectly with Spark 1.6.1, but now it throws an error.
>>>
>>>
>>> I believe the problem is here. I have
>>>
>>> name := "scala"
>>> version := "1.0"
>>> scalaVersion := "2.11.7"
>>> libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"
>>> libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0"
>>> libraryDependencies += "org.apache.spark" %% "spark-hive" % "1.5.1"
>>>
>>> However, the error I am getting is:
>>>
>>> [error] bad symbolic reference. A signature in HiveContext.class refers
>>> to type Logging
>>> [error] in package org.apache.spark which is not available.
>>> [error] It may be completely missing from the current classpath, or the
>>> version on
>>> [error] the classpath might be incompatible with the version used when
>>> compiling HiveContext.class.
>>> [error] one error found
>>> [error] (compile:compileIncremental) Compilation failed
>>>
>>>
>>> And this is the code:
>>>
>>> import org.apache.spark.SparkContext
>>> import org.apache.spark.SparkConf
>>> import org.apache.spark.sql.Row
>>> import org.apache.spark.sql.hive.HiveContext
>>> import org.apache.spark.sql.types._
>>> import org.apache.spark.sql.SparkSession
>>> import org.apache.spark.sql.functions._
>>> object ETL_scratchpad_dummy {
>>>   def main(args: Array[String]) {
>>>   val conf = new SparkConf().
>>>                setAppName("ETL_scratchpad_dummy").
>>>                set("spark.driver.allowMultipleContexts", "true").
>>>                set("enableHiveSupport","true")
>>>   val sc = new SparkContext(conf)
>>>   //import sqlContext.implicits._
>>>   val HiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
>>>   HiveContext.sql("use oraclehadoop")
>>>
>>>
>>> Has anyone come across this?
>>>
>>>
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>
