spark-user mailing list archives

From Koert Kuipers <ko...@tresata.com>
Subject Re: Issue with compiling Scala with Spark 2
Date Sun, 14 Aug 2016 16:00:15 GMT
You cannot mix Spark 1 and Spark 2 jars on the same classpath; all spark-* dependencies must come from the same Spark release.

change this
libraryDependencies += "org.apache.spark" %% "spark-hive" % "1.5.1"
to
libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.0.0"
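With that change, all the Spark modules agree on one version. A minimal build.sbt along these lines should work (a sketch based on the snippet below; note that the Spark 2.0.0 artifacts are published for Scala 2.11, so the `%%` cross-versioning resolves against your `scalaVersion`):

```scala
name := "scala"

version := "1.0"

scalaVersion := "2.11.7"

// All Spark modules must come from the same Spark release,
// otherwise you get "bad symbolic reference" errors at compile time.
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"
libraryDependencies += "org.apache.spark" %% "spark-sql"  % "2.0.0"
libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.0.0"
```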

On Sun, Aug 14, 2016 at 11:58 AM, Mich Talebzadeh <mich.talebzadeh@gmail.com
> wrote:

> Hi,
>
> In Spark 2 I am using sbt or mvn to compile my Scala program. This used to
> compile and run perfectly with Spark 1.6.1, but now it is throwing an error.
>
>
> I believe the problem is here. I have
>
> name := "scala"
> version := "1.0"
> scalaVersion := "2.11.7"
> libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"
> libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0"
> libraryDependencies += "org.apache.spark" %% "spark-hive" % "1.5.1"
>
> However the error I am getting is
>
> [error] bad symbolic reference. A signature in HiveContext.class refers to
> type Logging
> [error] in package org.apache.spark which is not available.
> [error] It may be completely missing from the current classpath, or the
> version on
> [error] the classpath might be incompatible with the version used when
> compiling HiveContext.class.
> [error] one error found
> [error] (compile:compileIncremental) Compilation failed
>
>
> And this is the code
>
> import org.apache.spark.SparkContext
> import org.apache.spark.SparkConf
> import org.apache.spark.sql.Row
> import org.apache.spark.sql.hive.HiveContext
> import org.apache.spark.sql.types._
> import org.apache.spark.sql.SparkSession
> import org.apache.spark.sql.functions._
> object ETL_scratchpad_dummy {
>   def main(args: Array[String]) {
>   val conf = new SparkConf().
>                setAppName("ETL_scratchpad_dummy").
>                set("spark.driver.allowMultipleContexts", "true").
>                set("enableHiveSupport","true")
>   val sc = new SparkContext(conf)
>   //import sqlContext.implicits._
>   val HiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
>   HiveContext.sql("use oraclehadoop")
>
>
> Has anyone come across this?
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
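Also note that in Spark 2, HiveContext is deprecated: Hive support is enabled on the SparkSession builder, not via a SparkConf key (the `set("enableHiveSupport","true")` line in your snippet has no effect). Something like this sketch, assuming your Hive configuration (hive-site.xml) is on the classpath:

```scala
import org.apache.spark.sql.SparkSession

object ETL_scratchpad_dummy {
  def main(args: Array[String]): Unit = {
    // In Spark 2, SparkSession replaces SQLContext/HiveContext.
    // enableHiveSupport() is a builder method, not a SparkConf setting.
    val spark = SparkSession.builder()
      .appName("ETL_scratchpad_dummy")
      .enableHiveSupport()
      .getOrCreate()

    spark.sql("use oraclehadoop")

    spark.stop()
  }
}
```

You also no longer need to create a SparkContext yourself; it is available as `spark.sparkContext` if you need it.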
