Hello,

Thank you for your contribution.

We have tried to reproduce your error but we need more information:

- Which Spark version are you using? The Stratio Spark-Mongodb connector supports SparkSQL 1.2.x.

- What jars are you adding while launching the Spark-shell?
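
For reference, both can be checked directly: sc.version inside the shell prints the running Spark version, and the connector jar is whatever was passed at launch (the jar path below is only an illustration):

scala> sc.version   // prints the running Spark version, e.g. "1.2.1"

$ bin/spark-shell --jars /path/to/spark-mongodb-core.jar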

Best regards,

2015-03-03 14:06 GMT+01:00 Cheng, Hao <hao.cheng@intel.com>:

As the call stack shows, the MongoDB connector is not compatible with the Spark SQL Data Source interface. The Data Source API has changed since 1.2; you probably need to confirm which Spark version the MongoDB connector was built against.
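
For illustration, this failure mode can be reconstructed with two toy types (a sketch with hypothetical names Scan and Rel, not the actual Spark or connector sources; it takes two separate compilations to reproduce):

// Scan.scala, version A, on the compile-time classpath:
abstract class Scan { def buildScan(): Unit }

// Rel.scala, compiled against version A:
class Rel extends Scan { def buildScan(): Unit = () }
// javap Rel  =>  "class Rel extends Scan"  (Scan is recorded as Rel's JVM super class)

// Scan.scala, version B, swapped in at run time:
trait Scan { def buildScan(): Unit }

// Loading Rel against version B now fails with:
// java.lang.IncompatibleClassChangeError: class Rel has interface Scan as super class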


By the way, a well-formatted call stack is more helpful for people reading it.

From: taoewang [mailto:taoewang@sequoiadb.com]
Sent: Tuesday, March 3, 2015 7:39 PM
To: user@spark.apache.org
Subject: java.lang.IncompatibleClassChangeError when using PrunedFilteredScan


Hi,


I'm trying to build the Stratio spark-mongodb connector, and I get the error "java.lang.IncompatibleClassChangeError: class com.stratio.deep.mongodb.MongodbRelation has interface org.apache.spark.sql.sources.PrunedFilteredScan as super class" when trying to create a table using the driver:


scala> import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.SQLContext

scala> val sqlContext = new SQLContext(sc)
sqlContext: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@37050c15

scala> import com.stratio.deep.mongodb
import com.stratio.deep.mongodb

scala> sqlContext.sql("CREATE TEMPORARY TABLE students_table USING com.stratio.deep.mongodb OPTIONS (host 'host:port', database 'highschool', collection 'students')")
java.lang.IncompatibleClassChangeError: class com.stratio.deep.mongodb.MongodbRelation has interface org.apache.spark.sql.sources.PrunedFilteredScan as super class
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.getDeclaredConstructors0(Native Method)
    at java.lang.Class.privateGetDeclaredConstructors(Class.java:2585)
    at java.lang.Class.getConstructor0(Class.java:2885)
    at java.lang.Class.newInstance(Class.java:350)
    at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:288)
    at org.apache.spark.sql.sources.CreateTempTableUsing.run(ddl.scala:376)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:55)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:55)


The code failed at line 288 in ddl.scala:

  def apply(
      sqlContext: SQLContext,
      userSpecifiedSchema: Option[StructType],
      provider: String,
      options: Map[String, String]): ResolvedDataSource = {
    val clazz: Class[_] = lookupDataSource(provider)
    val relation = userSpecifiedSchema match {
      case Some(schema: StructType) => clazz.newInstance() match {
        case dataSource: SchemaRelationProvider =>
          dataSource.createRelation(sqlContext, new CaseInsensitiveMap(options), schema)
        case dataSource: org.apache.spark.sql.sources.RelationProvider =>
          sys.error(s"${clazz.getCanonicalName} does not allow user-specified schemas.")
      }

      case None => clazz.newInstance() match { // <--- failed here
        case dataSource: RelationProvider =>
          dataSource.createRelation(sqlContext, new CaseInsensitiveMap(options))
        case dataSource: org.apache.spark.sql.sources.SchemaRelationProvider =>
          sys.error(s"A schema needs to be specified when using ${clazz.getCanonicalName}.")
      }
    }
    new ResolvedDataSource(clazz, relation)
  }
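
For comparison, the Some(schema) branch above would be taken if the DDL itself supplied column definitions, e.g. (hypothetical columns, assuming the build in use accepts a schema in the DDL):

scala> sqlContext.sql("CREATE TEMPORARY TABLE students_table (name STRING, age INT) USING com.stratio.deep.mongodb OPTIONS (host 'host:port', database 'highschool', collection 'students')")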


The “clazz” here is com.stratio.deep.mongodb.DefaultSource, which extends RelationProvider:

class DefaultSource extends RelationProvider {

  override def createRelation(
    sqlContext: SQLContext,
    parameters: Map[String, String]): BaseRelation = {

    /** We will assume hosts are provided like 'host:port,host2:port2,...' */
    val host = parameters
      .getOrElse(Host, notFound[String](Host))
      .split(",").toList

    val database = parameters.getOrElse(Database, notFound(Database))

    val collection = parameters.getOrElse(Collection, notFound(Collection))

    val samplingRatio = parameters
      .get(SamplingRatio)
      .map(_.toDouble).getOrElse(DefaultSamplingRatio)

    MongodbRelation(
      MongodbConfigBuilder()
        .set(Host, host)
        .set(Database, database)
        .set(Collection, collection)
        .set(SamplingRatio, samplingRatio).build())(sqlContext)
  }
}


The createRelation function returns a MongodbRelation, which extends PrunedFilteredScan:

case class MongodbRelation(
  config: DeepConfig,
  schemaProvided: Option[StructType] = None)(
  @transient val sqlContext: SQLContext) extends PrunedFilteredScan {


Since both TableScan and PrunedFilteredScan are based on BaseRelation, I'm not sure why the clazz.newInstance() call fails with a java.lang.IncompatibleClassChangeError.

Is there anything special I need to do if I want to use PrunedFilteredScan?
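
One quick check that narrows this down (plain JDK reflection, runnable in the same spark-shell) is whether the PrunedFilteredScan on the runtime classpath is a class or an interface:

scala> Class.forName("org.apache.spark.sql.sources.PrunedFilteredScan").isInterface

If that returns true while the connector jar was compiled against a Spark where PrunedFilteredScan is an abstract class (or the other way around), the mismatch alone produces exactly this IncompatibleClassChangeError.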


I'm using Scala 2.10 and JDK 1.7.


Thanks

TW


--
Gaspar Muñoz
@gmunozsoria

Vía de las dos Castillas, 33, Ática 4, 3ª Planta
28224 Pozuelo de Alarcón, Madrid
Tel: +34 91 352 59 42 // @stratiobd