spark-user mailing list archives

From: Sean Owen <so...@cloudera.com>
Subject: Re: Hadoop interface vs class
Date: Thu, 26 Jun 2014 16:14:58 GMT
Yes it does. The idea is to override the dependency if needed. I thought
you mentioned that you had built for Hadoop 2.
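
For example, a minimal build.sbt sketch of that override (the Hadoop
version here is illustrative; use the one that matches your cluster):

    // Exclude the Hadoop 1 client that spark-core pulls in transitively,
    // then depend on the Hadoop 2 client explicitly.
    libraryDependencies ++= Seq(
      ("org.apache.spark" %% "spark-core" % "1.0.0")
        .exclude("org.apache.hadoop", "hadoop-client"),
      "org.apache.hadoop" % "hadoop-client" % "2.2.0"
    )
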
On Jun 26, 2014 11:07 AM, "Robert James" <srobertjames@gmail.com> wrote:

> Yes.  As far as I can tell, Spark seems to be including Hadoop 1 via
> its transitive dependency:
> http://mvnrepository.com/artifact/org.apache.spark/spark-core_2.10/1.0.0
> - shows a dependency on Hadoop 1.0.4, which I'm perplexed by.
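>
> A quick way to confirm what a build actually pulls in is to print the
> dependency tree and filter for Hadoop artifacts, e.g. with Maven:
>
>     mvn dependency:tree -Dincludes=org.apache.hadoop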
>
> On 6/26/14, Sean Owen <sowen@cloudera.com> wrote:
> > The "found interface" part means Hadoop 2 is on your runtime
> > classpath, since TaskAttemptContext became an interface in Hadoop 2.
> > The "class was expected" part means the failing code was compiled
> > against Hadoop 1, where it was still a class. So the error indicates
> > that Hadoop 1 and Hadoop 2 artifacts are mixed somewhere.
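> >
> > A quick runtime check (a minimal sketch, run from the Spark shell) is
> > to ask which jar the class is loaded from and whether it is an
> > interface:
> >
> >     val c = Class.forName("org.apache.hadoop.mapreduce.TaskAttemptContext")
> >     println(c.getProtectionDomain.getCodeSource.getLocation)  // which jar supplied it
> >     println(c.isInterface)  // true on Hadoop 2, false on Hadoop 1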
> >
> > On Wed, Jun 25, 2014 at 4:41 PM, Robert James <srobertjames@gmail.com>
> > wrote:
> >> After upgrading to Spark 1.0.0, I get this error:
> >>
> >>  ERROR org.apache.spark.executor.ExecutorUncaughtExceptionHandler -
> >> Uncaught exception in thread Thread[Executor task launch
> >> worker-2,5,main]
> >> java.lang.IncompatibleClassChangeError: Found interface
> >> org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
> >>
> >> I thought this was caused by a dependency on Hadoop 1.0.4 (even though
> >> I downloaded the Spark 1.0.0 binary for Hadoop 2), but I can't seem to
> >> fix it. Any advice?
> >
>
