spark-dev mailing list archives

From Steve Nunez <snu...@hortonworks.com>
Subject Re: Issues with HDP 2.4.0.2.1.3.0-563
Date Mon, 04 Aug 2014 14:13:23 GMT
Provided you’ve got the HWX repo in your pom.xml, you can build with this
line:

mvn -Pyarn -Phive -Phadoop-2.4 -Dhadoop.version=2.4.0.2.1.1.0-385 -DskipTests clean package
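
The "HWX repo in your pom.xml" mentioned above might look something like the fragment below. This is a sketch: the repository id is arbitrary, and the URL is the commonly used Hortonworks releases repository, which you should verify against Hortonworks' current documentation.

```xml
<!-- Hypothetical pom.xml fragment: adds the Hortonworks releases repo so
     Maven can resolve HDP-versioned Hadoop artifacts such as
     2.4.0.2.1.1.0-385. The id is arbitrary; the URL is an assumption. -->
<repositories>
  <repository>
    <id>hortonworks-releases</id>
    <url>http://repo.hortonworks.com/content/repositories/releases/</url>
    <releases><enabled>true</enabled></releases>
    <snapshots><enabled>false</enabled></snapshots>
  </repository>
</repositories>
```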

I haven’t tried building a distro, but it should be similar.
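
For the distro case, a sketch of the equivalent make-distribution.sh invocation for Spark 1.0.x follows. The flag names (--hadoop, --with-yarn, --with-hive) are assumptions based on the 1.0.x script and changed in later releases, so check the header comments of the script in your source tree before running it.

```shell
# Hypothetical invocation: builds a binary distribution against an
# HDP-versioned Hadoop. Flag names assume the Spark 1.0.x script.
./make-distribution.sh --hadoop 2.4.0.2.1.1.0-385 --with-yarn --with-hive
```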


	- SteveN

On 8/4/14, 1:25, "Sean Owen" <sowen@cloudera.com> wrote:

>For any Hadoop 2.4 distro, yes, set hadoop.version but also set
>-Phadoop-2.4. http://spark.apache.org/docs/latest/building-with-maven.html
>
>On Mon, Aug 4, 2014 at 9:15 AM, Patrick Wendell <pwendell@gmail.com>
>wrote:
>> For hortonworks, I believe it should work to just link against the
>> corresponding upstream version, i.e. just set the Hadoop version to
>> "2.4.0".
>>
>> Does that work?
>>
>> - Patrick
>>
>>
>> On Mon, Aug 4, 2014 at 12:13 AM, Ron's Yahoo!
>><zlgonzalez@yahoo.com.invalid>
>> wrote:
>>>
>>> Hi,
>>>   Not sure whose issue this is, but if I run make-distribution using
>>> HDP 2.4.0.2.1.3.0-563 as the hadoop version (replacing it in
>>> make-distribution.sh), I get a strange error with the exception below.
>>> If I use a slightly older version of HDP (2.4.0.2.1.2.0-402) with
>>> make-distribution, using the generated assembly all works fine for me.
>>> Either 1.0.0 or 1.0.1 will work fine.
>>>
>>>   Should I file a JIRA or is this a known issue?
>>>
>>> Thanks,
>>> Ron
>>>
>>> Exception in thread "main" org.apache.spark.SparkException: Job aborted
>>> due to stage failure: Task 0.0:0 failed 1 times, most recent failure:
>>> Exception failure in TID 0 on host localhost:
>>> java.lang.IncompatibleClassChangeError: Found interface
>>> org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
>>>         org.apache.avro.mapreduce.AvroKeyInputFormat.createRecordReader(AvroKeyInputFormat.java:47)
>>>         org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:111)
>>>         org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:99)
>>>         org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:61)
>>>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>>>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>>>         org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>>>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>>>         org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:77)
>>>         org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
>>>         org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>>>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>>>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>>>         org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
>>>         org.apache.spark.scheduler.Task.run(Task.scala:51)
>>>         org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
>>>         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>         java.lang.Thread.run(Thread.java:745)
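
The IncompatibleClassChangeError in the trace above is the classic symptom of mixed Hadoop MapReduce APIs: TaskAttemptContext was a class in Hadoop 1.x and became an interface in Hadoop 2.x, so an avro-mapred artifact compiled against Hadoop 1 fails at runtime against Hadoop 2. A commonly cited workaround is to depend on the hadoop2-classified avro-mapred artifact instead; the fragment below is a sketch, where the Avro version number and exact placement in your build are assumptions to verify against your dependency tree.

```xml
<!-- Hypothetical pom.xml fragment: pulls in the Hadoop-2-compatible build
     of avro-mapred via its "hadoop2" classifier. The version is an
     assumption; check mvn dependency:tree for what your build resolves. -->
<dependency>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro-mapred</artifactId>
  <version>1.7.6</version>
  <classifier>hadoop2</classifier>
</dependency>
```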
>>
>>
>
>---------------------------------------------------------------------
>To unsubscribe, e-mail: dev-unsubscribe@spark.apache.org
>For additional commands, e-mail: dev-help@spark.apache.org
>





