spark-user mailing list archives

From Koert Kuipers <ko...@tresata.com>
Subject Re: spark 0.8
Date Thu, 17 Oct 2013 23:10:03 GMT
even the simplest job will produce these EOFException errors in the
workers. for example:

import org.apache.spark.SparkContext

object SimpleJob extends App {
  // use Kryo instead of the default Java serializer
  System.setProperty("spark.serializer",
    "org.apache.spark.serializer.KryoSerializer")
  val sc = new SparkContext("spark://master:7077", "Simple Job",
    "/usr/local/lib/spark", List("dist/myjar-0.1-SNAPSHOT.jar"))
  val songs = sc.textFile("hdfs://master:8020/user/koert/songs")
  println("songs count: " + songs.count)
}

did something change in how i am supposed to launch jobs?
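[Editor's note: the usual cause of an EOFException like the one below is a Hadoop client/server version mismatch, i.e. the Spark assembly was built against a different Hadoop version than the cluster runs. A minimal rebuild sketch for a CDH4 cluster follows; the exact version string is an assumption, so substitute whatever `hadoop version` reports on your cluster.]

```shell
# Rebuild the Spark 0.8 assembly against the cluster's Hadoop version.
# 2.0.0-mr1-cdh4.3.0 is assumed to match the CDH4.3.0 cluster mentioned
# in this thread; adjust it to your cluster's actual version.
SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.3.0 sbt/sbt assembly
```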



On Thu, Oct 17, 2013 at 6:38 PM, Koert Kuipers <koert@tresata.com> wrote:

> oh well, i spoke too soon. spark-shell still works, but any Scala/Java
> program still throws this error.
>
>
> On Thu, Oct 17, 2013 at 6:18 PM, dachuan <hdc1112@gmail.com> wrote:
>
>> yeah, I mean 0.9.0-SNAPSHOT. I used git clone and that's what I got.
>> What's the difference between SNAPSHOT and non-SNAPSHOT?
>>
>>
>> On Thu, Oct 17, 2013 at 6:15 PM, Mark Hamstra <mark@clearstorydata.com>wrote:
>>
>>> Of course, you mean 0.9.0-SNAPSHOT.  There is no Spark 0.9.0, and won't
>>> be for several months.
>>>
>>>
>>>
>>> On Thu, Oct 17, 2013 at 3:11 PM, dachuan <hdc1112@gmail.com> wrote:
>>>
>>>> I'm sorry if this doesn't answer your question directly, but I have
>>>> tried spark 0.9.0 and hdfs 1.0.4 just now, and it works.
>>>>
>>>>
>>>> On Thu, Oct 17, 2013 at 6:05 PM, Koert Kuipers <koert@tresata.com>wrote:
>>>>
>>>>> after upgrading from spark 0.7 to spark 0.8 i can no longer access any
>>>>> files on HDFS. i see the error below. any ideas?
>>>>>
>>>>> i am running spark standalone on a cluster that also has CDH4.3.0 and
>>>>> rebuilt spark accordingly. the jars in lib_managed look good to me.
>>>>>
>>>>> i noticed similar errors in the mailing list archives but found no
>>>>> suggested solutions.
>>>>>
>>>>> thanks! koert
>>>>>
>>>>>
>>>>> 13/10/17 17:43:23 ERROR Executor: Exception in task ID 0
>>>>> java.io.EOFException
>>>>> 	at java.io.ObjectInputStream$BlockDataInputStream.readFully(ObjectInputStream.java:2703)
>>>>> 	at java.io.ObjectInputStream.readFully(ObjectInputStream.java:1008)
>>>>> 	at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:68)
>>>>> 	at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:106)
>>>>> 	at org.apache.hadoop.io.UTF8.readChars(UTF8.java:258)
>>>>> 	at org.apache.hadoop.io.UTF8.readString(UTF8.java:250)
>>>>> 	at org.apache.hadoop.mapred.FileSplit.readFields(FileSplit.java:87)
>>>>> 	at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:280)
>>>>> 	at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:75)
>>>>> 	at org.apache.spark.SerializableWritable.readObject(SerializableWritable.scala:39)
>>>>> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>> 	at java.lang.reflect.Method.invoke(Method.java:597)
>>>>> 	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:969)
>>>>> 	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1852)
>>>>> 	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1756)
>>>>> 	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1326)
>>>>> 	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1950)
>>>>> 	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1874)
>>>>> 	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1756)
>>>>> 	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1326)
>>>>> 	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
>>>>> 	at org.apache.spark.scheduler.ResultTask.readExternal(ResultTask.scala:135)
>>>>> 	at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:1795)
>>>>> 	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1754)
>>>>> 	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1326)
>>>>> 	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
>>>>> 	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:39)
>>>>> 	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:61)
>>>>> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:153)
>>>>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
>>>>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
>>>>> 	at java.lang.Thread.run(Thread.java:662)
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Dachuan Huang
>>>> Cellphone: 614-390-7234
>>>> 2015 Neil Avenue
>>>> Ohio State University
>>>> Columbus, Ohio
>>>> U.S.A.
>>>> 43210
>>>>
>>>
>>>
>>
>>
>> --
>> Dachuan Huang
>> Cellphone: 614-390-7234
>> 2015 Neil Avenue
>> Ohio State University
>> Columbus, Ohio
>> U.S.A.
>> 43210
>>
>
>
