spark-issues mailing list archives

From "Pat Ferrel (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (SPARK-2075) Anonymous classes are missing from Spark distribution
Date Tue, 21 Oct 2014 00:57:34 GMT

    [ https://issues.apache.org/jira/browse/SPARK-2075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177748#comment-14177748 ]

Pat Ferrel edited comment on SPARK-2075 at 10/21/14 12:57 AM:
--------------------------------------------------------------

Is there any more on this?

Building Spark from the 1.1.0 tar for Hadoop 1.2.1 works; all is well. I'm trying to upgrade
Mahout to use Spark 1.1.0. The Mahout 1.0-snapshot source builds, and the build tests pass with
Spark 1.1.0 as a Maven dependency. I'm now running the Mahout build on some bigger data, using
my dev machine as a standalone single-node Spark cluster, so the same code that executed in the
build tests is running, just in single-node cluster mode. Also, since I built Spark myself, I
assume it is using the artifact from my .m2 Maven cache, but I'm not 100% sure of that. In any
case, I get the ClassNotFoundException below.

I assume the missing class is the anonymous function passed in a call shaped roughly like:

```
    rdd.map(
      {anon function}
    ).saveAsTextFile(path)
```

So shouldn't that function be in the Mahout jar (it isn't)? Isn't the function passed in from
Mahout? I don't understand why it matters how Spark was built.
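
(Aside, my own illustration, not from this thread: anonymous classes compile to synthetic class files named after their enclosing class, so `RDD$$anonfun$saveAsTextFile$1` is generated from Spark's own `RDD.saveAsTextFile` source and has to ship in the Spark jar alongside `RDD`, not in the application jar. A minimal Java sketch of the naming scheme:)

```java
// Minimal sketch (illustrative, not Spark code): an anonymous class is
// compiled to its own synthetic class file, EnclosingClass$1.class, which
// must be on the runtime classpath next to its enclosing class -- the same
// way RDD$$anonfun$saveAsTextFile$1 belongs with RDD in the Spark jar.
public class Demo {
    public static String anonName() {
        // First anonymous class declared in Demo -> binary name "Demo$1".
        Runnable r = new Runnable() {
            @Override
            public void run() { }
        };
        return r.getClass().getName();
    }

    public static void main(String[] args) {
        System.out.println(anonName()); // prints "Demo$1"
    }
}
```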

Several other users are getting this for Spark 1.0.2. If we are doing something wrong in our
build process we'd appreciate a pointer.

Here's the error I get:

14/10/20 17:21:36 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 8.0 (TID 16, 192.168.0.2):
java.lang.ClassNotFoundException: org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1
        java.net.URLClassLoader$1.run(URLClassLoader.java:202)
        java.security.AccessController.doPrivileged(Native Method)
        java.net.URLClassLoader.findClass(URLClassLoader.java:190)
        java.lang.ClassLoader.loadClass(ClassLoader.java:306)
        java.lang.ClassLoader.loadClass(ClassLoader.java:247)
        java.lang.Class.forName0(Native Method)
        java.lang.Class.forName(Class.java:249)
        org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:59)
        java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1591)
        java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1496)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1750)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329)
        java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1970)
        java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1895)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1777)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329)
        java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1970)
        java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1895)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1777)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329)
        java.io.ObjectInputStream.readObject(ObjectInputStream.java:349)
        org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
        org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57)
        org.apache.spark.scheduler.Task.run(Task.scala:54)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
        java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
        java.lang.Thread.run(Thread.java:695)
  



> Anonymous classes are missing from Spark distribution
> -----------------------------------------------------
>
>                 Key: SPARK-2075
>                 URL: https://issues.apache.org/jira/browse/SPARK-2075
>             Project: Spark
>          Issue Type: Bug
>          Components: Build, Spark Core
>    Affects Versions: 1.0.0
>            Reporter: Paul R. Brown
>            Priority: Critical
>             Fix For: 1.0.1
>
>
> Running a job built against the Maven dep for 1.0.0 and the hadoop1 distribution produces:
> {code}
> java.lang.ClassNotFoundException:
> org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1
> {code}
> Here's what's in the Maven dep as of 1.0.0:
> {code}
> jar tvf ~/.m2/repository/org/apache/spark/spark-core_2.10/1.0.0/spark-core_2.10-1.0.0.jar | grep 'rdd/RDD' | grep 'saveAs'
>   1519 Mon May 26 13:57:58 PDT 2014 org/apache/spark/rdd/RDD$anonfun$saveAsTextFile$1.class
>   1560 Mon May 26 13:57:58 PDT 2014 org/apache/spark/rdd/RDD$anonfun$saveAsTextFile$2.class
> {code}
> And here's what's in the hadoop1 distribution:
> {code}
> jar tvf spark-assembly-1.0.0-hadoop1.0.4.jar | grep 'rdd/RDD' | grep 'saveAs'
> {code}
> I.e., it's not there.  It is in the hadoop2 distribution:
> {code}
> jar tvf spark-assembly-1.0.0-hadoop2.2.0.jar | grep 'rdd/RDD' | grep 'saveAs'
>   1519 Mon May 26 07:29:54 PDT 2014 org/apache/spark/rdd/RDD$anonfun$saveAsTextFile$1.class
>   1560 Mon May 26 07:29:54 PDT 2014 org/apache/spark/rdd/RDD$anonfun$saveAsTextFile$2.class
> {code}
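
(Editor's aside, a sketch not part of the original issue: the `jar tvf ... | grep` check quoted above can also be done programmatically with `java.util.jar.JarFile`. The demo below builds a throwaway jar containing one class-like entry and filters its entry names the same way; the entry name is only an example:)

```java
// Sketch: reproduce `jar tvf <jar> | grep <needle>` with java.util.jar.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

public class JarGrep {
    // Return the names of all jar entries containing the needle.
    public static List<String> grep(Path jar, String needle) throws IOException {
        List<String> hits = new ArrayList<>();
        try (JarFile jf = new JarFile(jar.toFile())) {
            for (Enumeration<JarEntry> e = jf.entries(); e.hasMoreElements();) {
                String name = e.nextElement().getName();
                if (name.contains(needle)) hits.add(name);
            }
        }
        return hits;
    }

    public static void main(String[] args) throws IOException {
        // Build a throwaway jar with one example entry to search for.
        Path jar = Files.createTempFile("demo", ".jar");
        try (JarOutputStream out = new JarOutputStream(Files.newOutputStream(jar))) {
            out.putNextEntry(new JarEntry(
                "org/apache/spark/rdd/RDD$$anonfun$saveAsTextFile$1.class"));
            out.closeEntry();
        }
        // An empty result here is exactly the hadoop1 symptom from the issue.
        System.out.println(grep(jar, "saveAsTextFile"));
    }
}
```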



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

