spark-issues mailing list archives

From "Michael Armbrust (JIRA)" <j...@apache.org>
Subject [jira] [Created] (SPARK-1560) PySpark SQL depends on Java 7 only jars
Date Tue, 22 Apr 2014 02:01:19 GMT
Michael Armbrust created SPARK-1560:
---------------------------------------

             Summary: PySpark SQL depends on Java 7 only jars
                 Key: SPARK-1560
                 URL: https://issues.apache.org/jira/browse/SPARK-1560
             Project: Spark
          Issue Type: Bug
          Components: SQL
            Reporter: Michael Armbrust
            Priority: Blocker
             Fix For: 1.0.0


We need to republish the pickler jar, which is currently built with Java 7 (class file major version 51.0), so that PySpark SQL also runs on Java 6. Details below:

{code}
14/04/19 12:31:29 INFO rdd.HadoopRDD: Input split: file:/Users/ceteri/opt/spark-branch-1.0/examples/src/main/resources/people.txt:0+16
Exception in thread "Local computation of job 1" java.lang.UnsupportedClassVersionError: net/razorvine/pickle/Unpickler
: Unsupported major.minor version 51.0
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClassCond(ClassLoader.java:637)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:621)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
	at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
	at org.apache.spark.api.python.PythonRDD$$anonfun$pythonToJavaMap$1.apply(PythonRDD.scala:295)
	at org.apache.spark.api.python.PythonRDD$$anonfun$pythonToJavaMap$1.apply(PythonRDD.scala:294)
	at org.apache.spark.rdd.RDD$$anonfun$3.apply(RDD.scala:518)
	at org.apache.spark.rdd.RDD$$anonfun$3.apply(RDD.scala:518)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:243)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:234)
	at org.apache.spark.scheduler.DAGScheduler.runLocallyWithinThread(DAGScheduler.scala:700)
	at org.apache.spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:685)
{code}
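
For reference, a quick way to confirm which Java version the bundled pickler targets is to read the class file's major version directly (major 50 = Java 6, 51 = Java 7). This is only a diagnostic sketch: the jar path below is a placeholder to adjust for your build, while the class name is taken from the trace above.

{code}
# Check the class-file major version of the Unpickler class inside the pickler jar.
import struct
import zipfile

JAR_PATH = "lib/pyrolite.jar"  # placeholder path; point this at the jar shipped with your build
CLASS_NAME = "net/razorvine/pickle/Unpickler.class"

with zipfile.ZipFile(JAR_PATH) as jar:
    header = jar.read(CLASS_NAME)[:8]  # magic (4 bytes), minor (2 bytes), major (2 bytes)

magic, minor, major = struct.unpack(">IHH", header)
assert magic == 0xCAFEBABE, "not a valid class file"

# major 50 -> Java 6, 51 -> Java 7; a value of 51 reproduces the error above on a Java 6 runtime.
print("class file version: major=%d minor=%d" % (major, minor))
{code}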

