spark-user mailing list archives

From Sasi <sasikumar....@gmail.com>
Subject Re: Set EXTRA_JAR environment variable for spark-jobserver
Date Fri, 09 Jan 2015 08:04:14 GMT
Boris,

Yes, as you mentioned, we are creating a new SparkContext for our job. The
reason is that we need to define the Apache Cassandra connection through
SparkConf. We hope this approach should also work.
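To make the setup concrete, here is a minimal sketch of how we build the context, assuming the DataStax spark-cassandra-connector (the *spark.cassandra.connection.host* property comes from that connector; the master URL and host value below are placeholders):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: the master URL and the Cassandra host are placeholders.
// spark.cassandra.connection.host is the connection property used by the
// DataStax spark-cassandra-connector.
val conf = new SparkConf()
  .setAppName("sparking")
  .setMaster("local[*]")
  .set("spark.cassandra.connection.host", "127.0.0.1")

val sc = new SparkContext(conf)
```

Note that spark-jobserver jobs normally implement its SparkJob interface and receive a context managed by the server, rather than constructing their own; creating our own SparkContext inside the job is the part we are unsure about.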

For uploading the JAR, we followed these steps:
(1) Package the JAR using the *sbt package* command.
(2) Upload it using the *curl --data-binary
@target/scala-2.10/spark-jobserver-examples_2.10-1.0.0.jar
localhost:8090/jars/sparking* command,
as described at https://github.com/fedragon/spark-jobserver-examples.
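For completeness, a sketch of the two REST calls against spark-jobserver (the job class name in *classPath* is a hypothetical placeholder, not the actual class from the example repository):

```shell
# Upload the packaged JAR under the app name "sparking"
curl --data-binary @target/scala-2.10/spark-jobserver-examples_2.10-1.0.0.jar \
  localhost:8090/jars/sparking

# Start a job from the uploaded JAR; replace the classPath value
# with the fully qualified name of your job class
curl -d "" 'localhost:8090/jobs?appName=sparking&classPath=sparking.jobserver.MyJob'
```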

We built some samples earlier for connecting Apache Cassandra to Spark using
Scala. Initially, we faced the same *java.lang.NoClassDefFoundError* exception
when running the class, and we overcame it by passing the *--jars {required
JAR paths}* option to *spark-submit*. Eventually, we were able to run them
successfully as regular Spark apps. So we are confident in what has been
written for this spark-jobserver job.
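For reference, the spark-submit invocation we used looked roughly like this (the class name, JAR paths, and connector file names below are placeholders for illustration):

```shell
# Placeholders throughout: adjust the main class, the connector JAR paths,
# and the application JAR to match your build
spark-submit \
  --class sparking.CassandraApp \
  --master local[*] \
  --jars /path/to/spark-cassandra-connector_2.10.jar,/path/to/cassandra-driver-core.jar \
  target/scala-2.10/spark-jobserver-examples_2.10-1.0.0.jar
```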

To give you an update, we prepared an uber JAR (a single JAR containing all
dependencies) as Pankaj mentioned, and we are now facing a *SparkException:
Job aborted due to stage failure*, for which we will raise another post.
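In case it helps anyone following along, we produced the uber JAR with the sbt-assembly plugin instead of plain *sbt package*; a minimal sketch of the build configuration (the plugin and Spark versions are assumptions, check the versions matching your setup):

```scala
// project/plugins.sbt -- adds the sbt-assembly plugin (version is an assumption)
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.2")

// build.sbt (fragment) -- mark Spark itself as "provided" so the uber JAR
// bundles only the Cassandra connector and other runtime dependencies,
// since Spark classes are already on the jobserver's classpath
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.0" % "provided"
```

Running *sbt assembly* then produces the single JAR that we upload to the jobserver.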

Thank you once again for your suggestions.



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Set-EXTRA-JAR-environment-variable-for-spark-jobserver-tp20989p21054.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org

