spark-issues mailing list archives

From "Roi Reshef (JIRA)" <>
Subject [jira] [Commented] (SPARK-10789) Cluster mode SparkSubmit classpath only includes Spark assembly
Date Mon, 21 Dec 2015 10:40:46 GMT


Roi Reshef commented on SPARK-10789:

Any resolution on that? Can you elaborate on how you were able to bypass the limitations
you described? I'm trying to add Netlib, so far unsuccessfully. I'm also restricted to running the
driver in yarn-client mode rather than yarn-cluster - will your temporary solution work in
client (local) mode?
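(For context, the two classpath routes at issue, sketched with hypothetical EMR-style paths; spark.driver.extraClassPath / spark.executor.extraClassPath are the supported settings, SPARK_CLASSPATH the deprecated environment variable:)

```properties
# spark-defaults.conf -- the supported route (paths are illustrative only):
spark.driver.extraClassPath   /usr/share/aws/emr/emrfs/lib/*
spark.executor.extraClassPath /usr/share/aws/emr/emrfs/lib/*

# Deprecated route (set in the shell), which does reach the SparkSubmit JVM
# but is mutually exclusive with the settings above:
# export SPARK_CLASSPATH=/usr/share/aws/emr/emrfs/lib/*
```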

> Cluster mode SparkSubmit classpath only includes Spark assembly
> ---------------------------------------------------------------
>                 Key: SPARK-10789
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Submit
>    Affects Versions: 1.5.0
>            Reporter: Jonathan Kelly
> When using cluster deploy mode, the classpath of the SparkSubmit process that gets launched
only includes the Spark assembly and not spark.driver.extraClassPath. This is of course by
design, since the driver actually runs on the cluster and not inside the SparkSubmit process.
> However, if the SparkSubmit process, minimal as it may be, needs any extra libraries
that are not part of the Spark assembly, there is no good way to include them. (I say "no
good way" because including them in the SPARK_CLASSPATH environment variable does cause the
SparkSubmit process to include them, but this is not acceptable because this environment variable
has long been deprecated, and it prevents the use of spark.driver.extraClassPath.)
> An example of when this matters is on Amazon EMR when using an S3 path for the application
JAR and running in yarn-cluster mode. The SparkSubmit process needs the EmrFileSystem implementation
and its dependencies in the classpath in order to download the application JAR from S3, so
it fails with a ClassNotFoundException. (EMR currently gets around this by setting SPARK_CLASSPATH,
but as mentioned above this is less than ideal.)
> I have tried modifying SparkSubmitCommandBuilder to include the driver extra classpath
whether it's client mode or cluster mode, and this seems to work, but I don't know if there
is any downside to this.
> Example that fails on emr-4.0.0 (if you switch to setting spark.(driver,executor).extraClassPath
> instead of SPARK_CLASSPATH):
>
> spark-submit --deploy-mode cluster --class org.apache.spark.examples.JavaWordCount s3://my-bucket/spark-examples.jar s3://my-bucket/word-count-input.txt
> Resulting Exception:
> Exception in thread "main" java.lang.RuntimeException: java.lang.ClassNotFoundException: Class not found
> 	at org.apache.hadoop.conf.Configuration.getClass(
> 	at org.apache.hadoop.fs.FileSystem.getFileSystemClass(
> 	at org.apache.hadoop.fs.FileSystem.createFileSystem(
> 	at org.apache.hadoop.fs.FileSystem.access$200(
> 	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(
> 	at org.apache.hadoop.fs.FileSystem$Cache.get(
> 	at org.apache.hadoop.fs.FileSystem.get(
> 	at org.apache.hadoop.fs.Path.getFileSystem(
> 	at org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:233)
> 	at org.apache.spark.deploy.yarn.Client.org$apache$spark$deploy$yarn$Client$$distribute$1(Client.scala:327)
> 	at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$5.apply(Client.scala:366)
> 	at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$5.apply(Client.scala:364)
> 	at scala.collection.immutable.List.foreach(List.scala:318)
> 	at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:364)
> 	at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:629)
> 	at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:119)
> 	at
> 	at org.apache.spark.deploy.yarn.Client$.main(Client.scala:966)
> 	at org.apache.spark.deploy.yarn.Client.main(Client.scala)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> 	at java.lang.reflect.Method.invoke(
> 	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
> 	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
> 	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
> 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
> 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: java.lang.ClassNotFoundException: Class not found
> 	at org.apache.hadoop.conf.Configuration.getClassByName(
> 	at org.apache.hadoop.conf.Configuration.getClass(
> 	... 27 more
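The SparkSubmitCommandBuilder change the reporter describes can be illustrated with a minimal, self-contained sketch. This is not the actual launcher code; the class, method, and path names below are hypothetical, and it only models the decision of whether spark.driver.extraClassPath reaches the SparkSubmit JVM's classpath:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Simplified illustration (not the real Spark launcher code) of the proposed
// change: include spark.driver.extraClassPath on the SparkSubmit classpath in
// cluster mode as well as client mode.
public class SubmitClasspathSketch {
    static List<String> buildClasspath(Map<String, String> conf,
                                       String deployMode,
                                       boolean includeInClusterMode) {
        List<String> cp = new ArrayList<>();
        cp.add("spark-assembly.jar"); // the assembly is always present
        String extra = conf.get("spark.driver.extraClassPath");
        boolean clientMode = !"cluster".equals(deployMode);
        // Current behavior: extra entries only in client mode.
        // Proposed behavior: include them in cluster mode too.
        if (extra != null && (clientMode || includeInClusterMode)) {
            cp.add(extra);
        }
        return cp;
    }

    public static void main(String[] args) {
        Map<String, String> conf =
            Map.of("spark.driver.extraClassPath", "/usr/share/aws/emr/emrfs/lib/*");
        // Current behavior: cluster mode drops the extra entries,
        // so classes like EmrFileSystem are not found.
        System.out.println(buildClasspath(conf, "cluster", false));
        // Proposed behavior: cluster mode keeps them.
        System.out.println(buildClasspath(conf, "cluster", true));
    }
}
```

The open question the reporter raises is whether unconditionally adding these entries has any downside, e.g. driver-only entries leaking into a JVM that was meant to stay minimal.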

This message was sent by Atlassian JIRA