spark-issues mailing list archives

From "Jesper Lundgren (JIRA)" <>
Subject [jira] [Commented] (SPARK-6355) Spark standalone cluster does not support local:/ url for jar file
Date Mon, 16 Mar 2015 14:13:38 GMT


Jesper Lundgren commented on SPARK-6355:

[~srowen] The documentation under "Advanced Dependency Management" mentions that local:/ can be used when a jar is already pre-distributed to each node, instead of being uploaded through the built-in file server. Maybe I am misunderstanding, but I believe it is meant to work for the main application jar as well as for the --jars option, e.g.:

spark-submit --class class.Main local:/application.jar

I am running a standalone cluster with ZooKeeper HA and have occasionally seen crashes on restart because the Spark file server was unavailable to distribute the jar to the worker nodes (I cannot reliably reproduce this yet). I intended to use local:/ as a workaround, but this option does not seem to work in standalone cluster mode.
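For reference, a minimal sketch of the invocation in question (the master URL, class name, and jar path are hypothetical; only the local:/ scheme and the resulting error are from the report above):

```shell
#!/bin/sh
# Hedged sketch: the jar is assumed to be pre-distributed to the same
# absolute path on every worker node, which is what a local:/ URL implies.

APP_JAR="local:/opt/spark-apps/application.jar"

# Submitting in standalone cluster mode is where the failure occurs:
# the worker's DriverRunner reportedly resolves the jar URL through
# Hadoop's FileSystem API, which has no handler registered for the
# "local" scheme, producing "No FileSystem for scheme: local".
#
#   spark-submit --master spark://master:7077 --deploy-mode cluster \
#     --class class.Main "$APP_JAR"

# The scheme prefix that fails to resolve:
echo "${APP_JAR%%:*}"
```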

> Spark standalone cluster does not support local:/ url for jar file
> ------------------------------------------------------------------
>                 Key: SPARK-6355
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.3.0, 1.2.1
>            Reporter: Jesper Lundgren
> Submitting a new Spark application to a standalone cluster with a local:/ path results in an exception:
> Driver successfully submitted as driver-20150316171157-0004
> ... waiting before polling master for driver state
> ... polling master for driver state
> State of driver-20150316171157-0004 is ERROR
> Exception from cluster was: No FileSystem for scheme: local
> No FileSystem for scheme: local
> 	at org.apache.hadoop.fs.FileSystem.getFileSystemClass(
> 	at org.apache.hadoop.fs.FileSystem.createFileSystem(
> 	at org.apache.hadoop.fs.FileSystem.access$200(
> 	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(
> 	at org.apache.hadoop.fs.FileSystem$Cache.get(
> 	at org.apache.hadoop.fs.FileSystem.get(
> 	at org.apache.hadoop.fs.Path.getFileSystem(
> 	at org.apache.spark.deploy.worker.DriverRunner.org$apache$spark$deploy$worker$DriverRunner$$downloadUserJar(DriverRunner.scala:141)
> 	at org.apache.spark.deploy.worker.DriverRunner$$anon$

This message was sent by Atlassian JIRA

