spark-user mailing list archives

From Eran Chinthaka Withana <>
Subject Re: Problem mixing MESOS Cluster Mode and Docker task execution
Date Thu, 10 Mar 2016 03:26:24 GMT

I'm also having this issue and cannot get the tasks to work inside Mesos.

In my case, the spark-submit command is the following.

$SPARK_HOME/bin/spark-submit \
  --class com.mycompany.SparkStarter \
  --master mesos://mesos-dispatcher:7077 \
  --name SparkStarterJob \
  --driver-memory 1G \
  --executor-memory 4G \
  --deploy-mode cluster \
  --total-executor-cores 1 \
  --conf spark.mesos.executor.docker.image=echinthaka/mesos-spark:0.23.1-1.6.0-2.6

And the error I'm getting is the following.

I0310 03:13:11.417009 131594 exec.cpp:132] Version: 0.23.1
I0310 03:13:11.419452 131601 exec.cpp:206] Executor registered on
slave 20160223-000314-3439362570-5050-631-S0
sh: 1: /usr/spark-1.6.0-bin-hadoop2.6/bin/spark-class: not found
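
The "spark-class: not found" line suggests the task is being launched directly on the Mesos agent host, where /usr/spark-1.6.0-bin-hadoop2.6 does not exist, rather than inside the Docker image. A quick way to check (a hedged sketch, not from the original thread; the log path and name filter are assumptions) is to look at the agent while a task starts:

  # On the Mesos agent that ran the failed task:
  docker ps --filter name=mesos          # empty if the Docker containerizer was skipped
  grep "No container info found" /var/log/mesos/mesos-slave.INFO

If the agent log shows "No container info found, skipping launch" (the same line quoted in the message below), the task is running outside Docker and the spark.mesos.executor.docker.image setting never reached it.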

(I looked into the Spark JIRA and found a related issue, but it is marked as closed because another issue is marked as resolved.)

I'd really appreciate some help here.

Eran Chinthaka Withana

On Wed, Feb 17, 2016 at 2:00 PM, <> wrote:

> Hi everybody,
> I am testing the use of Docker for executing Spark algorithms on Mesos. I
> managed to run Spark in client mode with the executors inside Docker, but I
> wanted to go further and have my driver running in a Docker container as
> well. Here I ran into a behavior that I'm not sure is normal; let me try to
> explain.
> I submit my Spark application through the MesosClusterDispatcher using a
> command like:
>
> $ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
>     --master mesos://spark-master-1:7077 --deploy-mode cluster \
>     --conf spark.mesos.executor.docker.image=myuser/myimage:0.0.2 \
>     10
> My driver runs fine inside its Docker container, but the executors fail:
>
> "sh: /some/spark/home/bin/spark-class: No such file or directory"
>
> Looking at the Mesos slave logs, I think the executors are not run inside
> Docker: "docker.cpp:775] No container info found, skipping launch". Since my
> Mesos slaves do not have Spark installed, the launch fails.
> *It seems that the Spark conf I pass to the first spark-submit is not
> transmitted to the conf of the driver's own submit* when the driver is
> launched in the Docker container. The only workaround I found is to modify my
> Docker image so that its Spark conf defines the
> spark.mesos.executor.docker.image property (see the sketch after this quoted
> message). This way my executors pick up the setting and are launched inside
> Docker on Mesos. This seems a little convoluted to me; I feel that the
> configuration passed to the initial spark-submit should be forwarded to the
> driver's submit...
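
A minimal sketch of the workaround described above: bake the property into the image's own conf/spark-defaults.conf so that the driver, once launched by the MesosClusterDispatcher inside the container, passes it on to its executors. The paths below are assumptions (adjust to wherever Spark lives inside the image), and spark.mesos.executor.home is only needed if that location differs from the one on the submitting host:

  # Run while building the Docker image referenced by spark.mesos.executor.docker.image:
  echo "spark.mesos.executor.docker.image  myuser/myimage:0.0.2" \
      >> $SPARK_HOME/conf/spark-defaults.conf
  # Optional: point executors at Spark's location inside the image (assumption)
  echo "spark.mesos.executor.home  $SPARK_HOME" \
      >> $SPARK_HOME/conf/spark-defaults.conf

This only restates the workaround above, not a fix for the underlying behavior; the conf passed to the outer spark-submit is still not forwarded to the driver's submit.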
