spark-user mailing list archives

From Gerard Maas <>
Subject Re: is Mesos falling out of favor?
Date Thu, 15 May 2014 20:14:00 GMT
By looking at your config, I think there's something wrong with your setup.
One of the key elements of Mesos is that you are abstracted from where the
execution of your task takes place. The SPARK_EXECUTOR_URI tells Mesos
where to find the 'framework' (in Mesos jargon) required to execute a job.
 (Actually, it tells the Spark driver to tell Mesos where to download the
Spark executor package.)
Your config looks like you are running some mix of a Spark standalone
cluster with Mesos.
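In practice that means publishing a Spark distribution tarball somewhere every slave can reach and pointing the driver at it. A minimal sketch, assuming HDFS as the host; the path, namenode address, and version below are illustrative placeholders, not values from this thread:

```shell
# Sketch: make the Spark distribution fetchable by every Mesos slave, then
# point the driver at it. Path, host, and version are placeholder values.
#   hadoop fs -put spark-1.0.0-bin-hadoop2.tgz /dist/   # upload once, outside this script
export SPARK_EXECUTOR_URI="hdfs://namenode:9000/dist/spark-1.0.0-bin-hadoop2.tgz"
export MESOS_NATIVE_LIBRARY="/usr/local/lib/libmesos.so"  # libmesos on the driver machine
echo "$SPARK_EXECUTOR_URI"
```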

This is an example of a Spark job to run on Mesos:


ADD_JARS=/.../job-jar-with-dependencies.jar SPARK_LOCAL_IP=<IP> java -cp

Config: job-config.conf contains this info for Mesos. (Note: the Mesos
master URI is constructed from this config.)
# ------------------------------------------------------------
# Mesos configuration
# ------------------------------------------------------------
mesos {
    zookeeper = {zookeeper.ip}
    executorUri  =
    master       {
        host = {mesos-ip}
        port = 5050
    }
}

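For reference, the driver-side master URL built from the host/port above has this shape; "mesos-master.example" is a placeholder standing in for the real {mesos-ip}:

```shell
# Sketch: assemble the Mesos master URL from the config values above.
# With the ZooKeeper entry you would instead use the mesos://zk://... form.
MESOS_HOST="mesos-master.example"
MESOS_PORT=5050
MASTER="mesos://${MESOS_HOST}:${MESOS_PORT}"
echo "$MASTER"   # mesos://mesos-master.example:5050
```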
This can probably still be improved, as it's the result of some
trial-and-error, but it's working for us.

-greetz, Gerard

On Wed, May 7, 2014 at 7:43 PM, deric <> wrote:

> I'm running the 1.0.0 branch; I've finally managed to make it work. I'm using a
> Debian package which is distributed on all slave nodes. So, I've removed
> `SPARK_EXECUTOR_URI` and it works, looks like this:
> export MESOS_NATIVE_LIBRARY="/usr/local/lib/"
> export SCALA_HOME="/usr"
> export SCALA_LIBRARY_PATH="/usr/share/java"
> export MASTER="mesos://zk://"
> export SPARK_HOME="/usr/share/spark"
> export SPARK_LOCAL_IP=""
> Scripts for the Debian package are here (I'll try to add some documentation):
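
deric's setup above (Spark pre-installed on every slave via a Debian package, so no SPARK_EXECUTOR_URI is needed) amounts to a spark-env.sh along these lines; the ZooKeeper address, local IP, and libmesos filename are illustrative placeholders for the values elided above:

```shell
# Sketch of a spark-env.sh for a Debian-packaged Spark on Mesos.
# ZooKeeper address, local IP, and libmesos path are placeholder values.
export MESOS_NATIVE_LIBRARY="/usr/local/lib/libmesos.so"
export SCALA_HOME="/usr"
export SCALA_LIBRARY_PATH="/usr/share/java"
export MASTER="mesos://zk://zk1.example:2181/mesos"
export SPARK_HOME="/usr/share/spark"
export SPARK_LOCAL_IP="10.0.0.12"
echo "$SPARK_HOME"
```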
