spark-dev mailing list archives

From Sean Owen <>
Subject Re: Problem with version compatibility
Date Thu, 25 Jun 2015 15:23:34 GMT
Try putting that same Mesos assembly on the classpath of your client,
then, to emulate what spark-submit does. You don't just want to add it
to the classpath; you also want to make sure nothing else from Spark
is coming from your app.
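To illustrate the idea, here is a hypothetical command-line sketch (none of these paths, hosts, or class names come from the thread): the cluster's Spark assembly is placed first on the driver's classpath, and the application jar is built without bundling its own Spark classes.

```shell
# Sketch only -- paths, master URL, and main class are made-up examples.
# The assembly jar must be the exact build deployed on the Mesos workers.
SPARK_ASSEMBLY=/opt/spark/lib/spark-assembly-1.4.0-hadoop2.4.0.jar

# Assembly first, app jar second; the app jar must exclude Spark itself
# (e.g. mark spark-core as "provided" in the build).
java -cp "$SPARK_ASSEMBLY:/path/to/your-app.jar" \
  com.example.YourDriverMain
```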

In 1.4 there is the 'launcher' API, which makes programmatic access a
lot more feasible, but it still requires getting Spark code to your
driver program, and if that code isn't the same version as on your
cluster you'd still risk some incompatibilities.
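A minimal sketch of that launcher API (org.apache.spark.launcher.SparkLauncher, new in 1.4) is below. The Spark home, jar path, master URL, and class names are hypothetical; the API calls themselves are from the launcher package.

```scala
import org.apache.spark.launcher.SparkLauncher

object LaunchExample {
  def main(args: Array[String]): Unit = {
    // launch() spawns spark-submit as a child process, so the Spark
    // install referenced here should match the cluster's version.
    val app: Process = new SparkLauncher()
      .setSparkHome("/opt/spark")                     // hypothetical path
      .setAppResource("/path/to/your-app.jar")        // hypothetical path
      .setMainClass("com.example.YourDriverMain")     // hypothetical class
      .setMaster("mesos://zk://host:2181/mesos")      // hypothetical master
      .setConf(SparkLauncher.EXECUTOR_MEMORY, "2g")
      .launch()
    app.waitFor()                                     // block until the job exits
  }
}
```

Note this still runs spark-submit under the hood, so it doesn't remove the need for a Spark distribution on the driver machine; it only removes the need to shell out to it by hand.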

On Thu, Jun 25, 2015 at 6:05 PM, jimfcarroll <> wrote:
> Ah. I've avoided using spark-submit primarily because our use of Spark is as
> part of an analytics library that's meant to be embedded in other
> applications with their own lifecycle management.
> One of those applications is a REST app running in Tomcat, which makes the
> use of spark-submit difficult (if not impossible).
> Also, we're trying to avoid sending jars over the wire per-job, so we
> install our library (minus the Spark dependencies) on the Mesos workers and
> refer to it in the Spark configuration using spark.executor.extraClassPath.
> If I'm reading SparkSubmit.scala correctly, it looks like the user's
> assembly ends up sent to the cluster (at least in the case of YARN), though
> I could be wrong on this.
> Is there a standard way of running an app that's in control of its own
> runtime lifecycle without spark-submit?
> Thanks again.
> Jim
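The pre-installed-library setup Jim describes could be expressed as a configuration fragment like the following (the jar path is a made-up example, not from the thread):

```
# spark-defaults.conf style sketch -- hypothetical path.
# The library is pre-installed on every Mesos worker, so nothing is
# shipped over the wire per job; executors pick it up from the local disk.
spark.executor.extraClassPath  /opt/analytics/lib/analytics-library.jar
```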
