spark-user mailing list archives

From Denny Lee <denny.g....@gmail.com>
Subject Spark 1.2 and Mesos 0.21.0 spark.executor.uri issue?
Date Tue, 30 Dec 2014 16:25:07 GMT
I've been working with Spark 1.2 and Mesos 0.21.0, and although I have set
spark.executor.uri in spark-env.sh (and exported it directly in bash as
well), the Mesos slaves do not seem to be able to fetch the Spark tgz file
over HTTP or HDFS, as the messages below show.
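For reference, this is the shape of the configuration I'm using; the host names and paths here are placeholders, not my actual values:

```shell
# conf/spark-env.sh on the machine launching spark-shell / spark-submit
export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so

# URI the Mesos slaves use to fetch the Spark distribution;
# either an HDFS or an HTTP location should work, e.g.:
export SPARK_EXECUTOR_URI=hdfs://namenode:8020/spark/spark-1.2.0-bin-hadoop2.4.tgz
# export SPARK_EXECUTOR_URI=http://fileserver/spark/spark-1.2.0-bin-hadoop2.4.tgz
```

(The equivalent spark.executor.uri property in spark-defaults.conf should behave the same way.)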


14/12/30 15:57:35 INFO SparkILoop: Created spark context..
Spark context available as sc.

scala> 14/12/30 15:57:38 INFO CoarseMesosSchedulerBackend: Mesos task 0 is
now TASK_FAILED
14/12/30 15:57:38 INFO CoarseMesosSchedulerBackend: Mesos task 1 is now
TASK_FAILED
14/12/30 15:57:39 INFO CoarseMesosSchedulerBackend: Mesos task 2 is now
TASK_FAILED
14/12/30 15:57:41 INFO CoarseMesosSchedulerBackend: Mesos task 3 is now
TASK_FAILED
14/12/30 15:57:41 INFO CoarseMesosSchedulerBackend: Blacklisting Mesos
slave value: "20141228-183059-3045950474-5050-2788-S1"
 due to too many failures; is Spark installed on it?


I've verified that the Mesos slaves can reach both the HTTP and HDFS
locations.  I'll start digging into the Mesos logs, but I was wondering
whether anyone has run into this issue before.  I was able to get this
running successfully with Spark 1.1 on GCP - my current environment is
Digital Ocean, so perhaps the environment is a factor?
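For what it's worth, these are the kinds of checks I ran on the slaves to confirm the locations are reachable (URIs are placeholders, and the Mesos work directory path assumes the default /tmp/mesos):

```shell
# On a Mesos slave: confirm the tgz is reachable over HTTP
curl -sI http://fileserver/spark/spark-1.2.0-bin-hadoop2.4.tgz | head -n 1

# ...and over HDFS (requires the hadoop client on the slave's PATH)
hadoop fs -ls hdfs://namenode:8020/spark/spark-1.2.0-bin-hadoop2.4.tgz

# The stderr log for a failed task usually shows the actual fetch error:
# /tmp/mesos/slaves/<slave-id>/frameworks/<framework-id>/executors/<executor-id>/runs/latest/stderr
```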

Thanks!
Denny
