spark-user mailing list archives

From pedro <ski.rodrig...@gmail.com>
Subject Initial job has not accepted any resources
Date Sun, 04 May 2014 20:59:45 GMT
I have been working on a Spark program and completed it, but I have spent the
past few hours trying to run it on EC2 without any luck. I am hoping I can
comprehensively describe my problem and what I have done, but I am pretty
stuck.

My code uses the following lines to configure the SparkContext, which are
taken from the standalone app example found here:
https://spark.apache.org/docs/0.9.0/quick-start.html
and combined with the AMP Camp code found here:
http://spark-summit.org/2013/exercises/machine-learning-with-spark.html
to give the following code:
http://pastebin.com/zDYkk1T8
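The setup boils down to the standalone-app pattern from the quick-start guide, roughly like this (a sketch only; the object and jar names below are placeholders, not my actual ones — the real code is in the pastebin above):

```scala
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object SimpleJob { // placeholder name
  def main(args: Array[String]) {
    // MASTER comes from the environment (set in conf/spark-env.sh);
    // fall back to a local master when it is unset
    val master = Option(System.getenv("MASTER")).getOrElse("local[2]")
    val sc = new SparkContext(master, "Simple Job",
      System.getenv("SPARK_HOME"),
      Seq("target/scala-2.10/simple-project_2.10-1.0.jar")) // placeholder jar
    // ... job logic ...
    sc.stop()
  }
}
```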

I launch the Spark cluster with:
./spark-ec2 -k plda -i ~/plda.pem -s 1 --instance-type=t1.micro \
--region=us-west-2 start lda
When logged in, I launch my job with sbt from my project's directory:
$ /root/bin/sbt run

This results in the following log, indicating the problem in my subject
line:
http://pastebin.com/DiQCj6jQ

Following this, I got advice to set my conf/spark-env.sh so it exports
MASTER and SPARK_MASTER_IP:
"There's an inconsistency in the way the master addresses itself. The Spark
master uses the internal (ip-*.internal) address, but the driver is trying
to connect using the external (ec2-*.compute-1.amazonaws.com) address. The
solution is to set the Spark master URL to the external address in the
spark-env.sh file.

Your conf/spark-env.sh is probably empty. It should set MASTER and
SPARK_MASTER_IP to the external URL, as the EC2 launch script does:
https://github.com/.../templ.../root/spark/conf/spark-env.sh"

My spark-env.sh looks like this:
#!/usr/bin/env bash

export MASTER=`cat /root/spark-ec2/cluster-url`
export SPARK_MASTER_IP="ec2-54-186-178-145.us-west-2.compute.amazonaws.com"
export SPARK_WORKER_MEM=128m
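If I understand the advice correctly, the intent is that both variables agree on the external address, i.e. something like the following (using my cluster's hostname; port 7077 is the standalone-master default, which I am assuming here):

```shell
#!/usr/bin/env bash
# Both variables point at the same external hostname; MASTER carries
# the full standalone-master URL (default port 7077, assumed)
export SPARK_MASTER_IP="ec2-54-186-178-145.us-west-2.compute.amazonaws.com"
export MASTER="spark://${SPARK_MASTER_IP}:7077"
export SPARK_WORKER_MEM=128m
```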

At this point, I ran a test:
1. Remove my spark-env.sh variables
2. Run spark-shell
3. Run: sc.parallelize(1 to 1000).count()
4. This works as expected
5. Reset my spark-env.sh variables
6. Run prior spark-shell and commands
7. I get the same error as reported above.

Hence, something is wrong with how I am setting my master/slave
configuration. Any help would be greatly appreciated.




--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Initial-job-has-not-accepted-any-resources-tp5322.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
