spark-user mailing list archives

From pedro <>
Subject Initial job has not accepted any resources
Date Sun, 04 May 2014 20:59:45 GMT
I have been working on a Spark program and have completed it, but I have spent the past
few hours trying to run it on EC2 without any luck. I hope I can
comprehensively describe my problem and what I have done, but I am pretty stuck.

My code configures the SparkContext with lines taken from the standalone
app example found here:
combined with the ampcamp code found here:
to give the following code:
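(The original snippet did not survive in the archive. What follows is a minimal sketch of what such a configuration typically looks like, assuming the standalone-app pattern plus the cluster-url file written by spark-ec2; the object name and job body are hypothetical, not the poster's actual code.)

```scala
import org.apache.spark.{SparkConf, SparkContext}
import scala.io.Source

object PLDA {
  def main(args: Array[String]): Unit = {
    // The spark-ec2 launch script writes the master URL to this file.
    val master = Source.fromFile("/root/spark-ec2/cluster-url").mkString.trim

    val conf = new SparkConf()
      .setMaster(master)
      .setAppName("PLDA")
    val sc = new SparkContext(conf)

    // ... job logic goes here ...

    sc.stop()
  }
}
```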

Launch spark cluster with:
./spark-ec2 -k plda -i ~/plda.pem -s 1 --instance-type=t1.micro
--region=us-west-2 start lda
Once logged in, I launch my job with sbt from my project's directory:
$ /root/bin/sbt run

This results in the following log, showing the problem named in my subject line:

Following this, I got advice to set my conf/ file so that it exports the right variables:
"There's an inconsistency in the way the master addresses itself. The Spark
master uses the internal (ip-*.internal) address, but the driver is trying
to connect using the external (ec2-*) address. The
solution is to set the Spark master URL to the external address in the file.

Your conf/ is probably empty. It should set MASTER and
SPARK_MASTER_IP to the external URL, as the EC2 launch script does:"
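(To make the mismatch in that advice concrete: the host part of the spark:// master URL tells you which address form is in use. The tiny plain-Scala helper below is hypothetical and needs no Spark; it just classifies an EC2 hostname.)

```scala
// Classify the host in a spark:// master URL as an EC2-internal or
// external address. Internal EC2 DNS names look like
// ip-10-1-2-3.us-west-2.compute.internal; external ones look like
// ec2-54-1-2-3.us-west-2.compute.amazonaws.com.
def addressKind(masterUrl: String): String = {
  val host = masterUrl.stripPrefix("spark://").takeWhile(_ != ':')
  if (host.startsWith("ip-") || host.endsWith(".internal")) "internal"
  else if (host.startsWith("ec2-")) "external"
  else "unknown"
}
```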

My conf/ file looks like this:
#!/usr/bin/env bash

export MASTER=`cat /root/spark-ec2/cluster-url`
export SPARK_WORKER_MEM=128m

At this point, I ran a test:
1. Remove my variables
2. Run spark-shell
3. Run: sc.parallelize(1 to 1000).count()
4. This works as expected
5. Reset my variables
6. Run prior spark-shell and commands
7. I get the same error as reported above.

Hence, there is something wrong with how I am setting my master/slave
configuration. Any help would be greatly appreciated.
