I am trying to deploy my code (that is, a jar) to a standalone Spark cluster, and nothing is working for me.
- LocalMachine = build machine (Mac)
- Cluster = 1 master and 1 slave with over 90 GB of memory (CentOS)
1. I can run the code on my LocalMachine by passing local as the master argument to SparkContext.
2. I can run the example job from my LocalMachine using "./run-example org.apache.spark.examples.SparkPi spark://master:7077", and I can see the jar (spark-examples-assembly-0.8.0-SNAPSHOT.jar) deployed to the slave's work folder and the job completing.
3. Step 2 behaves the same when executed on the master machine using "./run-example org.apache.spark.examples.SparkPi spark://master:7077".

4. Now I have written my own Spark code in Scala. How do I deploy my jar to the cluster so that it runs and computes there?
    a. Running "/libs/spark/sbt/sbt run" from the project directory results in an incessant warning: "cluster.ClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered"

5. I want to keep the build machine separate from the cluster master and slave.
6. The SparkContext in my code looks like this:
val sc          = new SparkContext("spark://master:7077", "Simple Job", "$SPARK_HOME", List("target/scala-2.9.3/simple-project_2.9.3-1.0.jar"))
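For context, the full driver is essentially the quick-start-style program sketched below. The job body is a placeholder, and the `sys.env` lookup is a suggested tweak rather than my current code: in Scala source, "$SPARK_HOME" is a plain string literal, not an environment-variable expansion, so the Spark home path should be resolved at runtime.

```scala
// Sketch of the driver program (only the SparkContext setup mirrors my
// real code; the parallelize/count job is a placeholder).
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object SimpleJob {
  def main(args: Array[String]) {
    val sc = new SparkContext(
      "spark://master:7077",                 // standalone master URL
      "Simple Job",                          // app name shown in the cluster UI
      sys.env.getOrElse("SPARK_HOME", "."),  // resolve SPARK_HOME at runtime,
                                             // not the literal string "$SPARK_HOME"
      List("target/scala-2.9.3/simple-project_2.9.3-1.0.jar")) // jar shipped to workers

    val data = sc.parallelize(1 to 1000)     // placeholder job
    println("count = " + data.count())
    sc.stop()
  }
}
```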

Any ideas how to solve this?
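For completeness, the sbt build definition is roughly the following config fragment. The project name and version are inferred from the jar name above (simple-project_2.9.3-1.0.jar); the Spark artifact coordinates and the resolver follow my reading of the 0.8 quick start and may need adjusting for a SNAPSHOT build:

```scala
// build.sbt (sketch -- name/version inferred from
// target/scala-2.9.3/simple-project_2.9.3-1.0.jar)
name := "Simple Project"

version := "1.0"

scalaVersion := "2.9.3"

// artifact coordinates assumed from the Spark 0.8 quick start
libraryDependencies += "org.apache.spark" %% "spark-core" % "0.8.0-SNAPSHOT"

resolvers += "Akka Repository" at "http://repo.akka.io/releases/"
```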