spark-user mailing list archives

From Manoj Samel <>
Subject RDD action hangs on a standalone mode cluster
Date Tue, 21 Jan 2014 05:02:27 GMT

I configured a Spark 0.8.1 cluster on AWS with one master node and 3 worker
nodes. The cluster was configured as a standalone cluster using

The distribution was generated, and the master node was started on the master
host with ./bin/
Then, on each of the worker nodes, I cd'd into the spark-distro directory and ran
./spark-class org.apache.spark.deploy.worker.Worker spark://IPxxxx:7077

In the browser, on the master's port 8080, I can see the 3 worker nodes as ALIVE.

Next, I start a spark-shell on the master node with:
MASTER=spark://IPxxx:7077 ./spark-shell

In it, I create a simple RDD from a local text file with a few lines and call
countByKey(). The shell hangs. Pressing Ctrl-C gives:

scala> credit.countByKey()
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(
at org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:73)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:318)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:840)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:909)
at org.apache.spark.rdd.RDD.reduce(RDD.scala:654)
at org.apache.spark.rdd.RDD.countByValue(RDD.scala:752)

Note: the same code works in a local shell (without MASTER set).
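For reference, the failing call can be reproduced with a sketch like the
following, run inside spark-shell (the file path and the key/value split are
assumptions, not from the original post); in local mode this completes, while
against the standalone master it hangs as shown above:

    // Minimal sketch, assuming a local text file of "key value" lines.
    // sc is the SparkContext provided by spark-shell.
    val lines = sc.textFile("credit.txt")          // hypothetical path
    val credit = lines.map { l =>
      val parts = l.split("\\s+")
      (parts(0), parts.drop(1).mkString(" "))      // (key, rest-of-line) pairs
    }
    credit.countByKey()                            // collects a Map of key -> count to the driver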

Any pointers? Do I have to set up any other network access or logins? Note that
I am *** NOT *** starting the slaves from the master node (using bin/ ), and
thus have not set up passwordless ssh login, etc.
