spark-user mailing list archives

From "Li, Rui" <>
Subject RE: Connection Refused When Running SparkPi Locally
Date Tue, 04 Mar 2014 08:55:28 GMT
I've encountered similar problems.
Maybe you can try using the hostname or FQDN (rather than the IP address) of your node for the master.
In my case, Akka picks the FQDN for the master URI, and the worker has to use exactly the same
string to connect.
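One way to keep the strings identical is to pin the master's bind address to the FQDN in conf/spark-env.sh. A minimal sketch, assuming the Spark 0.9-era standalone variables SPARK_MASTER_IP and SPARK_MASTER_PORT and the default master port 7077:

```shell
# conf/spark-env.sh -- sketch, assuming the standard standalone-mode variables.
# Using the FQDN here means the URI Akka advertises and the string the worker
# connects with can match byte-for-byte.
SPARK_MASTER_IP=$(hostname -f)   # FQDN, e.g. node1.example.com, not 10.0.0.5
SPARK_MASTER_PORT=7077           # default standalone master port
```

With this in place, both the worker and the job should use spark://$(hostname -f):7077 rather than an IP-based URL.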

From: Benny Thompson []
Sent: Saturday, March 01, 2014 10:18 AM
Subject: Connection Refused When Running SparkPi Locally

I'm trying to run a simple execution of the SparkPi example.  I started the master and one
worker, then executed the job on my local "cluster", but ended up getting a sequence of errors
all ending with

"Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection
refused: /<>"

I originally tried running my master and worker without any configuration but ended up with the
same error.  I then tried changing to to test whether it was just a firewall issue,
since the server is locked down from the outside world.

My conf/ contains the following:

Here are the commands I run, in order:
1) "sbin/" (to start the master)
2) "bin/spark-class org.apache.spark.deploy.worker.Worker spark://<>
--ip --port 1111" (in a different session on the same machine to start the slave)
3) "bin/run-example org.apache.spark.examples.SparkPi spark://<>"
(in a different session on the same machine to start the job)
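Following the advice about matching strings, the same sequence can be sketched with one shared master URL built from the FQDN. This is a hedged sketch, not the exact commands from the thread: it assumes the standard start-master.sh script, the default master port 7077, and that Akka advertises the name returned by `hostname -f`.

```shell
# Build one master URL and reuse it verbatim everywhere.
MASTER_HOST=$(hostname -f)               # the FQDN Akka is likely to advertise
MASTER_URL="spark://${MASTER_HOST}:7077" # default standalone master port

sbin/start-master.sh                     # 1) start the master

# 2) start the worker (separate session), pointing at the SAME string
bin/spark-class org.apache.spark.deploy.worker.Worker "$MASTER_URL"

# 3) run the job (separate session), again with the SAME string
bin/run-example org.apache.spark.examples.SparkPi "$MASTER_URL"
```

The point is that the worker and job never re-derive the address themselves (e.g. from an IP), so there is no chance of a hostname-vs-IP mismatch refusing the connection.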

I find it hard to believe that I'm locked down enough that running locally would cause problems.

Any help is greatly appreciated!
