spark-user mailing list archives

From Andrew Ehrlich <>
Subject Re: spark worker continuously trying to connect to master and failed in standalone mode
Date Wed, 20 Jul 2016 03:12:59 GMT
Troubleshooting steps:

$ telnet localhost 7077    (on the master, to confirm the port is open locally)
$ telnet <master_ip> 7077  (on the slave, to check whether the port is reachable remotely)
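If telnet isn't installed on either machine, the same reachability check can be scripted. A minimal sketch in Python (the host and port in the usage comments are placeholders for your master's address):

```python
import socket

def port_open(host, port, timeout=5):
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run on the master:  port_open("localhost", 7077)
# Run on the slave:   port_open("<master_ip>", 7077)
```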

If the port is open when tested from the master itself, but not when tested from the slave,
check the firewall settings on the master:
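On Ubuntu, for example, the port may be blocked by ufw or iptables. A hedged sketch of what to look for (the slave address is a placeholder; adapt the rules to your own network):

```shell
# On the master: see whether a firewall is active and what it currently allows
sudo ufw status verbose
sudo iptables -L -n

# If ufw is active and blocking, allow the Spark master port,
# optionally restricted to the slave's address:
sudo ufw allow 7077/tcp
# or, more restrictively:
# sudo ufw allow from <slave_ip> to any port 7077 proto tcp
```

After changing the rules, re-run the telnet check from the slave to confirm the port is now reachable.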
> On Jul 19, 2016, at 6:25 PM, Neil Chang <> wrote:
> Hi,
>   I have two virtual PCs on a private cloud (Ubuntu 14). I installed the Spark 2.0 preview
> on both machines and then tried to test it in standalone mode.
> I have no problem starting the master. However, when I start the worker (slave) on the other
> machine, it makes many attempts to connect to the master and fails in the end.
>   I can ssh from each machine to the other without any problem. I can also run a master
> and a worker on the same machine without any problem.
> What did I miss? Any clue?
> here are the messages:
> WARN NativeCodeLoader: Unable to load native-hadoop library for your platform ... using
> builtin-java classes where applicable
> ..............
> INFO Worker: Connecting to master ip:7077 ... 
> INFO Worker: Retrying connection to master (attempt #1)
> ..............
> INFO Worker: Retrying connection to master (attempt #7)
> java.lang.IllegalArgumentException: requirement failed: TransportClient has not yet been
>        at scala.Predef$.require(Predef.scala:224)
> .......
> WARN NettyRpcEnv: Ignored failure: Connecting to ip:7077 timed out
> WARN Worker: Failed to connect to master ip:7077
> Thanks,
> Neil
