spark-user mailing list archives

From Steve Loughran <>
Subject Re: Getting when attempting to start Spark master on EC2 node with public IP
Date Tue, 28 Jul 2015 20:21:11 GMT
try looking at the causes and steps here

On 28 Jul 2015, at 09:22, Wayne Song wrote:

I made this message with the Nabble web interface; I included the stack trace there, but I
guess it didn't show up in the emails.  Anyway, here's the stack trace:

15/07/27 17:04:09 ERROR NettyTransport: failed to bind to /54.xx.xx.xx:7093, shutting down Netty transport
Exception in thread "main" Failed to bind to: /54.xx.xx.xx:7093: Service 'sparkMaster' failed after 16 retries!
    at org.jboss.netty.bootstrap.ServerBootstrap.bind(
    at akka.remote.transport.netty.NettyTransport$anonfun$listen$1.apply(NettyTransport.scala:393)
    at akka.remote.transport.netty.NettyTransport$anonfun$listen$1.apply(NettyTransport.scala:389)
    at scala.util.Success$anonfun$map$1.apply(Try.scala:206)
    at scala.util.Try$.apply(Try.scala:161)
    at scala.concurrent.Future$anonfun$map$1.apply(Future.scala:235)
    at scala.concurrent.Future$anonfun$map$1.apply(Future.scala:235)
    at akka.dispatch.BatchingExecutor$Batch$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
    at akka.dispatch.BatchingExecutor$Batch$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
    at akka.dispatch.BatchingExecutor$Batch$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at akka.dispatch.BatchingExecutor$Batch$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
    at akka.dispatch.BatchingExecutor$
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(

I'm using Spark 1.4.0.

Binding to 0.0.0.0 works, but then workers can't connect to the Spark master, because when
you start a worker, you have to give it the Spark master URL in the form spark://<spark
master hostname>:7077.  My understanding is that because of Akka, workers have to address the
master by the exact hostname it was bound to when it started; thus, you can't bind to 0.0.0.0
on the Spark master machine and then connect to spark://54.xx.xx.xx:7077 or whatever.
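The usual workaround for a standalone cluster on EC2 follows from this: bind the master to the private address (which the original post confirms does bind successfully) and then hand workers exactly that same address. A sketch only; SPARK_HOME and the 10.0.0.5 address are illustrative, not taken from the thread:

```shell
# Sketch, assuming a stock Spark 1.4 tarball layout.
# Bind the master to the address the NIC actually carries (the private IP):
$SPARK_HOME/sbin/start-master.sh --host 10.0.0.5 --port 7077

# On each worker, use exactly the host the master was bound to, since
# Akka rejects messages addressed to any other hostname:
$SPARK_HOME/sbin/start-slave.sh spark://10.0.0.5:7077
```

External clients that need the Spark UI can still reach it via the public IP, since the UI is plain HTTP rather than Akka remoting.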

On Tue, Jul 28, 2015 at 6:15 AM, Ted Yu wrote:
Can you show the full stack trace?

Which Spark release are you using?


> On Jul 27, 2015, at 10:07 AM, Wayne Song wrote:
> Hello,
> I am trying to start a Spark master for a standalone cluster on an EC2 node.
> The CLI command I'm using looks like this:
> Note that I'm specifying the --host argument; I want my Spark master to be
> listening on a specific IP address.  The host that I'm specifying (i.e.
> 54.xx.xx.xx) is the public IP for my EC2 node; I've confirmed that nothing
> else is listening on port 7077 and that my EC2 security group has all ports
> open.  I've also double-checked that the public IP is correct.
> When I use --host 54.xx.xx.xx, I get the following error message:
> This does not occur if I leave out the --host argument and it doesn't occur
> if I use --host 10.0.xx.xx, where 10.0.xx.xx is my private EC2 IP address.
> Why would Spark fail to bind to a public EC2 address?
> --
> Sent from the Apache Spark User List mailing list archive at
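The bind failure in the quoted question can be reproduced without Spark at all. On EC2, a public IP is NAT-mapped by AWS and is never assigned to the instance's network interface, so the OS refuses to bind a socket to it, typically with EADDRNOTAVAIL; only the private address (or the wildcard 0.0.0.0) is bindable. A small demonstration, using 203.0.113.7 (a TEST-NET address guaranteed not to be on any local interface) to stand in for the unbindable public IP:

```python
import errno
import socket

def can_bind(host, port=0):
    """Try to bind a TCP socket to (host, port); return (ok, errno_name)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True, None
    except OSError as e:
        return False, errno.errorcode.get(e.errno, str(e.errno))
    finally:
        s.close()

# An address the host actually owns binds fine:
print(can_bind("127.0.0.1"))
# An address not assigned to any local interface fails,
# typically with EADDRNOTAVAIL ("Cannot assign requested address"):
print(can_bind("203.0.113.7"))
```

This is why `--host <private IP>` succeeds while `--host <public IP>` exhausts its 16 retries: the kernel, not Spark, rejects the public address.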
