spark-user mailing list archives

From Aaron <>
Subject Re: Spark Driver "behind" NAT
Date Mon, 05 Jan 2015 14:02:28 GMT
Thanks for the link!  However, from reviewing the thread, it appears you
cannot have a NAT/firewall between the cluster and the
spark-driver/shell...is this correct?

When the shell starts up, it binds to the internal IP (e.g.
192.168.x.y), not the external floating IP, which is the one routable
from the cluster.
When I did set a static port for spark.driver.port and set
spark.driver.host to the floating IP address, I get the same
exception (Caused by: java.net.BindException: Cannot assign requested
address: bind), because of the use of the InetAddress.getHostAddress
method call.
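The bind failure above can be reproduced outside Spark with a plain TCP socket: binding to an address the host does not own fails the same way. This is a minimal sketch; 203.0.113.1 (a TEST-NET-3 address) stands in for a floating IP that is not configured on any local interface.

```python
import errno
import socket

def try_bind(ip, port=0):
    """Attempt to bind a TCP socket to ip; return the errno on failure, None on success."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((ip, port))
        return None
    except OSError as e:
        # A non-local address typically fails with EADDRNOTAVAIL,
        # the errno behind "Cannot assign requested address".
        return e.errno
    finally:
        s.close()

print(try_bind("127.0.0.1"))      # None: loopback is local, bind succeeds
print(try_bind("203.0.113.1"))    # non-local address, bind fails
```

This mirrors what the driver does: it resolves an address and tries to listen on it, which can only succeed for addresses actually assigned to a local interface, never for the NAT's external floating IP.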


On Mon, Jan 5, 2015 at 8:28 AM, Akhil Das <> wrote:

> You can have a look at this discussion
> Thanks
> Best Regards
> On Mon, Jan 5, 2015 at 6:11 PM, Aaron <> wrote:
>> Hello there, I was wondering if there is a way to have the spark-shell
>> (or pyspark) sit behind a NAT when talking to the cluster?
>> Basically, we have OpenStack instances that run with internal IPs, and we
>> assign floating IPs as needed.  Since the workers make direct TCP
>> connections back, the spark-shell is binding to the internal IP..not the
>> "floating."  Our other use case is running Vagrant VMs on our local
>> machines..but we don't have those VMs' NICs set up in "bridged"
>> mode, so each VM too has an "internal" IP.
>> I tried using SPARK_LOCAL_IP, and the various --conf
>> parameters...but it still gets "angry."
>> Any thoughts/suggestions?
>> Currently our workaround is a VPNC connection from inside the Vagrant
>> VMs or OpenStack instances...but that doesn't seem like a long-term plan.
>> Thanks in advance!
>> Cheers,
>> Aaron
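For reference, the configuration attempts described in this thread look roughly like the sketch below. The floating IP 203.0.113.10, the internal IP 192.168.0.5, and port 7777 are placeholder values, not figures from the thread.

```shell
# Attempt 1: tell the driver which local IP to bind (from the quoted message).
export SPARK_LOCAL_IP=192.168.0.5        # internal (NATed) address of the VM

# Attempt 2: pin the driver port and point spark.driver.host at the floating IP.
spark-shell \
  --conf spark.driver.port=7777 \
  --conf spark.driver.host=203.0.113.10
# This is the variant reported to fail with
# java.net.BindException: Cannot assign requested address,
# because the floating IP is not configured on any local interface.
```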
