spark-issues mailing list archives

From "Danny Robinson (JIRA)" <>
Subject [jira] [Commented] (SPARK-4563) Allow spark driver to bind to different ip then advertise ip
Date Fri, 10 Feb 2017 16:03:42 GMT


Danny Robinson commented on SPARK-4563:

I found one completely hacky way to get my Spark driver, running in Docker, to connect to a non-Docker Spark cluster.  This is Spark 1.6.2.


At container startup I do this:
echo -e "`hostname -i` `hostname` ${HOSTNAME_OF_DOCKER_HOST_OR_PROXY}" >> /etc/hosts

Essentially, the exports seem to control the IP that the Spark UI & BlockManager recognize.
The hosts-file hack lets the Spark driver resolve the external hostname as if it were a
local hostname, so it knows which interface to listen on, and it then uses that hostname
in the connection info it sends to the executors.  When the executors connect back, they
naturally resolve the hostname to the correct external IP.
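A sketch of the startup line above, expanded for readability (the fallback value `proxy.example.com` is an assumption for illustration; at real startup the line is appended to /etc/hosts as shown earlier):

```shell
# Build the /etc/hosts line that maps the container's own IP to BOTH its
# internal hostname and the externally reachable name, so the driver binds
# to a local interface but advertises a name executors can resolve.
CONTAINER_IP="$(hostname -i 2>/dev/null || echo 127.0.0.1)"
EXTERNAL_HOST="${HOSTNAME_OF_DOCKER_HOST_OR_PROXY:-proxy.example.com}"
HOSTS_LINE="${CONTAINER_IP} $(hostname) ${EXTERNAL_HOST}"
echo "${HOSTS_LINE}"   # at container startup: echo "${HOSTS_LINE}" >> /etc/hosts
```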

The reason I say HOST or PROXY is that I run HAProxy as a Docker load balancer in front of
my swarm.  That ensures I never have to worry about exactly which node is running the Spark
driver; all traffic routes via HAProxy.
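For illustration, a minimal TCP-mode HAProxy fragment along the lines described above (the port number, backend name, and node addresses are assumptions, not from this message; in practice the driver's ports would need to be pinned so they can be forwarded):

```
frontend spark_driver
    bind *:7077              # hypothetical advertised driver port
    mode tcp
    default_backend swarm_nodes

backend swarm_nodes
    mode tcp
    server node1 10.0.0.11:7077 check
    server node2 10.0.0.12:7077 check
```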

I agree with many here, though: this is crazy complicated and inconsistent.

> Allow spark driver to bind to different ip then advertise ip
> ------------------------------------------------------------
>                 Key: SPARK-4563
>                 URL:
>             Project: Spark
>          Issue Type: Improvement
>          Components: Deploy
>            Reporter: Long Nguyen
>            Assignee: Marcelo Vanzin
>            Priority: Minor
>             Fix For: 2.1.0
> Spark driver bind ip and advertise is not configurable. is only bind ip.
> SPARK_PUBLIC_DNS does not work for spark driver. Allow option to set advertised ip/hostname
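Since this issue's Fix For is 2.1.0, the bind/advertise split described in the quoted report can be expressed there with driver configuration properties; a sketch (the hostname is an assumption for illustration), e.g. in spark-defaults.conf:

```
# Bind locally on all interfaces, but advertise an externally resolvable name.
spark.driver.bindAddress   0.0.0.0
spark.driver.host          proxy.example.com
```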

This message was sent by Atlassian JIRA

To unsubscribe, e-mail:
For additional commands, e-mail:
