spark-user mailing list archives

From Michael Segel <msegel_had...@hotmail.com>
Subject Re: Unable to Limit UI to localhost interface
Date Wed, 30 Mar 2016 15:10:23 GMT
It sounds like when you start up Spark, it's using 0.0.0.0, which means it will listen on all
interfaces.
You should be able to limit which interface it uses.

The weird thing is that if you are specifying the IP address and port, Spark shouldn't be
listening on all of the interfaces for that port.
(This could be a UI bug?)

The other issue: you need to put a firewall in front of your cluster/machine. This is probably
a best-practice issue.
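A minimal sketch of that firewall approach, assuming iptables on a Linux host (the port list is taken from the defaults discussed in this thread; adjust it to your actual setup):

```shell
# Accept Spark's default ports (8080/8081 web UIs, 7077 master RPC,
# 6066 REST submission) only on the loopback interface, then drop
# them on every other interface. Run as root; these rules are not
# persisted across reboots here.
iptables -A INPUT -i lo -p tcp -m multiport --dports 8080,8081,7077,6066 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 8080,8081,7077,6066 -j DROP
```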



> On Mar 30, 2016, at 12:25 AM, Akhil Das <akhil@sigmoidanalytics.com> wrote:
> 
> In your case, you will be able to see the web UI (unless restricted with iptables), but
> you won't be able to submit jobs to that machine from a remote machine, since the Spark
> master is spark://127.0.0.1:7077
> 
> Thanks
> Best Regards
> 
> On Tue, Mar 29, 2016 at 8:12 PM, David O'Gwynn <dogwynn@acm.org> wrote:
> /etc/hosts
> 
> 127.0.0.1	localhost
> 
> conf/slaves 
> 127.0.0.1
> 
> 
> On Mon, Mar 28, 2016 at 5:36 PM, Mich Talebzadeh <mich.talebzadeh@gmail.com> wrote:
> In your /etc/hosts, what do you have for localhost?
> 
> 127.0.0.1 localhost.localdomain localhost
> 
> conf/slaves should have one entry in your case:
> 
> cat slaves
> # A Spark Worker will be started on each of the machines listed below.
> localhost
> .......
> 
> Dr Mich Talebzadeh
>  
> LinkedIn  https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>  
> http://talebzadehmich.wordpress.com
>  
> 
> On 28 March 2016 at 15:32, David O'Gwynn <dogwynn@acm.org> wrote:
> Greetings to all,
> 
> I've searched around the mailing list, but it would seem that (nearly?) everyone has the
> opposite problem from mine. I made a stab at looking in the source for an answer, but I
> figured I might as well see if anyone else has run into the same problem as I have.
> 
> I'm trying to limit my Master/Worker UI to run only on localhost. As it stands, I have
> the following two environment variables set in my spark-env.sh:
> 
> SPARK_LOCAL_IP=127.0.0.1
> SPARK_MASTER_IP=127.0.0.1
> 
> and my slaves file contains one line: 127.0.0.1
> 
> The problem is that when I run "start-all.sh", I can nmap my box's public interface and
> get the following:
> 
> PORT     STATE SERVICE
> 22/tcp   open  ssh
> 8080/tcp open  http-proxy
> 8081/tcp open  blackice-icecap
> 
> Furthermore, I can go to my box's public IP at port 8080 in my browser and get the master
> node's UI. The UI even reports the URL and REST URL to be 127.0.0.1:
> 
> Spark Master at spark://127.0.0.1:7077
> URL: spark://127.0.0.1:7077
> REST URL: spark://127.0.0.1:6066 (cluster mode)
> 
> I'd rather not have Spark available in any way to the outside world without an explicit
> SSH tunnel.
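For reference, the SSH-tunnel approach mentioned here might look like the following (the username and hostname are placeholders, not values from this thread):

```shell
# Forward local port 8080 to port 8080 on the remote box's loopback
# interface, so that http://localhost:8080 on the workstation reaches
# a UI that is bound only to 127.0.0.1 on the server.
# "user" and "spark-box" are placeholders for your own account/host.
ssh -N -L 8080:127.0.0.1:8080 user@spark-box
```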
> 
> There are variables to do with setting the web UI port, but I'm not concerned with the
> port, only the network interface to which the web UI binds.
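One quick way to see which interface each port is actually bound to (a diagnostic sketch; assumes a Linux host with ss or netstat installed):

```shell
# List listening TCP sockets on the Spark ports; a UI restricted to
# loopback should show 127.0.0.1:8080 rather than 0.0.0.0:8080 or *:8080.
ss -tln | grep -E ':(8080|8081|7077|6066)\b'
# netstat equivalent on older systems:
netstat -tln | grep -E ':(8080|8081|7077|6066)\b'
```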
> 
> Any help would be greatly appreciated.