spark-user mailing list archives

From Akhil <>
Subject Re: Not understanding manually building EC2 cluster
Date Sun, 07 Jun 2015 16:36:39 GMT

- Remove localhost from the conf/slaves file and add the slaves' private IPs.
- Make sure the master and slave machines are in the same security group (that
way all ports will be accessible between the machines).
- In the conf/ file, place export

These changes should get you started with the Spark cluster. If not, check the
log files for more detailed information.
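The steps above can be sketched as follows. This is a minimal example, assuming the truncated line refers to conf/spark-env.sh and a SPARK_MASTER_IP export, which was the usual standalone-mode setup at the time; all IP addresses shown are hypothetical placeholders:

```shell
# conf/slaves on the master -- one slave private IP per line,
# with the default "localhost" entry removed (hypothetical IPs):
#   172.31.10.11
#   172.31.10.12

# conf/spark-env.sh on every machine (assumption: this is the file
# the truncated step refers to) -- bind the master to its private IP:
export SPARK_MASTER_IP=172.31.10.10   # hypothetical master private IP

# Then start the whole cluster from the master:
$SPARK_HOME/sbin/start-all.sh
```

With the private IPs in conf/slaves and the master bound to its private address, the workers should register in the master web UI (port 8080 by default) instead of the master listing only itself.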

bjameshunter wrote
> Hi,
> I've tried half a dozen times to build a Spark cluster on EC2 without
> using the ec2 scripts or EMR. I'd like to eventually run an IPython
> notebook server on the master, and the ec2 scripts and EMR don't
> seem to accommodate that.
> I build an Ubuntu-spark-ipython machine. 
> I set up the IPython server. IPython and Spark work together.
> I make an image of the machine, spin up two of them. I can ssh between the
> original (master) and two new slaves without password (put master
> on slaves). 
> I add slave public IPs to $SPARK_HOME/conf/slaves, underneath "localhost"
> I execute $SPARK_HOME/sbin/
> Slaves and master start. The master GUI shows only one slave - itself.
> ---
> Here's where, I think, the documentation ends for me and I start trying
> random stuff:
> Setting SPARK_MASTER_IP to the EC2 public IP on all machines. Setting
> SPARK_LOCAL_IP to 127.0.0.1.
> Changing the hostname on the master to the public IP, changing it to my
> domain name prefixed with spark-master and routing it through my
> DigitalOcean account, etc.
> All combinations of the above steps have been tried, and then some.
> Any clue what I don't understand here?
> Thanks,
> Ben

Sent from the Apache Spark User List mailing list archive.

