spark-user mailing list archives

From Mich Talebzadeh <mich.talebza...@gmail.com>
Subject Re: Dynamically adding/removing slaves through start-slave.sh and stop-slave.sh
Date Mon, 28 Mar 2016 21:20:53 GMT
Have you added the slave host name to $SPARK_HOME/conf/slaves?

Then you can use start-slaves.sh or stop-slaves.sh for all instances

The assumption is that the slave boxes have Spark installed in the same
directory as $SPARK_HOME on the master.
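
For example, assuming a new host called worker3 (a hypothetical name) and
passwordless SSH from the master to each slave, something like:

  # on the master: add the new host to the slaves file
  echo "worker3" >> $SPARK_HOME/conf/slaves

  # start (or stop) a Worker on every host listed in conf/slaves;
  # start-slaves.sh reaches each host over SSH
  $SPARK_HOME/sbin/start-slaves.sh
  $SPARK_HOME/sbin/stop-slaves.sh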

HTH


Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 28 March 2016 at 22:06, Sung Hwan Chung <codedeft@cs.stanford.edu> wrote:

> Hello,
>
> I found that I could dynamically add/remove new workers to a running
> standalone Spark cluster by simply triggering:
>
> start-slave.sh (SPARK_MASTER_ADDR)
>
> and
>
> stop-slave.sh
>
> E.g., I could instantiate a new AWS instance and just add it to a running
> cluster without adding it to the slaves file or restarting the whole
> cluster.
> It seems that there's no need for me to stop a running cluster.
>
> Is this a valid way of dynamically resizing a Spark cluster (as of now,
> I'm not concerned about HDFS)? Or will there be certain unforeseen problems
> if nodes are added/removed this way?
>
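
A minimal sketch of the per-node commands described above, run on the new
instance itself (master host is a placeholder; the port assumes the
default 7077):

  # register a new Worker with the running master
  $SPARK_HOME/sbin/start-slave.sh spark://<master-host>:7077

  # later, stop the Worker on this node before terminating the instance
  $SPARK_HOME/sbin/stop-slave.sh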
