spark-user mailing list archives

From Romi Kuntsman <r...@totango.com>
Subject Re: Scaling spark cluster for a running application
Date Wed, 22 Jul 2015 14:33:55 GMT
Are you running the Spark cluster in standalone mode or on YARN?
In standalone mode, the application acquires the available resources when it
starts, so it will not pick up workers added afterwards.
On YARN, you can try turning on the setting
*spark.dynamicAllocation.enabled*
See https://spark.apache.org/docs/latest/configuration.html
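
A minimal sketch of what this might look like when building the SparkContext
in Scala; the property names come from the configuration page above, while
the app name and executor bounds are illustrative:

  // Assumes a YARN cluster with the external shuffle service enabled on
  // each NodeManager, which dynamic allocation requires.
  import org.apache.spark.{SparkConf, SparkContext}

  val conf = new SparkConf()
    .setAppName("scalable-app") // hypothetical name
    .set("spark.dynamicAllocation.enabled", "true")
    .set("spark.shuffle.service.enabled", "true")     // required by dynamic allocation
    .set("spark.dynamicAllocation.minExecutors", "2") // illustrative bounds
    .set("spark.dynamicAllocation.maxExecutors", "20")

  val sc = new SparkContext(conf)

The same settings can also be passed at submit time instead, e.g.
--conf spark.dynamicAllocation.enabled=true on spark-submit.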

On Wed, Jul 22, 2015 at 2:20 PM phagunbaya <phagun.baya@falkonry.com> wrote:

> I have a Spark cluster running in client mode, with the driver outside the
> cluster. I want to scale the cluster up after an application has been
> submitted. To do this, I'm creating new workers, and they register with the
> master, but the running application does not use the newly added workers,
> so I cannot add more resources to an already running application.
>
> Is there any other way, or a config, to deal with this use case? How can I
> make a running application request executors from a newly added worker
> node?
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Scaling-spark-cluster-for-a-running-application-tp23951.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
