spark-user mailing list archives

From Sandish Kumar HN <sanysand...@gmail.com>
Subject Re: HPA - Kubernetes for Spark
Date Sun, 10 Jan 2021 20:47:02 GMT
Sachit,

K8s-based Spark dynamic allocation is only available on Spark 3.0.x+, and
even then it works without an external shuffle service: Spark instead tracks
shuffle files on the executors themselves (shuffle tracking).
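
A minimal sketch of the relevant settings for a plain spark-submit (the
executor counts here are just illustrative):

  --conf spark.dynamicAllocation.enabled=true
  --conf spark.dynamicAllocation.shuffleTracking.enabled=true
  --conf spark.dynamicAllocation.minExecutors=1
  --conf spark.dynamicAllocation.maxExecutors=10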

https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#dynamic-allocation
http://spark.apache.org/docs/latest/running-on-kubernetes.html#future-work
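
Since you mention you are using the Spark operator, the same thing can be
expressed in the SparkApplication spec; a rough sketch, with field names as
in the v1beta2 API described in the first link above:

  spec:
    dynamicAllocation:
      enabled: true
      initialExecutors: 1
      minExecutors: 1
      maxExecutors: 10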

On Sun, 10 Jan 2021 at 13:23, Sachit Murarka <connectsachit@gmail.com>
wrote:

> Hi All,
>
> I have read about HPA (Horizontal Pod Autoscaling) for pod scaling.
>
> I understand it can be achieved by setting resource requests and limits
> in the YAML and then running, for example:
> kubectl autoscale deploy/application-cpu --cpu-percent=95 --min=1
> --max=10
>
> But does HPA actually work with Spark on Kubernetes? Since each pod
> launches exactly one executor, pods in Spark would ideally be scaled by
> dynamic allocation of executors (each executor being a pod) rather than
> by HPA. But dynamic allocation was not supported before the Spark 3
> release, because there is no external shuffle service on Kubernetes.
>
> Could anyone suggest how I can achieve pod scaling in Spark?
>
> Please note : I am using Kubernetes with Spark operator.
>
>
> Kind Regards,
> Sachit Murarka
>


-- 

Thanks,
Regards,
SandishKumar HN
