spark-user mailing list archives

From Sachit Murarka <connectsac...@gmail.com>
Subject HPA - Kubernetes for Spark
Date Sun, 10 Jan 2021 19:22:53 GMT
Hi All,

I have read about HPA (Horizontal Pod Autoscaler) for pod scaling.

I understand it requires resource requests and limits to be set in the pod
YAML, after which the autoscaler can be created with, for example:
kubectl autoscale deploy/application-cpu --cpu-percent=95 --min=1 --max=10
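
For reference, this is roughly the deployment side I have in mind; it is a
generic sketch (the name application-cpu and the image are placeholders),
since HPA needs a CPU request set to compute the utilisation percentage:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-cpu            # placeholder, matches the command above
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-cpu
  template:
    metadata:
      labels:
        app: application-cpu
    spec:
      containers:
      - name: app
        image: example/application-cpu:latest   # placeholder image
        resources:
          requests:
            cpu: 500m              # HPA evaluates --cpu-percent against this
            memory: 512Mi
          limits:
            cpu: "1"
            memory: 1Gi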

But does HPA actually make sense with Spark on Kubernetes? Each pod launches
exactly one executor, so ideally pods would be scaled through dynamic
allocation of executors (each of which is a pod) rather than through HPA.
However, dynamic allocation was not supported on Kubernetes before the Spark
3 release, since there is no external shuffle service there.
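
(From what I read, Spark 3.0 adds shuffle tracking so that dynamic
allocation can work without an external shuffle service. Something like
this should enable it; the conf keys are from the Spark 3.0 docs, but I
have not tested it myself:

spark-submit \
  --master k8s://https://<k8s-api-server>:6443 \
  --deploy-mode cluster \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=10 \
  ...
)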

Could anyone suggest how I can proceed with achieving pod scaling in Spark?

Please note: I am using Kubernetes with the Spark operator.
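
If it matters, I believe recent versions of the operator's SparkApplication
CRD also expose a dynamicAllocation block; a rough sketch of what I would
try (field names from the operator's v1beta2 API, untested on my side):

apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: my-spark-app               # placeholder name
spec:
  # ... image, mainApplicationFile, driver/executor specs ...
  dynamicAllocation:
    enabled: true
    initialExecutors: 1
    minExecutors: 1
    maxExecutors: 10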


Kind Regards,
Sachit Murarka
