spark-user mailing list archives

From Behroz Sikander <behro...@gmail.com>
Subject Programmatically get status of job (WAITING/RUNNING)
Date Mon, 30 Oct 2017 14:37:00 GMT
Hi,

I have a Spark cluster running in client mode, and I submit jobs to it
programmatically. Under the hood, I am using spark-submit.

If the cluster is overloaded when I start a context, the driver JVM keeps
waiting for executors. The executors stay in the WAITING state because the
cluster does not have enough resources. Here are the messages from the
driver logs:

2017-10-27 13:20:15,260 WARN Timer-0 org.apache.spark.scheduler.TaskSchedulerImpl []: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
2017-10-27 13:20:30,259 WARN Timer-0 org.apache.spark.scheduler.TaskSchedulerImpl []: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Is it possible to programmatically check the status of the application
(e.g. RUNNING/WAITING)? I know that we can take the application id and
query the history server, but I would like a solution that does not
involve REST calls to the history server.

The SparkContext should know about this state. How can I get this
information from sc?
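
For what it's worth, one workaround I have been considering is a sketch
along these lines, assuming Spark 2.x where SparkContext exposes a
statusTracker whose getExecutorInfos includes the driver itself. It does
not report an explicit WAITING/RUNNING flag, but an executor count of
zero is a reasonable proxy for "still waiting for resources" (the object
and app names below are made up for illustration):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical example: detect the "waiting for executors" state from
// inside the driver, without any REST calls to the history server.
object ExecutorCheck {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("executor-check"))

    // getExecutorInfos returns one entry per executor plus one for the
    // driver, so subtract 1 to count registered executors only.
    val executorCount = sc.statusTracker.getExecutorInfos.length - 1

    if (executorCount == 0)
      println("No executors registered yet - still waiting for resources")
    else
      println(s"$executorCount executor(s) registered - tasks can run")

    sc.stop()
  }
}
```

But I am not sure whether polling executor counts like this is the
intended way, or whether there is a proper status API on the context.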


Regards,

Behroz
