spark-user mailing list archives

From map reduced <k3t.gi...@gmail.com>
Subject Zombie Driver process (Standalone Cluster)
Date Thu, 06 Oct 2016 22:09:32 GMT
Hi,

I am noticing zombie driver processes running on my standalone cluster. I'm
not sure of the cause, but restarting a node that hosts a driver may be the
trigger.
What's interesting is that the Spark UI doesn't recognize the process as a
running driver, so there is no 'kill' option for it there. And if I try the
command-line driver kill, it reports that no such driver is running:

bin/spark-class org.apache.spark.deploy.Client kill spark://master-url:31498 driver-20161005204049-0031

gives:

Driver driver-20161005204049-0031 has already finished or does not exist.

Here's another screenshot showing that the Spark Master UI doesn't list it
as a running driver:

[image: Inline image 1]
But it is in fact still running on one of the workers:

[image: Inline image 2]
(Sorry, I can't share hostnames etc.)

Any idea why this is happening, and how can I kill it? (I don't have a way
to ssh into that machine.)
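For anyone who does have shell access to the affected worker, a rough sketch of what I would try (hedged: this assumes the driver was launched in standalone cluster mode, where the driver runs inside a `org.apache.spark.deploy.worker.DriverWrapper` JVM on the worker; adjust the pattern if your process shows a different main class):

```shell
#!/bin/sh
# Sketch: locate zombie standalone-cluster-mode driver JVMs on a worker
# by their wrapper main class, then send them SIGTERM.
# The bracketed [D] keeps the grep process itself out of the match.
pids=$(ps -ef | grep '[D]riverWrapper' | awk '{print $2}')

for pid in $pids; do
  echo "Stopping zombie driver JVM: $pid"
  kill "$pid"   # escalate to kill -9 only if SIGTERM is ignored
done
```

Without ssh access this doesn't help directly, but it may be useful to whoever administers that box.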

Thanks,
KP
