spark-user mailing list archives

From "Davide.Mandrini" <>
Subject [Spark Streaming] Application is stopped after stopping a worker
Date Mon, 28 Aug 2017 19:36:48 GMT
I am running a Spark Streaming application on a cluster composed of three
nodes, each with one worker and three executors (so 9 executors in total).
I am using Spark standalone mode.

The application is launched with a spark-submit command using the option
--deploy-mode client. The submit command is run from one of the nodes,
let's call it node 1.
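For reference, the submit command looks roughly like this (the master URL, class name, and jar name are placeholders, not my actual values):

```shell
# Hypothetical example of the submit command described above;
# master URL, class, and jar are placeholders for the real application.
spark-submit \
  --master spark://node1:7077 \
  --deploy-mode client \
  --class com.example.StreamingApp \
  streaming-app.jar
```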

As a fault-tolerance test, I stop the worker on node 2 with the command
sudo service spark-worker stop.

In the logs I can see that the Master keeps trying to launch executors on
the worker that is shutting down (thousands of attempts within a few
seconds, all with status FAILED), and then Spark terminates the whole
application.
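Possibly related: the standalone Master has a spark.deploy.maxExecutorRetries setting (default 10, if I read the documentation correctly) that removes an application after that many consecutive executor failures while none of its executors are running. I am not sure whether this is what kicks in here, but it can be changed at submit time, for example:

```shell
# Hypothetical: raising the Master's consecutive-failure limit
# (default is 10); -1 disables the limit entirely.
spark-submit \
  --conf spark.deploy.maxExecutorRetries=-1 \
  ...
```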

I tried to find more information about how Spark handles worker failures,
but I was not able to find a useful answer.

In the Spark source code I can see that the worker issues a kill of its
drivers when it is stopped (in the Worker's onStop method). This might
explain why the whole application is eventually stopped.

Is this the expected behavior in case of a worker explicitly stopped?

Is this a case of worker failure, or should it be treated differently
(since I am explicitly shutting down the worker here)?

Would it be the same behavior if the worker process was killed (and not
explicitly stopped)?

Thank you 
