spark-user mailing list archives

From Michael Gummelt <>
Subject Re: How to stop a running job
Date Wed, 05 Oct 2016 21:07:30 GMT
If running in client mode, just kill the driver process (in client mode it runs on your submitting machine).  If running in cluster mode,
the Spark Dispatcher exposes an HTTP API for killing submissions.  I don't think
this is externally documented, so you might have to check the code to find
the endpoint.  If you run on DC/OS, you can just run "dcos spark kill <id>".
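For reference, the endpoint in question belongs to Spark's REST submission protocol, which the Mesos cluster dispatcher serves and which `spark-submit --kill` also speaks.  A minimal sketch, assuming a hypothetical dispatcher at dispatcher.example.com:7077 and a made-up submission ID (yours is printed when you submit in cluster mode):

```shell
# Hypothetical dispatcher host/port and submission ID:
DISPATCHER=http://dispatcher.example.com:7077
SUBMISSION_ID=driver-20161005163055-0001

# Kill the submission via the dispatcher's REST submission API:
curl -X POST "$DISPATCHER/v1/submissions/kill/$SUBMISSION_ID"

# Equivalent, letting spark-submit talk to the dispatcher for you:
spark-submit --master mesos://dispatcher.example.com:7077 --kill "$SUBMISSION_ID"
```

You can check the result afterwards with the matching status endpoint, `/v1/submissions/status/<id>`.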

You can also find which node is running the driver, SSH in, and kill the
driver process manually.
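The SSH approach is just ordinary process management.  A sketch of the pattern, using a background `sleep` as a stand-in for the driver JVM (on a real node you would locate the driver by grepping the process list for its main class or for `spark-submit`):

```shell
# Stand-in for the long-running driver process:
sleep 300 &
pid=$!                            # on a real node: pid=$(pgrep -f <driver main class>)

kill "$pid"                       # SIGTERM, giving the process a chance to shut down
wait "$pid" 2>/dev/null || true   # reap the child so the kill is observable
echo "killed $pid"
```

Note that in Mesos coarse-grained mode, killing the driver also tears down the job's executors, which is what frees the cluster's resources.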
On Wed, Oct 5, 2016 at 1:55 PM, Richard Siebeling <> wrote:

> Hi,
> how can I stop a long-running job?
> We're running Spark in Mesos coarse-grained mode. Suppose a user starts a
> long-running job, notices a mistake, changes a transformation, and runs the
> job again. In that case I'd like to cancel the first job and then start the
> second one. It would be a waste of resources to let the first job run to
> completion (it could take several hours...).
> How can this be accomplished?
> thanks in advance,
> Richard

Michael Gummelt
Software Engineer
