spark-user mailing list archives

From Piotr Kołaczkowski <pkola...@datastax.com>
Subject How to terminate job from the task code?
Date Sat, 21 Jun 2014 05:08:45 GMT
If a task detects an unrecoverable error, i.e. one that we cannot expect to fix
by retrying or by moving the task to another node, how can we stop the job
and prevent Spark from retrying it?

def process(taskContext: TaskContext, data: Iterator[T]): Unit = {
  ...

  if (unrecoverableError) {
    ??? // terminate the job immediately
  }
  ...
}

Somewhere else:
rdd.sparkContext.runJob(rdd, something.process _)
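
One possible workaround (a sketch, not part of the original question): Spark has no
public API for killing a job from inside a task, but a task that throws an exception
will abort its stage once the retry limit is exhausted, and the driver then sees the
failure as a SparkException thrown from runJob. Setting spark.task.maxFailures to 1
prevents retries entirely. The class UnrecoverableError, the object name, and the
job body below are hypothetical stand-ins, assuming a local-mode SparkContext:

```scala
// Sketch: fail fast from a task and handle the abort on the driver.
// Assumes Spark on the classpath; names below are illustrative only.
import org.apache.spark.{SparkConf, SparkContext, SparkException}

// Hypothetical marker for errors that retrying cannot fix.
class UnrecoverableError(msg: String) extends RuntimeException(msg)

object CancelOnFatalError {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("cancel-on-fatal-error")
      .setMaster("local[2]")
      .set("spark.task.maxFailures", "1") // do not retry failed tasks
    val sc = new SparkContext(conf)
    try {
      sc.runJob(sc.parallelize(1 to 100), (data: Iterator[Int]) =>
        data.foreach { x =>
          if (x == 42) // stand-in for the real unrecoverable condition
            throw new UnrecoverableError("cannot process " + x)
        })
    } catch {
      case e: SparkException =>
        // Spark aborts the failing stage, cancels the job's remaining
        // tasks, and rethrows on the driver; handle the error here.
        println("job aborted: " + e.getMessage)
    } finally {
      sc.stop()
    }
  }
}
```

The limitation of this sketch is that the abort happens only after the task failure
propagates to the scheduler; there is also a driver-side SparkContext.cancelJobGroup
API, but it must be invoked from the driver, not from task code.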


Thanks,
Piotr


-- 
Piotr Kolaczkowski, Lead Software Engineer
pkolaczk@datastax.com

http://www.datastax.com/
777 Mariners Island Blvd., Suite 510
San Mateo, CA 94404
