spark-user mailing list archives

From Jim Carroll <jimfcarr...@gmail.com>
Subject Ending a job early
Date Tue, 28 Oct 2014 14:27:23 GMT

We have some very large datasets where the calculations converge on a result.
Our current implementation lets us track how quickly the calculations are
converging and end the processing early, which can significantly speed up
some of our processing.

Is there a way to do the same thing in Spark?

A trivial example might be a column average over a dataset. As rows are
'aggregated' into columnar averages, I can track how fast those averages are
moving and decide to stop after only a small percentage of the rows have been
processed, producing an estimate rather than an exact value.
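
To make that concrete, here is a rough sketch of the kind of loop I have in
mind: drive the job one batch of partitions at a time with
SparkContext.runJob and stop submitting batches once the running average
stabilizes. The batch size and tolerance are made up for illustration, and
I'm using the runJob overload that takes an allowLocal flag:

    import org.apache.spark.{SparkConf, SparkContext}

    object EarlyStopAverage {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("early-stop-avg"))
        val data = sc.parallelize(1 to 1000000, 100).map(_.toDouble)

        val batchSize = 10     // partitions per job (illustrative)
        val tol       = 1e-3   // relative convergence tolerance (illustrative)
        var sum       = 0.0
        var count     = 0L
        var prevAvg   = Double.NaN
        var start     = 0
        var converged = false

        while (!converged && start < data.partitions.length) {
          val batch = start until math.min(start + batchSize, data.partitions.length)
          // One job over just this batch of partitions; each task returns (sum, count).
          val partials = sc.runJob(data,
            (it: Iterator[Double]) => it.foldLeft((0.0, 0L)) { case ((s, c), v) => (s + v, c + 1) },
            batch, allowLocal = false)
          partials.foreach { case (s, c) => sum += s; count += c }
          val avg = sum / count
          // Stop submitting batches once consecutive estimates agree to within tol.
          if (!prevAvg.isNaN && math.abs(avg - prevAvg) <= tol * math.abs(prevAvg))
            converged = true
          prevAvg = avg
          start += batchSize
        }

        println(s"Estimate after $count rows: $prevAvg")
        sc.stop()
      }
    }

This only saves work between batches, though; it can't interrupt anything
already running.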

Within a partition, or better yet, within a worker across 'reduce' steps, is
there a way to stop all of the aggregations and just continue on with
reducing the data that has already been processed?
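
For the within-partition half of that question, I can get part of the way
there myself with mapPartitions, since my function controls how much of the
iterator it consumes. A sketch, reusing the data RDD[Double] from above (the
checkpoint interval and tolerance are again illustrative):

    // Each partition keeps a running average and stops pulling rows once
    // consecutive checkpoints agree to within the tolerance.
    val perPartition = data.mapPartitions { it =>
      var sum       = 0.0
      var count     = 0L
      var prevAvg   = Double.NaN
      var converged = false
      while (it.hasNext && !converged) {
        sum += it.next()
        count += 1
        if (count % 1000 == 0) {   // checkpoint every 1000 rows (illustrative)
          val avg = sum / count
          if (!prevAvg.isNaN && math.abs(avg - prevAvg) <= 1e-4 * math.abs(prevAvg))
            converged = true
          prevAvg = avg
        }
      }
      Iterator((sum, count))       // partial (sum, count) for the final reduce
    }
    val (totalSum, totalCount) =
      perPartition.reduce { case ((s1, c1), (s2, c2)) => (s1 + s2, c1 + c2) }
    val estimate = totalSum / totalCount

The cross-worker half is the part I don't see: how to tell tasks that are
still running to stop once the global estimate has settled.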

Thanks
Jim






