spark-user mailing list archives

From Tathagata Das <>
Subject Re: Spark Streaming : Could not compute split, block not found
Date Fri, 01 Aug 2014 21:41:59 GMT
So you are not running non-streaming jobs using RDDs? That's
disturbing. Can you provide me a log of the run in which you
encountered this?

You can also try setting spark.streaming.unpersist = false. With that,
all the received blocks are spilled to disk and never unpersisted.
On top of that, you can set the cleaner TTL (spark.cleaner.ttl) to a
very large value, say 1 day, so that the data on disk eventually gets
cleaned up.
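For reference, a minimal sketch of the two settings suggested above, expressed as a plain dict of Spark properties (spark.streaming.unpersist and spark.cleaner.ttl are real Spark 1.x configuration keys; the helper function itself is illustrative, not Spark API):

```python
# Sketch of the suggested debugging configuration. The property names
# are real Spark 1.x keys; the wrapper function is hypothetical.

ONE_DAY_SECONDS = 24 * 60 * 60  # spark.cleaner.ttl is given in seconds

def streaming_debug_conf():
    """Settings for diagnosing 'block not found' in Spark Streaming."""
    return {
        # Keep received blocks around instead of unpersisting them
        # once they have been processed.
        "spark.streaming.unpersist": "false",
        # Only clean up spilled data after a very large TTL
        # (here, one day), so nothing disappears mid-run.
        "spark.cleaner.ttl": str(ONE_DAY_SECONDS),
    }

conf = streaming_debug_conf()
print(conf["spark.streaming.unpersist"])  # false
print(conf["spark.cleaner.ttl"])          # 86400
```

In a real Spark 1.0 application these would be passed via SparkConf.set(key, value) or spark-defaults.conf before the StreamingContext is created.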

On Fri, Aug 1, 2014 at 2:16 PM, Kanwaldeep <> wrote:
> We are using Spark 1.0.
> I'm using DStream operations such as map, filter and reduceByKeyAndWindow
> and doing a foreach operation on DStream.
