spark-user mailing list archives

From Tathagata Das <tathagata.das1...@gmail.com>
Subject Re: Spark Streaming : Could not compute split, block not found
Date Fri, 01 Aug 2014 21:41:59 GMT
So you are not running non-streaming jobs using RDDs? That's
disturbing. Can you provide me a log of the run in which you
encountered this?

You can also try setting spark.streaming.unpersist = false. All the
blocks will then be spilled to disk and never unpersisted. On top of
that, you can set the cleaner TTL (spark.cleaner.ttl) to a very large
value, say one day, to clean up what accumulates on disk.
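For illustration, a minimal sketch of how these two settings could be
applied when building the StreamingContext in Spark 1.x; the app name,
batch interval, and exact TTL value are assumptions, not from this thread:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf()
      .setAppName("StreamingBlockRetention")      // assumed app name
      // Keep generated blocks instead of unpersisting them after use.
      .set("spark.streaming.unpersist", "false")
      // Retain data for up to one day before the cleaner removes it (seconds).
      .set("spark.cleaner.ttl", "86400")

    val ssc = new StreamingContext(conf, Seconds(10))  // assumed batch interval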

On Fri, Aug 1, 2014 at 2:16 PM, Kanwaldeep <kanwal239@gmail.com> wrote:
> We are using Spark 1.0.
>
> I'm using DStream operations such as map, filter and reduceByKeyAndWindow
> and doing a foreach operation on DStream.
>
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-Could-not-compute-split-block-not-found-tp11186p11209.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
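For illustration, a hypothetical sketch of the kind of pipeline described
in the quoted message (map, filter, reduceByKeyAndWindow, then a per-batch
foreach); the source, key function, and window/slide intervals are
assumptions, not the poster's actual code:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.StreamingContext._  // pair DStream ops in Spark 1.0

    val conf = new SparkConf().setAppName("WindowedCounts")  // assumed app name
    val ssc  = new StreamingContext(conf, Seconds(10))

    val lines = ssc.socketTextStream("localhost", 9999)      // assumed source

    val counts = lines
      .map(line => (line.split(" ")(0), 1))                  // key by first token
      .filter { case (key, _) => key.nonEmpty }
      .reduceByKeyAndWindow(_ + _, Seconds(60), Seconds(10)) // 60s window, 10s slide

    // In Spark 1.0, DStream.foreach is the older name for foreachRDD.
    counts.foreachRDD { rdd =>
      rdd.take(10).foreach(println)                          // inspect a sample per batch
    }

    ssc.start()
    ssc.awaitTermination()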
