spark-user mailing list archives

From Kanwaldeep <>
Subject Re: Spark Streaming : Could not compute split, block not found
Date Fri, 01 Aug 2014 22:52:20 GMT
All the operations are being done using the DStream. I do read an RDD into
memory, which is collected and converted into a map used for lookups as part
of the DStream operations. This RDD is loaded only once; the resulting map is
then used on the streamed data.
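A minimal sketch of the pattern described above (all names, paths, and the input source are illustrative assumptions, not taken from the original message): a small reference RDD is collected to the driver once, turned into a Scala `Map`, and then captured by the closures that run on the streamed data.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object LookupExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("LookupExample")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Loaded once, outside any DStream operation (hypothetical path and format).
    val lookupMap: Map[String, String] = ssc.sparkContext
      .textFile("hdfs:///path/to/reference-data")
      .map { line => val Array(k, v) = line.split(","); (k, v) }
      .collectAsMap()
      .toMap

    // Any input DStream would do; a socket stream keeps the sketch simple.
    val lines = ssc.socketTextStream("localhost", 9999)

    // The map is serialized into each task's closure. For a large map,
    // ssc.sparkContext.broadcast(lookupMap) would avoid re-shipping it per task.
    val enriched = lines.map(key => (key, lookupMap.getOrElse(key, "unknown")))
    enriched.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Because the map is built on the driver before the streaming context starts, each batch reuses the same lookup data rather than recomputing the RDD.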

Do you mean non-streaming jobs on RDDs built from the raw Kafka data?

Log File attached:
