spark-user mailing list archives

From Tathagata Das <>
Subject Re: kinesis batches hang after YARN automatic driver restart
Date Tue, 03 Nov 2015 11:14:35 GMT
The Kinesis integration underneath uses the KCL library, which can sometimes
take a minute or so to spin up its threads and start receiving data from
Kinesis. That is under normal conditions. In your case, because of the kill
and restart, the restarted KCL worker may be taking a while to acquire new
leases and start receiving data again.
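For context, a minimal sketch of the driver-recovery pattern being discussed, assuming Spark 1.5's `KinesisUtils` API; the stream name, region, endpoint, and checkpoint directory below are placeholders, not values from this thread:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kinesis.KinesisUtils
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream

// Hypothetical values; substitute your own checkpoint dir, stream, and region.
val checkpointDir = "hdfs:///spark/checkpoints/kinesis-app"

def createContext(): StreamingContext = {
  val conf = new SparkConf().setAppName("kinesis-app")
  val ssc = new StreamingContext(conf, Seconds(5))
  ssc.checkpoint(checkpointDir)

  // The KCL application name doubles as the name of the DynamoDB lease
  // table. After a driver restart, the new KCL worker must re-acquire
  // shard leases from this table before records flow again, which is one
  // source of the startup delay described above.
  val stream = KinesisUtils.createStream(
    ssc, "kinesis-app", "my-stream",
    "https://kinesis.us-east-1.amazonaws.com", "us-east-1",
    InitialPositionInStream.LATEST, Seconds(5),
    StorageLevel.MEMORY_AND_DISK_2)

  stream.count().print()
  ssc
}

// On a YARN relaunch, getOrCreate recovers the context from the checkpoint
// instead of building a new one, so the DStream graph and pending batch
// metadata survive the restart.
val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
ssc.start()
ssc.awaitTermination()
```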

On Mon, Nov 2, 2015 at 11:26 AM, Hster Geguri <> wrote:

> Hello Wonderful Spark People,
> We are testing AWS Kinesis/Spark Streaming (1.5) failover behavior with
> Hadoop/YARN 2.6 and 2.7.1 and want to understand the expected behavior.
> When I manually kill a YARN application master/driver with a Linux kill
> -9, YARN automatically relaunches another master that successfully reads
> in the previous checkpoint.
> However, more than half the time, the Kinesis executors (5-second batches)
> don't continue processing immediately. That is, batches of 0 events are
> queued for 5-9 minutes before the stream starts reprocessing again. When I
> drill down into the current job that is hanging, it shows all stages/tasks
> are complete. I would expect the automatically relaunched behavior to be
> similar to a manual resubmit with spark-submit, where stream processing
> continues within a minute of launch.
> Any input is highly appreciated.
> Thanks much,
> Heji
