flink-user mailing list archives

From Julio Biason <julio.bia...@azion.com>
Subject Re: Trying to understand KafkaConsumer_records_lag_max
Date Mon, 16 Apr 2018 16:40:44 GMT
Hi Gordon (and list),

Yes, that's probably what's going on. I got another message from 徐骁 who
told me almost the same thing -- something I had completely forgotten (he
also mentioned auto.offset.reset, which could be forcing Flink to keep
reading from the top of Kafka instead of going back and reading older
entries).
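
(For the archives, here is roughly what that suggestion looks like in code
-- a minimal sketch, not our actual setup. Class names are from the Flink
1.4 / Kafka 0.11 connector line; broker, group, and topic names are
placeholders.)

    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

    public class StartPositionSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "broker:9092"); // placeholder
            props.setProperty("group.id", "my-group");             // placeholder
            // Only consulted when the group has no committed offsets:
            props.setProperty("auto.offset.reset", "earliest");

            FlinkKafkaConsumer011<String> consumer =
                    new FlinkKafkaConsumer011<>(
                            "my-topic", new SimpleStringSchema(), props);

            // Default: resume from committed group offsets, falling back
            // to auto.offset.reset when none exist.
            consumer.setStartFromGroupOffsets();
            // To ignore committed offsets and re-read from the beginning:
            // consumer.setStartFromEarliest();
        }
    }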

Now I need to figure out how to make my pipeline consume entries faster
than (or at least on par with) the rate at which they are arriving in
Kafka -- but that's a discussion for another email. ;)

On Mon, Apr 16, 2018 at 1:29 AM, Tzu-Li (Gordon) Tai <tzulitai@apache.org>
wrote:

> Hi Julio,
>
> I'm not really sure, but do you think it is possible that there could be
> some hard data retention setting for your Kafka topics in the staging
> environment?
> As in, at some point in time and maybe periodically, all data in the Kafka
> topics are dropped and therefore the consumers effectively jump directly
> back to the head again.
>
> Cheers,
> Gordon
>
>
>
> --
> Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
>
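
(For anyone wanting to verify a retention theory like Gordon's: a minimal
sketch that asks the brokers for a topic's retention settings, using the
Kafka AdminClient available since Kafka 0.11. The broker address and topic
name are placeholders.)

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;

    public class RetentionCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "broker:9092"); // placeholder

            try (AdminClient admin = AdminClient.create(props)) {
                ConfigResource topic =
                        new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
                Config config = admin.describeConfigs(
                        Collections.singleton(topic)).all().get().get(topic);
                // A value of -1 means "no limit" for either setting.
                System.out.println("retention.ms    = "
                        + config.get("retention.ms").value());
                System.out.println("retention.bytes = "
                        + config.get("retention.bytes").value());
            }
        }
    }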



-- 
*Julio Biason*, Software Engineer
*AZION*  |  Deliver. Accelerate. Protect.
Office: +55 51 3083 8101  |  Mobile: +55 51 *99907 0554*
