kafka-users mailing list archives

From Ewen Cheslack-Postava <e...@confluent.io>
Subject Re: New producer: metadata update problem on 2 Node cluster.
Date Tue, 28 Apr 2015 19:00:57 GMT
Ok, all of that makes sense. The only way to recover from that state is
either for K2 to come back up, allowing the metadata refresh to eventually
succeed, or for the client to eventually try some other node in the
cluster. Reusing the bootstrap nodes is one possibility. Another would be
for the client to fetch more metadata than it strictly needs for its
topics, so that it has more candidate nodes to choose from when it looks
for a node to fetch metadata from. I added your description to KAFKA-1843,
although it might also make sense as a separate bug, since fixing it could
be considered incremental progress towards resolving KAFKA-1843.
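The bootstrap-fallback idea can be sketched roughly as below. This is a
hypothetical illustration, not the actual client code (the real node
selection lives in the client's NetworkClient); the class and method names
are made up for the sketch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch: when every node learned from metadata is unreachable, fall back
// to the original bootstrap list so a metadata refresh can still find a
// live broker. All names here are hypothetical.
public class MetadataNodeSelector {
    private final List<String> bootstrapNodes;
    private List<String> metadataNodes = new ArrayList<>();

    public MetadataNodeSelector(List<String> bootstrapNodes) {
        this.bootstrapNodes = new ArrayList<>(bootstrapNodes);
    }

    // Called whenever a metadata response updates the known broker set.
    public void updateMetadataNodes(List<String> nodes) {
        this.metadataNodes = new ArrayList<>(nodes);
    }

    // Pick a node for the next metadata request: prefer nodes from the
    // current metadata, but fall back to the bootstrap list when none of
    // them is reachable (the 2-node scenario described in this thread).
    public String nodeForMetadataRequest(Predicate<String> isReachable) {
        for (String node : metadataNodes) {
            if (isReachable.test(node)) return node;
        }
        for (String node : bootstrapNodes) {
            if (isReachable.test(node)) return node;
        }
        return null; // nothing available; caller must back off and retry
    }
}
```

With this shape, a producer whose metadata only lists the dead K2 would
still be able to reach K1 from the bootstrap list and refresh its metadata.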

On Tue, Apr 28, 2015 at 9:18 AM, Manikumar Reddy <kumar@nmsworks.co.in> wrote:

> Hi Ewen,
>  Thanks for the response. I agree with you; in some cases we should fall
> back to the bootstrap servers.
> >
> > If you have logs at debug level, are you seeing this message in between
> the
> > connection attempts:
> >
> > Give up sending metadata request since no node is available
> >
>  Yes, this log appeared a couple of times.
> >
> > Also, if you let it continue running, does it recover after the
> > metadata.max.age.ms timeout?
> >
>  No, it does not recover. It keeps trying to connect to the dead
> node.
> -Manikumar
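
For reference, the two producer settings involved in the scenario above
(broker addresses are placeholders; metadata.max.age.ms is shown at its
default of five minutes):

```properties
# List more than one broker here, so the client has somewhere to fall
# back to when a broker it learned about from metadata dies.
bootstrap.servers=K1:9092,K2:9092
# Upper bound on metadata age; a refresh is forced after this interval
# even if no partition leadership changes were observed.
metadata.max.age.ms=300000
```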

