kafka-users mailing list archives

From Guozhang Wang <wangg...@gmail.com>
Subject Re: Apache Kafka Use Case at WalmartLabs
Date Fri, 07 Mar 2014 21:14:59 GMT
Hello Bhavesh,

1) If auto.create.topics.enable is turned on and the consumer subscribes to
a wildcard topic, then producers can simply send to new topics on the fly,
and those topics will then be picked up by the consumers.
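As a minimal sketch of this pattern with the 0.8-era high-level consumer (the
ZooKeeper address, group id, and topic regex below are placeholders, not
anything from this thread):

import java.util.List;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.consumer.Whitelist;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class WildcardConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181"); // assumed ZooKeeper quorum
        props.put("group.id", "hdfs-loader");       // assumed consumer group
        props.put("auto.offset.reset", "smallest"); // start from the earliest offset

        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // Whitelist takes a regex; any topic matching "logs\\..*" -- including
        // topics created later by producers, given auto.create.topics.enable=true
        // on the brokers -- gets assigned to these streams.
        List<KafkaStream<byte[], byte[]>> streams =
            connector.createMessageStreamsByFilter(new Whitelist("logs\\..*"), 1);

        for (MessageAndMetadata<byte[], byte[]> msg : streams.get(0)) {
            System.out.println(msg.topic() + " @ " + msg.offset());
        }
    }
}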

2) For now we do not have a priority mechanism, but we do have some initial
plans for quotas; you can find some details here:

https://cwiki.apache.org/confluence/display/KAFKA/KAFKA-656+-+Quota+Design

3) MirrorMaker does not preserve consumer offsets across clusters today. What
you can do is use a separate consumer group in the centralized cluster, which
will not share any consumed state with the local consumers and can therefore
re-read the messages from the beginning.
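A small sketch of what the centralized-cluster consumer settings could look
like under that approach (hostnames and group name are placeholders):

import java.util.Properties;

public class CentralConsumerProps {
    public static Properties build() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "central-zk:2181"); // central cluster's ZooKeeper
        props.put("group.id", "hadoop-ingest");            // distinct group, no offsets shared with local consumers
        props.put("auto.offset.reset", "smallest");        // a brand-new group starts from the earliest offset
        return props;
    }
}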

5) Thanks! You can start by searching for JIRAs tagged with "newbie".

Guozhang


On Fri, Mar 7, 2014 at 11:39 AM, Bhavesh Mistry
<mistry.p.bhavesh@gmail.com> wrote:

> We are planning to use Apache Kafka to replace Apache Flume, mostly as a
> log transport layer. Please see the attached image, which shows a use case
> (and deployment architecture) similar to LinkedIn's (according to
> http://sites.computer.org/debull/A12june/pipeline.pdf). I have the
> following questions:
>
> 1) We will be creating topics dynamically to publish messages from
> front-end and back-end server producers. How can we discover new topics so
> consumers can pull the data from the Kafka broker clusters into HDFS?
>
> 2) Is there a topic priority available when the system is under heavy
> load? For example, during holiday traffic we might get more traffic, which
> will cause more events to be published... so is there any way to configure
> a topic to have higher priority so that its throughput does not suffer?
>
> 3) When using Kafka MirrorMaker to replicate messages from a local
> datacenter to a centralized Kafka broker cluster, does it also replicate
> the offsets consumed by a particular consumer? Basically, from the
> centralized Kafka brokers, we want to re-read the messages from the
> beginning to feed them into Hadoop.
>
> 5) Also, I would like to contribute to Kafka development, so please let me
> know which dev features or bugs we can fix to get started. I have already
> joined the Kafka dev group.
>
> Thanks,
> Bhavesh
>



-- 
-- Guozhang
