From: Jun Rao
To: users@kafka.apache.org
Date: Thu, 13 Dec 2012 22:13:29 -0800
Subject: Re: Design questions and your opinion and suggestions are appreciated

For 2), the next Kafka release will support replication, so a partition is
still available when a single broker is down.

For 1), in 0.7, partitions are automatically added to new brokers. In 0.8,
you will need to run a command to change the number of partitions (a sketch
of that command follows below).

Thanks,

Jun
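A rough sketch of the command Jun mentions: the admin tooling moved around
during the 0.8 series (early 0.8 builds ship a dedicated add-partitions
script, later 0.8 releases fold the operation into kafka-topics.sh), so
treat the invocation below as approximate rather than definitive. The topic
name, ZooKeeper address, and target partition count are placeholders.

    # Raise the partition count of an existing topic (kafka-topics.sh form;
    # the script name may differ in your exact 0.8 build).
    bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
        --topic access-log --partitions 6

This only adds partitions; data already written stays in the partitions it
was originally written to.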
On Thu, Dec 13, 2012 at 10:39 AM, Jamie Wang wrote:

> Thanks for the pointers.
>
> To clarify: my question 1) is actually asking whether I can dynamically
> create/add a new partition to an existing topic on an existing broker.
>
> For pointer 2), you seem to suggest that a newer version of the broker
> will have different behavior. What would this new behavior look like, and
> what version will have it?
>
> Thanks again.
> Jamie
>
> -----Original Message-----
> From: Jun Rao [mailto:junrao@gmail.com]
> Sent: Wednesday, December 12, 2012 10:10 PM
> To: users@kafka.apache.org
> Subject: Re: Design questions and your opinion and suggestions are
> appreciated
>
> 1) In 0.7, a topic exists on every broker, so new partitions will be
> automatically added on the new broker.
> 2) It's possible, if you use the partitioner (see the partitioner sketch
> at the end of this message). However, be aware that in 0.7, if a broker
> goes down, all partitions on it are down and you won't be able to write
> to them.
> 3) You can try to connect to the Kafka port (see the port-check sketch at
> the end of this message).
> 4) No. Just do kill -15.
> 5) It depends. You may want to add a broker if you don't have enough
> storage space, don't have enough I/Os, or don't have enough network
> bandwidth.
>
> Thanks,
>
> Jun
>
> On Wed, Dec 12, 2012 at 2:25 PM, Jamie Wang wrote:
>
> > Hi,
> >
> > We are incorporating Kafka as part of our centralized log aggregation
> > service for a clustered server system. Our current thinking on the
> > design is as follows: each of our clustered servers will produce 8
> > different logs. A typical cluster has about 4 or 5 nodes right now. Our
> > design is to create a topic for each type of log, and the number of
> > partitions within each topic will equal the number of nodes in the
> > cluster. Therefore, with a cluster of 5 nodes, we will have 8 topics
> > and 5 partitions within each topic.
> >
> > On each node, we have a producer with 8 threads. Each thread will
> > always write to the same topic and partition.
> >
> > On the consumer side, we plan to have an aggregation client with 8
> > threads. Each thread will pull from the same topic, cycling through
> > each partition within the topic in round-robin fashion.
> >
> > A couple of design questions on which we would like to hear your
> > opinion; any suggestions are appreciated!
> >
> > 1) If we dynamically add another node to the cluster, then based on our
> > design above, we would want to create a new partition and add it to
> > each of the 8 topics. Is this doable, and how do I do it?
> >
> > 2) Can a producer send a message to a specific partition? Based on our
> > limited understanding so far, a producer in a clustered Kafka system
> > sends messages to the cluster; how are the messages distributed among
> > the different brokers/partitions?
> >
> > 3) In our system, we also have a C++ process manager that monitors the
> > Kafka broker process. Is there a heartbeat in the broker that we can
> > ping? Or where can I find the C++ API?
> >
> > 4) I saw that kafka.server.KafkaServer has a shutdown method. Is there
> > a command port that I can send a shutdown command to, or how do I
> > trigger this shutdown method from an external process such as our
> > process monitor?
> >
> > 5) We are thinking that in the future our cluster may grow much larger
> > and we may have to add additional Kafka brokers. What parameters should
> > we look at to determine when we should add another broker?
> >
> > Thanks in advance for your time and help.
> > Jamie
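A minimal sketch of the "use the partitioner" suggestion in 2), written
against the 0.8-era producer's Java API (kafka.javaapi.producer.Producer).
The interface shifted slightly between 0.7 and 0.8 (ProducerData vs.
KeyedMessage, a generic vs. untyped Partitioner), so treat the names as
approximate; NodeIdPartitioner, PinnedProducerExample, the topic name, the
broker list, and the node-id key are all made up for illustration. The idea
matches the design in the thread: key each message with the producing
node's index so it always lands in the same partition.

    // NodeIdPartitioner.java -- routes every message to the partition
    // named by its key, so a producer thread on node N pins itself to
    // partition N of the topic.
    import kafka.producer.Partitioner;
    import kafka.utils.VerifiableProperties;

    public class NodeIdPartitioner implements Partitioner {
        // The 0.8 producer constructs the partitioner reflectively and
        // hands it the producer config; this constructor is required.
        public NodeIdPartitioner(VerifiableProperties props) { }

        public int partition(Object key, int numPartitions) {
            return Math.abs(Integer.parseInt((String) key)) % numPartitions;
        }
    }

    // PinnedProducerExample.java -- one of the per-log producer threads,
    // here pretending to run on node 3.
    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class PinnedProducerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder broker list; point this at the real cluster.
            props.put("metadata.broker.list", "broker1:9092,broker2:9092");
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            props.put("partitioner.class", "NodeIdPartitioner");

            Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
            // Key "3" is this node's index, so every message from node 3
            // lands in partition 3 of the (made-up) "access-log" topic.
            KeyedMessage<String, String> line =
                new KeyedMessage<String, String>("access-log", "3", "a log line");
            producer.send(line);
            producer.close();
        }
    }

The caveat from 2) still applies: without 0.8 replication, if the broker
hosting that pinned partition is down, writes to it fail until the broker
comes back.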
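For 3), the "connect to the Kafka port" check can be a plain TCP connect;
as the answer implies, these broker versions have no dedicated heartbeat
endpoint, so reachability of the listening socket is the usual probe. A
small Java sketch, assuming localhost and the default port 9092 as
placeholders:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class BrokerPortCheck {
        // Returns true if something is accepting connections on the
        // broker port; host/port are placeholders for the broker's
        // configured socket address.
        static boolean isBrokerListening(String host, int port, int timeoutMs) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), timeoutMs);
                return true;
            } catch (IOException e) {
                return false;
            }
        }

        public static void main(String[] args) {
            System.out.println(isBrokerListening("localhost", 9092, 2000));
        }
    }

The same probe is a one-liner from a C++ process manager using connect(2).
A successful connect only shows the port is open, not that the broker is
fully healthy, so treat it as a liveness check rather than a health check.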