From: Harsha <storm@harsha.io>
To: user@storm.apache.org
Date: Thu, 26 Feb 2015 14:20:40 -0800
Subject: Re: Why is topology.workers hardcoded to 1

I am not sure I follow. You are not setting numWorkers in your topology, so by default it will use 1 worker: when a topology is deployed, it is assigned to a single worker. If you want to distribute the topology among multiple workers, add conf.setNumWorkers(desired_workers).

On Thu, Feb 26, 2015, at 02:12 PM, Srividhya Shanmugam wrote:
> I guess it's a problem...
> I am looking at the following lines in Nimbus.clj, in the
> compute-new-task->node+port function:
>
> total-slots-to-use (min (storm-conf TOPOLOGY-WORKERS)
>                         (+ (count available-slots) (count alive-assigned)))
>
> If storm-conf does not have the TOPOLOGY-WORKERS property set, it
> should calculate based on available slots.
>
> But nimbus is launched with a conf where topology.workers is set to 1.
> That's because this property defaults to 1 in defaults.yaml, which is
> read in the Utils class. defaults.yaml is merged with the storm.yaml
> file, and since storm.yaml does not set this property, the default
> (hardcoded in defaults.yaml) is used.
>
> Isn't this a bug?
>
> Thanks,
> Srividhya
>
> *From:* Harsha [mailto:storm@harsha.io]
> *Sent:* Thursday, February 26, 2015 3:44 PM
> *To:* user@storm.apache.org
> *Subject:* Re: Why is topology.workers hardcoded to 1
>
> Are you setting numWorkers in your topology config, as done here:
> https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/storm/starter/WordCountTopology.java#L92
>
> On Thu, Feb 26, 2015, at 12:40 PM, Srividhya Shanmugam wrote:
>> Thanks for the reply, Harsha. We have two distributed supervisor
>> nodes and a nimbus node. The storm.yaml file has the topology.workers
>> property commented out. When a topology with one spout and one bolt,
>> each with a parallelism hint of 10, was submitted before the 0.9.3
>> upgrade, storm distributed the work across multiple worker processes.
>> The supervisor slots configured on the three nodes are 6701, 6702,
>> and 6703.
>>
>> When such a topology is submitted now (after the upgrade), just one
>> worker process gets created, with 21 executor threads. Shouldn't
>> storm distribute the work?
>>
>> *From:* Harsha [mailto:storm@harsha.io]
>> *Sent:* Thursday, February 26, 2015 3:33 PM
>> *To:* user@storm.apache.org
>> *Subject:* Re: Why is topology.workers hardcoded to 1
>>
>> Srividhya,
>> Storm topologies require at least one worker to run, hence the
>> default value of topology.workers is 1. Can you explain in more
>> detail what you are trying to achieve?
>> Thanks,
>> Harsha
>>
>> On Thu, Feb 26, 2015, at 12:12 PM, Srividhya Shanmugam wrote:
>>> I have commented out this property in storm.yaml, but it still
>>> always defaults to 1 after we upgraded storm to 0.9.3. Any idea why
>>> it's hardcoded?
>>>
>>> This email and any files transmitted with it are confidential,
>>> proprietary and intended solely for the individual or entity to whom
>>> they are addressed. If you have received this email in error please
>>> delete it immediately.
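[Editor's note] The cluster-wide alternative to calling conf.setNumWorkers in the topology is to un-comment the property in storm.yaml, which overrides the shipped defaults.yaml value for every topology that does not set it itself. A sketch with illustrative values (the worker count of 4 and the slot ports are examples, not recommendations):

```yaml
# storm.yaml -- merged over the shipped defaults.yaml; any key absent
# here falls back to defaults.yaml (where topology.workers is 1)
topology.workers: 4
supervisor.slots.ports:
    - 6701
    - 6702
    - 6703
```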
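[Editor's note] The precedence chain Srividhya describes (defaults.yaml, then storm.yaml, then the per-topology conf, with later sources winning) can be sketched as a plain map merge. This is a simplified model of the behavior under discussion, not Storm's actual Utils code:

```java
import java.util.HashMap;
import java.util.Map;

public class ConfMergeSketch {
    // Later sources override earlier ones, key by key.
    static Map<String, Object> merge(Map<String, Object> defaults,
                                     Map<String, Object> overrides) {
        Map<String, Object> merged = new HashMap<>(defaults);
        merged.putAll(overrides);
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Object> defaultsYaml = new HashMap<>();
        defaultsYaml.put("topology.workers", 1); // shipped default

        // Property commented out in storm.yaml -> key simply absent.
        Map<String, Object> stormYaml = new HashMap<>();

        // Topology did not call conf.setNumWorkers -> key absent here too.
        Map<String, Object> topologyConf = new HashMap<>();

        Map<String, Object> merged =
            merge(merge(defaultsYaml, stormYaml), topologyConf);

        // Nothing overrode the key, so the defaults.yaml value survives.
        System.out.println(merged.get("topology.workers")); // prints 1
    }
}
```

Commenting a property out of storm.yaml therefore never "unsets" it; it only exposes the defaults.yaml value underneath.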