storm-user mailing list archives

From Harsha <st...@harsha.io>
Subject Re: Why is topology.workers hardcoded to 1
Date Thu, 26 Feb 2015 22:20:40 GMT

I am not sure I follow. You are not setting numWorkers in your topology,
so by default it will get 1 worker. If you deploy a topology, it will be
assigned to one worker. If you want to distribute the topology among
multiple workers, add conf.setNumWorkers(desired_workers).
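For context, a minimal sketch of how this looks when building a topology, assuming the standard Storm 0.9.x API (the topology name and worker count here are illustrative, not from the thread):

```java
import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.TopologyBuilder;

public class WorkerConfigExample {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // ... declare spouts and bolts with their parallelism hints here ...

        Config conf = new Config();
        // Without this call, topology.workers falls back to the
        // defaults.yaml value of 1 and all executors share one worker.
        conf.setNumWorkers(3);

        StormSubmitter.submitTopology("example-topology", conf,
                builder.createTopology());
    }
}
```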



On Thu, Feb 26, 2015, at 02:12 PM, Srividhya Shanmugam wrote:
> I guess it’s a problem….


> I am looking at the following lines in nimbus.clj, in the
> compute-new-task->node+port function:
>
> total-slots-to-use (min (storm-conf TOPOLOGY-WORKERS)
>                         (+ (count available-slots) (count alive-assigned)))


>


> If the storm-conf does not have the TOPOLOGY-WORKERS property set, it
> should calculate the number based on the available slots.
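The quoted Clojure computation can be sketched in plain Java to show the effect being discussed: with the default topology.workers of 1, the min() clamps the topology to a single slot no matter how many supervisor slots are free.

```java
public class SlotCalc {
    // Sketch of nimbus's total-slots-to-use:
    // min(topology.workers, available slots + already-assigned slots)
    static int totalSlotsToUse(int topologyWorkers, int availableSlots,
                               int aliveAssigned) {
        return Math.min(topologyWorkers, availableSlots + aliveAssigned);
    }

    public static void main(String[] args) {
        // Default topology.workers = 1: only one slot is used,
        // even with six free supervisor slots.
        System.out.println(totalSlotsToUse(1, 6, 0)); // 1
        // Raising topology.workers lets nimbus use the free slots.
        System.out.println(totalSlotsToUse(4, 6, 0)); // 4
    }
}
```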


>


> But nimbus is launched with a conf where topology.workers is
> set to 1. That's because this property defaults to 1 in
> defaults.yaml, which gets read in the Utils class.


> This is merged with the storm.yaml file. Since storm.yaml does
> not have this property set, the default (hardcoded in
> defaults.yaml) is used.
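The merge behavior being described can be sketched as a simple map overlay (this is an illustrative model, not Storm's actual Utils code): user-set keys from storm.yaml override the defaults, and any key left commented out keeps its defaults.yaml value.

```java
import java.util.HashMap;
import java.util.Map;

public class ConfMerge {
    // Defaults are loaded first, then user config is layered on top.
    static Map<String, Object> merge(Map<String, Object> defaults,
                                     Map<String, Object> user) {
        Map<String, Object> conf = new HashMap<>(defaults);
        conf.putAll(user); // user-set keys win; missing keys keep defaults
        return conf;
    }

    public static void main(String[] args) {
        Map<String, Object> defaults = new HashMap<>();
        defaults.put("topology.workers", 1); // from defaults.yaml

        Map<String, Object> user = new HashMap<>(); // key commented out
        // With no override, the default of 1 survives the merge:
        System.out.println(merge(defaults, user).get("topology.workers"));
    }
}
```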


>


> Isn’t this a bug?
>


>


> Thanks,


> Srividhya


>


> *From:* Harsha [mailto:storm@harsha.io]
>
> *Sent:* Thursday, February 26, 2015 3:44 PM
> *To:* user@storm.apache.org
> *Subject:* Re: Why is topology.workers hardcoded to 1

>


> Are you setting numWorkers in your topology config, like here:
> https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/storm/starter/WordCountTopology.java#L92


>


>


> On Thu, Feb 26, 2015, at 12:40 PM, Srividhya Shanmugam wrote:


>> Thanks for the reply, Harsha. We have distributed supervisor nodes (2)
>> and a nimbus node. The storm.yaml file has the topology.workers
>> property commented out. When a topology with one spout and one bolt,
>> each with a parallelism hint of 10, was submitted before the 0.9.3
>> upgrade, Storm distributed this work across multiple worker processes.
>> The supervisor slots configured on the three nodes have the values
>> 6701, 6702, 6703.


>>


>> When such a topology is submitted now (after the upgrade), only one
>> worker process gets created, with 21 executor threads. Shouldn't Storm
>> distribute the work?


>>


>> *From:* Harsha [mailto:storm@harsha.io]
>>
>> *Sent:* Thursday, February 26, 2015 3:33 PM
>> *To:* user@storm.apache.org
>> *Subject:* Re: Why is topology.workers hardcoded to 1

>>


>> Srividhya,


>> Storm topologies require at least one worker to be available to run.
>> Hence the default value for topology.workers is set to 1. Can you
>> explain in more detail what you are trying to achieve?


>> Thanks,


>> Harsha


>>


>>


>> On Thu, Feb 26, 2015, at 12:12 PM, Srividhya Shanmugam wrote:


>>> I have commented out this property in storm.yaml, but it still
>>> defaults to 1 after we upgraded Storm to 0.9.3. Any idea why
>>> it's hardcoded?


>>>


>>> This email and any files transmitted with it are confidential,
>>> proprietary and intended solely for the individual or entity to whom
>>> they are addressed. If you have received this email in error please
>>> delete it immediately.


>>


>>




>


>

