storm-user mailing list archives

From Nathan Leung <ncle...@gmail.com>
Subject Re: Basic storm question
Date Tue, 01 Apr 2014 20:43:32 GMT
By default supervisor nodes can run up to 4 workers.  This is configurable
in storm.yaml (for example see supervisor.slots.ports here:
https://github.com/nathanmarz/storm/blob/master/conf/defaults.yaml).
 Memory should be split between the workers.  Each worker has a typical Java
heap, so anything running in that worker process shares its heap.
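
The relevant entries look roughly like this (a sketch of the stock values in
that defaults.yaml; adjust the ports and heap size for your own nodes):

    # storm.yaml / defaults.yaml excerpt
    supervisor.slots.ports:       # one worker slot per port; 4 ports = up to 4 workers
        - 6700
        - 6701
        - 6702
        - 6703
    worker.childopts: "-Xmx768m"  # JVM options for each worker; sets the per-worker heap

Since each worker is its own JVM, that -Xmx value is a per-worker limit, not a
per-node one.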


On Tue, Apr 1, 2014 at 4:10 PM, David Crossland <david@elastacloud.com> wrote:

>  On said subject, how does memory allocation work in these cases? Assuming
> 1 worker per node, would you just dump all the available memory into
> worker.childopts? I guess the memory pool would be shared between the
> spawned threads as appropriate to their needs?
>
>  I'm assuming the equivalent options for supervisor/nimbus are fine left
> at defaults.  Given that the workers/spouts/bolts are the working parts of
> the topology, these would be where I should target available memory?
>
>  D
>
>  From: Huiliang Zhang <zhlntu@gmail.com>
> Sent: Tuesday, 1 April 2014 19:47
> To: user@storm.incubator.apache.org
>
>  Thanks. It would be good if there were some example figures explaining
> the relationship between tasks, workers, and threads.
>
>
> On Sat, Mar 29, 2014 at 6:34 AM, Susheel Kumar Gadalay <
> skgadalay@gmail.com> wrote:
>
>> No, a single worker is dedicated to a single topology no matter how
>> many threads it spawns for different bolts/spouts.
>> A single worker cannot be shared across multiple topologies.
>>
>> On 3/29/14, Nathan Leung <ncleung@gmail.com> wrote:
>> > From what I have seen, the second topology is run with 1 worker until you
>> > kill the first topology or add more worker slots to your cluster.
>> >
>> >
>> > On Sat, Mar 29, 2014 at 2:57 AM, Huiliang Zhang <zhlntu@gmail.com> wrote:
>> >
>> >> Thanks. I am still not clear.
>> >>
>> >> Do you mean that in a single worker process, there will be multiple
>> >> threads and each thread will handle part of a topology? If so, what does
>> >> the number of workers mean when submitting a topology?
>> >>
>> >>
>> >> On Fri, Mar 28, 2014 at 11:18 PM, padma priya chitturi <
>> >> padmapriya30@gmail.com> wrote:
>> >>
>> >>> Hi,
>> >>>
>> >>> No, it's not the case. No matter how many topologies you submit, the
>> >>> workers will be shared among the topologies.
>> >>>
>> >>> Thanks,
>> >>> Padma Ch
>> >>>
>> >>>
>> >>> On Sat, Mar 29, 2014 at 5:11 AM, Huiliang Zhang <zhlntu@gmail.com>
>> >>> wrote:
>> >>>
>> >>>> Hi,
>> >>>>
>> >>>> I have a simple question about storm.
>> >>>>
>> >>>> My cluster has just 1 supervisor and 4 ports are defined to run 4
>> >>>> workers. I first submit a topology which needs 3 workers. Then I submit
>> >>>> another topology which needs 2 workers. Does this mean that the 2nd
>> >>>> topology will never be run?
>> >>>>
>> >>>> Thanks,
>> >>>> Huiliang
>> >>>>
>> >>>
>> >>>
>> >>
>> >
>>
>
>
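
For completeness, the "number of workers" in the quoted scenario is simply what
the topology requests at submit time. A minimal sketch against the old
backtype.storm API (the class name, topology name, and the TestWordSpout
stand-in are placeholders, not anything from the thread above):

    import backtype.storm.Config;
    import backtype.storm.StormSubmitter;
    import backtype.storm.testing.TestWordSpout;
    import backtype.storm.topology.TopologyBuilder;

    public class SubmitExample {
        public static void main(String[] args) throws Exception {
            // Placeholder topology: a single test spout, just so the topology is valid.
            TopologyBuilder builder = new TopologyBuilder();
            builder.setSpout("words", new TestWordSpout(), 2);

            Config conf = new Config();
            conf.setNumWorkers(3);  // request 3 worker slots from the cluster
            // Optionally override the per-worker heap for this topology only:
            conf.put(Config.TOPOLOGY_WORKER_CHILDOPTS, "-Xmx1024m");

            StormSubmitter.submitTopology("first-topology", conf, builder.createTopology());
        }
    }

With only 4 slots on the supervisor, a second topology submitted with
conf.setNumWorkers(2) gets the one remaining slot until a slot frees up, which
matches what Nathan described above.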
