storm-user mailing list archives

From Nathan Marz <nat...@nathanmarz.com>
Subject Re: Storm acking mechanism, proposal to improvement.
Date Tue, 02 Dec 2014 04:45:11 GMT
Acking adds one message per tuple, which is at most a 50% throughput drop in
message count. However, ack messages are quite small, so the actual drop is
unlikely to be anywhere near that much. In addition, the acker bolt is highly
efficient and uses very little CPU.
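To make the "at most 50%" bound concrete, here is a minimal accounting sketch
(plain Python, my own illustration, not anything from the Storm codebase): if
the transport's budget is counted in messages per second and every tuple adds
one ack message, the data-carrying fraction of that budget can drop to at most
one half.

```python
# Sketch of the message-count accounting behind the "at most 50%" figure.
# Assumption (mine): the transport caps total messages/sec, so one ack per
# tuple at most doubles message volume and thus at most halves tuple rate.
def data_fraction(tuples_per_sec: int, acks_per_tuple: int = 1) -> float:
    """Fraction of the message budget that carries actual data tuples."""
    total_messages = tuples_per_sec * (1 + acks_per_tuple)
    return tuples_per_sec / total_messages

# With one ack per tuple, at most half the message budget carries data:
print(data_fraction(400_000))  # 0.5
```

In practice the drop is smaller than this worst case, because ack messages are
far smaller than data tuples, so the real constraint is usually bytes and CPU
rather than raw message count.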

Local acking does not really make sense. If you're in a situation where you
would benefit from "local acking", that means you have a lot of bolts
strung together with "localOrShuffle"; in that case you should try
packing those operations into a single bolt (as Trident does).
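The "pack operations into a single bolt" idea can be sketched in plain Python
(this is a hypothetical illustration of the data flow, not the actual Storm
bolt API): two stages chained with localOrShuffle mean every tuple crosses an
in-process queue and gets its own ack bookkeeping at the stage boundary, while
fusing the logic into one bolt removes that hop entirely.

```python
# Hypothetical sketch (mine): two processing steps that could each be a bolt.
def parse(raw: str) -> list[str]:
    """Stage 1: split a raw record into fields (previously its own bolt)."""
    return raw.split(",")

def enrich(fields: list[str]) -> dict:
    """Stage 2: build a keyed record (previously a downstream bolt)."""
    return {"key": fields[0], "value": fields[1]}

def fused_bolt(raw: str) -> dict:
    # One execute() does both steps: no intermediate queue between stages,
    # and no per-stage ack message for the tuple crossing that boundary.
    return enrich(parse(raw))

print(fused_bolt("a,1"))  # {'key': 'a', 'value': '1'}
```

This is essentially what Trident does when it compiles adjacent operations
into a single bolt, so the per-hop messaging and acking cost disappears
without changing the semantics of the pipeline.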

On Mon, Dec 1, 2014 at 2:41 PM, Vladi Feigin <vladif86@gmail.com> wrote:

> Hi All,
>
> We use Storm 0.82. Our throughput is 400K messages per sec.
> From the Storm UI we calculate that the total latency of all spouts and
> bolts is ~3.5 min to process 10 min of data, but in reality it takes 13
> min! Obviously this creates huge backlogs.
> We don't have any time-out failures, and we don't see any other exceptions.
> So our main suspect is Storm's acking mechanism, which uses a lot of
> network.
> (BTW, if you have another opinion, please let me know.)
> We think the fact that all ack messages go via 0mq, even when the acker
> bolt runs in the same worker, causes this huge performance drop. An ack is
> sent per tuple (micro-batches are not supported), which is inefficient.
> As far as we know, there is no way to configure the acker bolt to work
> with local shuffle (as is possible for other bolts).
> We'd like to ask your opinion on a new feature we propose for Storm:
> support for local acking. That is, if an acker runs locally in the same
> worker, send the ack messages via the local Disruptor queue (as
> localOrShuffle does) rather than via 0mq.
>
> Does it make sense? What do you think?
>
> If you think the root cause of our problem is something else, please let
> us know.
> Thank you in advance,
> Vladi
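One way to sanity-check the numbers in the question above (this arithmetic is
my own illustration using the figures from the thread): summed spout/bolt
execute latencies account for ~3.5 min of work per 10 min of input, yet the
wall-clock time is ~13 min. The difference is time tuples spend queued between
components, which per-component execute latency in the Storm UI does not
count, so a large gap points at queueing and backpressure rather than at the
execute paths themselves.

```python
# Numbers taken from the thread; the decomposition is my illustration.
data_window_min = 10.0  # minutes of input data in the window
execute_sum_min = 3.5   # summed spout/bolt latencies reported by Storm UI
wall_clock_min = 13.0   # observed end-to-end processing time

# Time not explained by execute latency: spent queued between components.
queueing_min = wall_clock_min - execute_sum_min
print(queueing_min)  # 9.5

# The backlog grows because processing is slower than arrival:
backlog_growth_per_window_min = wall_clock_min - data_window_min
print(backlog_growth_per_window_min)  # 3.0
```

Under this reading, shaving ack-message overhead would only help insofar as it
shrinks the queueing term; the per-tuple execute latencies were never the
dominant cost.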


-- 
Twitter: @nathanmarz
http://nathanmarz.com
