storm-user mailing list archives

From Stig Rohde Døssing <s...@apache.org>
Subject Re: Does storm guarantee that all tuples will be 'acked' or 'failed'
Date Thu, 21 Sep 2017 16:45:08 GMT
Storm guarantees that all tuples will be acked or failed on the spout
instance that emitted them. If the spout emits a tuple and the process dies
and a new one comes up, the new process may or may not receive the old
ack/fail. It usually won't, but it can happen in cases where the message id
depends only on the emitted message (e.g. the KafkaSpout).

You should leave the records in storage until they are acked. A common
approach is to keep identifiers for the in-progress records in memory in
the spout, and then remove them (and, in your case, delete the underlying
record) when the tuple is acked.
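A minimal sketch of that pattern in plain Java (not Storm's actual ISpout API; the class and method names are illustrative): records stay in the backing store until ack(), an in-memory set of in-progress ids keeps nextTuple() from re-emitting records that are in flight, and fail() makes a record eligible again without touching storage.

```java
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of the "delete only on ack" spout pattern.
class ReliableSpoutSketch {
    // Stand-in for the external persistent store (id -> record).
    private final Map<String, String> storage = new LinkedHashMap<>();
    // Ids currently emitted but not yet acked or failed.
    private final Set<String> inProgress = new HashSet<>();

    void addRecord(String id, String record) {
        storage.put(id, record);
    }

    // Emit the next record that is not already in flight, or null if none.
    String nextTuple() {
        for (String id : storage.keySet()) {
            if (!inProgress.contains(id)) {
                inProgress.add(id);
                return id;
            }
        }
        return null;
    }

    // Tuple fully processed: only now delete the underlying record.
    void ack(String id) {
        inProgress.remove(id);
        storage.remove(id);
    }

    // Tuple failed: keep the record, just make it eligible for re-emission.
    void fail(String id) {
        inProgress.remove(id);
    }

    int storedCount() {
        return storage.size();
    }
}
```

Because fail() never deletes from storage, a record can only be lost after its tuple was acked, which is exactly the guarantee the original poster wants.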

2017-09-21 18:29 GMT+02:00 Hannum, Daniel <Daniel_Hannum@premierinc.com>:

> I’m writing my own spout, backed by a persistent store outside of Storm.
>
> What I need to know is whether Storm guarantees that a spout will always
> be called with ack() or fail() for a given tuple, i.e. even if the spout
> process dies, another one will make the call.
>
> If this is true, then I can remove the record from storage in nextTuple()
> and put it back in fail(), and I’ll be sure I’ll never lose any, even in
> case of failure.
>
> If this is not true, then I need to keep the record in the underlying
> storage after nextTuple() and not take it off until ack(). This just
> makes it harder, because subsequent nextTuple() calls have to know to
> skip the in-progress ones.
>
> So, I hope Storm provides this guarantee.
>
> Thanks
