storm-user mailing list archives

From "Hannum, Daniel" <>
Subject Re: Does storm guarantee that all tuples will be 'acked' or 'failed'
Date Thu, 21 Sep 2017 17:12:03 GMT
Thank you!

From: Stig Rohde Døssing <>
Date: Thursday, September 21, 2017 at 12:45 PM
Subject: Re: Does storm guarantee that all tuples will be 'acked' or 'failed'


Storm guarantees that all tuples will be acked or failed on the spout instance that emitted
them. If the spout emits a tuple and the process dies and a new one comes up, the new process
may or may not receive the old ack/fail (usually won't but can happen in some cases where
the message id only depends on the emitted message, e.g. the KafkaSpout).
You should leave the records in storage until they are acked. A common approach to this is
to keep identifiers for the in-progress records in memory in the spout, and then remove them
(and delete the underlying record in your case) when the tuple is acked.
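The in-memory bookkeeping described above can be sketched as follows. This is a minimal illustration, not Storm's API: the Storm spout interfaces are omitted, and the tracker would be driven from a spout's nextTuple(), ack(), and fail() methods. The storage deletion is left as a hypothetical comment since it depends on your backing store.

```java
import java.util.HashSet;
import java.util.Set;

// Tracks record identifiers that have been emitted but not yet acked,
// so that nextTuple() can skip in-flight records and ack() knows when
// it is safe to delete the underlying record from storage.
class PendingTracker {
    private final Set<String> inProgress = new HashSet<>();

    // Called from nextTuple(): returns false if the record is already
    // in flight and should be skipped.
    boolean tryEmit(String recordId) {
        return inProgress.add(recordId);
    }

    // Called from ack(): the tuple was fully processed, so it is now
    // safe to delete the record from the backing store.
    void onAck(String recordId) {
        inProgress.remove(recordId);
        // deleteFromStorage(recordId); // hypothetical storage call
    }

    // Called from fail(): clear the in-flight flag so a later
    // nextTuple() call can re-emit the record from storage.
    void onFail(String recordId) {
        inProgress.remove(recordId);
    }
}
```

A failed record stays in storage and becomes eligible for re-emission; only an ack triggers deletion, which preserves the record across spout crashes.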

2017-09-21 18:29 GMT+02:00 Hannum, Daniel <>:
I’m writing my own spout, backed by a persistent store outside of storm.

What I need to know is whether Storm guarantees that a spout will always be called with ack()
or fail() for a given tuple, i.e. even if the spout process dies, another instance will make
the call.
If this is true, then I can remove the record from storage in nextTuple() and put it back
on in fail(), and I’ll be sure I’ll never lose any even in case of failure.

If this is not true, then I need to keep the record in the underlying storage after nextTuple()
and not remove it until ack(). This just makes it harder, because subsequent nextTuple()
calls have to know to skip the in-progress records.

So, I hope Storm provides this guarantee.

