spark-user mailing list archives

From: Lars Albertsson <>
Subject: Re: Fail a batch in Spark Streaming forcefully based on business rules
Date: Sun, 31 Jul 2016 22:13:28 GMT
I don't know your context, so I don't have a solution for you. If you
provide more information, the list might be able to suggest a solution.

IIUYC, however, it sounds like you could benefit from decoupling
operational failure from business-level failure. E.g. if a batch
violates your business rules, keep the job running, but emit
business-level failure records. If records need to be reprocessed,
emit them to a separate stream/topic and reprocess them from there.
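
For illustration, here is a minimal sketch of that pattern in Spark
Streaming (Scala). The socket source, the 100-message threshold, and
the println "sink" are assumptions drawn from your example; substitute
your real input stream and a real side output (e.g. a Kafka topic):

  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}

  object BusinessFailureDemo {
    def main(args: Array[String]): Unit = {
      val conf = new SparkConf().setAppName("business-failure-demo")
      val ssc = new StreamingContext(conf, Seconds(2))

      // Placeholder source; swap in your real input stream.
      val messages = ssc.socketTextStream("localhost", 9999)

      messages.foreachRDD { rdd =>
        val count = rdd.count()
        if (count < 100) {
          // Business-level failure: emit a failure record and move on,
          // instead of throwing and killing the streaming job.
          println(s"business failure: expected >= 100 messages, got $count")
        } else {
          // Normal processing path.
          rdd.foreachPartition(_.foreach(println))
        }
      }

      ssc.start()
      ssc.awaitTermination()
    }
  }

The point is that a short batch produces a failure record, not a job
failure.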

It is risky to inject system-level failures under normal operations.
An operational failure is normally an anomaly that should be
addressed; if you induce failures, system failures become part of
normal operations, and real failures risk going unnoticed.


Lars Albertsson
Data engineering consultant
+46 70 7687109

On Thu, Jul 28, 2016 at 12:11 PM, Hemalatha A
<> wrote:
> Another use case why I need to do this: if Exception A is caught, I should
> just print it and ignore it, but if Exception B occurs, I have to end the
> batch, fail it, and stop processing.
> Is it possible to achieve this? Any hints on this, please.
> On Wed, Jul 27, 2016 at 10:42 AM, Hemalatha A
> <> wrote:
>> Hello,
>> I have a use case wherein I have to fail certain batches in my streaming
>> job, based on my application-specific business rules.
>> Ex: If in a batch of 2 seconds I don't receive 100 messages, I should fail
>> the batch and move on.
>> How to achieve this behavior?
>> --
>> Regards
>> Hemalatha
> --
> Regards
> Hemalatha
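
For the Exception A / Exception B case asked above, a minimal sketch
(assuming the `messages` stream from the sketch earlier; ExceptionA,
ExceptionB, and processBatch are hypothetical names) is to catch
per-batch errors inside foreachRDD and rethrow only the fatal ones. An
exception that escapes the driver-side foreachRDD closure propagates
to ssc.awaitTermination(), which stops the job:

  // Hypothetical exception types for illustration.
  class ExceptionA(msg: String) extends Exception(msg)
  class ExceptionB(msg: String) extends Exception(msg)

  messages.foreachRDD { rdd =>
    try {
      processBatch(rdd) // hypothetical per-batch business logic
    } catch {
      case e: ExceptionA =>
        // Recoverable: log it and continue with the next batch.
        println(s"ignoring: ${e.getMessage}")
      case e: ExceptionB =>
        // Fatal: rethrow so the error propagates out of the driver
        // loop and ssc.awaitTermination() fails, stopping the job.
        throw e
    }
  }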
