kafka-users mailing list archives

From Edmondo Porcu <edmondo.po...@gmail.com>
Subject Architectural patterns for full log replayability
Date Tue, 22 May 2018 09:25:51 GMT
Hello Kafka Users,

we'd like to understand how you design systems on top of Kafka so that the
full log can be replayed.

In particular, let's take the following example:
- A product service streams product events
- A purchase service streams purchases
- A recommendation service joins the two, determines "special offers" to
apply to products, and emits a recommendation for each
- A special offer service consumes recommendations and updates the product.

This all works very well. However, imagine that the product update
performed by the special offer service is only allowed when the product is
in state "VALID". The first time the special offer service updates the
product, everything works fine.

Now imagine I rewind all the consumer offsets and replay everything from
the beginning. When the special offer service receives a recommendation for
a product, that product has since been marked "OBSOLETE" by the sales team,
and the product update fails.

How are you tackling this sort of issue?

- Are you materializing product state in the consuming service before
performing the update, and filtering out events that are no longer
applicable?
- Are you making the product service fail in such a way that the special
offer service can recognize this specific error and handle it?
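To make the first option concrete, here is a minimal sketch of the
materialize-and-filter idea in plain Java. It only models the pattern: in
a real deployment the map below would be a state store or KTable fed from
the product topic (e.g. via a KStream-KTable join in Kafka Streams), and
the class name, method names, and the "VALID"/"OBSOLETE" statuses are
assumptions for illustration, not anything from an actual codebase.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the special offer service keeps a materialized
// view of the latest product state and consults it before applying a
// recommendation, so that stale recommendations are skipped on replay.
class ReplayAwareOfferService {
    // productId -> latest known status, updated from the product stream
    private final Map<String, String> productStatus = new HashMap<>();

    // Called for every event on the product topic; last write wins,
    // so after a full replay the view converges to current state.
    void onProductEvent(String productId, String status) {
        productStatus.put(productId, status);
    }

    // Called for every recommendation: apply the update only if the
    // product is still VALID, otherwise drop the event silently.
    boolean shouldApply(String recommendedProductId) {
        return "VALID".equals(productStatus.get(recommendedProductId));
    }
}
```

With this shape, replaying the log is harmless: the recommendation for the
now-OBSOLETE product is filtered out instead of producing a failed update.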

Thanks
Edmondo
