kafka-users mailing list archives

From Jörn Franke <jornfra...@gmail.com>
Subject Re: Real time streaming as a microservice
Date Sun, 08 Jul 2018 09:25:07 GMT
That they are loosely coupled does not mean they are independent. For instance, you would
not be able to replace Kafka with ZeroMQ in your scenario. Unfortunately, Kafka also sometimes
needs to introduce breaking changes, and the dependent applications then need to upgrade.
You will not be able to avoid these scenarios in the future (that would only be possible if
microservices never communicated with each other, or never needed to change their communication
protocol, which is practically impossible). However, there are of course ways to reduce the impact:
Kafka could reduce the number of breaking changes, or you can develop a very lightweight
microservice that is easy to change and that deals only with the broker integration,
keeping your application isolated from it.
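
The last suggestion can be sketched in plain Scala. This is only an illustration, not a real library: the names MessageSource, InMemorySource, and AppLogic are hypothetical. The point is that the application depends only on a small trait, so a Kafka client upgrade (or a broker swap) touches only the one adapter class that implements it.

```scala
// Hedged sketch of a thin "broker integration" layer. All names here are
// illustrative; a real adapter would wrap kafka-clients (or another broker
// client) behind this trait.

trait MessageSource {
  // Return the next batch of messages (empty when nothing new is available).
  def poll(): Seq[String]
}

// Stand-in adapter so the sketch is self-contained and runnable.
final class InMemorySource(messages: Seq[String]) extends MessageSource {
  private var offset = 0
  def poll(): Seq[String] = {
    val batch = messages.drop(offset)
    offset = messages.length
    batch
  }
}

object AppLogic {
  // Application code never sees broker-specific types, only MessageSource.
  def wordCount(source: MessageSource): Map[String, Int] =
    source.poll()
      .flatMap(_.split("\\s+"))
      .groupBy(identity)
      .map { case (word, occurrences) => (word, occurrences.length) }
}

object Demo extends App {
  val counts = AppLogic.wordCount(new InMemorySource(Seq("a b", "a")))
  println(counts) // a -> 2, b -> 1
}
```

Upgrading to a new Kafka client version then means changing one adapter class, not the application logic or its tests.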

> On 8. Jul 2018, at 10:59, Mich Talebzadeh <mich.talebzadeh@gmail.com> wrote:
> Hi,
> I have created the Kafka messaging architecture as a microservice that
> feeds both Spark Streaming and Flink. Spark Streaming uses micro-batches,
> meaning "collect and process data", while Flink follows an event-driven
> architecture (a stateful application that reacts to incoming events by
> triggering computations, etc.).
> According to Wikipedia, microservices are a technique that structures an
> application as a collection of loosely coupled services. In a microservices
> architecture, services are fine-grained and the protocols are lightweight.
> OK, for streaming data, among other things, I have to create and configure
> a topic (or topics), design a robust ZooKeeper ensemble, and create Kafka
> brokers with scalability and resiliency. Then I can offer the streaming as
> a microservice to subscribers, among them Spark and Flink. I can upgrade
> this microservice component in isolation without impacting either Spark or
> Flink.
> The problem I face here is the dependency of Flink etc. on the jar files
> specific to the version of Kafka deployed. For example, kafka_2.12-1.1.0 is
> built with Scala 2.12 and Kafka version 1.1.0. To make this work in a Flink
> 1.5 application, I need to use the correct dependencies in the sbt build.
> For example:
> libraryDependencies += "org.apache.flink" %% "flink-connector-kafka-0.11" % "1.5.0"
> libraryDependencies += "org.apache.flink" %% "flink-connector-kafka-base" % "1.5.0"
> libraryDependencies += "org.apache.flink" %% "flink-scala" % "1.5.0"
> libraryDependencies += "org.apache.kafka" % "kafka-clients" % ""
> libraryDependencies += "org.apache.flink" %% "flink-streaming-scala" % "1.5.0"
> libraryDependencies += "org.apache.kafka" %% "kafka" % ""
> and the Scala code needs to change:
> import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011
> …
>     val stream = env
>       .addSource(new FlinkKafkaConsumer011[String]("md", new SimpleStringSchema(), properties))
> So, in summary, some changes need to be made to Flink to be able to interact
> with the new version of Kafka. And, more importantly, can one still speak of
> an abstract notion of a microservice here?
> Dr Mich Talebzadeh
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> http://talebzadehmich.wordpress.com
> Disclaimer: Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
