kafka-users mailing list archives

From Chanchal Chatterji <chanchal.chatte...@infosys.com>
Subject RE: Need info
Date Wed, 12 Sep 2018 07:51:45 GMT
We are planning to produce bank statements out of the data traversing through Kafka. (A
simple example of this would be the bank statement for a savings account / current account,
in printable format, that we see in daily life.)

So your three suggestions:

1. Build your cluster right
2. Size your messages right
3. Tune your producers right

Do you mean that if we get these three right, we can flow an arbitrary volume of data through Kafka?
Also, if you have any observation on how large a typical 'bank statement' message could be (in
MB), and if you can share that, it would be a great help.

Regards
Chanchal Chatterji 
Principal Consultant,
Infosys Ltd.
Electronic city Phase-1,
Bangalore-560100
Contacts : 9731141606/ 8105120766

-----Original Message-----
From: Liam Clarke <liam.clarke@adscale.co.nz> 
Sent: Wednesday, September 12, 2018 1:10 PM
To: users@kafka.apache.org
Subject: Re: Need info

The answer to your question is "it depends". If you build your cluster right, size your messages
right, and tune your producers right, you can achieve near-real-time transport of terabytes
of data a day.
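
As a rough illustration of "tune your producers right" and "size your messages right", here is a
minimal Java producer sketch; the broker addresses, topic name, and the specific values are
placeholder assumptions for discussion, not recommendations:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class StatementProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Illustrative broker list; replace with your own cluster.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            // "Tune your producers right": batch and compress to trade a little
            // latency for much higher throughput.
            props.put(ProducerConfig.LINGER_MS_CONFIG, 20);           // wait up to 20 ms to fill a batch
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 256 * 1024);  // 256 KB batches
            props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
            props.put(ProducerConfig.ACKS_CONFIG, "all");             // durability over raw speed

            // "Size your messages right": the default cap is roughly 1 MB per record
            // (max.request.size here, message.max.bytes on the broker), so very large
            // statements are usually split or compressed rather than sent whole.
            props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1_048_576);

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("bank-statements", "account-123", "<statement payload>"));
            }
        }
    }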


There have been plenty of articles written about Kafka performance. E.g.,
https://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines

Kind regards,

Liam Clarke

On Wed, 12 Sep. 2018, 7:32 pm Chanchal Chatterji, <chanchal.chatterji@infosys.com>
wrote:

> Hi,
>
> In the process of mainframe modernization, we are attempting to stream
> mainframe data to AWS Cloud using Kafka.  We are planning to use the
> Kafka 'Producer API' on the mainframe side and the 'Connector API' on the cloud side.
> Our data is processed by a module called 'Central dispatch' located on
> the mainframe and is sent to Kafka.  We want to know what rate of volume
> Kafka can handle.  The other end of Kafka is connected to an AWS S3
> bucket as a sink.  Please help us with this information, or else please
> connect us with a relevant person who can help us understand this.
>
> Thanks and Regards
>
> Chanchal Chatterji
> Principal Consultant,
> Infosys Ltd.
> Electronic city Phase-1,
> Bangalore-560100
> Contacts : 9731141606/ 8105120766
>
>
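
For the cloud side described in the quoted message above, here is a rough sketch of registering an
S3 sink through the Kafka Connect REST API, assuming the Confluent S3 sink connector is installed
on the Connect workers; the Connect URL, bucket, region, and topic name are placeholders:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RegisterS3Sink {
        public static void main(String[] args) throws Exception {
            // Connector config as JSON; the class and keys come from the Confluent
            // S3 sink connector, the values here are placeholders.
            String config = """
                {
                  "name": "bank-statements-s3-sink",
                  "config": {
                    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
                    "tasks.max": "4",
                    "topics": "bank-statements",
                    "s3.bucket.name": "my-statement-bucket",
                    "s3.region": "ap-south-1",
                    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
                    "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
                    "flush.size": "1000"
                  }
                }
                """;

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://connect-host:8083/connectors"))  // placeholder Connect worker
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(config))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }

In a setup like this, the producer tuning on the mainframe side plus the connector's tasks.max and
flush.size largely determine how quickly records land in the S3 bucket.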