kafka-users mailing list archives

From Jörn Franke <jornfra...@gmail.com>
Subject Re: Higher number of producer creation/cleanup can lead to memory leaks at the brokers?
Date Thu, 15 Aug 2019 07:28:07 GMT
Even if it is not a memory leak, it is not good practice. You could put the messages on SQS
and have a Lambda function with reserved concurrency listening to the SQS queue to forward
them to Kafka.
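The forwarding step of that pattern could be sketched roughly like this (a minimal sketch, not a definitive implementation: the topic name is an assumption, the event shape follows the standard SQS-triggered Lambda event, and the producer is passed in so that in a real deployment a single `KafkaProducer` could be created once per warm container instead of per invocation):

```python
# Hedged sketch: SQS-triggered Lambda forwarding record bodies to Kafka.
# "events" is a hypothetical topic name; the producer object is injected
# (e.g. a kafka-python KafkaProducer created at module load time).

TOPIC = "events"  # hypothetical topic name, an assumption for this sketch

def forward_batch(event, producer):
    """Forward each SQS record's body to Kafka; returns the number sent."""
    sent = 0
    for record in event.get("Records", []):
        # Each SQS record carries its payload in the "body" field.
        producer.send(TOPIC, record["body"].encode("utf-8"))
        sent += 1
    # Flush before returning so buffered messages are delivered
    # before the Lambda execution environment is frozen or reaped.
    producer.flush()
    return sent
```

Because the producer lives at module scope in a real handler, warm invocations reuse the same connection, which avoids the per-invocation producer churn described below.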

> Am 15.08.2019 um 08:52 schrieb Tianning Zhang <tianningzhang@yahoo.de.invalid>:
> 
> Dear all, 
> 
> I am using AWS Lambda functions to produce messages to a Kafka cluster. Since I cannot
control how frequently a Lambda function is initiated/invoked, and I cannot share objects
between invocations, I have to create a new Kafka producer for each invocation and clean
it up after the invocation finishes. Each producer is also set to the same "client.id".
> I noticed that after deploying the Lambda functions, the heap size at the brokers increases
quickly, which eventually resulted in GC problems and other issues at the brokers. It is very
likely that this increase is connected to the Lambda producers.
> I know that it is recommended to reuse a single producer instance for message production,
but in this case (with AWS Lambda) this is not possible.
> My question: can a high number of producer creations/cleanups lead to memory leaks at the
brokers?
> I am using a Kafka cluster with 5 brokers, version 1.0.1. The Kafka client library was
tested with versions 0.11.0.03, 1.0.1 and 2.3.0.
> Thanks in advance
> Tianning Zhang
> 
> T: +49 (30) 509691-8301
> M: +49 172 7095686
> E: tianning.zhang@awin.com
> 
> Eichhornstraße 3, 10785 Berlin
> www.awin.com
> 
