nifi-users mailing list archives

From Joe Witt <joe.w...@gmail.com>
Subject Re: NiFi 1.6.0 cluster stability with Site-to-Site
Date Fri, 10 Aug 2018 21:07:31 GMT
Yep, what Mike points to is exactly what I was thinking of.  Since
you're on 1.6.0, the issue is probably something else.  1.6 included
an updated jersey client (or something related to that), and its
performance was really bad for our case.  In 1.7.0 it was replaced
with an implementation leveraging okhttp.  This may be an important
factor.
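
If the jersey client in question is the HTTP path for site-to-site, one thing
worth testing is having the sender use the RAW (socket) transport instead,
which avoids that HTTP client.  A rough sketch of what the receiving nodes'
nifi.properties would need for that (port and hostname are example values only,
worth double-checking against the 1.6.0 admin guide):

    # socket-based (RAW) site-to-site, set on every node in the cluster
    nifi.remote.input.host=<this node's hostname>
    nifi.remote.input.secure=false
    nifi.remote.input.socket.port=10443
    # HTTP site-to-site can stay enabled alongside it
    nifi.remote.input.http.enabled=true
    nifi.remote.input.http.transaction.ttl=30 sec

The sender would then pick RAW as the Transport Protocol on their Remote
Process Group.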

thanks
On Fri, Aug 10, 2018 at 5:02 PM Michael Moser <moser.mw@gmail.com> wrote:
>
> When I read this I thought of NIFI-4598 [1] and this may be what Joe remembers, too.  If your site-to-site clients are older than 1.5.0, then maybe this is a factor?
>
> [1] - https://issues.apache.org/jira/browse/NIFI-4598
>
> -- Mike
>
>
> On Fri, Aug 10, 2018 at 4:43 PM Joe Witt <joe.witt@gmail.com> wrote:
>>
>> Joe G
>>
>> I do recall there were some fixes and improvements related to
>> clustering performance and thread pooling as they relate to site-to-site.
>> I don't recall precisely which version they went into, but I'd strongly
>> recommend trying the latest release if you're able.
>>
>> Thanks
>> On Fri, Aug 10, 2018 at 4:13 PM Martijn Dekkers <martijn@dekkers.org.uk> wrote:
>> >
>> > What's the OS you are running on?  What kind of systems?  Memory stats, network stats, JVM stats, etc.?  How much data is coming through?
>> >
>> > On 10 August 2018 at 16:06, Joe Gresock <jgresock@gmail.com> wrote:
>> >>
>> >> Any nifi developers on this list that have any suggestions?
>> >>
>> >> On Wed, Aug 8, 2018 at 7:38 AM Joe Gresock <jgresock@gmail.com> wrote:
>> >>>
>> >>> I am running a 7-node NiFi 1.6.0 cluster that performs fairly well when it's simply processing its own data (putting records in Elasticsearch, MongoDB, running transforms, etc.).  However, when we add incoming Site-to-Site traffic to the mix, the CPU spikes to the point that the nodes can't talk to each other, resulting in the inability to view or modify the flow in the console.
>> >>>
>> >>> I have tried some basic things to mitigate this:
>> >>> - Requested that the sending party use a comma-separated list of all 7 of our nodes in the Remote Process Group that points to our cluster (format sketched below), in hopes that this will help balance the requests
>> >>> - Requested that the sending party use some of the batching settings on the Remote Port (i.e., Count = 20, Size = 100 MB, Duration = 10 sec)
>> >>> - Reduced the thread count on our Input Port to 2
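>> >>>
>> >>> The Remote Process Group URL list we asked them to use looks roughly like the following (hostnames and port are placeholders):
>> >>>     http://node1.example.com:8080/nifi,http://node2.example.com:8080/nifi,...,http://node7.example.com:8080/nifi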
>> >>>
>> >>> Are there any known nifi.properties that can be set to help mitigate this problem?  Again, it only seems to be a problem when we are both receiving site-to-site traffic and doing our normal processing, but each of those activities in isolation seems to be okay.
>> >>>
>> >>> Thanks,
>> >>> Joe
>> >>
>> >>
>> >>
>> >> --
>> >> I know what it is to be in need, and I know what it is to have plenty.  I have learned the secret of being content in any and every situation, whether well fed or hungry, whether living in plenty or in want.  I can do all this through him who gives me strength.  -Philippians 4:12-13
>> >
>> >
