nifi-users mailing list archives

From Luis Carmona <>
Subject Re: High CPU consumption
Date Thu, 17 Oct 2019 01:37:03 GMT
Hi Bryan,

Unfortunately I didn't keep a copy of the template; I didn't think it could be useful.

But I'm pretty sure I remember two scenarios.

The first one was an input port in a Process Group connected straight to 2 flows (no condition
in between): the first connection went to an active processor and the other to a disabled
processor leading nowhere (just something I tested and forgot to delete, but it was disabled).

The second one was quite similar but with an output port: as it came out of a process group,
it had two connections, one to an active processor and the other to a disabled processor.

What I did was stop the whole canvas and then start the Groups one by one. Once I detected
the high-consuming group, I started the processors inside it one by one, then noticed that
didn't have any further effect, so I thought it might be the port and the connections described
above. I stopped the whole thing again, deleted the useless processor and connections, and
voilà, the CPU consumption was greatly reduced.

I'll try to reproduce the scenario, and if I manage to I will send you the template.



----- Original Message -----
From: "Bryan Bende" <>
To: "users" <>
Sent: Wednesday, October 16, 2019 4:32:06 PM
Subject: Re: High CPU consumption

Hi Luis,

Can you describe the part of the flow that turned out to be a problem
a little more?

Was it a port on the root canvas used for s2s that was then connected
into a process group where everything inside was disabled?

And what did you do to solve the problem, did you stop the port?



On Wed, Oct 16, 2019 at 3:15 PM Evan Reynolds <> wrote:
> Thank you for that tip, Andy!
> This is actually a bug I've wanted to track down and fix but it's in parts of the NiFi
codebase I'm really not familiar with and wasn't sure how to start ... if you can connect
me with someone who knows that area (scheduling and clusters, mainly) I would be happy to
see if it's something I can patch!
> On Tue, Oct 15, 2019 at 2:43 PM Andy LoPresto <> wrote:
>> Evan,
>> Thanks for sharing that diagnosing technique. While ideally we would have other controls
to prevent excess CPU usage, this seems like a useful tool which could be automated using
NiPyAPI [1] to perform a “bisect” command. I’ve used this for git commit searching as
well as side-effect unit test identification.
>> [1]
>> Andy LoPresto
>> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>> On Oct 15, 2019, at 1:40 PM, Evan Reynolds <> wrote:
>> I have found two issues that can cause high CPU when idle (high being about 200%
CPU when idle.) I haven’t verified these with 1.9.2, but it doesn’t hurt to tell you.
>> If you are using ports, make sure each input port is connected. If you have one
that isn’t connected, that can bring your CPU to 200% and keep it there.
>> If you have any processors that are set to run on the primary node of a cluster,
that can also take your CPU to 200%. I know RouteOnAttribute will do that (again, haven’t
tested 1.9.2, but it was a problem for me for a bit!) The fix – either don’t run it on
the primary node, or else set the run schedule to 5 seconds or something instead of 0.
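As a hedged sketch of making that run-schedule change outside the UI: NiFi's REST API exposes `PUT /nifi-api/processors/{id}`, which requires the component's current revision. The processor id, revision version, and unsecured `localhost:8080` endpoint below are all assumptions for illustration:

```shell
# Assumed values: an unsecured NiFi at localhost:8080 and a made-up processor id.
PROC_ID="0123-example-processor-id"
# Raise the run schedule from "0 sec" to "5 sec" so the processor stops busy-polling.
PAYLOAD='{"revision":{"version":0},"component":{"id":"'"$PROC_ID"'","config":{"schedulingPeriod":"5 sec"}}}'
curl -s -X PUT "http://localhost:8080/nifi-api/processors/$PROC_ID" \
     -H 'Content-Type: application/json' \
     -d "$PAYLOAD" || echo "request failed (is NiFi running on localhost:8080?)"
```

In a real flow you would fetch the processor first (`GET /nifi-api/processors/{id}`) to read its current revision version rather than hard-coding 0.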
>> To find out if this is the case – well, this is what I did. It worked, and wasn’t
that hard, though isn’t exactly elegant.
>> Back up your flowfile (flow.xml.gz)
>> Stop all your processors and see what your CPU does
>> Start half of them and watch your CPU – basically do a binary search. If your CPU
stays reasonable, then whatever group you started is fine. If not, then start stopping things
until your CPU becomes reasonable.
>> Eventually you’ll find a processor that spikes your CPU when you start it and then
you can figure out what to do about that processor. Record which processor it is and how you
altered it to bring CPU down.
>> Repeat, as there may be more than one processor spiking CPU.
>> Stop NiFi and restore your flowfile by copying it back in place – since you were
running around stopping things, this just makes sure everything is correctly back to where
it should be
>> Then use the list of processors and fixes to make changes.
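The binary search in the steps above can be sketched as plain logic. Here `spikes_cpu` is a hypothetical callback standing in for the manual step of starting a set of processors and watching top; the sketch assumes any set containing a culprit spikes CPU:

```python
def bisect_culprits(processors, spikes_cpu):
    """Return the processors that spike CPU when started.

    processors: list of processor names (all initially stopped)
    spikes_cpu: callback taking a list of started processors and returning
                True if CPU becomes unreasonable (hypothetical; in practice
                this means starting them in the UI and watching top/htop)
    """
    culprits = []

    def search(group):
        # If this group doesn't spike CPU, no culprit is inside it.
        if not group or not spikes_cpu(group):
            return
        # A single spiking processor is a culprit.
        if len(group) == 1:
            culprits.append(group[0])
            return
        # Otherwise split in half and search both halves, so multiple
        # culprits are all found.
        mid = len(group) // 2
        search(group[:mid])
        search(group[mid:])

    search(list(processors))
    return culprits
```

With n processors and a single culprit this takes about log2(n) start/observe rounds instead of n, which is why it "worked, and wasn't that hard".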
>> ---------------------------------------------------------------
>> Evan Reynolds
>> From: Jon Logan <>
>> Reply-To: "" <>
>> Date: Sunday, October 13, 2019 at 6:12 PM
>> To: "" <>
>> Subject: Re: High CPU consumption
>> That isn't logback; that lists all the jars on your classpath, the first of which happens
to be logback.
>> If you send a SIGQUIT (kill -3; you can send it from htop) the JVM will dump the thread
stacks to stdout (probably the bootstrap log)... but that's just for one instant, and may or
may not be helpful.
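Concretely, the signal meant here is SIGQUIT (`kill -3`), which asks the JVM for a thread dump without terminating it. A minimal sketch, assuming a standard NiFi install where the JVM main class is `org.apache.nifi.NiFi`:

```shell
# Find the NiFi JVM and request a thread dump (non-destructive).
NIFI_PID=$(pgrep -f 'org.apache.nifi.NiFi' | head -n 1)
if [ -n "$NIFI_PID" ]; then
    kill -3 "$NIFI_PID"     # SIGQUIT: stacks go to stdout, i.e. nifi-bootstrap.log
    # jstack "$NIFI_PID"    # alternative: print the stacks to your terminal instead
else
    echo "no NiFi JVM found"
fi
```

Taking two or three dumps a few seconds apart helps separate threads that are genuinely busy from ones caught mid-poll by chance.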
>> On Sun, Oct 13, 2019 at 8:58 PM Luis Carmona <> wrote:
>> Hi Aldrin,
>> Thanks a lot; for now I'm trying to learn how to do the profiling you mentioned.
>> One more question: is it normal that the parent java process has very low consumption
while the child process related to the logback jar is the one that is eating up all the CPU?
>> Please take a look at the attached image.
>> Thanks,
>> LC
>> ________________________________
>> From: "Aldrin Piri" <>
>> To: "users" <>
>> Sent: Sunday, October 13, 2019 9:30:47 PM
>> Subject: Re: High CPU consumption
>> Luis, please feel free to give us some information on your flow so we can help you
track down problematic areas as well.
>> On Sun, Oct 13, 2019 at 3:56 PM Jon Logan <> wrote:
>> You should put a profiler on it to be sure.
>> Just because your processors aren't processing data doesn't mean they are idle, though
-- many have to poll for new data, especially sources -- e.g. connecting to Kafka will
itself consume some CPU.
>> But again, you should attach a profiler before embarking on a wild goose chase
of performance issues.
>> On Sun, Oct 13, 2019 at 12:20 PM Luis Carmona <> wrote:
>> Hi,
>> I've been struggling to reduce my NiFi installation's CPU consumption. Even in idle state,
with all processors running but no data flowing, it is beyond 60% CPU consumption, with peaks of
>> What I've done so far:
>> - Read and followed every instruction/post about tuning NiFi I've found googling.
>> - Verified scheduling is 1s for the most consuming processors: http processors, wait/notify,
jolt, etc.
>> - Scratched my head...
>> But nothing seems to have a major effect on the issue.
>> Can anyone give me some precise directions or tips about how to solve this, please?
>> Is this the regular situation? I mean, is this the minimum NiFi consumption?
>> The server is configured with 4 CPUs and 8 GB RAM, 4 GB of which are dedicated to heap in bootstrap.conf.
>> Thanks in advance.
>> LC
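For reference, the heap Luis mentions lives in conf/bootstrap.conf. A 4 GB heap on an 8 GB box would look like the excerpt below (the `java.arg` indices follow the default file); note that this leaves only ~4 GB for the OS, page cache, and NiFi's off-heap usage:

```properties
# conf/bootstrap.conf (excerpt) -- JVM memory settings
java.arg.2=-Xms4g
java.arg.3=-Xmx4g
```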
