nifi-users mailing list archives

From Bryan Bende <>
Subject Re: Why is a DistributedMapCacheServer started on every NiFi instance?
Date Tue, 27 Feb 2018 14:37:18 GMT

In the older 0.x line of NiFi there was a way to target a controller
service to the NCM or to the nodes, so you would have chosen the NCM in
this case.

In the 1.x line the NCM was removed and as a result a controller
service can only be started on all nodes. We should probably consider
allowing a controller service to be assigned to primary node only.

In general though, why do you need to run more than one NiFi JVM per
physical server?

You would just be dividing the physical resources across two JVM
instances, rather than letting one instance use all the resources.
Assuming you have 6 physical servers and run 2 NiFi instances per
server to make 12, I'm not aware of any significant improvement that
would give you over running 1 NiFi per server for 6 total and just
configuring the heap and thread pools appropriately, but others can
chime in if I am not correct here.
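[Editor's note: the "address already in use" failure described below is ordinary TCP behavior, not anything NiFi-specific. Because a controller service in 1.x is enabled on every node, two NiFi JVMs on the same host each try to bind the DistributedMapCacheServer's configured port, and the second bind fails. A minimal sketch of that conflict, using generic sockets rather than NiFi code (the OS picks a free port here purely for the demo):]

```python
import socket

# First "NiFi instance": bind and listen on a port the OS picks for us.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))   # port 0 = let the OS choose a free port
first.listen()
port = first.getsockname()[1]

# Second "NiFi instance" on the same host: same fixed port, as both
# instances would share one DistributedMapCacheServer port setting.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
    bound = True
except OSError:
    # This is the "address already in use" (EADDRINUSE) error the
    # second NiFi JVM reports when the service is enabled cluster-wide.
    bound = False
finally:
    second.close()
    first.close()

print(bound)
```

The only workarounds that follow from this are the ones discussed in the thread: give each co-located instance a different port (which the cluster-wide service configuration in 1.x does not allow), or run a single external cache such as Redis.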



On Mon, Feb 26, 2018 at 5:37 PM, Kevin Verhoeven
<> wrote:
> I run a NiFi 1.5.0 cluster with 12 instances. I would like to use a
> DistributedMapCacheServer for the Wait/Notify processors, but I run more
> than one instance of NiFi on a server and when I enable a
> DistributedMapCacheServer, the server fails to start with an address already
> in use error. Why is a DistributedMapCacheServer started on EVERY NiFi
> instance? It is preferable to me to run one centralized cache server for my
> cluster – and a client will only point to one cache server. Is there a way
> to increment the port number or start the DistributedMapCacheServer under
> different port numbers for each NiFi instance? An alternative would be to
> use a Redis server as a cache server, but I had hoped to avoid running
> Redis. Has anyone come across this problem? Any recommendations?
> Kevin
