I'm not sure I understand this proposal. MiNiFi can be installed completely separately from NiFi (and usually they are not co-located on the same machine). If your requirement is to connect an arbitrary Java program to Kafka so that its data is published to a Kafka topic, MiNiFi is a potential (but probably not ideal) tool, and its interdependency with NiFi is only relevant if you wanted to push data from MiNiFi to NiFi and then on to Kafka.
Please let me know if I am misunderstanding, but to go from your client to Kafka, I would propose the following flow:
Your code: persist or stream data somewhere (CSV/XML/JSON file, HTTP endpoint, TCP packet, DB, whatever)
MiNiFi: a processor to read from that format/location -> processor(s) to manipulate the FlowFile content and massage it into the form Kafka expects -> a PublishKafka processor
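The first step could be as simple as persisting newline-delimited JSON records to a directory that a MiNiFi file-reading processor watches. A minimal sketch (the directory name, record shape, and file-naming scheme are all just placeholders, not a prescribed convention):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.List;

public class DataWriter {

    // Persist records as newline-delimited JSON so a file-reading
    // MiNiFi processor can pick them up later.
    static Path writeRecords(Path dir, List<String> jsonRecords) throws IOException {
        Files.createDirectories(dir);
        // Write to a temporary name first, then rename atomically, so the
        // agent never picks up a half-written file.
        Path tmp = dir.resolve("records.json.tmp");
        Files.write(tmp, jsonRecords);
        Path out = dir.resolve("records-" + System.currentTimeMillis() + ".json");
        return Files.move(tmp, out, StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path out = writeRecords(Paths.get("data-out"),
                List.of("{\"id\":1,\"value\":\"a\"}", "{\"id\":2,\"value\":\"b\"}"));
        System.out.println("wrote " + Files.readAllLines(out).size() + " records");
    }
}
```

The write-then-rename step matters: file-watching processors typically list a directory on a schedule, and an atomic rename keeps them from ingesting partially written data.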
This allows you to decouple your application from the Kafka format while using MiNiFi as a "glue layer" that can be updated by changing a single processor should the source or destination change in the future.
MiNiFi would provide you with the queuing features like ordering, prioritization, and backpressure without requiring you to code that yourself.
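The flow above could be expressed as a MiNiFi (Java agent) config.yml along these lines. This is only a sketch: the processor class names, property keys, paths, broker address, and topic name below are illustrative assumptions and should be checked against the documentation for your MiNiFi release.

```yaml
# Hypothetical config.yml sketch: read files written by your application,
# publish their contents to Kafka. All names/values are placeholders.
MiNiFi Config Version: 3
Flow Controller:
  name: file-to-kafka
Processors:
  - name: ReadSourceData
    class: org.apache.nifi.processors.standard.GetFile
    scheduling strategy: TIMER_DRIVEN
    scheduling period: 10 sec
    Properties:
      Input Directory: /data/out          # where your application writes files
  - name: PublishToKafka
    class: org.apache.nifi.processors.kafka.pubsub.PublishKafka_2_0
    scheduling strategy: TIMER_DRIVEN
    scheduling period: 0 sec
    Properties:
      bootstrap.servers: localhost:9092   # assumed broker address
      topic: my-topic                     # assumed topic name
Connections:
  - name: ReadSourceData/success/PublishToKafka
    source name: ReadSourceData
    source relationship names:
      - success
    destination name: PublishToKafka
```

If the source or destination changes later, you only swap or reconfigure the corresponding processor entry; your application code is untouched.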
Hopefully this helps, and if not, at least clarifies the capabilities and uses of NiFi and MiNiFi.

Andy LoPresto
As an alternative approach, is it conceivable to write a custom processor and run it in the MiNiFi 'pipeline' without connecting back to NiFi (custom processor -> some connector with socket / buffer input -> PublishKafka)? This wouldn't hide the MiNiFi installation / runtime from the user, but it would keep the 'agent' independent of the NiFi backend.