nifi-users mailing list archives

From Venkat Williams <>
Subject Create a ConvertCSVToJSON Processor
Date Tue, 06 Jun 2017 07:12:04 GMT

I want to contribute this processor implementation code to the Apache NiFi project.


1)  Convert CSV files to a standard/canonical JSON format

    a.  One JSON object/document per row in the input CSV

    b.  Format should encode the data as JSON fields and values

    c.  JSON field names should be the original column headers, with any
invalid characters handled properly

    d.  Values should be kept unaltered

2)  Optionally, be able to specify an expected header used to
validate/reject input CSVs

3)  Support both tab- and comma-delimited files

    a.  Auto-detect the delimiter based on the header row

    b.  Allow the operator to specify the delimiter as a way to override the
auto-detect logic
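The auto-detect in 3a can be as simple as counting candidate delimiters in the header row. A minimal sketch (the class and method names are mine, not from the actual processor code):

```java
public class DelimiterDetector {

    /** Guess the delimiter by counting tabs vs. commas in the header row.
     *  Ties fall back to comma, the more common CSV delimiter. */
    public static char detect(String headerRow) {
        long tabs = headerRow.chars().filter(c -> c == '\t').count();
        long commas = headerRow.chars().filter(c -> c == ',').count();
        return tabs > commas ? '\t' : ',';
    }
}
```

An operator-supplied delimiter property (requirement 3b) would simply bypass this method.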

4)  Handle arbitrarily large files

    a.  Should handle CSV files of any length (achieved by emitting records
as they are processed rather than buffering the whole file)

5)  Handle errors gracefully

    a.  File failures

    b.  Row failures

6)  Support RFC-4180 <> formatted CSV files, handling edge cases such as
embedded newlines in a field value and escaped double quotes
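The RFC-4180 edge cases in point 6 come down to quote handling: a quoted field may contain embedded newlines, and a doubled quote ("") inside a quoted field means a literal quote character. A stdlib-only sketch of a record reader that handles both (this is an illustration, not the processor's actual parser, which is based on a CSV library):

```java
import java.io.IOException;
import java.io.PushbackReader;
import java.util.ArrayList;
import java.util.List;

public class Rfc4180Reader {

    /** Reads one CSV record (which may span multiple physical lines
     *  when fields are quoted); returns null at end of input. */
    public static List<String> readRecord(PushbackReader in) throws IOException {
        int first = in.read();
        if (first == -1) return null;
        in.unread(first);

        List<String> fields = new ArrayList<>();
        StringBuilder field = new StringBuilder();
        boolean inQuotes = false;
        int c;
        while ((c = in.read()) != -1) {
            if (inQuotes) {
                if (c == '"') {
                    int next = in.read();
                    if (next == '"') {
                        field.append('"');          // "" -> literal double quote
                    } else {
                        inQuotes = false;           // closing quote
                        if (next != -1) in.unread(next);
                    }
                } else {
                    field.append((char) c);         // may be an embedded newline
                }
            } else if (c == '"') {
                inQuotes = true;                    // opening quote
            } else if (c == ',') {
                fields.add(field.toString());       // field separator
                field.setLength(0);
            } else if (c == '\n') {
                break;                              // end of record
            } else if (c != '\r') {
                field.append((char) c);
            }
        }
        fields.add(field.toString());
        return fields;
    }
}
```

A naive line-by-line split on commas would break on both edge cases; that is exactly why a proper RFC-4180 parser is needed here.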


Input CSV:



Desired output JSON:
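(The original sample input and output did not survive the archive, so here is a hypothetical pair together with a sketch of the header-to-field-name mapping from requirement 1c. The sanitization policy shown, replacing non-alphanumeric characters with underscores, is my assumption, not necessarily what the processor does.)

```java
public class JsonFieldNames {

    /** Hypothetical sanitizer for requirement 1c: replace characters
     *  that are awkward in JSON field names. This policy is an
     *  assumption, not taken from the actual processor. */
    public static String sanitize(String header) {
        return header.trim().replaceAll("[^A-Za-z0-9_]", "_");
    }

    public static void main(String[] args) {
        // Hypothetical input CSV:
        //   First Name,Last Name,E-mail
        //   Venkat,Williams,venkat@example.com
        String[] headers = {"First Name", "Last Name", "E-mail"};
        String[] row = {"Venkat", "Williams", "venkat@example.com"};

        // Build one JSON object per row (requirement 1a); values are kept
        // unaltered (1d). A real implementation must also JSON-escape values.
        StringBuilder json = new StringBuilder("{");
        for (int i = 0; i < headers.length; i++) {
            if (i > 0) json.append(",");
            json.append("\"").append(sanitize(headers[i]))
                .append("\":\"").append(row[i]).append("\"");
        }
        json.append("}");
        System.out.println(json);
        // → {"First_Name":"Venkat","Last_Name":"Williams","E_mail":"venkat@example.com"}
    }
}
```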



Implementation notes:

1)      Reviewed the existing CSV libraries that can transform CSV records
into JSON documents while supporting the RFC-4180 <> standard (embedded
newlines in field values, escaped quotes). Found that the OpenCSV, FastCSV,
and Univocity libraries can do this job most effectively.

2)      Selected the Univocity CSV library, since it can perform most of the
validations in my requirements on its own. In performance testing with
arbitrarily large 5 GB and 10 GB files, it gave better results than any of
the others.

3)      Processed CSV records are emitted immediately rather than waiting
for the complete file to be processed. A configurable number in the
processor controls how many records accumulate before each emission. With
this approach I could process 5 GB of CSV data using 1 GB of NiFi RAM,
which is the most attractive feature of this implementation for handling
large files. (This is a common limitation in processors like SplitText and
SplitXML, which wait until the whole file is processed and store the
resulting FlowFiles in an ArrayList inside the processor, causing heap
size/out-of-memory problems.)
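The batching idea in point 3 can be sketched as follows: read records one at a time and hand off a batch every N records, so memory use stays bounded regardless of file size. The names and the use of readLine are illustrative (the real processor parses full RFC-4180 records, not physical lines, and transfers FlowFiles rather than lists):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BatchingEmitter {

    /** Streams records and hands off a batch every batchSize rows,
     *  so the whole file is never held in memory at once. */
    public static int emitInBatches(BufferedReader in, int batchSize,
                                    Consumer<List<String>> emit) throws IOException {
        List<String> batch = new ArrayList<>(batchSize);
        int total = 0;
        String line;
        while ((line = in.readLine()) != null) {
            batch.add(line);
            total++;
            if (batch.size() == batchSize) {
                emit.accept(batch);                 // e.g. transfer downstream
                batch = new ArrayList<>(batchSize); // start a fresh batch
            }
        }
        if (!batch.isEmpty()) emit.accept(batch);   // flush the final partial batch
        return total;
    }
}
```

Because only one batch is live at a time, heap usage is proportional to the batch size, not the file size, which is what makes the 5 GB file / 1 GB heap result possible.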

4)      Handled file errors and record errors gracefully using user-defined
configuration and processor routes.

Can anyone suggest how to proceed: should I open a new issue, or is there
an existing issue I should use? (I couldn't find one that matches this
requirement.)
