You can use the CSV reader with the delimiter set to '|' to split the data and create a DataFrame on top of the file.
As a second step, filter the DataFrame on the indicator column value (A, D, etc.) and write each subset to its table. For saving to tables you can use DataFrameWriter (not sure what your destination DB type is here).
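The two steps above could be sketched roughly like this. Note the file path, column names, and table names are my assumptions, since the sample data did not come through; adjust to your schema, and swap saveAsTable for the JDBC writer if your destination is a relational database:

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

object IndicatorSplit {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("indicator-split")
      .getOrCreate()

    // Read the pipe-delimited file into a DataFrame.
    // Column names are assumed; the indicator is the third column.
    val df = spark.read
      .option("delimiter", "|")
      .option("header", "false")
      .csv("/path/to/input.txt")
      .toDF("col1", "col2", "indicator")

    // Route each indicator value to its destination table
    // (table names copied from the question below).
    val routes = Seq("A" -> "Table1", "D" -> "TableB", "U" -> "Table3")

    routes.foreach { case (ind, table) =>
      df.filter(df("indicator") === ind)
        .write
        .mode(SaveMode.Append)
        .saveAsTable(table) // or .format("jdbc")... for an external DB
    }

    spark.stop()
  }
}
```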

You can keep the filtering part generic: instead of hard-coding which column value goes to which table, read the configuration (say, "col value | tableName" pairs) from a config file or a metadata table, and write the data to the destination accordingly.
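A minimal sketch of that config-driven variant, assuming a pipe-delimited "value|tableName" config file (a metadata table read via the JDBC reader would work the same way):

```scala
import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}

// Parse one "value|tableName" config line. Pure function, easy to test.
def parseRoute(line: String): (String, String) = {
  val Array(value, table) = line.split('|')
  (value.trim, table.trim)
}

// Read the routing config, then filter and write one subset per route.
// "indicator" is an assumed column name on the input DataFrame.
def routeByConfig(spark: SparkSession, data: DataFrame, configPath: String): Unit = {
  val routes = spark.read.textFile(configPath).collect().map(parseRoute)

  routes.foreach { case (value, table) =>
    data.filter(data("indicator") === value)
      .write
      .mode(SaveMode.Append)
      .saveAsTable(table)
  }
}
```

Adding a new indicator type then only needs a new line in the config file, with no code change.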


On Mon, Sep 30, 2019 at 8:56 AM swetha kadiyala <> wrote:
Dear friends,

I am new to Spark. Can you please help me read the text file below using Spark and Scala.

Sample data


I receive the indicator type in the third column of each row. If the indicator type is A, then I need to store that row's data in a table called Table1.
If the indicator type is D, then I have to store the data in a separate table called TableB, and likewise if the indicator type is U, I have to store those rows in a separate table called Table3.

Can anyone help me read the data row by row, split the columns, apply the condition based on the indicator type, and store the column data in the respective tables?