There is a way to obtain these malformed/rejected records. Rejection can happen not only because of a column-count mismatch but also when a value's data type does not match the type declared in the schema.
To obtain the rejected records, you can do the following:
1. Add an extra column (e.g. CorruptRecCol) of type StringType() to your schema.
2. In the DataFrame reader, set the mode option to 'PERMISSIVE' and set the columnNameOfCorruptRecord option to CorruptRecCol.
3. The CorruptRecCol column will contain the complete raw record if it is malformed/corrupted; it will be null if the record is valid. So you can use a filter (CorruptRecCol IS NOT NULL) to obtain the malformed/corrupted records.
You can use any column name to hold the invalid records; I have used CorruptRecCol just as an example.
This example is for PySpark; a similar approach works for Java/Scala as well.

On Tue, 9 Oct 2018 at 00:27, Nirav Patel [via Apache Spark User List] <ml+s1001560n33643h19@n3.nabble.com> wrote:
I am getting `RuntimeException: Malformed CSV record` while parsing a CSV record and attaching a schema at the same time. Most likely there are additional commas or JSON data in some fields which are not escaped properly. Is there a way the CSV parser can tell me which record is malformed?

This is what I am using:

    val df2 = sparkSession.read
      .option("inferSchema", true)
      .option("multiLine", true)
      .schema(headerDF.schema) // this only works without column mismatch



Shuporno Choudhury