spark-issues mailing list archives

From "koert kuipers (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-17583) Remove unused rowSeparator variable and set auto-expanding buffer as default for maxCharsPerColumn option in CSV
Date Thu, 01 Dec 2016 04:48:58 GMT

    [ https://issues.apache.org/jira/browse/SPARK-17583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710871#comment-15710871 ]

koert kuipers commented on SPARK-17583:
---------------------------------------

I see. So you are saying that in Spark 2.0.x it fails when the multiple lines that form a record
end up in different splits? So basically it's not safe to use then. It just happened to
work in my unit test because I had tiny part files that were never split up.
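
For context, a rough spark-shell sketch of the situation being discussed: one logical CSV record whose quoted field embeds a newline, so the record spans two physical lines. The temp paths, column names, and file layout here are illustrative assumptions, not anything from the thread; the {{spark}} session is the one the shell provides.

{code:scala}
// Paste into spark-shell (the `spark` session is the one the shell provides).
import java.nio.file.Files

// One logical record, two physical lines: the quoted "comment" field embeds a newline.
val csv = "id,comment\n1,\"line1\nline2\"\n"
val dir = Files.createTempDirectory("csv-demo")
Files.write(dir.resolve("part-00000.csv"), csv.getBytes("UTF-8"))

// A line-oriented reader treats each physical line as a candidate record, so the
// quoted field can be torn apart once those lines land in different input splits,
// which is the failure mode described above.
val df = spark.read.option("header", "true").csv(dir.toString)
df.show(truncate = false)
{code}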


> Remove unused rowSeparator variable and set auto-expanding buffer as default for maxCharsPerColumn option in CSV
> ----------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-17583
>                 URL: https://issues.apache.org/jira/browse/SPARK-17583
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.0.0
>            Reporter: Hyukjin Kwon
>            Assignee: Hyukjin Kwon
>            Priority: Minor
>             Fix For: 2.1.0
>
>
> This JIRA includes several changes below:
> 1. Upgrade Univocity library from 2.1.1 to 2.2.1
> This includes some performance improvements and also enables the auto-expanding buffer for the {{maxCharsPerColumn}} option in CSV. Please refer to the [release notes|https://github.com/uniVocity/univocity-parsers/releases].
> 2. Remove the {{rowSeparator}} variable from {{CSVOptions}}
> We have this variable in [CSVOptions|https://github.com/apache/spark/blob/29952ed096fd2a0a19079933ff691671d6f00835/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVOptions.scala#L127], but it can cause confusion because it does not actually handle {{\r\n}}. For example, there is an open issue, SPARK-17227, describing this variable.
> This option is virtually unused because we rely on Hadoop's {{LineRecordReader}}, which already handles both {{\n}} and {{\r\n}}.
> 3. Set the default value of {{maxCharsPerColumn}} to auto-expanding
> We are currently setting 1000000 as the maximum length of each column. It'd be more sensible to allow an auto-expanding buffer rather than a fixed length by default.
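
A rough spark-shell sketch tying the quoted points together: a file with {{\r\n}} line endings is read without any row-separator option, and {{maxCharsPerColumn}} controls the per-column buffer, with -1 selecting Univocity's auto-expanding (unlimited) mode. The paths, column names, and value sizes are illustrative assumptions on my part, not from the ticket.

{code:scala}
// Paste into spark-shell; paths and values are illustrative.
import java.nio.file.Files

// \r\n line endings plus one very long column value.
val longValue = "x" * 2000000
val csv = s"id,payload\r\n1,$longValue\r\n"
val dir = Files.createTempDirectory("csv-options")
Files.write(dir.resolve("data.csv"), csv.getBytes("UTF-8"))

// No row-separator option is needed: Hadoop's LineRecordReader already splits on
// both \n and \r\n. With the old fixed limit of 1,000,000 chars the long column
// would overflow; -1 asks Univocity for the auto-expanding (unlimited) buffer.
val df = spark.read
  .option("header", "true")
  .option("maxCharsPerColumn", "-1")
  .csv(dir.resolve("data.csv").toString)

df.selectExpr("id", "length(payload) AS payload_length").show()
{code}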



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

