spark-issues mailing list archives

From "Steve Loughran (JIRA)" <>
Subject [jira] [Commented] (SPARK-25126) avoid creating OrcFile.Reader for all orc files
Date Wed, 22 Aug 2018 20:34:00 GMT


Steve Loughran commented on SPARK-25126:

+ [~dongjoon]

> avoid creating OrcFile.Reader for all orc files
> -----------------------------------------------
>                 Key: SPARK-25126
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: Input/Output
>    Affects Versions: 2.3.1
>            Reporter: Rao Fu
>            Priority: Minor
> We have a Spark job that starts by reading ORC files under an S3 directory, and we
noticed that the job consumes a lot of memory when both the number of ORC files and their
sizes are large. The memory bloat went away with the following workaround.
> 1) Create a Dataset&lt;Row&gt; from a single ORC file.
> Dataset&lt;Row&gt; rowsForFirstFile ="orc").load(oneFile);
> 2) When creating the Dataset&lt;Row&gt; from all files under the directory, use the schema
from the previous Dataset.
> Dataset&lt;Row&gt; rows ="orc").schema(rowsForFirstFile.schema()).load(path);
> I believe the issue is due to the fact that, in order to infer the schema, a FileReader is
created for each ORC file under the directory, although only the first one is used. Creating
a FileReader loads the ORC file's metadata, so memory consumption is very high when there
are many files under the directory.
> The issue exists in both 2.0 and HEAD.
> In 2.0, OrcFileOperator.readSchema is used.
> In HEAD, OrcUtils.readSchema is used.
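The pattern behind the workaround above — infer the schema from one file instead of opening a reader per file — can be sketched without Spark at all. The following is a stdlib Python analogy, not Spark or ORC code: CSV headers stand in for ORC schemas, and the file layout and helper names (`write_sample_files`, `infer_schema_all_files`, `infer_schema_first_file`) are invented for illustration.

```python
import csv
import os
import tempfile

def write_sample_files(directory, n_files=5):
    """Create several CSV files sharing one header row (stand-ins for ORC files)."""
    header = ["id", "name", "score"]
    for i in range(n_files):
        path = os.path.join(directory, f"part-{i}.csv")
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(header)
            writer.writerow([i, f"row{i}", i * 10])
    return header

def infer_schema_all_files(directory):
    """Naive approach: open every file and read its header, even though only the
    first result is used (analogous to creating an OrcFile.Reader per file)."""
    schemas = []
    for name in sorted(os.listdir(directory)):
        with open(os.path.join(directory, name), newline="") as f:
            schemas.append(next(csv.reader(f)))
    return schemas[0]

def infer_schema_first_file(directory):
    """Workaround: read the header from a single file and reuse it for the rest."""
    first = sorted(os.listdir(directory))[0]
    with open(os.path.join(directory, first), newline="") as f:
        return next(csv.reader(f))

with tempfile.TemporaryDirectory() as d:
    expected = write_sample_files(d)
    # Both approaches yield the same schema, but the second opens only one file.
    assert infer_schema_all_files(d) == expected
    assert infer_schema_first_file(d) == expected
```

The point of the workaround is the same: when every file is known to share one schema, the per-file metadata reads are pure overhead, and reading one file's metadata up front avoids them entirely.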

This message was sent by Atlassian JIRA
