spark-issues mailing list archives

From "Yin Huai (JIRA)" <>
Subject [jira] [Resolved] (SPARK-15280) Extract ORC serialization logic from OrcOutputWriter for reusability
Date Sat, 21 May 2016 23:09:13 GMT


Yin Huai resolved SPARK-15280.
       Resolution: Fixed
    Fix Version/s: 2.0.0

Issue resolved by pull request 13066

>  Extract ORC serialization logic from OrcOutputWriter for reusability
> ---------------------------------------------------------------------
>                 Key: SPARK-15280
>                 URL:
>             Project: Spark
>          Issue Type: Improvement
>          Components: Input/Output
>            Reporter: Ergin Seyfe
>            Priority: Minor
>             Fix For: 2.0.0
> Summary:
> This is a proposal to move the ORC serialization logic from OrcOutputWriter into a new public
> class (OrcSerializer) that can be reused to serialize an InternalRow into a Writable object,
> so it can be written to an ORC file via RecordWriter.
> Details:
> Since Spark doesn't support SMB (sort-merge-bucket) join yet, we would like to do the SMB join
> on the Spark application side. Using DataFrames for reading and writing ORC files is easier, but
> we also wanted to parallelize the work so we can have one task per Hive bucket. This approach
> didn't work because nested RDDs are not supported (a DataFrame cannot be created or read on an
> executor).
> The workaround is to create an ORC reader and writer, rather than a DataFrame, on each executor.
> For reading an ORC file, OrcFile.createReader works fine. For writing an ORC file,
> OrcOutputFormat().getRecordWriter would do the trick. However, the missing piece is the
> serialization of an InternalRow into a Writable object. In order to reuse that serialization
> logic, I am proposing to split OrcOutputWriter into OrcOutputWriter and OrcSerializer (public),
> so the serialization logic can be reused.
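To illustrate the proposal, here is a rough sketch of how the extracted class might be used directly on an executor. Note that OrcSerializer, its constructor, and its serialize method are part of this proposal and not an existing Spark API, so the exact signatures below are assumptions; the Hive calls (OrcOutputFormat.getRecordWriter, RecordWriter.write) are the standard Hive ORC interfaces mentioned above.

```scala
// Hypothetical usage sketch of the proposed split. OrcSerializer is the
// class this ticket proposes; its signature here is illustrative only.
import org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
import org.apache.hadoop.io.NullWritable
import org.apache.hadoop.mapred.{JobConf, Reporter}
import org.apache.hadoop.fs.Path
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.types.StructType

def writeBucket(rows: Iterator[InternalRow],
                schema: StructType,
                path: Path,
                conf: JobConf): Unit = {
  // Proposed public class: converts an InternalRow into a Writable
  // that the ORC RecordWriter can consume.
  val serializer = new OrcSerializer(schema, conf)

  // Plain Hive ORC writer, created directly on the executor:
  // no DataFrame involved, so no nested-RDD limitation.
  val writer = new OrcOutputFormat().getRecordWriter(
    path.getFileSystem(conf), conf, path.toString, Reporter.NULL)

  rows.foreach { row =>
    writer.write(NullWritable.get(), serializer.serialize(row))
  }
  writer.close(Reporter.NULL)
}
```

With the serialization step factored out this way, one such writer can run per Hive bucket, which is exactly the parallelism the ticket is after.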

This message was sent by Atlassian JIRA

