flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-1271) Extend HadoopOutputFormat and HadoopInputFormat to handle Void.class
Date Wed, 07 Jan 2015 08:55:34 GMT

    [ https://issues.apache.org/jira/browse/FLINK-1271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267429#comment-14267429 ]

ASF GitHub Bot commented on FLINK-1271:
---------------------------------------

Github user rmetzger commented on the pull request:

    https://github.com/apache/flink/pull/287#issuecomment-68994150
  
    Hi,
    
    the changes look good. Could you also apply them to the "mapred" input/output format?
    https://github.com/apache/flink/blob/master/flink-addons/flink-hadoop-compatibility/src/main/java/org/apache/flink/hadoopcompatibility/mapred/HadoopInputFormat.java#L53
    
    Also, it would be nice if you could squash your changes into one commit.
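
As an illustration of the requested change, here is a minimal, hypothetical sketch of Void-aware key creation, assuming the mapred HadoopInputFormat instantiates keys reflectively. The class and method names below are illustrative, not Flink's actual internals:

import org.apache.hadoop.util.ReflectionUtils;

// Illustrative sketch only: the real code lives in
// org.apache.flink.hadoopcompatibility.mapred.HadoopInputFormat and its
// internals may differ. The requested change is to special-case Void.class
// wherever key instances are created reflectively, because Void cannot be
// instantiated.
public final class VoidAwareKeyFactory {

    // Returns a fresh key instance, or null when the key type is Void
    // (Parquet's input formats use Void as their key type).
    public static <K> K createKey(Class<K> keyClass) {
        if (keyClass == Void.class) {
            return null;
        }
        // ReflectionUtils handles no-arg construction and Configurable setup.
        return ReflectionUtils.newInstance(keyClass, null);
    }

    private VoidAwareKeyFactory() {}
}

The same null-for-Void convention would apply symmetrically wherever the output format serializes keys.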


> Extend HadoopOutputFormat and HadoopInputFormat to handle Void.class 
> ---------------------------------------------------------------------
>
>                 Key: FLINK-1271
>                 URL: https://issues.apache.org/jira/browse/FLINK-1271
>             Project: Flink
>          Issue Type: Wish
>          Components: Hadoop Compatibility
>            Reporter: Felix Neutatz
>            Assignee: Felix Neutatz
>            Priority: Minor
>              Labels: Columnstore, HadoopInputFormat, HadoopOutputFormat, Parquet
>             Fix For: 0.8
>
>
> Parquet, one of the best-known and most efficient column-store formats in the Hadoop ecosystem, uses Void.class as its key!
> At the moment, only keys that extend Writable are allowed.
> For example, we would need to be able to do something like:
> HadoopInputFormat<Void, AminoAcid> hadoopInputFormat = new HadoopInputFormat<Void, AminoAcid>(new ParquetThriftInputFormat<AminoAcid>(), Void.class, AminoAcid.class, job);
> ParquetThriftInputFormat.addInputPath(job, new Path("newpath"));
> ParquetThriftInputFormat.setReadSupportClass(job, AminoAcid.class);
> // Create a Flink job with it
> DataSet<Tuple2<Void, AminoAcid>> data = env.createInput(hadoopInputFormat);
> Here, AminoAcid is a generated Thrift class.
> However, I figured out how to write Parquet files by creating a class that extends HadoopOutputFormat.
> Now we will have to discuss what the best approach is to make the Parquet integration happen.
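
For illustration, here is a minimal, hypothetical sketch of the output side the reporter describes, mirroring the input example above. It assumes Flink's mapreduce HadoopOutputFormat wrapper and Parquet's ParquetThriftOutputFormat; the package names and Parquet setup calls are assumptions, and AminoAcid is the reporter's Thrift-generated class:

// Hypothetical sketch only: wraps Parquet's Thrift output format (which
// also uses Void keys) in Flink's Hadoop compatibility layer. The
// ParquetThriftOutputFormat setup calls mirror the reporter's input
// example and are assumptions, not confirmed API.
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.hadoopcompatibility.mapreduce.HadoopOutputFormat;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import parquet.hadoop.thrift.ParquetThriftOutputFormat;

public class ParquetSinkSketch {

    // AminoAcid is the reporter's Thrift-generated class and is assumed
    // to be on the classpath.
    public static void writeParquet(DataSet<Tuple2<Void, AminoAcid>> data, Job job) throws Exception {
        HadoopOutputFormat<Void, AminoAcid> outputFormat =
                new HadoopOutputFormat<Void, AminoAcid>(new ParquetThriftOutputFormat<AminoAcid>(), job);
        ParquetThriftOutputFormat.setOutputPath(job, new Path("outpath"));
        ParquetThriftOutputFormat.setThriftClass(job, AminoAcid.class);
        data.output(outputFormat);
    }
}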



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
