flink-issues mailing list archives

From fhueske <...@git.apache.org>
Subject [GitHub] flink pull request: [FLINK-1396][FLINK-1303] Hadoop Input/Output d...
Date Wed, 04 Feb 2015 20:43:32 GMT
Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/363#discussion_r24117603
  
    --- Diff: docs/hadoop_compatibility.md ---
    @@ -38,9 +39,19 @@ This document shows how to use existing Hadoop MapReduce code with
Flink. Please
     
     ### Project Configuration
     
    -The Hadoop Compatibility Layer is part of the `flink-addons` Maven module. All relevant
classes are located in the `org.apache.flink.hadoopcompatibility` package. It includes separate
packages and classes for the Hadoop `mapred` and `mapreduce` APIs.
    +Support for Hadoop input/output formats is part of the `flink-java` and
    +`flink-scala` Maven modules that are always required when writing Flink jobs.
    +The code is located in `org.apache.flink.api.java.hadoop` and
    +`org.apache.flink.api.scala.hadoop`, in additional sub-packages for the
    +`mapred` and `mapreduce` APIs.
     
    -Add the following dependency to your `pom.xml` to use the Hadoop Compatibility Layer.
    +Support for Hadoop Mappers and Reducers is contained in the `flink-addons`
    --- End diff --
    
    `flink-staging` is the new `flink-addons` ;-)
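
    For readers following along, a minimal sketch of the kind of `pom.xml` dependency the surrounding docs describe (the version is a placeholder; the exact artifact coordinates should be checked against the Flink release in use):

    ```xml
    <!-- Hypothetical example: flink-java already ships the Hadoop
         input/output format support, so no extra compatibility
         dependency is needed for formats alone. -->
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-java</artifactId>
      <version>${flink.version}</version>
    </dependency>
    ```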


