flink-issues mailing list archives

From tzulitai <...@git.apache.org>
Subject [GitHub] flink pull request #3054: [Flink 5404] Consolidate and update S3 documentati...
Date Thu, 05 Jan 2017 10:43:03 GMT
Github user tzulitai commented on a diff in the pull request:

    --- Diff: docs/dev/batch/connectors.md ---
    @@ -52,33 +52,13 @@ interface. There are Hadoop `FileSystem` implementations for
     In order to use a Hadoop file system with Flink, make sure that
    -- the `flink-conf.yaml` has set the `fs.hdfs.hadoopconf` property set to the Hadoop configuration directory
    -- the Hadoop configuration (in that directory) has an entry for the required file system. Examples for S3 and Alluxio are shown below.
    -- the required classes for using the file system are available in the `lib/` folder of the Flink installation (on all machines running Flink). If putting the files into the directory is not possible, Flink is also respecting the `HADOOP_CLASSPATH` environment variable to add Hadoop jar files to the classpath.
    +- the `flink-conf.yaml` has set the `fs.hdfs.hadoopconf` property to the Hadoop configuration directory. For automated testing or running from an IDE the directory containing `flink-conf.yaml` can be set by defining the FLINK_CONF_DIR environment variable.
    --- End diff --
    Add \` around FLINK_CONF_DIR ==> `FLINK_CONF_DIR`
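For context, the setup the diff describes can be sketched as follows. This is a hypothetical illustration, not part of the PR: the temp-directory scaffolding and the `/etc/hadoop/conf` path are example values only.

```shell
# Hypothetical sketch of the configuration described in the diff above.
# All paths are examples, not taken from the PR.
conf_dir=$(mktemp -d)

# flink-conf.yaml points Flink at the Hadoop configuration directory
# via the fs.hdfs.hadoopconf property.
cat > "$conf_dir/flink-conf.yaml" <<'EOF'
fs.hdfs.hadoopconf: /etc/hadoop/conf
EOF

# For automated testing or IDE runs, FLINK_CONF_DIR tells Flink where
# the directory containing flink-conf.yaml is located.
export FLINK_CONF_DIR="$conf_dir"

grep 'fs.hdfs.hadoopconf' "$FLINK_CONF_DIR/flink-conf.yaml"
```

If the Hadoop jars cannot be copied into Flink's `lib/` folder, `HADOOP_CLASSPATH` can be exported instead, as the removed paragraph in the diff notes.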

