flink-issues mailing list archives

From GitBox <...@apache.org>
Subject [GitHub] zentol closed pull request #7326: [hotfix] [docs] Fix typos in documentation
Date Tue, 18 Dec 2018 14:39:19 GMT
zentol closed pull request #7326: [hotfix] [docs] Fix typos in documentation
URL: https://github.com/apache/flink/pull/7326
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

diff --git a/docs/dev/table/connect.md b/docs/dev/table/connect.md
index 9b466aeb149..7b164197139 100644
--- a/docs/dev/table/connect.md
+++ b/docs/dev/table/connect.md
@@ -34,7 +34,7 @@ This page describes how to declare built-in table sources and/or table sinks and
 Dependencies
 ------------
 
-The following table list all available connectors and formats. Their mutual compatibility is tagged in the corresponding sections for [table connectors](connect.html#table-connectors) and [table formats](connect.html#table-formats). The following table provides dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.
+The following tables list all available connectors and formats. Their mutual compatibility is tagged in the corresponding sections for [table connectors](connect.html#table-connectors) and [table formats](connect.html#table-formats). The following tables provide dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.
 
 {% if site.is_stable %}
 
@@ -60,7 +60,7 @@ The following table list all available connectors and formats. Their mutual comp
 
 {% else %}
 
-This table is only available for stable releases.
+These tables are only available for stable releases.
 
 {% endif %}
 
diff --git a/docs/dev/table/streaming/time_attributes.md b/docs/dev/table/streaming/time_attributes.md
index 27208fb768d..101bad68b80 100644
--- a/docs/dev/table/streaming/time_attributes.md
+++ b/docs/dev/table/streaming/time_attributes.md
@@ -30,7 +30,7 @@ Flink is able to process streaming data based on different notions of *time*.
 
 For more information about time handling in Flink, see the introduction about [Event Time and Watermarks]({{ site.baseurl }}/dev/event_time.html).
 
-This pages explains how time attributes can be defined for time-based operations in Flink's Table API & SQL.
+This page explains how time attributes can be defined for time-based operations in Flink's Table API & SQL.
 
 * This will be replaced by the TOC
 {:toc}
diff --git a/docs/ops/deployment/yarn_setup.md b/docs/ops/deployment/yarn_setup.md
index a3342d154db..3d13e2db9b3 100644
--- a/docs/ops/deployment/yarn_setup.md
+++ b/docs/ops/deployment/yarn_setup.md
@@ -324,9 +324,9 @@ This section briefly describes how Flink and YARN interact.
 
 <img src="{{ site.baseurl }}/fig/FlinkOnYarn.svg" class="img-responsive">
 
-The YARN client needs to access the Hadoop configuration to connect to the YARN resource manager and to HDFS. It determines the Hadoop configuration using the following strategy:
+The YARN client needs to access the Hadoop configuration to connect to the YARN resource manager and HDFS. It determines the Hadoop configuration using the following strategy:
 
-* Test if `YARN_CONF_DIR`, `HADOOP_CONF_DIR` or `HADOOP_CONF_PATH` are set (in that order). If one of these variables are set, they are used to read the configuration.
+* Test if `YARN_CONF_DIR`, `HADOOP_CONF_DIR` or `HADOOP_CONF_PATH` are set (in that order). If one of these variables is set, it is used to read the configuration.
 * If the above strategy fails (this should not be the case in a correct YARN setup), the client is using the `HADOOP_HOME` environment variable. If it is set, the client tries to access `$HADOOP_HOME/etc/hadoop` (Hadoop 2) and `$HADOOP_HOME/conf` (Hadoop 1).
 
 When starting a new Flink YARN session, the client first checks if the requested resources (containers and memory) are available. After that, it uploads a jar that contains Flink and the configuration to HDFS (step 1).
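
As a side note for readers of the yarn_setup.md excerpt above: the lookup strategy it describes can be summed up in a few lines of code. The following is a minimal Java sketch of that order, meant as an illustration only and not Flink's actual implementation; the class and method names are made up for this example.

import java.io.File;

// Sketch of the Hadoop configuration lookup order described above:
// YARN_CONF_DIR, HADOOP_CONF_DIR, HADOOP_CONF_PATH (in that order), then
// $HADOOP_HOME/etc/hadoop (Hadoop 2) or $HADOOP_HOME/conf (Hadoop 1).
public class HadoopConfDirResolver {

    public static File resolve() {
        // First strategy: check the explicit configuration variables, in order.
        for (String name : new String[] {"YARN_CONF_DIR", "HADOOP_CONF_DIR", "HADOOP_CONF_PATH"}) {
            String value = System.getenv(name);
            if (value != null && !value.isEmpty()) {
                return new File(value);
            }
        }

        // Fallback: derive the directory from HADOOP_HOME.
        String hadoopHome = System.getenv("HADOOP_HOME");
        if (hadoopHome != null && !hadoopHome.isEmpty()) {
            File hadoop2Conf = new File(hadoopHome, "etc/hadoop"); // Hadoop 2 layout
            File hadoop1Conf = new File(hadoopHome, "conf");       // Hadoop 1 layout
            if (hadoop2Conf.isDirectory()) {
                return hadoop2Conf;
            }
            if (hadoop1Conf.isDirectory()) {
                return hadoop1Conf;
            }
        }

        // Neither strategy worked; a correct YARN setup should not reach this point.
        return null;
    }

    public static void main(String[] args) {
        File confDir = resolve();
        System.out.println(confDir == null ? "No Hadoop configuration found" : "Hadoop conf dir: " + confDir);
    }
}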


 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services
