flink-issues mailing list archives

From GitBox <...@apache.org>
Subject [GitHub] [flink-web] klion26 commented on a change in pull request #247: [FLINK-13683] Translate "Code Style - Component Guide" page into Chinese
Date Mon, 01 Jun 2020 04:48:17 GMT

klion26 commented on a change in pull request #247:
URL: https://github.com/apache/flink-web/pull/247#discussion_r432808804



##########
File path: contributing/code-style-and-quality-components.zh.md
##########
@@ -9,24 +9,24 @@ title:  "Apache Flink Code Style and Quality Guide  — Components"
 
 
 
-## Component Specific Guidelines
+## 组件特定指南
 
-_Additional guidelines about changes in specific components._
+_关于特定组件更改的附加指南。_
 
 
-### Configuration Changes
+### 配置更改
 
-Where should the config option go?
+配置选项应该放在哪里?
 
-* <span style="text-decoration:underline;">‘flink-conf.yaml’:</span> All
configuration that pertains to execution behavior that one may want to standardize across
jobs. Think of it as parameters someone would set wearing an “ops” hat, or someone that
provides a stream processing platform to other teams.
+* <span style="text-decoration:underline;">‘flink-conf.yaml’:</span> 所有属于可能要跨作业标准化的执行行为配置。可以将其想像成
Ops 的工作人员,或为其他团队提供流处理平台的设置参数。

Review comment:
       ```suggestion
   * <span style="text-decoration:underline;">‘flink-conf.yaml’:</span> 所有属于可能要跨作业标准化的执行行为配置。可以将其想像成 Ops 的工作人员,或为其他团队提供流处理平台的设置参数。
   ```
   Please apply the same change to the following sentences as well.
   2. The sentence `可以将其想像成 Ops 的工作人员,或为其他团队提供流处理平台的设置参数` reads a bit awkwardly: the first half asks us to imagine `工作人员` (staff), while the second half reads as `设置参数` (parameters)?
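
For context on this hunk: flink-conf.yaml holds flat `key: value` entries that an ops team standardizes across jobs. A minimal, hypothetical sketch of reading such a file (this is not Flink's actual `GlobalConfiguration` loader; the class and method here are illustrative only):

```java
import java.util.HashMap;
import java.util.Map;

public class FlinkConfDemo {
    // Hypothetical parser for flink-conf.yaml style "key: value" lines.
    // Comment lines start with '#'; keys and values are trimmed.
    static Map<String, String> parse(String contents) {
        Map<String, String> conf = new HashMap<>();
        for (String line : contents.split("\n")) {
            int colon = line.indexOf(':');
            if (!line.startsWith("#") && colon > 0) {
                conf.put(line.substring(0, colon).trim(),
                         line.substring(colon + 1).trim());
            }
        }
        return conf;
    }

    public static void main(String[] args) {
        // Keys below are real Flink option names, used purely as sample data.
        Map<String, String> conf = parse(
            "# ops-level defaults shared across jobs\n"
            + "taskmanager.numberOfTaskSlots: 4\n"
            + "parallelism.default: 2\n");
        System.out.println(conf.get("parallelism.default")); // 2
    }
}
```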

##########
File path: contributing/code-style-and-quality-components.zh.md
##########
@@ -48,96 +48,95 @@ How to name config keys:
   }
   ```
 
-* The resulting config keys should hence be:
+* 因此生成的配置键应该:
 
-  **NOT** `"taskmanager.detailed.network.metrics"`
+  **不是** `"taskmanager.detailed.network.metrics"`
 
-  **But rather** `"taskmanager.network.detailed-metrics"`
+  **而是** `"taskmanager.network.detailed-metrics"`
 
 
-### Connectors
+### 连接器
 
-Connectors are historically hard to implement and need to deal with many aspects of threading,
concurrency, and checkpointing.
+连接器历来很难实现,需要处理多线程、并发和检查点的许多方面。

Review comment:
       Would `等许多方面` ("many aspects such as ...") read better here?
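
The config-key naming guideline quoted in the hunk above (hierarchical, JSON-style keys, so `taskmanager.network.detailed-metrics` rather than `taskmanager.detailed.network.metrics`) can be sketched as follows; the helper is hypothetical, not a Flink API:

```java
import java.util.List;

public class ConfigKeyDemo {
    // Hypothetical helper: flatten a nested-object path into a config key.
    // Think of the configuration as nested JSON:
    //   taskmanager: { network: { detailed-metrics: ... } }
    static String key(List<String> path) {
        return String.join(".", path);
    }

    public static void main(String[] args) {
        // "detailed" describes the metrics, not the network, so it belongs
        // in the innermost segment rather than as its own hierarchy level:
        String good = key(List.of("taskmanager", "network", "detailed-metrics"));
        String bad  = key(List.of("taskmanager", "detailed", "network", "metrics"));
        System.out.println(good); // taskmanager.network.detailed-metrics
        System.out.println(bad);  // taskmanager.detailed.network.metrics (avoid)
    }
}
```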

##########
File path: contributing/code-style-and-quality-components.zh.md
##########
@@ -48,96 +48,95 @@ How to name config keys:
 
-As part of [FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface)
we are working on making this much simpler for sources. New sources should not have to deal
with any aspect of concurrency/threading and checkpointing any more.
+作为 [FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface)
的一部分,我们正在努力让这些 source 更加简单。新的 source 应该不必处理并发/线程和检查点的任何方面。
 
-A similar FLIP can be expected for sinks in the near future.
+预计在不久的将来,会有类似针对 sink 的 FLIP。
 
 
-### Examples
+### 示例
 
-Examples should be self-contained and not require systems other than Flink to run. Except
for examples that show how to use specific connectors, like the Kafka connector. Sources/sinks
that are ok to use are `StreamExecutionEnvironment.socketTextStream`, which should not be
used in production but is quite handy for exploring how things work, and file-based sources/sinks.
(For streaming, there is the continuous file source)
+示例应该是自包含的,不需要运行 Flink 以外的系统。除了显示如何使用具体的连接器的示例,比如
Kafka 连接器。source/sink 可以使用 `StreamExecutionEnvironment.socketTextStream`,这个不应该在生产中使用,但对于研究示例如何运行是相当方便的,以及基于文件的
source/sink。(对于流,有连续的文件 source)
+示例也不应该是纯粹的玩具示例,而是在现实世界的代码和纯粹的抽象示例之间取得平衡。WordCount
示例到现在已经很久了,但它是一个很好的功能突出并可以做有用事情的简单代码示例。
 
-Examples should also not be pure toy-examples but strike a balance between real-world code
and purely abstract examples. The WordCount example is quite long in the tooth by now but
it’s a good showcase of simple code that highlights functionality and can do useful things.
+示例中应该有不少的注释。他们可以在类级 Javadoc 中描述示例的总体思路,并且描述正在发生什么和整个代码里使用了什么功能。还应描述预期的输入数据和输出数据。
 
-Examples should also be heavy in comments. They should describe the general idea of the example
in the class-level Javadoc and describe what is happening and what functionality is used throughout
the code. The expected input data and output data should also be described.
+示例应该包括参数解析,以便你可以运行一个示例(使用 `bin/flink run
path/to/myExample.jar --param1 … --param2` 运行程序)。
 
-Examples should include parameter parsing, so that you can run an example (from the Jar that
is created for each example using `bin/flink run path/to/myExample.jar --param1 … --param2`.
 
+### 表和 SQL API
 
-### Table & SQL API
 
+#### 语义
 
-#### Semantics
+**SQL 标准应该是事实的主要来源。**
 
-**The SQL standard should be the main source of truth.**
+* 语法、语义和功能应该和 SQL 保持一致!
+* 我们不需要重造轮子。大部分问题都已经在业界广泛讨论过并写在
SQL 标准中了。
+* 我们依靠最新的标准(在写这篇文档时使用  SQL:2016 or ISO/IEC 9075:2016
 [[下载]](https://standards.iso.org/ittf/PubliclyAvailableStandards/c065143_ISO_IEC_TR_19075-5_2016.zip))。并非每个部分都可在线获取,但快速网络搜索可能对此有所帮助。
 
-* Syntax, semantics, and features should be aligned with SQL!
-* We don’t need to reinvent the wheel. Most problems have already been discussed industry-wide
and written down in the SQL standard.
-* We rely on the newest standard (SQL:2016 or ISO/IEC 9075:2016 when writing this document
[[download]](https://standards.iso.org/ittf/PubliclyAvailableStandards/c065143_ISO_IEC_TR_19075-5_2016.zip)
). Not every part is available online but a quick web search might help here.
+讨论与标准或厂商特定解释的差异。
 
-Discuss divergence from the standard or vendor-specific interpretations.
+* 一旦定义了语法或行为就不能轻易撤销。
+* 需要扩展或解释标准的贡献需要与社区进行深入的讨论。
+* 请通过一些对 Postgres、Microsoft SQL Server、Oracle、Hive、Calcite、Beam 等其他厂商如何处理此类案例进行初步的探讨来帮助提交者。
 
-* Once a syntax or behavior is defined it cannot be undone easily.
-* Contributions that need to extent or interpret the standard need a thorough discussion
with the community.
-* Please help committers by performing some initial research about how other vendors such
as Postgres, Microsoft SQL Server, Oracle, Hive, Calcite, Beam are handling such cases.
 
+将 Table API 视为 SQL 和 Java/Scala 编程世界之间的桥梁。
 
-Consider the Table API as a bridge between the SQL and Java/Scala programming world.
+* Table API 是一种嵌入式域特定语言,用于遵循关系模型的分析程序。
+在语法和名称方面不需要严格遵循 SQL 标准,但如果这有助于使其感觉更直观,那么可以更接近编程语言的方式/命名函数和功能。
+* Table API 可能有一些非 SQL 功能(例如 map()、flatMap() 等),但还是应该“感觉像
SQL”。如果可能,函数和算子应该有相等的语义和命名。
 
-* The Table API is an Embedded Domain Specific Language for analytical programs following
the relational model.
-It is not required to strictly follow the SQL standard in regards of syntax and names, but
can be closer to the way a programming language would do/name functions and features, if that
helps make it feel more intuitive.
-* The Table API might have some non-SQL features (e.g. map(), flatMap(), etc.) but should
nevertheless “feel like SQL”. Functions and operations should have equal semantics and
naming if possible.
 
+#### 常见错误
 
-#### Common mistakes
+* 添加功能时支持 SQL 的类型系统。
+    * SQL 函数、连接器或格式化从一开始就应该原生的支持大多数 SQL
类型。
+    * 不支持的类型会导致混淆,限制可用性,并通过多次接触相同代码路径产生开销。
+    * 例如,当添加 `SHIFT_LEFT` 函数时,确保贡献足够通用,不仅适用于
`INT` 也适用于 `BIGINT` 或 `TINYINT`。
 
-* Support SQL’s type system when adding a feature.
-    * A SQL function, connector, or format should natively support most SQL types from the
very beginning.
-    * Unsupported types lead to confusion, limit the usability, and create overhead by touching
the same code paths multiple times.
-    * For example, when adding a `SHIFT_LEFT` function, make sure that the contribution is
general enough not only for `INT` but also `BIGINT` or `TINYINT`.
 
+#### 测试
 
-#### Testing
+测试为空性
 
-Test for nullability.
+* 几乎每个操作,SQL 都原生支持 `NULL`,并具有 3 值布尔逻辑。
+* 也确保测试每个功能的可空性.
 
-* SQL natively supports `NULL` for almost every operation and has a 3-valued boolean logic.
-* Make sure to test every feature for nullability as well.
 
+尽量避免集成测试
 
-Avoid full integration tests
+* 生成 Flink 迷你集群并为 SQL 查询执行生成代码的编译是昂贵的。
+* 避免对计划测试或 API 调用的变更进行集成测试。
+* 相反,使用单元测试验证计划器的优化计划。或者直接测试运行时的算子行为。

Review comment:
       Would `算子的运行时行为` read better than `运行时的算子行为`?
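
The `SHIFT_LEFT` guideline quoted in this hunk (support `INT`, `BIGINT`, and `TINYINT` from the start, not just one width) can be sketched like this; the class is an illustration, not Flink's built-in function implementation:

```java
public class ShiftLeftDemo {
    // One implementation covering all SQL integer widths: TINYINT, SMALLINT,
    // INT, and BIGINT inputs all widen losslessly to long.
    static long shiftLeft(long value, int n) {
        return value << n;
    }

    public static void main(String[] args) {
        byte tiny = 3;  // TINYINT
        int  i    = 3;  // INT
        long big  = 3L; // BIGINT
        System.out.println(shiftLeft(tiny, 2)); // 12
        System.out.println(shiftLeft(i, 2));    // 12
        System.out.println(shiftLeft(big, 2));  // 12
    }
}
```

Supporting only `INT` would force a second pass over the same code path later, which is exactly the overhead the guideline warns about.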

##########
File path: contributing/code-style-and-quality-components.zh.md
##########
@@ -48,96 +48,95 @@ How to name config keys:
-As part of [FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface)
we are working on making this much simpler for sources. New sources should not have to deal
with any aspect of concurrency/threading and checkpointing any more.
+作为 [FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface)
的一部分,我们正在努力让这些 source 更加简单。新的 source 应该不必处理并发/线程和检查点的任何方面。

Review comment:
       I suggest translating both source and sink; see [here](https://ci.apache.org/projects/flink/flink-docs-master/getting-started/walkthroughs/table_api.html#breaking-down-the-code) for an example. You can also search the docs folder for existing translations of sink: some render it as “汇”. Usage needs to be consistent within a single article.

##########
File path: contributing/code-style-and-quality-components.zh.md
##########
@@ -48,96 +48,95 @@ How to name config keys:
-### Examples
+### 示例
 
-Examples should be self-contained and not require systems other than Flink to run. Except
for examples that show how to use specific connectors, like the Kafka connector. Sources/sinks
that are ok to use are `StreamExecutionEnvironment.socketTextStream`, which should not be
used in production but is quite handy for exploring how things work, and file-based sources/sinks.
(For streaming, there is the continuous file source)
+示例应该是自包含的,不需要运行 Flink 以外的系统。除了显示如何使用具体的连接器的示例,比如
Kafka 连接器。source/sink 可以使用 `StreamExecutionEnvironment.socketTextStream`,这个不应该在生产中使用,但对于研究示例如何运行是相当方便的,以及基于文件的
source/sink。(对于流,有连续的文件 source)

Review comment:
       The sentence `对于流,有连续的文件 source` reads oddly. The intended meaning is that for streaming, Flink provides a continuous file source for reading data; see [here](https://ci.apache.org/projects/flink/flink-docs-stable/dev/datastream_api.html#data-sources).
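
The quoted page also asks that examples include parameter parsing so they can be launched with `bin/flink run path/to/myExample.jar --param1 … --param2`. A minimal, hypothetical sketch of such parsing (Flink's own examples use its `ParameterTool` utility; this dependency-free version only illustrates the idea):

```java
import java.util.HashMap;
import java.util.Map;

public class ArgParseDemo {
    // Hypothetical helper: read "--key value" pairs from the command line.
    static Map<String, String> parse(String[] args) {
        Map<String, String> params = new HashMap<>();
        for (int i = 0; i + 1 < args.length; i += 2) {
            if (args[i].startsWith("--")) {
                params.put(args[i].substring(2), args[i + 1]);
            }
        }
        return params;
    }

    public static void main(String[] args) {
        Map<String, String> p =
            parse(new String[] {"--input", "in.txt", "--port", "9999"});
        System.out.println(p.get("input")); // in.txt
        System.out.println(p.get("port"));  // 9999
    }
}
```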

##########
File path: contributing/code-style-and-quality-components.zh.md
##########
@@ -48,96 +48,95 @@ How to name config keys:
+尽量避免集成测试
 
-Avoid full integration tests
+* 生成 Flink 迷你集群并为 SQL 查询执行生成代码的编译是昂贵的。

Review comment:
       The sentence `生成 Flink 迷你集群并为 SQL 查询执行生成代码的编译是昂贵的` reads awkwardly. The intended meaning is that spinning up a (mini) cluster, and compiling the "generated code" for SQL query execution, is time-consuming.

##########
File path: contributing/code-style-and-quality-components.zh.md
##########
@@ -48,96 +48,95 @@ How to name config keys:
+#### 测试
 
-#### Testing
+测试为空性
 
-Test for nullability.
+* 几乎每个操作,SQL 都原生支持 `NULL`,并具有 3 值布尔逻辑。
+* 也确保测试每个功能的可空性.

Review comment:
       Would it read better to drop the leading “也”?
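
The nullability guideline quoted here rests on SQL's three-valued boolean logic, where `NULL` means UNKNOWN. A small sketch of SQL `AND` semantics, using `java.lang.Boolean` with `null` standing in for UNKNOWN (an illustration only, not Flink's runtime representation):

```java
public class ThreeValuedLogic {
    // SQL three-valued AND: FALSE dominates, then UNKNOWN (null), then TRUE.
    static Boolean and(Boolean a, Boolean b) {
        if (Boolean.FALSE.equals(a) || Boolean.FALSE.equals(b)) {
            return false; // FALSE AND anything = FALSE, even FALSE AND NULL
        }
        if (a == null || b == null) {
            return null;  // TRUE AND NULL = NULL (UNKNOWN)
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(and(true, null));  // null
        System.out.println(and(false, null)); // false
        System.out.println(and(true, true));  // true
    }
}
```

This is why testing each feature only with non-null inputs misses an entire branch of the semantics.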

##########
File path: contributing/code-style-and-quality-components.zh.md
##########
@@ -48,96 +48,95 @@ How to name config keys:
+尽量避免集成测试
 
-Avoid full integration tests
+* 生成 Flink 迷你集群并为 SQL 查询执行生成代码的编译是昂贵的。
+* 避免对计划测试或 API 调用的变更进行集成测试。

Review comment:
       Could `planner` be left untranslated? We could ask @wuchong for advice on this.
   I see that [this article](https://zhuanlan.zhihu.com/p/132652932) leaves it untranslated.

##########
File path: contributing/code-style-and-quality-components.zh.md
##########
@@ -9,24 +9,24 @@ title:  "Apache Flink Code Style and Quality Guide  — Components"
 
 
 
-## Component Specific Guidelines
+## 组件特定指南
 
-_Additional guidelines about changes in specific components._
+_关于特定组件更改的附加指南。_
 
 
-### Configuration Changes
+### 配置更改
 
-Where should the config option go?
+配置选项应该放在哪里?
 
-* <span style="text-decoration:underline;">‘flink-conf.yaml’:</span> All
configuration that pertains to execution behavior that one may want to standardize across
jobs. Think of it as parameters someone would set wearing an “ops” hat, or someone that
provides a stream processing platform to other teams.
+* <span style="text-decoration:underline;">‘flink-conf.yaml’:</span> 所有属于可能要跨作业标准化的执行行为配置。可以将其想像成
Ops 的工作人员,或为其他团队提供流处理平台的设置参数。
 
-* <span style="text-decoration:underline;">‘ExecutionConfig’</span>: Parameters
specific to an individual Flink application, needed by the operators during execution. Typical
examples are watermark interval, serializer parameters, object reuse.
-* <span style="text-decoration:underline;">ExecutionEnvironment (in code)</span>:
Everything that is specific to an individual Flink application and is only needed to build
program / dataflow, not needed inside the operators during execution.
+* <span style="text-decoration:underline;">‘ExecutionConfig’</span>: 执行期间算子需要特定于单个
Flink 应用程序的参数,典型的例子是水印间隔,序列化参数,对象重用。
+* <span style="text-decoration:underline;">ExecutionEnvironment (在代码里)</span>:
所有特定于单个 Flink 应用程序的东西,仅在构建程序/数据流时需要,在算子执行期间不需要。
 
-How to name config keys:
+如何命名配置键:
 
-* Config key names should be hierarchical.
-  Think of the configuration as nested objects (JSON style)
+* 配置键名应该分层级。
+  将配置视为嵌套对象(JSON 样式)

Review comment:
       I suggest merging these two lines into one; split across two lines as they are now, a space appears between “。” and “将”.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


