flink-issues mailing list archives

From alpinegizmo <...@git.apache.org>
Subject [GitHub] flink pull request #4013: [FLINK-6745] [docs] Updated Table API / SQL docs: ...
Date Thu, 01 Jun 2017 11:45:40 GMT
Github user alpinegizmo commented on a diff in the pull request:

    --- Diff: docs/dev/tableApi.md ---
    @@ -25,32 +25,16 @@ specific language governing permissions and limitations
     under the License.
    -**Table API and SQL are experimental features**
    +Apache Flink features two relational APIs - the Table API and SQL - for unified stream and batch processing. The Table API is a language-integrated query API for Scala and Java that allows the composition of queries from relational operators such as selection, filter, and join in a very intuitive way. Flink's SQL support is based on [Apache Calcite](https://calcite.apache.org) which implements the SQL standard. Queries specified in either interface have the same semantics and specify the same result regardless whether the input is a batch input (DataSet) or a stream input (DataStream).
    -The Table API is a SQL-like expression language for relational stream and batch processing that can be easily embedded in Flink's DataSet and DataStream APIs (Java and Scala).
    -The Table API and SQL interface operate on a relational `Table` abstraction, which can be created from external data sources, or existing DataSets and DataStreams. With the Table API, you can apply relational operators such as selection, aggregation, and joins on `Table`s.
    +The Table API and the SQL interfaces are tightly integrated with each other as well as Flink's DataStream and DataSet APIs. You can easily switch between all APIs and libraries which build upon the APIs. For instance, you can extract patterns from a DataStream using the [CEP library]({{ site.baseurl }}/dev/libs/cep.html) and later use the Table API to analyze the patterns, or you scan, filter, and aggregate a batch table using a SQL query before running a [Gelly graph algorithm]({{ site.baseurl }}/dev/libs/gelly) on the preprocessed data.
    --- End diff --
    ... or you might scan ...
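The quoted docs describe composing queries from relational operators in the Table API and writing the equivalent query in SQL. A minimal sketch of what that might look like in Scala, assuming the Flink 1.3-era Table API (the `TableEnvironment`, `fromDataSet`, and expression-DSL names are taken from that API generation, not from this diff, and the `Orders` data is invented for illustration):

```scala
import org.apache.flink.api.scala._
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.api.scala._

object TableApiSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val tableEnv = TableEnvironment.getTableEnvironment(env)

    // Turn an existing DataSet into a Table, naming its fields.
    val orders = env.fromElements(("alice", 12), ("bob", 7), ("alice", 3))
    val table = tableEnv.fromDataSet(orders, 'customer, 'amount)

    // Compose relational operators: selection, grouping, aggregation.
    val result = table
      .filter('amount > 5)
      .groupBy('customer)
      .select('customer, 'amount.sum as 'total)

    // The same query, expressed in SQL against a registered table:
    // tableEnv.registerTable("Orders", table)
    // tableEnv.sql(
    //   "SELECT customer, SUM(amount) AS total FROM Orders WHERE amount > 5 GROUP BY customer")
  }
}
```

As the diff notes, both forms have the same semantics; running the same pipeline against a `StreamTableEnvironment` on a DataStream would specify the same result.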

