flink-issues mailing list archives

From GitBox <...@apache.org>
Subject [GitHub] kl0u commented on a change in pull request #7876: [FLINK-11751] Extend release notes for Flink 1.8
Date Fri, 01 Mar 2019 15:41:44 GMT
kl0u commented on a change in pull request #7876: [FLINK-11751] Extend release notes for Flink
URL: https://github.com/apache/flink/pull/7876#discussion_r261650358

 File path: docs/release-notes/flink-1.8.md
 @@ -0,0 +1,174 @@
+title: "Release Notes - Flink 1.8"
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+  http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+These release notes discuss important aspects, such as configuration, behavior, or dependencies,
+that changed between Flink 1.7 and Flink 1.8. Please read these notes carefully if you are
+planning to upgrade your Flink version to 1.8.
+* This will be replaced by the TOC
+### State
+#### Continuous incremental cleanup of old Keyed State with TTL
+We introduced TTL (time-to-live) for keyed state in Flink 1.6
+([FLINK-9510](https://issues.apache.org/jira/browse/FLINK-9510)). This feature
+allowed keyed state entries to be cleaned up, and made inaccessible, when they
+were accessed. In addition, state would now also be cleaned up when writing a
+savepoint.
+Flink 1.8 introduces continuous cleanup of old entries for both the RocksDB
+state backend
+([FLINK-10471](https://issues.apache.org/jira/browse/FLINK-10471)) and the heap
+state backend
+([FLINK-10473](https://issues.apache.org/jira/browse/FLINK-10473)). This means
+that old entries (according to the TTL setting) are continuously cleaned up.
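The incremental heap-backend cleanup can be illustrated outside of Flink. The following is a minimal, self-contained Java sketch (not Flink's actual implementation; `TtlMapSketch` and its parameters are hypothetical) of the idea: every state access additionally probes a small number of other entries and drops the expired ones, so stale state disappears over time without a full scan.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashMap;

// Hypothetical sketch of incremental TTL cleanup, not Flink's code.
class TtlMapSketch {
    static final class Entry {
        final long writeTime;
        final String value;
        Entry(long writeTime, String value) { this.writeTime = writeTime; this.value = value; }
    }

    private final LinkedHashMap<String, Entry> state = new LinkedHashMap<>();
    private final long ttlMillis;       // time-to-live of an entry
    private final int cleanupSize;      // extra entries probed per access
    private Iterator<String> cursor;    // position of the incremental sweep

    TtlMapSketch(long ttlMillis, int cleanupSize) {
        this.ttlMillis = ttlMillis;
        this.cleanupSize = cleanupSize;
    }

    void put(String key, String value, long now) {
        state.put(key, new Entry(now, value));
        incrementalCleanup(now);
    }

    String get(String key, long now) {
        incrementalCleanup(now);
        Entry e = state.get(key);
        // Expired entries are inaccessible even before they are swept.
        if (e == null || now - e.writeTime >= ttlMillis) return null;
        return e.value;
    }

    // Probe up to cleanupSize entries from a snapshot of the key set,
    // removing any whose TTL has elapsed, then remember the position.
    private void incrementalCleanup(long now) {
        if (cursor == null || !cursor.hasNext()) {
            cursor = new ArrayList<>(state.keySet()).iterator();
        }
        for (int i = 0; i < cleanupSize && cursor.hasNext(); i++) {
            String k = cursor.next();
            Entry e = state.get(k);
            if (e != null && now - e.writeTime >= ttlMillis) state.remove(k);
        }
    }

    int size() { return state.size(); }
}
```

The design choice mirrored here is that cleanup work is amortized over regular state accesses instead of requiring a dedicated scan of all entries.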
+#### New Support for Schema Migration when restoring Savepoints
+With Flink 1.7.0 we added support for changing the schema of state when using
+the `AvroSerializer`
+([FLINK-10605](https://issues.apache.org/jira/browse/FLINK-10605)). With Flink
+1.8.0 we made great progress migrating all built-in `TypeSerializers` to a new
+serializer snapshot abstraction that theoretically allows schema migration. Of
+the serializers that come with Flink, we now support schema migration for the
+`PojoSerializer`
+([FLINK-11485](https://issues.apache.org/jira/browse/FLINK-11485)), and Java
+`EnumSerializer`
+([FLINK-11334](https://issues.apache.org/jira/browse/FLINK-11334)), as well as
+for Kryo in limited cases.
+#### Savepoint compatibility
+Savepoints from Flink 1.2 that contain a Scala `TraversableSerializer` are no
+longer compatible with Flink 1.8 because of an update in this serializer.
+#### RocksDB version bump and switch to FRocksDB ([FLINK-10471](https://issues.apache.org/jira/browse/FLINK-10471))
+We needed to switch to a custom build of RocksDB called FRocksDB because we
+needed certain changes in RocksDB to support continuous state cleanup with
+TTL. The used build of FRocksDB is based on the upgraded version 5.17.2 of
+RocksDB. For Mac OS X, RocksDB version 5.17.2 is supported only for OS X
+version >= 10.13. See also: https://github.com/facebook/rocksdb/issues/4862.
+### Maven Dependencies
+#### Changes to bundling of Hadoop libraries with Flink ([FLINK-11266](https://issues.apache.org/jira/browse/FLINK-11266))
+Convenience binaries that include Hadoop are no longer released.
+If a deployment relies on `flink-shaded-hadoop2` being included in
+`flink-dist`, then it must be manually downloaded and copied into the `/lib`
+directory. Alternatively, a Flink distribution that includes Hadoop can be
+built by packaging `flink-dist` and activating the `include-hadoop` Maven
+profile.
+As Hadoop is no longer included in `flink-dist` by default, specifying
+`-DwithoutHadoop` when packaging `flink-dist` no longer impacts the build.
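As a command-line sketch (run from the Flink source root; the exact module flags are an assumption and may vary by setup), building a Hadoop-bundled distribution could look like:

```sh
# Build flink-dist with the bundled-Hadoop profile enabled.
mvn clean package -DskipTests -Pinclude-hadoop -pl flink-dist -am
```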
+### Configuration
+#### TaskManager configuration ([FLINK-11447](https://issues.apache.org/jira/browse/FLINK-11447))
+`TaskManagers` now pick the IP address to bind to when being started. The
+behaviour can be controlled by the configuration option
+`taskmanager.network.bind-policy`. If your Flink cluster experiences
+inexplicable connection problems after upgrading, try setting
+`taskmanager.network.bind-policy: name` in your `flink-conf.yaml` to return to
+the pre-1.8 behaviour.
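For example, the relevant `flink-conf.yaml` entry would be:

```yaml
# Restore pre-1.8 behaviour: bind to the address resolved from the hostname.
taskmanager.network.bind-policy: name
```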
+### Table API
+#### Deprecation of direct `Table` constructor usage ([FLINK-11447](https://issues.apache.org/jira/browse/FLINK-11447))
+Flink 1.8 deprecates direct usage of the constructor of the `Table` class in
+the Table API. This constructor would previously be used to perform a join with
+a _lateral table_. You should now use `table.joinLateral()` or
+`table.leftOuterJoinLateral()` instead.
+This change is necessary for converting the Table class into an interface,
+which will make the API more maintainable and cleaner in the future.
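As a sketch of the migration (assuming a table `orders` with a `description` field and a registered table function `split`; this fragment needs the Flink Table API on the classpath and is not runnable on its own):

```java
// Before 1.8, a lateral-table join was expressed via the now-deprecated
// Table constructor. Preferred since 1.8:
Table words = orders.joinLateral("split(description) as (word)");
Table all   = orders.leftOuterJoinLateral("split(description) as (word)");
```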
+#### Introduction of new CSV format descriptor ([FLINK-9964](https://issues.apache.org/jira/browse/FLINK-9964))
+This release introduces a new format descriptor for CSV files that is compliant
+with RFC 4180. The new descriptor is available as
+`org.apache.flink.table.descriptors.Csv`. For now, this can only be used
+together with the Kafka connector. The old descriptor is available as
+`org.apache.flink.table.descriptors.OldCsv` for use with file system
+connectors.
+#### Deprecation of static builder methods on TableEnvironment ([FLINK-11445](https://issues.apache.org/jira/browse/FLINK-11445))
+In order to separate API from actual implementation, the static methods
+`TableEnvironment.getTableEnvironment()` are deprecated. You should now use
+`Batch/StreamTableEnvironment.create()` instead.
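As a sketch (`env` is an assumed `StreamExecutionEnvironment`; this fragment needs the Flink Table API on the classpath and is not runnable on its own):

```java
// Deprecated:
// StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);
// Preferred since 1.8:
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
```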
+#### Change in the Maven modules of Table API ([FLINK-11064](https://issues.apache.org/jira/browse/FLINK-11064))
+Users that had a `flink-table` dependency before need to update their
 Review comment:
   "dependency before need to update" -> "dependency before **,** need to update"

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:

With regards,
Apache Git Services
