flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-3474) Partial aggregate interface design and sort-based implementation
Date Fri, 04 Mar 2016 10:20:40 GMT

    [ https://issues.apache.org/jira/browse/FLINK-3474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15179677#comment-15179677 ]

ASF GitHub Bot commented on FLINK-3474:
---------------------------------------

Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1746#discussion_r55013024
  
    --- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/runtime/aggregate/AggregateUtil.scala ---
    @@ -0,0 +1,329 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one
    + * or more contributor license agreements.  See the NOTICE file
    + * distributed with this work for additional information
    + * regarding copyright ownership.  The ASF licenses this file
    + * to you under the Apache License, Version 2.0 (the
    + * "License"); you may not use this file except in compliance
    + * with the License.  You may obtain a copy of the License at
    + *
    + *     http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.flink.api.table.runtime.aggregate
    +
    +import java.util
    +
    +import org.apache.calcite.rel.`type`._
    +import org.apache.calcite.rel.core.AggregateCall
    +import org.apache.calcite.sql.SqlAggFunction
    +import org.apache.calcite.sql.`type`.SqlTypeName._
    +import org.apache.calcite.sql.`type`.{SqlTypeFactoryImpl, SqlTypeName}
    +import org.apache.calcite.sql.fun._
    +import org.apache.flink.api.common.functions.{GroupReduceFunction, MapFunction}
    +import org.apache.flink.api.common.typeinfo.TypeInformation
    +import org.apache.flink.api.table.plan.PlanGenException
    +import org.apache.flink.api.table.typeinfo.RowTypeInfo
    +import org.apache.flink.api.table.{Row, TableConfig}
    +
    +import scala.collection.JavaConversions._
    +import scala.collection.mutable.ArrayBuffer
    +
    +object AggregateUtil {
    +
    +  type CalcitePair[T, R] = org.apache.calcite.util.Pair[T, R]
    +  type JavaList[T] = java.util.List[T]
    +
    +  /**
    +   * Create Flink operator functions for aggregates. It includes two implementations of
    +   * Flink operator functions:
    +   * [[org.apache.flink.api.common.functions.MapFunction]] and
    +   * [[org.apache.flink.api.common.functions.GroupReduceFunction]] (a partial aggregate
    +   * should also implement [[org.apache.flink.api.common.functions.CombineFunction]]).
    +   * The output of the [[org.apache.flink.api.common.functions.MapFunction]] contains the
    +   * intermediate aggregate values of all aggregate functions, stored in a Row in the
    +   * following format:
    +   *
    +   * {{{
    +   *                   avg(x) aggOffsetInRow = 2          count(z) aggOffsetInRow = 5
    +   *                             |                          |
    +   *                             v                          v
    +   *        +---------+---------+--------+--------+--------+--------+
    +   *        |groupKey1|groupKey2|  sum1  | count1 |  sum2  | count2 |
    +   *        +---------+---------+--------+--------+--------+--------+
    +   *                                              ^
    +   *                                              |
    +   *                               sum(y) aggOffsetInRow = 4
    +   * }}}
    +   *
    +   */
    +  def createOperatorFunctionsForAggregates(namedAggregates: Seq[CalcitePair[AggregateCall, String]],
    +      inputType: RelDataType, outputType: RelDataType,
    +      groupings: Array[Int]): AggregateResult = {
    +
    +    val aggregateFunctionsAndFieldIndexes =
    +      transformToAggregateFunctions(namedAggregates.map(_.getKey), inputType, groupings.length)
    +    // store the aggregate fields of each aggregate function, by the same order of aggregates.
    +    val aggFieldIndexes = aggregateFunctionsAndFieldIndexes._1
    +    val aggregates = aggregateFunctionsAndFieldIndexes._2
    +
    +    val mapFunction = (
    +        config: TableConfig,
    +        inputType: TypeInformation[Any],
    +        returnType: TypeInformation[Any]) => {
    +
    +      val aggregateMapFunction = new AggregateMapFunction[Row, Row](
    +        aggregates, aggFieldIndexes, groupings, returnType.asInstanceOf[RowTypeInfo])
    +
    +      aggregateMapFunction.asInstanceOf[MapFunction[Any, Any]]
    +    }
    +
    +    val bufferDataType: RelRecordType =
    +      createAggregateBufferDataType(groupings, aggregates, inputType)
    +
    +    // the mapping relation between field index of intermediate aggregate Row and output Row.
    +    val groupingOffsetMapping = getGroupKeysMapping(inputType, outputType, groupings)
    +
    +    // the mapping relation between aggregate function index in list and its corresponding
    +    // field index in output Row.
    +    val aggOffsetMapping = getAggregateMapping(namedAggregates, outputType)
    +
    +    if (groupingOffsetMapping.length != groupings.length ||
    +        aggOffsetMapping.length != namedAggregates.length) {
    +      throw new PlanGenException("Could not find output field in input data type " +
    +          "or aggregate functions.")
    +    }
    +
    +    val allPartialAggregate = aggregates.map(_.supportPartial).reduce(_ && _)
    +
    +    val intermediateRowArity = groupings.length + aggregates.map(_.intermediateDataType.length).sum
    +
    +    val reduceGroupFunction =
    +      if (allPartialAggregate) {
    +        (config: TableConfig, inputType: TypeInformation[Row], returnType: TypeInformation[Row]) =>
    +          new AggregateReduceCombineFunction(aggregates, groupingOffsetMapping,
    +            aggOffsetMapping, intermediateRowArity)
    +      } else {
    +        (config: TableConfig, inputType: TypeInformation[Row], returnType: TypeInformation[Row]) =>
    +          new AggregateReduceGroupFunction(aggregates, groupingOffsetMapping,
    +            aggOffsetMapping, intermediateRowArity)
    +      }
    +
    +    new AggregateResult(mapFunction, reduceGroupFunction, bufferDataType)
    +  }
    +
    +  private def transformToAggregateFunctions(
    +      aggregateCalls: Seq[AggregateCall],
    +      inputType: RelDataType,
    +      groupKeysCount: Int): (Array[Int], Array[Aggregate[_ <: Any]]) = {
    +
    +    // store the aggregate fields of each aggregate function, by the same order of aggregates.
    +    val aggFieldIndexes = new Array[Int](aggregateCalls.size)
    +    val aggregates = new Array[Aggregate[_ <: Any]](aggregateCalls.size)
    +
    +    // set the start offset of aggregate buffer value to group keys' length, 
    +    // as all the group keys would be moved to the start fields of intermediate
    +    // aggregate data.
    +    var aggOffset = groupKeysCount
    +
    +    // create aggregate function instances by function type and aggregate field data type.
    +    aggregateCalls.zipWithIndex.foreach { case (aggregateCall, index) =>
    +      val argList: util.List[Integer] = aggregateCall.getArgList
    +      if (argList.isEmpty) {
    +        if (aggregateCall.getAggregation.isInstanceOf[SqlCountAggFunction]) {
    +          aggFieldIndexes(index) = 0
    +        } else {
    +          throw new PlanGenException("Aggregate fields should not be empty.")
    +        }
    +      } else {
    +        if (argList.size() > 1) {
    +          throw new PlanGenException("Currently, do not support aggregate on multi fields.")
    +        }
    +        aggFieldIndexes(index) = argList.get(0)
    +      }
    +      val sqlTypeName = inputType.getFieldList.get(aggFieldIndexes(index)).getType.getSqlTypeName
    +      aggregateCall.getAggregation match {
    +        case _: SqlSumAggFunction | _: SqlSumEmptyIsZeroAggFunction => {
    +          sqlTypeName match {
    --- End diff --
    
    we can make the code more concise like this:
    ```
    aggregates(index) = sqlTypeName match {
      case TINYINT => new ByteSumAggregate
      ...
    }
    ```
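
    Spelled out, the suggested pattern might look like the sketch below. Only `ByteSumAggregate` is named in this thread; the other aggregate class names are assumptions following the same naming scheme for the remaining numeric types:
    ```
    aggregates(index) = sqlTypeName match {
      case TINYINT  => new ByteSumAggregate    // named in the comment above
      case SMALLINT => new ShortSumAggregate   // assumed class name
      case INTEGER  => new IntSumAggregate     // assumed class name
      case BIGINT   => new LongSumAggregate    // assumed class name
      case FLOAT    => new FloatSumAggregate   // assumed class name
      case DOUBLE   => new DoubleSumAggregate  // assumed class name
      case sqlType  =>
        throw new PlanGenException(s"Sum aggregate does not support type: $sqlType")
    }
    ```
    Assigning the result of the match expression directly avoids repeating `aggregates(index) =` in every case arm and keeps the unsupported-type branch in one place.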


> Partial aggregate interface design and sort-based implementation
> ----------------------------------------------------------------
>
>                 Key: FLINK-3474
>                 URL: https://issues.apache.org/jira/browse/FLINK-3474
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Table API
>            Reporter: Chengxiang Li
>            Assignee: Chengxiang Li
>
> The scope of this sub task includes:
> # Partial aggregate interface.
> # Simple aggregate function implementation, such as SUM/AVG/COUNT/MIN/MAX.
> # DataSetAggregateRule, which translates the logical Calcite aggregate node into Flink user
> functions. As the hash-based combiner is not available yet (see PR #1517), we would use
> sort-based combine by default.
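
For orientation: the diff above calls `supportPartial` and `intermediateDataType` on the `Aggregate` instances, so the partial aggregate interface exposes at least those two members. Below is a minimal sketch of what such a contract could look like; the lifecycle method names (initiate/prepare/merge/evaluate) are assumptions, not the PR's actual signatures:
```
trait Aggregate[T] {
  // Reset the intermediate buffer fields this aggregate owns.
  def initiate(intermediate: Row): Unit
  // Fold one input value into the intermediate buffer.
  def prepare(value: Any, intermediate: Row): Unit
  // Combine two partial results; this is what a sort-based combiner relies on.
  def merge(intermediate: Row, buffer: Row): Unit
  // Produce the final value from the (merged) buffer.
  def evaluate(buffer: Row): T
  // Whether pre-aggregation in a combiner is safe for this function.
  def supportPartial: Boolean
  // Types of the intermediate buffer fields, e.g. (sum, count) for AVG.
  def intermediateDataType: Array[SqlTypeName]
}
```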



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
