flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-2997) Support range partition with user customized data distribution.
Date Thu, 10 Mar 2016 10:40:40 GMT

    [ https://issues.apache.org/jira/browse/FLINK-2997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15189067#comment-15189067 ]

ASF GitHub Bot commented on FLINK-2997:
---------------------------------------

Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1776#discussion_r55661252
  
    --- Diff: flink-tests/src/test/java/org/apache/flink/test/javaApiOperators/CustomDistributionITCase.java ---
    @@ -0,0 +1,142 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one
    + * or more contributor license agreements.  See the NOTICE file
    + * distributed with this work for additional information
    + * regarding copyright ownership.  The ASF licenses this file
    + * to you under the Apache License, Version 2.0 (the
    + * "License"); you may not use this file except in compliance
    + * with the License.  You may obtain a copy of the License at
    + *
    + *     http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.flink.test.javaApiOperators;
    +
    +import org.apache.flink.api.java.tuple.Tuple2;
    +import org.apache.flink.test.distribution.CustomDistribution;
    +import org.apache.flink.api.common.functions.MapPartitionFunction;
    +import org.apache.flink.api.java.DataSet;
    +import org.apache.flink.api.java.ExecutionEnvironment;
    +import org.apache.flink.api.java.tuple.Tuple1;
    +import org.apache.flink.api.java.tuple.Tuple3;
    +import org.apache.flink.api.java.utils.DataSetUtils;
    +import org.apache.flink.types.IntValue;
    +import org.apache.flink.util.Collector;
    +import org.junit.Test;
    +
    +
    +import static org.junit.Assert.assertEquals;
    +
    +
    +public class CustomDistributionITCase {
    +	
    +	@Test
    +	public void testRangeWithDistribution1() throws Exception {
    +
    +		ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment();
    +
    +		DataSet<Tuple3<Integer, Integer, String>> input1 = env.fromElements(
    +				new Tuple3<>(1, 1, "Hi"),
    +				new Tuple3<>(1, 2, "Hello"),
    +				new Tuple3<>(1, 3, "Hello world"),
    +				new Tuple3<>(2, 4, "Hello world, how are you?"),
    +				new Tuple3<>(2, 5, "I am fine."),
    +				new Tuple3<>(3, 6, "Luke Skywalker"),
    +				new Tuple3<>(4, 7, "Comment#1"),
    +				new Tuple3<>(4, 8, "Comment#2"),
    +				new Tuple3<>(4, 9, "Comment#3"),
    +				new Tuple3<>(5, 10, "Comment#4"));
    +
    +		IntValue[][] keys = new IntValue[2][2];
    +
    +		env.setParallelism(3);
    +
    +		for (int i = 0; i < 2; i++) {
    +			for (int j = 0; j < 2; j++) {
    +				keys[i][j] = new IntValue(i + j);
    +			}
    +		}
    +
    +		CustomDistribution cd = new CustomDistribution(keys);
    +
    +		DataSet<Tuple2<IntValue, IntValue>> out1 = DataSetUtils.partitionByRange(input1.mapPartition(
    +				new MapPartitionFunction<Tuple3<Integer, Integer, String>, Tuple2<IntValue, IntValue>>() {
    +			public void mapPartition(Iterable<Tuple3<Integer, Integer, String>> values, Collector<Tuple2<IntValue, IntValue>> out) {
    +				IntValue key1;
    +				IntValue key2;
    +				for (Tuple3<Integer, Integer, String> s : values) {
    +					key1 = new IntValue(s.f0);
    +					key2 = new IntValue(s.f1);
    +					out.collect(new Tuple2<>(key1, key2));
    +				}
    +			}
    +		}), cd, 0, 1).groupBy(0).sum(0);
    +
    +		String expected = "[(1,3), (4,5), (2,2), (3,6), (5,10), (12,9)]";
    +		assertEquals(expected, out1.collect().toString());
    +	}
    +
    +	@Test
    +	public void testRangeWithDistribution2() throws Exception {
    --- End diff --
    
    The comments on the previous test apply to this method as well.


> Support range partition with user customized data distribution.
> ---------------------------------------------------------------
>
>                 Key: FLINK-2997
>                 URL: https://issues.apache.org/jira/browse/FLINK-2997
>             Project: Flink
>          Issue Type: New Feature
>            Reporter: Chengxiang Li
>
> This is a follow-up to FLINK-7. Sometimes users have better knowledge of the source data and can provide a customized data distribution to perform range partitioning more efficiently.
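
As background for the feature itself, the sketch below shows what a user-defined distribution could look like. It is only an illustration, not the CustomDistribution class from this pull request: it assumes the DataDistribution interface of the DataSet API (org.apache.flink.api.common.distributions.DataDistribution), which requires getBucketBoundary(bucketNum, totalNumBuckets), getNumberOfFields(), and the read/write methods inherited from IOReadableWritable. The class name SingleFieldIntDistribution and its boundary handling are made up for the example.

    package org.apache.flink.test.distribution;  // hypothetical location, mirroring the test's import

    import java.io.IOException;

    import org.apache.flink.api.common.distributions.DataDistribution;
    import org.apache.flink.core.memory.DataInputView;
    import org.apache.flink.core.memory.DataOutputView;
    import org.apache.flink.types.IntValue;

    /**
     * Illustrative user-defined distribution over a single int key.
     * The user supplies the upper bucket boundaries directly, so the
     * range partitioner does not need to sample the data first.
     */
    public class SingleFieldIntDistribution implements DataDistribution {

        private int[] boundaries;  // upper boundary of each bucket, in ascending order

        public SingleFieldIntDistribution() {
            // no-arg constructor needed for deserialization
        }

        public SingleFieldIntDistribution(int[] boundaries) {
            this.boundaries = boundaries;
        }

        @Override
        public Object[] getBucketBoundary(int bucketNum, int totalNumBuckets) {
            // For simplicity this sketch assumes totalNumBuckets == boundaries.length;
            // a real implementation would have to interpolate otherwise.
            return new Object[] { new IntValue(boundaries[bucketNum]) };
        }

        @Override
        public int getNumberOfFields() {
            return 1;  // the distribution describes a single key field
        }

        @Override
        public void write(DataOutputView out) throws IOException {
            out.writeInt(boundaries.length);
            for (int b : boundaries) {
                out.writeInt(b);
            }
        }

        @Override
        public void read(DataInputView in) throws IOException {
            boundaries = new int[in.readInt()];
            for (int i = 0; i < boundaries.length; i++) {
                boundaries[i] = in.readInt();
            }
        }
    }

A distribution like this would be handed to the range partitioner in the same way the test above passes cd, for example DataSetUtils.partitionByRange(data, new SingleFieldIntDistribution(new int[] {3, 7, 10}), 0), leaving the choice of bucket boundaries to the user instead of a sampling step.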



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
