flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-3477) Add hash-based combine strategy for ReduceFunction
Date Sun, 10 Jul 2016 14:42:10 GMT

    [ https://issues.apache.org/jira/browse/FLINK-3477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15369673#comment-15369673 ]

ASF GitHub Bot commented on FLINK-3477:
---------------------------------------

Github user ggevay commented on the issue:

    https://github.com/apache/flink/pull/1517
  
    > Reading back through the discussion I see that there are many ideas for future performance enhancements. If not already suggested, I'd like to consider skipping staging for fixed-length records.
    
    Thanks, I've added this to my notes.
    
    > I'm missing why we can't update in place with smaller records. The deserializer is responsible for detecting the end of the record and we wouldn't need to change the pointer value when replacing with a smaller record.
    
    A problem would arise in `EntryIterator`: after reading a record, we wouldn't know where the next record starts. (As it is now, each record starts right after the previous one.)
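
    To make the `EntryIterator` point concrete, here is a minimal editorial sketch (plain java.io streams, not Flink's hash table or serializers; the class name InPlaceUpdateSketch is made up): records are stored back to back and read sequentially, so the reader finds record n+1 only by finishing the deserialization of record n. Overwriting a record in place with a shorter serialization leaves stale bytes behind it, and the reader loses track of where the next record begins.

{code}
// Editorial sketch: length-prefixed records stored back to back, read sequentially.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class InPlaceUpdateSketch {

    // Serialize each value as a length-prefixed record, one right after the other.
    static byte[] writeRecords(String... values) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        for (String v : values) {
            out.writeUTF(v);
        }
        return bytes.toByteArray();
    }

    // Read records sequentially: each record is assumed to start where the previous one ended.
    static void readAll(byte[] area, int numRecords) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(area));
        for (int i = 0; i < numRecords; i++) {
            System.out.println("record " + i + ": " + in.readUTF());
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] area = writeRecords("aggregate-0042", "key-B");

        // "Update" record 0 in place with a shorter serialization. The tail of the
        // old, longer record is still sitting between the new bytes and record 1.
        byte[] shorter = writeRecords("agg-42");
        System.arraycopy(shorter, 0, area, 0, shorter.length);

        try {
            readAll(area, 2);
        } catch (IOException e) {
            // The reader no longer knows where record 1 starts.
            System.out.println("lost track of the next record: " + e);
        }
    }
}
{code}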
    
    Thanks @greghogan for pushing this forward. I think I have addressed all your comments.


> Add hash-based combine strategy for ReduceFunction
> --------------------------------------------------
>
>                 Key: FLINK-3477
>                 URL: https://issues.apache.org/jira/browse/FLINK-3477
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Local Runtime
>            Reporter: Fabian Hueske
>            Assignee: Gabor Gevay
>
> This issue is about adding a hash-based combine strategy for ReduceFunctions.
> The interface of the {{reduce()}} method is as follows:
> {code}
> public T reduce(T v1, T v2)
> {code}
> Input type and output type are identical and the function returns only a single value. A ReduceFunction is applied incrementally to compute a final aggregated value. This allows holding the pre-aggregated value in a hash table and updating it with each function call (see the sketch after this description).
> The hash-based strategy requires a special implementation of an in-memory hash table. The hash table should support in-place updates of elements (if the new value has the same binary size as the old one) as well as appending the updated value and invalidating the old one (if the binary lengths differ). The hash table also needs to be able to evict and emit all of its elements if it runs out of memory.
> We should also add {{HASH}} and {{SORT}} compiler hints to {{DataSet.reduce()}} and {{Grouping.reduce()}} to allow users to pick the execution strategy.
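
The following editorial sketch illustrates the strategy described above using plain Java collections rather than Flink's managed-memory hash table (the class and parameter names HashCombineSketch and maxEntries are made up): one pre-aggregated value is held per key, each incoming record is reduced into it, and when the table grows past its budget all partial aggregates are emitted and the table is cleared, standing in for the "evict and emit all elements" behaviour.

{code}
// Editorial sketch of a hash-based combiner (not Flink code).
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BinaryOperator;
import java.util.function.Function;

public class HashCombineSketch<K, T> {

    private final Function<T, K> keyExtractor;
    private final BinaryOperator<T> reduceFunction; // same contract as T reduce(T v1, T v2)
    private final int maxEntries;                   // crude stand-in for a memory budget
    private final Map<K, T> table = new HashMap<>();

    public HashCombineSketch(Function<T, K> keyExtractor,
                             BinaryOperator<T> reduceFunction,
                             int maxEntries) {
        this.keyExtractor = keyExtractor;
        this.reduceFunction = reduceFunction;
        this.maxEntries = maxEntries;
    }

    // Combine one record into its pre-aggregate; returns evicted partial aggregates
    // if the table overflowed, otherwise an empty list.
    public List<T> combine(T record) {
        K key = keyExtractor.apply(record);
        table.merge(key, record, reduceFunction); // incremental update, one value per key
        if (table.size() > maxEntries) {
            return flush(); // "out of memory": emit all partial aggregates downstream
        }
        return List.of();
    }

    // Emit and clear all partial aggregates (also called once the input is exhausted).
    public List<T> flush() {
        List<T> out = new ArrayList<>(table.values());
        table.clear();
        return out;
    }

    // Minimal usage: word counts, keyed by the word itself.
    record WordCount(String word, long count) {}

    public static void main(String[] args) {
        HashCombineSketch<String, WordCount> combiner = new HashCombineSketch<>(
                WordCount::word,
                (a, b) -> new WordCount(a.word(), a.count() + b.count()),
                2 /* tiny budget, just to force an eviction */);

        for (String w : new String[] {"a", "b", "a", "c", "b"}) {
            combiner.combine(new WordCount(w, 1L))
                    .forEach(p -> System.out.println("evicted: " + p));
        }
        combiner.flush().forEach(p -> System.out.println("final: " + p));
    }
}
{code}

In the actual hash table described above, the budget would come from managed memory and the values would be kept in serialized form, which is where the in-place versus append-with-invalidation distinction applies; a java.util.HashMap simply replaces the value reference, so that aspect is not modelled here.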



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
