hive-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Work logged] (HIVE-21217) Optimize range calculation for PTF
Date Fri, 15 Feb 2019 15:40:00 GMT

     [ https://issues.apache.org/jira/browse/HIVE-21217?focusedWorklogId=199293&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-199293 ]

ASF GitHub Bot logged work on HIVE-21217:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 15/Feb/19 15:39
            Start Date: 15/Feb/19 15:39
    Worklog Time Spent: 10m 
      Work Description: pvary commented on pull request #538: HIVE-21217: Optimize range calculation for PTF
URL: https://github.com/apache/hive/pull/538#discussion_r257278647
 
 

 ##########
 File path: ql/src/java/org/apache/hadoop/hive/ql/udf/ptf/ValueBoundaryScanner.java
 ##########
 @@ -44,10 +49,207 @@ public ValueBoundaryScanner(BoundaryDef start, BoundaryDef end, boolean nullsLast
     this.nullsLast = nullsLast;
   }
 
+  public abstract Object computeValue(Object row) throws HiveException;
+
+  /**
+   * Checks if the distance from v2 to v1 is greater than the given amount.
+   * @return True if the value of v1 - v2 is greater than amt, or if either value is null.
+   */
+  public abstract boolean isDistanceGreater(Object v1, Object v2, int amt);
+
+  /**
+   * Checks if the values v1 and v2 are equal.
+   * @return True if both values are equal or both are null.
+   */
+  public abstract boolean isEqual(Object v1, Object v2);
+
   public abstract int computeStart(int rowIdx, PTFPartition p) throws HiveException;
 
   public abstract int computeEnd(int rowIdx, PTFPartition p) throws HiveException;
 
+  /**
+   * Checks and maintains the cache content: keeps the cache window positioned around the
+   * current row so that the cache follows the current progress.
+   * @param rowIdx current row
+   * @param p current partition for the PTF operator
+   * @throws HiveException
+   */
+  public void handleCache(int rowIdx, PTFPartition p) throws HiveException {
+    BoundaryCache cache = p.getBoundaryCache();
+    if (cache == null) {
+      return;
+    }
+
+    //Start of partition
+    if (rowIdx == 0) {
+      cache.clear();
+    }
+    if (cache.isComplete()) {
+      return;
+    }
+
+    int cachePos = cache.approxCachePositionOf(rowIdx);
+
+    if (cache.isEmpty()) {
+      fillCacheUntilEndOrFull(rowIdx, p);
+    } else if (cachePos > 50 && cachePos <= 75) {
 
 Review comment:
   This is strange to me. Do we know the size of the window in advance? Should we not size our cache accordingly? If the window size is 5, then we should cache 7 values (1 before, 1 after, and the 5 values in the window)?
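
 For context on the quoted hunk, here is a minimal standalone sketch of the percentage-based cache-window idea that the branch above appears to implement. All names below (CacheWindowSketch, maybeSlide, the window fields) are hypothetical simplifications, not Hive's actual BoundaryCache API, and it assumes approxCachePositionOf() reports the current row's position inside the cached window as a percentage:

    /**
     * Simplified sketch of a cache window that follows the current row
     * (hypothetical stand-in for Hive's BoundaryCache; not the actual API).
     */
    class CacheWindowSketch {
      private int windowStartRow; // first row index covered by the cache
      private int windowEndRow;   // last row index covered by the cache

      CacheWindowSketch(int startRow, int endRow) {
        this.windowStartRow = startRow;
        this.windowEndRow = endRow;
      }

      /** Position of rowIdx inside the cached window, as a percentage (0..100). */
      int approxCachePositionOf(int rowIdx) {
        int span = windowEndRow - windowStartRow;
        if (span <= 0) {
          return 0;
        }
        return (int) (100.0 * (rowIdx - windowStartRow) / span);
      }

      /**
       * Mirrors the shape of handleCache(): once the current row has moved past
       * the middle of the cached window, slide the window forward so the cache
       * stays centered around the reader's position.
       */
      void maybeSlide(int rowIdx) {
        int pos = approxCachePositionOf(rowIdx);
        if (pos > 50 && pos <= 75) {
          int shift = (windowEndRow - windowStartRow) / 4;
          windowStartRow += shift; // drop the oldest quarter of the window
          windowEndRow += shift;   // read ahead by the same amount
        }
      }
    }

 The reviewer's alternative, sizing the cache from a known window size (e.g. 7 slots for a window of 5), would replace this percentage heuristic with a fixed-capacity window.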
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 199293)
    Time Spent: 20m  (was: 10m)

> Optimize range calculation for PTF
> ----------------------------------
>
>                 Key: HIVE-21217
>                 URL: https://issues.apache.org/jira/browse/HIVE-21217
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Adam Szita
>            Assignee: Adam Szita
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-21217.0.patch, HIVE-21217.1.patch, HIVE-21217.2.patch
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> During window function execution, Hive has to iterate over the rows neighbouring the current row to find the beginning and end of the proper range (on which the aggregation will be executed).
> When we're using range-based windows and have many rows with a certain key value, this can take a lot of time. (E.g. in a partition of 80M rows containing 2 ranges of 40M rows each according to the order-by column, we're doing 40M x 40M/2 steps within those 40M-row sets, which is O(n^2) time complexity.)
> I propose to introduce a cache that keeps track of already calculated range ends, so they can be reused in future scans.
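
To make the proposal concrete, here is a minimal sketch of the range-end caching idea. The names (RangeEndCacheSketch, computeRangeEnd) are hypothetical illustrations, not classes from the attached patches: once the end of a range has been found by a linear scan for one row, every later row with the same order-by key reuses it instead of rescanning.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Objects;

    /** Hypothetical sketch of the proposed range-end cache (not from the patch). */
    public class RangeEndCacheSketch {

      /** Maps an order-by key to its already computed (exclusive) range end. */
      private final Map<Object, Integer> rangeEndCache = new HashMap<>();

      /**
       * Returns the exclusive end index of the range sharing the key at rowIdx.
       * Uncached, each call scans forward: O(n) per row, O(n^2) per partition.
       * Cached, the forward scan runs only once per distinct key.
       */
      public int computeRangeEnd(Object[] orderByValues, int rowIdx) {
        Object key = orderByValues[rowIdx];
        Integer cached = rangeEndCache.get(key);
        if (cached != null) {
          return cached; // reuse the range end computed for an earlier row
        }
        int end = rowIdx;
        while (end < orderByValues.length && Objects.equals(key, orderByValues[end])) {
          end++;
        }
        rangeEndCache.put(key, end);
        return end;
      }
    }

With 40M rows per key, the uncached version performs roughly 40M x 40M/2 comparisons per range, while the cached version performs one 40M-row scan plus 40M constant-time lookups.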



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
