hive-issues mailing list archives

From "ASF GitHub Bot (Jira)" <j...@apache.org>
Subject [jira] [Work logged] (HIVE-24061) Improve llap task scheduling for better cache hit rate
Date Wed, 26 Aug 2020 00:47:00 GMT

     [ https://issues.apache.org/jira/browse/HIVE-24061?focusedWorklogId=474585&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-474585 ]

ASF GitHub Bot logged work on HIVE-24061:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 26/Aug/20 00:46
            Start Date: 26/Aug/20 00:46
    Worklog Time Spent: 10m 
      Work Description: rbalamohan commented on pull request #1431:
URL: https://github.com/apache/hive/pull/1431#issuecomment-680377420


   Thanks @prasanthj. Made a minor fix where "isClusterCapacityFull" also has to be reset in trySchedulingPendingTasks. This is needed to ensure that a scheduling opportunity is given during task deallocations, node additions, etc.
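
   For context, a minimal sketch of what that reset could look like (the field name "isClusterCapacityFull" comes from the comment above; the AtomicBoolean type and the body of trySchedulingPendingTasks are assumptions, not the actual patch):

{code:java}
// Hypothetical sketch, not the actual LlapTaskSchedulerService code.
private final java.util.concurrent.atomic.AtomicBoolean isClusterCapacityFull =
    new java.util.concurrent.atomic.AtomicBoolean(false);

protected void trySchedulingPendingTasks() {
  // Clear the "cluster full" flag before walking the pending-task queue, so
  // capacity freed by task deallocation or a newly added node is not masked
  // by a stale flag from an earlier scheduling pass.
  isClusterCapacityFull.set(false);
  // ... iterate over pending tasks and attempt to place each one ...
}
{code}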


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 474585)
    Time Spent: 40m  (was: 0.5h)

> Improve llap task scheduling for better cache hit rate 
> -------------------------------------------------------
>
>                 Key: HIVE-24061
>                 URL: https://issues.apache.org/jira/browse/HIVE-24061
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Rajesh Balamohan
>            Assignee: Rajesh Balamohan
>            Priority: Major
>              Labels: performance, pull-request-available
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> TaskInfo is initialized with the "requestTime" and locality delay. When lots of vertices are at the same level, the "taskInfo" details are available upfront. By the time scheduling actually happens, "requestTime + localityDelay" is no longer higher than the current time. Because of this, the scheduler skips the locality delay and ends up choosing a random node, which misses cache hits and reads data from remote storage instead.
> E.g., observed this pattern in Q75 of TPC-DS.
> Related lines of interest in the scheduler: https://github.com/apache/hive/blob/master/llap-tez/src/java/org/apache/hadoop/hive/llap/tezplugins/LlapTaskSchedulerService.java
> {code:java}
>    // Caller: delay scheduling only while the locality deadline is still in the future.
>    boolean shouldDelayForLocality = request.shouldDelayForLocality(schedulerAttemptTime);
> ..
> ..
>     // localityDelayTimeout is derived from the TaskInfo request time, not from
>     // the first scheduling attempt.
>     boolean shouldDelayForLocality(long schedulerAttemptTime) {
>       return localityDelayTimeout > schedulerAttemptTime;
>     }
> {code}
>  
> Ideally, "localityDelayTimeout" should be adjusted based on its first scheduling opportunity.
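>  
> A rough sketch of that adjustment, purely illustrative (the helper name adjustLocalityDelayTimeout, the field firstSchedulingAttemptTime, and the parameter localityDelayMs are hypothetical, not taken from the actual patch): the timeout is re-anchored to the first time the task is actually considered for scheduling instead of to TaskInfo creation time.
> {code:java}
> // Illustrative only: anchor the locality delay to the first scheduling
> // attempt rather than to TaskInfo creation, so the delay window is not
> // already expired by the time the scheduler first looks at the task.
> private long firstSchedulingAttemptTime = -1;
> private long localityDelayTimeout;
>
> void adjustLocalityDelayTimeout(long schedulerAttemptTime, long localityDelayMs) {
>   if (firstSchedulingAttemptTime < 0) {
>     firstSchedulingAttemptTime = schedulerAttemptTime;
>     localityDelayTimeout = firstSchedulingAttemptTime + localityDelayMs;
>   }
> }
>
> boolean shouldDelayForLocality(long schedulerAttemptTime) {
>   return localityDelayTimeout > schedulerAttemptTime;
> }
> {code}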



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
