hive-issues mailing list archives

From "ASF GitHub Bot (Jira)" <j...@apache.org>
Subject [jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state
Date Mon, 26 Jul 2021 09:16:00 GMT

     [ https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=627596&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627596 ]

ASF GitHub Bot logged work on HIVE-25115:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 26/Jul/21 09:15
            Start Date: 26/Jul/21 09:15
    Worklog Time Spent: 10m 
      Work Description: klcopp commented on a change in pull request #2277:
URL: https://github.com/apache/hive/pull/2277#discussion_r676430685



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##########
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction.
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories, so as not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);

Review comment:
       And we're 100% sure that we're lowering it and not raising it? Maybe we could include some sort of assertion that ci.highestWriteId <= previous high watermark?
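A minimal sketch of the invariant the reviewer is asking to assert: capping the valid-write-id list to the compaction's highestWriteId should only ever lower (or keep) the high watermark, never raise it. All names here are illustrative, not the actual Hive API.

```java
// Illustrative only: models the guard suggested in the review comment.
// capHighWatermark and its parameters are hypothetical names, not Hive code.
public class HighWatermarkGuard {

    /** Caps the high watermark, failing loudly if that would raise it. */
    static long capHighWatermark(long currentHighWatermark, long compactionHighestWriteId) {
        if (compactionHighestWriteId > currentHighWatermark) {
            // The assertion the reviewer proposes: the compaction's highestWriteId
            // must not exceed the watermark we are about to replace.
            throw new AssertionError("highestWriteId " + compactionHighestWriteId
                + " exceeds current high watermark " + currentHighWatermark);
        }
        return Math.min(currentHighWatermark, compactionHighestWriteId);
    }

    public static void main(String[] args) {
        // Lowering is the expected case: the cleaner should only consider
        // directories made obsolete by this particular compaction.
        System.out.println(capHighWatermark(100L, 80L)); // prints 80
    }
}
```

Such a guard would turn a silent watermark increase (which could let the cleaner delete directories a newer compaction still needs for its retentionTime) into an immediate, debuggable failure.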




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscribe@hive.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 627596)
    Time Spent: 3h  (was: 2h 50m)

> Compaction queue entries may accumulate in "ready for cleaning" state
> ---------------------------------------------------------------------
>
>                 Key: HIVE-25115
>                 URL: https://issues.apache.org/jira/browse/HIVE-25115
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Karen Coppage
>            Assignee: Denys Kuzmenko
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 3h
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is thrown back to the queue and remains in "ready for cleaning" state.
> Problem: If 2 compactions run on the same table and enter "ready for cleaning" state at the same time, only one "cleaning" will remove obsolete files; the other entry will remain in the queue in "ready for cleaning" state.
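The accumulation described in the issue can be sketched with a toy model (hypothetical code, not Hive's Cleaner): two entries reach "ready for cleaning", the first cleaning removes all obsolete directories, so the second finds nothing to delete and is thrown back to the queue.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of the reported accumulation; names and structure are
// illustrative and do not correspond to the real Hive Cleaner.
public class ReadyForCleaningModel {

    /** Returns how many entries are still stuck in "ready for cleaning". */
    static int simulate() {
        Deque<String> readyForCleaning = new ArrayDeque<>();
        readyForCleaning.add("compaction-1");
        readyForCleaning.add("compaction-2");
        int obsoleteDirs = 2; // directories both cleanings would target

        while (!readyForCleaning.isEmpty()) {
            String entry = readyForCleaning.poll();
            if (obsoleteDirs > 0) {
                obsoleteDirs = 0; // first cleaning removes everything obsolete
            } else {
                // Nothing deleted: entry is thrown back to the queue.
                readyForCleaning.add(entry);
                break; // in the real service this entry would cycle forever
            }
        }
        return readyForCleaning.size();
    }

    public static void main(String[] args) {
        System.out.println("entries stuck in ready-for-cleaning: " + simulate());
    }
}
```

Filtering the obsolete-directory list per compaction (as the patch above does via updateHighWatermark) gives each queue entry its own set of directories to remove, so a cleaning no longer depends on files another cleaning already deleted.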



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
