hive-issues mailing list archives

From "Hive QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler
Date Wed, 05 Sep 2018 04:53:00 GMT

    [ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603896#comment-16603896 ]

Hive QA commented on HIVE-17684:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12938303/HIVE-17684.05.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 34 failed/errored test(s), 14924 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin] (batchId=11)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_1] (batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join10] (batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join14] (batchId=15)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join15] (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=77)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join26] (batchId=14)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join33] (batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_without_localtask] (batchId=1)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_1] (batchId=34)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin12] (batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin8] (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin9] (batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[convert_decimal64_to_decimal] (batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[correlationoptimizer7] (batchId=22)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_bucket_sort_convert_join] (batchId=57)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join33] (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_empty] (batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nonblock_op_deduplicate] (batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[skewjoin_mapjoin10] (batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union22] (batchId=14)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union34] (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_remove_14] (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_char_mapjoin1] (batchId=34)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_include_no_sel] (batchId=4)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_left_outer_join] (batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_outer_join2] (batchId=32)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_outer_join3] (batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_reduce_groupby_duplicate_cols] (batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_context] (batchId=34)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_mapjoin3] (batchId=12)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_ppd_join] (batchId=103)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query39] (batchId=266)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13593/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13593/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13593/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 34 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12938303 - PreCommit-HIVE-Build

> HoS memory issues with MapJoinMemoryExhaustionHandler
> -----------------------------------------------------
>
>                 Key: HIVE-17684
>                 URL: https://issues.apache.org/jira/browse/HIVE-17684
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Misha Dmitriev
>            Priority: Major
>         Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch, HIVE-17684.03.patch, HIVE-17684.04.patch, HIVE-17684.05.patch
>
>
> We have seen a number of memory issues due to the {{HashSinkOperator}}'s use of the
{{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect scenarios where the small
table is taking too much space in memory, in which case a {{MapJoinMemoryExhaustionError}} is thrown.
> The configs to control this logic are:
> {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90)
> {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55)
> The handler queries the {{MemoryMXBean}} and estimates how much memory the {{HashMap}} is
consuming with the ratio {{MemoryMXBean#getHeapMemoryUsage().getUsed() /
MemoryMXBean#getHeapMemoryUsage().getMax()}} (see the first sketch after the quoted
description below).
> The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be inaccurate: it
counts all reachable *and* unreachable memory on the heap, so it may include a large amount of
garbage that the JVM simply hasn't taken the time to reclaim. This can lead to intermittent
failures of the check even though a simple GC would have reclaimed enough space for the
process to continue working (the second sketch below demonstrates this).
> We should re-think the use of {{MapJoinMemoryExhaustionHandler}} for HoS. In Hive-on-MR this
approach probably made sense, because every Hive task ran in a dedicated container and could
therefore assume it had created most of the data on the heap. In Hive-on-Spark, however,
multiple Hive tasks can run in a single executor, each doing different things.
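
A minimal sketch of the threshold check described above, with hypothetical class and method
names (Hive's real {{MapJoinMemoryExhaustionHandler}} differs in detail):

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Hypothetical stand-in for the handler: periodically compares the heap
// used/max ratio against the configured threshold while the small table's
// HashMap is being loaded.
public class MemoryExhaustionCheckSketch {
  private static final MemoryMXBean MEMORY_BEAN = ManagementFactory.getMemoryMXBean();

  // hive.mapjoin.localtask.max.memory.usage (default 0.90), or the 0.55
  // "followby.gby" variant when the local task feeds a group-by.
  private final double maxMemoryUsage;

  public MemoryExhaustionCheckSketch(double maxMemoryUsage) {
    this.maxMemoryUsage = maxMemoryUsage;
  }

  public void checkMemoryStatus(long rowsLoaded) {
    long used = MEMORY_BEAN.getHeapMemoryUsage().getUsed();
    long max = MEMORY_BEAN.getHeapMemoryUsage().getMax();
    double ratio = (double) used / max;
    if (ratio > maxMemoryUsage) {
      // Hive throws MapJoinMemoryExhaustionError at this point.
      throw new RuntimeException(String.format(
          "hash table loading used %.0f%% of max heap (limit %.0f%%) after %d rows",
          ratio * 100, maxMemoryUsage * 100, rowsLoaded));
    }
  }
}
{code}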
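
And a small demonstration of why {{getUsed()}} overestimates live data: it counts unreachable
objects that a collection would reclaim. Illustrative only; {{System.gc()}} is a hint and the
exact numbers vary by JVM and collector:

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class HeapUsedVsLiveSketch {
  public static void main(String[] args) {
    MemoryMXBean bean = ManagementFactory.getMemoryMXBean();

    // Allocate ~200 MB and immediately drop the references: all garbage.
    for (int i = 0; i < 200; i++) {
      byte[] dead = new byte[1024 * 1024];
    }

    long beforeGc = bean.getHeapMemoryUsage().getUsed();
    System.gc(); // Only a hint, but most JVMs run a full collection here.
    long afterGc = bean.getHeapMemoryUsage().getUsed();

    // beforeGc is typically much larger than afterGc; the difference is dead
    // data that would have spuriously inflated the ratio check above.
    System.out.printf("used before GC: %d MB, after GC: %d MB%n",
        beforeGc >> 20, afterGc >> 20);
  }
}
{code}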



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
