hive-issues mailing list archives

From "Hive QA (JIRA)" <>
Subject [jira] [Commented] (HIVE-17458) VectorizedOrcAcidRowBatchReader doesn't handle 'original' files
Date Wed, 01 Nov 2017 15:53:00 GMT


Hive QA commented on HIVE-17458:

Here are the results of testing the latest attachment:

{color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 11339 tests executed
*Failed tests:*
TestOperationLoggingAPIWithMr - did not produce a TEST-*.xml file (likely timed out) (batchId=227)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=156)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[ct_noperm_loc] (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] (batchId=111)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=206)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=223)

Test results:
Console output:
Test logs:

Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed

This message is automatically generated.

ATTACHMENT ID: 12895204 - PreCommit-HIVE-Build

> VectorizedOrcAcidRowBatchReader doesn't handle 'original' files
> ---------------------------------------------------------------
>                 Key: HIVE-17458
>                 URL:
>             Project: Hive
>          Issue Type: Improvement
>    Affects Versions: 2.2.0
>            Reporter: Eugene Koifman
>            Assignee: Eugene Koifman
>            Priority: Critical
>         Attachments: HIVE-17458.01.patch, HIVE-17458.02.patch, HIVE-17458.03.patch, HIVE-17458.04.patch,
> HIVE-17458.05.patch, HIVE-17458.06.patch, HIVE-17458.07.patch, HIVE-17458.07.patch, HIVE-17458.08.patch,
> HIVE-17458.09.patch, HIVE-17458.10.patch, HIVE-17458.11.patch, HIVE-17458.12.patch, HIVE-17458.12.patch,
> HIVE-17458.13.patch, HIVE-17458.14.patch
> VectorizedOrcAcidRowBatchReader will not be used for original files.  This will likely look like a perf regression when converting a table from non-acid to acid until it runs through a major compaction.
> With Load Data support, if large files are added via Load Data, the read ops will not vectorize until major compaction.
> There is no reason why this should be the case.  Just like OrcRawRecordMerger, VectorizedOrcAcidRowBatchReader can look at the other files in the logical tranche/bucket and calculate the offset for the RowBatch of the split.  (Presumably getRecordReader().getRowNumber() works the same in vector
> In this case we don't even need OrcSplit.isOriginal() - the reader can infer it from the file path... which in particular simplifies OrcInputFormat.determineSplitStrategies()
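The offset calculation the description proposes can be sketched as follows. This is only an illustrative reading of the idea, not Hive code: `OriginalFile` and `offsetFor` are hypothetical names, and the real reader would obtain each file's row count from the ORC footers of the other original files in the same logical bucket rather than from an in-memory list.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: assign each 'original' (pre-acid) file in a logical
// bucket a starting synthetic rowId equal to the total row count of the
// files that precede it in the bucket's ordered file list.
public class OriginalFileOffsetSketch {
    // An 'original' file in a logical bucket, with its row count
    // (in Hive this would come from the ORC file footer).
    static class OriginalFile {
        final String path;
        final long rowCount;
        OriginalFile(String path, long rowCount) {
            this.path = path;
            this.rowCount = rowCount;
        }
    }

    // Sum the row counts of all files preceding targetPath in the bucket's
    // ordered file list; the result is the first synthetic rowId of that file,
    // i.e. the offset to add to the split's RowBatch row numbers.
    static long offsetFor(List<OriginalFile> bucketFiles, String targetPath) {
        long offset = 0;
        for (OriginalFile f : bucketFiles) {
            if (f.path.equals(targetPath)) {
                return offset;
            }
            offset += f.rowCount;
        }
        throw new IllegalArgumentException("file not in bucket: " + targetPath);
    }

    public static void main(String[] args) {
        // Example bucket: three original files, lexically ordered.
        List<OriginalFile> bucket = new ArrayList<>();
        bucket.add(new OriginalFile("000000_0", 1000));
        bucket.add(new OriginalFile("000000_0_copy_1", 500));
        bucket.add(new OriginalFile("000000_0_copy_2", 250));
        System.out.println(offsetFor(bucket, "000000_0_copy_2")); // prints 1500
    }
}
```

Because the offset depends only on the other files in the bucket, it can be computed per split without a major compaction, which is the point the description makes about avoiding the perf regression.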

This message was sent by Atlassian JIRA
