[ https://issues.apache.org/jira/browse/HIVE-22670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17072367#comment-17072367 ]
Hive QA commented on HIVE-22670:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12998319/HIVE-22670.2.patch
{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 18164 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[topnkey_grouping_sets] (batchId=1)
org.apache.hadoop.hive.ql.TestWarehouseExternalDir.org.apache.hadoop.hive.ql.TestWarehouseExternalDir (batchId=264)
org.apache.hadoop.hive.ql.TestWarehouseExternalDir.testExternalDefaultPaths (batchId=264)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/21363/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21363/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21363/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12998319 - PreCommit-HIVE-Build
> ArrayIndexOutOfBoundsException when vectorized reader is used for reading a parquet file
> ----------------------------------------------------------------------------------------
>
> Key: HIVE-22670
> URL: https://issues.apache.org/jira/browse/HIVE-22670
> Project: Hive
> Issue Type: Bug
> Components: Parquet, Vectorization
> Affects Versions: 3.1.2, 2.3.6
> Reporter: Ganesha Shreedhara
> Assignee: Ganesha Shreedhara
> Priority: Major
> Attachments: HIVE-22670.1.patch, HIVE-22670.2.patch
>
>
> ArrayIndexOutOfBoundsException is thrown while decoding the dictionaryIds of a row group in a Parquet file when vectorization is enabled.
> *Exception stack trace:*
> {code:java}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
> at org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary.decodeToBinary(PlainValuesDictionary.java:122)
> at org.apache.hadoop.hive.ql.io.parquet.vector.ParquetDataColumnReaderFactory$DefaultParquetDataColumnReader.readString(ParquetDataColumnReaderFactory.java:95)
> at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedPrimitiveColumnReader.decodeDictionaryIds(VectorizedPrimitiveColumnReader.java:467)
> at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedPrimitiveColumnReader.readBatch(VectorizedPrimitiveColumnReader.java:68)
> at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:410)
> at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:353)
> at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:92)
> at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365)
> ... 24 more{code}
>
> This issue seems to be caused by re-using the same dictionary column vector while reading consecutive row groups. It looks like a corner-case bug that occurs for a certain distribution of dictionary/plain encoded data when the underlying bit-packed dictionary data is read and populated into a column-vector based data structure.
> A similar issue was reported in Spark (Ref: https://issues.apache.org/jira/browse/SPARK-16334).
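> For illustration only, a minimal, self-contained Java sketch of that failure mode (this is not the actual Hive reader code; the class and method names below are made up). The assumption: the dictionary carried by a reused column vector is not refreshed between row groups, so dictionary ids from a later row group get resolved against the earlier row group's dictionary and can run past its end:
> {code:java}
> // Hypothetical sketch, not Hive code: shows how reusing a stale dictionary
> // across row groups can produce ArrayIndexOutOfBoundsException.
> public class StaleDictionarySketch {
>
>   // State that, by assumption, survives from one row group to the next
>   // because the same column vector object is reused without being reset.
>   private static String[] cachedDictionary;
>
>   /** Mimics decodeDictionaryIds(): resolves each id through the dictionary. */
>   static String[] decode(int[] dictionaryIds) {
>     String[] out = new String[dictionaryIds.length];
>     for (int i = 0; i < dictionaryIds.length; i++) {
>       // Throws ArrayIndexOutOfBoundsException when the cached dictionary
>       // belongs to an earlier row group and is too small for these ids.
>       out[i] = cachedDictionary[dictionaryIds[i]];
>     }
>     return out;
>   }
>
>   public static void main(String[] args) {
>     // Row group 1: two distinct values, ids 0..1 decode fine.
>     cachedDictionary = new String[] {"a", "b"};
>     System.out.println(String.join(",", decode(new int[] {0, 1, 1})));
>
>     // Row group 2: has its own, larger dictionary, but the reader keeps the
>     // reused vector's old dictionary -> id 2 is out of bounds and throws.
>     System.out.println(String.join(",", decode(new int[] {0, 2})));
>   }
> }
> {code}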
--
This message was sent by Atlassian Jira
(v8.3.4#803005)