hive-issues mailing list archives

From "Hive QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-18265) desc formatted/extended or show create table can not fully display the result when field or table comment contains tab character
Date Thu, 14 Dec 2017 22:33:00 GMT

    [ https://issues.apache.org/jira/browse/HIVE-18265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16291722#comment-16291722 ]

Hive QA commented on HIVE-18265:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12902065/HIVE-18265.1.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 16 failed/errored test(s), 11533 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[quotedid_smb] (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1] (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10] (batchId=138)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7] (batchId=128)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] (batchId=113)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=226)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8247/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8247/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8247/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 16 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12902065 - PreCommit-HIVE-Build

> desc formatted/extended or show create table can not fully display the result when field or table comment contains tab character
> --------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-18265
>                 URL: https://issues.apache.org/jira/browse/HIVE-18265
>             Project: Hive
>          Issue Type: Bug
>          Components: CLI
>    Affects Versions: 3.0.0
>            Reporter: Hui Huang
>            Assignee: Hui Huang
>             Fix For: 3.0.0
>
>         Attachments: HIVE-18265.1.patch, HIVE-18265.patch
>
>
> Here are some examples:
> create table test_comment (id1 string comment 'full_\tname1', id2 string comment 'full_\tname2', id3 string comment 'full_\tname3') stored as textfile;
> When executing `show create table test_comment`, we can see the following content in the console:
> {quote}
> createtab_stmt
> CREATE TABLE `test_comment`(
>   `id1` string COMMENT 'full_
>   `id2` string COMMENT 'full_
>   `id3` string COMMENT 'full_
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.mapred.TextInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
> LOCATION
>   'hdfs://xxx/user/huanghui/warehouse/huanghuitest.db/test_comment'
> TBLPROPERTIES (
>   'transient_lastDdlTime'='1513095570')
> {quote}
> The output of `desc formatted test_comment` is similar:
> {quote}
> col_name	data_type	comment
> \# col_name            	data_type           	comment
> id1                 	string              	full_
> id2                 	string              	full_
> id3                 	string              	full_
> \# Detailed Table Information
> (ignore)...
> {quote}
> When executing `desc extended test_comment`, the problem is more obvious:
> {quote}
> col_name	data_type	comment
> id1                 	string              	full_
> id2                 	string              	full_
> id3                 	string              	full_
> Detailed Table Information	Table(tableName:test_comment, dbName:huanghuitest, owner:huanghui, createTime:1513095570, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:id1, type:string, comment:full_	name1), FieldSchema(name:id2, type:string, comment:full_
> {quote}
> *the rest of the content is lost*.
> The content is not really lost; it just cannot be displayed correctly, because Hive stores the result in a LazyStruct, and LazyStruct uses '\t' as the field separator:
> {code:java}
> // LazyStruct.java#parse()
> // Go through all bytes in the byte[]
>     while (fieldByteEnd <= structByteEnd) {
>       if (fieldByteEnd == structByteEnd || bytes[fieldByteEnd] == separator) {
>         // Reached the end of a field?
>         if (lastColumnTakesRest && fieldId == fields.length - 1) {
>           fieldByteEnd = structByteEnd;
>         }
>         startPosition[fieldId] = fieldByteBegin;
>         fieldId++;
>         if (fieldId == fields.length || fieldByteEnd == structByteEnd) {
>           // All fields have been parsed, or bytes have been parsed.
>           // We need to set the startPosition of fields.length to ensure we
>           // can use the same formula to calculate the length of each field.
>           // For missing fields, their starting positions will all be the same,
>           // which will make their lengths to be -1 and uncheckedGetField will
>           // return these fields as NULLs.
>           for (int i = fieldId; i <= fields.length; i++) {
>             startPosition[i] = fieldByteEnd + 1;
>           }
>           break;
>         }
>         fieldByteBegin = fieldByteEnd + 1;
>         fieldByteEnd++;
>       }
>       // (else branch, which advances fieldByteEnd past non-separator bytes, omitted here)
>     }
> {code}
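>
> To make the failure mode concrete, here is a minimal standalone sketch (illustration only, not Hive code; the class name and sample row are made up) that splits a serialized result row on '\t' the same way LazyStruct does, showing how the tab inside the comment truncates the displayed value:
> {code:java}
> import java.util.Arrays;
>
> public class TabSeparatorDemo {
>   public static void main(String[] args) {
>     // A DESCRIBE result row is serialized as: col_name \t data_type \t comment.
>     // Because the comment itself contains a tab, the row now has four pieces.
>     String row = "id1\tstring\tfull_\tname1";
>
>     // LazyStruct-style parsing: split on the separator and keep only as many
>     // fields as the schema declares (three here); anything beyond is not shown.
>     String[] pieces = row.split("\t");
>     String[] displayed = Arrays.copyOf(pieces, 3);
>
>     System.out.println(Arrays.toString(pieces));    // [id1, string, full_, name1]
>     System.out.println(Arrays.toString(displayed)); // [id1, string, full_]  -> "name1" never appears
>   }
> }
> {code}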



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
