hive-issues mailing list archives

From "Hive QA (JIRA)" <>
Subject [jira] [Commented] (HIVE-14901) HiveServer2: Use user supplied fetch size to determine #rows serialized in tasks
Date Wed, 22 Feb 2017 22:35:44 GMT


Hive QA commented on HIVE-14901:

Here are the results of testing the latest attachment:

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 10254 tests executed
*Failed tests:*
TestDerbyConnector - did not produce a TEST-*.xml file (likely timed out) (batchId=235)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=223)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query23] (batchId=223)

Test results:
Console output:
Test logs:

Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed

This message is automatically generated.

ATTACHMENT ID: 12854009 - PreCommit-HIVE-Build

> HiveServer2: Use user supplied fetch size to determine #rows serialized in tasks
> --------------------------------------------------------------------------------
>                 Key: HIVE-14901
>                 URL:
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2, JDBC, ODBC
>    Affects Versions: 2.1.0
>            Reporter: Vaibhav Gumashta
>            Assignee: Norris Lee
>         Attachments: HIVE-14901.1.patch, HIVE-14901.2.patch, HIVE-14901.3.patch, HIVE-14901.4.patch,
>                      HIVE-14901.5.patch, HIVE-14901.patch
> Currently, we use {{hive.server2.thrift.resultset.max.fetch.size}} to decide the max
> number of rows that we write in tasks. However, we should ideally use the user-supplied
> value (which can be extracted from the ThriftCLIService.FetchResults request parameter)
> to decide how many rows to serialize into a blob in the tasks. We should still use
> {{hive.server2.thrift.resultset.max.fetch.size}} as an upper bound, so that tasks and
> HS2 don't run out of memory.
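
The capping behavior the ticket describes could be sketched as below. This is an illustrative sketch only, not Hive's actual implementation; the class, method, and parameter names are hypothetical.

```java
// Hypothetical sketch of the fetch-size capping described in HIVE-14901:
// honor the client-requested fetch size, but bound it by the server-side
// maximum (hive.server2.thrift.resultset.max.fetch.size). Names are
// illustrative, not Hive's real API.
public class FetchSizeCap {

    /**
     * Returns the number of rows to serialize per blob: the client-requested
     * fetch size when positive, capped at the server maximum; otherwise the
     * server default.
     */
    static int effectiveFetchSize(int requestedFetchSize,
                                  int serverMaxFetchSize,
                                  int serverDefaultFetchSize) {
        if (requestedFetchSize <= 0) {
            // Client supplied no usable value; fall back to the server default.
            return serverDefaultFetchSize;
        }
        // Cap the client's request so tasks and HS2 stay within memory bounds.
        return Math.min(requestedFetchSize, serverMaxFetchSize);
    }

    public static void main(String[] args) {
        // Client asks for 50,000 rows but the server caps at 10,000.
        System.out.println(effectiveFetchSize(50_000, 10_000, 1_000)); // 10000
        // Client asks for 500 rows; under the cap, so honored as-is.
        System.out.println(effectiveFetchSize(500, 10_000, 1_000));    // 500
        // Client supplies no value; the server default applies.
        System.out.println(effectiveFetchSize(0, 10_000, 1_000));      // 1000
    }
}
```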

This message was sent by Atlassian JIRA
