hive-issues mailing list archives

From "Vaibhav Gumashta (JIRA)" <>
Subject [jira] [Updated] (HIVE-12049) Provide an option to write serialized thrift objects in final tasks
Date Wed, 10 Feb 2016 23:52:18 GMT


Vaibhav Gumashta updated HIVE-12049:
    Attachment: HIVE-12049.4.patch

Uploading an end-to-end patch here, which will need some testing and improvement.

> Provide an option to write serialized thrift objects in final tasks
> -------------------------------------------------------------------
>                 Key: HIVE-12049
>                 URL:
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2
>            Reporter: Rohit Dholakia
>            Assignee: Rohit Dholakia
>         Attachments: HIVE-12049.1.patch, HIVE-12049.2.patch, HIVE-12049.3.patch, HIVE-12049.4.patch
> For each fetch request to HiveServer2, we pay the penalty of deserializing the row objects
and translating them into a different representation suitable for the RPC transfer. In moderate-
to high-concurrency scenarios, this can result in significant CPU and memory wastage. By having
each task write the appropriate thrift objects to the output files, HiveServer2 can simply
stream a batch of rows on the wire without incurring any of the additional cost of deserialization
and translation. 
> This can be implemented by writing a new SerDe, which the FileSinkOperator can use to
write thrift-formatted row batches to the output file. Using the pluggable {{hive.query.result.fileformat}}
property, we can set it to SequenceFile and write a batch of thrift-formatted rows as a value
blob. The FetchTask can then simply read the blob and send it over the wire. On the client side,
the *DBC driver can read the blob, and since it is already in the format it expects, it can
continue building the ResultSet the way it does in the current implementation.
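
The batching idea described above can be sketched outside Hive: the write path packs a batch of rows into a single length-prefixed blob, and the client decodes that blob directly into rows, with no per-row re-translation on the server. This is a minimal, hypothetical illustration of the framing only — `RowBatchBlob` and its methods are invented names, not the SerDe in the patch, and real thrift serialization would replace the `DataOutputStream` framing used here.

```java
import java.io.*;
import java.util.*;

// Hypothetical sketch (not the actual Hive SerDe): a batch of rows is packed
// into one blob on the write path and streamed as-is; the client decodes it.
public class RowBatchBlob {

    // Serialize a batch of string rows into a single length-prefixed blob.
    static byte[] writeBatch(List<String> rows) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeInt(rows.size());                 // row-count header
            for (String row : rows) out.writeUTF(row); // each row, length-prefixed
            out.flush();
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Client side: decode the blob straight back into rows.
    static List<String> readBatch(byte[] blob) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(blob));
            int n = in.readInt();
            List<String> rows = new ArrayList<>();
            for (int i = 0; i < n; i++) rows.add(in.readUTF());
            return rows;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList("1\talice", "2\tbob");
        byte[] blob = writeBatch(rows);
        // The server would stream `blob` unchanged; the client rebuilds the rows.
        System.out.println(readBatch(blob));
    }
}
```

The point of the design is that the expensive per-row encoding happens once, in the task that writes the output file, rather than again in HiveServer2 on every fetch.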

This message was sent by Atlassian JIRA
