hive-issues mailing list archives

From "Sahil Takiar (JIRA)" <>
Subject [jira] [Commented] (HIVE-20512) Improve record and memory usage logging in SparkRecordHandler
Date Thu, 18 Oct 2018 15:33:00 GMT


Sahil Takiar commented on HIVE-20512:

A few comments:
(1) I think the logging should be done in a separate thread so that we don't have to invoke
{{logMemoryInfo()}} for each record, which can add significant overhead to per-record processing
(2) I think we should start with a lower interval, something like 15 seconds
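For (1), a sketch of what that separate thread could look like (class, method, and thread names here are hypothetical, not from the patch), using a {{ScheduledExecutorService}} so the per-record path only bumps a counter:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class PeriodicMemoryLogger implements AutoCloseable {
  private final AtomicLong rowCount = new AtomicLong();
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor(r -> {
        Thread t = new Thread(r, "memory-info-logger");
        t.setDaemon(true); // never keep the task JVM alive just for logging
        return t;
      });

  public PeriodicMemoryLogger(long intervalSeconds) {
    scheduler.scheduleAtFixedRate(this::logMemoryInfo,
        intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
  }

  /** Called once per record; a counter increment, cheap compared to logging. */
  public void recordProcessed() {
    rowCount.incrementAndGet();
  }

  public long getRowCount() {
    return rowCount.get();
  }

  private void logMemoryInfo() {
    // Stand-in for the real logMemoryInfo(); the handler would use its LOG.
    Runtime rt = Runtime.getRuntime();
    long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
    System.out.println("processed " + rowCount.get()
        + " rows, used memory = " + usedMb + " MB");
  }

  @Override
  public void close() {
    scheduler.shutdownNow();
  }
}
```

The record-processing loop then calls only {{recordProcessed()}}, so the logging interval is decoupled from the record rate.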

You could try adding a unit test that logs to a string buffer, and then parse that buffer
in the test. However, I don't think it's necessary.
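If someone does attempt that optional test, one way to capture log output in memory is a custom {{java.util.logging}} handler (a sketch only; Hive itself logs through SLF4J/Log4j, where an equivalent in-memory appender would be needed instead):

```java
import java.util.logging.Handler;
import java.util.logging.LogRecord;

// Hypothetical handler that accumulates formatted log messages in memory,
// so a test can make assertions against what was logged.
public class StringBufferHandler extends Handler {
  private final StringBuilder buffer = new StringBuilder();

  @Override
  public void publish(LogRecord record) {
    buffer.append(record.getMessage()).append('\n');
  }

  @Override
  public void flush() {
    // nothing buffered outside the StringBuilder
  }

  @Override
  public void close() {
    // nothing to release
  }

  public String contents() {
    return buffer.toString();
  }
}
```

A test would attach this handler to the logger under test, process some records, and then assert that the expected "processed N rows" lines appear in {{contents()}}.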

CC: [~asinkovits]

> Improve record and memory usage logging in SparkRecordHandler
> -------------------------------------------------------------
>                 Key: HIVE-20512
>                 URL:
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Bharathkrishna Guruvayoor Murali
>            Priority: Major
>         Attachments: HIVE-20512.1.patch
> We currently log memory usage and # of records processed in Spark tasks, but we should
> improve the methodology for how frequently we log this info. Currently we use the following:
> {code:java}
> private long getNextLogThreshold(long currentThreshold) {
>   // A very simple counter to keep track of the number of rows processed by the
>   // reducer. It dumps every 1 million rows, and more quickly before that.
>   if (currentThreshold >= 1000000) {
>     return currentThreshold + 1000000;
>   }
>   return 10 * currentThreshold;
> }
> {code}
> The issue is that, after a while, the 10x growth factor means you have to process
> a huge # of records before the next log line is triggered.
> A better approach would be to log this info at a given time interval. This would help in
> debugging tasks that are seemingly hung.
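For reference, plugging the quoted {{getNextLogThreshold}} into a small standalone driver (assuming a starting threshold of 1, which matches the "quickly before that" behavior) shows how the log points thin out:

```java
public class ThresholdDemo {
  // Copied from the quoted SparkRecordHandler logic above.
  static long getNextLogThreshold(long currentThreshold) {
    if (currentThreshold >= 1000000) {
      return currentThreshold + 1000000;
    }
    return 10 * currentThreshold;
  }

  public static void main(String[] args) {
    long t = 1; // assumed starting threshold
    for (int i = 0; i < 10; i++) {
      System.out.print(t + " ");
      t = getNextLogThreshold(t);
    }
    // prints: 1 10 100 1000 10000 100000 1000000 2000000 3000000 4000000
  }
}
```

Six log points cover the first 100,000 rows, after which each new line requires roughly another million rows, however long those take to process.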

This message was sent by Atlassian JIRA
