hive-issues mailing list archives

From "Hive QA (JIRA)" <>
Subject [jira] [Commented] (HIVE-10291) Hive on Spark job configuration needs to be logged [Spark Branch]
Date Fri, 10 Apr 2015 06:22:12 GMT


Hive QA commented on HIVE-10291:

{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:

{color:green}SUCCESS:{color} +1 8710 tests passed

Test results:
Console output:
Test logs:

Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase

This message is automatically generated.

ATTACHMENT ID: 12724459 - PreCommit-HIVE-SPARK-Build

> Hive on Spark job configuration needs to be logged [Spark Branch]
> -----------------------------------------------------------------
>                 Key: HIVE-10291
>                 URL:
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>    Affects Versions: 1.1.0
>            Reporter: Szehon Ho
>            Assignee: Szehon Ho
>         Attachments: HIVE-10291-spark.patch, HIVE-10291.2-spark.patch
> In a Hive on MR job, all the job properties are put into the JobConf, which can then
> be viewed via the MR2 HistoryServer's Job UI.
> However, in Hive on Spark we are submitting a long-lived application. Hence, we only
> put the properties relevant to application submission (Spark and YARN properties) into
> the SparkConf. Only these are viewable through the Spark HistoryServer Application UI.
> It is the Hive application code (RemoteDriver, aka RemoteSparkContext) that is
> responsible for serializing and deserializing the job.xml per job (i.e., query) within
> the application. Thus, for supportability we also need to give an equivalent mechanism
> to print the job.xml per job.
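The mechanism described above — making the per-job configuration visible for supportability — can be sketched in plain Java. This is a minimal, hypothetical illustration, not the actual patch: `java.util.Properties` stands in for Hadoop's `JobConf`, and `JobConfLogger`/`dumpAsXml` are invented names; the real change lives in Hive's RemoteDriver code.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Properties;

// Sketch: serialize a per-job configuration in job.xml-style XML so it can
// be logged once per job (i.e., query), as the issue proposes. Properties
// is a stand-in for Hadoop's JobConf; all names here are illustrative.
public class JobConfLogger {

    // Render the configuration as XML, tagged with the job id, so the
    // resulting string can be written to the application log.
    public static String dumpAsXml(String jobId, Properties conf) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        conf.storeToXML(out, "job.xml for " + jobId);
        return out.toString("UTF-8");
    }

    public static void main(String[] args) throws IOException {
        Properties conf = new Properties();
        conf.setProperty("hive.execution.engine", "spark");
        conf.setProperty("spark.master", "yarn");
        // In the real driver this string would go to the job log rather than stdout.
        System.out.println(dumpAsXml("query-1", conf));
    }
}
```

Because the output is ordinary XML, it can be grepped or diffed across queries the same way an MR job.xml retrieved from the HistoryServer can.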

This message was sent by Atlassian JIRA
