hive-issues mailing list archives

From "Xuefu Zhang (JIRA)" <>
Subject [jira] [Commented] (HIVE-12650) Increase default value of hive.spark.client.server.connect.timeout to exceeds
Date Mon, 01 Feb 2016 14:26:39 GMT


Xuefu Zhang commented on HIVE-12650:

Hi [~lirui], in the context of Hive on Spark, the application master takes a container from
YARN. In a busy cluster, spark-submit may therefore have to wait a while before the
master is launched. On the other hand, Hive waits only for hive.spark.client.server.connect.timeout
before declaring that the remote driver is not connecting back. If the latter is shorter than
the time spark-submit may spend waiting, it's possible that Hive disconnects prematurely,
causing an unstable condition. There was a description of the problem on the user list.
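The race described above can be sketched as a simple comparison (a minimal illustration, not Hive code; the function name is hypothetical, and the 100s/90s figures are the defaults cited in this issue):

```python
def disconnects_prematurely(am_launch_wait_s: float,
                            hive_connect_timeout_s: float) -> bool:
    # Hive gives up after hive.spark.client.server.connect.timeout; if the
    # application master can take longer than that to launch, the remote
    # driver may connect back only after Hive has torn down its end.
    return hive_connect_timeout_s < am_launch_wait_s

# With the defaults quoted in this issue (90s Hive timeout vs. a 100s
# Spark-side wait), the premature disconnect is possible:
print(disconnects_prematurely(100.0, 90.0))
```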

I think we need at least to make hive.spark.client.server.connect.timeout greater than
the Spark-side wait time by default. To further guard against the problem, Hive could
increase hive.spark.client.server.connect.timeout automatically based on the value of
that Spark-side setting.

[~vanzin], please share your thoughts as well.

> Increase default value of hive.spark.client.server.connect.timeout to exceeds
> ----------------------------------------------------------------------------------------------------
>                 Key: HIVE-12650
>                 URL:
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 1.1.1, 1.2.1
>            Reporter: JoneZhang
>            Assignee: Xuefu Zhang
> I think hive.spark.client.server.connect.timeout should be set greater than
> the corresponding Spark wait time. The default value for that setting is 100s,
> while the default value for hive.spark.client.server.connect.timeout is 90s,
> which is not good. We can increase it to a larger value such as 120s.

This message was sent by Atlassian JIRA
