hive-issues mailing list archives

From "Xuefu Zhang (JIRA)" <>
Subject [jira] [Commented] (HIVE-12650) Increase default value of hive.spark.client.server.connect.timeout to exceeds
Date Mon, 01 Feb 2016 19:29:39 GMT


Xuefu Zhang commented on HIVE-12650:

Thanks for the clarification, [~vanzin]. I agree with you. Do you know what factors (such
as a lack of available executors) might make the Spark AM wait for the SparkContext to be
initialized for a longer period of time (say, a minute)? The problem seems to be that Hive
times out first while the AM still appears to be running, waiting for the context to be
initialized. The AM will eventually either get the context initialized or fail when a timeout
occurs. This might look a bit confusing. I think that if we make Hive wait longer than that,
then we can avoid the scenario. Any further thoughts?

> Increase default value of hive.spark.client.server.connect.timeout to exceeds
> ----------------------------------------------------------------------------------------------------
>                 Key: HIVE-12650
>                 URL:
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 1.1.1, 1.2.1
>            Reporter: JoneZhang
>            Assignee: Xuefu Zhang
> I think hive.spark.client.server.connect.timeout should be set greater than
> The default value for
> is 100s, and the default value for hive.spark.client.server.connect.timeout
> is 90s, which is not good. We can increase it to a larger value such as 120s.
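
The change suggested in the quoted description amounts to a one-property edit in hive-site.xml. As a minimal sketch (assuming the property takes a duration value with a time-unit suffix, as Hive timeout properties generally do):

```xml
<!-- hive-site.xml: sketch of the proposed change; 120s follows the
     suggestion above. The default is 90s. -->
<property>
  <name>hive.spark.client.server.connect.timeout</name>
  <value>120000ms</value>
</property>
```

The same value can also be set per session with `set hive.spark.client.server.connect.timeout=120000ms;` before launching a Hive on Spark query.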

This message was sent by Atlassian JIRA
