hive-issues mailing list archives

From "Xuefu Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-12650) Increase default value of hive.spark.client.server.connect.timeout to exceeds spark.yarn.am.waitTime
Date Wed, 03 Feb 2016 02:33:39 GMT

    [ https://issues.apache.org/jira/browse/HIVE-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15129655#comment-15129655 ]

Xuefu Zhang commented on HIVE-12650:
------------------------------------

Hi [~lirui], thanks for the info. It's good that spark-submit is killed when Hive times out.
Now the user's problem seems more interesting, though we cannot do much unless we have more
information.

"Client closed before SASL negotiation finished" could be caused by the fact that AM tries
to connect back to Hive, but Hive has already timed out. While Spark-submit is killed, is
possible that YARN RM still has the request which will be eventually served?
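
One way to check for such an orphaned request is the standard YARN CLI; a quick sketch (the
application ID below is a placeholder, and the orphaned AM could show up under either state):

    # See whether YARN is still holding or running an application for the
    # killed spark-submit:
    yarn application -list -appStates ACCEPTED,RUNNING

    # If an orphaned Remote Spark Driver AM shows up, kill it explicitly
    # (placeholder application ID):
    yarn application -kill application_1454457000000_0001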


> Increase default value of hive.spark.client.server.connect.timeout to exceeds spark.yarn.am.waitTime
> ----------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-12650
>                 URL: https://issues.apache.org/jira/browse/HIVE-12650
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 1.1.1, 1.2.1
>            Reporter: JoneZhang
>            Assignee: Xuefu Zhang
>
> I think hive.spark.client.server.connect.timeout should be set greater than spark.yarn.am.waitTime.
> The default value for spark.yarn.am.waitTime is 100s, while the default value for
> hive.spark.client.server.connect.timeout is only 90s, so Hive can give up before the AM's wait
> time has even elapsed. We can increase it to a larger value such as 120s.
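
For reference, a minimal sketch of the proposed change as it would look in hive-site.xml;
120000ms mirrors the 120s suggested above and is an illustration, not a shipped default:

    <!-- hive-site.xml: raise the Hive-side timeout above spark.yarn.am.waitTime (100s) -->
    <property>
      <name>hive.spark.client.server.connect.timeout</name>
      <value>120000ms</value>
      <description>How long the Remote Spark Driver may take to connect back to Hive.</description>
    </property>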



