hive-issues mailing list archives

From "Rui Li (JIRA)" <>
Subject [jira] [Commented] (HIVE-16071) Spark remote driver misuses the timeout in RPC handshake
Date Wed, 08 Mar 2017 03:25:38 GMT


Rui Li commented on HIVE-16071:

Hi [~xuefuz], in your example, if the SASL handshake doesn't finish in time, the client side
will exit after 1s. Even if netty can't detect the disconnection immediately, I don't think
it takes 1h to detect it. Besides, the cancelTask only closes the channel; it doesn't set a
failure on the Future. Therefore we can't really rely on the cancelTask to stop the waiting.
My proposal is:
# We need to reliably detect disconnection. I think netty is good enough for this (maybe with
some reasonable delay). But I'm also OK to keep the cancelTask to close the channel ourselves.
# We need to reliably cancel the Future when disconnection is detected. This can be done in
the SaslHandler, which monitors the channel inactive event.
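The cancellation pattern in step 2 can be sketched with plain JDK futures. This is an illustrative simplification, not the actual Rpc/SaslHandler code: the class name {{HandshakeMonitor}} and its methods are made up for the example. The point is that the handler fails the pending handshake Future as soon as the channel goes inactive, so callers blocked on {{get()}} return immediately instead of waiting out the full timeout.

```java
import java.util.concurrent.CompletableFuture;

// Simplified sketch of the proposal: the handshake result is a future that
// the connection handler fails as soon as the peer disconnects, instead of
// leaving callers to wait for hive.spark.client.server.connect.timeout.
class HandshakeMonitor {
    private final CompletableFuture<Void> handshakeDone = new CompletableFuture<>();

    // Called when SASL negotiation completes successfully.
    void onHandshakeComplete() {
        handshakeDone.complete(null);
    }

    // Analogous to SaslHandler#channelInactive: fail the future right away,
    // which is the part the cancelTask alone does not do.
    void onChannelInactive() {
        handshakeDone.completeExceptionally(
            new IllegalStateException("Client closed before SASL negotiation finished."));
    }

    CompletableFuture<Void> future() {
        return handshakeDone;
    }
}
```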

I also did some tests to verify this. I modified the client code so that it makes the connection
but doesn't finish the SASL handshake. I tried two ways to do this: one is the client never sends
the SaslMessage; the other is the client sends the SaslMessage and then just exits. The tests
were done in yarn-cluster mode.
# If no SaslMessage is sent, Hive will still wait for {{hive.spark.client.server.connect.timeout}},
even if cancelTask closes the channel after 1s.
# If the SaslMessage is sent, SaslHandler will detect the disconnection and cancel the Future,
no matter whether the cancelTask fires or not. Of course, this requires netty to detect the
disconnection.
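The second scenario (connect, send something, then exit before the handshake finishes) can be reproduced with a plain socket. This is an illustrative stand-in for the modified client, not the actual test code; the method name and the single-byte payload are made up for the example.

```java
import java.io.OutputStream;
import java.net.Socket;

// Illustrative version of the second test scenario: connect, send a few
// bytes standing in for the SaslMessage, then close before the handshake
// finishes, so the server-side channel goes inactive.
public class AbortingClient {
    static void connectAndAbort(String host, int port) throws Exception {
        try (Socket s = new Socket(host, port)) {
            OutputStream out = s.getOutputStream();
            out.write(new byte[] {0x01}); // stand-in for a partial SaslMessage
            out.flush();
        } // closing the socket triggers channelInactive on the server side
    }
}
```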

> Spark remote driver misuses the timeout in RPC handshake
> --------------------------------------------------------
>                 Key: HIVE-16071
>                 URL:
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>            Reporter: Chaoyu Tang
>            Assignee: Chaoyu Tang
>         Attachments: HIVE-16071.patch
> Based on its property description in HiveConf and the comments in HIVE-12650,
> hive.spark.client.connect.timeout is the timeout for the spark remote driver to make a socket
> connection (channel) to the RPC server. But currently it is also used by the remote driver for
> RPC client/server handshaking, which is not right. Instead, hive.spark.client.server.connect.timeout
> should be used, and it has already been used by the RPCServer in the handshaking.
> An error like the following is usually caused by this issue, since the default hive.spark.client.connect.timeout
> value (1000ms) used by the remote driver for handshaking is a little too short.
> {code}
> 17/02/20 08:46:08 ERROR yarn.ApplicationMaster: User class threw exception: java.util.concurrent.ExecutionException: Client closed before SASL negotiation finished.
> java.util.concurrent.ExecutionException: Client closed before SASL negotiation finished.
>         at io.netty.util.concurrent.AbstractFuture.get(
>         at org.apache.hive.spark.client.RemoteDriver.<init>(
>         at org.apache.hive.spark.client.RemoteDriver.main(
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(
>         at java.lang.reflect.Method.invoke(
>         at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$
> Caused by: Client closed before SASL negotiation finished.
>         at org.apache.hive.spark.client.rpc.Rpc$SaslClientHandler.dispose(
>         at org.apache.hive.spark.client.rpc.SaslHandler.channelInactive(
> {code}
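Until the fix lands, a workaround along these lines should help: raise the timeouts in hive-site.xml so the handshake is not cut off at 1s. The values below are illustrative, not recommendations; check the HiveConf descriptions for the defaults in your version.

```xml
<!-- Illustrative workaround: give the remote driver more time for the
     connection and handshake. Adjust values to your environment. -->
<property>
  <name>hive.spark.client.connect.timeout</name>
  <value>30000ms</value>
</property>
<property>
  <name>hive.spark.client.server.connect.timeout</name>
  <value>300000ms</value>
</property>
```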

This message was sent by Atlassian JIRA
