spark-user mailing list archives

From Wei Zhang <>
Subject Re: Driver pods stuck in running state indefinitely
Date Fri, 10 Apr 2020 02:49:21 GMT
Are there any internal domain name resolution issues?

> Caused by: spark-1586333186571-driver-svc.fractal-segmentation.svc
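One quick way to check that hypothesis is to try resolving the service name from inside an affected pod's network namespace (e.g. via `kubectl exec`). Below is a minimal Python probe, a sketch only: the service name and namespace are taken from the stack trace above, and the retry loop is there because in-cluster DNS failures are often intermittent rather than permanent.

```python
import socket
import time

def resolve(name, attempts=3, delay=0.5):
    """Try to resolve `name` a few times; return the unique addresses on
    the first success, or None if every attempt fails."""
    for attempt in range(1, attempts + 1):
        try:
            infos = socket.getaddrinfo(name, None)
            return sorted({info[4][0] for info in infos})
        except socket.gaierror as exc:
            print(f"attempt {attempt}: resolution failed: {exc}")
            time.sleep(delay)
    return None

# In-cluster, probe the driver service name from the executor stack trace:
# resolve("spark-1586333186571-driver-svc.fractal-segmentation.svc")
print(resolve("localhost", attempts=1))  # sanity check outside the cluster
```

If this returns None only on some nodes, that points at node-local DNS (kube-dns/CoreDNS reachability) rather than the driver itself.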

From: Prudhvi Chennuru (CONT) <>
Sent: Friday, April 10, 2020 2:44
To: user
Subject: Driver pods stuck in running state indefinitely


   We are running Spark batch jobs on K8s.
   Kubernetes version: 1.11.5
   Spark version: 2.3.2
   Docker version: 19.3.8

   Issue: A few driver pods are stuck in the running state indefinitely with the error:

   "The Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources."

Below is the log from an errored-out executor pod:

   Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:63)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:188)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:293)
at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:101)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:201)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$
at org.apache.spark.deploy.SparkHadoopUtil$$anon$
at Method)
... 4 more
Caused by: Failed to connect to spark-1586333186571-driver-svc.fractal-segmentation.svc:7078
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:198)
at org.apache.spark.rpc.netty.Outbox$$anon$
at org.apache.spark.rpc.netty.Outbox$$anon$
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$
Caused by: spark-1586333186571-driver-svc.fractal-segmentation.svc
at io.netty.util.internal.SocketUtils$
at io.netty.util.internal.SocketUtils$
at Method)
at io.netty.util.internal.SocketUtils.addressByName(
at io.netty.resolver.DefaultNameResolver.doResolve(
at io.netty.resolver.SimpleNameResolver.resolve(
at io.netty.resolver.SimpleNameResolver.resolve(
at io.netty.resolver.InetSocketAddressResolver.doResolve(
at io.netty.resolver.InetSocketAddressResolver.doResolve(
at io.netty.resolver.AbstractAddressResolver.resolve(
at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(
at io.netty.bootstrap.Bootstrap.access$000(
at io.netty.bootstrap.Bootstrap$1.operationComplete(
at io.netty.bootstrap.Bootstrap$1.operationComplete(
at io.netty.util.concurrent.DefaultPromise.notifyListener0(
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(
at io.netty.util.concurrent.DefaultPromise.notifyListeners(
at io.netty.util.concurrent.DefaultPromise.trySuccess(
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(
at io.netty.util.concurrent.SingleThreadEventExecutor$
at io.netty.util.concurrent.DefaultThreadFactory$
... 1 more


The stuck driver pod and the errored executor pods are running on different nodes, but I observed
that other driver and executor pods ran successfully on the same nodes where the executor
pods errored out.
When I checked the Calico pods on those nodes I didn't see any network-related errors, but I do
see the error below in the kube-proxy pods, and I can also see that a service was created for
each stuck driver pod.

E0408 06:39:01.649573       1 proxier.go:1306] Failed to execute iptables-restore: exit status 1 (iptables-restore: line 988 failed
    I0408 06:39:01.649619       1 proxier.go:1308] Closing local ports after iptables-restore
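That kube-proxy error does at least name a concrete iptables-restore input line. A small, hypothetical helper for pulling the failing line number out of such entries when scanning kube-proxy logs in bulk (the regex is an assumption based on the message shape above, not any kube-proxy API):

```python
import re

# Matches the failure detail kube-proxy logs when iptables-restore exits
# non-zero, e.g. "... exit status 1 (iptables-restore: line 988 failed".
FAILED_LINE = re.compile(r"iptables-restore: line (\d+) failed")

def failing_iptables_line(log_entry):
    """Return the iptables-restore input line number that failed, or None."""
    match = FAILED_LINE.search(log_entry)
    return int(match.group(1)) if match else None

entry = ("E0408 06:39:01.649573       1 proxier.go:1306] Failed to execute "
         "iptables-restore: exit status 1 (iptables-restore: line 988 failed")
print(failing_iptables_line(entry))  # 988
```

Comparing that line against the node's rule dump (e.g. `iptables-save` output) would show which rule kube-proxy is failing to install, though that only matters for ClusterIP-style routing.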


My understanding was that, since the services created for the drivers are headless, connectivity
between executor and driver is established by the executor hitting the driver pod IP directly,
with no involvement of kube-proxy in routing the request to the driver pod.
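Given that, once the DNS name resolves, the remaining thing to rule out is plain TCP reachability from the executor's node to the driver pod itself. A minimal connect probe, a sketch only: port 7078 comes from the "Failed to connect" line in the trace, and the pod IP in the comment is hypothetical.

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a plain TCP connection to host:port succeeds
    within the timeout, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical usage once the driver pod IP is known (port from the log):
# can_connect("10.2.3.4", 7078)
```

If DNS resolves but this probe fails from the executor's node, the problem is pod-to-pod networking (CNI) rather than service routing.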

Is there a way to find the root cause of this issue? If it's not K8s-related, could it be a
kernel or Docker problem, or a mismatch among the versions I am using?

The cluster was created 35 days ago and I have been seeing this issue for the past 4 days.

Appreciate the help.

Prudhvi Chennuru.

