spark-user mailing list archives

From <>
Subject Cancelled Key exception
Date Tue, 23 Sep 2014 00:43:39 GMT
Hi Sparklers,

I was wondering if someone else has also encountered this... (Actually, I am not even sure
if this is an issue.)

I have a Spark job that reads data from HBase and does a bunch of transformations:

sparkContext.newAPIHadoopRDD -> flatMapToPair -> groupByKey -> mapValues

After this I do a take(10) on the result and print it to the log file.
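For anyone unfamiliar with the chain above, here is a minimal plain-Python sketch of what those transformations do semantically (the rows, keys, and the event-counting step are made up for illustration; the real job reads records via newAPIHadoopRDD from HBase):

```python
from collections import defaultdict

# Hypothetical rows standing in for records read from HBase via newAPIHadoopRDD.
rows = [
    {"user": "a", "events": ["click", "view"]},
    {"user": "b", "events": ["view"]},
    {"user": "a", "events": ["click"]},
]

# flatMapToPair: emit one (key, value) pair per event in each row
pairs = [(row["user"], ev) for row in rows for ev in row["events"]]

# groupByKey: collect all values that share a key
grouped = defaultdict(list)
for k, v in pairs:
    grouped[k].append(v)

# mapValues: transform each grouped value, keys unchanged (here: count events)
counts = {k: len(vs) for k, vs in grouped.items()}

# take(10): grab up to ten results to print in the log
print(list(counts.items())[:10])
```

In the real job the mapValues step does something more involved, but the shape of the pipeline is the same.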

I always get the results, and I am 100% sure they are correct. However, every once in a while
I see the following in the log file even though the results are fine:

14/09/22 20:16:22 INFO CoarseGrainedExecutorBackend: Driver commanded a shutdown
14/09/22 20:16:22 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
14/09/22 20:16:22 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down;
proceeding with flushing remote transports.
14/09/22 20:16:22 INFO Remoting: Remoting shut down
14/09/22 20:16:22 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
14/09/22 20:16:24 INFO ConnectionManager: Key not valid ?
14/09/22 20:16:24 INFO ConnectionManager: Removing SendingConnection to ConnectionManagerId(tr-pan-xxxx-04,55008)
14/09/22 20:16:24 INFO ConnectionManager: Removing ReceivingConnection to ConnectionManagerId(tr-pan-xxxx-04,55008)
14/09/22 20:16:24 ERROR ConnectionManager: Corresponding SendingConnectionManagerId not found
14/09/22 20:16:24 INFO ConnectionManager: key already cancelled ?

The Spark dashboard also does not show any errors or failed executors.

Could someone shed some light on what this actually means, and on whether we should be
concerned about it?

I am running a Cloudera CDH 5.1.2 cluster with, I believe, Spark v1.0.0.

The Spark job is submitted to YARN.

