spark-dev mailing list archives

From Nan Zhu <>
Subject how to run a program compiled with Spark 1.0.0 on a branch-1.0-jdbc cluster
Date Sun, 13 Jul 2014 12:56:37 GMT
Hi, all  

I’m trying out the JDBC server, so the cluster is running a build compiled from branch-1.0-jdbc.

Unfortunately (and as expected), it cannot run programs compiled against the Spark 1.0 dependency
(i.e., downloaded from Maven).

1. The first error I hit was a serialVersionUID mismatch in ExecutorState.

I resolved it by explicitly declaring a serialVersionUID in ExecutorState.scala and recompiling.
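For reference, the usual way to pin the UID in Scala is the @SerialVersionUID annotation on the class. The sketch below uses a hypothetical ExecutorInfo case class (not Spark's actual class) just to show that the annotation fixes the UID both sides of the wire must agree on:

```scala
import java.io.{ByteArrayOutputStream, ObjectOutputStream, ObjectStreamClass}

// Hypothetical stand-in for a Spark-internal class: pinning the UID means
// a recompile with unrelated changes no longer breaks wire compatibility.
@SerialVersionUID(1L)
case class ExecutorInfo(id: Int, state: String)

object SerialUidDemo {
  def main(args: Array[String]): Unit = {
    // Case classes are Serializable by default, so this round-trips fine.
    val oos = new ObjectOutputStream(new ByteArrayOutputStream())
    oos.writeObject(ExecutorInfo(1, "RUNNING"))
    oos.close()

    // The annotation overrides the compiler-generated UID with 1L.
    println(ObjectStreamClass.lookup(classOf[ExecutorInfo]).getSerialVersionUID)
  }
}
```

Without the annotation, the JVM derives the UID from the class shape, so two builds of the "same" class can disagree and fail deserialization exactly as described above.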

2. Then I started the program compiled against Spark 1.0, and got:

14/07/13 05:08:11 WARN AppClient$ClientActor: Could not connect to akka.tcp://sparkMaster@172.31.*.*:*:
java.util.NoSuchElementException: key not found: 6  
14/07/13 05:08:11 WARN AppClient$ClientActor: Connection to akka.tcp://sparkMaster@172.31.*.*:*
failed; waiting for master to reconnect...

I don’t understand where "key not found: 6" comes from.

I also tried starting the JDBC server against a Spark 1.0 cluster. After resolving the serialVersionUID
mismatch, when I use beeline to run "show tables;", some executors get lost and tasks fail for an
unknown reason.

Can anyone give some suggestions on how to make a Spark 1.0 cluster work with the JDBC server?

(Maybe I need to set up an internal Maven repo and point all Spark dependencies at it?)
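If you go that route, the build change itself is small. A minimal sketch for an sbt project, assuming a hypothetical internal repository URL and a hypothetical version string published from the JDBC branch (both are placeholders, not real coordinates):

```scala
// build.sbt — hypothetical: resolve Spark from an internal repo that hosts
// artifacts published from the branch-1.0-jdbc build, instead of Maven Central.
resolvers += "Internal Repo" at "https://repo.example.com/releases"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.0-jdbc"
```

The point is just that the application must be compiled against the exact same Spark build the cluster is running, rather than the stock 1.0.0 artifacts from Maven Central.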


Nan Zhu
