I am wondering why, when I run my program, I need to add the JDBC jar via --driver-class-path instead of treating it like a normal dependency with --jars.
My program works with this configuration:
./bin/spark-submit --packages org.apache.spark:spark-streaming-kafka_2.10:1.6.1 --master "local" --class com.ants.util.kafka.PersistenceData --driver-class-path /Users/giaosudau/Downloads/postgresql-9.3-1102.jdbc41.jar /Users/giaosudau/workspace/KafkaJobs/target/scala-2.10/kafkajobs-prod.jar
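For comparison, this is the variant I would have expected to work, simply swapping --driver-class-path for --jars (same paths as in the working command above; this is the form that fails in my setup):

```shell
# Expected (but non-working, in my case) variant: ship the JDBC driver
# with --jars like any other dependency instead of --driver-class-path.
./bin/spark-submit \
  --packages org.apache.spark:spark-streaming-kafka_2.10:1.6.1 \
  --master "local" \
  --class com.ants.util.kafka.PersistenceData \
  --jars /Users/giaosudau/Downloads/postgresql-9.3-1102.jdbc41.jar \
  /Users/giaosudau/workspace/KafkaJobs/target/scala-2.10/kafkajobs-prod.jar
```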
This looks like a bug related to the classloader. I am using Spark 1.6.1, and the issue I found says it was already fixed in 1.4.1 and 1.5.