Hi everyone,
I am wondering why, when I run my program, I need to add the JDBC jar via --driver-class-path instead of treating it like a regular dependency with --jars.

My program works with this configuration:
./bin/spark-submit --packages org.apache.spark:spark-streaming-kafka_2.10:1.6.1 --master "local[4]" --class com.ants.util.kafka.PersistenceData --driver-class-path /Users/giaosudau/Downloads/postgresql-9.3-1102.jdbc41.jar /Users/giaosudau/workspace/KafkaJobs/target/scala-2.10/kafkajobs-prod.jar
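For comparison, this is the variant I would have expected to work, with the same driver jar passed through --jars instead (everything else unchanged); it is this form that fails for me:

```shell
# Same submit command as above, but with the PostgreSQL JDBC driver
# supplied via --jars rather than --driver-class-path.
# In my case this variant does not work, which is what prompts my question.
./bin/spark-submit \
  --packages org.apache.spark:spark-streaming-kafka_2.10:1.6.1 \
  --master "local[4]" \
  --class com.ants.util.kafka.PersistenceData \
  --jars /Users/giaosudau/Downloads/postgresql-9.3-1102.jdbc41.jar \
  /Users/giaosudau/workspace/KafkaJobs/target/scala-2.10/kafkajobs-prod.jar
```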

According to http://stackoverflow.com/a/30947090/523075 and http://stackoverflow.com/a/31012955/523075,
this is a bug related to the classloader.

I checked that https://github.com/apache/spark/pull/6900 was merged.

I am using Spark 1.6.1, and the issue says this was already fixed in 1.4.1 and 1.5.


Regards,
Chanh