spark-user mailing list archives

From "ouruia@cnsuning.com" <our...@cnsuning.com>
Subject Re: Re: --jars option using hdfs jars has no effect in Spark standalone cluster deploy mode
Date Thu, 05 Nov 2015 08:36:42 GMT
Akhil,
    Here is an example. Reading HBase data works when the driver runs locally, but fails in cluster mode.
The dependencies:
    hbasejars=/home/spark/software/spark/ext/hbase/guava-12.0.1.jar,/home/spark/software/spark/ext/hbase/guice-3.0.jar,/home/spark/software/spark/ext/hbase/hbase-client-0.98.8.jar,/home/spark/software/spark/ext/hbase/hbase-common-0.98.8.jar,/home/spark/software/spark/ext/hbase/hbase-prefix-tree-0.98.8.jar,/home/spark/software/spark/ext/hbase/hbase-protocol-0.98.8.jar,/home/spark/software/spark/ext/hbase/hbase-server-0.98.8.jar,/home/spark/software/spark/ext/hbase/hbase-thrift-0.98.8.jar,/home/spark/software/spark/ext/hbase/htrace-core-2.04.jar,/home/spark/software/spark/ext/hbase/protobuf-java-2.5.0.jar,/home/spark/software/spark/ext/hbase/zookeeper-3.4.6.jar
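
As an aside, the comma-separated list that --jars expects can be built from a glob rather than typed by hand; a minimal sketch, assuming all (and only) the needed jars live in that directory:

    # Build the comma-separated --jars list from a directory glob.
    hbasejars=$(ls /home/spark/software/spark/ext/hbase/*.jar | paste -sd, -)
    echo "$hbasejars"   # sanity-check the list before submitting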
 
The submit command:
spark-submit --deploy-mode cluster --jars $hbasejars --class com.suning.spark.hbase.HBaseTest hdfs:///user/spark/HBaseTest-v1.jar
Then I checked the driver's work directory; no dependencies had been downloaded by the cluster driver program.

The driver stderr:

Finally, the driver launch command:
Launch Command: "/home/spark/software/java/bin/java" "-cp" "/home/spark/software/spark/ext/hadoop-lzo-0.4.20-SNAPSHOT.jar:/home/spark/software/spark/ext/mysql-connector-java-5.1.27-bin.jar:/home/spark/software/spark/ext/suning-hive-inputformat.jar:/home/spark/software/spark/ext/AuthHook.jar:/home/spark/software/spark/ext/cbtDriver.jar:/home/spark/software/spark-1.4.0.2-bin-2.4.0/sbin/../conf/:/home/spark/software/spark/lib/spark-assembly-1.4.0-hadoop2.4.0.jar:/home/spark/software/spark-1.4.0.2-bin-2.4.0/lib/datanucleus-core-3.2.10.jar:/home/spark/software/spark-1.4.0.2-bin-2.4.0/lib/datanucleus-api-jdo-3.2.6.jar:/home/spark/software/spark-1.4.0.2-bin-2.4.0/lib/datanucleus-rdbms-3.2.9.jar:/home/bigdata/software/hadoop/etc/hadoop/"
"-Xms4096M" "-Xmx4096M" "-Dspark.driver.memory=4096m" "-Dspark.eventLog.enabled=true" "-Dakka.loglevel=WARNING"
"-Dspark.executor.extraClassPath=/home/spark/software/spark/ext/hadoop-lzo-0.4.20-SNAPSHOT.jar:/home/spark/software/spark/ext/mysql-connector-java-5.1.27-bin.jar:/home/spark/software/spark/ext/suning-hive-inputformat.jar:/home/spark/software/spark/ext/AuthHook.jar:/home/spark/software/spark/ext/cbtDriver.jar"
"-Dspark.worker.cleanup.appDataTtl=3600" "-Dspark.shuffle.consolidateFiles=true" "-Dspark.executor.memory=4g"
"-Dspark.driver.extraJavaOptions=-XX:MaxPermSize=512m" "-Dspark.executor.extraLibraryPath=/home/bigdata/software/hadoop/lib/native/Linux-amd64-64/"
"-Dspark.serializer=org.apache.spark.serializer.KryoSerializer" "-Dspark.driver.extraClassPath=/home/spark/software/spark/ext/hadoop-lzo-0.4.20-SNAPSHOT.jar:/home/spark/software/spark/ext/mysql-connector-java-5.1.27-bin.jar:/home/spark/software/spark/ext/suning-hive-inputformat.jar:/home/spark/software/spark/ext/AuthHook.jar:/home/spark/software/spark/ext/cbtDriver.jar"
"-Dspark.master=spark://namenode1-sit.cnsuning.com:7077,namenode2-sit.cnsuning.com:7077" "-Dspark.driver.supervise=false"
"-Dspark.app.name=com.suning.spark.hbase.HBaseTest" "-Dspark.jars=file:/home/spark/software/spark/ext/hbase/guava-12.0.1.jar,file:/home/spark/software/spark/ext/hbase/guice-3.0.jar,file:/home/spark/software/spark/ext/hbase/hbase-client-0.98.8.jar,file:/home/spark/software/spark/ext/hbase/hbase-common-0.98.8.jar,file:/home/spark/software/spark/ext/hbase/hbase-prefix-tree-0.98.8.jar,file:/home/spark/software/spark/ext/hbase/hbase-protocol-0.98.8.jar,file:/home/spark/software/spark/ext/hbase/hbase-server-0.98.8.jar,file:/home/spark/software/spark/ext/hbase/hbase-thrift-0.98.8.jar,file:/home/spark/software/spark/ext/hbase/htrace-core-2.04.jar,file:/home/spark/software/spark/ext/hbase/protobuf-java-2.5.0.jar,file:/home/spark/software/spark/ext/hbase/zookeeper-3.4.6.jar,hdfs:///user/spark/HBaseTest-v1.jar"
"-Dspark.worker.cleanup.interval=600" "-Dspark.cores.max=2" "-Dspark.sql.shuffle.partitions=200"
"-Dspark.default.parallelism=20" "-Dspark.rpc.askTimeout=10" "-Dspark.eventLog.dir=hdfs://SuningHadoop2/sparklogs/sparklogshistorylog"
"-Dspark.worker.cleanup.enabled=true" "-Dspark.driver.cores=1" "-XX:MaxPermSize=512m" "org.apache.spark.deploy.worker.DriverWrapper"
"akka.tcp://sparkWorker@10.27.1.143:8079/user/Worker" "/data10/spark/work/driver-20151105162328-0412/HBaseTest-v1.jar"
"com.suning.spark.hbase.HBaseTest"

                     

Thanks
Best Regards

Ricky Ou (欧 锐)
 
From: ouruia@cnsuning.com
Date: 2015-11-04 13:59
To: Akhil Das
CC: user; 494165115
Subject: Re: Re: --jars option using hdfs jars has no effect in Spark standalone cluster deploy mode
Akhil,
    
     To use local jars, every node must have the same jar, because the driver will be
assigned to a random node; otherwise the driver log reports that no jar was found.
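
A minimal sketch of distributing the jars, assuming passwordless ssh and the hypothetical worker hostnames worker1 to worker3 (the real hosts would come from conf/slaves):

    # Copy the dependency jars to the same local path on every worker,
    # so the driver finds them no matter which node it is assigned to.
    for host in worker1 worker2 worker3; do
      ssh "$host" mkdir -p /home/spark/software/spark/ext/hbase
      scp /home/spark/software/spark/ext/hbase/*.jar "$host":/home/spark/software/spark/ext/hbase/
    done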


Ricky Ou (欧 锐)

 
From: Akhil Das
Date: 2015-11-02 17:59
To: ouruia@cnsuning.com
CC: user; 494165115
Subject: Re: --jars option using hdfs jars has no effect in Spark standalone cluster deploy mode
Can you try putting the jar locally, without HDFS?
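
Something like this minimal sketch, where /home/spark/jars/cypher.jar is a hypothetical local path, assuming the jar has already been copied there on every node:

    # Submit with a local --jars path instead of an hdfs:// URI; the
    # driver, wherever it lands, resolves it from that node's local disk.
    spark-submit --deploy-mode cluster \
      --jars /home/spark/jars/cypher.jar \
      --class com.suning.spark.jdbc.MysqlJdbcTest \
      hdfs:///user/spark/MysqlJdbcTest.jar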

Thanks
Best Regards

On Wed, Oct 28, 2015 at 8:40 AM, ouruia@cnsuning.com <ouruia@cnsuning.com> wrote:
hi all,
   When using the command:
        spark-submit --deploy-mode cluster --jars hdfs:///user/spark/cypher.jar --class com.suning.spark.jdbc.MysqlJdbcTest hdfs:///user/spark/MysqlJdbcTest.jar
the program throws an exception that it cannot find a class from cypher.jar, and the driver log shows that no --jars were downloaded in cluster mode. Is the only way to use a fat jar?
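
For comparison, a minimal sketch of the fat-jar route, assuming an sbt build with the sbt-assembly plugin (the build tool and the assembly jar name are assumptions):

    # Bundle the application and its dependencies into one jar,
    # then submit it without --jars at all.
    sbt assembly
    hdfs dfs -put -f target/scala-2.10/MysqlJdbcTest-assembly.jar /user/spark/
    spark-submit --deploy-mode cluster \
      --class com.suning.spark.jdbc.MysqlJdbcTest \
      hdfs:///user/spark/MysqlJdbcTest-assembly.jar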


Ricky Ou (欧 锐)