spark-user mailing list archives

From Egor Pahomov <pahomov.e...@gmail.com>
Subject Re: SPARK 1.1.0 on yarn-cluster and external JARs
Date Thu, 25 Sep 2014 14:29:33 GMT
SparkContext.addJar()?
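
(A minimal sketch of that suggestion, assuming the external jars have already
been uploaded to HDFS; the paths and the app name below are placeholders.)

  import org.apache.spark.{SparkConf, SparkContext}

  val conf = new SparkConf().setAppName("job-with-external-jars")
  val sc = new SparkContext(conf)

  // addJar makes each jar available on the executors' classpath; with an
  // hdfs:// URI the executors fetch it from HDFS rather than having the
  // client upload it alongside the application.
  sc.addJar("hdfs:///libs/mongo-java-driver.jar")
  sc.addJar("hdfs:///libs/algebird-core_2.10.jar")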

Why didn't you like the fat jar way?
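
(For reference, the "fat jar" way here means bundling the job together with its
external dependencies into a single assembly, e.g. with the sbt-assembly
plugin; a sketch, with artifact names and versions purely illustrative:

  // build.sbt
  name := "spark-job"
  scalaVersion := "2.10.4"

  libraryDependencies ++= Seq(
    "org.apache.spark" %% "spark-core"        % "1.1.0" % "provided",
    "org.mongodb"       % "mongo-java-driver" % "2.12.4",
    "com.twitter"      %% "algebird-core"     % "0.7.0"
  )

Running `sbt assembly` then produces one jar that spark-submit ships as a whole.)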

2014-09-25 16:25 GMT+04:00 rzykov <rzykov@gmail.com>:

> We build some Spark jobs with external jars; I compile the jobs by including
> them in one assembly.
> But we are looking for an approach that puts all the external jars into HDFS
> instead.
>
> We have already put the Spark jar in an HDFS folder and set the SPARK_JAR
> variable accordingly.
> What is the best way to do the same for other external jars (MongoDB, algebird
> and so on)?
>
> Thanks in advance
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/SPARK-1-1-0-on-yarn-cluster-and-external-JARs-tp15136.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
> For additional commands, e-mail: user-help@spark.apache.org
>
>


-- 
Sincerely yours,
Egor Pakhomov
Scala Developer, Yandex
