I think this is indeed possible.

You need to set the HADOOP_CONF_DIR environment variable on the machine where you run the Java process that creates the SparkContext. The Hadoop configuration specifies the YARN ResourceManager address, and Spark will use that configuration to connect to the cluster.
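For example, here is a minimal sketch of what that could look like, assuming the cluster's client configs (core-site.xml, yarn-site.xml, etc.) have been copied to /etc/hadoop/conf on the local machine and HADOOP_CONF_DIR is exported to point there before launching the process; the class name and paths below are just illustrative, not from the original thread:

    // Assumes: export HADOOP_CONF_DIR=/etc/hadoop/conf  (configs copied from the cluster)
    // The driver runs locally and contacts the ResourceManager named in that config.
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class RemoteYarnJob {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                    .setAppName("NewJob")
                    .setMaster("yarn-client");   // yarn-client keeps the driver on the local machine
            JavaSparkContext sc = new JavaSparkContext(conf);
            // ... your job code here ...
            sc.stop();
        }
    }

(In yarn-cluster mode the driver itself runs inside the cluster, so submitting from an outside machine is usually done with spark-submit rather than by constructing the context directly.)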

mn

On Nov 21, 2014, at 8:10 AM, Prannoy <prannoy@sigmoidanalytics.com> wrote:

Hi Naveen,

I don't think this is possible. If you are setting the master with your cluster details, you cannot execute any job from your local machine. You have to execute the jobs inside your YARN machine so that SparkConf is able to connect with all the provided details.

If this is not the case, please give a detailed explanation of what exactly you are trying to do :)

Thanks.

On Fri, Nov 21, 2014 at 8:11 PM, Naveen Kumar Pokala [via Apache Spark User List] <[hidden email]> wrote:

Hi,

 

I am executing my Spark jobs on a YARN cluster by creating the conf object in the following way.

 

SparkConf conf = new SparkConf().setAppName("NewJob").setMaster("yarn-cluster");

 

Now I want to execute Spark jobs from my local machine. How can I do that?

 

What I mean is: is there a way to provide the IP address, port, and all the other details needed to connect to a YARN master on another network from my local Spark program?

 

-Naveen






