From: Grandl Robert <rgrandl@yahoo.com>
To: Grandl Robert, User <user@spark.apache.org>
Date: Thu, 12 Mar 2015 16:12:41 +0000 (UTC)
Subject: Re: run spark standalone mode

Sorry guys for this.

It seems that I need to start the thrift server with the --master spark://ms0220:7077 option, and now I can see applications running in my web UI.
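In case it helps anyone else, the command looks roughly like this (a sketch, assuming the standard layout of the spark-1.2.1-bin-hadoop2.4 distribution mentioned below; adjust the host name to your own master):

# Start the Thrift JDBC/ODBC server against the standalone master
# instead of letting it default to local mode
./sbin/start-thriftserver.sh --master spark://ms0220:7077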
Thanks,
Robert


On Thursday, March 12, 2015 10:57 AM, Grandl Robert <rgrandl@yahoo.com.INVALID> wrote:

I figured it out for spark-shell by passing the --master option (see the sketch at the end of this thread). However, I am still troubleshooting launching SQL queries. My current command is:

./bin/beeline -u jdbc:hive2://ms0220:10000 -n `whoami` -p ignored -f tpch_query10.sql


On Thursday, March 12, 2015 10:37 AM, Grandl Robert <rgrandl@yahoo.com.INVALID> wrote:

Hi guys,

I have a stupid question, but I am not sure how to get out of it.

I deployed Spark 1.2.1 on a cluster of 30 nodes. Looking at master:8088 I can see all the workers I have created so far. (I start the cluster with sbin/start-all.sh.)

However, when running a Spark SQL query or even spark-shell, I cannot see any job executing in the master web UI, but the jobs are able to finish. I suspect they are executing locally on the master, but I don't understand why/how, and why not on the slave machines.

My conf/spark-env.sh is as follows:

export SPARK_MASTER_IP="ms0220"
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/users/rgrandl/software/spark-1.2.1-bin-hadoop2.4/lib/snappy-java-1.0.4.1.jar
export SPARK_LOCAL_DIRS="/users/rgrandl/software/data/spark/local"
export SPARK_WORKER_MEMORY="52000M"
export SPARK_WORKER_INSTANCES="2"
export SPARK_WORKER_CORES="2"
export SPARK_WORKER_DIR="/users/rgrandl/software/data/spark/worker"
export SPARK_DAEMON_MEMORY="5200M"
#export SPARK_DAEMON_JAVA_OPTS="4800M"

Meanwhile, conf/slaves is populated with the list of machines used for workers. I should mention that the spark-env.sh and slaves files are deployed on all machines.

Thank you,
Robert
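For anyone who finds this thread later: the same fix applies to spark-shell, and the master URL can also be set once as a default instead of being passed on every invocation. A minimal sketch, assuming the spark://ms0220:7077 master URL used above:

# Point spark-shell at the standalone master explicitly
./bin/spark-shell --master spark://ms0220:7077

# Alternatively, set the default once in conf/spark-defaults.conf on the
# submitting machine, so spark-shell, spark-submit and the Thrift server
# all pick it up without the --master flag:
#   spark.master   spark://ms0220:7077

With either form, the application should show up under "Running Applications" in the standalone master web UI instead of running in local mode on the master machine.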