Date: Thu, 1 Sep 2016 08:22:22 +0000 (UTC)
From: "Sean Owen (JIRA)"
To: issues@spark.apache.org
Subject: [jira] [Commented] (SPARK-17336) Repeated calls sbin/spark-config.sh file Causes ${PYTHONPATH} Value duplicate

    [ https://issues.apache.org/jira/browse/SPARK-17336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15454745#comment-15454745 ]

Sean Owen commented on SPARK-17336:
-----------------------------------

[~axu4apache] I assume you modified the scripts to print PYTHONPATH in order to show this behavior.

Yeah, it looks like the file that sets it is repeatedly sourced, so this path is appended many times.

I don't know of a cleaner way to do this other than to see if the variable already contains the path and only append if it doesn't. Do you want to try that in a PR?

> Repeated calls sbin/spark-config.sh file Causes ${PYTHONPATH} Value duplicate
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-17336
>                 URL: https://issues.apache.org/jira/browse/SPARK-17336
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.0.0
>            Reporter: anxu
>
> When Spark is started with sbin/start-all.sh, sbin/spark-config.sh is sourced repeatedly. The relevant code in sbin/spark-config.sh is:
> {code:title=sbin/spark-config.sh|borderStyle=solid}
> # Add the PySpark classes to the PYTHONPATH:
> export PYTHONPATH="${SPARK_HOME}/python:${PYTHONPATH}"
> export PYTHONPATH="${SPARK_HOME}/python/lib/py4j-0.10.3-src.zip:${PYTHONPATH}"
> {code}
> As a result, {color:red}PYTHONPATH{color} ends up with duplicate entries.
> Example:
> {code:borderStyle=solid}
> axu4iMac:spark-2.0.0-hadoop2.4 axu$ sbin/start-all.sh | grep PYTHONPATH
> axu.print [Log] [6,16,31] [sbin/spark-config.sh] define PYTHONPATH
> axu.print [sbin/spark-config.sh] [Define Global] PYTHONPATH(1): [/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python:]
> axu.print [Log] [7,17,32] [sbin/spark-config.sh] define PYTHONPATH again
> axu.print [sbin/spark-config.sh] [Define Global] PYTHONPATH(2): [/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python/lib/py4j-0.10.1-src.zip:/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python:]
> axu.print [Log] [6,16,31] [sbin/spark-config.sh] define PYTHONPATH
> axu.print [sbin/spark-config.sh] [Define Global] PYTHONPATH(1): [/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python:/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python/lib/py4j-0.10.1-src.zip:/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python:]
> axu.print [Log] [7,17,32] [sbin/spark-config.sh] define PYTHONPATH again
> axu.print [sbin/spark-config.sh] [Define Global] PYTHONPATH(2): [/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python/lib/py4j-0.10.1-src.zip:/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python:/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python/lib/py4j-0.10.1-src.zip:/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python:]
> axu.print [Log] [6,16,31] [sbin/spark-config.sh] define PYTHONPATH
> axu.print [sbin/spark-config.sh] [Define Global] PYTHONPATH(1): [/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python:/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python/lib/py4j-0.10.1-src.zip:/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python:/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python/lib/py4j-0.10.1-src.zip:/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python:]
> axu.print [Log] [7,17,32] [sbin/spark-config.sh] define PYTHONPATH again
> axu.print [sbin/spark-config.sh] [Define Global] PYTHONPATH(2): [/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python/lib/py4j-0.10.1-src.zip:/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python:/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python/lib/py4j-0.10.1-src.zip:/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python:/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python/lib/py4j-0.10.1-src.zip:/Users/axu/code/axuProject/spark-2.0.0-hadoop2.4/python:]
> {code}
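
A minimal sketch of the guard suggested in the comment above, assuming sbin/spark-config.sh is sourced by bash and keeps the same two exports; the eventual PR may take a different approach:

{code:borderStyle=solid}
# Illustrative only: prepend each PySpark entry to PYTHONPATH only if it is
# not already present, so sourcing this file repeatedly does not duplicate it.
if [[ ":${PYTHONPATH}:" != *":${SPARK_HOME}/python:"* ]]; then
  export PYTHONPATH="${SPARK_HOME}/python:${PYTHONPATH}"
fi
if [[ ":${PYTHONPATH}:" != *":${SPARK_HOME}/python/lib/py4j-0.10.3-src.zip:"* ]]; then
  export PYTHONPATH="${SPARK_HOME}/python/lib/py4j-0.10.3-src.zip:${PYTHONPATH}"
fi
{code}

An alternative would be to set a sentinel variable (for example PYSPARK_PYTHONPATH_SET, a name used here only for illustration) the first time the file is sourced and skip both exports when it is already set, which avoids matching on path substrings entirely.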