From: Gary Malouf <malouf.gary@gmail.com>
To: Andrew Or <andrew@databricks.com>
Cc: user@spark.apache.org
Date: Wed, 6 Aug 2014 14:43:33 -0400
Subject: Re: Running a Spark Shell locally against EC2

This will be awesome - it has been one of the major issues for our analytics
team, who hope to use their own Python libraries.

On Wed, Aug 6, 2014 at 2:40 PM, Andrew Or <andrew@databricks.com> wrote:

> Hi Gary,
>
> This has indeed been a limitation of Spark, in that drivers and executors
> use random ephemeral ports to talk to each other. If you submit a Spark
> job from your local machine in client mode (meaning the driver runs on
> your machine), you have to open all TCP ports from your worker machines,
> which is not very secure. However, a very recent commit changes this
> (https://github.com/apache/spark/commit/09f7e4587bbdf74207d2629e8c1314f93d865999):
> you can now configure every port explicitly and open up only the ones you
> configured. This will be available in Spark 1.1.
>
> -Andrew
>
> 2014-08-06 8:29 GMT-07:00 Gary Malouf <malouf.gary@gmail.com>:
>
>> We have Spark 1.0.1 on Mesos deployed as a cluster in EC2. Our DevOps
>> lead tells me that Spark jobs cannot be submitted from local machines
>> because of the complexity of opening the right ports to the world, etc.
>>
>> Are other people running the shell locally in a production environment?
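For anyone else hitting this: below is a rough, untested sketch of what pinning
the ports from a local spark-shell/driver might look like once the change Andrew
linked is released. The property names (spark.driver.port, spark.fileserver.port,
spark.broadcast.port, spark.replClassServer.port, spark.blockManager.port) and
the Mesos master URL are assumptions on my part, so double-check them against
the 1.1 docs when they land.

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: pin the normally ephemeral ports to fixed values so the EC2
// security group can open just these from the worker subnet instead of all
// TCP ports. Property names and values are assumptions, not verified config.
val conf = new SparkConf()
  .setMaster("mesos://<ec2-master-host>:5050") // placeholder Mesos master URL
  .setAppName("shell-from-laptop")
  .set("spark.driver.port", "51000")          // executors -> driver scheduler
  .set("spark.fileserver.port", "51100")      // driver HTTP file server (jars/files)
  .set("spark.broadcast.port", "51200")       // HTTP broadcast server
  .set("spark.replClassServer.port", "51300") // spark-shell REPL class server
  .set("spark.blockManager.port", "51400")    // block manager on driver and executors

val sc = new SparkContext(conf)

The same settings could instead go in conf/spark-defaults.conf or be passed to
spark-shell with --conf key=value; either way, once every port is fixed, the
security group only needs inbound rules for those specific ports from the
workers rather than the whole ephemeral range.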
This will be awesome - it's been one of the major issu= es for our analytics team as they hope to use their own python libraries.


On Wed, A= ug 6, 2014 at 2:40 PM, Andrew Or <andrew@databricks.com>= wrote:
Hi Gary,

This has indeed been a limitation of Spark, in that drivers and executors = use random ephemeral ports to talk to each other. If you are submitting a S= park job from your local machine in client mode (meaning, the driver runs o= n your machine), you will need to open up all TCP ports from your worker ma= chines, a requirement that is not super secure. However, a very recent comm= it changes this (https://github.com/apac= he/spark/commit/09f7e4587bbdf74207d2629e8c1314f93d865999) in that you c= an now manually configure all ports and only open up the ones you configure= d. This will be available in Spark 1.1.

-Andrew


<= div class=3D"gmail_quote">2014-08-06 8:29 GMT-07:00 Gary Malouf <mal= ouf.gary@gmail.com>:

We have Spark 1.0.1 on Meso= s deployed as a cluster in EC2. =C2=A0Our Devops lead tells me that Spark j= obs can not be submitted from local machines due to the complexity of openi= ng the right ports to the world etc.

Are other people running the shell locally in a production e= nvironment?


--001a11c1db706b5da704fffa5971--