spark-user mailing list archives

From: Andrew Or <and...@databricks.com>
Subject: Re: Problem creating EC2 cluster using spark-ec2
Date: Wed, 03 Dec 2014 22:11:36 GMT
This should be fixed now. Thanks for bringing this to our attention.

2014-12-03 13:31 GMT-08:00 Andrew Or <andrew@databricks.com>:

> Yeah this is currently broken for 1.1.1. I will submit a fix later today.
>
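> In the meantime, a workaround is to launch against a release whose prebuilt
> package has already been published. A minimal sketch, reusing Dave's command
> from below and assuming the --spark-version flag (double-check the exact flag
> name with ./spark-ec2 --help on your copy):
>
> $ ./spark-ec2 --key-pair=* --identity-file=* --slaves=1 \
>     --region=eu-west-1 --zone=eu-west-1a --instance-type=m3.medium \
>     --spark-version=1.1.0 --no-ganglia launch foocluster
>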
> 2014-12-02 17:17 GMT-08:00 Shivaram Venkataraman <shivaram@eecs.berkeley.edu>:
>
>> +Andrew
>>
>> Actually I think this is because we haven't uploaded the Spark binaries
>> to cloudfront / pushed the change to mesos/spark-ec2.
>>
>> Andrew, can you take care of this?
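>>
>> For anyone else hitting this in the meantime: spark-ec2 installs Spark on the
>> master by downloading a prebuilt package, so you can check whether a given
>> release has been published with something like the line below (the bucket and
>> file name are my best recollection of what the scripts fetch, so treat them
>> as an assumption):
>>
>> $ curl -sI https://s3.amazonaws.com/spark-related-packages/spark-1.1.1-bin-hadoop1.tgz | head -n 1
>>
>> A 200 response means the package is available; anything else would explain
>> the nearly empty /root/spark in the output quoted below.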
>>
>>
>>
>> On Tue, Dec 2, 2014 at 5:11 PM, Nicholas Chammas <nicholas.chammas@gmail.com> wrote:
>>
>>> Interesting. Do you have any problems when launching in us-east-1? What
>>> is the full output of spark-ec2 when launching a cluster? (Post it to a
>>> gist if it’s too big for email.)
>>>
>>> On Mon, Dec 1, 2014 at 10:34 AM, Dave Challis <dave.challis@aistemos.com> wrote:
>>>
>>>> I've been trying to create a Spark cluster on EC2 using the
>>>> documentation at https://spark.apache.org/docs/latest/ec2-scripts.html
>>>> (with Spark 1.1.1).
>>>>
>>>> Running the script successfully creates the EC2 instances, sets up HDFS,
>>>> and so on, but it appears to fail to copy across the actual files needed
>>>> to run Spark.
>>>>
>>>> I ran the following commands:
>>>>
>>>> $ cd ~/src/spark-1.1.1/ec2
>>>> $ ./spark-ec2 --key-pair=* --identity-file=* --slaves=1 \
>>>>     --region=eu-west-1 --zone=eu-west-1a --instance-type=m3.medium \
>>>>     --no-ganglia launch foocluster
>>>>
>>>> I see the following in the script's output:
>>>>
>>>> (instance and HDFS set up happens here)
>>>> ...
>>>> Persistent HDFS installed, won't start by default...
>>>> ~/spark-ec2 ~/spark-ec2
>>>> Setting up spark-standalone
>>>> RSYNC'ing /root/spark/conf to slaves...
>>>> *****.eu-west-1.compute.amazonaws.com
>>>> RSYNC'ing /root/spark-ec2 to slaves...
>>>> *****.eu-west-1.compute.amazonaws.com
>>>> ./spark-standalone/setup.sh: line 22: /root/spark/sbin/stop-all.sh: No
>>>> such file or directory
>>>> ./spark-standalone/setup.sh: line 27:
>>>> /root/spark/sbin/start-master.sh: No such file or directory
>>>> ./spark-standalone/setup.sh: line 33:
>>>> /root/spark/sbin/start-slaves.sh: No such file or directory
>>>> Setting up tachyon
>>>> RSYNC'ing /root/tachyon to slaves...
>>>> ...
>>>> (Tachyon setup happens here without any problem)
>>>>
>>>> I can ssh to the master (using the ./spark-ec2 login command), and /root
>>>> contains:
>>>>
>>>> $ ls /root
>>>> ephemeral-hdfs  hadoop-native  mapreduce  persistent-hdfs  scala
>>>> shark  spark  spark-ec2  tachyon
>>>>
>>>> If I look in /root/spark (where the sbin directory should be found),
>>>> it only contains a single 'conf' directory:
>>>>
>>>> $ ls /root/spark
>>>> conf
>>>>
>>>> Any idea why spark-ec2 might have failed to copy these files across?
>>>>
>>>> Thanks,
>>>> Dave
>>>>
>>>> ---------------------------------------------------------------------
>>>> To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
>>>> For additional commands, e-mail: user-help@spark.apache.org
>>>>
>>>>
>>>
>>
>
