spark-user mailing list archives

From Eduardo Cusa <eduardo.c...@usmediaconsulting.com>
Subject Re: EC2 VPC script
Date Mon, 29 Dec 2014 17:48:27 GMT
I'm running the master branch.

Finally I made it work by changing all occurrences of the "public_dns_name"
property to "private_ip_address" in the spark_ec2.py script.

My VPC instances always have a null value in the "public_dns_name" property.

Now my script only works for VPC instances.
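The fallback described above can be sketched roughly as below. This is a hypothetical helper (get_address is not the actual spark_ec2.py code), assuming boto-style instance objects that expose public_dns_name and private_ip_address attributes:

```python
# Hedged sketch: prefer the public DNS name, but fall back to the private
# IP address when it is missing -- VPC instances often report an empty or
# None public_dns_name, which is what broke the original script.
def get_address(instance):
    # Assumption: instance mimics a boto EC2 instance object.
    if getattr(instance, "public_dns_name", None):
        return instance.public_dns_name
    return instance.private_ip_address


# Minimal stand-in objects for illustration:
class FakeInstance:
    def __init__(self, public_dns_name, private_ip_address):
        self.public_dns_name = public_dns_name
        self.private_ip_address = private_ip_address
```

A conditional fallback like this would handle both classic EC2 and VPC instances, instead of the blanket replacement that makes the script VPC-only.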

Regards
Eduardo

On Sat, Dec 20, 2014 at 7:53 PM, Nicholas Chammas <
nicholas.chammas@gmail.com> wrote:

> What version of the script are you running? What did you see in the EC2
> web console when this happened?
>
> Sometimes instances just don't come up in a reasonable amount of time and
> you have to kill and restart the process.
>
> Does this always happen, or was it just once?
>
> Nick
>
> On Thu, Dec 18, 2014 at 9:42 AM, Eduardo Cusa <
> eduardo.cusa@usmediaconsulting.com> wrote:
>
>> Hi guys.
>>
>> I ran the following command to launch a new cluster:
>>
>> ./spark-ec2 -k test -i test.pem -s 1  --vpc-id vpc-XXXXX --subnet-id
>> subnet-XXXXX launch  vpc_spark
>>
>> The instances started OK, but the command never ends, with the following
>> output:
>>
>>
>> Setting up security groups...
>> Searching for existing cluster vpc_spark...
>> Spark AMI: ami-5bb18832
>> Launching instances...
>> Launched 1 slaves in us-east-1a, regid = r-e9d603c4
>> Launched master in us-east-1a, regid = r-89d104a4
>> Waiting for cluster to enter 'ssh-ready' state...............
>>
>>
>> Any ideas what happened?
>>
>>
>> regards
>> Eduardo
>>
>>
>>
>
