whirr-user mailing list archives

From Andrei Savu <savu.and...@gmail.com>
Subject Re: hadoop issues on Ubuntu AMIs
Date Mon, 05 Dec 2011 11:19:27 GMT
Here you can find a list of Ubuntu AMIs packaged by Canonical:
http://cloud.ubuntu.com/ami/

Try a recipe like this:

whirr.cluster-name=hadoop-asavu

whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker

whirr.hadoop.install-function=install_cdh_hadoop
whirr.hadoop.configure-function=configure_cdh_hadoop

whirr.provider=aws-ec2
whirr.identity=${env:AWS_ACCESS_KEY_ID}
whirr.credential=${env:AWS_SECRET_ACCESS_KEY}

If you don't specify an AMI ID, Whirr will automatically select an Ubuntu
10.04 image for you.
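
If you want to pin the cluster to a specific Canonical image instead, a
minimal sketch looks like the lines below (the AMI ID is a placeholder;
substitute a real one from the list above):

whirr.location-id=us-east-1
whirr.image-id=us-east-1/ami-xxxxxxxx
whirr.hardware-id=m1.small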


>
> *Questions:*
>
>    1. Assuming everything is fine, where does Hadoop get installed on
>    the EC2 instance? What is the path?
>
>
Run jps as root and you should see the daemons running.
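
For example, on the namenode/jobtracker machine the output should look
roughly like this (a sketch; the exact daemons depend on the roles assigned
to that node, and the PIDs will differ):

$ sudo jps
1234 NameNode
2345 JobTracker
3456 Jps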

>
>    2. Even if Hadoop is successfully installed on the EC2 instance, are
>    the env variables properly set on that instance? Like, the path must be
>    updated in either its .bashrc or .bash_profile ... right?
>
>
Try to run "hadoop fs -ls /" as root.
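
If Hadoop is installed and the HDFS daemons are up, you should get a
listing back instead of a connection error, roughly like this (the exact
entries will vary):

$ hadoop fs -ls /
Found 2 items
drwxrwxrwx   - hdfs supergroup          0 2011-12-05 10:12 /tmp
drwxr-xr-x   - hdfs supergroup          0 2011-12-05 10:12 /user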

>
>    3. Am I missing any important step here which is not documented?
>
> Nope.


>
>    4. The stdout.log file on the instance says "reading package lists..".
>    I do not see logs about Hadoop getting installed... as I do for Java
>    ("setting up sun-java6-jdk" ...). Is there a way to enable verbose
>    logging? I am using m1.small hardware, so I am sure it will have
>    enough space to install Hadoop and run it.
>    5. If you know of any Ubuntu AMI on which you have consistently run
>    Hadoop, please let me know. I will definitely try it.
>
> I am asking the above questions because I feel I am not looking at the
> right place. After switching several AMIs, if I still see the same
> behavior, I must be looking at the wrong places.
>
> I am doing something stupid here. Not sure what. I am properly exporting
> the Hadoop conf dir. The SSH key pairs are good. I do not know why the
> connection gets refused, and I do not understand the last line
> (highlighted in yellow). Am I missing any important step?
>
> Also, the funny thing is this: I am able to see the dfshealth.jsp page in
> my Firefox browser (after running the proxy shell script). But when I
> click on the link to browse the filesystem, it is unable to display
> it... a connection-to-server problem!
>

Have you also configured the proxy in Firefox?
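
The "browse the filesystem" link redirects to the datanodes' internal EC2
hostnames, which only resolve if the browser sends its traffic (including
DNS lookups) through the SOCKS tunnel. A minimal sketch, assuming the
default Whirr setup (the script path follows the cluster name from the
recipe above, and 6666 is Whirr's default SOCKS port):

$ . ~/.whirr/hadoop-asavu/hadoop-proxy.sh
# then in Firefox: Edit -> Preferences -> Advanced -> Network -> Settings,
# choose "Manual proxy configuration", set SOCKS Host to localhost, port
# 6666, and make sure DNS goes through the proxy as well
# (network.proxy.socks_remote_dns=true in about:config).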


>
> Any suggestions/best practices?
>
> Thanks,
> PD
>
>
