whirr-user mailing list archives

From a b <autohan...@yahoo.com>
Subject Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Date Tue, 06 Aug 2013 23:46:47 GMT
ok, then - i'll give up for today - i'd tweak, but i've no idea what to do. thanks so much
for helping me - sorry, i'm having such a struggle getting out of the nest.



________________________________
 From: Andrew Bayer <andrew.bayer@gmail.com>
To: user@whirr.apache.org; a b <autohandle@yahoo.com> 
Sent: Tuesday, August 6, 2013 4:22 PM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
 


That suggests you need to tweak the SSH settings on the image.
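
A few things worth double-checking on the AMI before re-baking it - these are generic Ubuntu/EC2 checks, not anything Whirr-specific:

    # key-based logins must still be allowed for the default user
    grep -E 'PubkeyAuthentication|PasswordAuthentication' /etc/ssh/sshd_config
    # cloud-init is what injects the EC2 keypair into the default user's
    # authorized_keys at boot, so it needs to remain installed and enabled
    dpkg -l cloud-init
    # the stock login user jclouds expects on Ubuntu images (loginUser=ubuntu above)
    id ubuntu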

I'll try to make a run at testing your config tomorrow.

A.



On Tue, Aug 6, 2013 at 4:20 PM, a b <autohandle@yahoo.com> wrote:

>so i had an inspiration - i booted an ubuntu server, installed oracle java 7 by hand, and then saved it as a new private ec2 ami. i put the id for the ami in the properties file and tried to launch - for both the 0.8.2 version and the current version i got a packet size error - and both hung in some kind of a slow loop, repeating 7 tries to get the right packet size. should launching from my own ami work?
>
>
>
>
>
>
>________________________________
> From: Andrew Bayer <andrew.bayer@gmail.com>
>To: user@whirr.apache.org; a b <autohandle@yahoo.com> 
>Sent: Tuesday, August 6, 2013 12:11 PM
>
>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
> 
>
>
>On the no private key issue - that's odd. Looks like a race condition of some sort, but I'm not sure how it's happening. That does look like a bug of some sort in jclouds, I think. I'll dig into it when I get a chance. But it doesn't seem to be actually blocking anything - the instances are still getting created, and the initial login to the instances is working fine.
>
>
>The really weird thing is that one instance is behaving differently than the other - apt-get update is getting repo URLs at archive.ubuntu.com on the bad one, and at us-east-1.ec2.archive.ubuntu.com on the other. That's just strange. Mind trying with a larger size than t1.micro, just in case that's being weird for some reason?
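>
>If you do, that should just be the whirr.hardware-id line in the properties file - m1.small below is only an example of the next size up from t1.micro:
>
>    whirr.hardware-id=m1.small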
>
>
>A.
>
>
>
>On Tue, Aug 6, 2013 at 11:35 AM, a b <autohandle@yahoo.com> wrote:
>
>>can you help me move forward?
>>
>>o i can't use the 0.8 version - it doesn't install a headless version on a server.
>>o i can't modify the current version - unmodified, it gets a: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: no private key configured
>>
>>
>>o maybe with some coaching - i'm a git beginner, i can check out a 0.8 version and modify that? or is there a better path? (rough steps below)
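>>
>>on that last point, a minimal sketch of what checking out the 0.8 line might look like - assuming the 0.8 code lives on a branch named something like branch-0.8 (git branch -r will show the exact name):
>>
>>    # clone the Apache Whirr repo and list its remote branches
>>    git clone git://git.apache.org/whirr.git
>>    cd whirr
>>    git branch -r
>>    # work on a local branch tracking the (assumed) 0.8 branch
>>    git checkout -b my-0.8 origin/branch-0.8
>>    # edit core/src/main/resources/functions/... as needed, then rebuild,
>>    # skipping the unit tests that refuse to run as root
>>    mvn clean install -DskipTests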
>>
>>
>>
>>________________________________
>> From: Andrew Bayer <andrew.bayer@gmail.com>
>>To: a b <autohandle@yahoo.com> 
>>Cc: "user@whirr.apache.org" <user@whirr.apache.org> 
>>Sent: Tuesday, August 6, 2013 11:00 AM
>>
>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>> 
>>
>>
>>Random weird luck is always real. =) If you'd like to open a JIRA for the Oracle JDK download issues, that'd be appreciated - I'll see what I can do with it.
>>
>>
>>A.
>>
>>
>>On Tue, Aug 6, 2013 at 10:59 AM, a b <autohandle@yahoo.com> wrote:
>>
>>>so now i feel like an idiot, it is running now:
>>>
>>>ab@ubuntu12-64:~$ rm whirr.log 
>>>ab@ubuntu12-64:~$ !1544
>>>
>>>~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties

>>>Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXX
>>>Started cluster of 2 instances
>>>Cluster{instances=[Instance{roles=[hadoop-datanode, hadoop-tasktracker], publicIp=54.224.175.65,
privateIp=10.179.37.185, id=us-east-1/i-5fc0083f, nodeMetadata={id=us-east-1/i-5fc0083f, providerId=i-5fc0083f,
name=hadoop-ec2-5fc0083f, location={scope=ZONE, id=us-east-1d, description=us-east-1d, parent=us-east-1,
iso3166Codes=[US-VA]}, group=hadoop-ec2, imageId=us-east-1/ami-23d9a94a, os={family=ubuntu,
arch=paravirtual, version=12.04,
 description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624, is64Bit=true},
status=RUNNING[running], loginPort=22, hostname=ip-10-179-37-185, privateAddresses=[10.179.37.185],
publicAddresses=[54.224.175.65], hardware={id=t1.micro, providerId=t1.micro, processors=[{cores=1.0,
speed=1.0}], ram=630, volumes=[{id=vol-b85f76f8, type=SAN, device=/dev/sda1, bootDevice=true,
durable=true}], hypervisor=xen, supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)},
loginUser=ubuntu, userMetadata={Name=hadoop-ec2-5fc0083f}}}, Instance{roles=[hadoop-namenode,
hadoop-jobtracker], publicIp=54.227.189.132, privateIp=10.10.141.106, id=us-east-1/i-69eede0b,
nodeMetadata={id=us-east-1/i-69eede0b, providerId=i-69eede0b, name=hadoop-ec2-69eede0b, location={scope=ZONE,
id=us-east-1d, description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]}, group=hadoop-ec2,
 imageId=us-east-1/ami-23d9a94a, os={family=ubuntu, arch=paravirtual, version=12.04, description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624,
is64Bit=true}, status=RUNNING[running], loginPort=22, hostname=ip-10-10-141-106, privateAddresses=[10.10.141.106],
publicAddresses=[54.227.189.132], hardware={id=t1.micro, providerId=t1.micro, processors=[{cores=1.0,
speed=1.0}], ram=630, volumes=[{id=vol-b95f76f9, type=SAN, device=/dev/sda1, bootDevice=true,
durable=true}], hypervisor=xen, supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)},
loginUser=ubuntu, userMetadata={Name=hadoop-ec2-69eede0b}}}]}
>>>
>>>
>>>You can log into instances using the following ssh commands:
>>>[hadoop-datanode+hadoop-tasktracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.224.175.65
>>>[hadoop-namenode+hadoop-jobtracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.227.189.132
>>>
>>>To destroy cluster, run 'whirr destroy-cluster' with the same options used to launch it.
>>>ab@ubuntu12-64:~$ 
>>>
>>>
>>>as you know, i did change the properties file this morning to download the openjdk - i don't know if that is a difference or not. let me check if java is installed and hadoop is running - i think i should have gotten an error, since i didn't update the oracle java 7 script.
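>>>
>>>a quick way to check after sshing in - jps ships with the jdk so it doubles as a java check, and the daemon names below assume the stock hadoop 1.x layout:
>>>
>>>    java -version    # which jdk actually got installed
>>>    jps              # expect NameNode/JobTracker on one box, DataNode/TaskTracker on the other
>>>    hadoop version   # only if the hadoop wrapper ended up on the PATH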
>>>
>>>
>>>
>>>
>>>________________________________
>>> From: Andrew Bayer <andrew.bayer@gmail.com>
>>>To: user@whirr.apache.org; a b <autohandle@yahoo.com> 
>>>
>>>Sent: Tuesday, August 6, 2013 10:35 AM
>>>
>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>> 
>>>
>>>
>>>Do you have the whirr.log from that attempt?
>>>
>>>
>>>A.
>>>
>>>
>>>On Mon, Aug 5, 2013 at 8:34 PM, a b <autohandle@yahoo.com> wrote:
>>>
>>>>i built whirr in my own directory - i didn't change it (yet) - i just checked it out and tried to compile - you can see i had some memory issues that i didn't notice right away:
>>>>
>>>>
>>>>
>>>> 1530  git clone git://git.apache.org/whirr.git
>>>> 1531  mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
>>>> 1532  cd whirr/
>>>> 1533  mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
>>>> 1534  mvn install
>>>> 1535  cd
>>>> 1536  rm whirr.log
>>>> 1537  ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>> 1538  export MAVEN_OPTS=-Xmx200m
>>>> 1539  cd ~/git/whirr/
>>>> 1540  mvn install
>>>> 1541  export MAVEN_OPTS=-Xmx1G
>>>> 1542  mvn install
>>>> 1543  cd
>>>> 1544  ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties

>>>> 1545  history
>>>>
>>>>
>>>>
>>>>this is the console at the end of "mvn install"
>>>>
>>>>
>>>>[INFO] ------------------------------------------------------------------------
>>>>[INFO] Reactor Summary:
>>>>[INFO] 
>>>>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.519s]
>>>>[INFO] Whirr ............................................. SUCCESS [4.662s]
>>>>[INFO] Apache Whirr Core ................................. SUCCESS [1:43.913s]
>>>>[INFO] Apache Whirr Cassandra ............................ SUCCESS [10.705s]
>>>>[INFO] Apache Whirr Ganglia .............................. SUCCESS [3.683s]
>>>>[INFO] Apache Whirr Hadoop ............................... SUCCESS [11.294s]
>>>>[INFO] Apache Whirr ZooKeeper ............................ SUCCESS [4.477s]
>>>>[INFO] Apache Whirr HBase ................................ SUCCESS [5.196s]
>>>>[INFO] Apache Whirr YARN ................................. SUCCESS [4.867s]
>>>>[INFO] Apache Whirr CDH .................................. SUCCESS [5.552s]
>>>>[INFO] Apache Whirr Mahout ............................... SUCCESS [2.452s]
>>>>[INFO] Apache Whirr Pig .................................. SUCCESS [2.393s]
>>>>[INFO] Apache Whirr ElasticSearch ........................ SUCCESS [4.145s]
>>>>[INFO] Apache Whirr Hama ................................. SUCCESS [2.841s]
>>>>[INFO] Apache Whirr Puppet ............................... SUCCESS [3.242s]
>>>>[INFO] Apache Whirr Chef ................................. SUCCESS [12.153s]
>>>>[INFO] Apache Whirr Solr ................................. SUCCESS [4.997s]
>>>>[INFO] Apache Whirr Kerberos ............................. SUCCESS [12.207s]
>>>>[INFO] Apache Whirr CLI .................................. SUCCESS [14.940s]
>>>>[INFO] Apache Whirr Examples ............................. SUCCESS [4.494s]
>>>>[INFO] ------------------------------------------------------------------------
>>>>[INFO] BUILD SUCCESS
>>>>[INFO] ------------------------------------------------------------------------
>>>>[INFO] Total time: 3:43.355s
>>>>[INFO] Finished at: Mon Aug 05 20:11:58 PDT 2013
>>>>[INFO] Final Memory: 109M/262M
>>>>[INFO] ------------------------------------------------------------------------
>>>>ab@ubuntu12-64:~/git/whirr$ cd
>>>>ab@ubuntu12-64:~$ ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties

>>>>Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXXXX
>>>>Exception in thread "main" java.lang.RuntimeException: java.io.IOException:
java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while
bootstrapping! 0 successfully started instances while 0 instances failed
>>>>    at
 org.apache.whirr.ClusterController.launchCluster(ClusterController.java:128)
>>>>    at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:69)
>>>>    at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:59)
>>>>    at org.apache.whirr.cli.Main.run(Main.java:69)
>>>>    at org.apache.whirr.cli.Main.main(Main.java:102)
>>>>Caused by: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException:
Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances
failed
>>>>    at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:125)
>>>>    at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:131)
>>>>    at
 org.apache.whirr.ClusterController.bootstrapCluster(ClusterController.java:137)
>>>>    at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:113)
>>>>    ... 4 more
>>>>Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Too
many instance failed while bootstrapping! 0 successfully started instances while 0 instances
failed
>>>>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>>>>    at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>>>>    at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:120)
>>>>    ... 7 more
>>>>Caused by: java.io.IOException: Too many instance failed while bootstrapping!
0 successfully started instances while 0 instances failed
>>>>    at
 org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:93)
>>>>    at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:41)
>>>>    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>>>    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>>>    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>    at java.lang.Thread.run(Thread.java:724)
>>>>
>>>>
>>>>
>>>>2 instances were started - like i asked for - the 1st instance was terminated almost immediately, the 2nd was left running for a while - eventually, i hit ^c on the launch and terminated the 2nd instance from the aws console.
>>>>
>>>>
>>>>i need some more coaching.
>>>>
>>>>
>>>>
>>>>
>>>>________________________________
>>>> From: Andrew Bayer <andrew.bayer@gmail.com>
>>>>To: user@whirr.apache.org; a b <autohandle@yahoo.com> 
>>>>Sent: Monday, August 5, 2013 5:54 PM
>>>>
>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>> 
>>>>
>>>>
>>>>Try not building as root - that can throw things off.
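>>>>
>>>>The failing tests below refuse to run for a root user ("cluster-user != root or do not run as root"). If you only need the jars, something along these lines - run as a regular user - should also get you past it, since -DskipTests just bypasses the unit tests:
>>>>
>>>>    cd ~/git/whirr
>>>>    mvn clean install -DskipTests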
>>>>
>>>>
>>>>A.
>>>>
>>>>
>>>>On Mon, Aug 5, 2013 at 5:44 PM, a b <autohandle@yahoo.com> wrote:
>>>>
>>>>>as for the java 7 problem - i found this suggestion:
>>>>>
>>>>>http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
>>>>>
>>>>>
>>>>>i tried to download whirr - as suggested here:
>>>>>
>>>>>
>>>>>https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
>>>>>
>>>>>
>>>>>o i did: git clone ...
>>>>>
>>>>>o i modified: core/src/main/resources/functions/...
>>>>>o i did: mvn eclipse:eclipse ...
>>>>>o i skipped: eclipse import
>>>>>o i ran: mvn install
>>>>>
>>>>>
>>>>>it fails in the "mvn install" during test
>>>>>
>>>>>
>>>>>Tests in error: 
>>>>> 
 testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest):
cluster-user != root or do not run as root
>>>>>  testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest):
cluster-user != root or do not run as root
>>>>>  testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest):
cluster-user != root or do not run as root
>>>>>  testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest):
cluster-user != root or do not run as root
>>>>>  testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest):
cluster-user != root or do not run as root
>>>>>[..]
>>>>>  testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest):
cluster-user != root or do not run as root
>>>>>  testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest):
cluster-user != root or do not run as root
>>>>>  testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest):
cluster-user != root or do not run as root
>>>>>  testOverrides(org.apache.whirr.command.AbstractClusterCommandTest):
cluster-user != root or do not run as root
>>>>>
>>>>>Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
>>>>>
>>>>>[INFO] ------------------------------------------------------------------------
>>>>>[INFO] Reactor Summary:
>>>>>[INFO] 
>>>>>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
>>>>>[INFO] Whirr ............................................. SUCCESS [4.488s]
>>>>>[INFO] Apache Whirr Core ................................. FAILURE [17.113s]
>>>>>[INFO] Apache Whirr Cassandra ............................ SKIPPED
>>>>>[INFO]
 Apache Whirr Ganglia .............................. SKIPPED
>>>>>[INFO] Apache Whirr Hadoop ............................... SKIPPED
>>>>>
>>>>>
>>>>>
>>>>>is there a better way to try this suggestion?
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>________________________________
>>>>> From: Andrew Bayer <andrew.bayer@gmail.com>
>>>>>
>>>>>To: a b <autohandle@yahoo.com> 
>>>>>Cc: "user@whirr.apache.org" <user@whirr.apache.org> 
>>>>>Sent: Monday, August 5, 2013 4:48 PM
>>>>>
>>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>> 
>>>>>
>>>>>
>>>>>Looks like it's the Oracle JDK7 download that's failing - http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz isn't actually there. I don't know if they actually have a consistent link to get the JDK7 tarball regardless of version. I'd just use OpenJDK for now, if you can.
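>>>>>
>>>>>Switching should just be the java install-function line in your properties file - this assumes the stock install_openjdk function is still there next to install_oracle_jdk7 under core/src/main/resources/functions:
>>>>>
>>>>>    whirr.java.install-function=install_openjdk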
>>>>>
>>>>>
>>>>>A.
>>>>>
>>>>>
>>>>>On Mon, Aug 5, 2013 at 4:33 PM, a b <autohandle@yahoo.com> wrote:
>>>>>
>>>>>Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64 Packages
[1,273 kB]^M
>>>>>>Get:20 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64
Packages [4,786 kB]^M
>>>>>>Get:21 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages
[1,274 kB]^M
>>>>>>Get:22 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386
Packages [4,796 kB]^M
>>>>>>Get:23 http://us-east-1.ec2.archive.ubuntu.com precise/main TranslationIndex
[3,706 B]^M
>>>>>>Get:24 http://us-east-1.ec2.archive.ubuntu.com precise/universe TranslationIndex
[2,922 B]^M
>>>>>>Get:25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
Sources [412 kB]^M
>>>>>>Get:26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
Sources [93.1 kB]^M
>>>>>>Get:27 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
amd64 Packages [672 kB]^M
>>>>>>Get:28 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
amd64 Packages [210 kB]^M
>>>>>>Get:29 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
i386 Packages [692 kB]^M
>>>>>>Get:30 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
i386 Packages [214 kB]^M
>>>>>>Get:31 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
TranslationIndex [3,564 B]^M
>>>>>>Get:32 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
TranslationIndex [2,850 B]^M
>>>>>>Get:33 http://us-east-1.ec2.archive.ubuntu.com precise/main Translation-en
[726 kB]^M
>>>>>>Get:34 http://us-east-1.ec2.archive.ubuntu.com precise/universe Translation-en
[3,341 kB]^M
>>>>>>Get:35 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
Translation-en [298 kB]^M
>>>>>>Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
Translation-en [123 kB]^M
>>>>>>Fetched 26.1 MB in 21s (1,241 kB/s)^M
>>>>>>Reading package lists...^M
>>>>>>Could not download  http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5.
Continuing.^M
>>>>>>, error=^M
>>>>>>gzip: stdin: not in gzip format^M
>>>>>>tar: Child returned status 1^M
>>>>>>tar: Error is not recoverable: exiting now^M
>>>>>>mv: cannot stat `jdk1*': No such file or directory^M
>>>>>>update-alternatives: error: alternative path /usr/lib/jvm/java-7-oracle/bin/java
doesn't exist.^M
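>>>>>>
>>>>>>that "not in gzip format" / missing jdk1* sequence usually means the oracle url handed back an html page instead of a tarball. a quick sanity check (just a diagnostic, run from anywhere):
>>>>>>
>>>>>>    curl -sIL http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz | head
>>>>>>    # a 404 or a text/html content-type here would explain the tar failure above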
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>________________________________
>>>>>> From: Andrew Bayer <andrew.bayer@gmail.com>
>>>>>>To: user@whirr.apache.org; a b <autohandle@yahoo.com> 
>>>>>>Sent: Monday, August 5, 2013 4:22 PM
>>>>>>
>>>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>>> 
>>>>>>
>>>>>>
>>>>>>Ok, Whirr does try to download the md5 file, but if it fails to find it, that's not a blocking error - it'll keep going anyway. What's after that in the logs?
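>>>>>>
>>>>>>If the md5 fetch itself ever does become a problem, the tarball location is overridable from the properties file - assuming the hadoop service still honors whirr.hadoop.tarball.url the way the stock recipes do, e.g.:
>>>>>>
>>>>>>    whirr.hadoop.tarball.url=http://archive.apache.org/dist/hadoop/core/hadoop-1.2.1/hadoop-1.2.1.tar.gz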
>>>>>>
>>>>>>
>>>>>>A.
>>>>>>
>>>>>>
>>>>>>On Mon, Aug 5, 2013 at 4:19 PM, a b <autohandle@yahoo.com> wrote:
>>>>>>
>>>>>>>ok - i'm not sure what you are asking.
>>>>>>>
>>>>>>>whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>where these are the properties i think i changed or added from the original recipe:
>>>>>>>
>>>>>>>
>>>>>>>whirr.cluster-name=hadoop-ec2
>>>>>>>whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker
>>>>>>>whirr.hardware-id=t1.micro
>>>>>>>whirr.image-id=us-east-1/ami-25d9a94c
>>>>>>>whirr.hadoop.version=1.2.1
>>>>>>>whirr.provider=aws-ec2
>>>>>>>whirr.identity=${env:AWS_ACCESS_KEY} 
>>>>>>>whirr.credential=${env:AWS_SECRET_KEY}
>>>>>>>whirr.location-id=us-east-1
>>>>>>>whirr.java.install-function=install_oracle_jdk7
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>________________________________
>>>>>>> From: Andrew Bayer <andrew.bayer@gmail.com>
>>>>>>>To: user@whirr.apache.org; a b <autohandle@yahoo.com> 
>>>>>>>Sent: Monday, August 5, 2013 4:11 PM
>>>>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>>>> 
>>>>>>>
>>>>>>>
>>>>>>>Ok, it looks like they've actually been doing .mds for a while. Where are you seeing this error?
>>>>>>>
>>>>>>>
>>>>>>>A.
>>>>>>>
>>>>>>>
>>>>>>>On Mon, Aug 5, 2013 at 4:09 PM, Andrew Bayer <andrew.bayer@gmail.com> wrote:
>>>>>>>
>>>>>>>>That looks like they broke the hadoop-1.2.1 release - the file should be .md5. I'd bug the Hadoop project about that.
>>>>>>>>
>>>>>>>>A.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>On Mon, Aug 5, 2013 at 3:42 PM, a b <autohandle@yahoo.com> wrote:
>>>>>>>>
>>>>>>>>>i get a whirr error:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>Could not download  http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>when i browse over to: http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/
>>>>>>>>>i can see the file is named: hadoop-1.2.1.tar.gz.mds
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>how do i tell whirr to use a different suffix?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>