whirr-user mailing list archives

From Andrei Savu <savu.and...@gmail.com>
Subject Re: whirr.hadoop.tarball.url property or specifying exact releases
Date Fri, 28 Oct 2011 10:04:51 GMT
I have created a new JIRA issue for this:

https://issues.apache.org/jira/browse/WHIRR-415

It seems like the scripts automatically go for the latest release.
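
Roughly what I think install_cdh_hadoop does (a simplified sketch, not the
actual script contents):

  # register the rolling CDH3 repo and let apt resolve the package version
  cat > /etc/apt/sources.list.d/cloudera.list <<EOF
  deb http://archive.cloudera.com/debian lucid-cdh3 contrib
  EOF
  apt-get update
  apt-get -y install hadoop-0.20   # apt picks the newest cdh3 update, currently cdh3u2

Because the repo is the rolling "cdh3" distribution, whatever update
Cloudera published last is what gets installed.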

-- Andrei Savu

On Fri, Oct 28, 2011 at 12:48 AM, Andrei Savu <savu.andrei@gmail.com> wrote:

> Paul,
>
> Thanks for sharing this impressive recipe. The problem you are seeing is
> not really a problem; it's more like a "feature". The tarball URLs are
> completely ignored if you are deploying CDH. All the binaries are deployed
> from the Cloudera CDH repos (check the
> {install/configure}_cdh_{zookeeper/hbase/hadoop} functions).
>
> I will start to look for a way of deploying cdh3u1 - we should be able to
> specify this as a version.
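>
> One possible approach (an untested sketch; the version string below is a
> placeholder, not a real package version):
>
>   # pin the CDH packages to a specific update instead of taking the newest
>   apt-get -y install hadoop-0.20=<exact-cdh3u1-package-version>
>
> or point the machines at a repo frozen at cdh3u1 instead of the rolling
> "cdh3" one.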
>
> -- Andrei
>
> On Thu, Oct 27, 2011 at 10:48 PM, Paul Baclace <paul.baclace@gmail.com> wrote:
>
>> On 2011-10-27 2:20, Andrei Savu wrote:
>>
>>> On Thu, Oct 27, 2011 at 12:05 PM, Paul Baclace <paul.baclace@gmail.com> wrote:
>>>
>>>> I don't expect that the cdh3u2 files came from a cdh3u1 tarball.
>>>
>>> I see no cdh3u2 files inside that tarball. Can you share the full
>>> .properties file?
>>
>> My best guess is that some other installation specification (a
>> *.install-function prop) has the side effect of overriding the tarball
>> property if there is a more recent CDH release.  If that is the case, then
>> either the tarball.url props need to be documented as "set all tarballs or
>> none" (a dicey feature) or the installation logic must allow a single
>> tarball.url to override implied installations (be they tarballs or
>> packages).
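>>
>> To make that concrete (my reading of the current behavior, not verified
>> against the scripts): with
>>
>>   whirr.hadoop.install-function=install_cdh_hadoop
>>   whirr.hadoop.tarball.url=http://archive.cloudera.com/cdh/3/hadoop-0.20.2-cdh3u1.tar.gz
>>
>> the URL is silently ignored and the CDH repo decides the version, while
>> dropping the install-function line (falling back to the stock Apache
>> install function) should make tarball.url authoritative again.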
>>
>> The actual, full whirr.config from this particular run is below (with
>> sensitive bits removed).  Some values are supplied by environment
>> variables on the launching host.
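>>
>> For example (the values here are made up), the launch looks roughly like:
>>
>>   BLOCK_SIZE=134217728 REPLICATION_FACTOR=2 \
>>   N_MAP_TASKS_JOB_DEFAULT=20 N_REDUCE_TASKS_JOB_DEFAULT=4 \
>>   N_MAP_TASKS_PER_TRACKER=2 N_REDUCE_TASKS_PER_TRACKER=1 \
>>   AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... \
>>   whirr launch-cluster --config whirr.config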
>>
>>
>> Paul
>>
>> --------------------whirr.config----------------
>> hadoop-common.fs.checkpoint.dir=/mnt/hadoop/dfs/namesecondary
>> hadoop-common.fs.s3.awsAccessKeyId=XXXXXXXXXXXXXX
>> hadoop-common.fs.s3.awsSecretAccessKey=YYYYYYYYYYYYYYYYYYY
>> hadoop-common.fs.s3bfs.awsAccessKeyId=XXXXXXXXXXXXXX
>> hadoop-common.fs.s3bfs.awsSecretAccessKey=YYYYYYYYYYYYYYYYYYY
>> hadoop-common.fs.s3.block.size=${env:BLOCK_SIZE}
>> hadoop-common.fs.s3.maxRetries=20
>> hadoop-common.fs.s3n.awsAccessKeyId=XXXXXXXXXXXXXX
>> hadoop-common.fs.s3n.awsSecretAccessKey=YYYYYYYYYYYYYYYYYYY
>> hadoop-common.fs.s3.sleepTimeSeconds=4
>> hadoop-common.hadoop.tmp.dir=/mnt/hadoop/tmp/user_${user.name}
>> hadoop-common.io.file.buffer.size=65536
>> hadoop-common.io.sort.factor=25
>> hadoop-common.io.sort.mb=100
>> hadoop-common.webinterface.private.actions=true
>> hadoop-hdfs.dfs.block.size=${env:BLOCK_SIZE}
>> hadoop-hdfs.dfs.data.dir=/mnt/hadoop/dfs/data
>> hadoop-hdfs.dfs.datanode.du.reserved=500000000
>> hadoop-hdfs.dfs.datanode.max.xcievers=1000
>> hadoop-hdfs.dfs.heartbeat.interval=1
>> hadoop-hdfs.dfs.name.dir=/mnt/hadoop/dfs/name
>> hadoop-hdfs.dfs.permissions=false
>> hadoop-hdfs.dfs.replication=${env:REPLICATION_FACTOR}
>> hadoop-hdfs.dfs.support.append=true
>> hadoop-mapreduce.keep.failed.task.file=true
>> hadoop-mapreduce.mapred.child.java.opts=-Xmx550m -Xms200m -Djava.net.preferIPv4Stack=true
>> hadoop-mapreduce.mapred.child.ulimit=1126400
>> hadoop-mapreduce.mapred.compress.map.output=true
>> hadoop-mapreduce.mapred.job.reuse.jvm.num.tasks=1
>> hadoop-mapreduce.mapred.jobtracker.completeuserjobs.maximum=1000
>> hadoop-mapreduce.mapred.local.dir=/mnt/hadoop/mapred/local/user_${user.name}
>> hadoop-mapreduce.mapred.map.max.attempts=2
>> hadoop-mapreduce.mapred.map.tasks=${env:N_MAP_TASKS_JOB_DEFAULT}
>> hadoop-mapreduce.mapred.map.tasks.speculative.execution=false
>> hadoop-mapreduce.mapred.min.split.size=${env:BLOCK_SIZE}
>> hadoop-mapreduce.mapred.output.compression.type=BLOCK
>> hadoop-mapreduce.mapred.reduce.max.attempts=2
>> hadoop-mapreduce.mapred.reduce.tasks=${env:N_REDUCE_TASKS_JOB_DEFAULT}
>> hadoop-mapreduce.mapred.reduce.tasks.speculative.execution=false
>> hadoop-mapreduce.mapred.system.dir=/hadoop/system/mapred
>> hadoop-mapreduce.mapred.tasktracker.map.tasks.maximum=${env:N_MAP_TASKS_PER_TRACKER}
>> hadoop-mapreduce.mapred.tasktracker.reduce.tasks.maximum=${env:N_REDUCE_TASKS_PER_TRACKER}
>> hadoop-mapreduce.mapred.temp.dir=/mnt/hadoop/mapred/temp/user_${user.name}
>> hadoop-mapreduce.mapreduce.jobtracker.staging.root.dir=/user
>> hbase-site.dfs.datanode.max.xcievers=1000
>> hbase-site.dfs.replication=2
>> hbase-site.dfs.support.append=true
>> hbase-site.hbase.client.pause=3000
>> hbase-site.hbase.cluster.distributed=true
>> hbase-site.hbase.rootdir=${fs.default.name}user/hbase
>> hbase-site.hbase.tmp.dir=/mnt/hbase/tmp
>> hbase-site.hbase.zookeeper.property.dataDir=/mnt/zookeeper/snapshot
>> hbase-site.hbase.zookeeper.property.initLimit=30
>> hbase-site.hbase.zookeeper.property.maxClientCnxns=2000
>> hbase-site.hbase.zookeeper.property.syncLimit=10
>> hbase-site.hbase.zookeeper.property.tickTime=6000
>> hbase-site.hbase.zookeeper.quorum=${fs.default.name}
>> hbase-site.zookeeper.session.timeout=120000
>> jclouds.aws-s3.endpoint=us-west-1
>> jclouds.ec2.ami-query=owner-id=999999999999;state=available;image-type=machine;root-device-type=instance-store;architecture=x86_32
>> jclouds.ec2.cc-regions=us-west-1
>> jclouds.ec2.timeout.securitygroup-present=1500
>> whirr.credential=${env:AWS_SECRET_ACCESS_KEY}
>> whirr.hadoop.configure-function=configure_cdh_hadoop
>> whirr.hadoop.install-function=install_cdh_hadoop
>> whirr.hadoop.tarball.url=http://archive.cloudera.com/cdh/3/hadoop-0.20.2-cdh3u1.tar.gz
>> whirr.hadoop.version=0.20.2
>> whirr.hardware-id=c1.medium
>> whirr.hbase.configure-function=configure_cdh_hbase
>> whirr.hbase.install-function=install_cdh_hbase
>> whirr.hbase.tarball.url=http://apache.cs.utah.edu/hbase/hbase-0.90.3/hbase-0.90.3.tar.gz
>> whirr.identity=${env:AWS_ACCESS_KEY_ID}
>> whirr.image-id=us-west-1/ami-ffffffff
>> whirr.instance-templates=1 hadoop-jobtracker+hadoop-namenode+hbase-master+zookeeper+ganglia-metad,2 hadoop-datanode+hadoop-tasktracker+hbase-regionserver+ganglia-monitor
>> whirr.instance-templates-minimum-number-of-instances=1 hadoop-jobtracker+hadoop-namenode+hbase-master+zookeeper+ganglia-metad,2 hadoop-datanode+hadoop-tasktracker+hbase-regionserver+ganglia-monitor
>> whirr.location-id=us-west-1
>> whirr.login-user=ubuntu
>> whirr.max-startup-retries=4
>> whirr.private-key-file=${sys:user.home}/.ssh/id_rsa
>> whirr.provider=aws-ec2
>> whirr.public-key-file=${sys:user.home}/.ssh/id_rsa.pub
>> whirr.zookeeper.configure-function=configure_cdh_zookeeper
>> whirr.zookeeper.install-function=install_cdh_zookeeper
>> --------------------
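>>
>> (To confirm which build actually landed on a node, ssh in and run
>> "hadoop version"; the first line of output is the full build string,
>> e.g. "Hadoop 0.20.2-cdh3u2".)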
>>
>
>
