spark-user mailing list archives

From Akhil Das <ak...@sigmoidanalytics.com>
Subject Re: use additional ebs volumes for hdfs storage with spark-ec2
Date Thu, 30 Oct 2014 06:56:50 GMT
I think you can check core-site.xml or hdfs-site.xml under
/root/ephemeral-hdfs/etc/hadoop/. Look for the data node directory property
(dfs.data.dir on Hadoop 1.x, dfs.datanode.data.dir on 2.x); its value is a
comma-separated list of the volumes the data nodes write to.
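For reference, here is a minimal sketch of what that property looks like in
hdfs-site.xml when multiple volumes are in use (the mount points below are
made-up examples; your actual paths will differ):

```xml
<!-- hdfs-site.xml: each data node stores blocks across every listed
     directory. Example mount points only; substitute your EBS mounts. -->
<property>
  <name>dfs.data.dir</name>
  <value>/vol0/hdfs/data,/vol1/hdfs/data,/vol2/hdfs/data</value>
</property>
```

If the extra EBS mounts do not appear in that list, the data nodes are not
using them.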

Thanks
Best Regards

On Thu, Oct 30, 2014 at 5:21 AM, Daniel Mahler <dmahler@gmail.com> wrote:

> I started my ec2 spark cluster with
>
>     ./ec2/spark-ec2 --ebs-vol-{size=100,num=8,type=gp2} -t m3.xlarge -s 10
> launch mycluster
>
> I see the additional volumes attached, but they do not seem to be set up
> for hdfs.
> How can I check whether they are being utilized on all workers,
> and how can I get all workers to utilize the extra volumes for hdfs?
> I do not have experience using hadoop directly, only through spark.
>
> thanks
> Daniel
>
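[Editor's note: to act on the advice in the reply, a hedged sketch of the
usual workflow on a spark-ec2 cluster follows. The config path, the copy-dir
helper, and the start/stop script locations are assumed from the spark-ec2
AMI layout and may differ on your image; verify each path before running.]

```shell
# On the master: add the EBS mount points to the data node directory
# property (dfs.data.dir / dfs.datanode.data.dir) in hdfs-site.xml.
vi /root/ephemeral-hdfs/etc/hadoop/hdfs-site.xml

# Push the edited config to all slaves (copy-dir ships with spark-ec2).
/root/spark-ec2/copy-dir /root/ephemeral-hdfs/etc/hadoop/

# Restart HDFS so the data nodes pick up the new volume list, then
# confirm the added capacity shows up in the datanode report.
/root/ephemeral-hdfs/sbin/stop-dfs.sh
/root/ephemeral-hdfs/sbin/start-dfs.sh
/root/ephemeral-hdfs/bin/hadoop dfsadmin -report
```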
