spark-user mailing list archives

From kant kodali <kanth...@gmail.com>
Subject Re: How to install spark with s3 on AWS?
Date Fri, 26 Aug 2016 12:59:12 GMT
Hmm, do I always need to have that in my driver program? Why can't I set it
somewhere so that the Spark cluster realizes it needs to use S3?
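(For what it's worth, one way to keep this out of the driver program, assuming Spark's standard spark.hadoop.* passthrough into the Hadoop configuration, is to put the same keys in conf/spark-defaults.conf on the cluster. A sketch; the property names mirror the snippet quoted below, and YOUR_ACCESS_KEY / YOUR_SECRET_KEY are placeholders:

spark.hadoop.fs.s3.impl                  org.apache.hadoop.fs.s3native.NativeS3FileSystem
spark.hadoop.fs.s3.awsAccessKeyId        YOUR_ACCESS_KEY
spark.hadoop.fs.s3.awsSecretAccessKey    YOUR_SECRET_KEY

The same properties can also be passed per job, e.g. spark-submit --conf spark.hadoop.fs.s3.awsAccessKeyId=YOUR_ACCESS_KEY, without touching the driver code.)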

On Fri, Aug 26, 2016 5:13 AM, Devi P.V devip2136@gmail.com wrote:
The following piece of code works for me to read data from S3 using Spark.

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("Simple Application").setMaster("local[*]")
val sc = new SparkContext(conf)
// Route S3 credentials through the underlying Hadoop configuration
val hadoopConf = sc.hadoopConfiguration
hadoopConf.set("fs.s3.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
hadoopConf.set("fs.s3.awsAccessKeyId", AccessKey)      // AccessKey and SecretKey are assumed
hadoopConf.set("fs.s3.awsSecretAccessKey", SecretKey)  // to hold your AWS credentials
val jobInput = sc.textFile("s3://path-to-bucket")      // placeholder: point at your bucket
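One classpath caveat, assuming a Hadoop 2.6+ build: NativeS3FileSystem ships in the hadoop-aws module rather than hadoop-common, so that jar (and its jets3t dependency) has to be on the classpath, for example:

spark-submit --packages org.apache.hadoop:hadoop-aws:2.7.3 your-app.jar

(your-app.jar is a placeholder for your application jar.)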

Thanks


On Fri, Aug 26, 2016 at 5:16 PM, kant kodali <kanth909@gmail.com> wrote:
Hi guys,
Are there any instructions on how to set up Spark with S3 on AWS?
Thanks!