spark-user mailing list archives

From Sujit Pal <sujitatgt...@gmail.com>
Subject Re: use S3-Compatible Storage with spark
Date Fri, 17 Jul 2015 15:55:38 GMT
Hi Schmirr,

The part after the s3n:// is your bucket name and folder name, i.e.
s3n://${bucket_name}/${folder_name}[/${subfolder_name}]*. Bucket names are
unique across S3, so the resulting path is also unique. There is no concept
of a hostname in S3 URLs as far as I know.
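
As a quick illustration, here is a minimal Scala sketch of reading such a
path from Spark. The bucket and folder names (my-bucket, logs/2015) are
hypothetical, and the s3n credentials are assumed to be configured
separately in the Hadoop configuration.

import org.apache.spark.{SparkConf, SparkContext}

object S3ReadSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("S3ReadSketch")
    val sc = new SparkContext(conf)

    // The path is just s3n://${bucket_name}/${folder_name}; there is no
    // separate host component. Bucket and folder here are hypothetical.
    val rdd = sc.textFile("s3n://my-bucket/logs/2015/*")
    println(rdd.count())

    sc.stop()
  }
}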

-sujit


On Fri, Jul 17, 2015 at 1:36 AM, Schmirr Wurst <schmirrwurst@gmail.com>
wrote:

> Hi,
>
> I wonder how to use S3-compatible storage in Spark.
> If I'm using the s3n:// URL schema, then it will point to Amazon; is there
> a way I can specify the host somewhere?
>
