spark-user mailing list archives

From Samy Dindane <s...@dindane.com>
Subject Re: How to write a custom file system?
Date Mon, 21 Nov 2016 19:13:39 GMT
We don't use HDFS but GlusterFS, which works like your typical local POSIX file system.
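For readers following the thread: Hadoop resolves the FileSystem implementation for a URI scheme through configuration, and Spark picks that configuration up automatically. A minimal sketch of the registration step, where the scheme "myfs" and the class com.example.MyFileSystem are hypothetical placeholders:

```xml
<!-- core-site.xml: map the "myfs" URI scheme to a custom FileSystem class.
     Both the scheme and the class name are illustrative, not real. -->
<property>
  <name>fs.myfs.impl</name>
  <value>com.example.MyFileSystem</value>
</property>
```

The same setting can also be passed to Spark directly as spark.hadoop.fs.myfs.impl, since Spark forwards spark.hadoop.* entries into the Hadoop Configuration it uses for I/O.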

On 11/21/2016 06:49 PM, Jörn Franke wrote:
> Once you have configured a custom file system in Hadoop, it can be used by Spark out of
> the box. Depending on what you implement in the custom file system, you may want to think
> about side effects on any application, including Spark (memory consumption, etc.).
>
>> On 21 Nov 2016, at 18:26, Samy Dindane <samy@dindane.com> wrote:
>>
>> Hi,
>>
>> I'd like to extend the file:// file system and add some custom logic to the API that
>> lists files.
>> I think I need to extend FileSystem or LocalFileSystem from org.apache.hadoop.fs,
>> but I am not sure how to go about it exactly.
>>
>> How to write a custom file system and make it usable by Spark?
>>
>> Thank you,
>>
>> Samy
>>
>> ---------------------------------------------------------------------
>> To unsubscribe e-mail: user-unsubscribe@spark.apache.org
>>
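On the subclassing question: extending org.apache.hadoop.fs.LocalFileSystem and overriding listStatus() is the usual route, but that requires hadoop-common on the classpath and cannot be shown runnable here. As a standalone illustration of the same pattern (custom logic layered over a listing call), here is a plain-Java sketch using java.nio; the class name and the hidden-file filtering rule are hypothetical examples:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class FilteredListing {
    // Custom logic layered on a directory-listing call: skip hidden files.
    // In a real Hadoop FileSystem subclass, the analogous hook would be an
    // overridden listStatus() that filters the parent's results.
    static List<Path> listVisible(Path dir) throws IOException {
        List<Path> result = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path p : stream) {
                if (!p.getFileName().toString().startsWith(".")) {
                    result.add(p);
                }
            }
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("demo");
        Files.createFile(dir.resolve("data.txt"));
        Files.createFile(dir.resolve(".hidden"));
        System.out.println(listVisible(dir).size()); // prints 1
    }
}
```

The design point is that the listing method is the single choke point: everything else (open, create, rename) can delegate unchanged to the parent class, which is why LocalFileSystem is a convenient base when the underlying storage already behaves like a POSIX file system, as GlusterFS does.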


