Before 2.0, Spark had built-in support for caching RDD data on Tachyon (Alluxio), but that support was removed in 2.0. In either case, Spark does not support writing shuffle data to Tachyon.

Since Alluxio has experimental support for FUSE, you can try mounting it and setting spark.local.dir to point to a directory under the Alluxio FUSE mount.
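A minimal sketch of what that configuration might look like; the mount point and directory names here are assumptions, not tested values:

```
# Assumed mount point -- adjust to wherever your Alluxio FUSE daemon is mounted.
# Then point Spark's local scratch space at it in spark-defaults.conf:
spark.local.dir  /mnt/alluxio-fuse/spark-tmp
```

Note that since FUSE support is experimental, stability under heavy shuffle I/O is not guaranteed.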

There is also an ongoing effort to take advantage of SSDs to improve shuffle performance. The PR is ready but has not been merged yet; you may give it a try yourself.

On Aug 24, 2016, at 22:30, wrote:

Hi, Saisai and Rui,
Thanks a lot for your answers. Alluxio works as a middle layer between storage and Spark, so is it possible to use Alluxio to resolve the issue? We want to have one SSD on every datanode and use Alluxio to manage memory, SSD, and HDD.

Thanks and Regards,

From: Sun Rui
Date: 2016-08-24 22:17
Subject: Re: Can we redirect Spark shuffle spill data to HDFS or Alluxio?
Yes, I also tried FUSE before; it is not stable and I don’t recommend it.
On Aug 24, 2016, at 22:15, Saisai Shao <> wrote:

Also, FUSE is another candidate, but it was not so stable when I tried it before.

On Wed, Aug 24, 2016 at 10:09 PM, Sun Rui <> wrote:
For HDFS, maybe you can try mounting HDFS as NFS. But I am not sure about the stability, and there is also the additional overhead of network I/O and HDFS file replication.
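For reference, Hadoop ships an NFS gateway that makes this kind of mount possible. A rough sketch of the steps, where the hostname and mount points are assumptions and the commands need root and a running cluster:

```
# 1. Start the HDFS NFS gateway processes on a gateway node:
hdfs portmap &
hdfs nfs3 &

# 2. On each Spark node, mount the gateway over NFSv3:
mount -t nfs -o vers=3,proto=tcp,nolock nfs-gateway-host:/ /mnt/hdfs

# 3. Point Spark's scratch space at the mount (spark-defaults.conf):
#    spark.local.dir  /mnt/hdfs/tmp/spark
```

Every shuffle write would then incur network I/O plus HDFS replication, which is the overhead mentioned above.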

On Aug 24, 2016, at 21:02, Saisai Shao <> wrote:

Spark shuffle uses the Java File API to create local dirs and read/write data, so it can only work with file systems supported by the OS. It does not use the Hadoop FileSystem API, so writing to a Hadoop-compatible FS does not work.
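To illustrate the point: `java.io.File` has no notion of URI schemes, so handing it an `hdfs://` path just produces a nonexistent local path. The namenode address below is a made-up example:

```java
// Sketch: java.io.File treats an hdfs:// URI as a plain local path string,
// which is why shuffle (built on the File API) cannot target HDFS directly.
import java.io.File;

public class LocalOnly {
    public static void main(String[] args) {
        File f = new File("hdfs://namenode:8020/tmp/shuffle");
        // getPath() returns a normalized local-style path; the scheme is ignored.
        System.out.println(f.getPath());
        System.out.println(f.exists()); // false on a typical machine
    }
}
```

This is exactly why only an OS-level mount (NFS, FUSE, tmpfs) can redirect shuffle output.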

Also, it is not suitable to write temporary shuffle data to a distributed FS; this brings unnecessary overhead. In your case, if you have large memory on each node, you could use ramfs instead to store the shuffle data.
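A sketch of the RAM-backed approach using tmpfs (a close, swappable cousin of ramfs); the mount point and size are assumptions you would tune to your nodes, and the mount needs root:

```
# Create a RAM-backed scratch directory (size is an assumption; leave
# headroom for executors and the OS):
mkdir -p /mnt/spark-tmp
mount -t tmpfs -o size=64g tmpfs /mnt/spark-tmp

# Then in spark-defaults.conf:
#   spark.local.dir  /mnt/spark-tmp
```

tmpfs is usually preferable to ramfs here because it enforces the size limit and can spill to swap instead of locking up the node when shuffle data grows.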


On Wed, Aug 24, 2016 at 8:11 PM, <> wrote:
Hi, All,
When we run Spark on very large data, Spark does shuffles, and the shuffle data is written to local disk. Because we have limited local disk capacity, the shuffle data fills up the local disk and the job fails. So is there a way to write the shuffle spill data to HDFS? Or, if we introduce Alluxio into our system, can the shuffle data be written to Alluxio?

Thanks and Regards,


WeChat: zhitao_yan
QQ: 4707059
Let the data speak