spark-user mailing list archives

From Iulian Dragoș <iulian.dra...@typesafe.com>
Subject Re: Temp files are not removed when done (Mesos)
Date Wed, 07 Oct 2015 15:39:35 GMT
https://issues.apache.org/jira/browse/SPARK-10975
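For reference, the workaround from the message below amounts to enabling dynamic allocation with a fixed executor count, e.g. in spark-defaults.conf (the executor count here is illustrative; note dynamic allocation requires the external shuffle service to be enabled):

    spark.shuffle.service.enabled           true
    spark.dynamicAllocation.enabled         true
    spark.dynamicAllocation.minExecutors    2
    spark.dynamicAllocation.maxExecutors    2

Setting min and max to the same value pins the executor count, so you get coarse-grained-style behavior while still going through the shuffle-service shutdown path that cleans up the blockmgr directories.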

On Wed, Oct 7, 2015 at 11:36 AM, Iulian Dragoș <iulian.dragos@typesafe.com>
wrote:

> It is indeed a bug. I believe the shutdown procedure in #7820 only kicks
> in when the external shuffle service is enabled (a prerequisite of dynamic
> allocation). As a workaround you can use dynamic allocation (you can set
> spark.dynamicAllocation.maxExecutors and
> spark.dynamicAllocation.minExecutors to the same value to keep a fixed
> number of executors).
>
> I'll file a Jira ticket.
>
> On Wed, Oct 7, 2015 at 10:14 AM, Alexei Bakanov <russisk@gmail.com> wrote:
>
>> Hi
>>
>> I'm running Spark 1.5.1 on Mesos in coarse-grained mode. No dynamic
>> allocation or shuffle service. I see that there are two types of temporary
>> files under /tmp folder associated with every executor: /tmp/spark-<UUID>
>> and /tmp/blockmgr-<UUID>. When job is finished /tmp/spark-<UUID> is gone,
>> but blockmgr directory is left with all gigabytes in it. In Spark 1.4.1
>> blockmgr-<UUID> folder was under /tmp/spark-<UUID>, so when /tmp/spark
>> folder was removed blockmgr was gone too.
>> Is it a bug in 1.5.1?
>>
>> By the way, in fine-grained mode the /tmp/spark-<UUID> folder does not get
>> removed in either 1.4.1 or 1.5.1 for some reason.
>>
>> Thanks,
>> Alexei
>>
>


-- 

--
Iulian Dragos

------
Reactive Apps on the JVM
www.typesafe.com
