flink-user mailing list archives

From: Flavio Pompermaier <pomperma...@okkam.it>
Subject: Re: How to make Flink to write less temporary files?
Date: Mon, 10 Nov 2014 08:24:10 GMT
I also see a lot of blobStore-XXX directories in /tmp
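
As a workaround, if I read the configuration docs correctly, those files can be pointed at a partition with more free space via flink-conf.yaml. A minimal sketch (paths are placeholders, and I haven't checked whether the blob key exists in our version):

    # spill/temp files for the task managers (put this on a disk with enough space)
    taskmanager.tmp.dirs: /data/flink-tmp
    # where the blob server keeps its files (key name may differ per version)
    blob.storage.directory: /data/flink-blobs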

On Mon, Nov 10, 2014 at 8:48 AM, Ufuk Celebi <uce@apache.org> wrote:

> Hey Malte,
>
> thanks for reporting the issue. Did you change the default configuration?
> I've just checked and the default config hard codes the heap size for the
> task managers to 512 MB. In that case some of the algorithms will start
> spilling to disk earlier than necessary (assuming that you have more main
> memory available). You can either remove the config key "
> taskmanager.heap.mb" and let the JVM set a default max heap size or set
> the config key to a value appropriate for your machines/setup. Could you
> try this out and report back?
>
> Regarding the temp directories: How much disk space is available before
> running your program? I remember a user reporting the same issue, because
> of some old files lingering around in the temp directories.
>
> – Ufuk
>
>
> On Sun, Nov 9, 2014 at 10:54 PM, Malte Schwarzer <ms@mieo.de> wrote:
>
>> Hi,
>>
>>  I’m having the problem that my tmp dir is running out of hard drive
>> space when running a map-reduce job on a 1TB file. The job fails with a "no
>> space left on device" exception.
>>
>> Probably the intermediate result set is getting too big. Is there a way
>> to avoid this? Or to make Flink write fewer temporary files?
>>
>> Thanks,
>> Malte
>>
>>
>>
>
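
For completeness, the heap setting Ufuk mentions above would go into flink-conf.yaml along these lines (the value is only an example; pick one that matches the main memory of your task manager machines, or drop the line entirely to let the JVM choose its default max heap):

    # max heap for each task manager in MB (example value, adjust to your setup)
    taskmanager.heap.mb: 8192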
