flink-issues mailing list archives

From "ramkrishna.s.vasudevan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-4094) Off heap memory deallocation might not properly work
Date Tue, 02 Aug 2016 16:56:20 GMT

    [ https://issues.apache.org/jira/browse/FLINK-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15404378#comment-15404378 ]

ramkrishna.s.vasudevan commented on FLINK-4094:

bq. We cannot really manually release the memory when freeing the segment, because the ByteBuffer
wrapper object may still exist. 
Ideally, when we pool, we won't try to free the memory - so the ByteBuffer wrapper
will still exist, and that is what we will pool. I think once we do this we won't call segment.free()
on that buffer, and the address will remain valid - if I am not wrong.
Just a question: in the case of {{preallocation = true}}, what happens if the number of requests
exceeds the initial size? Once we consume all the buffers in the pool, won't new requests
go unserved?
bq.What we can do now, is to discourage the use of off-heap memory with preallocation set
to false. For example, print a prominent warning and add a hint to the documentation.
Maybe we can do that for now.
bq. I think before we change memory allocation behavior, we should discuss that on the Flink
mailing list.
OK, sounds like a plan. Once we discuss it, I think we can go with the lazy-allocation pooling
model, which should be beneficial. The current pooling already uses an unbounded queue,
and the same can be done here too.
One thing to note is that even with pooling, if MaxDirectMemory is still not configured
correctly, we will not be able to work with off-heap buffers. The only difference is that we won't grow indefinitely.
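To make the idea above concrete, here is a minimal sketch of a lazily allocated, bounded pool of direct ByteBuffers. The class and method names ({{LazyDirectBufferPool}}, {{request}}, {{release}}) are hypothetical illustrations, not Flink's actual MemorySegment API: buffers are allocated on demand up to a configured bound, and {{release}} returns the live ByteBuffer wrapper to the pool instead of freeing the native memory, so its address stays valid for reuse.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;

// Hypothetical sketch; not Flink's real memory manager.
public class LazyDirectBufferPool {
    private final ArrayBlockingQueue<ByteBuffer> pool;
    private final int segmentSize;
    private final int maxSegments;
    private int allocated; // number of buffers created so far

    public LazyDirectBufferPool(int maxSegments, int segmentSize) {
        this.pool = new ArrayBlockingQueue<>(maxSegments);
        this.segmentSize = segmentSize;
        this.maxSegments = maxSegments;
    }

    /** Reuse a pooled buffer if one exists; otherwise allocate
     *  lazily until the configured bound is reached. */
    public synchronized ByteBuffer request() {
        ByteBuffer buf = pool.poll();
        if (buf != null) {
            buf.clear();      // reset position/limit for reuse
            return buf;
        }
        if (allocated < maxSegments) {
            allocated++;
            return ByteBuffer.allocateDirect(segmentSize);
        }
        return null;          // bound reached: caller must wait or fail
    }

    /** Do not free the native memory: keep the ByteBuffer wrapper
     *  alive and hand it back to the pool for the next request. */
    public synchronized void release(ByteBuffer buf) {
        pool.offer(buf);
    }
}
```

Bounding the pool this way means direct memory can never exceed {{maxSegments * segmentSize}}, so a mis-sized MaxDirectMemory still fails, but it fails at the bound instead of growing without limit.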

> Off heap memory deallocation might not properly work
> ----------------------------------------------------
>                 Key: FLINK-4094
>                 URL: https://issues.apache.org/jira/browse/FLINK-4094
>             Project: Flink
>          Issue Type: Bug
>          Components: Local Runtime
>    Affects Versions: 1.1.0
>            Reporter: Till Rohrmann
>            Assignee: ramkrishna.s.vasudevan
>            Priority: Critical
>             Fix For: 1.1.0
> A user reported that off-heap memory is not properly deallocated when setting {{taskmanager.memory.preallocate:false}}
(the default) [1]. This can cause the TaskManager process to be killed by the OS.
> It should be possible to execute multiple batch jobs with preallocation turned off. Direct
memory buffers that are no longer used should be properly garbage collected so that the JVM process
does not exceed its maximum memory bounds.
> [1] http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/offheap-memory-allocation-and-memory-leak-bug-td12154.html
