flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-2235) Local Flink cluster allocates too much memory
Date Thu, 09 Jul 2015 09:14:04 GMT

    [ https://issues.apache.org/jira/browse/FLINK-2235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620163#comment-14620163 ]

ASF GitHub Bot commented on FLINK-2235:

Github user mxm commented on the pull request:

    Typically, programs can allocate as much memory as they like. We only take a fraction
of the free physical memory for the managed memory. We could also take only half of the physical
memory, or, alternatively, fail with an exception stating that the maximum memory for the
JVM is not set (-Xmx is missing). In my opinion, it is OK to take a fraction of the physical
memory for local execution.
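The fail-fast alternative mentioned above can be sketched by inspecting the JVM's input arguments. This is a minimal, illustrative sketch, not Flink's actual code; the class and method names here are made up for the example:

```java
import java.lang.management.ManagementFactory;
import java.util.List;

// Hypothetical helper: detect whether the JVM was started with an explicit
// -Xmx flag, so a local setup could fail fast instead of guessing a size.
class HeapLimitCheck {

    /** True if any argument in the given list is an explicit -Xmx setting. */
    static boolean hasMaxHeapFlag(List<String> jvmArgs) {
        for (String arg : jvmArgs) {
            if (arg.startsWith("-Xmx")) {
                return true;
            }
        }
        return false;
    }

    /** True if this JVM was started with an explicit -Xmx flag. */
    static boolean maxHeapExplicitlySet() {
        return hasMaxHeapFlag(
                ManagementFactory.getRuntimeMXBean().getInputArguments());
    }

    public static void main(String[] args) {
        if (!maxHeapExplicitlySet()) {
            // The option discussed in the comment: complain rather than
            // silently deriving a size from physical memory.
            System.err.println("-Xmx is not set; memory sizing would be estimated.");
        }
    }
}
```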

> Local Flink cluster allocates too much memory
> ---------------------------------------------
>                 Key: FLINK-2235
>                 URL: https://issues.apache.org/jira/browse/FLINK-2235
>             Project: Flink
>          Issue Type: Bug
>          Components: Local Runtime, TaskManager
>    Affects Versions: 0.9
>         Environment: Oracle JDK: 1.6.0_65-b14-462
> Eclipse
>            Reporter: Maximilian Michels
>            Priority: Minor
> When executing a Flink job locally, the task manager gets initialized with an insane
> amount of memory. After a quick look at the code, it seems that the call to
> {{EnvironmentInformation.getSizeOfFreeHeapMemoryWithDefrag()}} returns a wrong estimate
> of the heap memory size.
> Moreover, the same user switched to Oracle JDK 1.8 and the error disappeared, so I'm
> guessing this is some Java 1.6 quirk.
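For context, a "free heap with defrag" estimate is commonly derived from the standard `java.lang.Runtime` methods. The sketch below is illustrative only (the class, constant, and fraction are assumptions, not Flink's implementation); it shows where such an estimate can go wrong: when no -Xmx is set, `Runtime.maxMemory()` may report a very large default on some JVMs, inflating the result.

```java
// Hypothetical sketch of a free-heap estimate and fraction-based sizing.
// Names and the 0.7 fraction are illustrative assumptions, not Flink's API.
class ManagedMemorySizer {

    // Assumed fraction of free heap reserved for managed memory.
    static final double MANAGED_MEMORY_FRACTION = 0.7;

    /**
     * Estimate of free heap after a hypothetical full GC ("defrag"):
     * the headroom up to the max heap plus memory currently free.
     * Note: without -Xmx, maxMemory() may report a huge default,
     * which would inflate this estimate.
     */
    static long freeHeapWithDefrag() {
        Runtime rt = Runtime.getRuntime();
        return rt.maxMemory() - rt.totalMemory() + rt.freeMemory();
    }

    /** Size managed memory as a fixed fraction of a free-heap estimate. */
    static long managedMemorySize(long freeHeapBytes) {
        return (long) (freeHeapBytes * MANAGED_MEMORY_FRACTION);
    }

    public static void main(String[] args) {
        long free = freeHeapWithDefrag();
        System.out.println("free heap estimate: " + free + " bytes");
        System.out.println("managed memory:     " + managedMemorySize(free) + " bytes");
    }
}
```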

This message was sent by Atlassian JIRA
