flink-issues mailing list archives

From "Zhenqiu Huang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-12342) Yarn Resource Manager Acquires Too Many Containers
Date Wed, 01 May 2019 05:47:00 GMT

    [ https://issues.apache.org/jira/browse/FLINK-12342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16830875#comment-16830875 ]

Zhenqiu Huang commented on FLINK-12342:

After reading the AMRMClientAsync code, I found that the resource requests are actually sent
on each heartbeat. If the existing N pending requests have not been removed yet, a newly added
request is sent to the RM as N + 1. For the issue we observe, I think the cause is that
FAST_YARN_HEARTBEAT_INTERVAL_MS = 500 is used during the resource allocation triggered by the
SlotManager. Somehow, the number of containers allocated within 500 milliseconds is always less
than the number of pending requests, so each fast heartbeat asks for an extra number of containers.
If we change FAST_YARN_HEARTBEAT_INTERVAL_MS to 2000 ms and wait for more containers to be returned
before sending another heartbeat, we can definitely reduce the total number of requested containers.
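
To illustrate how the total snowballs, here is a toy simulation (all numbers are illustrative
assumptions, not measurements, and this is not actual AMRMClient code): if the job needs 256
containers and YARN returns roughly 10 per 500 ms heartbeat while every beat re-sends the stale
asks plus the new ones, the total ask lands in the same ballpark as the 4000+ requests reported
in the issue description below.

{code:java}
// Toy simulation of the over-asking behavior; numbers are assumptions.
public class FastHeartbeatOverAsk {
    public static void main(String[] args) {
        int needed = 256;   // containers the job actually needs
        int perBeat = 10;   // assumed containers YARN returns per 500 ms beat
        int received = 0;
        long totalAsked = 0;
        int beats = 0;
        while (received < needed) {
            int ask = needed - received; // everything still missing is re-asked
            totalAsked += ask;
            received += Math.min(perBeat, ask);
            beats++;
        }
        // Prints: beats=26 totalAsked=3406 (job needed 256)
        System.out.printf("beats=%d totalAsked=%d (job needed %d)%n",
                beats, totalAsked, needed);
    }
}
{code}

With a longer heartbeat interval, more containers come back per beat under the same assumptions,
so far fewer stale asks are re-sent.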

Thus, the solution I would like to propose is to make FAST_YARN_HEARTBEAT_INTERVAL_MS one of the
YarnConfigOptions, so that the parameter can be tuned according to the size of the job/cluster.
What do you think?
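
A minimal sketch of what such an option could look like (the key name, default, and wiring are
my assumptions, not a final API):

{code:java}
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

public class YarnConfigOptionsSketch {
    // Hypothetical key; the 500 ms default mirrors the current hard-coded constant.
    public static final ConfigOption<Integer> FAST_HEARTBEAT_INTERVAL_MS =
        ConfigOptions.key("yarn.heartbeat.fast-interval-ms")
            .defaultValue(500)
            .withDescription(
                "Interval in milliseconds between heartbeats to the YARN "
                    + "ResourceManager while container requests are pending. "
                    + "Larger values give YARN more time to allocate containers "
                    + "per heartbeat and reduce duplicate requests.");
}
{code}

The YarnResourceManager would then read this value from the Flink configuration at startup
instead of using the hard-coded constant.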

> Yarn Resource Manager Acquires Too Many Containers
> --------------------------------------------------
>                 Key: FLINK-12342
>                 URL: https://issues.apache.org/jira/browse/FLINK-12342
>             Project: Flink
>          Issue Type: Bug
>          Components: Deployment / YARN
>    Affects Versions: 1.6.4, 1.7.2, 1.8.0
>            Environment: We run jobs on Flink release 1.6.3.
>            Reporter: Zhenqiu Huang
>            Assignee: Zhenqiu Huang
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: Screen Shot 2019-04-29 at 12.06.23 AM.png, container.log, flink-1.4.png,
>          Time Spent: 10m
>  Remaining Estimate: 0h
> In the current implementation of YarnFlinkResourceManager, it starts to acquire new containers
> one by one when it gets requests from the SlotManager. The mechanism works when the job is small,
> say fewer than 32 containers. If the job has 256 containers, containers can't be allocated
> immediately, and the pending requests in AMRMClient are not removed accordingly. We observe that
> AMRMClient asks for the current pending requests + 1 (the new request from the slot manager)
> containers. In this way, during the startup of such a job, it asked for 4000+ containers. If an
> external dependency issue happens, for example HDFS access is slow, then the whole job will be
> blocked without getting enough resources and finally killed by a SlotManager request timeout.
> Thus, we should use the total number of containers asked for, rather than the pending requests
> in AMRMClient, as the threshold to decide whether we need to add one more resource request
> (see the sketch below).
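> A hedged sketch of such bookkeeping (class, method names, and resource sizes are illustrative,
> not Flink's actual code): track the outstanding asks ourselves and remove each fulfilled ask
> from AMRMClient, so stale requests are not re-sent on later heartbeats.
>
> {code:java}
> import java.util.ArrayDeque;
> import java.util.Deque;
> import java.util.List;
>
> import org.apache.hadoop.yarn.api.records.Container;
> import org.apache.hadoop.yarn.api.records.Priority;
> import org.apache.hadoop.yarn.api.records.Resource;
> import org.apache.hadoop.yarn.client.api.AMRMClient;
> import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
>
> public class ContainerRequestBookkeeping {
>     private final AMRMClient<ContainerRequest> amrmClient = AMRMClient.createAMRMClient();
>     private final Deque<ContainerRequest> outstanding = new ArrayDeque<>();
>
>     /** Called when the SlotManager needs one more TaskManager container. */
>     public void requestContainer() {
>         ContainerRequest request = new ContainerRequest(
>             Resource.newInstance(4096, 2), null, null, Priority.newInstance(1));
>         outstanding.addLast(request);
>         amrmClient.addContainerRequest(request);
>     }
>
>     /** Called from the AMRMClientAsync callback when YARN allocates containers. */
>     public void onContainersAllocated(List<Container> containers) {
>         for (Container container : containers) {
>             ContainerRequest fulfilled = outstanding.pollFirst();
>             if (fulfilled != null) {
>                 // Without this, the ask stays in the client's table and is
>                 // re-sent on every heartbeat.
>                 amrmClient.removeContainerRequest(fulfilled);
>             } else {
>                 // Surplus container we never asked for: give it back.
>                 amrmClient.releaseAssignedContainer(container.getId());
>             }
>         }
>     }
> }
> {code}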

This message was sent by Atlassian JIRA
