flink-issues mailing list archives

From "Till Rohrmann (JIRA)" <j...@apache.org>
Subject [jira] [Reopened] (FLINK-7851) Improve scheduling balance in case of fewer sub tasks than input operator
Date Fri, 16 Mar 2018 09:41:00 GMT

     [ https://issues.apache.org/jira/browse/FLINK-7851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Till Rohrmann reopened FLINK-7851:
----------------------------------

[~pnowojski] and [~NicoK] reported that this commit subtly influences how tasks are co-located
with respect to their number of assigned key groups. When the number of key groups is not
divisible by the number of tasks, the tasks with +1 key groups can end up co-located. This
leads to a higher load for the TM on which these tasks run and thus a deterioration of overall
throughput.
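A minimal sketch of the effect (hedged: plain Java, not Flink's actual key group assignment
code; the number of key groups and tasks are illustrative):
{code:java}
public class KeyGroupSkewSketch {
    public static void main(String[] args) {
        int keyGroups = 10;   // assumed number of key groups (max parallelism)
        int parallelism = 4;  // number of parallel sub tasks

        int base = keyGroups / parallelism;   // every sub task gets at least this many
        int extra = keyGroups % parallelism;  // this many sub tasks get one more

        for (int subtask = 0; subtask < parallelism; subtask++) {
            int assigned = base + (subtask < extra ? 1 : 0);
            System.out.printf("sub task %d -> %d key groups%n", subtask, assigned);
        }
        // Two sub tasks end up with 3 key groups and two with 2. If the two "+1"
        // sub tasks are co-located on the same TaskManager, that TM carries the
        // extra load described above.
    }
}
{code}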

 

We should revert the change in 1.5 and 1.6.

> Improve scheduling balance in case of fewer sub tasks than input operator
> -------------------------------------------------------------------------
>
>                 Key: FLINK-7851
>                 URL: https://issues.apache.org/jira/browse/FLINK-7851
>             Project: Flink
>          Issue Type: Improvement
>          Components: Distributed Coordination
>    Affects Versions: 1.4.0, 1.3.2
>            Reporter: Till Rohrmann
>            Assignee: Till Rohrmann
>            Priority: Major
>             Fix For: 1.5.0
>
>
> When a job has a mapper {{m1}} running with dop {{n}}, followed by a {{keyBy}} and a mapper
> {{m2}} (all-to-all communication) running with dop {{m}} where {{n > m}}, the sub tasks of
> {{m2}} are not uniformly spread out across all currently used {{TaskManagers}}.
> For example: {{n = 4}}, {{m = 2}} and we have 2 TaskManagers with 2 slots each. The deployment
> would look as follows:
> TM1: 
> Slot 1: {{m1_1}} -> {{m2_1}}
> Slot 2: {{m1_3}} -> {{m2_2}}
> TM2:
> Slot 1: {{m1_2}}
> Slot 2: {{m1_4}}
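> A minimal DataStream sketch of a job with this shape (hedged: the source elements, key selector
> and map functions are placeholders, not taken from this issue; only the parallelisms {{n = 4}}
> and {{m = 2}} matter):
> {code:java}
> import org.apache.flink.api.common.functions.MapFunction;
> import org.apache.flink.api.java.functions.KeySelector;
> import org.apache.flink.streaming.api.datastream.DataStream;
> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
>
> public class SchedulingBalanceSketch {
>     public static void main(String[] args) throws Exception {
>         StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>
>         DataStream<Integer> source = env.fromElements(1, 2, 3, 4, 5, 6, 7, 8);
>
>         source
>             .map(new MapFunction<Integer, Integer>() {      // m1 with dop n
>                 @Override
>                 public Integer map(Integer value) {
>                     return value;
>                 }
>             })
>             .setParallelism(4)
>             .keyBy(new KeySelector<Integer, Integer>() {    // all-to-all repartitioning
>                 @Override
>                 public Integer getKey(Integer value) {
>                     return value % 2;
>                 }
>             })
>             .map(new MapFunction<Integer, Integer>() {      // m2 with dop m
>                 @Override
>                 public Integer map(Integer value) {
>                     return value;
>                 }
>             })
>             .setParallelism(2)
>             .print();
>
>         env.execute("FLINK-7851 scheduling balance sketch");
>     }
> }
> {code}
> With 2 TaskManagers of 2 slots each, the two {{m2}} sub tasks should ideally end up on different
> TaskManagers instead of both on TM1 as shown above.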
> The cause of this behaviour is that when there are too many preferred locations (currently
> 8) due to an all-to-all communication pattern, we simply poll the next slot from the MultiMap
> in {{SlotSharingGroupAssignment}}. The polling algorithm first drains all available slots
> of a single machine before it polls slots from another machine.
> I think it would be better to poll slots in a round-robin fashion with respect to the machines
> (see the sketch below). That way we would get better resource utilisation by spreading the
> tasks more evenly.
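> A minimal sketch of the proposed polling order (hedged: plain Java, not the actual
> {{SlotSharingGroupAssignment}} code; the host and slot names are made up):
> {code:java}
> import java.util.ArrayDeque;
> import java.util.ArrayList;
> import java.util.Arrays;
> import java.util.Deque;
> import java.util.LinkedHashMap;
> import java.util.List;
> import java.util.Map;
>
> public class RoundRobinSlotPolling {
>
>     /** Takes one slot per host in turn instead of draining a host completely first. */
>     static <S> List<S> pollRoundRobin(Map<String, Deque<S>> slotsByHost) {
>         List<S> order = new ArrayList<>();
>         boolean polledSomething = true;
>         while (polledSomething) {
>             polledSomething = false;
>             for (Deque<S> hostSlots : slotsByHost.values()) {
>                 S slot = hostSlots.poll();
>                 if (slot != null) {
>                     order.add(slot);
>                     polledSomething = true;
>                 }
>             }
>         }
>         return order;
>     }
>
>     public static void main(String[] args) {
>         Map<String, Deque<String>> slotsByHost = new LinkedHashMap<>();
>         slotsByHost.put("tm1", new ArrayDeque<>(Arrays.asList("tm1-slot1", "tm1-slot2")));
>         slotsByHost.put("tm2", new ArrayDeque<>(Arrays.asList("tm2-slot1", "tm2-slot2")));
>
>         // Prints [tm1-slot1, tm2-slot1, tm1-slot2, tm2-slot2]: the first two slots
>         // come from different machines, so the two m2 sub tasks would be spread out.
>         System.out.println(pollRoundRobin(slotsByHost));
>     }
> }
> {code}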



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
