beam-commits mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Work logged] (BEAM-4783) Spark SourceRDD Not Designed With Dynamic Allocation In Mind
Date Fri, 14 Sep 2018 12:52:00 GMT

     [ https://issues.apache.org/jira/browse/BEAM-4783?focusedWorklogId=144271&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-144271 ]

ASF GitHub Bot logged work on BEAM-4783:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 14/Sep/18 12:51
            Start Date: 14/Sep/18 12:51
    Worklog Time Spent: 10m 
      Work Description: iemejia commented on issue #6181: [BEAM-4783] Add bundleSize for splitting BoundedSources.
URL: https://github.com/apache/beam/pull/6181#issuecomment-421348635
 
 
   Hi, I have not forgotten about this one (sorry for the delay). The default parallelism is calculated to use the ‘optimal’ number of cores and I think it is a reasonable default (it maximizes core utilization, in particular for streaming). I prefer not to change this until we have a better way to replace the default value (if you have any suggestions on how to do this with the new approach, they are welcome).
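   As a minimal sketch of the default being discussed (the local[4] master,
   app name, and class name are illustrative assumptions, not part of this
   discussion):

       import org.apache.spark.SparkConf;
       import org.apache.spark.api.java.JavaSparkContext;

       public class DefaultParallelismDemo {
         public static void main(String[] args) {
           // local[4] yields a default parallelism of 4; on YARN/SparkDeploy it is
           // max(totalCores, 2), and "spark.default.parallelism" overrides everything.
           SparkConf conf = new SparkConf().setMaster("local[4]").setAppName("parallelism-demo");
           try (JavaSparkContext sc = new JavaSparkContext(conf)) {
             System.out.println("defaultParallelism = " + sc.defaultParallelism());
           }
         }
       }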
   
   I want to include your changes, but not as the default for the moment; rather as an ‘alternative’ that is only applied if the user sets the bundle size (we have to document the partitioner change and mark this method @Experimental). This way we can evaluate whether the double shuffle happens or not, and eventually whether the performance advantages justify making this behavior the default. WDYT?
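   A rough sketch of what such an opt-in option could look like (the
   BundleSizeOptions interface name and the 0-means-disabled default are
   assumptions for illustration, not the contents of PR #6181):

       import org.apache.beam.sdk.annotations.Experimental;
       import org.apache.beam.sdk.options.Default;
       import org.apache.beam.sdk.options.Description;
       import org.apache.beam.sdk.options.PipelineOptions;

       // Hypothetical options interface: 0 keeps the default-parallelism behavior,
       // any positive value opts into size-based splitting of BoundedSources.
       public interface BundleSizeOptions extends PipelineOptions {
         @Experimental
         @Description("Desired bundle size in bytes used to split BoundedSources (0 = disabled).")
         @Default.Long(0)
         Long getBundleSize();

         void setBundleSize(Long value);
       }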
   
   Beam’s design philosophy has always been to reduce ‘knobs’ to a minimum, but I understand that with Spark this might sometimes be needed.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 144271)
    Time Spent: 1h 50m  (was: 1h 40m)

> Spark SourceRDD Not Designed With Dynamic Allocation In Mind
> ------------------------------------------------------------
>
>                 Key: BEAM-4783
>                 URL: https://issues.apache.org/jira/browse/BEAM-4783
>             Project: Beam
>          Issue Type: Improvement
>          Components: runner-spark
>    Affects Versions: 2.5.0
>            Reporter: Kyle Winkelman
>            Assignee: Jean-Baptiste Onofré
>            Priority: Major
>              Labels: newbie
>          Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> When the spark-runner is used with the configuration spark.dynamicAllocation.enabled=true, the SourceRDD does not detect this. It then falls back to the value described in this code comment:
>       // when running on YARN/SparkDeploy it's the result of max(totalCores, 2).
>       // when running on Mesos it's 8.
>       // when running local it's the total number of cores (local = 1, local[N] = N,
>       // local[*] = estimation of the machine's cores).
>       // ** the configuration "spark.default.parallelism" takes precedence over all of the above **
> So in most cases this default is quite small. This is an issue when using a very large input file, as it will only get split in half.
> I believe that when Dynamic Allocation is enabled, the SourceRDD should use the DEFAULT_BUNDLE_SIZE, and possibly expose a SparkPipelineOptions setting that allows you to change this DEFAULT_BUNDLE_SIZE.
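
A minimal sketch of the proposed size-based fallback, using BoundedSource.split
from the Beam SDK (the helper name and the division by the default parallelism
are illustrative assumptions, not the actual SourceRDD code):

    import java.util.List;
    import org.apache.beam.sdk.io.BoundedSource;
    import org.apache.beam.sdk.options.PipelineOptions;

    public class SourceSplitSketch {
      // Split with an explicit bundle size when one is set; otherwise spread the
      // estimated input size across the default parallelism (the fallback the
      // issue describes as too coarse under dynamic allocation).
      static <T> List<? extends BoundedSource<T>> split(
          BoundedSource<T> source,
          PipelineOptions options,
          long bundleSize,
          int defaultParallelism)
          throws Exception {
        long desiredBundleSizeBytes =
            bundleSize > 0
                ? bundleSize
                : Math.max(1, source.getEstimatedSizeBytes(options) / defaultParallelism);
        return source.split(desiredBundleSizeBytes, options);
      }
    }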



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
