spark-issues mailing list archives

From "Thomas Graves (Jira)" <>
Subject [jira] [Commented] (SPARK-28859) Remove value check of MEMORY_OFFHEAP_SIZE in declaration section
Date Thu, 10 Oct 2019 14:40:00 GMT


Thomas Graves commented on SPARK-28859:

I wouldn't expect users to specify the size when enabled is false. If they do specify a size while it's false,
I guess it's OK for it to be 0, but I'm not sure we really need to special-case this.

A default of 0 is fine; that is why I said that if the user specifies a value it should be > 0,
but I haven't looked to see when the ConfigEntry does the validation. If it validates
the default value then we can't change it, or the validator needs to change. That is what this Jira
is to investigate. Taking a skim of the code, it looks like the validator only runs on
non-default values.
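The behavior described above can be sketched in a few lines of Scala. This is not Spark's actual ConfigEntry implementation, just a hypothetical stand-in showing why a default of 0 can coexist with a "> 0" validator: the check is applied only to values a user explicitly sets, while the built-in default bypasses it.

```scala
// Hypothetical stand-in for a Spark ConfigEntry (not the real API):
// the validator runs only on explicitly set values, never on the default.
case class ConfEntry[T](key: String, default: T, validator: T => Boolean) {
  def resolve(userValue: Option[T]): T = userValue match {
    case Some(v) =>
      // Explicitly set values must pass the validator.
      require(validator(v), s"$key failed validation, got $v")
      v
    case None =>
      // The default is returned as-is, without validation.
      default
  }
}

object OffHeapDemo {
  // Mimics MEMORY_OFFHEAP_SIZE: default 0, explicit values must be > 0.
  val offHeapSize = ConfEntry[Long]("spark.memory.offHeap.size", 0L, _ > 0)

  def main(args: Array[String]): Unit = {
    assert(offHeapSize.resolve(None) == 0L)           // default 0 is accepted
    assert(offHeapSize.resolve(Some(1024L)) == 1024L) // explicit positive value passes
    // offHeapSize.resolve(Some(0L)) would throw: an explicit 0 fails the check
  }
}
```

Under this assumption, the validator on MEMORY_OFFHEAP_SIZE could be tightened to require > 0 without breaking the default of 0, which is the question the Jira is meant to settle.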

> Remove value check of MEMORY_OFFHEAP_SIZE in declaration section
> ----------------------------------------------------------------
>                 Key: SPARK-28859
>                 URL:
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 3.0.0
>            Reporter: Yang Jie
>            Assignee: yifan
>            Priority: Minor
> Now MEMORY_OFFHEAP_SIZE has a default value of 0, but it should be greater than 0 when
> MEMORY_OFFHEAP_ENABLED is true. Should we check this condition in code?
> SPARK-28577 adds this check before requesting memory resources from YARN.

This message was sent by Atlassian Jira
