spark-issues mailing list archives

From "oskarryn (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (SPARK-26863) Add minimal values for spark.driver.memory and spark.executor.memory
Date Wed, 13 Feb 2019 13:10:00 GMT

     [ https://issues.apache.org/jira/browse/SPARK-26863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

oskarryn updated SPARK-26863:
-----------------------------
    Description: 
I propose changing `1g` to `1g, with a minimum of 472m` in the "Default" column for the spark.driver.memory
and spark.executor.memory properties in [Application Properties](https://spark.apache.org/docs/latest/configuration.html#application-properties).

Reasoning:

In UnifiedMemoryManager.scala I see the definition of RESERVED_SYSTEM_MEMORY_BYTES:

{code:scala}
// Set aside a fixed amount of memory for non-storage, non-execution purposes.
// This serves a function similar to `spark.memory.fraction`, but guarantees that we reserve
// sufficient memory for the system even for small heaps. E.g. if we have a 1GB JVM, then
// the memory used for execution and storage will be (1024 - 300) * 0.6 = 434MB by default.
private val RESERVED_SYSTEM_MEMORY_BYTES = 300 * 1024 * 1024
{code}

Then `reservedMemory` takes on this value, and `minSystemMemory` is defined as:
{code:scala}
val minSystemMemory = (reservedMemory * 1.5).ceil.toLong
{code}
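For concreteness, the threshold this produces works out as follows (my own arithmetic, not a quote from the source):
{code:scala}
// reservedMemory  = RESERVED_SYSTEM_MEMORY_BYTES = 300 * 1024 * 1024 = 314572800 bytes (300 MiB)
// minSystemMemory = ceil(314572800 * 1.5)        = 471859200 bytes   (450 MiB, i.e. ~472 MB)
val minSystemMemory = (300L * 1024 * 1024 * 1.5).ceil.toLong  // 471859200
{code}
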
Consequently, the driver heap size and executor memory are checked against minSystemMemory
(471859200 bytes), and an IllegalArgumentException is thrown if they fall below it. It seems that 472MB
is the absolute minimum for spark.driver.memory and spark.executor.memory.
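
For reference, the surrounding check looks roughly like this (a simplified paraphrase of UnifiedMemoryManager.getMaxMemory from memory, not an exact copy of the source):
{code:scala}
// systemMemory normally comes from Runtime.getRuntime.maxMemory on the driver
val systemMemory = conf.getLong("spark.testing.memory", Runtime.getRuntime.maxMemory)
val minSystemMemory = (reservedMemory * 1.5).ceil.toLong
if (systemMemory < minSystemMemory) {
  throw new IllegalArgumentException(s"System memory $systemMemory must " +
    s"be at least $minSystemMemory. Please increase heap size using the " +
    s"--driver-memory option or spark.driver.memory in Spark configuration.")
}
// the same lower bound is applied to spark.executor.memory, if it is set
if (conf.contains("spark.executor.memory")) {
  val executorMemory = conf.getSizeAsBytes("spark.executor.memory")
  if (executorMemory < minSystemMemory) {
    throw new IllegalArgumentException(s"Executor memory $executorMemory must be at least " +
      s"$minSystemMemory.")
  }
}
{code}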

Side question: how was 472MB established as sufficient memory for small heaps? What do
I risk if I build Spark with a smaller RESERVED_SYSTEM_MEMORY_BYTES?

EDIT: I just tried setting spark.driver.memory to 472m and it turns out the systemMemory
variable was 440401920, not 471859200, so the exception persists (bug?). It only works when
spark.driver.memory is set to at least 505m, so that systemMemory >= minSystemMemory. I
don't know why this is the case.
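
One way to dig into the EDIT above (my assumption, not something verified in this ticket): if systemMemory is indeed taken from Runtime.getRuntime.maxMemory, the JVM typically reports somewhat less than the heap requested via spark.driver.memory (one survivor space is excluded, for example), which would push systemMemory below minSystemMemory even though 472m was requested. A quick probe, e.g. in a spark-shell started with --driver-memory 472m:
{code:scala}
// Compare what the JVM reports with the requested heap size; the difference is the
// overhead that makes systemMemory smaller than the configured spark.driver.memory.
val reported = Runtime.getRuntime.maxMemory   // bytes actually visible to the memory manager
println(s"Runtime.maxMemory = $reported bytes (requested 472m = ${472L * 1024 * 1024} bytes)")
{code}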

> Add minimal values for spark.driver.memory and spark.executor.memory
> --------------------------------------------------------------------
>
>                 Key: SPARK-26863
>                 URL: https://issues.apache.org/jira/browse/SPARK-26863
>             Project: Spark
>          Issue Type: Documentation
>          Components: Documentation
>    Affects Versions: 2.4.0
>            Reporter: oskarryn
>            Priority: Trivial
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

