spark-issues mailing list archives

From "Joseph K. Bradley (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-3071) Increase default driver memory
Date Tue, 03 Mar 2015 00:55:05 GMT

    [ https://issues.apache.org/jira/browse/SPARK-3071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344186#comment-14344186 ]

Joseph K. Bradley commented on SPARK-3071:
------------------------------------------

+1000 for increasing the default driver memory. The default value should be >= the amount
of memory used in unit tests (2GB).
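
Until the default changes, raising it per application works as a stopgap. Below is a minimal Scala sketch (the app name is hypothetical). Note that spark.driver.memory is read when the driver JVM launches, so in client or local mode it must come from spark-defaults.conf or from spark-submit --driver-memory 2g; a SparkConf setting like the one below only takes effect when a fresh driver JVM is forked (e.g. cluster deploy mode):

    import org.apache.spark.{SparkConf, SparkContext}

    // Stopgap sketch: raise the driver heap to the 2GB used by unit tests.
    val conf = new SparkConf()
      .setAppName("DriverMemoryExample") // hypothetical name
      .set("spark.driver.memory", "2g")
    val sc = new SparkContext(conf)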

My personal interest: this increase would fix an issue with an MLlib example in the docs,
caused by needing to save a Parquet file whose schema has a nontrivial number of columns
(about 13 when flattened): [https://issues.apache.org/jira/browse/SPARK-6120]

> Increase default driver memory
> ------------------------------
>
>                 Key: SPARK-3071
>                 URL: https://issues.apache.org/jira/browse/SPARK-3071
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Xiangrui Meng
>
> The current default is 512M, which is usually too small because users also use the driver
to do some computation. In local mode, the executor memory setting is ignored and only driver
memory is used, which is further incentive to increase the default.
> I suggest:
> 1. 2GB in local mode, with a warning if executor memory is set to a larger value
> 2. the same as the worker memory on an EC2 standalone server
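
To illustrate the local-mode point above: driver and executors share a single JVM, so only the driver's heap matters. A minimal Scala sketch (the app name is hypothetical; the equivalent launch would be spark-submit --master local[4] --driver-memory 2g my-app.jar):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("LocalModeMemoryCheck") // hypothetical name
      .setMaster("local[4]")
    val sc = new SparkContext(conf)
    // In local mode this single JVM's heap (-Xmx / --driver-memory) is the
    // only memory pool; spark.executor.memory has no effect here.
    println(s"Max heap: ${Runtime.getRuntime.maxMemory / (1024 * 1024)} MB")
    sc.stop()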




