spark-user mailing list archives

From Jim Blomo <jim.bl...@gmail.com>
Subject Re: Pyspark worker memory
Date Wed, 19 Mar 2014 07:53:55 GMT
To document this, it would be nice to clarify what environment
variables should be used to set which Java system properties, and what
type of process they affect.  I'd be happy to start a page if you can
point me to the right place:

SPARK_JAVA_OPTS:
  -Dspark.executor.memory can be set on the machine running the driver
(typically the master host) and will affect the memory available to
the Executor running on a slave node
  -D....

SPARK_DAEMON_OPTS:
  ....
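
For concreteness, a page like that could include a runnable example.
Here's a minimal PySpark sketch of the property route (this assumes
the SparkConf API from 0.9+; the master URL, app name, and memory
value below are placeholders from my setup, not recommendations):

    from pyspark import SparkConf, SparkContext

    # Build the configuration in the driver before the context is
    # created; spark.executor.memory is read when executors launch.
    conf = (SparkConf()
            .setMaster("spark://master-host:7077")   # placeholder URL
            .setAppName("memory-demo")               # placeholder name
            .set("spark.executor.memory", "5g"))     # example value
    sc = SparkContext(conf=conf)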

On Wed, Mar 19, 2014 at 12:48 AM, Jim Blomo <jim.blomo@gmail.com> wrote:
> Thanks for the suggestion, Matei.  I've tracked this down to a setting
> I had to make on the Driver.  It looks like spark-env.sh has no impact
> on the Executor, which confused me for a long while with settings like
> SPARK_EXECUTOR_MEMORY.  The only setting that mattered was setting the
> system property in the *driver* (in this case pyspark/shell.py) or
> using -Dspark.executor.memory in SPARK_JAVA_OPTS *on the master*.  I'm
> not sure how this differs from the 0.9.0 release, but it seems to
> work on SNAPSHOT.
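>
> In code, that driver-side fix was along these lines (just a sketch of
> what I did; the 5g value is what my job needed, and the call has to
> happen before the SparkContext is created in pyspark/shell.py):
>
>     from pyspark import SparkContext
>
>     # Sets a Java system property in the driver JVM; must run before
>     # any SparkContext is instantiated.
>     SparkContext.setSystemProperty("spark.executor.memory", "5g")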
>
> On Tue, Mar 18, 2014 at 11:52 PM, Matei Zaharia <matei.zaharia@gmail.com> wrote:
>> Try checking spark-env.sh on the workers as well. Maybe code there is
>> somehow overriding the spark.executor.memory setting.
>>
>> Matei
>>
>> On Mar 18, 2014, at 6:17 PM, Jim Blomo <jim.blomo@gmail.com> wrote:
>>
>> Hello, I'm using the GitHub snapshot of PySpark and having trouble setting
>> the worker memory correctly. I've set spark.executor.memory to 5g, but
>> somewhere along the way -Xmx is getting capped to 512M. This was not
>> occurring with the same setup on 0.9.0. How many places do I need to
>> configure the memory? Thank you!
