spark-user mailing list archives

From Sandy Ryza <sandy.r...@cloudera.com>
Subject Re: Boosting spark.yarn.executor.memoryOverhead
Date Tue, 11 Aug 2015 23:41:36 GMT
Hi Eric,

This is likely because you are putting the parameter after the primary
resource (latest_msmtdt_by_gridid_and_source.py), which makes it a
parameter to your application instead of a parameter to Spark.
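In other words, spark-submit treats everything after the primary resource as application arguments, so all Spark options need to come first. Reordering your command along these lines should work (the host/table arguments are just the ones from your original invocation):

```shell
# Spark options (--jars, --conf) must appear BEFORE the primary resource;
# anything after the .py file is passed to the application itself.
spark-submit \
  --jars examples.jar \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  latest_msmtdt_by_gridid_and_source.py host table
```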

-Sandy

On Wed, Aug 12, 2015 at 4:40 AM, Eric Bless <eric.bless@yahoo.com.invalid>
wrote:

> Previously I was getting a failure which included the message
>     Container killed by YARN for exceeding memory limits. 2.1 GB of 2 GB
> physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
>
> So I attempted the following -
>     spark-submit --jars examples.jar latest_msmtdt_by_gridid_and_source.py
> --conf spark.yarn.executor.memoryOverhead=1024 host table
>
> This resulted in -
>     Application application_1438983806434_24070 failed 2 times due to AM
> Container for appattempt_1438983806434_24070_000002 exited with exitCode:
> -1000
>
> Am I specifying the spark.yarn.executor.memoryOverhead incorrectly?
>
>
