spark-user mailing list archives

From Mich Talebzadeh <mich.talebza...@gmail.com>
Subject Re: Standalone executor memory is fixed while executor cores are load balanced between workers
Date Thu, 18 Aug 2016 14:18:22 GMT
Can you provide some more info?

In your conf/spark-env.sh, what do you set these to?

# Options for the daemons used in the standalone deploy mode
SPARK_WORKER_CORES=?       # total number of cores to be used by executors on each worker
SPARK_WORKER_MEMORY=?g     # total memory workers have to give executors (e.g. 1000m, 2g)
SPARK_WORKER_INSTANCES=?   # number of worker processes per node
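
For reference, a filled-in spark-env.sh along those lines might look like the sketch below. The node size and values are assumptions for illustration only, not recommendations:

```shell
# Hypothetical spark-env.sh for a 16-core / 32 GB worker node
# (all values are example assumptions, not tuning advice).
SPARK_WORKER_CORES=16        # cores this worker offers to executors
SPARK_WORKER_MEMORY=30g      # memory offered to executors (leave headroom for the OS)
SPARK_WORKER_INSTANCES=1     # worker processes per node
```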



Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 18 August 2016 at 15:06, Petr Novak <oss.mlists@gmail.com> wrote:

> Hello,
> when I set spark.executor.cores to e.g. 8 cores and spark.executor.memory
> to 8 GB, Spark can allocate more executors with fewer cores each for my app,
> but each executor still gets 8 GB RAM.
>
> This is a problem because I can end up allocating more memory across the
> cluster than expected; the worst case is 8x 1-core executors, each with
> 8 GB => 64 GB RAM, instead of the roughly 8 GB my app needs.
>
> If I set spark.executor.memory to some lower amount, then I can end
> up with fewer executors, even a single one (if other nodes are full), which
> wouldn't have enough memory. I don't know how to configure executor memory
> in a predictable way.
>
> The only predictable way we found is to set spark.executor.cores to 1
> and divide the memory required by the app by spark.cores.max. But running
> many JVMs as small executors doesn't look optimal to me.
>
> Is this a known issue, or am I missing something?
>
> Many thanks,
> Petr
>
>
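
[Editor's note: the arithmetic behind the workaround Petr describes can be sketched as follows. The budget and core count are assumed example values, and the spark-submit flags shown in comments are standard Spark configuration properties applied to this hypothetical setup:]

```shell
# Workaround sketch: fix 1 core per executor and derive per-executor
# memory from the app's total memory budget divided by spark.cores.max.
TOTAL_APP_MEMORY_GB=8   # assumed total memory the app needs across the cluster
MAX_CORES=8             # assumed value planned for spark.cores.max
PER_EXECUTOR_GB=$(( TOTAL_APP_MEMORY_GB / MAX_CORES ))
echo "Per-executor memory: ${PER_EXECUTOR_GB}g"
# The submit command would then carry:
#   --conf spark.executor.cores=1 \
#   --conf spark.cores.max=${MAX_CORES} \
#   --conf spark.executor.memory=${PER_EXECUTOR_GB}g
```

With these example numbers each single-core executor gets 1 GB, so even in the worst case of 8 separate executors the app's total stays at the intended 8 GB.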
