spark-user mailing list archives

From Petr Novak <oss.mli...@gmail.com>
Subject Standalone executor memory is fixed while executor cores are load balanced between workers
Date Thu, 18 Aug 2016 14:06:21 GMT
Hello,
when I set spark.executor.cores to e.g. 8 and spark.executor.memory to 8GB,
the standalone scheduler can allocate more executors with fewer cores each
for my app, but every executor still gets the full 8GB of RAM.
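
For concreteness, the submission looks roughly like this (master URL, class
and jar name are just placeholders):

  spark-submit \
    --master spark://master:7077 \
    --class com.example.MyApp \
    --conf spark.executor.cores=8 \
    --conf spark.executor.memory=8g \
    my-app.jar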

This is a problem because I can end up allocating more memory across the
cluster than expected; the worst case is 8x 1-core executors, each with 8GB
=> 64GB of RAM, instead of the roughly 8GB my app actually needs.

If I set spark.executor.memory to some lower amount, I can end up with fewer
executors, even a single one (if other nodes are full), which wouldn't have
enough memory. I don't know how to configure executor memory in a
predictable way.

The only predictable way we have found is to set spark.executor.cores to 1
and divide the memory required for the app by spark.cores.max. But having
many JVMs for small executors doesn't look optimal to me.
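
To make the arithmetic concrete: with spark.cores.max=8 and ~8GB needed for
the whole app, that works out to 1GB per 1-core executor, so the submit looks
roughly like this (again, master URL, class and jar name are placeholders):

  spark-submit \
    --master spark://master:7077 \
    --class com.example.MyApp \
    --conf spark.cores.max=8 \
    --conf spark.executor.cores=1 \
    --conf spark.executor.memory=1g \
    my-app.jar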

Is this a known issue, or am I missing something?

Many thanks,
Petr
