spark-user mailing list archives

From Anton Puzanov <antonpuzdeve...@gmail.com>
Subject Spark dynamic allocation with special executor configuration
Date Tue, 26 Feb 2019 06:28:35 GMT
Hello everyone,

Spark has a dynamic resource allocation scheme in which the cluster
manager automatically adds executors to an application when resources
become available.
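
For reference, I enable it roughly like this (a sketch using the
standard configuration keys; the app name and executor bounds are just
examples):

    import org.apache.spark.sql.SparkSession

    // Sketch: enabling dynamic allocation. The external shuffle service
    // must be running on the workers so executors can be removed safely.
    val spark = SparkSession.builder()
      .appName("dyn-alloc-example") // hypothetical app name
      .config("spark.dynamicAllocation.enabled", "true")
      .config("spark.shuffle.service.enabled", "true")
      .config("spark.dynamicAllocation.minExecutors", "1")  // example bound
      .config("spark.dynamicAllocation.maxExecutors", "10") // example bound
      .getOrCreate()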

By default, an executor allocates the entire worker node it runs on,
but this is configurable. My question is: if an executor is configured
to use half of a worker node, is it possible that Spark will spawn two
executors belonging to the same application on the same worker node?
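
Concretely, I mean a setup like this (a sketch; the worker size is
hypothetical, say 16 cores and 64g of memory, so each executor asks
for roughly half):

    import org.apache.spark.sql.SparkSession

    // Hypothetical worker node: 16 cores, 64g memory.
    // Each executor requests about half of the node.
    val spark = SparkSession.builder()
      .appName("half-node-executors") // hypothetical app name
      .config("spark.executor.cores", "8")
      .config("spark.executor.memory", "28g") // below half, leaving room for overhead
      .getOrCreate()

With this configuration, would Spark ever place two such executors of
the same application on one worker?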

Thanks,
Anton.
