spark-user mailing list archives

From Akshat Aranya <aara...@gmail.com>
Subject Re: Relation between worker memory and executor memory in standalone mode
Date Wed, 01 Oct 2014 18:49:40 GMT
On Wed, Oct 1, 2014 at 11:33 AM, Akshat Aranya <aaranya@gmail.com> wrote:

>
>
> On Wed, Oct 1, 2014 at 11:00 AM, Boromir Widas <vcsubsvc@gmail.com> wrote:
>
>> 1. worker memory caps executor.
>> 2. With default config, every job gets one executor per worker. This
>> executor runs with all cores available to the worker.
>>
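The worker-side caps in points 1 and 2 come from the worker's own configuration, which is typically set in conf/spark-env.sh on each worker machine (variable names are from the standalone-mode docs; the values here are illustrative):

```shell
# conf/spark-env.sh -- worker-side limits (values illustrative)
SPARK_WORKER_MEMORY=16g  # total memory this worker may grant to executors
SPARK_WORKER_CORES=8     # total cores this worker may grant to executors
```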
>>
> By "job" do you mean one SparkContext, or one stage execution within a
> program?  Does that also mean that two concurrent jobs will each get
> one executor at the same time?
>

Experimenting with this some more, I figured out that an executor takes
away "spark.executor.memory" worth of memory from the configured worker
memory.  It also takes up all of the worker's cores, so even if there is
still some memory left over, there are no cores left for starting another
executor.  Is my assessment correct?  Is there no way to configure the
number of cores that an executor can use?
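For what it's worth, the application-side knobs I can find in the standalone docs are spark.executor.memory (per-executor heap) and spark.cores.max (a cap on the total cores the whole application claims, not a per-executor setting).  A sketch of how they can be passed via spark-submit; the master URL, class, and jar are placeholders:

```shell
# Sketch (standalone mode; master URL, class, and jar are placeholders).
# --executor-memory sets spark.executor.memory (per-executor heap);
# --total-executor-cores sets spark.cores.max, capping the cores this
# application claims so other applications can use the remainder.
spark-submit \
  --master spark://master:7077 \
  --executor-memory 4g \
  --total-executor-cores 4 \
  --class com.example.MyApp myapp.jar
```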


>
>>
>> On Wed, Oct 1, 2014 at 11:04 AM, Akshat Aranya <aaranya@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> What's the relationship between Spark worker and executor memory
>>> settings in standalone mode?  Do they work independently or does the worker
>>> cap executor memory?
>>>
>>> Also, is the number of concurrent executors per worker capped by the
>>> number of CPU cores configured for the worker?
>>>
>>
>>
>
