spark-user mailing list archives

From Chris Teoh <chris.t...@gmail.com>
Subject Re: Request more yarn vcores than executors
Date Sun, 08 Dec 2019 09:23:45 GMT
I thought --executor-cores was the same as the other argument you mentioned.
If anything, just set --executor-cores to something greater than 1 and don't
set the other one. You'll then get a greater number of cores per executor, so
you can take on more simultaneous tasks per executor.
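
A minimal sketch of that suggestion as a spark-submit call. The 20-core figure
is carried over from the original question; spark.task.cpus is an assumption on
my part (it isn't mentioned in this thread): setting it equal to
--executor-cores means each executor runs a single task at a time while still
reserving all 20 vcores for that task's own threads.

    # Sketch only: the class and jar names are placeholders.
    # --executor-cores 20          -> 20 vcores requested per executor
    # spark.task.cpus=20 (assumed) -> each task reserves all 20 cores, so one
    #                                 partition runs per executor with 20 cores
    #                                 free for its own threads
    spark-submit \
      --master yarn \
      --executor-cores 20 \
      --conf spark.task.cpus=20 \
      --class com.example.MultiThreadedJob \
      my-job.jar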

On Sun, 8 Dec 2019, 8:16 pm jelmer, <jkuperus@gmail.com> wrote:

> I have a job, running on yarn, that uses multithreading inside of a
> mapPartitions transformation
>
> Ideally I would like to have a small number of partitions but have a high
> number of yarn vcores allocated to each task (that I can take advantage of
> because of multithreading)
>
> Is this possible?
>
> I tried running with: --executor-cores 1 --conf
> spark.yarn.executor.cores=20
> But it seems spark.yarn.executor.cores gets ignored
>
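
For context, a rough sketch (in Scala, with hypothetical names: sc,
expensiveWork) of the multithreaded mapPartitions pattern described above; the
pool size of 20 assumes the executor really has 20 vcores available:

    import java.util.concurrent.Executors
    import scala.concurrent.duration.Duration
    import scala.concurrent.{Await, ExecutionContext, Future}

    // Hypothetical input: a small number of partitions, as in the question.
    val rdd = sc.parallelize(1 to 100, numSlices = 4)

    def expensiveWork(x: Int): Int = x * 2 // stand-in for the real per-element work

    val processed = rdd.mapPartitions { iter =>
      // One local thread pool per task; 20 threads assumes 20 vcores per executor.
      val pool = Executors.newFixedThreadPool(20)
      implicit val ec: ExecutionContext = ExecutionContext.fromExecutorService(pool)
      // Materialize the partition so all futures start before we wait on any;
      // this assumes a single partition fits in memory.
      val futures = iter.map(x => Future(expensiveWork(x))).toList
      val results = futures.map(f => Await.result(f, Duration.Inf))
      pool.shutdown()
      results.iterator
    }

    processed.count() // force evaluation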
