spark-user mailing list archives

From Axel Dahl <a...@whisperstream.com>
Subject how do I execute a job on a single worker node in standalone mode
Date Mon, 17 Aug 2015 22:36:13 GMT
I have a 4-node cluster and have been playing around with the num-executors,
executor-memory, and executor-cores parameters.

I set the following:
--executor-memory=10G
--num-executors=1
--executor-cores=8
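
For reference, the flags above are passed to spark-submit roughly like this
(the master URL, class name, and jar path below are just placeholders, not my
real values):

  # master URL, class, and jar are placeholders
  spark-submit \
    --master spark://<master-host>:7077 \
    --executor-memory=10G \
    --num-executors=1 \
    --executor-cores=8 \
    --class com.example.MyJob \
    my-job.jar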

But when I run the job, I see that each worker is running one executor
with 2 cores and 2.5 GB of memory.

What I'd like to do instead is have Spark allocate the job to a single
worker node.

Is that possible in standalone mode, or do I need a resource scheduler
like YARN to do that?

Thanks in advance,

-Axel
