spark-user mailing list archives

From Aseem Bansal <asmbans...@gmail.com>
Subject Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Date Thu, 02 Mar 2017 11:04:51 GMT
I have been trying to get a basic Spark cluster up on a single machine. I know
it should be distributed, but I want to get something running before I set up
a distributed cluster in a higher environment.

So I used sbin/start-master.sh and sbin/start-slave.sh
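
Concretely, this is roughly what I ran (treat spark://<master-host>:7077 as a
placeholder; the real URL is whatever the master's log and web UI report):

# start the standalone master (it logs the spark://... URL it binds to)
./sbin/start-master.sh
# start one worker and register it with that master
./sbin/start-slave.sh spark://<master-host>:7077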

I keep getting *WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient resources*

I read up and
changed /opt/spark-2.1.0-bin-hadoop2.7/conf/spark-defaults.conf to contain
this

spark.executor.cores               2
spark.cores.max                    8
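
If it helps, my understanding is that the same limits can also be passed per
application on spark-submit instead of via spark-defaults.conf; a rough sketch
(the class name and jar path are just placeholders, not my actual app):

./bin/spark-submit \
  --master spark://<master-host>:7077 \
  --executor-cores 2 \
  --total-executor-cores 8 \
  --class com.example.MyApp /path/to/my-app.jar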

I changed /opt/spark-2.1.0-bin-hadoop2.7/conf/spark-env.sh to contain

SPARK_WORKER_CORES=4
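
If the worker's memory also needs to be declared explicitly (the warning talks
about resources generally, not just cores), I assume the file would look
something like this, where the 4g figure is only an illustration and not a
value I have actually set:

SPARK_WORKER_CORES=4
SPARK_WORKER_MEMORY=4g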

My understanding is that, after this, Spark can use up to 8 cores in total,
with the worker offering 4 cores and hence being able to support 2 executors
on that worker.
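
Spelling out my arithmetic (assuming the standalone scheduler simply packs
executors into the cores each worker advertises):

worker cores advertised (SPARK_WORKER_CORES)   = 4
cores per executor (spark.executor.cores)      = 2
executors this worker can host                 = 4 / 2 = 2
cores actually usable on this one worker       = 2 * 2 = 4
spark.cores.max = 8 is only an upper bound for the whole application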

But I still keep getting the same warning.

For my master I have
[image: Inline image 1]

For my slave I have
[image: Inline image 2]
