spark-user mailing list archives

From Marco Mistroni <mmistr...@gmail.com>
Subject Re: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Date Thu, 02 Mar 2017 11:33:13 GMT
Hi,
I have found exactly the same issue. I even have a script which simulates a
random file read.
2 nodes, 4 cores each. I am submitting code from each node passing max cores 1,
but one of the programs occupies 2 of the 4 cores and the other is in the
WAITING state.
I am creating a standalone cluster for Spark 2.0. I can send sample code if
someone can help.
Kr
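For what it's worth, in standalone mode the per-application core cap described above is normally set at submit time rather than in the worker config. A minimal sketch; the master URL and application jar below are placeholders, not taken from the thread:

```shell
# Hypothetical submission capping one standalone-mode application at a single core.
# spark://master-host:7077 and my-app.jar are placeholders.
# --total-executor-cores is the standalone-mode cap on total cores for this app.
spark-submit \
  --master spark://master-host:7077 \
  --total-executor-cores 1 \
  my-app.jar
```

If two such applications are submitted and one still grabs more cores, checking the per-application row on the master UI (port 8080) shows what each was actually granted.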

On 2 Mar 2017 11:04 am, "Aseem Bansal" <asmbansal2@gmail.com> wrote:

I have been trying to get a basic Spark cluster up on a single machine. I know
it should be distributed, but I want to get something running before I set up
a distributed cluster in a higher environment.

So I used sbin/start-master.sh and sbin/start-slave.sh
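One thing worth checking: if the worker is started without the master URL it never registers, which produces exactly this warning regardless of core settings. A minimal sketch of wiring the two together; "localhost" is an assumption for a single-machine setup:

```shell
# Start the master first; its URL is printed in the master log and shown
# on the web UI (default port 8080), typically spark://<hostname>:7077.
sbin/start-master.sh

# Point the worker at that URL explicitly so it registers with the master.
sbin/start-slave.sh spark://localhost:7077
```

The master UI should then list the worker as ALIVE; if it doesn't, no amount of core tuning will help.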

I keep on getting *WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient resources*

I read up and changed /opt/spark-2.1.0-bin-hadoop2.7/conf/spark-defaults.conf
to contain this

spark.executor.cores               2
spark.cores.max                    8

I changed /opt/spark-2.1.0-bin-hadoop2.7/conf/spark-env.sh to contain

SPARK_WORKER_CORES=4

My understanding is that after this Spark will use 8 cores in total, with the
worker using 4 cores and hence being able to support 2 executors on that
worker.
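As a sanity check on that arithmetic (illustrative shell using the numbers above; this mirrors the config values, not Spark's actual scheduler code):

```shell
# Back-of-the-envelope standalone scheduling arithmetic for the setup above.
WORKER_CORES=4      # SPARK_WORKER_CORES
EXECUTOR_CORES=2    # spark.executor.cores
CORES_MAX=8         # spark.cores.max

# Executors that fit on the single worker:
echo $((WORKER_CORES / EXECUTOR_CORES))

# Cores the application can actually be granted is capped by what workers
# offer, so spark.cores.max=8 never materializes on a single 4-core worker:
echo $((CORES_MAX < WORKER_CORES ? CORES_MAX : WORKER_CORES))
```

Note also that this warning usually means no executor could be launched at all, which often points at memory rather than cores: if spark.executor.memory exceeds what the worker advertises, zero executors start even with free cores.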

But I still keep getting the same error.

For my master I have
[image: Inline image 1]

For my slave I have
[image: Inline image 2]
