spark-user mailing list archives

From Mark Hamstra <m...@clearstorydata.com>
Subject Re: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Date Fri, 03 Mar 2017 16:03:26 GMT
Removing dev. This is a basic user question; please don't add noise to the
development list.

If your jobs are not accepting any resources, then it is almost certainly
because no resource offers are being received. Check the status of your
workers and their reachability from the driver.
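One quick way to check is the standalone master's JSON endpoint (`http://<master>:8080/json`, the same data as the web UI). A minimal sketch of reading it, using a hardcoded sample response; the field names follow the Spark 2.x standalone master's output and should be verified against your own UI:

```python
import json

# Illustrative sample of what http://<master>:8080/json returns.
# Field names (workers, state, coresfree, memoryfree) are assumptions
# based on the Spark 2.x standalone master; check your own endpoint.
sample = json.loads("""
{
  "url": "spark://master-host:7077",
  "workers": [
    {"id": "worker-20170303-1", "host": "10.0.0.2", "port": 45678,
     "cores": 4, "coresused": 4, "coresfree": 0,
     "memory": 6144, "memoryused": 6144, "memoryfree": 0,
     "state": "ALIVE"}
  ],
  "cores": 4, "coresused": 4,
  "status": "ALIVE"
}
""")

alive = [w for w in sample["workers"] if w["state"] == "ALIVE"]
if not alive:
    print("No ALIVE workers registered: check that the worker was "
          "started with the master URL the master actually logged")
elif all(w["coresfree"] == 0 or w["memoryfree"] == 0 for w in alive):
    print("Workers registered but fully allocated: another running "
          "application is holding all cores/memory")
else:
    print("Workers have free resources; look at network reachability "
          "between the driver and the workers instead")
```

With this sample the second branch fires: the worker is registered but has no free cores, which produces exactly the "has not accepted any resources" warning while the cluster UI looks healthy.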

On Fri, Mar 3, 2017 at 1:14 AM, Aseem Bansal <asmbansal2@gmail.com> wrote:

> When initial jobs have not accepted any resources, what can be wrong?
> Going through Stack Overflow and various blogs does not help. Maybe we
> need better logging for this? Adding dev
>
> On Thu, Mar 2, 2017 at 5:03 PM, Marco Mistroni <mmistroni@gmail.com>
> wrote:
>
>> Hi
>>  I have found exactly the same issue... I even have a script which
>> simulates a random file read.
>> 2 nodes, 4 cores each. I am submitting code from each node passing max
>> cores 1, but one of the programs occupies 2 of the 4 cores and the
>> other is in a waiting state.
>> I am running a standalone cluster on Spark 2.0. I can send sample code
>> if someone can help.
>> Kr
>>
>> On 2 Mar 2017 11:04 am, "Aseem Bansal" <asmbansal2@gmail.com> wrote:
>>
>> I have been trying to get a basic Spark cluster up on a single machine.
>> I know it should be distributed, but I want to get something running
>> before I do a distributed setup in a higher environment.
>>
>> So I used sbin/start-master.sh and sbin/start-slave.sh
>>
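[For reference, not from the original mail: `start-slave.sh` takes the master URL as its argument, and a mismatched URL is a common reason a worker never registers. A typical single-machine invocation, assuming the default host and port:]

```shell
# Start the master; it logs its URL (spark://<host>:7077 by default)
# and its web UI address (http://<host>:8080 by default).
./sbin/start-master.sh

# Start a worker, pointing it at the URL the master actually logged.
./sbin/start-slave.sh spark://localhost:7077

# The master web UI should now list the worker as ALIVE.
```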
>> I keep on getting *WARN TaskSchedulerImpl: Initial job has not accepted
>> any resources; check your cluster UI to ensure that workers are registered
>> and have sufficient resources*
>>
>> I read up and changed /opt/spark-2.1.0-bin-hadoop2.7/conf/spark-defaults.conf
>> to contain this
>>
>> spark.executor.cores               2
>> spark.cores.max                    8
>>
>> I changed /opt/spark-2.1.0-bin-hadoop2.7/conf/spark-env.sh to contain
>>
>> SPARK_WORKER_CORES=4
>>
>> My understanding is that after this Spark will use 8 cores in total, with
>> the worker using 4 cores and hence being able to support 2 executors on
>> that worker.
>>
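[A sanity check on the arithmetic above, using a hypothetical helper rather than any Spark API: `spark.cores.max` is an upper bound, not a requirement, and an executor must fit in a worker's free memory as well as its free cores, so the memory side (`spark.executor.memory`, 1g by default) matters just as much as the core counts quoted here.]

```python
def executors_on_worker(worker_cores, worker_mem_mb, exec_cores, exec_mem_mb):
    """Hypothetical helper mirroring the standalone master's per-worker
    packing: each executor needs exec_cores free cores AND exec_mem_mb
    free memory, so the worker hosts the smaller of the two quotients."""
    return min(worker_cores // exec_cores, worker_mem_mb // exec_mem_mb)

# The setup from the mail: SPARK_WORKER_CORES=4, spark.executor.cores=2.
# Worker memory of 4096 MB and the 1g default executor memory are assumed.
n = executors_on_worker(worker_cores=4, worker_mem_mb=4096,
                        exec_cores=2, exec_mem_mb=1024)
print(n * 2, "cores can actually be granted by this worker")

# If the worker offers less memory than one executor needs, zero
# executors launch, and the driver logs exactly this warning:
print(executors_on_worker(4, 512, 2, 1024),
      "executors fit on a 512 MB worker")
```

On this arithmetic the single 4-core worker yields at most 2 executors (4 cores), so `spark.cores.max 8` can never be fully satisfied; that alone does not block scheduling, but a worker with too little free memory for even one executor does.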
>> But I still keep getting the same error.
>>
>> For my master I have
>> [image: Inline image 1]
>>
>> For my slave I have
>> [image: Inline image 2]
>>
>>
>>
>
