spark-user mailing list archives

From Archit Thakur <archit279tha...@gmail.com>
Subject Re: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
Date Thu, 02 Jan 2014 15:43:35 GMT
Yes, it has already been set to 50g.
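
(For reference: a minimal sketch of how that property can be set from
application code, in the system-property style Spark used at the time;
the master URL and app name below are placeholders, not from the thread.)

    // Must be set before the SparkContext is created.
    System.setProperty("spark.executor.memory", "50g")
    val sc = new org.apache.spark.SparkContext(
      "spark://master:7077",  // placeholder master URL
      "MyApp")                // placeholder app name

The same property can also be passed on the JVM command line, e.g.
-Dspark.executor.memory=50g.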


On Thu, Jan 2, 2014 at 7:05 PM, Eugen Cepoi <cepoi.eugen@gmail.com> wrote:

> Did you try setting the spark.executor.memory property to the amount of
> memory you want per worker?
>
> For example spark.executor.memory=2g
>
> http://spark.incubator.apache.org/docs/latest/configuration.html
>
>
> 2014/1/2 Archit Thakur <archit279thakur@gmail.com>
>
>> Needless to mention, the workers can be seen on the UI.
>>
>>
>> On Thu, Jan 2, 2014 at 5:01 PM, Archit Thakur <archit279thakur@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I have about 5 GB of data, distributed across some 597 sequence files. My
>>> application does a flatMap on the union of the RDDs created from the
>>> individual files. The flatMap statement throws java.lang.StackOverflowError
>>> with the default stack size, so I increased the stack size to 1g (both
>>> system and JVM). Now it keeps printing "Initial job has not accepted any
>>> resources; check your cluster UI to ensure that workers are registered and
>>> have sufficient memory" in a continuous loop and does not move forward.
>>> Any ideas or suggestions would help. Archit.
>>>
>>> -Thx.
>>>
>>
>>
>
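
(For reference: a StackOverflowError when unioning several hundred RDDs is
commonly caused by the deeply nested lineage that chained binary unions
build up. Below is a minimal sketch of the alternative, passing the whole
collection to SparkContext.union at once; the paths, record types, and
transform are hypothetical stand-ins, not taken from the thread.)

    import org.apache.spark.SparkContext
    import org.apache.spark.SparkContext._

    // Placeholder master URL and app name.
    val sc = new SparkContext("spark://master:7077", "UnionSketch")

    // Hypothetical paths standing in for the 597 sequence files.
    val paths = (0 until 597).map(i => "hdfs:///data/part-%05d".format(i))
    val rdds = paths.map(p => sc.sequenceFile[String, String](p))

    // sc.union builds one flat UnionRDD instead of a chain of 597
    // nested unions, keeping the lineage (and recursion depth) shallow.
    val all = sc.union(rdds)
    val flat = all.flatMap { case (_, v) => v.split(",") }  // example transform
    println(flat.count())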
