spark-user mailing list archives

From Nathan Kronenfeld <nkronenf...@oculusinfo.com>
Subject Re: Cluster not accepting jobs
Date Fri, 06 Dec 2013 18:54:24 GMT
Never mind, I figured it out - apparently the master's hostname resolves
differently on my local machine than it does within the cluster; when I use
the IP address instead of the machine name in MASTER, it all works.
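
For anyone who runs into the same thing, this is roughly what the relevant part
of conf/spark-env.sh looks like now; the Scala path, IP address, and port below
are placeholders rather than our actual cluster values:

    # conf/spark-env.sh -- illustrative values only
    export SCALA_HOME=/usr/local/scala-2.9.3    # Spark 0.7.x builds against Scala 2.9.x
    export SPARK_MASTER_IP=10.0.0.5             # placeholder IP of the standalone master
    export SPARK_MASTER_PORT=7077               # default standalone master port

    # Pointing MASTER at the IP (rather than the machine name) sidesteps the
    # hostname resolving to a different address locally than inside the cluster.
    export MASTER=spark://10.0.0.5:7077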


On Fri, Dec 6, 2013 at 1:38 PM, Nathan Kronenfeld <nkronenfeld@oculusinfo.com> wrote:

> Hi, all.
>
> I'm trying to connect to a remote cluster from my machine, using Spark
> 0.7.3.  In conf/spark-env.sh, I've set MASTER, SCALA_HOME, SPARK_MASTER_IP,
> and SPARK_MASTER_PORT.
>
> When I try to run a job, it starts, but never gets anywhere, and I keep
> getting the following error message:
>
> 13/12/06 13:37:20 WARN cluster.ClusterScheduler: Initial job has not
> accepted any resources; check your cluster UI to ensure that workers are
> registered
>
>
> I look at the cluster UI in a browser, and it says it has 8 workers
> registered, all alive.
>
> What does this error mean?  I assume I'm missing something in the setup -
> does anyone know what?
>
> Thanks in advance,
>                        -Nathan Kronenfeld
>
>
>
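
For anyone who hits the same "Initial job has not accepted any resources"
warning even though the UI shows workers registered, a quick sanity check is to
compare how the master's hostname resolves on the submitting machine and on a
cluster node (the host names below are placeholders):

    # "spark-master" and "worker01" are placeholder host names
    getent hosts spark-master                   # resolution on the submitting machine
    ssh worker01 'getent hosts spark-master'    # resolution on a cluster node
    # If the two addresses differ, use the IP rather than the name in MASTER.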


-- 
Nathan Kronenfeld
Senior Visualization Developer
Oculus Info Inc
2 Berkeley Street, Suite 600,
Toronto, Ontario M5A 4J5
Phone:  +1-416-203-3003 x 238
Email:  nkronenfeld@oculusinfo.com
