spark-user mailing list archives

From Matei Zaharia <matei.zaha...@gmail.com>
Subject Re: Cluster not accepting jobs
Date Fri, 06 Dec 2013 19:22:58 GMT
Yeah, in general, make sure you use exactly the same “cluster URL” string shown on the
master’s web UI. There’s currently a limitation in Akka where different ways of specifying
the hostname won’t work.
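For instance (a minimal sketch with a hypothetical address): if the master's web UI banner shows `spark://192.168.1.10:7077`, that exact string should go into MASTER:

```shell
# Hypothetical values -- copy the URL exactly as the master's web UI
# displays it. Substituting a hostname for the IP shown (or vice versa)
# will fail to match under Akka, even when both resolve to the same machine.
export MASTER=spark://192.168.1.10:7077
```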

Matei

On Dec 6, 2013, at 10:54 AM, Nathan Kronenfeld <nkronenfeld@oculusinfo.com> wrote:

> Never mind, I figured it out - apparently the hostname resolved differently locally and
> within the cluster; when I use the IP address instead of the machine name in MASTER, it
> all seems to work.
> 
> 
> On Fri, Dec 6, 2013 at 1:38 PM, Nathan Kronenfeld <nkronenfeld@oculusinfo.com> wrote:
> Hi, all.
> 
> I'm trying to connect to a remote cluster from my machine, using Spark 0.7.3.  In
> conf/spark-env.sh, I've set MASTER, SCALA_HOME, SPARK_MASTER_IP, and SPARK_MASTER_PORT.
> 
> When I try to run a job, it starts but never gets anywhere, and I keep getting the
> following error message:
> 
> 13/12/06 13:37:20 WARN cluster.ClusterScheduler: Initial job has not accepted any
> resources; check your cluster UI to ensure that workers are registered
> 
> I look at the cluster UI in a browser, and it says it has 8 workers registered, all alive.
> 
> What does this error mean?  I assume I'm missing something in the setup - does anyone
> know what?
> 
> Thanks in advance,
>                        -Nathan Kronenfeld
> 
> -- 
> Nathan Kronenfeld
> Senior Visualization Developer
> Oculus Info Inc
> 2 Berkeley Street, Suite 600,
> Toronto, Ontario M5A 4J5
> Phone:  +1-416-203-3003 x 238
> Email:  nkronenfeld@oculusinfo.com

