spark-user mailing list archives

From Akhil <ak...@sigmoidanalytics.com>
Subject Re: Spark Job hangs up on multi-node cluster but passes on a single node
Date Tue, 23 Dec 2014 08:06:34 GMT
That's because somewhere in your code you have specified localhost instead of
the IP address of the machine running the service. In local mode this works
fine, because everything happens on that one machine, so connecting to
localhost reaches the service. In cluster mode, however, when you specify
localhost each worker connects to its own localhost, which doesn't have that
service running. So instead of localhost, specify the IP address (either
internal or public) of the machine that is running the service.
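
For illustration, here is a minimal Scala sketch of the pattern (the address
10.0.0.12:9999 and the app name are placeholders I made up, not values from
this thread). The point is that the map closure runs on the executors, so the
host must be reachable from every worker node, not just the driver:

    import java.net.Socket
    import org.apache.spark.{SparkConf, SparkContext}

    object ServiceHostExample {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("ServiceHostExample")
        val sc = new SparkContext(conf)

        // BAD: "localhost" resolves on each *worker*, not on the machine
        // where the service actually runs.
        // val serviceHost = "localhost"

        // GOOD: an address every worker can reach (internal or public IP).
        val serviceHost = "10.0.0.12" // hypothetical IP of the service machine
        val servicePort = 9999        // hypothetical port

        val result = sc.parallelize(1 to 4).map { i =>
          // This block executes on the executors; the connection is opened
          // from the worker node, so localhost here would be the worker.
          val socket = new Socket(serviceHost, servicePort)
          try s"partition $i connected from ${socket.getLocalAddress}"
          finally socket.close()
        }.collect()

        result.foreach(println)
        sc.stop()
      }
    }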





