spark-user mailing list archives

From Federico Ragona <federico.rag...@gmail.com>
Subject Worker never used by our Spark applications
Date Mon, 26 Jan 2015 08:58:58 GMT
Hello,
we are running Spark 1.2.0 standalone on a cluster of 4 machines, each running one Worker,
with one of them also running the Master; all of them are connected to the same HDFS
instance.
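
For context, our applications connect to the cluster through the standalone Master, roughly
like this (a minimal Scala sketch; the host name and application name are placeholders, not
our actual values):

    import org.apache.spark.{SparkConf, SparkContext}

    // Point the application at the standalone Master (default port 7077).
    val conf = new SparkConf()
      .setMaster("spark://<master-host>:7077")
      .setAppName("ExampleJob")
    val sc = new SparkContext(conf)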

Until a few days ago, all Workers were configured with

	SPARK_WORKER_MEMORY = 18G

and the jobs running on our cluster were using all of the Workers.
A few days ago, however, we added a new machine to the cluster, set up one Worker on it,
and reconfigured the machines as follows:

| machine  | SPARK_WORKER_MEMORY |
| #1       | 16G                 |
| #2       | 18G                 |
| #3       | 24G                 |
| #4       | 18G                 |
| #5 (new) | 36G                 |

Ever since we introduced this configuration change, the applications running on our cluster
have stopped using the Worker on machine #1, even though it is correctly registered with
the cluster.

I would be very grateful if anybody could explain how Spark chooses which workers to use and
why that one is not used anymore.
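
In case it is relevant, this is roughly how our applications request resources from the
cluster; the property names are the standard Spark configuration keys, but the values below
are placeholders rather than our real settings:

    import org.apache.spark.SparkConf

    // Resource-related settings the application passes along to the Master;
    // the values here are only illustrative.
    val conf = new SparkConf()
      .set("spark.executor.memory", "16g")  // memory requested per executor
      .set("spark.cores.max", "16")         // upper bound on cores used across the cluster

I am assuming these application-side settings are what gets compared against each Worker's
SPARK_WORKER_MEMORY when executors are placed, but I may well be wrong about that.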

Regards,
Federico Ragona