That will not work; I am currently facing the same dilemma. You need to pass JVM parameters such as -Xms and -Xmx to the JVMs running on the slaves. I managed to do that while running the workload locally, but I could not do it on the cluster.
Changing SPARK_DAEMON_MEMORY will have no effect; the memory used will remain 512 MB.
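For reference, in standalone mode these settings usually go in conf/spark-env.sh on each machine. The variable names below (SPARK_WORKER_MEMORY, SPARK_DAEMON_MEMORY, SPARK_JAVA_OPTS) exist in Spark of this era, but the exact values are illustrative assumptions, not a verified fix for the problem above:

```shell
# conf/spark-env.sh on each node (Spark standalone mode)
# Values below are examples only -- tune them for your machines.

# Total memory a worker may hand out to executors on this node
export SPARK_WORKER_MEMORY=4g

# Heap for the master and worker daemon JVMs themselves
# (this is the setting that defaults to 512 MB)
export SPARK_DAEMON_MEMORY=1g

# Extra JVM flags propagated to executor JVMs, e.g. initial/max heap
export SPARK_JAVA_OPTS="-Xms1g -Xmx2g"
```

Note that SPARK_DAEMON_MEMORY only sizes the master/worker daemons, not the executors that run tasks; executor heap is governed by SPARK_WORKER_MEMORY and the application's spark.executor.memory setting.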
On Thursday, January 23, 2014 11:23 PM, Manoj Samel <firstname.lastname@example.org> wrote:
I was able to increase the memory for the master.
However, my understanding is that the master only does DAG scheduling for the workers and does not do any RDD processing itself. If that is true, and since only one application was running, does the master need more than 512 MB just to schedule the DAG for 3105 tasks?