spark-user mailing list archives

From Manoj Samel <>
Subject Division of work between master, worker, executor and driver
Date Fri, 24 Jan 2014 17:59:49 GMT
On a cluster running HDFS + Spark (in standalone deploy mode), there is one
master node and 4 worker nodes. When a spark-shell connects to the master, an
executor JVM is created on each of the 4 worker nodes.

When the application reads an HDFS file and does computations on RDDs, what
work gets done on the master, the workers, the executors, and the driver?
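To make the question concrete, here is a minimal spark-shell session of the kind described above (the HDFS path is hypothetical; `sc` is the SparkContext that spark-shell creates on the driver):

```scala
// spark-shell session; `sc` is provided by the shell, running on the driver.
scala> val lines = sc.textFile("hdfs://namenode:8020/data/input.txt")  // lazy; no cluster work yet

scala> val lengths = lines.map(_.length)  // transformation, only recorded in the lineage on the driver

scala> lengths.reduce(_ + _)  // action: tasks run on the executors, the result comes back to the driver
```

In particular: which of these steps involve the master at all, and which run only on the executors?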
