spark-user mailing list archives

From Nan Zhu <zhunanmcg...@gmail.com>
Subject Re: Master and worker nodes in standalone deployment
Date Thu, 16 Jan 2014 04:40:41 GMT
It keeps track of the running worker processes, asks the workers to launch executors for the tasks on the worker nodes, communicates with the driver program, etc.
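For reference, a standalone cluster is usually brought up by starting the master daemon and then pointing one worker daemon per node at the master's URL. A minimal sketch, assuming the manual `spark-class` launch style described in the standalone docs (script paths and ports may differ in your Spark version):

```shell
# On the master node: start the standalone master daemon.
# It logs a URL of the form spark://<master-host>:7077.
./bin/spark-class org.apache.spark.deploy.master.Master

# On each node that should do computation -- including the master
# node itself, if you also start a worker there -- launch a worker
# registered against that master URL.
./bin/spark-class org.apache.spark.deploy.worker.Worker spark://<master-host>:7077
```

The distribution also ships wrapper scripts (e.g. `start-master.sh` and a slaves/workers script) that run these same classes in the background; check the scripts bundled with your release for the exact names.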

-- 
Nan Zhu


On Wednesday, January 15, 2014 at 11:37 PM, Manoj Samel wrote:

> Thanks,
> 
> Could you still explain what the master process does?
> 
> 
> On Wed, Jan 15, 2014 at 8:36 PM, Nan Zhu <zhunanmcgill@gmail.com> wrote:
> > you can start a worker process on the master node
> > 
> > so that all nodes in your cluster can participate in the computation
> > 
> > Best, 
> > 
> > -- 
> > Nan Zhu
> > 
> > 
> > On Wednesday, January 15, 2014 at 11:32 PM, Manoj Samel wrote:
> > 
> > > When Spark is deployed on a cluster in standalone deployment mode (v0.8.1), one of the nodes is started as the master and the others as workers.
> > > 
> > > What does the master node do? Can it participate in the actual computation, or does it just act as a coordinator?
> > > 
> > > Thanks,
> > > 
> > > Manoj 
> > 
> 

