spark-user mailing list archives

From "Nick Pentreath" <nick.pentre...@gmail.com>
Subject Re: Spark Master on Hadoop Job Tracker?
Date Tue, 21 Jan 2014 06:30:10 GMT
If you intend to run Hadoop MapReduce and Spark on the same cluster concurrently, and you have
enough memory on the JobTracker master, then you can run the Spark master (for standalone
mode, as Raymond mentions) on the same node. This is not necessary, but it is convenient:
you only have to ssh into one master (I'd usually put the Hive/Shark server, Spark master, etc.
on the same node).
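For concreteness, here is a minimal sketch of that co-located setup, assuming a
Spark 0.8/0.9-era standalone install (script locations moved around between
releases, and the hostnames below are hypothetical):

    # conf/spark-env.sh on the JobTracker node, which doubles as Spark master
    export SPARK_MASTER_IP=jobtracker-host   # hypothetical hostname
    export SPARK_MASTER_PORT=7077            # default standalone master port

    # conf/slaves -- one worker hostname per line, typically the
    # TaskTracker nodes (hypothetical names):
    #   tasktracker-01
    #   tasktracker-02

    # Start the master on this node, then workers on every host in conf/slaves:
    sbin/start-master.sh
    sbin/start-slaves.sh

    # Workers then register with the master at spark://jobtracker-host:7077

The only real requirement is that the workers can reach the master's host and
port; which physical node hosts the master is a matter of convenience.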
Sent from Mailbox for iPhone

On Mon, Jan 20, 2014 at 8:14 PM, mharwida <majdharwida@yahoo.com> wrote:

> Hi,
> Should the Spark Master run on the Hadoop Job Tracker node (and Spark
> workers on Task Trackers), or can the Spark Master reside on any Hadoop
> node?
> Thanks
> Majd
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Master-on-Hadoop-Job-Tracker-tp680.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.