spark-user mailing list archives

From Allen Charles - callen <>
Subject Spark and mesos cluster utilization
Date Tue, 01 Oct 2013 18:43:39 GMT
The dynamic cluster capabilities of Mesos are pretty neat, but I'm having trouble figuring
out how to utilize them to their fullest. As a simple example, suppose I have a cluster with
4 machines as follows:

Two machines with 8GB ram
Two machines with 64GB ram

And I want to run a Spark job that utilizes as much of the cluster as possible. I can
either run the job with an executor memory under 8GB and lose out on the RAM on the
64GB machines, or run it with a larger executor memory and completely ignore the 8GB machines.

Is there a way to either:
A) Run Spark executors with memory appropriate for each Mesos slave, or
B) Run multiple Spark executors on the larger nodes (with a smaller memory footprint each)?
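[For context, option B is roughly the direction later Spark releases took: in coarse-grained Mesos mode, capping each executor's size lets Mesos pack several executors onto the large nodes while one still fits on the small ones. A hedged sketch — the `spark.*` property names are real Spark configuration keys, but multiple-executors-per-agent via `spark.executor.cores` only arrived in later releases, and the master URL, memory/core values, and job name here are illustrative:]

```shell
# Sketch: size each executor to fit the 8GB machines, so Mesos can
# pack several such executors onto each 64GB machine.
spark-submit \
  --master mesos://zk://zk1:2181/mesos \    # hypothetical Mesos master URL
  --conf spark.mesos.coarse=true \          # coarse-grained Mesos mode
  --conf spark.executor.memory=6g \         # fits the 8GB boxes, with headroom
  --conf spark.executor.cores=4 \           # caps executor size; big boxes get several
  --conf spark.cores.max=48 \               # total cores to claim across the cluster
  my_job.py                                 # illustrative job
```

[With a 6g cap, a 64GB machine can host multiple executors (subject to its core count), rather than the whole node going to one oversized executor or none at all.]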


Charles Allen
