spark-user mailing list archives

From: Allen Charles - callen <>
Subject: RE: Spark in a heterogeneous computing environment
Date: Tue, 08 Oct 2013 15:59:33 GMT

Hello Markus,

I had a similar question a few days ago. You can exclude Mesos nodes with a
small memory footprint by setting the required executor memory fairly high
(see the sketch below). But I agree with what you're trying to do: being able
to handle heterogeneous clusters would be a very handy feature to add to
Spark, e.g. smart job creation per Mesos node, sized for that node's
resources.
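
For example, with the Spark 0.8 API this is done by setting the
"spark.executor.memory" property before the SparkContext is created. A
minimal sketch in Scala (the 8g figure, the Mesos master URL, and the
application name are illustrative, not from this thread):

    import org.apache.spark.SparkContext

    object LargeMemoryOnly {
      def main(args: Array[String]) {
        // Request a large executor heap so that resource offers from
        // small-memory Mesos nodes are effectively declined.
        // The 8g figure is illustrative, not a recommendation.
        System.setProperty("spark.executor.memory", "8g")

        // Hypothetical Mesos master URL and application name.
        val sc = new SparkContext("mesos://master.example.com:5050",
                                  "LargeMemoryOnly")

        // Jobs submitted through this context run only on nodes
        // able to satisfy the requested executor memory.
        val n = sc.parallelize(1 to 1000000).map(_ * 2).count()
        println("count = " + n)

        sc.stop()
      }
    }

Since the property is read when the SparkContext is constructed, it has to be
set first; nodes that cannot satisfy the request simply never receive tasks.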



-----Original Message-----
From: Markus Losoi [] 
Sent: Monday, October 07, 2013 11:32 PM
Subject: Spark in a heterogeneous computing environment


Is it currently possible to tell Spark that some worker nodes should be
preferred over others? That is, in a heterogeneous computing environment,
some computing units are more powerful than others, and assigning jobs to
them should take priority.

Best regards,
Markus Losoi

