spark-user mailing list archives

Subject heterogeneous cluster hardware
Date Wed, 06 Aug 2014 19:11:26 GMT
I'm sure this must be a fairly common use case for Spark, yet I have not
found a satisfactory discussion of it on the Spark website or forums:

I work at a company with a lot of previous-generation server hardware
sitting idle, and I want to add this hardware to my Spark cluster to increase
performance. But it is unclear whether the Spark master can properly
apportion jobs among the slaves if they have differing hardware.

As I understand it, the default Spark launch scripts assume identical
hardware on every node, but it seems I could compose a custom configuration
file for each slave so that it fully utilizes its own hardware.
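For the standalone deploy mode, a per-node `conf/spark-env.sh` can advertise each worker's actual resources to the master via `SPARK_WORKER_CORES` and `SPARK_WORKER_MEMORY`. A minimal sketch for one of the older slaves follows; the specific core and memory values are illustrative, not taken from my setup:

```shell
# conf/spark-env.sh on an older slave (values are illustrative)
# Advertise only the resources this particular machine actually has:
export SPARK_WORKER_CORES=8        # cores this worker offers to the master
export SPARK_WORKER_MEMORY=16g     # total memory this worker offers to executors
export SPARK_WORKER_INSTANCES=1    # run a single worker daemon on this box
```

A newer slave would carry the same file with larger values, so each worker registers with the master using its own capacity.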

Would the master take these per-node configurations into consideration when
allocating work? Or would the cluster necessarily fall back to the lowest
common denominator across all nodes?

Is this an area that needs development? I might be willing to look into
implementing this functionality if it is lacking.
