spark-user mailing list archives

From davidkl <>
Subject Re: Spark Streaming scheduling control
Date Mon, 20 Oct 2014 15:29:48 GMT
One more detail: even when forcing partitions (via repartition), Spark still
concentrates tasks. If I increase the load on the system (by raising
spark.streaming.receiver.maxRate), then even when all workers are in use, the
worker hosting the receiver gets roughly twice as many tasks as each of the
other workers.

Total delay keeps growing in this scenario, even though some workers are not
at 100% load :-/

What is Spark's policy for distributing tasks across workers? Is there any
documentation on it? Anything would help, thanks :-)
