spark-user mailing list archives

From Gerard Maas <gerard.m...@gmail.com>
Subject Spark Streaming on Mesos: How is the nr of coarse-grained executors calculated?
Date Tue, 02 Dec 2014 12:55:12 GMT
Hi,

We're running several Spark Streaming-kafka-Cassandra jobs on Mesos.
I'm currently tuning and validating scalability, and I'm looking for a way
to configure the number of coarse-grained task executors for a job.

For example:
I'm consuming 2 Kafka topics with 12 partitions each, using 4 Kafka
consumers per topic.
I set max cores (spark.cores.max) to 16 (2x4 for the Kafka consumers + 8
for Spark processing). I then sometimes get 3 executors and sometimes 4.
Ideally, I'd like to control that number and always use 4 executors, to
maximize the distribution of network load.
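For reference, the setup above corresponds roughly to a submission like the
following sketch. The master URL, class name, and jar are placeholders, not
our actual job; the property names assume the coarse-grained Mesos mode as
documented for Spark 1.x:

```shell
# Hypothetical spark-submit invocation illustrating the configuration above.
# Placeholders: the Mesos master URL, application class, and jar name.
spark-submit \
  --master mesos://mesos-master:5050 \
  --conf spark.mesos.coarse=true \
  --conf spark.cores.max=16 \
  --class com.example.StreamingJob \
  streaming-job.jar
# spark.mesos.coarse=true selects the coarse-grained scheduler (long-lived
# executors rather than one Mesos task per Spark task).
# spark.cores.max=16 caps the total cores for the app:
#   2 topics x 4 receivers = 8 cores, plus 8 cores for processing.
```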

How is the number of executors for a Spark Streaming job currently decided?

(My hunch is that the number of executors is driven by Mesos offers, which
are accepted until the requested (cpu, mem) resources have been fulfilled,
so it would dynamically depend on the cluster load at that point in time.)

Any thoughts?

-kr, Gerard.
