spark-user mailing list archives

From Akhil Das <>
Subject Re: Running multiple batch jobs in parallel using Spark on Mesos
Date Tue, 04 Aug 2015 06:56:15 GMT
One approach would be to use a Jobserver in between and create the SparkContexts
in it. Let's say you create two: one configured to run coarse-grained and the
other fine-grained. Let the high-priority jobs hit the coarse-grained
SparkContext and the other jobs use the fine-grained one.
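
A minimal sketch of how the two contexts might be configured, e.g. via each
context's spark-defaults.conf. The ZooKeeper master URL, app names, and the
cores cap are illustrative assumptions; `spark.mesos.coarse` is the actual
switch between the two modes:

```
# Context 1: high-priority jobs, coarse-grained mode
spark.master        mesos://zk://mesos-master:2181/mesos
spark.app.name      high-priority-context
spark.mesos.coarse  true
spark.cores.max     32    # cap the static reservation (illustrative value)

# Context 2: other batch jobs, fine-grained mode
spark.master        mesos://zk://mesos-master:2181/mesos
spark.app.name      batch-context
spark.mesos.coarse  false
```

Note that a single JVM supports only one active SparkContext, so a job server
would typically keep each context in its own process.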

Best Regards

On Mon, Aug 3, 2015 at 2:25 PM, Akash Mishra <> wrote:

> Hello *,
> We are trying to build some batch jobs using Spark on Mesos. Mesos offers
> two main modes of deploying a Spark job:
> 1. Fine-grained
> 2. Coarse-grained
> When we run Spark jobs in fine-grained mode, Spark uses the maximum amount
> of offers from Mesos to run the job. Running batch jobs in this mode can
> easily starve the high-priority jobs in the cluster, and one job can easily
> use a large part of the cluster. There is no way to specify a maximum limit
> on the resources that should be used by one particular framework.
> The problem with the coarse-grained model is that the cluster reserves the
> given amount of resources at the start and then runs the Spark job on those
> resources. This becomes a problem because we have to reserve more resources
> than the job might need so that it never fails. This leads to wasted
> resources and gives us static partitioning of resources on the Mesos cluster.
> Can anyone share their experience in managing multiple batch Spark job on
> Mesos Cluster?
> --
> Regards,
> Akash Mishra.
> "It's not our abilities that make us, but our decisions."--Albus Dumbledore
