spark-user mailing list archives

From Rodrick Brown <>
Subject Re: how to use spark.mesos.constraints
Date Wed, 27 Jul 2016 01:38:22 GMT
The shuffle service has nothing to do with constraints; it is, however, advised
to run the mesos-shuffle-service on each of your agent nodes running Spark.
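For example, on a Spark 1.6 install the Mesos external shuffle service ships
with its own start script (a minimal sketch; the /opt/spark-1.6.1 path mirrors
the install used in the command below, adjust for your layout):

  export SPARK_HOME=/opt/spark-1.6.1
  $SPARK_HOME/sbin/start-mesos-shuffle-service.sh   # run this on every agent node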

Here is the command I use to run a typical Spark job on my cluster using
constraints (it is generated by another script we run, but it should give you
a clear idea).

Jobs not being accepted by any resources could mean that what you're asking
for is larger than the resources you have available.
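
One quick way to verify is the Mesos master's state endpoint, which lists
every agent's resources and attributes (sketch only; "mesos-master" is a
placeholder for your actual master host, 5050 is the Mesos default port):

  curl -s http://mesos-master:5050/master/state.json | python -m json.tool | less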

/usr/bin/timeout 3600 /opt/spark-1.6.1/bin/spark-submit \
  --master "mesos://zk://prod-zk-1:2181,prod-zk-2:2181,prod-zk-3:2181/mesos" \
  --conf spark.ui.port=40046 \
  --conf spark.mesos.coarse=true \
  --conf spark.sql.broadcastTimeout=3600 \
  --conf spark.cores.max=5 \
  --conf spark.mesos.constraints="rack:spark" \
  --conf spark.sql.tungsten.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.mesos.executor.memoryOverhead=3211 \
  --total-executor-cores 5 \
  --driver-memory 5734M \
  --executor-memory 8028M \
  --jars /data/orchard/etc/config/load-accountdetail-accumulo-prod.jar \
  /data/orchard/jars/dataloader-library-assembled.jar 1
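
As a rough sizing check on the command above (back-of-the-envelope; the
figures are taken straight from the flags): each executor asks Mesos for
8028M of heap plus the 3211M memoryOverhead, roughly 11239M, plus up to 5
cores, so at least one agent matching the constraint needs that much free
before the job is accepted.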

The nodes used for my Spark jobs all advertise the attribute that the constraint 'rack:spark' matches.
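
For reference, that attribute is something each agent advertises at startup
(a sketch only; exact flags depend on how you launch the Mesos agent in your
deployment):

  mesos-slave --master=zk://prod-zk-1:2181,prod-zk-2:2181,prod-zk-3:2181/mesos \
    --attributes="rack:spark"

spark.mesos.constraints then restricts the driver to offers from agents whose
attributes match.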

I hope this helps!

On Tue, Jul 26, 2016 at 7:10 PM, Jia Yu <> wrote:

> Hi,
> I am also trying to use spark.mesos.constraints, but it gives me the
> same error: the job has not been accepted by any resources.
> I suspect that I should start some additional service like
> ./sbin/ Am I correct?
> Thanks,
> Jia
> On Tue, Dec 1, 2015 at 5:14 PM, rarediel <>
> wrote:
>> I am trying to add Mesos constraints to my spark-submit command in my
>> Marathon file; I am also setting spark.mesos.coarse=true.
>> Here is an example of a constraint I am trying to set.
>>  --conf spark.mesos.constraint=cpus:2
>> I want to use the constraints to control the number of executors that are
>> created so I can control the total memory of my Spark job.
>> I've tried many variations of resource constraints, but no matter which
>> resource or what number, range, etc. I use, I always get the error "Initial
>> job has not accepted any resources; check your cluster UI...". My cluster
>> has the available resources. Are there any examples I can look at where
>> people use resource constraints?



Rodrick Brown / DevOps
9174456839
Orchard Platform
101 5th Avenue, 4th Floor, New York, NY

