spark-user mailing list archives

From Thodoris Zois <z...@ics.forth.gr>
Subject Re: Spark on Mesos - Weird behavior
Date Wed, 11 Jul 2018 14:10:58 GMT
Hello,

Yeah you are right, but I think that works only if you use Spark dynamic allocation. Am I wrong?

-Thodoris

> On 11 Jul 2018, at 17:09, Pavel Plotnikov <pavel.plotnikov@team.wrike.com> wrote:
> 
> Hi, Thodoris
> You can configure resources per executor and control the number of executors that way, instead of using spark.cores.max. I think the spark.dynamicAllocation.minExecutors and spark.dynamicAllocation.maxExecutors configuration values can help you.
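
For illustration, a minimal sketch of the configuration being suggested here, assuming dynamic allocation is used (which, as Thodoris notes above, is what these settings require) and that the external shuffle service is available on the Mesos agents; the property names are the standard Spark ones, the values are placeholders:

    import org.apache.spark.sql.SparkSession

    // Sketch: bound the number of executors with dynamic allocation
    // instead of relying on spark.cores.max alone.
    val spark = SparkSession.builder()
      .appName("executor-bounds-sketch")
      .config("spark.dynamicAllocation.enabled", "true")
      .config("spark.shuffle.service.enabled", "true")      // required for dynamic allocation
      .config("spark.dynamicAllocation.minExecutors", "3")  // lower bound on executors
      .config("spark.dynamicAllocation.maxExecutors", "3")  // upper bound on executors
      .config("spark.executor.cores", "10")
      .config("spark.executor.memory", "2g")
      .getOrCreate()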
> 
> On Tue, Jul 10, 2018 at 5:07 PM Thodoris Zois <zois@ics.forth.gr> wrote:
> Actually, after some experiments we figured out that spark.cores.max / spark.executor.cores is the upper bound for the number of executors. Spark apps will run even if only one executor can be launched.
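
As a rough sketch of the behaviour described above (standard property names; the division only gives an upper bound, since Spark will still launch with fewer executors when the Mesos offers do not fit):

    import org.apache.spark.SparkConf

    // Sketch: spark.cores.max / spark.executor.cores is only a maximum.
    val conf = new SparkConf()
      .set("spark.cores.max", "30")       // total cores for the application
      .set("spark.executor.cores", "10")  // cores per executor
      .set("spark.executor.memory", "2g")
    // Upper bound: 30 / 10 = 3 executors, but without dynamic allocation
    // there is no corresponding lower bound, so the app may start with fewer.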
> 
> Is there any way to also specify the lower bound? It is a bit annoying that we seemingly cannot control the resource usage of an application. By the way, we are not using dynamic allocation.
> 
> - Thodoris 
> 
> 
> On 10 Jul 2018, at 14:35, Pavel Plotnikov <pavel.plotnikov@team.wrike.com> wrote:
> 
>> Hello Thodoris!
>> Have you checked this:
>>  - does the Mesos cluster have available resources?
>>  - does Spark have tasks waiting in the queue for longer than the spark.dynamicAllocation.schedulerBacklogTimeout configuration value?
>>  - and have you checked that Mesos sends offers to the Spark framework with at least 10 cores and 2 GB RAM?
>> 
>> If Mesos does not have offers with 10 cores available, but does have offers with 8 or 9, you can use smaller executors, for example 4 cores and 1 GB RAM, to better fit the resources available on the nodes.
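
A short sketch of the smaller-executor idea (illustrative values only, assuming the job can tolerate 4-core / 1 GB executors):

    import org.apache.spark.SparkConf

    // Sketch: smaller executors fit more easily into whatever offers Mesos sends.
    val smallExecutors = new SparkConf()
      .set("spark.executor.cores", "4")   // instead of 10
      .set("spark.executor.memory", "1g") // instead of 2g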
>> 
>> Cheers,
>> Pavel
>> 
>> On Mon, Jul 9, 2018 at 9:05 PM Thodoris Zois <zois@ics.forth.gr> wrote:
>> Hello list,
>> 
>> We are running Apache Spark on a Mesos cluster and we face a weird behavior of executors. When we submit an app with e.g. 10 cores and 2 GB of memory per executor and max cores 30, we expect to see 3 executors running on the cluster. However, sometimes there are only 2... Spark applications are not the only ones that run on the cluster. I guess that Spark starts executors on the available offers even if they do not satisfy our needs. Is there any configuration that we can use in order to prevent Spark from starting when there are no resource offers for the total number of executors?
>> 
>> Thank you 
>> - Thodoris 
>> 
>> ---------------------------------------------------------------------
>> To unsubscribe e-mail: user-unsubscribe@spark.apache.org
>> 

