spark-dev mailing list archives

From "Chawla,Sumit " <sumitkcha...@gmail.com>
Subject Re: Mesos Spark Fine Grained Execution - CPU count
Date Mon, 19 Dec 2016 20:09:54 GMT
Ah, thanks. Looks like I skipped reading this: *"Neither will executors
terminate when they're idle."*

So in my job scenario, I should presume that the number of executors will
be less than the number of tasks; ideally one executor should execute one
or more tasks. But I am observing something strange instead. I start my
Spark job with 48 partitions. In the Mesos UI I see that the number of
tasks is 48, but the number of CPUs is 78, which is way more than 48. Here
I am assuming that 1 CPU corresponds to 1 executor. I am not specifying
any configuration to set the number of cores per executor.
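(Editor's note: in fine-grained mode the cores an idle executor holds can be
tuned down with the `spark.mesos.mesosExecutor.cores` setting, which defaults
to 1 and accepts fractional values. A minimal PySpark sketch of setting it;
the app name and master URL below are placeholders, not taken from this
thread:)

```python
from pyspark import SparkConf, SparkContext

# Sketch only: reduce the cores each fine-grained executor keeps while idle.
# The app name and master URL are placeholders, not values from the thread.
conf = (SparkConf()
        .setAppName("fine-grained-demo")                  # placeholder
        .setMaster("mesos://zk://host:2181/mesos")        # placeholder
        .set("spark.mesos.coarse", "false")               # fine-grained (deprecated) mode
        .set("spark.mesos.mesosExecutor.cores", "0.1"))   # default is 1 core per executor
sc = SparkContext(conf=conf)
```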

Regards
Sumit Chawla


On Mon, Dec 19, 2016 at 11:35 AM, Joris Van Remoortere <joris@mesosphere.io>
wrote:

> That makes sense. From the documentation it looks like the executors are
> not supposed to terminate:
> http://spark.apache.org/docs/latest/running-on-mesos.html#fine-grained-deprecated
>
>> Note that while Spark tasks in fine-grained will relinquish cores as they
>> terminate, they will not relinquish memory, as the JVM does not give memory
>> back to the Operating System. Neither will executors terminate when they’re
>> idle.
>
>
> I suppose your task-to-executor CPU ratio is low enough that it looks like
> most of the resources are not being reclaimed. If your tasks were using
> significantly more CPU, the amortized cost of the idle executors would not
> be such a big deal.
>
>
> —
> *Joris Van Remoortere*
> Mesosphere
>
> On Mon, Dec 19, 2016 at 11:26 AM, Timothy Chen <tnachen@gmail.com> wrote:
>
>> Hi Chawla,
>>
>> One possible reason is that Mesos fine-grained mode also takes up cores
>> to run the executor on each host, so if you have 20 agents each running
>> a fine-grained executor, those executors will hold 20 cores while they
>> are still running.
>>
>> Tim
>>
>> On Fri, Dec 16, 2016 at 8:41 AM, Chawla,Sumit <sumitkchawla@gmail.com>
>> wrote:
>> > Hi
>> >
>> > I am using Spark 1.6, and I have a question about the fine-grained
>> > model in Spark. I have a simple Spark application which transforms
>> > A -> B. It is a single-stage application that starts with 48
>> > partitions. When the program starts running, the Mesos UI shows 48
>> > tasks and 48 CPUs allocated to the job. As the tasks complete, the
>> > number of active tasks decreases. However, the number of CPUs does not
>> > decrease proportionally. When the job was about to finish, there was a
>> > single remaining task, yet the CPU count was still 20.
>> >
>> > My question is: why is there no one-to-one mapping between tasks and
>> > CPUs in fine-grained mode? How can these CPUs be released when the job
>> > is done, so that other jobs can start?
>> >
>> >
>> > Regards
>> > Sumit Chawla
>>
>
>
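(Editor's note: the CPU accounting discussed in this thread can be sketched
with hypothetical numbers; the agent count and per-executor core figure below
are assumptions chosen for illustration, not values confirmed in the thread:)

```python
# Back-of-envelope CPU accounting for Mesos fine-grained mode.
# CPUs shown in the Mesos UI = cores held by running tasks, plus the
# cores each still-alive executor reserves for itself.

def fine_grained_cpus(active_tasks, cpus_per_task, live_executors, executor_cores):
    return active_tasks * cpus_per_task + live_executors * executor_cores

# Hypothetical: 48 active tasks at 1 CPU each, spread across 30 agents,
# each hosting one executor holding 1 core (spark.mesos.mesosExecutor.cores).
print(fine_grained_cpus(48, 1, 30, 1))   # -> 78; the tasks alone explain only 48

# Later in the job: one task left, but idle executors have not terminated.
print(fine_grained_cpus(1, 1, 19, 1))    # -> 20
```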
