spark-user mailing list archives

From Koert Kuipers <ko...@tresata.com>
Subject Re: life of an executor
Date Tue, 20 May 2014 19:34:06 GMT
interesting, so it sounds to me like spark is forced to choose between the
ability to add jars during the lifetime of a SparkContext and the ability to
run tasks with the user classpath first (which is important for the ability
to run jobs on spark clusters not under your control, and so for the
viability of 3rd-party spark apps)
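
[The "user classpath first" behavior discussed here boils down to child-first class loading. Below is a minimal, illustrative sketch of that mechanism in plain Scala — it is not Spark's actual implementation, just a demonstration of why child-first loading conflicts with a classpath that keeps growing: the loader's set of URLs is fixed at construction time.]

```scala
import java.net.{URL, URLClassLoader}

// A minimal sketch of "user classpath first" semantics: a child-first
// class loader that consults the given jar URLs before delegating to
// its parent. (Illustrative only -- not Spark's actual implementation.)
class ChildFirstClassLoader(urls: Array[URL], parent: ClassLoader)
    extends URLClassLoader(urls, parent) {

  override def loadClass(name: String, resolve: Boolean): Class[_] = {
    val loaded = findLoadedClass(name)
    if (loaded != null) {
      loaded
    } else {
      try {
        findClass(name) // look in the user-supplied jars first
      } catch {
        case _: ClassNotFoundException =>
          super.loadClass(name, resolve) // fall back to the parent
      }
    }
  }
}
```

[With no user jars supplied, every lookup falls through to the parent, so the loader behaves like a normal one; once user jars are supplied, their classes shadow the parent's.]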


On Tue, May 20, 2014 at 1:06 PM, Aaron Davidson <ilikerps@gmail.com> wrote:

> One issue is that new jars can be added during the lifetime of a
> SparkContext, which can mean after executors are already started. Off-heap
> storage is always serialized, correct.
>
>
> On Tue, May 20, 2014 at 6:48 AM, Koert Kuipers <koert@tresata.com> wrote:
>
>> just for my clarification: off heap cannot be java objects, correct? so
>> we are always talking about serialized off-heap storage?
>> On May 20, 2014 1:27 AM, "Tathagata Das" <tathagata.das1565@gmail.com>
>> wrote:
>>
>>> That's one of the main motivations for using Tachyon ;)
>>> http://tachyon-project.org/
>>>
>>> It gives off-heap in-memory caching. And starting with Spark 0.9, you
>>> can cache any RDD in Tachyon just by specifying the appropriate
>>> StorageLevel.
>>>
>>> TD
>>>
>>>
>>>
>>>
>>> On Mon, May 19, 2014 at 10:22 PM, Mohit Jaggi <mohitjaggi@gmail.com> wrote:
>>>
>>>> I guess it "needs" to be this way to benefit from caching of RDDs in
>>>> memory. It would be nice however if the RDD cache could be dissociated
>>>> from the JVM heap, so that in cases where garbage collection is difficult
>>>> to tune, one could choose to discard the JVM and run the next operation
>>>> in a fresh one.
>>>>
>>>>
>>>> On Mon, May 19, 2014 at 10:06 PM, Matei Zaharia <
>>>> matei.zaharia@gmail.com> wrote:
>>>>
>>>>> They’re tied to the SparkContext (application) that launched them.
>>>>>
>>>>> Matei
>>>>>
>>>>> On May 19, 2014, at 8:44 PM, Koert Kuipers <koert@tresata.com> wrote:
>>>>>
>>>>> from looking at the source code i see executors run in their own jvm
>>>>> subprocesses.
>>>>>
>>>>> how long do they live for? as long as the worker/slave? or are they
>>>>> tied to the sparkcontext and live/die with it?
>>>>>
>>>>> thx
>>>>>
>>>>>
>>>>>
>>>>
>>>
>
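
[Putting the points above together — jars can arrive after executors start, and off-heap caching goes through a StorageLevel — here is a rough spark-shell sketch. It is not runnable as-is: it assumes a deployed cluster with Tachyon configured, a hypothetical jar path and HDFS path, and the exact StorageLevel name for Tachyon-backed storage is version-dependent.]

```scala
// In a spark-shell session (sketch; assumes a running cluster):
import org.apache.spark.storage.StorageLevel

// Jars can be shipped to executors after the SparkContext is up,
// which is why executors cannot simply freeze their classpath at startup.
sc.addJar("/path/to/extra.jar")

// Cache an RDD off-heap (Tachyon-backed); per the discussion above,
// off-heap storage is always serialized.
val lines = sc.textFile("hdfs://namenode/data.txt")
lines.persist(StorageLevel.OFF_HEAP)
lines.count()
```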
