spark-user mailing list archives

From "Ganelin, Ilya" <Ilya.Gane...@capitalone.com>
Subject Re: Understanding disk usage with Accumulators
Date Tue, 16 Dec 2014 18:26:17 GMT
Also, this may be related to this issue https://issues.apache.org/jira/browse/SPARK-3885.

Further, to clarify: the data is being written to HDFS on the data nodes.

Would really appreciate any help. Thanks!

From: "Ganelin, Ilya" <ilya.ganelin@capitalone.com>
Date: Tuesday, December 16, 2014 at 10:23 AM
To: "user@spark.apache.org" <user@spark.apache.org>
Subject: Understanding disk usage with Accumulators

Hi all – I’m running a long-running batch-processing job with Spark on YARN. I am
doing the following:

Batch Process

val resultsArr = sc.accumulableCollection(mutable.ArrayBuffer[ListenableFuture[Result]]())

InMemoryArray.foreach {
  1) Using a thread pool, generate Callable jobs, each of which operates on an RDD.
  1a) Each job combines that RDD with a broadcast array and stores the result of the
      computation as an Array (Result).
  2) When a job resolves, store its result in the accumulableCollection.
}

sc.parallelize(resultsArr).saveAsObjectFile writes about 1 GB of data; this happens about
four times during execution, over the course of several hours. A sketch of the whole
pattern follows below.
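
For concreteness, here is a minimal sketch of that pattern, assuming a SparkContext sc
is in scope; lookupArray, inMemoryArray, someRdd, combine, the pool size, and the output
path are all stand-ins for the real code:

import java.util.concurrent.{Callable, Executors}
import scala.collection.mutable
import com.google.common.util.concurrent.{ListenableFuture, MoreExecutors}

type Result = Array[Double]                      // stand-in for the real Result type

// Dummy inputs so the sketch is self-contained; the real data differs.
val lookupArray = Array(1.0, 2.0, 3.0)
val inMemoryArray = Seq("a", "b", "c")
val someRdd = sc.parallelize(1 to 1000).map(_.toDouble)
def combine(row: Double, lookup: Array[Double]): Double = row * lookup.sum

// Driver-side accumulable buffer holding one future per submitted job.
val resultsArr =
  sc.accumulableCollection(mutable.ArrayBuffer[ListenableFuture[Result]]())

// Thread pool for submitting Spark jobs concurrently from the driver.
val pool = MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(8))

val broadcastArr = sc.broadcast(lookupArray)

inMemoryArray.foreach { elem =>                  // elem parameterizes the real job
  val future: ListenableFuture[Result] = pool.submit(new Callable[Result] {
    // Each job combines the RDD with the broadcast array and collects an Array.
    override def call(): Result =
      someRdd.map(row => combine(row, broadcastArr.value)).collect()
  })
  resultsArr += future                           // store the pending result
}

// Once the futures resolve, write the results (~1 GB) out as an object file.
val resolved = resultsArr.value.map(_.get())
sc.parallelize(resolved).saveAsObjectFile("hdfs:///path/to/batchN")  // placeholder path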

My immediate problem is that two things happen during this execution.

First, my driver node eventually runs out of memory and starts swapping to disk, which
causes slowdown. However, each batch can be processed entirely within the driver's
available memory, so this memory is somehow not being released between runs, even though
execution leaves the scope of the function running the batch process.
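
To make that scoping point concrete, the structure is roughly the following (reusing the
definitions from the sketch above; runBatch and batchInputs are illustrative names only):

def runBatch(sc: SparkContext, batch: Seq[String]): Unit = {
  val resultsArr =
    sc.accumulableCollection(mutable.ArrayBuffer[ListenableFuture[Result]]())
  // ... submit jobs, await the futures, saveAsObjectFile ...
}  // resultsArr falls out of scope here, so the buffer should become GC-eligible

batchInputs.foreach(batch => runBatch(sc, batch))  // yet driver memory grows per batch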

Second, during execution data is being written to HDFS, and I am running out of space on
the local partitions of the node. Note that this is NOT from the explicit saveAsObjectFile
call I am making; it appears to be something Spark is doing internally.
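
As a sanity check on where those internal writes land, the scratch location Spark uses for
shuffle and spill files can be printed from the driver; note the default shown here is an
assumption, and under YARN it is normally overridden by yarn.nodemanager.local-dirs:

// Where does Spark put shuffle/spill scratch files on this node?
println(sc.getConf.get("spark.local.dir", "/tmp"))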


Can anyone speak to what is going on under the hood here and what I can do to resolve this?

________________________________

The information contained in this e-mail is confidential and/or proprietary to Capital One
and/or its affiliates. The information transmitted herewith is intended only for use by the
individual or entity to which it is addressed.  If the reader of this message is not the intended
recipient, you are hereby notified that any review, retransmission, dissemination, distribution,
copying or other use of, or taking of any action in reliance upon this information is strictly
prohibited. If you have received this communication in error, please contact the sender and
delete the material from your computer.
