spark-user mailing list archives

From Sean Owen <so...@cloudera.com>
Subject Re: spark-shell running out of memory even with 6GB ?
Date Tue, 10 Jan 2017 10:40:01 GMT
Maybe ... here are a bunch of things I'd check:

Are you actually running out of memory, or just seeing a lot of memory
usage? JVMs will happily use all the memory you allow them, even if some of
it could be reclaimed.

Did the driver run out of memory? Did you give the 6G to the driver or to the executors?

OOM errors do show where they occur, of course, although they often tell
you exactly where the failure happened but not why it happened. That's true
of any JVM.

Memory config is hard, yeah: you have to think about the resource manager's
config (e.g. YARN), the JVM's, and then Spark's. It's gotten simpler over
time but defaults invariably need tuning. Dynamic allocation can help to
some extent.
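To make those layers concrete, here is a sketch of the usual memory-related knobs as they looked in this era. The values are illustrative placeholders, not recommendations:

```scala
// Sketch of common Spark memory settings, expressed as a SparkConf.
// Caveat: spark.driver.memory cannot be set this way from inside
// spark-shell -- the driver JVM is already running by then -- so pass
// it at launch (spark-shell --driver-memory 6g) or via spark-defaults.conf.
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.executor.memory", "4g")                // executor JVM heap
  .set("spark.yarn.executor.memoryOverhead", "512")  // off-heap headroom YARN reserves, in MB
  .set("spark.dynamicAllocation.enabled", "true")    // scale executor count with load
```

On YARN, the container an executor requests is roughly the executor heap plus that overhead, and both together have to fit under yarn.scheduler.maximum-allocation-mb.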

I don't know that a repartition() by itself would run you out of memory.
The 2GB issue is mostly an artifact of byte[] arrays having a max length of
2^31-1. Fixing that is pretty hard, and yeah, for now the usual advice is
"don't do that": find ways to avoid huge allocations, because they're
probably a symptom of a performance bottleneck anyway.
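The arithmetic behind that limit, for reference: a JVM array is indexed by a signed 32-bit Int, so a single byte[] tops out just under 2 GiB.

```scala
// The max length of any JVM array is Int.MaxValue elements; for a
// byte[] that caps a single shuffle block at Int.MaxValue bytes.
val maxBytes: Long = Int.MaxValue                  // 2147483647
val maxGiB: Double = maxBytes.toDouble / (1024L * 1024 * 1024)
println(f"largest single byte[]: $maxGiB%.3f GiB") // prints "largest single byte[]: 2.000 GiB"
```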

On Tue, Jan 10, 2017 at 2:21 AM Kevin Burton <burton@spinn3r.com> wrote:

> Ah, OK. I think I know what's happening now. I think we found this
> problem when running a job and doing a repartition().
>
> Spark is just way way way too sensitive to memory configuration.
>
> The 2GB per shuffle limit is also insanely silly in 2017.
>
> So I think we did a repartition that was too large, and now we've run out
> of memory in the spark shell.
>
> On Mon, Jan 9, 2017 at 5:53 PM, Steven Ruppert <steven@fullcontact.com>
> wrote:
>
> The spark-shell process alone shouldn't take up that much memory, at least
> in my experience. Have you dumped the heap to see what's all in there? What
> environment are you running spark in?
>
> Doing stuff like RDD.collect() or .countByKey will pull potentially a lot
> of data into the spark-shell heap. Another thing that can fill up the
> spark master process heap (which is also run in the spark-shell process) is
> running lots of jobs: the logged SparkEvents stick around so the UI can
> render them. There are some options under `spark.ui.retained*` to limit
> that if it's a problem.
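For reference, the settings Steven is pointing at look something like this; the values here are examples, and the defaults in this era were 1000 for each:

```scala
// Cap how many finished jobs/stages the UI keeps around in the driver heap.
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.ui.retainedJobs", "200")    // default 1000
  .set("spark.ui.retainedStages", "200")  // default 1000
```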
>
>
> On Mon, Jan 9, 2017 at 6:00 PM, Kevin Burton <burton@spinn3r.com> wrote:
>
> We've had various OOM issues with spark and have been trying to track them
> down one by one.
>
> Now we have one in spark-shell which is super surprising.
>
> We currently allocate 6GB to spark shell, as confirmed via 'ps'
>
> Why the heck would the *shell* need that much memory?
>
> I'm going to try to give it more, of course, but it would be nice to know
> whether this is a legitimate memory constraint or a bug somewhere.
>
> PS: One thought I had: it would be nice to have Spark keep track of where
> an OOM was encountered, and in what component.
>
> Kevin
>
>
> --
>
> We’re hiring if you know of any awesome Java Devops or Linux Operations
> Engineers!
>
> Founder/CEO Spinn3r.com
> Location: *San Francisco, CA*
> blog: http://burtonator.wordpress.com
> … or check out my Google+ profile
> <https://plus.google.com/102718274791889610666/posts>
>
>
>
>
