spark-dev mailing list archives

From Steve Loughran <>
Subject Re: spark on yarn wastes one box (or 1 GB on each box) for am container
Date Tue, 09 Feb 2016 11:29:11 GMT

> On 9 Feb 2016, at 06:53, Sean Owen <> wrote:
> I think you can let YARN over-commit RAM though, and allocate more
> memory than it actually has. It may be beneficial to let them all
> think they have an extra GB, and let one node running the AM
> technically be overcommitted, a state which won't hurt at all unless
> you're really really tight on memory, in which case something might
> get killed.

from my test VMs

        <property>
          <name>yarn.nodemanager.pmem-check-enabled</name>
          <value>false</value>
          <description>Whether physical memory limits will be enforced for
            containers.</description>
        </property>


It does mean that a container can swap massively, hurting the performance of all the containers
around it as IO bandwidth gets soaked up, which is why the checks are left on for shared clusters.
If the cluster is dedicated, you can overcommit.
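To spell that out, a dedicated cluster can switch off both memory checks in yarn-site.xml, so the NodeManager stops killing containers that exceed their requested allocation. A sketch; the property names are the standard YARN ones, and `false` is the value that disables enforcement:

```xml
<!-- yarn-site.xml: disable memory enforcement on a dedicated cluster.
     With these set to false, the NodeManager will not kill containers
     that exceed their requested physical or virtual memory. -->
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```

Both default to true in yarn-default.xml, which is the safe setting for shared clusters.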