spark-user mailing list archives

From "Xu (Simon) Chen" <xche...@gmail.com>
Subject spark worker and yarn memory
Date Thu, 05 Jun 2014 13:44:47 GMT
I am slightly confused about the "--executor-memory" setting. My YARN
cluster has a maximum container memory of 8192MB.
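
(For reference, I assume that limit comes from yarn-site.xml on the resource
manager, i.e. something like:

  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>8192</value>
  </property>
)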

When I specify "--executor-memory 8G" in my spark-shell, no container can
be started at all. It only works when I lower the executor memory to 7G.
But then, on YARN, I see 2 containers per node, using 16G of memory.
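
If I understand the docs correctly, YARN has to fit the executor heap plus a
per-container overhead (384MB by default, I believe), and then rounds the
request up to its allocation increment, so my rough arithmetic would be:

  8G request: 8192 + 384 = 8576MB  > 8192MB max  -> rejected
  7G request: 7168 + 384 = 7552MB <= 8192MB max  -> accepted,
              rounded up to 8192MB, so 2 containers per node = 16G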

Then the Spark UI shows that each worker has 4GB of memory, rather
than 7GB.
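
My only guess for the UI number is that it reports the memory available for
caching rather than the full heap, i.e. roughly
executor-memory * spark.storage.memoryFraction (0.6) * safety fraction (0.9):

  7G * 0.6 * 0.9 ≈ 3.8G, which would show up as roughly the 4GB I am seeing.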

Can someone explain the relationship among the numbers I see here?

Thanks.
