Yes, it seems that CMS is better. I tried G1 as the Databricks blog recommended, but it's too slow.
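For reference, a minimal sketch of such a switch to CMS (assuming the flags are passed through spark.executor.extraJavaOptions in spark-defaults.conf; the occupancy fraction of 70 is only an illustrative value, not something taken from this thread):

spark.executor.memory=4G
spark.executor.extraJavaOptions=-XX:+UseConcMarkSweepGC -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70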


------------------ Original Message ------------------
From: "condor join" <spark_kernal@outlook.com>
Sent: Monday, May 30, 2016, 10:17 AM
To: "Ted Yu" <yuzhihong@gmail.com>
Cc: "user@spark.apache.org" <user@spark.apache.org>
Subject: Re: G1 GC takes too much time

The following are the parameters:
-XX:+UseG1GC
-XX:+UnlockDiagnosticVMOptions
-XX:+G1SummarizeConcMark
-XX:InitiatingHeapOccupancyPercent=35
spark.executor.memory=4G
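For context, executor JVM flags like these are typically passed through spark.executor.extraJavaOptions, e.g. in spark-defaults.conf (a sketch assuming the corrected flag spellings above):

spark.executor.memory=4G
spark.executor.extraJavaOptions=-XX:+UseG1GC -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark -XX:InitiatingHeapOccupancyPercent=35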


From: Ted Yu <yuzhihong@gmail.com>
Sent: May 30, 2016, 9:47:05 AM
To: condor join
Cc: user@spark.apache.org
Subject: Re: G1 GC takes too much time
 
bq. It happens mostly during the Reduce phase.

Did the above refer to a reduce operation?

Can you share your G1GC parameters (and heap size for workers)?

Thanks

On Sun, May 29, 2016 at 6:15 PM, condor join <spark_kernal@outlook.com> wrote:
Hi,
my Spark application failed because it spent too much time in GC. Looking at the logs, I found the following:
1. Young GC takes too much time, and no Full GC was observed during this;
2. Most of the time is spent in the object copy phase;
3. It happens more easily when there are not enough resources;
4. It happens mostly during the Reduce phase.

Has anyone met the same issue?
Thanks



---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org