hadoop-yarn-dev mailing list archives

From lohit <lohit.vijayar...@gmail.com>
Subject BIG jobs on YARN
Date Sat, 05 Jan 2013 20:44:49 GMT
Hi Devs,

Has anyone seen issues when running big jobs on YARN?
I am trying a 10 TB terasort where the input is 3-way replicated. This
generates job.split and job.splitmetainfo files of more than 10 MB. I see that
the first container launched crashes without producing any error files.
Debugging a little, I see that the job.jar symlink is not created properly,
which is strange.
If I try the same 10 TB terasort with the input 1-way replicated, the job runs
fine. job.split and job.splitmetainfo are much smaller in that case, which
makes me believe I am hitting some kind of size limit.
I tried setting mapreduce.job.split.metainfo.maxsize to 100M, but that did
not help.
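For reference, this is roughly how I set it in mapred-site.xml (a sketch of what I tried; the value is in bytes, and the exact file location depends on the setup):

```xml
<!-- Sketch: mapreduce.job.split.metainfo.maxsize takes a byte count.
     104857600 is ~100 MB; the default is 10000000 (~10 MB), which is
     right around the size where my job starts failing. -->
<property>
  <name>mapreduce.job.split.metainfo.maxsize</name>
  <value>104857600</value>
</property>
```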
Does anyone have experience running big jobs, and are there any related
configs you use?

Have a Nice Day!
