spark-user mailing list archives

From Alexander Kapustin <kp...@hotmail.com>
Subject RE: spark job automatically killed without rhyme or reason
Date Fri, 17 Jun 2016 07:52:10 GMT
Hi,

Did you submit the Spark job via YARN? In some cases (probably a memory configuration issue), YARN can
kill the containers where Spark tasks are executed. In this situation, please check the YARN userlogs
for more information…
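A minimal sketch of how one might follow this advice; the application ID shown is a placeholder, and the overhead values are illustrative assumptions, not tuned recommendations:

```shell
# Fetch the aggregated logs for the finished application
# (get the real ID from the Spark UI or `yarn application -list -appStates ALL`).
yarn logs -applicationId application_1466150000000_0001 | grep -i -A 2 "killed"

# Containers killed for memory typically log something like
# "Container killed by YARN for exceeding memory limits". If that is the
# cause, raising the off-heap overhead on resubmission may help
# (property name as of Spark 1.x/2.x):
spark-submit \
  --master yarn \
  --executor-memory 4g \
  --conf spark.yarn.executor.memoryOverhead=2048 \
  your_job.py
```

Note that when YARN kills a container, the driver-side log often ends abruptly (sometimes with just "killed"), so the container-level userlogs are usually the only place the real reason appears.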

--
WBR, Alexander

From: Zhiliang Zhu<mailto:zchl.jump@yahoo.com.INVALID>
Sent: 17 June 2016 9:36
To: Zhiliang Zhu<mailto:zchl.jump@yahoo.com>; User<mailto:user@spark.apache.org>
Subject: Re: spark job automatically killed without rhyme or reason

Has anyone ever met a similar problem? It is quite strange ...

    On Friday, June 17, 2016 2:13 PM, Zhiliang Zhu <zchl.jump@yahoo.com.INVALID> wrote:


 Hi All,
I have a big job that takes more than one hour to run in full. However, it exits midway for no
apparent reason (almost 80% of the job actually finishes, but not all), without any error or
exception in the log.
I have submitted the same job many times, always with the same result. The last line of the run log
is just the single word "killed", or sometimes there is no error output at all; everything seems
okay, yet the job does not finish.
How can I debug this problem? Has anyone else ever met a similar issue
...
Thanks in advance!


