hive-issues mailing list archives

From "liyunzhang (JIRA)" <>
Subject [jira] [Commented] (HIVE-18301) Investigate to enable MapInput cache in Hive on Spark
Date Fri, 22 Dec 2017 17:35:00 GMT


liyunzhang commented on HIVE-18301:

[~xuefuz], [~csun]: I read the JIRAs about the MapInput IOContext problem and about enabling the MapInput RDD cache, and found that the problem only happens when multiple MapWorks are cloned for multi-insert \[Spark Branch\], as HIVE-8920 mentioned.
In HIVE-8920, the failing case looks like:
{code}
from (select * from dec union all select * from dec2) s
insert overwrite table dec3 select, sum(s.value) group by
insert overwrite table dec4 select, s.value order by s.value;
{code}
I did see the exception in my hive.log:
{code}
Caused by: java.lang.IllegalStateException: Invalid input path hdfs://localhost:8020/user/hive/warehouse/dec2/dec.txt
        at org.apache.hadoop.hive.ql.exec.MapOperator.getNominalPath(
        at org.apache.hadoop.hive.ql.exec.MapOperator.cleanUpInputFileChangedOp(
{code}

Here the problem happens when the MapInput is the union of dec and dec2. But when I modify the case to:
{code}
from (select * from dec) s
insert overwrite table dec3 select, sum(s.value) group by
insert overwrite table dec4 select, s.value order by s.value;
{code}
there is no such exception, in either local or yarn mode.
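To illustrate why replaying a cached RDD can leave the input path unset, here is a minimal, self-contained sketch (hypothetical class and method names, not Hive's real implementation): the reader records the current input path in a thread-local "context" as it reads, so rows served back from a cache on another thread never populate the context, and path lookup fails much like the getNominalPath failure above.

```java
import java.util.ArrayList;
import java.util.List;

public class CachedReplaySketch {
    // stands in for Hive's thread-local per-input state (IOContext-like)
    static final ThreadLocal<String> currentInputPath = new ThreadLocal<>();

    // the reader initializes the context as a side effect of reading
    static List<String> readFromSource(String path) {
        currentInputPath.set(path);
        List<String> rows = new ArrayList<>();
        rows.add("row1");
        rows.add("row2");
        return rows;
    }

    static String nominalPath() {
        String p = currentInputPath.get();
        if (p == null) {
            throw new IllegalStateException("Invalid input path: context not set");
        }
        return p;
    }

    public static void main(String[] args) throws Exception {
        // first pass: rows come from the reader, context is populated
        List<String> cached = readFromSource("hdfs://example/dec/dec.txt");
        System.out.println("first pass path: " + nominalPath());

        // "replay" the cached rows on a fresh thread, the way a cached
        // RDD partition is served without re-running the reader
        Thread replay = new Thread(() -> {
            try {
                nominalPath(); // context was never set on this thread
            } catch (IllegalStateException e) {
                System.out.println("replay failed: " + e.getMessage());
            }
        });
        replay.start();
        replay.join();
    }
}
```

This is only a model of the failure mode; in Hive the state involved is richer (IOContext, ExecMapperContext), but the shape is the same: state initialized by the input format is absent when cached data is replayed.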

Does the problem only happen in such a complicated case (the cached RDD is the union of two tables)? If it only happens there, why not disable the MapInput RDD cache in that case alone? Is there any other reason to disable the MapInput RDD cache entirely? Please spend some time to look at this, as both of you have experience with it. Thanks!

> Investigate to enable MapInput cache in Hive on Spark
> -----------------------------------------------------
>                 Key: HIVE-18301
>                 URL:
>             Project: Hive
>          Issue Type: Bug
>            Reporter: liyunzhang
>            Assignee: liyunzhang
> Previously, an IOContext problem was found in MapTran when the Spark RDD cache was enabled (HIVE-8920), so we disabled the RDD cache in MapTran at [SparkPlanGenerator|]. The problem is that IOContext does not seem to be initialized correctly in Spark yarn client/cluster mode, causing an exception like:
> {code}
> Job aborted due to stage failure: Task 93 in stage 0.0 failed 4 times, most recent failure: Lost task 93.3 in stage 0.0 (TID 616, bdpe48): java.lang.RuntimeException: Error processing row: java.lang.NullPointerException
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(
> 	at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(
> 	at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(
> 	at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(
> 	at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
> 	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(
> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
> 	at
> 	at org.apache.spark.executor.Executor$
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(
> 	at java.util.concurrent.ThreadPoolExecutor$
> 	at
> Caused by: java.lang.NullPointerException
> 	at org.apache.hadoop.hive.ql.exec.AbstractMapOperator.getNominalPath(
> 	at org.apache.hadoop.hive.ql.exec.MapOperator.cleanUpInputFileChangedOp(
> 	at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(
> 	at org.apache.hadoop.hive.ql.exec.MapOperator.process(
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(
> 	... 12 more
> Driver stacktrace:
> {code}
> In yarn client/cluster mode, [ExecMapperContext#currentInputPath|] is sometimes null when the RDD cache is enabled.

This message was sent by Atlassian JIRA
