drill-user mailing list archives

From "qinbs@tsingning.com" <qi...@tsingning.com>
Subject Drill 1.12 query hive transactional orc table
Date Tue, 26 Jun 2018 03:41:40 GMT
Hi,
     I am sorry that my English is poor.
     I have a problem and need your help.
     My Drill version is 1.12, which uses the Hive 1.2.1 client, and my Hive version is 1.2.1.
     Things that work fine: using Drill to query a normal (non-transactional) Hive table.
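For example, a simple query through Drill's hive storage plugin returns results normally (the table name here is only an illustration, not one of my real tables):

    SELECT * FROM hive.db_test.some_normal_table LIMIT 10;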

Now I have a Hive transactional ORC table:
CREATE TABLE db_test.t_test_log (
  create_time STRING,
  log_id STRING,
  log_type STRING)
CLUSTERED BY (log_id) INTO 2 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001' LINES TERMINATED BY '\n'
STORED AS ORC
TBLPROPERTIES ('transactional'='true');
The data stream is Flume --> Hive, with quasi-real-time (streaming) inserts.
Querying this table with Hive SQL works fine, but when I query it with Drill, it fails.
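To illustrate (the Drill statement is the one from the log below; the Hive statement is just the equivalent count):

    -- works in the Hive CLI / beeline:
    SELECT count(*) FROM db_test.t_test_log;

    -- fails through Drill's hive storage plugin:
    SELECT count(*) cnt FROM hive.db_test.t_test_log;

The exception info from the drillbit log: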

==========================================================================================================================================================================================
2018-06-25 16:28:25,650 [24cf5855-cf24-48e7-92c7-be27fbae9370:foreman] INFO  o.a.drill.exec.work.foreman.Foreman
- Query text for query id 24cf5855-cf24-48e7-92c7-be27fbae9370: select count(*) cnt  from
hive.db_test.t_test_log  
2018-06-25 16:28:25,969 [24cf5855-cf24-48e7-92c7-be27fbae9370:frag:0:0] INFO  o.a.d.e.w.fragment.FragmentExecutor
- 24cf5855-cf24-48e7-92c7-be27fbae9370:0:0: State change requested AWAITING_ALLOCATION -->
RUNNING
2018-06-25 16:28:25,969 [24cf5855-cf24-48e7-92c7-be27fbae9370:frag:0:0] INFO  o.a.d.e.w.f.FragmentStatusReporter
- 24cf5855-cf24-48e7-92c7-be27fbae9370:0:0: State to report: RUNNING
2018-06-25 16:28:27,251 [24cf5855-cf24-48e7-92c7-be27fbae9370:frag:0:0] ERROR o.a.d.exec.physical.impl.ScanBatch
- SYSTEM ERROR: IOException: Cannot obtain block length for LocatedBlock{BP-2057246263-10.30.208.135-1515072017012:blk_1074371083_630359;
getBlockSize()=904; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.30.208.135:50010,DS-8fc25c0e-3c81-49d5-b6d9-d229129b5525,DISK],
DatanodeInfoWithStorage[10.31.0.7:50010,DS-e91fa806-0e81-48ca-864f-e9019001822c,DISK], DatanodeInfoWithStorage[10.31.76.49:50010,DS-edfb09a8-dc1f-4e8e-b99f-c72a89cd2b1e,DISK]]}

Setup failed for HiveOrcReader

[Error Id: d7a136a7-c880-4356-947f-90e68238a4f0 ]
org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: IOException: Cannot obtain
block length for LocatedBlock{BP-2057246263-10.30.208.135-1515072017012:blk_1074371083_630359;
getBlockSize()=904; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.30.208.135:50010,DS-8fc25c0e-3c81-49d5-b6d9-d229129b5525,DISK],
DatanodeInfoWithStorage[10.31.0.7:50010,DS-e91fa806-0e81-48ca-864f-e9019001822c,DISK], DatanodeInfoWithStorage[10.31.76.49:50010,DS-edfb09a8-dc1f-4e8e-b99f-c72a89cd2b1e,DISK]]}

Setup failed for HiveOrcReader

[Error Id: d7a136a7-c880-4356-947f-90e68238a4f0 ]
at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:586)
~[drill-common-1.12.0.jar:1.12.0]
at org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:213) [drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) [drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) [drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
[drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:134)
[drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) [drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) [drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.exec.test.generated.StreamingAggregatorGen1.doWork(StreamingAggTemplate.java:187)
[na:na]
at org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.innerNext(StreamingAggBatch.java:181)
[drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) [drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) [drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) [drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
[drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:134)
[drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) [drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:105) [drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:79)
[drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:95) [drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:234) [drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:227) [drill-java-exec-1.12.0.jar:1.12.0]
at java.security.AccessController.doPrivileged(Native Method) [na:1.8.0_131]
at javax.security.auth.Subject.doAs(Subject.java:422) [na:1.8.0_131]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) [hadoop-common-2.7.1.jar:na]
at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:227) [drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.12.0.jar:1.12.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
Caused by: org.apache.drill.common.exceptions.ExecutionSetupException: java.lang.reflect.UndeclaredThrowableException
at org.apache.drill.common.exceptions.ExecutionSetupException.fromThrowable(ExecutionSetupException.java:30)
~[drill-logical-1.12.0.jar:1.12.0]
at org.apache.drill.exec.store.hive.readers.HiveAbstractReader.setup(HiveAbstractReader.java:311)
~[drill-storage-hive-core-1.12.0.jar:1.12.0]
at org.apache.drill.exec.physical.impl.ScanBatch.getNextReaderIfHas(ScanBatch.java:242) [drill-java-exec-1.12.0.jar:1.12.0]
at org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:166) [drill-java-exec-1.12.0.jar:1.12.0]
... 27 common frames omitted
Caused by: java.util.concurrent.ExecutionException: java.lang.reflect.UndeclaredThrowableException
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[na:1.8.0_131]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[na:1.8.0_131]
at org.apache.drill.exec.store.hive.readers.HiveAbstractReader.setup(HiveAbstractReader.java:304)
~[drill-storage-hive-core-1.12.0.jar:1.12.0]
... 29 common frames omitted
Caused by: java.lang.reflect.UndeclaredThrowableException: null
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1672) [hadoop-common-2.7.1.jar:na]
at org.apache.drill.exec.ops.OperatorContextImpl$1.call(OperatorContextImpl.java:111) ~[drill-java-exec-1.12.0.jar:1.12.0]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_131]
... 3 common frames omitted
Caused by: org.apache.drill.common.exceptions.ExecutionSetupException: Failed to get o.a.hadoop.mapred.RecordReader
from Hive InputFormat
at org.apache.drill.exec.store.hive.readers.HiveAbstractReader.initNextReader(HiveAbstractReader.java:264)
~[drill-storage-hive-core-1.12.0.jar:1.12.0]
at org.apache.drill.exec.store.hive.readers.HiveAbstractReader.init(HiveAbstractReader.java:242)
~[drill-storage-hive-core-1.12.0.jar:1.12.0]
at org.apache.drill.exec.store.hive.readers.HiveAbstractReader.access$000(HiveAbstractReader.java:71)
~[drill-storage-hive-core-1.12.0.jar:1.12.0]
at org.apache.drill.exec.store.hive.readers.HiveAbstractReader$1.call(HiveAbstractReader.java:297)
~[drill-storage-hive-core-1.12.0.jar:1.12.0]
at org.apache.drill.exec.store.hive.readers.HiveAbstractReader$1.call(HiveAbstractReader.java:294)
~[drill-storage-hive-core-1.12.0.jar:1.12.0]
at org.apache.drill.exec.ops.OperatorContextImpl$1$1.run(OperatorContextImpl.java:114) ~[drill-java-exec-1.12.0.jar:1.12.0]
at java.security.AccessController.doPrivileged(Native Method) [na:1.8.0_131]
at javax.security.auth.Subject.doAs(Subject.java:422) [na:1.8.0_131]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) [hadoop-common-2.7.1.jar:na]
... 5 common frames omitted
Caused by: java.io.IOException: Cannot obtain block length for LocatedBlock{BP-2057246263-10.30.208.135-1515072017012:blk_1074371083_630359;
getBlockSize()=904; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.30.208.135:50010,DS-8fc25c0e-3c81-49d5-b6d9-d229129b5525,DISK],
DatanodeInfoWithStorage[10.31.0.7:50010,DS-e91fa806-0e81-48ca-864f-e9019001822c,DISK], DatanodeInfoWithStorage[10.31.76.49:50010,DS-edfb09a8-dc1f-4e8e-b99f-c72a89cd2b1e,DISK]]}
at org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:390) ~[hadoop-hdfs-2.7.1.jar:na]
at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:333)
~[hadoop-hdfs-2.7.1.jar:na]
at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:269) ~[hadoop-hdfs-2.7.1.jar:na]
at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:261) ~[hadoop-hdfs-2.7.1.jar:na]
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1540) ~[hadoop-hdfs-2.7.1.jar:na]
at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304) ~[hadoop-hdfs-2.7.1.jar:na]
at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:299) ~[hadoop-hdfs-2.7.1.jar:na]
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-2.7.1.jar:na]
at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:312) ~[hadoop-hdfs-2.7.1.jar:na]
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:767) ~[hadoop-common-2.7.1.jar:na]
at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.extractMetaInfoFromFooter(ReaderImpl.java:355)
~[drill-hive-exec-shaded-1.12.0.jar:1.12.0]
at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:316) ~[drill-hive-exec-shaded-1.12.0.jar:1.12.0]
at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:237) ~[drill-hive-exec-shaded-1.12.0.jar:1.12.0]
at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:464)
~[drill-hive-exec-shaded-1.12.0.jar:1.12.0]
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getReader(OrcInputFormat.java:1215) ~[drill-hive-exec-shaded-1.12.0.jar:1.12.0]
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:1113)
~[drill-hive-exec-shaded-1.12.0.jar:1.12.0]
at org.apache.drill.exec.store.hive.readers.HiveAbstractReader.initNextReader(HiveAbstractReader.java:261)
~[drill-storage-hive-core-1.12.0.jar:1.12.0]
... 13 common frames omitted
2018-06-25 16:28:27,252 [24cf5855-cf24-48e7-92c7-be27fbae9370:frag:0:0] INFO  o.a.d.e.w.fragment.FragmentExecutor
- 24cf5855-cf24-48e7-92c7-be27fbae9370:0:0: State change requested RUNNING --> FAILED
2018-06-25 16:28:27,252 [24cf5855-cf24-48e7-92c7-be27fbae9370:frag:0:0] INFO  o.a.d.e.w.fragment.FragmentExecutor
- 24cf5855-cf24-48e7-92c7-be27fbae9370:0:0: State change requested FAILED --> FINISHED
2018-06-25 16:28:27,264 [BitServer-5] INFO  o.a.d.e.w.fragment.FragmentExecutor - 24cf5855-cf24-48e7-92c7-be27fbae9370:0:0:
State change requested FAILED --> CANCELLATION_REQUESTED
2018-06-25 16:28:27,264 [BitServer-5] WARN  o.a.d.e.w.fragment.FragmentExecutor - 24cf5855-cf24-48e7-92c7-be27fbae9370:0:0:
Ignoring unexpected state transition FAILED --> CANCELLATION_REQUESTED
2018-06-25 16:28:27,275 [24cf5855-cf24-48e7-92c7-be27fbae9370:frag:0:0] WARN  o.a.drill.exec.work.foreman.Foreman
- Dropping request to move to COMPLETED state as query is already at FAILED state (which is
terminal).
Mon Jun 25 16:47:03 CST 2018 Terminating drillbit pid 32755
2018-06-25 16:47:03,956 [Drillbit-ShutdownHook#0] INFO  o.apache.drill.exec.server.Drillbit
- Received shutdown request.
==========================================================================================================================================================================================
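From the root cause ("Cannot obtain block length for LocatedBlock"), my understanding is that HDFS reports this when a file's last block is still under construction, i.e. the file is still open for write -- which would match the quasi-real-time Flume/Hive streaming inserts into this table. A way I could check (the warehouse path below is only a guess; substitute the table's real location):

    hdfs fsck /user/hive/warehouse/db_test.db/t_test_log -files -openforwrite

If the table's delta files show up as OPENFORWRITE, that would explain why Hive can still read the table while Drill's HiveOrcReader fails during setup.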
My boss does not agree to upgrading Hive from 1.2.1 to 2.3.
What should I do to query this table with Drill?
I'm looking forward to your reply.
   Thanks!



qinbs@tsingning.com
Name: Qin