drill-user mailing list archives

From Abhishek Girish <abhishek.gir...@gmail.com>
Subject Re: "java.lang.OutOfMemoryError: Java heap space" error which in-turn kills drill bit of one of the node
Date Tue, 03 May 2016 16:03:14 GMT
Can you try bumping up the Drill heap memory and restarting the Drillbits? This looks
related to DRILL-3678.

Refer to http://drill.apache.org/docs/configuring-drill-memory/
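Per that page, heap is set via DRILL_HEAP in conf/drill-env.sh on each node. A minimal sketch (the 8G/16G values are illustrative, not a recommendation; size them to what each node can actually spare):

```shell
# conf/drill-env.sh -- illustrative values; tune per node.
export DRILL_HEAP="8G"                 # JVM heap (-Xms/-Xmx) for the Drillbit
export DRILL_MAX_DIRECT_MEMORY="16G"   # ceiling for off-heap direct memory
```

After editing the file, restart the Drillbit on each node (e.g. bin/drillbit.sh restart) so the new settings take effect.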

On Tue, May 3, 2016 at 3:19 AM, Anup Tiwari <anup.tiwari@games24x7.com>
wrote:

> Hi All,
>
> Sometimes I get the below error while creating a table in Drill from a
> hive table:
>
> *"java.lang.OutOfMemoryError: Java heap space"*, which in turn kills the
> Drillbit on the node where I executed the query.
>
> *Query:*
>
> create table glv_abc as select sessionid, max(serverTime) as max_serverTime
> from hive.hive_logs_daily
> where log_date = '2016-05-02'
> group by sessionid;
>
>
> Kindly help me with this.
>
> Please find the *drillbit.log output* below:
>
> 2016-05-03 15:33:15,628 [28d7890f-a7d6-b55e-3853-23f1ea828751:frag:2:12]
> ERROR o.a.drill.common.CatastrophicFailure - Catastrophic Failure Occurred,
> exiting. Information message: Unable to handle out of memory condition
> in FragmentExecutor.
> java.lang.OutOfMemoryError: Java heap space
>         at hive.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:755) ~[drill-hive-exec-shaded-1.6.0.jar:1.6.0]
>         at hive.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:494) ~[drill-hive-exec-shaded-1.6.0.jar:1.6.0]
>         at hive.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127) ~[drill-hive-exec-shaded-1.6.0.jar:1.6.0]
>         at hive.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:208) ~[drill-hive-exec-shaded-1.6.0.jar:1.6.0]
>         at hive.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:201) ~[drill-hive-exec-shaded-1.6.0.jar:1.6.0]
>         at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:206) ~[drill-hive-exec-shaded-1.6.0.jar:1.6.0]
>         at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:62) ~[drill-hive-exec-shaded-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.store.hive.HiveRecordReader.next(HiveRecordReader.java:321) ~[drill-storage-hive-core-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:191) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:94) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:129) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.test.generated.HashAggregatorGen731.doWork(HashAggTemplate.java:314) ~[na:na]
>         at org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.innerNext(HashAggBatch.java:133) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:129) ~[drill-java-exec-1.6.0.jar:1.6.0]
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.6.0.jar:1.6.0]
> 2016-05-03 15:33:16,648 [Drillbit-ShutdownHook#0] INFO
> o.apache.drill.exec.server.Drillbit - Received shutdown request.
> 2016-05-03 15:33:16,669 [28d7890f-a7d6-b55e-3853-23f1ea828751:frag:2:16]
> INFO  o.a.d.e.w.fragment.FragmentExecutor -
> 28d7890f-a7d6-b55e-3853-23f1ea828751:2:16: State change requested RUNNING
> --> FAILED
> 2016-05-03 15:33:16,670 [28d7890f-a7d6-b55e-3853-23f1ea828751:frag:2:16]
> INFO  o.a.d.e.w.fragment.FragmentExecutor -
> 28d7890f-a7d6-b55e-3853-23f1ea828751:2:16: State change requested FAILED
> --> FINISHED
> 2016-05-03 15:33:16,675 [28d7890f-a7d6-b55e-3853-23f1ea828751:frag:2:16]
> ERROR o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: IOException:
> Filesystem closed
>
> Fragment 2:16
>
> [Error Id: 8604418f-ac5e-4e79-b66b-cd7d779b38f7 on namenode:31010]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR:
> IOException: Filesystem closed
>
>
> Regards,
> *Anup*
>
