hbase-user mailing list archives

From Alexey Zalensky <sv.permi...@gmail.com>
Subject Hadoop Map/Reduce job that imports data fails
Date Tue, 22 May 2012 14:53:30 GMT
Hello!

I have tried to import around 100 GB of data into an HBase cluster.
What could be wrong with my configuration (attached)?

HBase: 0.90.4-cdh3u3

The Map/Reduce job that imports the data hits many exceptions like the
one below and eventually fails.

12/05/21 16:45:10 INFO mapred.JobClient: Task Id :
attempt_201205181216_0007_m_000002_1, Status : FAILED
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException:
Failed 19 actions: NotServingRegionException: 19 times, servers with
issues: 10.2.81.15:60020,
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1424)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchOfPuts(HConnectionManager.java:1438)
        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:840)
        at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:696)
        at org.apache.hadoop.hbase.client.HTable.put(HTable.java:681)
        at org.apache.hadoop.hbase.mapreduce.MultiTableOutputFormat$MultiTableRecordWriter.write(MultiTableOutputFormat.java:132)
        at org.apache.hadoop.hbase.mapreduce.MultiTableOutputFormat$MultiTableRecordWriter.write(MultiTableOutputFormat.java:68)
        at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:531)
        at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutput
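
For context, the mapper writes directly to HBase through
MultiTableOutputFormat, roughly like the sketch below. The actual job
code is not included here, so the input layout, parsing, and qualifier
name are hypothetical; only the table name and the "L" column family
are borrowed from the logs.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Mapper;

public class ImportMapper
    extends Mapper<LongWritable, Text, ImmutableBytesWritable, Writable> {

  private static final byte[] TABLE = Bytes.toBytes("bigdata_full_20120521");
  private static final byte[] FAMILY = Bytes.toBytes("L");

  @Override
  protected void map(LongWritable offset, Text line, Context context)
      throws IOException, InterruptedException {
    // Hypothetical tab-separated input: row key, then a value.
    String[] fields = line.toString().split("\t");
    Put put = new Put(Bytes.toBytes(fields[0]));
    put.add(FAMILY, Bytes.toBytes("v"), Bytes.toBytes(fields[1]));
    // MultiTableRecordWriter keeps an HTable per table name and calls
    // HTable.put(); puts are buffered and flushed in batches, which is
    // where the RetriesExhaustedWithDetailsException above is raised.
    context.write(new ImmutableBytesWritable(TABLE), put);
  }
}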

Along with these exceptions I also see many warnings like this one:
Region bigdata_full_20120521,he1007_5_1326854350_884431943,1337616178993.98990f6aa4feb12929ec4ca9fa3abcdc.
has too many store files; delaying flush up to 90000ms
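
If I read the defaults right, 90000 ms is hbase.hstore.blockingWaitTime,
and the flush is being delayed because a store has accumulated more
than hbase.hstore.blockingStoreFiles (default 7) HFiles, i.e.
compactions are not keeping up with the import. The region-server
settings involved would look roughly like this in hbase-site.xml
(values here are illustrative, not a recommendation):

<property>
  <name>hbase.hstore.blockingStoreFiles</name>
  <value>20</value> <!-- default is 7 -->
</property>
<property>
  <name>hbase.hstore.blockingWaitTime</name>
  <value>90000</value> <!-- the 90000 ms delay from the warning -->
</property>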

...and errors like this one:
Failed open of hdfs://ausdevdwhdp01.aus.biowareonline.int:8020/hbase/bigdata/e84451f65aae8a8ecb3e0e48f547f768/L/4388754001869115482.93a7d377b5bb10dc47771431af4d712a;
presumption is that file was corrupted at flush and lost edits picked
up by commit log replay. Verify!
java.io.FileNotFoundException: File does not exist:
/hbase/bigdata/93a7d377b5bb10dc47771431af4d712a/L/4388754001869115482
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1822)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1813)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:544)
at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:187)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:456)
at org.apache.hadoop.hbase.io.hfile.HFile$Reader.<init>(HFile.java:748)
at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:899)
at org.apache.hadoop.hbase.io.HalfStoreFileReader.<init>(HalfStoreFileReader.java:65)
at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:375)
at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:438)
at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:272)
at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:214)
at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2109)
at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:359)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2770)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2756)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
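
To check whether the file named in the exception is really gone, I
suppose the region's family directory can be listed directly in HDFS,
and hbck can report on table consistency (standard commands; the path
is copied from the stack trace above):

hadoop fs -ls /hbase/bigdata/93a7d377b5bb10dc47771431af4d712a/L
hbase hbck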

--
Alexey Zalensky
