hbase-user mailing list archives

From David Alves <dr-al...@criticalsoftware.com>
Subject Re: StackOverFlow Error in HBase
Date Sun, 30 Mar 2008 21:07:55 GMT
Hi St.Ack

	Thanks for explaining it. I really, really need this kind of
functionality. Can you suggest a different approach (one that does not
use filters)?
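
(For illustration, one way to do without the server-side filter is to
scan plainly and test the column value in the client. A minimal sketch,
assuming the 0.1-era client API; obtainScanner/HScannerInterface and
their signatures are recalled from that era and may differ:)

    // Sketch only, not tested: scan without a filter and do the match
    // client-side. Assumes the 0.1-era API: HTable.obtainScanner and
    // HScannerInterface.next(HStoreKey, SortedMap<Text, byte[]>).
    import java.io.IOException;
    import java.util.Arrays;
    import java.util.SortedMap;
    import java.util.TreeMap;
    import org.apache.hadoop.hbase.HScannerInterface;
    import org.apache.hadoop.hbase.HStoreKey;
    import org.apache.hadoop.hbase.HTable;
    import org.apache.hadoop.io.Text;

    public class ClientSideMatch {
      // Returns true if any row holds the wanted value in the given column.
      static boolean anyRowMatches(HTable table, Text column, byte[] wanted)
          throws IOException {
        HScannerInterface scanner =
            table.obtainScanner(new Text[] { column }, new Text(""));
        try {
          HStoreKey key = new HStoreKey();
          SortedMap<Text, byte[]> row = new TreeMap<Text, byte[]>();
          while (scanner.next(key, row)) {
            byte[] value = row.get(column);
            if (value != null && Arrays.equals(value, wanted)) {
              return true;  // found a match; no server-side recursion involved
            }
            row.clear();
          }
          return false;
        } finally {
          scanner.close();
        }
      }
    }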

Thanks again
David Alves
On Sun, 2008-03-30 at 13:36 -0700, stack wrote:
> You're doing nothing wrong.
> 
> The filters as written recurse until they find a match.  If there are
> long stretches between matching rows, you will get a
> StackOverflowError.  The filters need to be changed.  Thanks for
> pointing this out.  Can you do without them for the moment, until we
> get a chance to fix it?  (HBASE-554)
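
(For illustration, the recursion St.Ack describes boils down to this
pattern; a minimal sketch with hypothetical types, not the actual HBase
scanner code:)

    // Minimal sketch of the failure mode; hypothetical RowSource/RowFilter
    // types, not HBase's real classes.
    interface RowSource { String nextRow(); }              // null when exhausted
    interface RowFilter { boolean filterOut(String row); } // true = skip row

    class RecursiveScanner {
      RowSource source;
      RowFilter filter;

      // As the filters were written: one stack frame per skipped row, so a
      // long stretch of non-matching rows ends in StackOverflowError.
      String next() {
        String row = source.nextRow();
        if (row == null) return null;
        return filter.filterOut(row) ? next() : row;
      }

      // The natural repair (presumably the direction of HBASE-554): iterate.
      String nextIterative() {
        for (String row = source.nextRow(); row != null; row = source.nextRow()) {
          if (!filter.filterOut(row)) return row;
        }
        return null;
      }
    }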
> 
> Thanks,
> St.Ack
> 
> 
> 
> David Alves wrote:
> > Hi St.Ack and all
> > 	
> > 	The error always occurs when trying to see if there are more rows
> > to process.
> > 	Yes, I'm using a filter (RegExpRowFilter) to select only the rows
> > (any row key) that match a specific value in one of the columns.
> > 	Then I obtain the scanner, just test the hasNext method, close the
> > scanner, and return.
> > 	Am I doing something wrong?
> > 	Still, a StackOverflowError is not supposed to happen, right?
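
(For concreteness, the scan just described would look roughly like the
fragment below; a sketch assuming the 0.1-era filter API, where
RegExpRowFilter took a row-key regexp plus a column-to-value map, and
HTable.obtainScanner took a RowFilterInterface. Signatures recalled from
memory; the column name is hypothetical:)

    // Sketch of the described scan, not tested: any row key (".*"), but
    // require a specific value in one column; then probe for a first row.
    Map<Text, byte[]> columnCriteria = new HashMap<Text, byte[]>();
    columnCriteria.put(new Text("family:qualifier"), wantedValue); // hypothetical column
    RowFilterInterface filter = new RegExpRowFilter(".*", columnCriteria);

    HScannerInterface scanner =
        table.obtainScanner(new Text[] { new Text("family:qualifier") },
            new Text(""), filter);
    try {
      HStoreKey key = new HStoreKey();
      SortedMap<Text, byte[]> row = new TreeMap<Text, byte[]>();
      boolean hasNext = scanner.next(key, row); // the "hasNext" probe
      return hasNext;
    } finally {
      scanner.close();
    }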
> >
> > Regards
> > David Alves
> > On Thu, 2008-03-27 at 12:36 -0700, stack wrote:
> >   
> >> You are using a filter?  If so, tell us more about it.
> >> St.Ack
> >>
> >> David Alves wrote:
> >>     
> >>> Hi guys,
> >>>
> >>> 	I'm using HBase to keep data that is later indexed.
> >>> 	The data is indexed in chunks, so the cycle is: get XXXX records,
> >>> index them, check for more records, etc.
> >>> 	When I tried candidate-2 instead of the old 0.16.0 (which I had
> >>> switched to due to the regionservers becoming unresponsive), I got
> >>> the error at the end of this email, well into an indexing job.
> >>> 	Do you have any idea why? Am I doing something wrong?
> >>>
> >>> David Alves
> >>>
> >>> java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException:
> >>> java.io.IOException: java.lang.StackOverflowError
> >>>         at java.io.DataInputStream.readFully(DataInputStream.java:178)
> >>>         at java.io.DataInputStream.readLong(DataInputStream.java:399)
> >>>         at org.apache.hadoop.dfs.DFSClient$BlockReader.readChunk(DFSClient.java:735)
> >>>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:234)
> >>>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
> >>>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
> >>>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:157)
> >>>         at org.apache.hadoop.dfs.DFSClient$BlockReader.read(DFSClient.java:658)
> >>>         at org.apache.hadoop.dfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:1130)
> >>>         at org.apache.hadoop.dfs.DFSClient$DFSInputStream.read(DFSClient.java:1166)
> >>>         at java.io.DataInputStream.readFully(DataInputStream.java:178)
> >>>         at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:56)
> >>>         at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:90)
> >>>         at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1829)
> >>>         at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1729)
> >>>         at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1775)
> >>>         at org.apache.hadoop.io.MapFile$Reader.next(MapFile.java:461)
> >>>         at org.apache.hadoop.hbase.HStore$StoreFileScanner.getNext(HStore.java:2350)
> >>>         at org.apache.hadoop.hbase.HAbstractScanner.next(HAbstractScanner.java:256)
> >>>         at org.apache.hadoop.hbase.HStore$HStoreScanner.next(HStore.java:2561)
> >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1807)
> >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> >>>         at org.apache.hadoop.hbase.HRegion$HScanner.next(HRegion.java:1843)
> >>> ...
> >>>
> >
> >   
> 

