hbase-user mailing list archives

From John <johnnyenglish...@gmail.com>
Subject HBase Region Server crash if column size become to big
Date Wed, 11 Sep 2013 11:07:38 GMT

I store a lot of columns under one row key, and once the row becomes too big the
relevant Region Server crashes when I try to get or scan it. For example,
if I try to get the relevant row I get this error:

2013-09-11 12:46:43,696 WARN org.apache.hadoop.ipc.HBaseServer:
(operationTooLarge): {"processingtimems":3091,"client":"

If I try to load the relevant row via Apache Pig and the HBaseStorage
loader (which uses the scan operation), I get this message and after that the
Region Server crashes:

2013-09-11 10:30:23,542 WARN org.apache.hadoop.ipc.HBaseServer:
(responseTooLarge): {"processingtimems":1851,"call":"next(-588368116791418695,
1), rpc version=1, client version=29,$

I'm using Cloudera 4.4.0 with HBase 0.94.6-cdh4.4.0.

Any clues?
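[Editor's note: the usual mitigation for wide rows on 0.94 is to cap the number of cells per RPC with `Scan.setBatch(int)` (and tune rows per RPC with `Scan.setCaching(int)`), so a huge row arrives as several partial Results instead of one oversized response. A minimal plain-Java sketch of that batching arithmetic, no cluster required; the class name and numbers are hypothetical:]

```java
// Sketch of the arithmetic behind HBase's Scan.setBatch(n): with a batch size
// set, each Result returned by scanner.next() carries at most n cells, so one
// very wide row is streamed as multiple partial Results rather than a single
// RPC response large enough to trip the responseTooLarge warning.
public class BatchedScanSketch {

    // Number of partial Results needed to deliver a row of `totalCells` cells
    // when each response is capped at `batch` cells (ceiling division).
    static int partialsPerRow(int totalCells, int batch) {
        return (totalCells + batch - 1) / batch;
    }

    public static void main(String[] args) {
        int totalCells = 5_000_000; // hypothetical very wide row
        int batch = 10_000;         // analogue of scan.setBatch(10_000)
        System.out.println(partialsPerRow(totalCells, batch)); // prints 500
    }
}
```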

