hbase-user mailing list archives

From Jean-Marc Spaggiari <jean-m...@spaggiari.org>
Subject Re: Premature EOF: no length prefix available
Date Thu, 02 May 2013 19:57:39 GMT
hbase@ip-10-238-38-193:/mnt/log/hadoop-hdfs$ hbase -version
java version "1.6.0_31"
Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)
hbase@ip-10-238-38-193:/mnt/log/hadoop-hdfs$ hbase shell
13/05/02 19:44:05 WARN conf.Configuration: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.2-cdh4.2.0, rUnknown, Fri Feb 15 11:48:32 PST 2013

hbase@ip-10-238-38-193:/mnt/log/hadoop-hdfs$ hadoop version
Hadoop 2.0.0-cdh4.2.0
Subversion file:///var/lib/jenkins/workspace/generic-package-ubuntu64-12-04/CDH4.2.0-Packaging-Hadoop-2013-02-15_10-38-54/hadoop-2.0.0+922-1.cdh4.2.0.p0.12~precise/src/hadoop-common-project/hadoop-common -r 8bce4bd28a464e0a92950c50ba01a9deb1d85686
Compiled by jenkins on Fri Feb 15 11:13:37 PST 2013
From source with checksum 3eefc211a14ac7b6e764d6ded2eeeb26

Because the datanode is not able to write this file, the HDFS client on the
HBase side excludes it, and things go wrong from there.

The replication factor is set to 1. I tried to touch the file and it works
fine as the HDFS user. What's strange is that it sometimes works fine and I'm
able to fix the server and get everything right, but soon after that it goes
bad again...
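
For reference, the kind of checks I'm running (a rough sketch -- paths are
this host's CDH4 defaults, so adjust as needed):

# Is the single datanode registered and not reported dead?
hdfs dfsadmin -report

# Any corrupt or under-replicated blocks under the HBase root?
hdfs fsck /hbase -files -blocks

# Can the hdfs user actually write into the datanode data dir?
ls -ld /mnt/dfs/dn/current
sudo -u hdfs touch /mnt/dfs/dn/probe && sudo -u hdfs rm /mnt/dfs/dn/probe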

Logs from the namenode:
2013-05-02 14:02:41,321 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault: Not able to place enough replicas, still in need of 1 to reach 1
For more information, please enable DEBUG log level on org.apache.commons.logging.impl.Log4JLogger
2013-05-02 14:02:41,321 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hbase (auth:SIMPLE) cause:java.io.IOException: File /hbase/events/d8215fe52cf86f91905f80b1817909df/recovered.edits/0000000000287157949.temp could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
2013-05-02 14:02:41,322 INFO org.apache.hadoop.ipc.Server: IPC Server handler 11 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.238.38.193:33353: error: java.io.IOException: File /hbase/events/d8215fe52cf86f91905f80b1817909df/recovered.edits/0000000000287157949.temp could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
java.io.IOException: File /hbase/events/d8215fe52cf86f91905f80b1817909df/recovered.edits/0000000000287157949.temp could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
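
(Side note: the "enable DEBUG log level" hint above points at the logging
wrapper class, which isn't actionable as-is. What I believe actually works --
a sketch, assuming the stock CDH4 config location -- is enabling DEBUG on the
block placement policy logger and restarting the NN:

# /etc/hadoop/conf/log4j.properties (assumed path)
log4j.logger.org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy=DEBUG

With that, the namenode should log why each datanode was rejected -- excluded,
low on space, too busy -- instead of just the count.)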



2013/5/2 Ted Yu <yuzhihong@gmail.com>

> This seems to be a Hadoop issue.
>
> Which HBase / Hadoop versions were you using?
>
> Thanks
>
> On Thu, May 2, 2013 at 12:27 PM, Jean-Marc Spaggiari <
> jean-marc@spaggiari.org> wrote:
>
> > Hi,
> >
> > Any idea what can be the cause of a "Premature EOF: no length prefix
> > available" error?
> >
> > 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream
> > java.io.EOFException: Premature EOF: no length prefix available
> >         at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171)
> >         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1105)
> >         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1039)
> >         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:487)
> > 2013-05-02 14:02:41,064 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning BP-1179773663-10.238.38.193-1363960970263:blk_7082931589039745816_1955950
> > 2013-05-02 14:02:41,068 INFO org.apache.hadoop.hdfs.DFSClient: Excluding datanode 10.238.38.193:50010
> >
> >
> >
> > I'm getting that on a server start. Logs are split correctly, coprocessors
> > are deployed correctly, and then I'm getting this exception. It excludes
> > the datanode, and because of that almost everything remaining fails.
> >
> > There is only one server in this "cluster"... but even so, it should be
> > working. There is one master, one RS, one NN and one DN, all on one AWS
> > host.
> >
> > At the same time, on the Hadoop datanode side, I'm getting this:
> >
> > 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1179773663-10.238.38.193-1363960970263:blk_7082931589039745816_1955950 received exception java.io.FileNotFoundException: /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta (Invalid argument)
> > 2013-05-02 14:02:41,063 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ip-10-238-38-193.eu-west-1.compute.internal:50010:DataXceiver error processing WRITE_BLOCK operation  src: /10.238.38.193:39831 dest: /10.238.38.193:50010
> > java.io.FileNotFoundException: /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta (Invalid argument)
> >         at java.io.RandomAccessFile.open(Native Method)
> >         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >         at org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline.createStreams(ReplicaInPipeline.java:187)
> >         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:199)
> >         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:457)
> >         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:103)
> >         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:67)
> >         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
> >         at java.lang.Thread.run(Thread.java:662)
> >
> >
> > Does it sound more like a Hadoop issue than an HBase one?
> >
> > JM
> >
>
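
PS: about the "(Invalid argument)" on the .meta file in the datanode trace
quoted above: that string comes straight out of RandomAccessFile.open, i.e.
the local open(2) returned EINVAL, so it may be worth ruling out the
filesystem under /mnt/dfs/dn itself. One way to confirm (a sketch -- the pid
file path is the CDH4 default and may differ on this host):

# Watch the datanode's open() calls and grep for EINVAL:
sudo strace -f -e trace=open \
    -p $(cat /var/run/hadoop-hdfs/hadoop-hdfs-datanode.pid) 2>&1 | grep EINVAL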
