hbase-user mailing list archives

From Andrew Purtell <apurt...@apache.org>
Subject Re: Premature EOF: no length prefix available
Date Thu, 02 May 2013 20:18:55 GMT
Oh, I have faced issues with Hadoop on AWS personally. :-) But not this
one. I use instance-store aka "ephemeral" volumes for DataNode block
storage. Are you by chance using EBS?
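
For reference, here is the kind of DataNode storage layout I mean. A minimal
hdfs-site.xml sketch, assuming Hadoop 2.x property names and that /mnt and
/mnt2 are instance-store mounts on your instance type (adjust the paths to
your layout):

  <property>
    <name>dfs.datanode.data.dir</name>
    <!-- each directory sits on an ephemeral volume, not an EBS attachment -->
    <value>/mnt/dfs/dn,/mnt2/dfs/dn</value>
  </property>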


On Thu, May 2, 2013 at 1:10 PM, Jean-Marc Spaggiari <jean-marc@spaggiari.org> wrote:

> But that's weird. This instance is running on AWS. If there were issues
> with Hadoop and AWS, I think other people would have faced them before me.
>
> OK, I will move the discussion to the Hadoop mailing list, since it seems
> to be more related to Hadoop vs. the OS.
>
> Thanks,
>
> JM
>
> 2013/5/2 Andrew Purtell <apurtell@apache.org>
>
> > > 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.io.EOFException: Premature EOF: no length prefix available
> >
> > The DataNode aborted the block transfer.
> >
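> > In case it helps to see why an aborted transfer produces this exact
> > message: after sending the write op, the client reads a varint
> > length-prefixed status reply from the DataNode. Roughly, as a paraphrase
> > and not the verbatim HDFS source:
> >
> >   import java.io.*;
> >
> >   class VintPrefixSketch {
> >       // The first byte of the DataNode's status reply starts a varint
> >       // length prefix. A socket closed by a dying DataNode returns -1
> >       // from read(), which is where this exact message gets thrown.
> >       static InputStream vintPrefixed(InputStream in) throws IOException {
> >           int firstByte = in.read();
> >           if (firstByte == -1) {
> >               throw new EOFException("Premature EOF: no length prefix available");
> >           }
> >           // ... decode the varint here and bound the reply to that size ...
> >           return in;
> >       }
> >   }
> >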
> > > 2013-05-02 14:02:41,063 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ip-10-238-38-193.eu-west-1.compute.internal:50010:DataXceiver error processing WRITE_BLOCK operation  src: /10.238.38.193:39831 dest: /10.238.38.193:50010 java.io.FileNotFoundException: /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta (Invalid argument)
> > >        at java.io.RandomAccessFile.open(Native Method)
> > >        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >
> > This looks like the native (OS-level) side of RandomAccessFile got EINVAL
> > back from create() or open(). Go from there.
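> >
> > A quick way to check from the DataNode host: try the same open yourself.
> > This is a hypothetical probe, not an HDFS tool; run it as the DataNode
> > user and point it at the rbw directory from the log:
> >
> >   import java.io.*;
> >
> >   public class RafProbe {
> >       public static void main(String[] args) throws IOException {
> >           // Pass the suspect directory as args[0].
> >           File probe = new File(args[0], "raf_probe_test.meta");
> >           // Same call path as the DataNode: open(2) via RandomAccessFile.
> >           // If the OS returns EINVAL, Java surfaces it as a
> >           // FileNotFoundException ending in "(Invalid argument)".
> >           RandomAccessFile raf = new RandomAccessFile(probe, "rw");
> >           raf.close();
> >           probe.delete();
> >           System.out.println("open() succeeded in " + args[0]);
> >       }
> >   }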
> >
> >
> >
> > On Thu, May 2, 2013 at 12:27 PM, Jean-Marc Spaggiari <jean-marc@spaggiari.org> wrote:
> >
> > > Hi,
> > >
> > > Any idea what can be the cause of a "Premature EOF: no length prefix available" error?
> > >
> > > 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream
> > > java.io.EOFException: Premature EOF: no length prefix available
> > >         at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171)
> > >         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1105)
> > >         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1039)
> > >         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:487)
> > > 2013-05-02 14:02:41,064 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning BP-1179773663-10.238.38.193-1363960970263:blk_7082931589039745816_1955950
> > > 2013-05-02 14:02:41,068 INFO org.apache.hadoop.hdfs.DFSClient: Excluding datanode 10.238.38.193:50010
> > >
> > >
> > >
> > > I'm getting this on server start. Logs are split correctly,
> > > coprocessors are deployed correctly, and then I'm getting this
> > > exception. It's excluding the datanode, and because of that almost
> > > everything remaining is failing.
> > >
> > > There is only one server in this "cluster"... but even so, it should
> > > be working. There is one master, one RS, one NN and one DN, all on an
> > > AWS host.
> > >
> > > At the same time, on the Hadoop datanode side, I'm getting this:
> > >
> > > 2013-05-02 14:02:41,063 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1179773663-10.238.38.193-1363960970263:blk_7082931589039745816_1955950 received exception java.io.FileNotFoundException: /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta (Invalid argument)
> > > 2013-05-02 14:02:41,063 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ip-10-238-38-193.eu-west-1.compute.internal:50010:DataXceiver error processing WRITE_BLOCK operation  src: /10.238.38.193:39831 dest: /10.238.38.193:50010
> > > java.io.FileNotFoundException: /mnt/dfs/dn/current/BP-1179773663-10.238.38.193-1363960970263/current/rbw/blk_7082931589039745816_1955950.meta (Invalid argument)
> > >         at java.io.RandomAccessFile.open(Native Method)
> > >         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> > >         at org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline.createStreams(ReplicaInPipeline.java:187)
> > >         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:199)
> > >         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:457)
> > >         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:103)
> > >         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:67)
> > >         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
> > >         at java.lang.Thread.run(Thread.java:662)
> > >
> > >
> > > Does it sound more like a Hadoop issue than an HBase one?
> > >
> > > JM
> > >
> >
> >
> >
> > --
> > Best regards,
> >
> >    - Andy
> >
> > Problems worthy of attack prove their worth by hitting back. - Piet Hein
> > (via Tom White)
> >
>



-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)
