hbase-user mailing list archives

From jthie...@ina.fr
Subject Re: Data lost during intensive writes
Date Fri, 06 Mar 2009 10:03:29 GMT
I set the Hadoop log level to DEBUG.
This exception occurs even with few active connections (5 here), so it can't be a problem with the number of Xceiver instances.
Does somebody have an idea what the problem is?
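
For reference, the Xceiver limit ruled out above is the per-datanode cap that is usually raised in hdfs-site.xml. A minimal sketch (the property is spelled dfs.datanode.max.xcievers in Hadoop releases of this era; the value is only an example, not a recommendation):

  <!-- hdfs-site.xml: cap on concurrent DataXceiver threads per datanode -->
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>

With only 5 active connections reported in the DEBUG output below, that cap is clearly not being reached.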

Each exception creates a dead socket of this type:

netstat info:

Proto  Recv-Q  Send-Q  Local Address    Foreign Address  State        User    Inode    PID/Program name  Timer
tcp    0       121395  aphrodite:50010  aphrodite:42858  FIN_WAIT1    root    0        -                 probe (55.17/0/0)
tcp    72729   0       aphrodite:42858  aphrodite:50010  ESTABLISHED  hadoop  5888205  13471/java        off (0.00/0/0)
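
For completeness, output with these columns (User, Inode, PID/Program name, Timer) would typically come from the Linux net-tools netstat; the exact invocation used here is an assumption:

  netstat -tapeo   # -t TCP, -a all sockets, -p PID/program name, -e user/inode, -o timers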

The sockets are not closed until I stop HBase.
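
About the 480000 millis timeout in the SocketTimeoutExceptions below: that matches the datanode's default write timeout (8 minutes), controlled in Hadoop of this era by dfs.datanode.socket.write.timeout in hdfs-site.xml. A minimal sketch of the workaround sometimes mentioned for slow or stalled readers (setting the value to 0 reportedly disables the write timeout; shown only as an illustration, not as a fix for the data loss itself):

  <!-- hdfs-site.xml: datanode socket write timeout in milliseconds; 0 disables it -->
  <property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>0</value>
  </property>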

Jérôme Thièvre



2009-03-05 23:30:41,848 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.1.188.249:50010,
storageID=DS-482125953-10.1.188.249-50010-1236075545212, infoPort=50075, ipcPort=50020):DataXceiver
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready
for write. ch : java.nio.channels.SocketChannel[connected local=/10.1.188.249:50010 remote=/10.1.188.141:38072]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:185)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:293)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:387)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:179)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:94)
        at java.lang.Thread.run(Thread.java:619)
2009-03-05 23:30:41,848 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.1.188.249:50010,
storageID=DS-482125953-10.1.188.249-50010-1236075545212, infoPort=50075, ipcPort=50020):Number of active connections is: 5
--
2009-03-05 23:41:22,264 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.1.188.249:50010,
storageID=DS-482125953-10.1.188.249-50010-1236075545212, infoPort=50075, ipcPort=50020):DataXceiver
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready
for write. ch : java.nio.channels.SocketChannel[connected local=/10.1.188.249:50010 remote=/10.1.188.141:47006]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:185)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:293)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:387)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:179)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:94)
        at java.lang.Thread.run(Thread.java:619)
2009-03-05 23:41:22,264 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.1.188.249:50010,
storageID=DS-482125953-10.1.188.249-50010-1236075545212, infoPort=50075, ipcPort=50020):Number of active connections is: 4
--
2009-03-05 23:52:55,908 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.1.188.249:50010,
storageID=DS-482125953-10.1.188.249-50010-1236075545212, infoPort=50075, ipcPort=50020):DataXceiver
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready
for write. ch : java.nio.channels.SocketChannel[connected local=/10.1.188.249:50010 remote=/10.1.188.141:40436]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:185)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:293)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:387)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:179)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:94)
        at java.lang.Thread.run(Thread.java:619)
2009-03-05 23:52:55,908 DEBUG org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.1.188.249:50010,
storageID=DS-482125953-10.1.188.249-50010-1236075545212, infoPort=50075, ipcPort=50020):Number of active connections is: 6


----- Original Message -----
From: jthievre@ina.fr
Date: Wednesday, March 4, 2009, 6:18 pm
Subject: Data lost during intensive writes

