hadoop-common-issues mailing list archives

From "Yi Liu (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HADOOP-11039) ByteBufferReadable API doc is inconsistent with the implementations.
Date Mon, 01 Sep 2014 01:12:21 GMT
Yi Liu created HADOOP-11039:

             Summary: ByteBufferReadable API doc is inconsistent with the implementations.
                 Key: HADOOP-11039
                 URL: https://issues.apache.org/jira/browse/HADOOP-11039
             Project: Hadoop Common
          Issue Type: Bug
          Components: documentation
            Reporter: Yi Liu
            Assignee: Yi Liu
            Priority: Minor

In {{ByteBufferReadable}}, API doc of {{int read(ByteBuffer buf)}} says:
After a successful call, buf.position() and buf.limit() should be unchanged, and therefore
any data can be immediately read from buf. buf.mark() may be cleared or updated.
@param buf
                the ByteBuffer to receive the results of the read operation. Up to
                buf.limit() - buf.position() bytes may be read.

But the implementations (e.g. {{DFSInputStream}}, {{RemoteBlockReader2}}) actually behave as follows:

*Upon return, buf.position() will be advanced by the number of bytes read.*
The implementation in {{RemoteBlockReader2}} is as follows:
  @Override
  public synchronized int read(ByteBuffer buf) throws IOException {
    if (curDataSlice == null ||
        curDataSlice.remaining() == 0 && bytesNeededToFinish > 0) {
      // refill curDataSlice from the next packet
      readNextPacket();
    }
    if (curDataSlice.remaining() == 0) {
      // we're at EOF now
      return -1;
    }

    int nRead = Math.min(curDataSlice.remaining(), buf.remaining());
    ByteBuffer writeSlice = curDataSlice.duplicate();
    writeSlice.limit(writeSlice.position() + nRead);
    buf.put(writeSlice);  // this bulk put advances buf.position() by nRead
    curDataSlice.position(writeSlice.position());

    return nRead;
  }
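To see why the destination buffer's position moves, here is a minimal standalone sketch (not Hadoop code; the class name {{PositionDemo}} is made up for illustration) showing that {{ByteBuffer.put(ByteBuffer)}} advances the destination buffer's position by the number of bytes transferred, which contradicts the "buf.position() ... should be unchanged" wording in the javadoc:

```java
import java.nio.ByteBuffer;

public class PositionDemo {
    public static void main(String[] args) {
        // source with 4 readable bytes, destination with room for 8
        ByteBuffer src = ByteBuffer.wrap(new byte[]{1, 2, 3, 4});
        ByteBuffer dst = ByteBuffer.allocate(8);

        System.out.println("before put: position=" + dst.position()); // prints 0

        // bulk put, same operation as buf.put(writeSlice) in RemoteBlockReader2
        dst.put(src);

        System.out.println("after put: position=" + dst.position());  // prints 4
    }
}
```

So after a read the caller must {{flip()}} (or rewind) the buffer before consuming the data, which is exactly the behavior the current javadoc fails to describe.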

This description is important because it tells users how to consume the buffer after a read, so all implementations should exhibit the same behavior. We should fix the javadoc to match the implementations.

This message was sent by Atlassian JIRA
