hadoop-common-issues mailing list archives

From "Sanjay Radia (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-6904) A baby step towards inter-version communications between dfs client and NameNode
Date Sat, 04 Dec 2010 00:56:15 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-6904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12966768#action_12966768

Sanjay Radia commented on HADOOP-6904:

> .. But adding or removing a method would not require a change of this [major #] value.
In the original Major-minor proposal, a deletion of a method does require a change in the Major#.
* Use case 1. Remove method foo() because we think it is not needed and we are willing to
break client apps.
Old clients connect, realize that the Major# has changed, and disconnect. Old applications
cannot run against the new server. We detect this not when foo() is called, but when
getProxy() is called. If the server has broken compatibility I want a clean failure rather
than a partial failure when the missing method is called.
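Use case 1 above could be sketched roughly as follows. This is a minimal illustration, not Hadoop's actual RPC code: the class, the hard-coded versions, and the getProxy signature are all invented for the example. The point is only that the version comparison happens once, at connection time.

```java
// Hypothetical sketch of use case 1: reject an incompatible server when
// the proxy is created, instead of failing later when the removed
// method foo() is first invoked. All names here are illustrative.
public class VersionCheck {
    static final int CLIENT_MAJOR = 2;

    /** Simulates the major version the server advertises on connect. */
    static int serverMajor(boolean compatible) {
        return compatible ? 2 : 3; // major bumped after removing foo()
    }

    /** Fail cleanly at connection time if the major numbers differ. */
    static boolean getProxy(boolean serverCompatible) {
        int major = serverMajor(serverCompatible);
        if (major != CLIENT_MAJOR) {
            throw new IllegalStateException(
                "Incompatible protocol: client major " + CLIENT_MAJOR
                + " vs server major " + major);
        }
        return true; // proxy established; all methods assumed present
    }

    public static void main(String[] args) {
        System.out.println(getProxy(true));
        try {
            getProxy(false);
        } catch (IllegalStateException e) {
            System.out.println("clean failure: " + e.getMessage());
        }
    }
}
```

With this shape, an old application against an incompatible server never gets a half-working proxy: it fails once, immediately, with a clear message.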

* Use case 2. The service has a method bar and we add method fasterBar(). getProxy succeeds because
it is a compatible change.
   The new client checks, at the time of the method call, whether fasterBar is supported.
If not, it calls bar instead.
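Use case 2 could look something like the sketch below. The method names bar and fasterBar come from the comment; the idea of learning the server's method list at connect time is an assumption made for illustration, not an existing Hadoop API.

```java
import java.util.Set;

// Hypothetical sketch of use case 2: the client probes whether the
// server supports fasterBar() and falls back to bar() when it does not.
public class FallbackClient {
    private final Set<String> serverMethods; // learned at connect time

    FallbackClient(Set<String> serverMethods) {
        this.serverMethods = serverMethods;
    }

    String bar()       { return "bar"; }       // stand-in for the old RPC
    String fasterBar() { return "fasterBar"; } // stand-in for the new RPC

    /** Prefer fasterBar when the server advertises it; else use bar. */
    String doBar() {
        return serverMethods.contains("fasterBar") ? fasterBar() : bar();
    }

    public static void main(String[] args) {
        FallbackClient newServer = new FallbackClient(Set.of("bar", "fasterBar"));
        FallbackClient oldServer = new FallbackClient(Set.of("bar"));
        System.out.println(newServer.doBar()); // fasterBar
        System.out.println(oldServer.doBar()); // bar
    }
}
```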

For both use cases there is a mismatch in the list of methods. But in case 2 it is okay
(bar is the fall-back for fasterBar).
In use case 1 it is not okay. I guess one could argue that we should let the app run and fail
if, and only if, it uses the removed method. I am uncomfortable about that but am
willing to be convinced. The point is that the two approaches are NOT equivalent.

My current thought is that the Major number changes when you delete a method, change the signature
of a method, or change the serialization of a method.

> A baby step towards inter-version communications between dfs client and NameNode
> --------------------------------------------------------------------------------
>                 Key: HADOOP-6904
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6904
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: ipc
>    Affects Versions: 0.22.0
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>             Fix For: 0.22.0
>         Attachments: majorMinorVersion.patch, majorMinorVersion1.patch, rpcVersion.patch,
> Currently RPC communication in Hadoop is very strict. If a client has a different version
> from that of the server, a VersionMismatched exception is thrown and the client cannot connect
> to the server. This forces us to update both client and server all at once when an RPC protocol
> is changed. But sometimes different versions do not mean the client & server are incompatible.
> It would be nice if we could relax this restriction and allow us to support inter-version
> communications.
> My idea is that DfsClient catches the VersionMismatched exception when it connects to the NameNode.
> It then checks if the client & the server are compatible. If yes, it sets the NameNode
> version in the dfs client and allows the client to continue talking to the NameNode. Otherwise,
> it rethrows the VersionMismatch exception.
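The catch-and-recheck idea described in the issue could be sketched as below. This is not the actual DfsClient code: the class, the exception fields, and the compatibility rule (tolerate minor-version differences, reject major ones) are assumptions chosen to make the flow concrete.

```java
// Hypothetical sketch of the description: catch the version mismatch on
// connect, decide compatibility, and either remember the server's
// version or rethrow. Names are illustrative, not HDFS internals.
public class CompatibleConnect {
    static class VersionMismatchException extends RuntimeException {
        final int serverMajor, serverMinor;
        VersionMismatchException(int maj, int min) {
            super("server version " + maj + "." + min);
            this.serverMajor = maj;
            this.serverMinor = min;
        }
    }

    static final int CLIENT_MAJOR = 2;
    static final int CLIENT_MINOR = 0;

    int negotiatedMajor = -1, negotiatedMinor = -1;

    /** Strict handshake: throws on any version difference at all. */
    void rawConnect(int serverMajor, int serverMinor) {
        if (serverMajor != CLIENT_MAJOR || serverMinor != CLIENT_MINOR) {
            throw new VersionMismatchException(serverMajor, serverMinor);
        }
    }

    /** Relaxed connect: tolerate minor differences, rethrow on major. */
    boolean connect(int serverMajor, int serverMinor) {
        try {
            rawConnect(serverMajor, serverMinor);
        } catch (VersionMismatchException e) {
            if (e.serverMajor != CLIENT_MAJOR) {
                throw e; // truly incompatible: propagate the mismatch
            }
            // compatible despite the mismatch: fall through and record it
        }
        negotiatedMajor = serverMajor; // remember who we are talking to
        negotiatedMinor = serverMinor;
        return true;
    }

    public static void main(String[] args) {
        CompatibleConnect c = new CompatibleConnect();
        System.out.println(c.connect(2, 1)); // minor mismatch: tolerated
        try {
            c.connect(3, 0);                 // major mismatch: rethrown
        } catch (VersionMismatchException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```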

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
