hadoop-common-issues mailing list archives

From "Sanjay Radia (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-9683) Wrap IpcConnectionContext in RPC headers
Date Thu, 11 Jul 2013 00:55:48 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-9683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13705301#comment-13705301 ]

Sanjay Radia commented on HADOOP-9683:

To summarize a couple of offline discussions I had with Luke and Daryn:

Q> Why can't we allow multiple ipcConnectionContexts in the current scheme? Does adding
an RPC header fundamentally change things, or is it a matter of code simplicity or something more?
A> If the connection context is sent without an RPC header, it's not possible to associate
it with the correct multiplexed stream. The RPC server will also be unable to differentiate
the raw protobuf on the wire from a normal RPC packet, leading to decoding errors – unless
we are willing to block all other multiplexed streams while a new stream is negotiating.
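To make the point concrete, here is a minimal sketch of the framing idea. The frame layout and all names are illustrative assumptions, not Hadoop's actual wire classes: because each frame carries a small header with a call id, a demultiplexer can route a connection-context frame to the right stream, whereas a raw protobuf payload carries nothing the server could route on.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

/**
 * Illustrative sketch only (not Hadoop's actual wire format): every frame
 * is [callId][length][payload], so the server can route any frame --
 * including a connection context -- without relying on message order.
 */
public class FrameDemo {
    /** Wrap payload bytes with a tiny header: [callId][length][payload]. */
    static byte[] wrap(int callId, byte[] payload) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(callId);
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
        return bos.toByteArray();
    }

    /** Read a frame and return its call id -- possible only because the header exists. */
    static int routeCallId(byte[] frame) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(frame));
        int callId = in.readInt();
        byte[] payload = new byte[in.readInt()];
        in.readFully(payload);
        return callId;
    }

    public static void main(String[] args) throws IOException {
        byte[] ctx = "connection-context".getBytes(StandardCharsets.UTF_8);
        // With the header, the server knows this frame is a connection
        // context; without it, ctx is indistinguishable from any other
        // raw protobuf payload.
        System.out.println(routeCallId(wrap(-2, ctx))); // prints -2
    }
}
```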

A> Given a raw protobuf buffer, you need an explicit protobuf class to deserialize it.
The RPC v8 code assumes that the SASL exchange and connection context happen in an ordered sequence
on a given connection. The explicit order gives you an implicit mapping of which protobuf class
to use to deserialize a buffer. When we multiplex streams/sessions over a connection,
SASL and connection-context setup can happen in parallel and/or in a pipeline. You'd have
no way to tell how to deserialize an RPC packet unless it's wrapped in RPC headers. Currently,
by convention, negative call ids have special meanings: -1 is for ping, -33 for SASL, -2 is
for connection context, so we can pick the right protobuf class (RpcSaslProto or IpcConnectionContextProto)
to deserialize a buffer. The point is that with the wrapped connection context, we'll be able to do these
sessions in a parallel/non-blocking fashion, so that round trips can be amortized across multiple
sessions. Otherwise, we'll have to force a partial order on the RPC messages, say, the packet
after each SASL exchange must be the connection context, which not only makes the code more complex
but also unnecessarily forces extra round trips for each session. Note, the multi-session feature
is not implemented, but RPC v9 (with the wrapped connection context) will allow us to add it in
the future in a simple and backward-compatible way.
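The call-id convention above can be sketched as a simple dispatch. The negative ids (-1 ping, -33 SASL, -2 connection context) are taken from this comment; the class and method names here are illustrative, not Hadoop's actual ipc code:

```java
/**
 * Sketch of call-id dispatch: because the id is carried in a header on
 * every packet, the server picks the right deserializer without relying
 * on message order. Names are illustrative, not Hadoop's actual ipc code.
 */
public class CallIdDispatch {
    /** Map a call id to the payload type to deserialize. */
    static String payloadClassFor(int callId) {
        switch (callId) {
            case -1:  return "ping (no payload)";
            case -33: return "RpcSaslProto";
            case -2:  return "IpcConnectionContextProto";
            default:
                if (callId < 0) {
                    throw new IllegalArgumentException("unknown reserved call id: " + callId);
                }
                // Non-negative ids are ordinary rpc calls; the request
                // header names the protocol/method to decode against.
                return "rpc request";
        }
    }

    public static void main(String[] args) {
        // Order-independent: a SASL packet is recognizable even if it
        // arrives interleaved with other streams' packets.
        System.out.println(payloadClassFor(-33)); // prints RpcSaslProto
    }
}
```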

> Wrap IpcConnectionContext in RPC headers
> ----------------------------------------
>                 Key: HADOOP-9683
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9683
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: ipc
>            Reporter: Luke Lu
>            Assignee: Daryn Sharp
>            Priority: Blocker
>         Attachments: HADOOP-9683.patch
> After HADOOP-9421, all RPC exchanges (including SASL) are wrapped in RPC headers except
> IpcConnectionContext, which is still raw protobuf; this makes request pipelining (a desirable
> feature for things like HDFS-2856) impossible to achieve in a backward-compatible way. Let's
> finish the job and wrap IpcConnectionContext with the RPC request header, using the call id
> SET_IPC_CONNECTION_CONTEXT. Or simply make it an optional field in the RPC request header
> that gets set for the first RPC call of a given stream.
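The two alternatives in the description can be sketched roughly as follows. The SET_IPC_CONNECTION_CONTEXT value and every name here are hypothetical placeholders, not the actual HADOOP-9683 patch:

```java
/**
 * Rough sketch of the two proposals. The reserved id value and all names
 * are hypothetical, not taken from the actual patch.
 */
public class ContextWrapOptions {
    static final int SET_IPC_CONNECTION_CONTEXT = -3; // hypothetical reserved id

    /** Option 1: the context travels as its own header-wrapped call. */
    static String[] option1(String context, String firstCall) {
        return new String[] {
            "header{callId=" + SET_IPC_CONNECTION_CONTEXT + "} " + context,
            "header{callId=0} " + firstCall
        };
    }

    /** Option 2: the context piggybacks as an optional header field on call 0. */
    static String[] option2(String context, String firstCall) {
        return new String[] {
            "header{callId=0, context=" + context + "} " + firstCall
        };
    }

    public static void main(String[] args) {
        // Either way, every frame starts with a parseable header, so new
        // streams can be negotiated in a pipelined, backward-compatible way.
        System.out.println(option1("ctx", "call").length); // prints 2
        System.out.println(option2("ctx", "call").length); // prints 1
    }
}
```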

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
