hadoop-common-issues mailing list archives

From "Aaron T. Myers (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-10070) RPC client doesn't use per-connection conf to determine server's expected Kerberos principal name
Date Tue, 11 Feb 2014 18:32:22 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898107#comment-13898107 ]

Aaron T. Myers commented on HADOOP-10070:

bq. Ex. Client1 uses a conf with a principal1 and establishes a connection. Client2 uses an
equivalent conf with a different principal.

I don't follow this example. Could you perhaps be a bit more explicit? Are you suggesting
in the above that "principal1" is the client or server principal? Note that connections are
uniquely identified by (remote address, protocol, client UGI).
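The identity tuple described above can be sketched as a simple cache key class. This is an illustrative sketch only, not Hadoop's actual IPC code (the real key lives in Hadoop's `Client` internals, and the UGI field here is simplified to a string): two clients with the same remote address, protocol, and UGI map to the same key even if they were built with different `Configuration` objects, which is why the second client can end up reusing the first's connection.

```java
import java.net.InetSocketAddress;
import java.util.Objects;

// Hypothetical sketch of an RPC connection cache key, modeled on the tuple
// (remote address, protocol, client UGI) described in the comment above.
// Names are illustrative, not Hadoop's real classes; the UGI is a String
// stand-in for UserGroupInformation.
final class ConnectionKey {
    final InetSocketAddress remoteAddress;
    final Class<?> protocol;
    final String clientUgi;

    ConnectionKey(InetSocketAddress remoteAddress, Class<?> protocol, String clientUgi) {
        this.remoteAddress = remoteAddress;
        this.protocol = protocol;
        this.clientUgi = clientUgi;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ConnectionKey)) return false;
        ConnectionKey k = (ConnectionKey) o;
        return remoteAddress.equals(k.remoteAddress)
            && protocol.equals(k.protocol)
            && clientUgi.equals(k.clientUgi);
    }

    @Override
    public int hashCode() {
        return Objects.hash(remoteAddress, protocol, clientUgi);
    }
}
```

Note that the `Configuration` is deliberately absent from the key: connection reuse is decided without consulting it, which is the behavior the example in the next quote is probing.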

bq. Unless the connection from client1 has closed, won't client2 reuse the open connection?
If correct, is this valid behavior in your case?

Take a look at the example repro case I provided in TestKerberosClient.java. With this fix,
one need not close the first-opened connection before opening the second. The issue I'm trying
to address is that, regardless of whether the first client is closed, the second client
will use the first client's Configuration, which is clearly incorrect.

> RPC client doesn't use per-connection conf to determine server's expected Kerberos principal
> -------------------------------------------------------------------------------------------------
>                 Key: HADOOP-10070
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10070
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: security
>    Affects Versions: 2.2.0
>            Reporter: Aaron T. Myers
>            Assignee: Aaron T. Myers
>         Attachments: HADOOP-10070.patch, HADOOP-10070.patch, TestKerberosClient.java
> Currently, RPC client caches the {{Configuration}} object that was passed in to its constructor
> and uses that same conf for every connection it sets up thereafter. This can cause problems
> when security is enabled if the {{Configuration}} object provided when the first RPC connection
> was made does not contain all possible entries for all server principals that will later be
> used by subsequent connections. When this happens, it will result in later RPC connections
> incorrectly failing with the error "Failed to specify server's Kerberos principal name" even
> though the principal name was specified in the {{Configuration}} object provided on later
> RPC connection attempts.
> I believe this means that we've inadvertently reintroduced HADOOP-6907.
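The failure mode in the quoted description can be sketched in a few lines. All names below are hypothetical stand-ins (a `Map` for `Configuration`, an invented client class), not Hadoop's actual IPC classes: the client captures the conf handed to its constructor and consults only that cached copy when resolving the server principal, ignoring whatever conf accompanies a later connection attempt.

```java
import java.util.Map;

// Minimal sketch of the bug pattern described in HADOOP-10070. Names are
// hypothetical; a Map<String, String> stands in for Hadoop's Configuration.
final class BuggyRpcClient {
    private final Map<String, String> cachedConf; // captured once at construction

    BuggyRpcClient(Map<String, String> conf) {
        this.cachedConf = conf;
    }

    /** Resolves the server's Kerberos principal for a new connection. */
    String serverPrincipal(String principalKey, Map<String, String> perConnectionConf) {
        // Bug: reads the constructor-time conf instead of perConnectionConf,
        // so entries supplied only on later connection attempts are invisible.
        String principal = cachedConf.get(principalKey);
        if (principal == null) {
            throw new IllegalStateException(
                "Failed to specify server's Kerberos principal name");
        }
        return principal;
    }
}
```

If the constructor-time conf lacks the principal entry, a later caller still fails even when its own conf does define the principal, which is exactly the reported symptom; the fix is to look the principal up in the per-connection conf.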

This message was sent by Atlassian JIRA
